231

The Flex Representation Method: Versatile Modeling for Isogeometric Analysis

Whetten, Christopher David 13 December 2022 (has links)
The Flex Representation Method (FRM) leverages unique computational advantages of splines to address limitations in the process of building CAE simulation models from CAD geometric models. Central to the approach is the envelope CAD domain that encapsulates a CAD model. An envelope CAD domain can be of arbitrary topological and geometric complexity. Envelope domains are constructed from spline representations, like U-splines, that are analysis-suitable. The envelope CAD domain can be used to approximate none, some, or all of the features in a CAD model. This yields additional simulation modeling options that simplify the model-building process while leveraging the properties of splines to control the accuracy and robustness of computed solutions. Modern integration techniques are adapted to envelope domains to maintain accurate solutions regardless of the CAD envelope chosen. The potential of the method is illustrated through several carefully selected benchmark problems.
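The following is a minimal, indicator-based sketch of the kind of task the abstract's "integration techniques adapted to envelope domains" address: integrating over only the part of an envelope cell that a CAD boundary cuts through. The geometry (a quarter disk cutting a unit-square cell), rule order, and integrand are invented for illustration and are not the thesis's scheme.

```python
import numpy as np

# Minimal sketch, not the thesis's integration scheme: integrate a function
# over only the part of an envelope cell (the unit square) that lies inside a
# toy CAD region (a quarter disk), using a dense tensor-product Gauss rule and
# an inside/outside indicator. Geometry, radius, and rule order are made up.

def inside(x, y):
    return x**2 + y**2 <= 0.8**2                      # toy CAD region: quarter disk

nodes, weights = np.polynomial.legendre.leggauss(30)  # 1-D Gauss-Legendre on [-1, 1]
x = 0.5 * (nodes + 1.0)                               # map nodes to the cell [0, 1]
w = 0.5 * weights

X, Y = np.meshgrid(x, x, indexing="ij")
W = np.outer(w, w)

f = lambda u, v: u * v                                # integrand over the cut cell
approx = np.sum(W * f(X, Y) * inside(X, Y))
exact = 0.8**4 / 8                                    # closed form over the quarter disk
print(approx, exact)
```

Adapted integration schemes of the kind the abstract refers to are considerably smarter than this brute-force indicator, but the sketch shows the basic problem: the quadrature must respect a boundary that does not align with the envelope cells.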
232

Group Specific Dynamic Models of Time Varying Exposures on a Time-to-Event Outcome

Tong, Yan 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Time-to-event outcomes are widely utilized in medical research. Assessing the cumulative effects of time-varying exposures on time-to-event outcomes poses challenges in statistical modeling. First, exposure status, intensity, or duration may vary over time. Second, exposure effects may be delayed over a latent period, a situation that is not considered in traditional survival models. Third, exposures that occur within a time window may cumulatively influence an outcome. Fourth, such cumulative exposure effects may be non-linear over the exposure latent period. Lastly, exposure-outcome dynamics may differ among groups defined by individuals' characteristics. These challenges have not been adequately addressed in current statistical models. The objective of this dissertation is to provide a novel approach to modeling group-specific dynamics between cumulative time-varying exposures and a time-to-event outcome. A framework of group-specific dynamic models is introduced utilizing functional time-dependent cumulative exposures within an etiologically relevant time window. Penalized-spline time-dependent Cox models are proposed to evaluate group-specific outcome-exposure dynamics through the associations of a time-to-event outcome with functional cumulative exposures and group-by-exposure interactions. Model parameter estimation is achieved by penalized partial likelihood. Hypothesis testing for comparison of group-specific exposure effects is performed by Wald-type tests. These models are extended to group-specific non-linear exposure intensity-latency-outcome relationships and group-specific interaction effects from multiple exposures. Extensive simulation studies are conducted and demonstrate satisfactory model performance. The proposed methods are applied to analyses of group-specific associations between antidepressant use and time to coronary artery disease in a depression-screening cohort, using data extracted from electronic medical records.
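As a rough illustration of the functional cumulative-exposure building block described above, the sketch below forms a weighted cumulative exposure whose lag-weight function is expanded in a B-spline basis. The window length, knots, and coefficients are invented, and the penalized partial-likelihood Cox fit itself is not shown.

```python
import numpy as np
from scipy.interpolate import BSpline

# Hypothetical sketch: a functional cumulative exposure
#   E(t) = sum_{u=0}^{W-1} w(u) * x(t - u)
# whose lag-weight function w(u) is expanded in a B-spline basis over an
# etiologically relevant window of W days. Window, knots, and coefficients
# are illustrative; the penalized partial-likelihood fit is not shown.

rng = np.random.default_rng(0)
days = 365
x = rng.binomial(1, 0.3, size=days).astype(float)       # daily exposure indicator

W, k = 90, 3                                             # window length, spline degree
knots = np.concatenate(([0.0] * k, np.linspace(0, W, 5), [float(W)] * k))
theta = np.array([1.0, 0.8, 0.5, 0.3, 0.1, 0.05, 0.0])  # spline coefficients for w(u)
w = BSpline(knots, theta, k)(np.arange(W))               # lag weights w(0), ..., w(W-1)

E = np.empty(days)
for t in range(days):
    past = x[max(0, t - W + 1): t + 1][::-1]             # x(t), x(t-1), ... back W days
    E[t] = w[:past.size] @ past                          # cumulative weighted exposure
print(E[:3], E[-3:])
```

In a group-specific model of the kind described, E(t) and its interactions with group indicators would then enter the time-dependent Cox linear predictor.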
233

TESTING FOR DIFFERENTIALLY EXPRESSED GENES AND KEY BIOLOGICAL CATEGORIES IN DNA MICROARRAY ANALYSIS

SARTOR, MAUREEN A. January 2007 (has links)
No description available.
234

Efficient Inference for Periodic Autoregressive Coefficients with Polynomial Spline Smoothing Approach

Tang, Lin January 2015 (has links)
No description available.
235

Profile Monitoring with Fixed and Random Effects using Nonparametric and Semiparametric Methods

Abdel-Salam, Abdel-Salam Gomaa 20 November 2009 (has links)
Profile monitoring is a relatively new approach in quality control, best used where the process data follow a profile (or curve) at each time period. The essential idea of profile monitoring is to model the profile via parametric, nonparametric, or semiparametric methods and then monitor the fitted profiles or the estimated random effects over time to determine whether the profiles have changed. The majority of previous studies in profile monitoring focused on parametric modeling of either linear or nonlinear profiles, with both fixed and random effects, under the assumption of correct model specification. Our work considers cases where the parametric model for the family of profiles is unknown or at least uncertain. Consequently, we consider monitoring profiles via two techniques: a nonparametric technique and a semiparametric procedure that combines parametric and nonparametric profile fits, which we refer to as model robust profile monitoring (MRPM). We also incorporate a mixed-model approach into both the parametric and nonparametric model fits. For mixed-effects models, the MMRPM method extends the MRPM method by incorporating a mixed-model approach into both the parametric and nonparametric fits to account for the correlation within profiles and to treat the collection of profiles as a random sample from a common population. For each case, we formulated two Hotelling's T² statistics, one based on the estimated random effects and one based on the fitted values, and obtained the corresponding control limits. In addition, we used two different estimators of the variance-covariance matrix: one based on the pooled sample variance-covariance matrix and a second based on successive differences. A Monte Carlo study was performed to compare the integrated mean square error (IMSE) and the probability of signal of the parametric, nonparametric, and semiparametric approaches. Both correlated and uncorrelated error structures were evaluated for varying amounts of model misspecification, numbers of profiles, numbers of observations per profile, shift locations, and in- and out-of-control situations. The semiparametric (MMRPM) method was competitive with, and often clearly superior to, the parametric and nonparametric methods over all levels of misspecification in both the uncorrelated and correlated scenarios. For a correctly specified model, the IMSE and the simulated probability of signal for the parametric and MMRPM methods were identical (or nearly so). For severe model misspecification, the nonparametric and MMRPM methods were identical (or nearly so). For mild model misspecification, the MMRPM method was superior to both the parametric and nonparametric methods. Therefore, this simulation supports the claim that the MMRPM method is robust to model misspecification. In addition, the MMRPM method performed better for data sets with a correlated error structure. The performance of the nonparametric and MMRPM methods also improved as the number of observations per profile increased, since more observations over the same range of X generally allow more knots to be used by the penalized spline method, resulting in greater flexibility and improved fits in the nonparametric curves and, consequently, the semiparametric curves.
The parametric, nonparametric, and semiparametric approaches were used to fit the relationship between the torque produced by an engine and engine speed in the automotive industry. We then used a Hotelling's T² statistic based on the estimated random effects to conduct Phase I studies to identify outlying profiles. The parametric, nonparametric, and semiparametric methods all indicated that the process was stable. Although all three methods reach the same conclusion regarding the in-control status of each profile, the nonparametric and MMRPM results provide a better description of the actual behavior of each profile. Thus, the nonparametric and MMRPM methods give the user greater ability to properly interpret the true relationship between engine speed and torque for this type of engine and an increased likelihood of detecting unusual engines in future production. Finally, we conclude that the nonparametric and semiparametric approaches performed better than the parametric approach when the user's model is misspecified. The case study demonstrates that the proposed nonparametric and semiparametric methods are more efficient, flexible, and robust to model misspecification for Phase I profile monitoring in a practical application. Thus, our methods are robust to the common problem of model misspecification. We also found that both the nonparametric and semiparametric methods result in charts with good ability to detect changes in Phase I data and with easily calculated control limits. The proposed methods provide greater flexibility and efficiency than current parametric Phase I profile-monitoring methods, which rely on correct model specification, an unrealistic assumption in many practical industrial applications. / Ph. D.
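A minimal sketch of one ingredient described above: a Phase I Hotelling's T² statistic computed from per-profile summary vectors (for example, estimated random effects or fitted values), using the successive-differences variance-covariance estimator. The data and feature choice are invented, and the profile fitting itself is not shown.

```python
import numpy as np

# Minimal sketch (not the dissertation's code): Phase I Hotelling's T^2 on
# per-profile summary vectors, with the variance-covariance matrix estimated
# from successive differences between consecutive profiles.

def hotelling_t2_successive_diff(Y):
    """Y: (m profiles) x (p features). Returns the T^2 value for each profile."""
    m, p = Y.shape
    ybar = Y.mean(axis=0)
    D = np.diff(Y, axis=0)                 # successive differences between profiles
    S = D.T @ D / (2.0 * (m - 1))          # successive-difference covariance estimate
    Sinv = np.linalg.pinv(S)
    dev = Y - ybar
    return np.einsum('ij,jk,ik->i', dev, Sinv, dev)

rng = np.random.default_rng(1)
Y = rng.normal(size=(30, 4))               # 30 in-control profiles, 4 features each
Y[25] += 3.0                               # one shifted (outlying) profile
t2 = hotelling_t2_successive_diff(Y)
print(np.argmax(t2), t2.max())             # the shifted profile has the largest T^2
```

The successive-differences estimator is less inflated by step changes between profiles than the pooled estimator, which is why the two are compared in the study above.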
236

Semiparametric Varying Coefficient Models for Matched Case-Crossover Studies

Ortega Villa, Ana Maria 23 November 2015 (has links)
Semiparametric modeling is a combination of parametric and nonparametric models in which some functions follow a known form and others follow an unknown form. In this dissertation we made contributions to semiparametric modeling for matched case-crossover data. In matched case-crossover studies, it is generally accepted that the covariates on which a case and associated controls are matched cannot exert a confounding effect on independent predictors included in the conditional logistic regression model. Any stratum effect is removed by conditioning on the fixed number of sets of the case and controls in the stratum. However, some matching covariates, such as time and/or spatial location, often play an important role as effect modifiers, and failure to include them leads to incorrect statistical estimation, prediction, and inference. Hence, in this dissertation we propose several approaches that allow the inclusion of time and spatial location, as well as other effect modifiers such as heterogeneous subpopulations in the data. To address modification due to time, three methods are developed: the first is a parametric approach, the second is a semiparametric penalized approach, and the third is a semiparametric Bayesian approach. We demonstrate the advantage of the one-stage semiparametric approaches using both a simulation study and an epidemiological example of a 1-4 bi-directional case-crossover study of childhood aseptic meningitis with drinking water turbidity. To address modifications due to time and spatial location, two methods are developed: the first is a semiparametric spatial-temporal varying coefficient model for a small number of locations; the second is a semiparametric spatial-temporal varying coefficient model appropriate when the number of locations among the subjects is medium to large. We demonstrate the accuracy of these approaches using simulation studies and, when appropriate, an epidemiological example of a 1-4 bi-directional case-crossover study. Finally, to explore further effect modification by heterogeneous subpopulations among strata, we propose a nonparametric Bayesian approach constructed with Dirichlet process priors, which clusters subpopulations and assesses heterogeneity. We demonstrate the accuracy of our approach using a simulation study, as well as an example of a 1-4 bi-directional case-crossover study. / Ph. D.
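The sketch below shows the core idea in miniature: a time-varying exposure coefficient β(t), expanded in a B-spline basis, inside a 1:4 matched conditional-logistic likelihood. The data are synthetic, and the penalization and Bayesian machinery of the proposed methods are not reproduced.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

# Hedged sketch: beta(t) = B(t) @ theta inside a 1:4 matched conditional
# logistic likelihood. Synthetic data; no penalty term or Bayesian prior here.

k = 3
knots = np.concatenate(([0.0] * k, np.linspace(0, 1, 6), [1.0] * k))
p = len(knots) - k - 1                               # number of basis coefficients

def bmat(t):                                         # B-spline design matrix at times t
    return np.column_stack([BSpline(knots, np.eye(p)[j], k)(t) for j in range(p)])

rng = np.random.default_rng(2)
n = 200                                              # matched sets: 1 case + 4 controls
t = rng.uniform(0, 1, n)                             # matching time of each set
z = rng.normal(size=(n, 5))                          # exposures: column 0 is the case
z[:, 0] += 0.8 * np.sin(np.pi * t)                   # cases more exposed, time-varying

def negloglik(theta):
    beta_t = bmat(t) @ theta                         # beta(t_i) for each matched set
    eta = beta_t[:, None] * z                        # within-set linear predictors
    return -(eta[:, 0] - np.log(np.exp(eta).sum(axis=1))).sum()

fit = minimize(negloglik, np.zeros(p), method="BFGS")
print(np.round(fit.x, 2))                            # estimated spline coefficients
```

Each matched set contributes exp(η_case) / Σ_j exp(η_j), so the stratum effect cancels while β(t) captures the effect modification by the matching time.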
237

Splined Speed Control using SpAM (Speed-based Acceleration Maps) for an Autonomous Ground Vehicle

Anderson, David 15 April 2008 (has links)
There are many forms of speed control for an autonomous ground vehicle currently in development. Most use a simple PID controller to achieve a speed specified by a higher-level motion planning algorithm. Simple controllers may not provide a desired acceleration profile for a ground vehicle, and without extensive tuning a PID controller may cause excessive speed overshoot and oscillation. This paper examines an approach designed to allow a greater degree of control while reducing the computing load on the motion planning software. The SpAM+PI (Speed-based Acceleration Map + Proportional Integral controller) algorithm outlined in this paper uses three inputs: current velocity, desired velocity, and desired maximum acceleration, to determine throttle and brake commands that allow the vehicle to achieve its desired speed. Because this algorithm resides on an external controller, it does not add to the computational load of the motion planning computer. Also, because only two inputs need to be transmitted, and only when the desired speed or maximum desired acceleration changes, network traffic between the computers can be greatly reduced. The algorithm uses splines to smoothly plan a speed profile from the vehicle's current speed to its desired speed. It then uses a lookup table to determine the correct pedal position (throttle or brake) from the current vehicle speed and the desired instantaneous acceleration determined in the splining step of the algorithm. Once the pedal position is determined, a PI controller is used to minimize error in the system. The SpAM+PI approach is a novel approach to the speed control of an autonomous vehicle. It is tested using Odin, Team Victor Tango's entry in the 2007 DARPA Urban Challenge, which won 3rd place and a $500,000 prize. The evaluation of the algorithm exposed both strengths and weaknesses that guide the next step in the development of a speed control algorithm. / Master of Science
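A toy sketch of the pipeline just described: spline a speed profile from the current to the desired speed, look up a pedal command in a speed/acceleration map, then trim the residual error with a PI term. All gains, the map surface, and the vehicle response model below are invented; this is not Odin's controller.

```python
import numpy as np
from scipy.interpolate import CubicSpline, RegularGridInterpolator

# Toy SpAM+PI sketch (all gains, map values, and the plant model are made up).
v0, v_des, a_max = 4.0, 12.0, 2.0                  # current/desired speed (m/s), accel cap
T = 1.5 * (v_des - v0) / a_max                     # horizon so the peak accel equals a_max
profile = CubicSpline([0.0, T], [v0, v_des], bc_type=((1, 0.0), (1, 0.0)))
accel = profile.derivative()                       # desired instantaneous acceleration

# hypothetical speed x acceleration -> pedal position map (+ throttle, - brake)
speeds = np.linspace(0.0, 30.0, 7)
accels = np.linspace(-3.0, 3.0, 7)
pedal_map = np.add.outer(0.01 * speeds, 0.25 * accels)
spam = RegularGridInterpolator((speeds, accels), pedal_map)

kp, ki, dt = 0.05, 0.01, 0.1
err_int, v = 0.0, v0
for step in range(int(T / dt)):
    t = step * dt
    err = float(profile(t)) - v                        # tracking error against the plan
    err_int += err * dt
    pedal = spam([[v, float(accel(t))]]).item() + kp * err + ki * err_int
    v += float(np.clip(pedal, -1.0, 1.0)) * 3.0 * dt   # toy vehicle response
print(round(v, 2), "m/s after", round(T, 2), "s")
```

The clamped cubic gives a smooth speed profile with zero acceleration at both ends, which is the property the splining step is after.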
238

Smooth Finite Element Methods with Polynomial Reproducing Shape Functions

Narayan, Shashi January 2013 (has links) (PDF)
A couple of discretization schemes, based on an FE-like tessellation of the domain and polynomial-reproducing, globally smooth shape functions, are considered and numerically explored to a limited extent. The first is an existing scheme, the smooth DMS-FEM, which employs Delaunay triangulation or tetrahedralization (as appropriate) to discretize the domain geometry and uses triangular (tetrahedral) B-splines as kernel functions en route to constructing polynomial-reproducing functional approximations. In order to verify the numerical accuracy of the smooth DMS-FEM vis-à-vis the conventional FEM, a Mindlin-Reissner plate bending problem is numerically solved. Thanks to the higher-order continuity of the functional approximant and the consequent removal of the jump terms in the weak form across inter-triangular boundaries, the numerical accuracy of the DMS-FEM approximation is observed to be higher than that of the conventional FEM. This advantage notwithstanding, evaluation of DMS-FEM shape functions encounters singularity issues at the triangle vertices as well as over the element edges. This shortcoming is presently overcome through a new proposal that replaces the triangular B-splines with simplex splines, constructed over polygonal domains, as the kernel functions in the polynomial reproduction scheme. Following a detailed presentation of the issues related to its computational implementation, the new method is numerically explored, with the results attesting to a higher attainable numerical accuracy in comparison with the DMS-FEM.
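As a tiny numerical illustration of the "polynomial reproducing, globally smooth" property that both schemes are built around, the check below uses plain univariate B-splines (standing in for the DMS-FEM or simplex-spline bases of the thesis) and verifies that the shape functions reproduce constants and the linear monomial via their Greville abscissae.

```python
import numpy as np
from scipy.interpolate import BSpline

# Sketch, not DMS-FEM code: globally C^1 quadratic B-spline shape functions on
# a 1-D "mesh" reproduce constants (partition of unity) and linear polynomials
# when combined with their Greville abscissae.

k = 2
knots = np.concatenate(([0.0] * k, np.linspace(0, 1, 6), [1.0] * k))   # open knot vector
n = len(knots) - k - 1                                                 # number of shape functions
greville = np.array([knots[i + 1:i + k + 1].mean() for i in range(n)])

x = np.linspace(0, 1, 7)
N = np.column_stack([BSpline(knots, np.eye(n)[i], k)(x) for i in range(n)])

print(np.allclose(N.sum(axis=1), 1.0))        # reproduces constants
print(np.allclose(N @ greville, x))           # reproduces the linear monomial x
```

The same reproduction requirements, imposed on smooth kernels over triangles or polygons, are what the DMS-FEM and the proposed simplex-spline scheme enforce in higher dimensions.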
239

Real-time numerical algorithms applied to crystal identification and time-stamp measurement in a fully digital avalanche-photodiode-based PET/CT scanner

Semmaoui, Hichman January 2009 (has links)
Positron emission tomography (PET) has become an important tool in nuclear medicine diagnostics. With the development and use of various radiotracers that make it possible to visualize metabolic processes and organic structures non-invasively, clinical PET cameras are widely used and provide spatial and temporal resolution sufficient for human diagnostics. Pharmacology and medical research are further fields of application under development: by using PET in small-animal experiments, the efficacy of new drugs can be readily verified. The difficulty with PET scanners for small animals, however, is the need for much higher spatial and temporal resolution than is required for clinical examinations of humans. This calls for new detector and signal-processing concepts in the development of PET systems dedicated to small animals, concepts that are complemented by the fusion of a morphological image (computed tomography, CT) with the metabolic image (PET). The LabPET™, a PET scanner whose bimodal PET/CT capability is under development, is a small-animal scanner developed at the Université de Sherbrooke. It uses avalanche photodiodes (APDs) individually coupled to scintillators, combined with new digital algorithms, and aims to meet the spatial- and temporal-resolution requirements of small-animal PET imaging. In this thesis, new algorithms are developed and tested to increase the spatial and temporal resolution of the LabPET. The improvement in spatial resolution is based on algorithms that identify which crystal fired within a multi-crystal detector, while the improvement in temporal resolution is based on a deconvolution concept that uses the result of the crystal identification.
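As a generic, made-up illustration of the crystal-identification idea (telling which scintillator in a multi-crystal detector fired from the decay constant of the digitized pulse), the sketch below fits the pulse tail and thresholds the estimated decay time. It is not the thesis's algorithm, and the decay times and noise level are invented.

```python
import numpy as np

# Made-up illustration only: discriminate two crystals by the scintillation
# decay constant estimated from a digitized pulse (log-linear fit on the tail).
rng = np.random.default_rng(4)
t = np.arange(0.0, 300.0, 5.0)                     # sample times (ns)

def pulse(tau):
    return np.exp(-t / tau) + 0.02 * rng.normal(size=t.size)

def estimate_tau(p):
    tail = slice(1, 22)                            # fit where the signal is above noise
    slope = np.polyfit(t[tail], np.log(np.clip(p[tail], 1e-3, None)), 1)[0]
    return -1.0 / slope

for tau_true in (40.0, 65.0):                      # two hypothetical crystal types
    tau_hat = estimate_tau(pulse(tau_true))
    crystal = "A" if tau_hat < 52.5 else "B"
    print(f"true tau {tau_true:.0f} ns -> estimated {tau_hat:.1f} ns -> crystal {crystal}")
```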
240

Approximation of the diffusion problem in optical tomography, and the inverse problem

Addam, Mohamed 09 December 2009 (has links) (PDF)
This thesis deals with the approximation of partial differential equations, in particular the diffusion equation in optical tomography. It consists of two main parts: the first discusses the direct problem, while the inverse problem is addressed in the second. For the direct problem, the optical parameters and the source functions are assumed to be given, and the diffusion problem is solved in a domain where the photon-flux density is an unknown function to be approximated numerically. Reconstructing the numerical signal in this type of problem usually requires a discretization in time; we propose instead to use the Fourier transform and its inverse to avoid such a discretization. The techniques we use are Gauss-Hermite quadrature together with a Galerkin method based on B-splines, tensor-product B-splines, or radial basis functions. B-splines are used in one dimension, and tensor-product B-splines when the domain is rectangular with a uniform mesh; when the domain is no longer rectangular, we propose to replace the tensor-product B-spline basis by radial basis functions built from a set of scattered points in the domain. On the theoretical side, we study the existence, uniqueness, and regularity of the solution and then give some results on error estimates in Sobolev-type spaces and on the convergence of the method. In the second part of the work we turn to the inverse problem, a nonlinear inverse problem whose nonlinearity is tied to the optical parameters. Assuming that measurements of the light flux on the boundary of the domain and the source functions are available, we solve the inverse problem so as to numerically recover the refractive index as well as the diffusion and absorption coefficients. On the theoretical side, we discuss results such as the continuity and Fréchet differentiability of the operator that maps the optical parameters to the light flux measured on the boundary, and we establish Lipschitz properties of the Fréchet derivative with respect to the optical parameters. On the numerical side, we study the discrete problem in the B-spline basis and in the radial-basis-function basis, and then address the solution of the nonlinear inverse problem by the Gauss-Newton method.
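A crude sketch of the inverse step described above: recovering diffusion and absorption coefficients from boundary-flux data by nonlinear least squares. The forward model is a one-dimensional surrogate rather than the thesis's Galerkin/B-spline solver, and SciPy's bounded trust-region least-squares solver (a damped Gauss-Newton relative) stands in for the Gauss-Newton iteration.

```python
import numpy as np
from scipy.optimize import least_squares

# Surrogate sketch: recover (D, mu_a) from noisy boundary-flux measurements.
# The forward map below is a toy diffusion-approximation formula, not the
# thesis's solver; SciPy's trust-region least squares stands in for the
# Gauss-Newton method mentioned above.

x = np.linspace(0.5, 3.0, 25)                      # source-detector distances

def forward(p):
    D, mu_a = p
    k = np.sqrt(mu_a / D)                          # effective attenuation coefficient
    return np.exp(-k * x) / (4 * np.pi * D * x)    # toy fluence model

p_true = np.array([0.03, 0.10])
rng = np.random.default_rng(3)
y_meas = forward(p_true) * (1 + 0.01 * rng.normal(size=x.size))

fit = least_squares(lambda p: forward(p) - y_meas, x0=[0.05, 0.05],
                    bounds=([1e-4, 1e-4], [1.0, 1.0]))
print(fit.x, p_true)                               # recovered vs true parameters
```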
