91

Advancing the Formulation and Testing of Multilevel Mediation and Moderated Mediation Models

Rockwood, Nicholas John 26 May 2017 (has links)
No description available.
92

Pavement Service Life Estimation And Condition Prediction

Yu, Jianxiong January 2005 (has links)
No description available.
93

Linear Mixed Effects Model for a Longitudinal Genome Wide Association Study of Lipid Measures in Type 1 Diabetes

Wang, Tao 10 1900 (has links)
Hypercholesterolemia, the presence of high levels of cholesterol in the blood, is one of the major risk factors for the development of long-term complications in type 1 diabetes (T1D) patients.

In this thesis, we studied 1303 Caucasians with type 1 diabetes in the Diabetes Control and Complications Trial (DCCT). Prior diabetes research has associated many factors with diabetes complications, including age, gender, cohort, treatment, diabetes duration, body mass index (BMI), exercise, and insulin dose. We focus mainly on identifying which of these factors are associated with total cholesterol (CHL).

Many measures were collected monthly, quarterly, or yearly over an average of 6.5 years between 1983 and 1993. We used the annual lipid measures from the DCCT because those values are sufficiently complete and form longitudinal data.

Different methods are discussed in the study; linear mixed effects models are the appropriate approach. The details of model selection for the CHL analysis are shown, including fixed effects selection, random effects selection, and residual correlation structure selection. The SNPs were then added to the three models individually in the genome-wide association study (GWAS). We found that the locus rs7412 is genome-wide associated not only with CHL but also with LDL.

In future work, we will assess whether these SNPs are diabetes-specific, and we will add dietary data to the three models to identify loci associated with the interaction of diet and SNPs. / Master of Science (MSc)
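By way of illustration, a minimal sketch of this kind of longitudinal mixed model in Python's statsmodels follows; the file name, column names (chl, visit_year, patient_id, and the clinical covariates), and the random-intercept-and-slope structure are assumptions made for the example, not the thesis's actual specification.

```python
# Sketch: linear mixed effects model for longitudinal lipid measures.
# Column names and model terms are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("dcct_lipids.csv")  # hypothetical file: one row per patient-visit

# Fixed effects for clinical covariates; a random intercept and slope per
# patient model the within-patient correlation across annual visits.
model = smf.mixedlm(
    "chl ~ age + gender + treatment + duration + bmi + insulin_dose + visit_year",
    data=df,
    groups=df["patient_id"],
    re_formula="~visit_year",
)
result = model.fit(reml=True)
print(result.summary())
```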
94

Bayesian Approach Dealing with Mixture Model Problems

Zhang, Huaiye 05 June 2012 (has links)
In this dissertation, we focus on two research topics related to mixture models. The first topic is Adaptive Rejection Metropolis Simulated Annealing for detecting global maximum regions, and the second is Bayesian model selection for nonlinear mixed effects models.

In the first topic, we consider a finite mixture model, which is used in many applications to fit data from heterogeneous populations. The Expectation Maximization (EM) algorithm and Markov chain Monte Carlo (MCMC) are two popular methods for estimating the parameters of a finite mixture model. However, both methods may converge to local maximum regions rather than the global maximum when multiple local maxima exist. We propose a new approach, Adaptive Rejection Metropolis Simulated Annealing (ARMS annealing), to improve both. ARMS uses a piecewise linear envelope function as a proposal distribution; under the simulated annealing (SA) framework, we start with a set of proposal distributions constructed by ARMS and use them to generate a set of proper starting points that help reach all possible modes. By combining ARMS annealing with the EM algorithm and with the Bayesian approach, respectively, we propose two approaches: an EM ARMS annealing algorithm, which runs the EM algorithm from the starting points proposed by ARMS annealing, and a Bayesian ARMS annealing approach, in which ARMS annealing supplies starting points for MCMC. Both approaches capture the global maximum region and estimate the parameters accurately. An illustrative example uses survey data on the number of charitable donations.

The second topic concerns the nonlinear mixed effects model (NLME). A parametric NLME model typically requires strong assumptions, which make the model less flexible and are often not satisfied in real applications. To allow the NLME model more flexible assumptions, we present three semiparametric Bayesian NLME models constructed with Dirichlet process (DP) priors; Dirichlet process models are often referred to as infinite mixture models. We propose a unified approach, the penalized posterior Bayes factor, for the purpose of model comparison. Using simulation studies, we compare the performance of two of the three semiparametric hierarchical Bayesian approaches with that of the parametric Bayesian approach. Simulation results suggest that our penalized posterior Bayes factor is a robust method for comparing hierarchical parametric and semiparametric models. An application to gastric emptying studies demonstrates the advantage of our estimation and evaluation approaches. / Ph. D.
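The dissertation's ARMS annealing is not reproduced here, but the local-maxima problem it addresses is easy to demonstrate with a plain multi-start EM fit of a finite Gaussian mixture. A minimal sketch, using scikit-learn's EM implementation with random restarts (all data and settings invented for illustration):

```python
# Sketch: EM for a finite Gaussian mixture run from many random starts,
# keeping the highest-likelihood fit -- the local-maxima problem that
# ARMS annealing addresses with principled starting points.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic heterogeneous data: two subpopulations.
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 700)]).reshape(-1, 1)

# n_init restarts EM from multiple random initializations and retains
# the solution with the best log-likelihood.
gm = GaussianMixture(n_components=2, n_init=20, init_params="random", random_state=0)
gm.fit(x)
print(gm.means_.ravel(), gm.weights_)
```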
95

Testing methods for calibrating Forest Vegetation Simulator (FVS) diameter growth predictions

Cankaya, Ergin Cagatay 20 September 2018 (has links)
The Forest Vegetation Simulator (FVS) is a growth and yield modeling system widely used for predicting stand- and tree-level attributes for management and planning applications in North American forests. The accuracy of FVS predictions for a range of tree- and stand-level attributes depends a great deal on the performance of the diameter increment model and its predictions of change in diameter at breast height (DBH) over time. To address the challenge of predicting growth in highly variable and geographically expansive forest systems, FVS was designed to include an internal calibration algorithm that makes use of growth observations, when available, from permanent inventory plots. The basic idea is that observed growth rates on a collection of remeasured trees are used to adjust or "calibrate" FVS diameter growth predictions. DBH modeling was therefore the focus of this investigation. Five methods were proposed for local calibration of individual-tree DBH growth predictions and compared to two sets of results generated without calibration. Data from the US Forest Service's Forest Inventory and Analysis (FIA) program were used to test the methods for eleven widely distributed forest tree species in Virginia. Two calibration approaches were based on median prediction errors from locally observed DBH increments spanning a five-year average time interval, two were based on simple linear regression models fitted to the locally observed prediction errors, and one method employed a mixed effects regression model with a random intercept term estimated from locally observed DBH increments. Data withholding, specifically leave-one-out cross-validation, was used to compare the results of the methods tested. Results showed that all of the calibration approaches tested generally improved the accuracy of DBH growth predictions, with the median-based and regression-based methods performing better than the random-effects-based approach. Equivalence testing showed that the median- and regression-based local calibration methods met error tolerances within ±12% of observed DBH increments for all species, with the random effects approach meeting a larger tolerance of ±17%. These results showed improvement over uncalibrated models, which failed to meet tolerances as high as ±30% for some species in a newly fitted DBH growth model for Virginia, and as high as ±170% for an existing model fitted to data from a much larger region of the southeastern United States. Local calibration of regional DBH increment models provides an effective means of substantially reducing prediction errors when a relatively small set of observations is available from local sources such as permanent forest inventory plots or the FIA database. / MS /
The Forest Vegetation Simulator (FVS) is a growth and yield model widely used for predicting stand dynamics and supporting management decisions in North American forests. Diameter increment is a major component in modeling tree growth, and the system of integrated analytical tools in FVS relies heavily on the performance of the diameter increment model and the subsequent use of predicted change in diameter at breast height (DBH) over time in forecasting tree attributes. To address the challenge of predicting growth in highly variable and geographically expansive forest systems, FVS was designed to include an internal calibration algorithm that makes use of growth observations, when available, from permanent inventory plots. The basic idea is that observed growth rates on a small set of remeasured trees are used to adjust or "calibrate" FVS growth predictions. This internal calibration was the subject investigated here: five alternative methods for calibrating predictions to a specific site or stand of interest were proposed and compared to two sets of results generated without calibration. Using independently observed growth data from Forest Service FIA remeasurements in Virginia, the median-based and regression-based methods performed better than the random-effects-based approach. Local calibration of regional DBH increment models provides an effective means of substantially reducing prediction errors, and the results of this study provide information for evaluating the efficiency of FVS calibration alternatives and a possible method for future implementation.
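A minimal sketch of the median-based calibration idea under leave-one-out cross-validation follows; the additive form of the adjustment and all names and numbers are illustrative assumptions, not the study's exact formulation.

```python
# Sketch: median-error calibration of DBH growth predictions with
# leave-one-out cross-validation. The additive adjustment is an
# illustrative assumption, not the study's exact formulation.
import numpy as np

def calibrate_median(observed, predicted):
    """Return predictions shifted by the median local prediction error."""
    return predicted + np.median(observed - predicted)

def loo_calibrated_errors(observed, predicted):
    """Leave-one-out: calibrate on all trees except the held-out one."""
    n = len(observed)
    out = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        adj = np.median(observed[mask] - predicted[mask])
        out[i] = observed[i] - (predicted[i] + adj)
    return out

obs = np.array([0.8, 1.1, 0.9, 1.4, 0.7])   # observed 5-yr DBH increments
pred = np.array([1.0, 1.3, 1.2, 1.5, 1.0])  # uncalibrated FVS-style predictions
print(loo_calibrated_errors(obs, pred))
```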
96

Comparison of four methods for deriving hospital standardised mortality ratios from a single hierarchical logistic regression model

Mohammed, Mohammed A., Manktelow, B.N., Hofer, T.P. January 2012 (has links)
There is interest in deriving case-mix adjusted standardised mortality ratios so that comparisons between healthcare providers, such as hospitals, can be undertaken in the controversial belief that variability in standardised mortality ratios reflects quality of care. Typically, standardised mortality ratios are derived using a fixed effects logistic regression model without a hospital term in the model. This fails to account for the hierarchical structure of the data (patients nested within hospitals), so a hierarchical logistic regression model is more appropriate. However, four methods have been advocated for deriving standardised mortality ratios from a hierarchical logistic regression model, and neither their agreement nor which is to be preferred is known. We found significant differences between the four types of standardised mortality ratios because they reflect a range of underlying conceptual issues. The most subtle issue is the distinction between asking how an average patient fares in different hospitals versus how patients at a given hospital fare at an average hospital. Since the answers to these questions are not the same, and since the choice between the two approaches is not obvious, the extent to which profiling hospitals on mortality can be undertaken safely and reliably, without resolving these methodological issues, remains questionable.
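To make the conceptual distinction concrete, here is a toy sketch of two SMR variants derivable from a fitted random-intercept logistic model; the coefficients and data are invented, and the paper's four definitions are not reproduced exactly.

```python
# Sketch: two SMR variants from a random-intercept logistic model.
# All coefficients and data are illustrative assumptions.
import numpy as np

def inv_logit(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical fitted model: logit(p) = b0 + b1*risk + u_hospital
b0, b1 = -3.0, 2.0
u_h = 0.4                                # estimated random intercept, hospital h
risk = np.array([0.1, 0.5, 0.9, 0.3])    # case-mix scores for hospital h's patients
deaths_observed = 2

# Expected deaths for these patients at an *average* hospital (u = 0).
e_avg = inv_logit(b0 + b1 * risk).sum()
# Model-predicted deaths *including* the hospital effect.
e_own = inv_logit(b0 + b1 * risk + u_h).sum()

print("observed / expected-at-average-hospital:", deaths_observed / e_avg)
print("predicted / expected (model-smoothed):  ", e_own / e_avg)
```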
97

Ecological Responses to Severe Flooding in Coastal Ecosystems: Determining the Vegetation Response to Hurricane Harvey within a Texas Coast Salt Marsh

Hudman, Kenneth Russell 08 1900 (has links)
Vegetative health was measured both before and after Hurricane Harvey using remotely sensed vegetation indices on the coastal marshland surrounding Galveston Island's West Bay. Data were recorded on a monthly basis from September of 2005 until September of 2019 in order to document the vegetation response to this significant disturbance event. Both the initial impact and the recovery were found to depend on a variety of factors, including elevation zone, spatial proximity to the bay, the season during which recovery took place, and the amount of time since the hurricane. Slope was also tested as a potential variable using a LiDAR-derived slope raster; while it was unable to significantly explain variations in vegetative health immediately following the hurricane, it did explain some degree of variability among spatially close data points. Among environmental factors, elevation zone appeared to be the most important in determining the degree of vegetation impact, suggesting that the different plant assemblages that make up different portions of the marsh react differently to the severe flooding that took place during Harvey.
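The abstract does not name the specific vegetation indices used; as background only, the most common such index is NDVI, sketched here with invented reflectance values.

```python
# Sketch: NDVI, a standard remotely sensed vegetation index; offered as
# background, since the abstract does not specify which indices were used.
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

print(ndvi([0.45, 0.30], [0.08, 0.12]))  # healthy vs. stressed vegetation
```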
98

Semi-mechanistic models of glucose homeostasis and disease progression in type 2 diabetes

Choy, Steve January 2016 (has links)
Type 2 diabetes mellitus (T2DM) is a metabolic disorder characterized by persistently high blood glucose, resulting from a combination of insulin resistance and a reduced capacity of β-cells to secrete insulin. While the exact causes of T2DM remain unknown, obesity is known to be a major risk factor as well as a co-morbidity for T2DM. As the global prevalence of obesity continues to increase, the association between obesity and T2DM warrants further study. Traditionally, mathematical models used to study T2DM were mostly empirical and thus failed to capture the dynamic relationship between glucose and insulin. More recently, mechanism-based population models describing glucose-insulin homeostasis on a physiological basis have been proposed, offering a substantial improvement in predictive ability over existing empirical models. The primary objectives of this thesis are (i) to examine the predictive usefulness of semi-mechanistic models in T2DM by applying an existing population model to clinical data, and (ii) to explore the relationship between obesity and T2DM and describe it mathematically in a novel semi-mechanistic model that explains changes to glucose-insulin homeostasis and the disease progression of T2DM. Through the use of non-linear mixed effects modelling, the primary mechanism of action of an antidiabetic drug was correctly identified using the integrated glucose-insulin model, reinforcing the predictive potential of semi-mechanistic models in T2DM. A novel semi-mechanistic model was developed that incorporates a relationship between weight change and insulin sensitivity to describe glucose, insulin, and glycated hemoglobin simultaneously in a clinical setting. The model was also successfully adapted to a pre-clinical setting and was able to describe the pathogenesis of T2DM in rats transitioning from healthy to severely diabetic. This work has shown that a previously unutilized biomarker significantly affects glucose homeostasis and disease progression in T2DM, and that pharmacometric models accounting for the effects of obesity in T2DM offer a more complete physiological understanding of the disease.
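As a flavor of what "semi-mechanistic" means here, the following is a minimal glucose-insulin feedback ODE; its structure and parameters are invented for illustration and are far simpler than the thesis's integrated glucose-insulin model.

```python
# Sketch: a minimal glucose-insulin feedback ODE of the kind underlying
# semi-mechanistic models. Structure and parameters are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def gi_system(t, y, k_in=1.0, k_out=0.1, s_i=0.05, k_sec=0.02, k_deg=0.1):
    g, i = y
    dg = k_in - k_out * g - s_i * i * g   # glucose: production, use, insulin-driven uptake
    di = k_sec * g - k_deg * i            # insulin: glucose-stimulated secretion, clearance
    return [dg, di]

sol = solve_ivp(gi_system, (0, 200), [10.0, 5.0])
print(sol.y[:, -1])   # approach to steady state
```

Lowering the insulin-sensitivity parameter s_i in this toy system raises the steady-state glucose, mimicking (very crudely) the insulin-resistance pathway that the thesis links to weight change.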
99

Uncertainty intervals and sensitivity analysis for missing data

Genbäck, Minna January 2016 (has links)
In this thesis we develop methods for dealing with missing data in a univariate response variable when estimating regression parameters. Missing outcome data is a problem in a number of applications, one of which is follow-up studies, where data are collected on two (or more) occasions and it is common that only some of the initial participants return at the second occasion. This is the case in Paper II, where we investigate predictors of decline in self-reported health in older populations in Sweden, the Netherlands, and Italy; in that study, around 50% of the participants drop out. Researchers commonly rely on the assumption that the missingness is independent of the outcome given some observed covariates. This assumption is called missing at random (MAR) or an ignorable missingness mechanism. However, MAR cannot be tested from the data, and if it does not hold, estimators based on it are biased. In the study of Paper II, we suspect that some individuals drop out due to bad health, in which case the data are not MAR. One alternative to MAR, which we pursue, is to incorporate the uncertainty due to missing data into the estimates, reporting interval estimates instead of point estimates and uncertainty intervals instead of confidence intervals. An uncertainty interval is the analog of a confidence interval but wider, due to a relaxation of the assumptions on the missing data. These intervals can be used to visualize the consequences that deviations from MAR have on the conclusions of a study; that is, they can be used to perform a sensitivity analysis of MAR. The thesis covers different types of linear regression: in Papers I and III we have a continuous outcome, in Paper II a binary outcome, and in Paper IV we allow for mixed effects with a continuous outcome. In Paper III we estimate the effect of a treatment, which can be seen as an example of missing outcome data.
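One simple way to construct such an interval, offered as an illustrative assumption rather than the thesis's derivation, is to bound the unidentified bias with a sensitivity parameter and widen the confidence limits accordingly:

```python
# Sketch: an uncertainty interval for a mean with missing outcomes.
# The bias of the missing-data mean is bounded by a sensitivity parameter
# delta and the confidence limits widened -- an illustrative construction,
# not the thesis's exact derivation.
import numpy as np

y_obs = np.array([2.1, 3.4, 2.8, 3.0, 2.5, 3.9])  # observed outcomes
n_total, n_obs = 10, len(y_obs)                    # 4 outcomes are missing
mean, se = y_obs.mean(), y_obs.std(ddof=1) / np.sqrt(n_obs)

delta = 0.5           # assumed max shift of the missing-data mean (MAR relaxed)
p_miss = 1 - n_obs / n_total
lo = mean - p_miss * delta - 1.96 * se
hi = mean + p_miss * delta + 1.96 * se
print((lo, hi))       # wider than the naive 95% CI; equals it when delta = 0
```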
100

Approximating the Posterior Distribution of a Hierarchical Mixed-Effects Gamma-Poisson Model

Nembot Simo, Annick Joëlle 01 1900 (has links)
We propose a method for analysing count (Poisson) data based on the Poisson Regression Interactive Multilevel Modeling (PRIMM) procedure introduced by Christiansen and Morris (1997). The Poisson regression in the PRIMM method has fixed effects only, whereas our model also incorporates random effects. As in Christiansen and Morris (1997), the model aims at doing inference based on adequate analytical approximations of the posterior distributions of the parameters, which avoids the use of computationally expensive methods such as Markov chain Monte Carlo (MCMC). The approximations are based on Laplace's method and on the asymptotic theory underlying the normal approximation to posterior distributions. Estimates of the Poisson mixed effects regression parameters are obtained by maximizing their joint posterior density via the Newton-Raphson algorithm. This study also provides the first two posterior moments of the Poisson parameters involved, the posterior distribution of each of which is approximately a gamma distribution. Applications to two datasets show that our model can to some extent be considered a generalization of the PRIMM method: it applies to both unstratified and stratified Poisson data, and in the stratified case it includes not only fixed effects but also random effects associated with the strata. Finally, the model is applied to data on several types of adverse events recorded by the participants of a clinical trial of a quadrivalent vaccine against measles, mumps, rubella, and varicella. The Poisson regression incorporates the fixed effect corresponding to the treatment/control covariate as well as random effects associated with the biological systems of the body affected by the adverse events.
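A minimal sketch of the computational core described above (Newton-Raphson to the posterior mode, then a Laplace normal approximation), reduced to a single log-rate parameter with a normal prior; the setup is illustrative, not the thesis's hierarchical model.

```python
# Sketch: Laplace approximation to a Poisson log-posterior via
# Newton-Raphson. One parameter theta = log(lambda) with a normal prior;
# data and prior values are illustrative assumptions.
import numpy as np

y = np.array([3, 5, 2, 4, 6])     # hypothetical counts
mu0, tau2 = 0.0, 10.0             # normal prior on theta = log(lambda)

def grad_hess(theta):
    lam = np.exp(theta)
    g = y.sum() - len(y) * lam - (theta - mu0) / tau2   # d log-posterior / d theta
    h = -len(y) * lam - 1.0 / tau2                      # second derivative
    return g, h

theta = 0.0
for _ in range(50):               # Newton-Raphson to the posterior mode
    g, h = grad_hess(theta)
    step = g / h
    theta -= step
    if abs(step) < 1e-10:
        break

post_var = -1.0 / grad_hess(theta)[1]   # Laplace: normal with curvature variance
print("posterior mode:", theta, "posterior sd:", np.sqrt(post_var))
```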
