11 |
Constrained clustering and cognitive decline detection / Lu, Zhengdong. January 2008 (has links)
Thesis (Ph.D.) OGI School of Science & Engineering at OHSU, June 2008. / Includes bibliographical references (leaves 138-145).
|
12 |
Modelo não linear misto aplicado a análise de dados longitudinais em um solo localizado em Paragominas, PA / Nonlinear mixed model applied in longitudinal data analysis in a soil located in Paragominas, PA / Marcello Neiva de Mello. 22 January 2014 (has links)
This work applies the theory of mixed models to the study of nitrogen and carbon content in soil at various depths. Because of the large amount of organic matter in the soil, nitrogen and carbon content shows high variability at the shallowest depths and follows a nonlinear pattern over depth. It was therefore necessary to apply nonlinear mixed models to the longitudinal data. This approach yields a model that handles nonlinear data with heterogeneous variances, providing a fitted curve for each sample.
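As an illustration of the per-sample curve idea in this abstract, the sketch below fits a simple exponential decline of nitrogen content with depth separately for each sample. The model form, variable names, and simulated data are hypothetical stand-ins, not the author's specification; a true nonlinear mixed-effects fit would estimate the between-sample random effects jointly rather than curve by curve.

```python
# Two-stage stand-in for a nonlinear mixed model: fit
# N(depth) = a * exp(-b * depth) + c per sample, then inspect the
# spread of the per-sample coefficients (the "random effects").
import numpy as np
from scipy.optimize import curve_fit

def n_content(depth, a, b, c):
    # hypothesized nonlinear decline of nitrogen with depth
    return a * np.exp(-b * depth) + c

rng = np.random.default_rng(1)
depths = np.array([5.0, 10, 20, 30, 50, 80, 100])      # cm, hypothetical
fits = []
for sample in range(8):
    a = 2.0 + rng.normal(0, 0.3)                       # sample-specific level
    y = n_content(depths, a, 0.05, 0.2)
    y += rng.normal(0, 0.05 * np.exp(-0.02 * depths))  # noisier near surface
    params, _ = curve_fit(n_content, depths, y, p0=(2.0, 0.05, 0.2))
    fits.append(params)

print("between-sample SD of 'a':", np.std([f[0] for f in fits]))
```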
|
13 |
Benefits of Non-Linear Mixed Effect Modeling and Optimal Design : Pre-Clinical and Clinical Study Applications / Ernest II, Charles. January 2013 (has links)
Despite the growing promise of pharmaceutical research, inferior experimentation or interpretation of data can inhibit breakthrough molecules from finding their way out of research institutions and reaching patients. This thesis provides evidence that better characterization of pre-clinical and clinical data can be accomplished using non-linear mixed effect modeling (NLMEM) and that more effective experiments can be conducted using optimal design (OD). To demonstrate the applicability of NLMEM and OD in pre-clinical applications, in vitro ligand binding studies were examined. NLMEMs were used to evaluate the precision and accuracy of ligand binding parameter estimation from different ligand binding experiments using sequential (NLR) and simultaneous non-linear regression (SNLR). SNLR provided superior resolution of parameter estimation, in both precision and accuracy, compared to NLR. OD of these ligand binding experiments for one- and two-binding-site systems, including commonly encountered experimental errors, was performed using D- and ED-optimality. OD demonstrated that reducing the number of samples, measurement times, and separate ligand concentrations provides robust parameter estimation and more efficient and cost-effective experimentation. To demonstrate the applicability of NLMEM and OD in clinical applications, a phase advanced sleep study formed the basis of this investigation. A mixed-effect Markov-chain model based on transition probabilities as multinomial logistic functions, using polysomnography data in phase advanced subjects, was developed to compare the sleep architecture of this population with that of insomniac patients. The NLMEM was sufficiently robust for describing the data characteristics in phase advanced subjects and, in contrast to aggregated clinical endpoints, which provide an overall assessment of sleep behavior over the night, described the dynamic behavior of the sleep process. OD of a dichotomous, non-homogeneous, Markov-chain phase advanced sleep NLMEM was performed using D-optimality by computing the Fisher information matrix for each Markov component. The D-optimal designs improved the precision of parameter estimates, leading to more efficient designs by optimizing the doses and the number of subjects in each dose group. This thesis provides examples of how studies in drug development can be optimized using NLMEM and OD, providing a tool that can lower the cost and increase the overall efficiency of drug development. / Note: My name should be listed as "Charles Steven Ernest II" on cover.
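The mixed-effect Markov-chain idea in this abstract, with transition probabilities written as multinomial logistic functions, can be sketched generically as below. The states, covariate, and coefficient values are hypothetical illustrations, and a full NLMEM would estimate the subject-level random effect eta from data rather than fix it.

```python
# Hypothetical sketch: sleep-stage transition probabilities as a
# multinomial logistic (softmax) function of time of night plus a
# subject-specific random effect, as in a mixed-effect Markov model.
import numpy as np

STAGES = ["awake", "sleep"]  # dichotomous, as in the OD application

def transition_probs(current_stage, t_night, eta):
    """P(next stage | current stage); the first stage is the reference."""
    # assumed fixed effects: rows = destination stage, cols = [intercept, slope]
    beta = {"awake": np.array([[0.0, 0.0], [-0.5, 0.8]]),
            "sleep": np.array([[0.0, 0.0], [-2.0, -0.3]])}
    b = beta[current_stage]
    logits = b[:, 0] + b[:, 1] * t_night + np.array([0.0, eta])
    p = np.exp(logits - logits.max())
    return dict(zip(STAGES, p / p.sum()))

# one subject's chance of falling asleep, early vs. late in the night
print(transition_probs("awake", t_night=0.1, eta=0.4))
print(transition_probs("awake", t_night=0.9, eta=0.4))
```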
|
14 |
Bayesian modeling of neuropsychological test scores / Du, Mengtian. 06 October 2021 (has links)
In this dissertation we propose novel Bayesian methods for the analysis of patterns in neuropsychological testing. We first focus on situations in which the goal of the analysis is to discover risk factors for cognitive decline using longitudinal assessments of test scores. Variable selection in the Bayesian setting is still challenging, particularly for the analysis of longitudinal data. We propose a novel approach to selecting the fixed effects in mixed effect models that combines a backward selection algorithm with a metric based on the posterior credible intervals of the model parameters. The heuristic of this approach is to search for those parameters that are most likely to be different from zero based on their posterior credible intervals, without requiring ad hoc approximations of model parameters or informative prior distributions. We show via a simulation study that this approach produces more parsimonious models than other popular criteria such as the Bayesian deviance information criterion. We then apply this approach to test the hypothesis that genotypes of the APOE gene have different effects on the rate of cognitive decline of participants in the Long Life Family Study. In the second part of the dissertation we shift focus to the analysis of neuropsychological tests administered using emerging digital technologies. The challenge in analyzing these data is that for each study participant the test is a data stream recording the time and spatial coordinates of the digitally executed test, and the goal is to extract useful and informative univariate summary variables that can be used for analysis. Toward this goal, we propose a novel application of Bayesian hidden Markov models to analyze digitally recorded Trail Making Tests. Applying the hidden Markov model enables us to perform automatic segmentation of the digital data stream and to extract meaningful metrics that correlate Trail Making Test performance with other cognitive and physical function test scores. We show that the extracted metrics provide information in addition to the traditionally used scores.
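A minimal sketch of the credible-interval-based backward selection described in this abstract is given below, operating on generic posterior draws (which any Bayesian mixed-model sampler could supply). The 95% level, the drop rule, and all variable names are assumptions for illustration.

```python
# Hedged sketch: backward elimination of fixed effects using posterior
# credible intervals -- repeatedly drop the effect whose interval most
# clearly overlaps zero, until all remaining intervals exclude zero.
import numpy as np

def ci_excludes_zero(draws, level=0.95):
    lo, hi = np.quantile(draws, [(1 - level) / 2, 1 - (1 - level) / 2])
    return lo > 0 or hi < 0

def backward_select(posterior, level=0.95):
    """posterior: dict mapping effect name -> 1D array of MCMC draws."""
    active = dict(posterior)
    while active:
        weak = {k: v for k, v in active.items()
                if not ci_excludes_zero(v, level)}
        if not weak:
            break                 # every remaining effect is credibly nonzero
        drop = min(weak, key=lambda k: abs(np.median(weak[k])))
        del active[drop]
        # in a real analysis the model would be refit before re-checking
    return list(active)

rng = np.random.default_rng(0)
post = {"age": rng.normal(0.8, 0.1, 4000),
        "apoe_e4": rng.normal(-0.4, 0.15, 4000),
        "education": rng.normal(0.02, 0.2, 4000)}
print(backward_select(post))      # typically ['age', 'apoe_e4']
```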
|
15 |
Addressing the Variable Selection Bias and Local Optimum Limitations of Longitudinal Recursive Partitioning with Time-Efficient Approximations. January 2019 (has links)
Longitudinal recursive partitioning (LRP) is a tree-based method for longitudinal data. It takes a sample of individuals that were each measured repeatedly across time and splits them based on a set of covariates such that individuals with similar trajectories become grouped together into nodes. LRP does this by fitting a mixed-effects model to each node every time that it becomes partitioned and extracting the deviance, which is the measure of node purity. LRP is implemented using the classification and regression tree algorithm, which suffers from a variable selection bias and does not guarantee reaching a global optimum. Additionally, fitting mixed-effects models to each potential split only to extract the deviance and discard the rest of the information is a computationally intensive procedure. Therefore, in this dissertation, I address the high computational demand, the variable selection bias, and the local optimum solution. I propose three approximation methods that reduce the computational demand of LRP and, at the same time, allow for a straightforward extension to recursive partitioning algorithms that do not have a variable selection bias and can reach the global optimum solution. In the three proposed approximations, a mixed-effects model is fit to the full data, and the growth curve coefficients for each individual are extracted. Then, (1) a principal component analysis is fit to the set of coefficients and the principal component score is extracted for each individual, (2) a one-factor model is fit to the coefficients and the factor score is extracted, or (3) the coefficients are summed. Each of the three methods results in a single score per individual that represents the growth curve trajectory. Now that the outcome is a single score for each individual, any tree-based method may be used for partitioning the data and grouping the individuals together. Once the individuals are assigned to their final nodes, a mixed-effects model is fit to each terminal node with the individuals belonging to it.
I conduct a simulation study, where I show that the approximation methods achieve the proposed goals while maintaining a level of out-of-sample prediction accuracy similar to LRP. I then illustrate and compare the methods using an applied data set. / Dissertation/Thesis / Doctoral Dissertation Psychology 2019
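A rough sketch of approximation (1) from this abstract follows: per-individual growth coefficients are compressed to a single principal component score, which an ordinary regression tree then partitions. The simulated data, the OLS stand-in for extracting coefficients, and the use of scikit-learn are illustrative assumptions.

```python
# Sketch of LRP approximation (1): summarize each individual's growth
# curve by its coefficients, reduce them to one PCA score, and hand the
# scores to a standard regression tree (no per-split mixed-model fits).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
n, t = 200, np.arange(5.0)
group = rng.integers(0, 2, n)             # covariate driving the trajectories
coefs = np.empty((n, 2))                  # per-individual [intercept, slope]
for i in range(n):
    icept = 10 + 2 * group[i] + rng.normal(0, 1)
    slope = 1 + 1.5 * group[i] + rng.normal(0, 0.3)
    y = icept + slope * t + rng.normal(0, 0.5, t.size)
    coefs[i] = np.polyfit(t, y, 1)[::-1]  # OLS stand-in for mixed-model BLUPs

score = PCA(n_components=1).fit_transform(coefs).ravel()   # one score each
tree = DecisionTreeRegressor(max_depth=2).fit(group.reshape(-1, 1), score)
print(tree.tree_.feature, tree.tree_.threshold)            # splits on 'group'
```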
|
16 |
Pharmacometric Models to Improve Treatment of Tuberculosis / Svensson, Elin M. January 2016 (has links)
Tuberculosis (TB) is the world’s most deadly infectious disease and causes enormous public health problems. The comorbidity with HIV and the rise of multidrug-resistant TB strains impede successful therapy through drug-drug interactions and the lack of efficient second-line treatments. The aim of this thesis was to support the improvement of anti-TB therapy through development of pharmacometric models, specifically focusing on the novel drug bedaquiline, pharmacokinetic interactions and methods for pooled population analyses. A population pharmacokinetic model of bedaquiline and its metabolite M2, linked to semi-mechanistic models of body weight and albumin concentrations, was developed and used for exposure-response analysis. Treatment response was quantified by measurements of mycobacterial load and early bedaquiline exposure was found to significantly impact the half-life of bacterial clearance. The analysis represents the first successful characterization of a concentration-effect relationship for bedaquiline. Single-dose Phase I studies investigating potential interactions between bedaquiline and efavirenz, nevirapine, ritonavir-boosted lopinavir, rifampicin and rifapentine were analyzed with a model-based approach. Substantial effects were detected in several cases and dose-adjustments mitigating the impact were suggested after simulations. The interaction effects of nevirapine and ritonavir-boosted lopinavir were also confirmed in patients with multidrug-resistant TB on long-term treatment combining the antiretrovirals and bedaquiline. Furthermore, the outcomes from model-based analysis were compared to results from conventional non-compartmental analysis in a simulation study. Non-compartmental analysis was found to consistently underpredict the interaction effect when most of the concentration-time profile was not observed, as commonly is the case for compounds with very long terminal half-life such as bedaquiline. To facilitate pooled analyses of individual patient data from multiple sources a structured development procedure was outlined and a fast diagnostic tool for extensions of the stochastic model components was developed. Pooled analyses of nevirapine and rifabutin pharmacokinetics were performed; the latter generating comprehensive dosing recommendations for combined administration of rifabutin and antiretroviral protease inhibitors. The work presented in this thesis demonstrates the usefulness of pharmacometric techniques to improve treatment of TB and especially contributes evidence to inform optimized dosing regimens of new and old anti-TB drugs in various clinical contexts.
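The non-compartmental underprediction mentioned in this abstract is easy to reproduce in a toy setting: for a drug with a very long terminal half-life, a trapezoidal AUC over a short sampling window misses most of the exposure that the model-based value (dose/CL) captures. The one-compartment model and parameter values below are arbitrary illustrations, not bedaquiline estimates.

```python
# Toy comparison: truncated non-compartmental AUC vs. model-based AUC
# for a compound with a very long terminal half-life.
import numpy as np

dose, cl, v = 400.0, 2.0, 2000.0         # mg, L/h, L -- arbitrary values
ke = cl / v                              # elimination rate constant (1/h)
half_life = np.log(2) / ke               # ~693 h, i.e. several weeks

t = np.linspace(0, 72, 200)              # 72 h sampling window
conc = (dose / v) * np.exp(-ke * t)      # IV bolus, one compartment

auc_nca = np.sum((conc[1:] + conc[:-1]) / 2 * np.diff(t))  # trapezoid rule
auc_model = dose / cl                    # model-based AUC to infinity

print(f"half-life: {half_life:.0f} h")
print(f"NCA AUC(0-72h): {auc_nca:.1f}  vs  model AUC(0-inf): {auc_model:.1f}")
```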
|
17 |
Applied Adaptive Optimal Design and Novel Optimization Algorithms for Practical Use / Strömberg, Eric. January 2016 (has links)
The costs of developing new pharmaceuticals have increased dramatically during the past decades. Contributing to these increased expenses are the increasingly extensive and more complex clinical trials required to generate sufficient evidence regarding the safety and efficacy of the drugs. It is therefore of great importance to improve the effectiveness of the clinical phases by increasing the information gained throughout the process, so that the correct decision may be made as early as possible. Optimal design (OD) methodology using the Fisher information matrix (FIM) based on nonlinear mixed effect models (NLMEM) has proven to be a useful tool for making more informed decisions throughout the clinical investigation. The calculation of the FIM for NLMEM, however, lacks an analytic solution and is commonly approximated by linearization of the NLMEM. Furthermore, two structural assumptions for the FIM are available: a full FIM, and a block-diagonal FIM which assumes that the fixed effects are independent of the random effects in the NLMEM. Once the FIM has been derived, it can be transformed into a scalar optimality criterion for comparing designs. The optimality criterion may be considered local, if the criterion is based on single point values of the parameters, or global (robust), where the criterion is formed for a prior distribution of the parameters. Regardless of design criterion, FIM approximation or structural assumption, the design will be based on the prior information regarding the model and parameters, and is thus sensitive to misspecification at the design stage. Model-based adaptive optimal design (MBAOD), however, has been shown to be less sensitive to misspecification at the design stage. The aim of this thesis is to further the understanding and practicality of standard and model-based adaptive optimal design. This is achieved by: (i) investigating how two common FIM approximations and the structural assumptions may affect the optimized design, (ii) reducing the runtimes of complex design optimization by implementing a low-level parallelization of the FIM calculation, (iii) further developing and demonstrating a framework for performing MBAOD, and (iv) investigating the potential advantages of using a global optimality criterion in the already robust MBAOD.
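A toy version of the FIM-based design comparison described in this abstract: linearize a simple exponential model around assumed parameter values and rank two sampling schedules by the D-criterion det(FIM). The model, parameters, error level, and candidate designs are all assumptions for illustration.

```python
# Toy D-optimality: approximate the FIM by linearization (Jacobian of
# the model predictions w.r.t. the parameters), then compare designs
# via det(J^T J / sigma^2) -- larger is better.
import numpy as np

def model(t, theta):
    a, ke = theta
    return a * np.exp(-ke * t)

def fim(times, theta, sigma=0.1, eps=1e-6):
    J = np.empty((len(times), len(theta)))
    for j in range(len(theta)):
        dp = np.array(theta, float)
        dp[j] += eps
        J[:, j] = (model(times, dp) - model(times, theta)) / eps
    return J.T @ J / sigma**2

theta = [10.0, 0.3]                      # assumed "true" parameters
designs = {"A (clustered early)": np.array([0.5, 1.0, 1.5]),
           "B (spread out)":      np.array([0.5, 3.0, 8.0])}
for name, d in designs.items():
    print(name, "det(FIM) =", np.linalg.det(fim(d, theta)))
# the spread design wins: larger det(FIM) means a smaller joint
# confidence region for the parameter estimates
```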
|
18 |
Improved Methods for Pharmacometric Model-Based Decision-Making in Clinical Drug Development / Dosne, Anne-Gaëlle. January 2016 (has links)
Pharmacometric model-based analysis using nonlinear mixed-effects models (NLMEM) has to date mainly been applied to learning activities in drug development. However, such analyses can also serve as the primary analysis in confirmatory studies, which is expected to bring higher power than traditional analysis methods, among other advantages. Because of the high expertise in designing and interpreting confirmatory studies with other types of analyses and because of a number of unresolved uncertainties regarding the magnitude of potential gains and risks, pharmacometric analyses are traditionally not used as primary analysis in confirmatory trials. The aim of this thesis was to address current hurdles hampering the use of pharmacometric model-based analysis in confirmatory settings by developing strategies to increase model compliance to distributional assumptions regarding the residual error, to improve the quantification of parameter uncertainty and to enable model prespecification. A dynamic transform-both-sides approach capable of handling skewed and/or heteroscedastic residuals and a t-distribution approach allowing for symmetric heavy tails were developed and proved relevant tools to increase model compliance to distributional assumptions regarding the residual error. A diagnostic capable of assessing the appropriateness of parameter uncertainty distributions was developed, showing that currently used uncertainty methods such as bootstrap have limitations for NLMEM. A method based on sampling importance resampling (SIR) was thus proposed, which could provide parameter uncertainty in many situations where other methods fail such as with small datasets, highly nonlinear models or meta-analysis. SIR was successfully applied to predict the uncertainty in human plasma concentrations for the antibiotic colistin and its prodrug colistin methanesulfonate based on an interspecies whole-body physiologically based pharmacokinetic model. Lastly, strategies based on model-averaging were proposed to enable full model prespecification and proved to be valid alternatives to standard methodologies for studies assessing the QT prolongation potential of a drug and for phase III trials in rheumatoid arthritis. In conclusion, improved methods for handling residual error, parameter uncertainty and model uncertainty in NLMEM were successfully developed. As confirmatory trials are among the most demanding in terms of patient-participation, cost and time in drug development, allowing (some of) these trials to be analyzed with pharmacometric model-based methods will help improve the safety and efficiency of drug development.
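The SIR method referenced in this abstract can be sketched generically: draw candidate parameter vectors from a proposal (for instance the asymptotic normal centered at the final estimates), weight each by the ratio of model likelihood to proposal density, and resample in proportion to the weights. The toy model, proposal covariance, and sample sizes below are all assumptions.

```python
# Generic sampling importance resampling (SIR) for parameter uncertainty.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = rng.normal(1.2, 0.8, 40)                  # toy dataset

def loglik(theta):                               # toy model: N(mu, sd)
    mu, sd = theta
    return -np.inf if sd <= 0 else stats.norm.logpdf(data, mu, sd).sum()

est = np.array([data.mean(), data.std()])        # "final estimates"
cov = np.diag([0.05, 0.03])                      # assumed proposal covariance

m = 5000
proposal = rng.multivariate_normal(est, cov, m)              # 1) sample
logw = (np.array([loglik(p) for p in proposal])
        - stats.multivariate_normal.logpdf(proposal, est, cov))  # 2) weight
w = np.exp(logw - logw.max())
idx = rng.choice(m, size=1000, replace=True, p=w / w.sum())  # 3) resample
sir_sample = proposal[idx]
print(np.percentile(sir_sample[:, 0], [2.5, 97.5]))          # CI for mu
```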
|
19 |
Statistical models for estimating the intake of nutrients and foods from complex survey data / Pell, David Andrew. January 2019 (has links)
Background: The consequences of poor nutrition are well known and of wide concern. Governments and public health agencies utilise food and diet surveillance data to make decisions that lead to improvements in nutrition. These surveys often utilise complex sample designs for efficient data collection. There are several challenges in the statistical analysis of dietary intake data collected using complex survey designs which have not been fully addressed by current methods. Firstly, the shape of the distribution of intake can be highly skewed due to the presence of outlier observations and a large proportion of zero observations, which arise from the inability of the food diary to capture consumption within the period of observation. Secondly, dietary data are subject to variability arising from day-to-day individual variation in food consumption and from measurement error, which must be accounted for in the estimation procedure for correct inferences. Thirdly, the complex sample design needs to be incorporated into the estimation procedure to allow extrapolation of results to the target population. This thesis aims to develop novel statistical methods to address these challenges, applied to the analysis of iron intake data from the UK National Diet and Nutrition Survey Rolling Programme (NDNS RP) and UK national prescription data on iron deficiency medication. Methods: 1) To assess the nutritional status of particular population groups, a two-part model with a generalised gamma (GG) distribution was developed for intakes that show high frequencies of zero observations. The two-part model accommodated the sources of variation of dietary intake with a random intercept in each component; the intercepts could be correlated, allowing a correlation between the probability of consuming and the amount consumed. 2) To identify population groups at risk of low nutrient intakes, a linear quantile mixed-effects model was developed to model quantiles of the distribution of intake as a function of explanatory variables. The proposed approach was illustrated by comparing the quantiles of iron intake with Lower Reference Nutrient Intake (LRNI) recommendations using the NDNS RP. This thesis extended the estimation procedures of both the two-part model with GG distribution and the linear quantile mixed-effects model to incorporate the complex sample design in three steps: the likelihood function was multiplied by the sample weights; bootstrap methods were used for variance estimation; and finally, the variance estimation of the model parameters was stratified by the survey strata. 3) To evaluate the allocation of resources to alleviate nutritional deficiencies, a linear quantile mixed-effects model was used to analyse the distribution of expenditure on iron deficiency medication across health boards in the UK. Expenditure is likely to depend on the iron status of the region; therefore, for a fair comparison among health boards, iron status was estimated using the method developed in objective 2) and used in the specification of the median amount spent. Each health board is formed by a set of general practices (GPs); therefore, a random intercept was used to induce correlation between expenditures from two GPs in the same health board. Finally, the approaches in objectives 1) and 2) were compared with the traditional approach based on weighted linear regression modelling used in the NDNS RP reports. All analyses were implemented using SAS and R.
Results: The two-part model with GG distribution, fitted to the amount of iron consumed from selected episodically consumed foods, showed that females tended to have greater odds of consuming iron from foods but consumed smaller amounts. As age group increased, consumption tended to increase relative to the reference group, though the odds of consumption varied. Iron consumption also appeared to depend on National Statistics Socio-economic Classification (NS-SEC) group, with lower social groups consuming less in general. The quantiles of iron intake estimated using the linear quantile mixed-effects model showed that more than 25% of females aged 11-50y are below the LRNI, and that girls aged 11-18y are the group at highest risk of deficiency in the UK. Predictions of spending on iron medication in the UK based on the linear quantile mixed-effects model showed that areas of higher iron intake had lower spending on treating iron deficiency. In a geographical display of expenditure, Northern Ireland featured the lowest amount spent. Comparing the results from the methods proposed here showed that using the traditional approach based on weighted regression analysis could result in spurious associations. Discussion: This thesis developed novel approaches to the analysis of dietary complex survey data to address three important objectives of diet surveillance, namely estimation of mean food intake by population groups, identification of groups at high risk of nutrient deficiency, and allocation of resources to alleviate nutrient deficiencies. The methods provided models of good fit to dietary data, accounted for the sources of data variability, and extended the estimation procedures to incorporate the complex sample survey design. The use of a GG distribution for modelling intake is an important improvement over existing methods, as it includes many distributions with different shapes and its domain covers non-negative values. The two-part model accommodated the sources of variation of dietary intake with a random intercept in each component; the intercepts could be correlated, allowing a correlation between the probability of consuming and the amount consumed. This also improves on existing approaches that assume a zero correlation. The linear quantile mixed-effects model utilises the asymmetric Laplace distribution, which can accommodate many different distributional shapes, and its likelihood-based estimation is robust to model misspecification. This method is an important improvement over existing methods used in nutritional research, as it explicitly models the quantiles in terms of explanatory variables using a novel quantile regression model with random effects. The application of these models to UK national data confirmed the association between poorer diets and lower social class, identified females aged 11-50y as a group at high risk of iron deficiency, and highlighted Northern Ireland as the region with the lowest expenditure on iron prescriptions.
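A stripped-down two-part model in the spirit of the one above, with a logistic part for the probability of any consumption and a gamma GLM for the positive amounts, can be sketched with statsmodels. Note the simplifications: an ordinary gamma stands in for the generalised gamma, the two parts are fit independently (no correlated random intercepts), survey weights are ignored, and all variable names and coefficients are hypothetical.

```python
# Minimal independent two-part model: logit for P(intake > 0), then a
# gamma GLM with log link for intake given intake > 0.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
female = rng.integers(0, 2, n)
age = rng.uniform(11, 50, n)
p_consume = 1 / (1 + np.exp(-(0.5 + 0.4 * female - 0.01 * age)))
consumed = rng.random(n) < p_consume
# positive amounts: gamma with mean exp(1.5 - 0.2 * female)
amount = np.where(consumed,
                  rng.gamma(4.0, np.exp(1.5 - 0.2 * female) / 4.0), 0.0)

X = sm.add_constant(pd.DataFrame({"female": female, "age": age}))
part1 = sm.GLM(consumed.astype(float), X,
               family=sm.families.Binomial()).fit()       # who consumes
pos = amount > 0
part2 = sm.GLM(amount[pos], X[pos],
               family=sm.families.Gamma(sm.families.links.Log())).fit()
print(part1.params, part2.params, sep="\n")
```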
|
20 |
Statistical modeling and design in forestry : The case of single tree models / Berhe, Leakemariam. January 2008 (has links)
Forest quantification methods have evolved from a simple graphical approach to complex regression models with stochastic structural components. Currently, mixed effects model methodology is receiving attention in the forestry literature. However, the review work (Paper I) indicates a tendency to overlook appropriate covariance structures in the NLME modeling process.

A nonlinear mixed effects modeling process is demonstrated in Paper II using Cupressus lusitanica tree merchantable volume data, comparing several models with and without covariance structures. For simplicity and clarity of the nonlinear mixed effects modeling, four phases of modeling were introduced. The nonlinear mixed effects model for C. lusitanica tree merchantable volume with covariance structures for both the random effects and the within-group errors showed a significant improvement over the model with a simplified covariance matrix. However, this statistical significance contributed little to the predictive performance of the model.

In Paper III, using several performance indicator statistics, tree taper models were compared in an effort to propose the best model for forest management and planning of the C. lusitanica plantations. Kozak's (1988) tree taper model was found to be the best for estimating the C. lusitanica taper profile.

Based on the Kozak (1988) tree taper model, a Ds-optimal experimental design study is carried out in Paper IV. In this study, a Ds-optimal (sub)replication-free design is suggested for the Kozak (1988) tree taper model.
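For orientation, below is a sketch of a variable-exponent taper function of the kind attributed to Kozak (1988): diameter along the stem is modeled with an exponent that changes with relative height. The exact published form and all coefficient values here are assumptions for illustration, not the thesis's fitted model.

```python
# Hedged sketch of a Kozak (1988)-style variable-exponent taper curve:
# d(h) = a0 * D^a1 * a2^D * X^expo(z), with relative height z = h/H and
# X = (1 - sqrt(z)) / (1 - sqrt(p)) for an assumed inflection point p.
# All coefficients below are made up for illustration.
import numpy as np

def taper_diameter(h, D, H, p=0.25):
    a0, a1, a2 = 1.02, 0.88, 1.00
    b1, b2, b3, b4, b5 = 0.95, -0.18, 0.61, -0.35, 0.06
    z = h / H
    X = (1 - np.sqrt(z)) / (1 - np.sqrt(p))
    expo = (b1 * z**2 + b2 * np.log(z + 0.001) + b3 * np.sqrt(z)
            + b4 * np.exp(z) + b5 * (D / H))
    return a0 * D**a1 * a2**D * X**expo

D, H = 30.0, 24.0                        # DBH (cm) and total height (m)
for h in [0.5, 6.0, 12.0, 18.0, 23.0]:   # heights up the stem (m)
    print(f"h = {h:4.1f} m  ->  d = {taper_diameter(h, D, H):5.1f} cm")
```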