61

Applied Adaptive Optimal Design and Novel Optimization Algorithms for Practical Use

Strömberg, Eric January 2016 (has links)
The costs of developing new pharmaceuticals have increased dramatically during the past decades. Contributing to these increased expenses are the increasingly extensive and more complex clinical trials required to generate sufficient evidence regarding the safety and efficacy of the drugs. It is therefore of great importance to improve the effectiveness of the clinical phases by increasing the information gained throughout the process, so that the correct decision may be made as early as possible. Optimal Design (OD) methodology using the Fisher Information Matrix (FIM) based on Nonlinear Mixed Effect Models (NLMEM) has proven to be a useful tool for making more informed decisions throughout the clinical investigation. The calculation of the FIM for NLMEM does however lack an analytic solution and is commonly approximated by linearization of the NLMEM. Furthermore, two structural assumptions of the FIM are available: a full FIM, and a block-diagonal FIM which assumes that the fixed effects are independent of the random effects in the NLMEM. Once the FIM has been derived, it can be transformed into a scalar optimality criterion for comparing designs. The optimality criterion may be considered local, if it is based on single point values of the parameters, or global (robust), if it is formed over a prior distribution of the parameters. Regardless of design criterion, FIM approximation or structural assumption, the design will be based on the prior information regarding the model and parameters, and is thus sensitive to misspecification in the design stage. Model-based adaptive optimal design (MBAOD) has, however, been shown to be less sensitive to misspecification in the design stage. The aim of this thesis is to further the understanding and practicality of performing standard OD and MBAOD. This is to be achieved by: (i) investigating how two common FIM approximations and the structural assumptions may affect the optimized design, (ii) reducing the runtimes of complex design optimization by implementing a low-level parallelization of the FIM calculation, (iii) further developing and demonstrating a framework for performing MBAOD, and (iv) investigating the potential advantages of using a global optimality criterion in the already robust MBAOD.
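To make the notion of a scalar optimality criterion concrete, here is a minimal sketch, assuming D-optimality and toy, hand-written FIMs (not the output of any design software), that ranks two candidate designs by ln det(FIM):

```python
# Hedged sketch: comparing two candidate designs under a D-optimality criterion.
# The FIMs below are illustrative placeholders, not results from an NLMEM design tool.
import numpy as np

def d_criterion(fim: np.ndarray) -> float:
    """ln det(FIM); larger is better under D-optimality."""
    sign, logdet = np.linalg.slogdet(fim)
    if sign <= 0:
        return -np.inf  # singular or indefinite FIM: the design is not informative enough
    return logdet

# Toy 3x3 FIMs for two candidate sampling schedules (hypothetical numbers).
fim_design_a = np.array([[50.0, 5.0, 1.0],
                         [ 5.0, 20.0, 2.0],
                         [ 1.0,  2.0, 8.0]])
fim_design_b = np.diag([45.0, 25.0, 6.0])   # block-diagonal style assumption: off-blocks set to zero

best = max(("A", fim_design_a), ("B", fim_design_b), key=lambda kv: d_criterion(kv[1]))
print("Preferred design under D-optimality:", best[0])
```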
62

Improved Methods for Pharmacometric Model-Based Decision-Making in Clinical Drug Development

Dosne, Anne-Gaëlle January 2016 (has links)
Pharmacometric model-based analysis using nonlinear mixed-effects models (NLMEM) has to date mainly been applied to learning activities in drug development. However, such analyses can also serve as the primary analysis in confirmatory studies, which is expected to bring higher power than traditional analysis methods, among other advantages. Because of the high expertise in designing and interpreting confirmatory studies with other types of analyses and because of a number of unresolved uncertainties regarding the magnitude of potential gains and risks, pharmacometric analyses are traditionally not used as primary analysis in confirmatory trials. The aim of this thesis was to address current hurdles hampering the use of pharmacometric model-based analysis in confirmatory settings by developing strategies to increase model compliance to distributional assumptions regarding the residual error, to improve the quantification of parameter uncertainty and to enable model prespecification. A dynamic transform-both-sides approach capable of handling skewed and/or heteroscedastic residuals and a t-distribution approach allowing for symmetric heavy tails were developed and proved relevant tools to increase model compliance to distributional assumptions regarding the residual error. A diagnostic capable of assessing the appropriateness of parameter uncertainty distributions was developed, showing that currently used uncertainty methods such as bootstrap have limitations for NLMEM. A method based on sampling importance resampling (SIR) was thus proposed, which could provide parameter uncertainty in many situations where other methods fail such as with small datasets, highly nonlinear models or meta-analysis. SIR was successfully applied to predict the uncertainty in human plasma concentrations for the antibiotic colistin and its prodrug colistin methanesulfonate based on an interspecies whole-body physiologically based pharmacokinetic model. Lastly, strategies based on model-averaging were proposed to enable full model prespecification and proved to be valid alternatives to standard methodologies for studies assessing the QT prolongation potential of a drug and for phase III trials in rheumatoid arthritis. In conclusion, improved methods for handling residual error, parameter uncertainty and model uncertainty in NLMEM were successfully developed. As confirmatory trials are among the most demanding in terms of patient-participation, cost and time in drug development, allowing (some of) these trials to be analyzed with pharmacometric model-based methods will help improve the safety and efficiency of drug development.
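As a loose illustration of the SIR idea only (not the thesis implementation, which works on NLMEM objective-function values), the sketch below resamples proposal draws in proportion to their importance ratio; the toy Gaussian likelihood, the proposal covariance and all numbers are invented.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, scale=1.0, size=40)            # toy data standing in for an NLMEM dataset

def loglik(theta):
    """Toy Gaussian log-likelihood; a real application would use the NLMEM objective."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return np.sum(-0.5 * ((y - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi))

theta_hat = np.array([y.mean(), np.log(y.std())])       # point estimates
proposal_cov = np.diag([0.10, 0.05])                    # e.g. an asymptotic covariance matrix

m, n = 5000, 1000                                       # proposal and resample sizes
samples = rng.multivariate_normal(theta_hat, proposal_cov, size=m)
log_w = np.array([loglik(t) for t in samples])          # target: likelihood (flat prior assumed)
log_w -= multivariate_normal.logpdf(samples, mean=theta_hat, cov=proposal_cov)
log_w -= log_w.max()                                    # stabilise before exponentiating
w = np.exp(log_w)
idx = rng.choice(m, size=n, replace=False, p=w / w.sum())
sir_samples = samples[idx]                              # empirical parameter uncertainty distribution
print("95% interval for mu:", np.percentile(sir_samples[:, 0], [2.5, 97.5]))
```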
63

A Mixed Effects Multinomial Logistic-Normal Model for Forecasting Baseball Performance

Eric A Gerber (7043036) 13 August 2019 (has links)
Prediction of player performance is a key component in the construction of baseball team rosters. Traditionally, the problem of predicting seasonal plate appearance outcomes has been approached univariately, that is, focusing on each outcome separately rather than jointly modeling the collection of outcomes. More recently, there has been a greater emphasis on joint modeling, thereby accounting for the correlations between outcomes. However, most of these state-of-the-art prediction models are the proprietary property of teams or industrial sports entities, and so little is available in open publications.

This dissertation introduces a joint modeling approach to predict seasonal plate appearance outcome vectors using a mixed-effects multinomial logistic-normal model. This model accounts for positive and negative correlations between outcomes both across and within player seasons. It is also applied to the important, yet unaddressed, problem of predicting performance for players moving between the Japanese and American major leagues.

This work begins by motivating the methodological choices through a comparison of state-of-the-art procedures, followed by a detailed description of the modeling and estimation approach that includes model fit assessments. We then apply the method to longitudinal multinomial count data of baseball player-seasons for players moving between the Japanese and American major leagues and discuss the results. Extensions of this modeling framework to other similar data structures are also discussed.
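A minimal sketch of the logistic-normal ingredient only, with invented outcome categories, fixed effects and random-effect covariance; it is not the dissertation's fitted model:

```python
# Hedged sketch: plate-appearance outcome probabilities as a softmax of a linear
# predictor plus a correlated normal random effect; counts are then multinomial.
import numpy as np

rng = np.random.default_rng(7)
outcomes = ["single", "double", "triple", "home_run", "walk", "out"]  # illustrative categories
K = len(outcomes)

beta = np.array([-1.5, -2.5, -3.5, -2.2, -1.8, 0.0])    # invented fixed effects (last category = reference)
cov_re = 0.3 * np.eye(K - 1) + 0.1                       # correlated player-season random effects

def simulate_season(n_pa: int) -> np.ndarray:
    u = rng.multivariate_normal(np.zeros(K - 1), cov_re)     # player-season random effect
    eta = beta.copy()
    eta[:-1] += u                                            # reference category carries no random effect
    p = np.exp(eta - eta.max())
    p /= p.sum()                                             # softmax -> outcome probabilities
    return rng.multinomial(n_pa, p)                          # multinomial counts for one season

print(dict(zip(outcomes, simulate_season(600))))
```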
64

Identification de systèmes dynamiques linéaires à effets mixtes : applications aux dynamiques de populations cellulaires / Mixed effects dynamical linear system identification : applications to cell population dynamics

Batista, Levy 06 December 2017 (has links)
System identification is a data-driven input-output modeling approach increasingly used in biology and biomedicine. In this application context, methods of experimental design are often used to test the effects of qualitative factors on the response, and each assay is replicated to estimate the reproducibility of outcomes. Inference of the modeling conclusions to the whole population requires accounting, within the modeling procedure, for the explained variability (fixed effects) and the unexplained variability (random effects) between the individual responses. One solution consists in using mixed-effects models, but up to now no similar approach exists in the system identification literature. The objective of this thesis is to fill this gap by using hierarchical model structures that introduce mixed effects within polynomial black-box representations of linear dynamical systems. A new parameter estimation method is developed, suited both to simple structures such as ARX and to more complete structures such as Box-Jenkins. A solution is also proposed for computing the Fisher information matrix. Finally, three application studies are carried out and emphasize the practical relevance of the proposed approach for identifying populations of dynamical systems.
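A rough, hypothetical illustration of the population view of ARX identification (least-squares fits per replicate summarised into population means and spreads); this is not the estimation method developed in the thesis, and all values are invented:

```python
# Hedged sketch: identify a first-order ARX model y[t] = a*y[t-1] + b*u[t-1] + e[t]
# for each replicate by least squares, then summarise the spread of the individual
# estimates as a crude fixed-effect / random-effect split.
import numpy as np

rng = np.random.default_rng(3)

def simulate_arx(a, b, n=200):
    u = rng.normal(size=n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = a * y[t - 1] + b * u[t - 1] + 0.05 * rng.normal()
    return u, y

def fit_arx(u, y):
    X = np.column_stack([y[:-1], u[:-1]])          # regressors y[t-1], u[t-1]
    theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return theta                                    # (a_hat, b_hat)

# Each replicate has its own (a_i, b_i) drawn around population means (mixed-effects view).
estimates = []
for _ in range(20):
    a_i = 0.8 + 0.05 * rng.normal()
    b_i = 0.5 + 0.05 * rng.normal()
    estimates.append(fit_arx(*simulate_arx(a_i, b_i)))

estimates = np.array(estimates)
print("fixed effects (population means):", estimates.mean(axis=0))
print("random-effect std deviations:    ", estimates.std(axis=0, ddof=1))
```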
65

Klasifikace na základě longitudinálních pozorování / Classification based on longitudinal observations

Bandas, Lukáš January 2012 (has links)
The concern of this thesis is to discuss classification of different objects based on longitudinal observations. First, the reader is introduced to the linear mixed-effects model, which is useful for longitudinal data modeling. A description of discriminant analysis methods follows; these methods are usually used for classification based on longitudinal observations. The individual methods are introduced from a theoretical perspective, and the random-effects approach is generalized to continuous time. Subsequently, the methods and features of the linear mixed-effects model are applied to real data. Finally, features of the methods are studied with the help of simulations.
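A minimal sketch of likelihood-based classification under class-specific linear mixed-effects trajectories, with invented fixed effects and covariance parameters; it is not the discriminant procedure studied in the thesis:

```python
# Hedged sketch: each class implies a marginal Gaussian distribution for a longitudinal
# profile (mean trajectory X*beta, covariance X*G*X' + sigma2*I); a new profile is
# assigned to the class with the larger marginal log-likelihood.
import numpy as np
from scipy.stats import multivariate_normal

t = np.arange(0.0, 5.0, 1.0)                 # common observation times
X = np.column_stack([np.ones_like(t), t])    # fixed-effects design: intercept + slope

beta = {"A": np.array([1.0, 0.5]), "B": np.array([1.0, 1.2])}   # class-specific fixed effects (invented)
G = np.diag([0.2, 0.05])                     # random intercept/slope covariance
sigma2 = 0.1                                 # residual variance
V = X @ G @ X.T + sigma2 * np.eye(len(t))    # marginal covariance implied by the LMM

def classify(y):
    scores = {c: multivariate_normal.logpdf(y, mean=X @ b, cov=V) for c, b in beta.items()}
    return max(scores, key=scores.get)

y_new = np.array([1.1, 2.2, 3.4, 4.5, 5.8])  # slope near 1.2, so class B is expected
print("assigned class:", classify(y_new))
```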
66

Statistical models for estimating the intake of nutrients and foods from complex survey data

Pell, David Andrew January 2019 (has links)
Background: The consequences of poor nutrition are well known and of wide concern. Governments and public health agencies utilise food and diet surveillance data to make decisions that lead to improvements in nutrition. These surveys often utilise complex sample designs for efficient data collection. There are several challenges in the statistical analysis of dietary intake data collected using complex survey designs, which have not been fully addressed by current methods. Firstly, the shape of the distribution of intake can be highly skewed due to the presence of outlier observations and a large proportion of zero observations arising from the inability of the food diary to capture consumption within the period of observation. Secondly, dietary data are subject to variability arising from day-to-day individual variation in food consumption and measurement error, which must be accounted for in the estimation procedure for correct inference. Thirdly, the complex sample design needs to be incorporated into the estimation procedure to allow extrapolation of results to the target population. This thesis aims to develop novel statistical methods to address these challenges, applied to the analysis of iron intake data from the UK National Diet and Nutrition Survey Rolling Programme (NDNS RP) and UK national prescription data of iron deficiency medication.
Methods: 1) To assess the nutritional status of particular population groups, a two-part model with a generalised gamma (GG) distribution was developed for intakes that show high frequencies of zero observations. The two-part model accommodated the sources of variation in dietary intake with a random intercept in each component, which could be correlated to allow a correlation between the probability of consuming and the amount consumed. 2) To identify population groups at risk of low nutrient intakes, a linear quantile mixed-effects model was developed to model quantiles of the distribution of intake as a function of explanatory variables. The proposed approach was illustrated by comparing the quantiles of iron intake with Lower Reference Nutrient Intake (LRNI) recommendations using NDNS RP data. This thesis extended the estimation procedures of both the two-part model with GG distribution and the linear quantile mixed-effects model to incorporate the complex sample design in three steps: the likelihood function was multiplied by the sample weights; bootstrap methods were used for variance estimation; and finally, the variance estimation of the model parameters was stratified by the survey strata. 3) To evaluate the allocation of resources to alleviate nutritional deficiencies, a linear quantile mixed-effects model was used to analyse the distribution of expenditure on iron deficiency medication across health boards in the UK. Expenditure is likely to depend on the iron status of the region; therefore, for a fair comparison among health boards, iron status was estimated using the method developed in objective 2) and used in the specification of the median amount spent. Each health board is formed by a set of general practices (GPs); therefore, a random intercept was used to induce correlation between expenditures from GPs in the same health board. Finally, the approaches in objectives 1) and 2) were compared with the traditional approach based on weighted linear regression modelling used in the NDNS RP reports. All analyses were implemented using SAS and R.
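A hedged sketch of the two-part idea in objective 1): a consumption indicator and, conditional on consuming, a generalised gamma (GG) amount, linked by correlated person-level random intercepts. All parameter values are invented, and the survey weighting and stratified variance steps are omitted:

```python
import numpy as np
from scipy.stats import gengamma
from scipy.special import expit

rng = np.random.default_rng(2)
n_people, n_days = 300, 4

# correlated person-level random intercepts for the two parts (invented covariance)
b = rng.multivariate_normal([0.0, 0.0], [[0.5, 0.3], [0.3, 0.2]], size=n_people)

intakes = np.zeros((n_people, n_days))
for i in range(n_people):
    p_consume = expit(0.8 + b[i, 0])                     # part 1: probability of consuming on a day
    consumed = rng.random(n_days) < p_consume
    # part 2: GG-distributed amount on consumption days (shape/scale values are invented)
    amount = gengamma.rvs(a=2.0, c=1.2, scale=np.exp(1.0 + b[i, 1]), size=n_days, random_state=rng)
    intakes[i] = np.where(consumed, amount, 0.0)

print("proportion of zero records:", np.mean(intakes == 0).round(3))
print("mean intake on consumption days:", intakes[intakes > 0].mean().round(2))
```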
Results: The two-part model with GG distribution, fitted to the amount of iron consumed from selected episodically consumed foods, showed that females tended to have greater odds of consuming iron from these foods but consumed smaller amounts. As age group increased, consumption tended to increase relative to the reference group, though the odds of consumption varied. Iron consumption also appeared to depend on National Statistics Socio-Economic Classification (NSSEC) group, with lower social groups consuming less in general. The quantiles of iron intake estimated using the linear quantile mixed-effects model showed that more than 25% of females aged 11-50y are below the LRNI, and that girls aged 11-18y are the group at highest risk of deficiency in the UK. Predictions of spending on iron medication in the UK based on the linear quantile mixed-effects model showed that areas of higher iron intake had lower spending on treating iron deficiency. In a geographical display of expenditure, Northern Ireland featured the lowest amount spent. Comparing the results from the methods proposed here showed that the traditional approach based on weighted regression analysis could result in spurious associations.
Discussion: This thesis developed novel approaches to the analysis of dietary complex survey data to address three important objectives of diet surveillance, namely the estimation of mean food intake by population groups, the identification of groups at high risk of nutrient deficiency, and the allocation of resources to alleviate nutrient deficiencies. The methods provided models of good fit to dietary data, accounted for the sources of data variability and extended the estimation procedures to incorporate the complex sample survey design. The use of a GG distribution for modelling intake is an important improvement over existing methods, as it includes many distributions with different shapes and its domain covers the non-negative values. The two-part model accommodated the sources of variation in dietary intake with a random intercept in each component, which could be correlated to allow a correlation between the probability of consuming and the amount consumed; this also improves on existing approaches that assume a zero correlation. The linear quantile mixed-effects model utilises the asymmetric Laplace distribution, which can accommodate many different distributional shapes, and its likelihood-based estimation is robust to model misspecification. This method is an important improvement over existing methods used in nutritional research, as it explicitly models the quantiles in terms of explanatory variables using a novel quantile regression model with random effects. The application of these models to UK national data confirmed the association of poorer diets with lower social class, identified females aged 11-50y as a group at high risk of iron deficiency, and highlighted Northern Ireland as the region with the lowest expenditure on iron prescriptions.
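As a rough illustration of the quantile-modelling idea in objective 2), the sketch below fits a single lower quantile of a simulated skewed intake variable with the check (pinball) loss; the random effects, survey weights and bootstrap variance steps described above are omitted, and all data and coefficients are invented:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
age = rng.uniform(11, 50, size=500)                        # illustrative covariate
intake = 6 + 0.08 * age + rng.gamma(2.0, 1.5, size=500)    # simulated, right-skewed iron intake (mg/day)

def check_loss(beta, tau, X, y):
    """Pinball loss whose minimiser estimates the tau-th conditional quantile."""
    r = y - X @ beta
    return np.sum(np.maximum(tau * r, (tau - 1) * r))

X = np.column_stack([np.ones_like(age), age])
tau = 0.25                                                 # lower quartile, relevant for LRNI-type comparisons
fit = minimize(check_loss, x0=np.zeros(2), args=(tau, X, intake), method="Nelder-Mead")
print("estimated 25th-percentile intake at age 20:", fit.x @ np.array([1.0, 20.0]))
```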
67

Multilevel Models for Longitudinal Data

Khatiwada, Aastha 01 August 2016 (has links)
Longitudinal data arise when individuals are measured several times during an observation period and thus the data for each individual are not independent. There are several ways of analyzing longitudinal data when different treatments are compared. Multilevel models are used to analyze data that are clustered in some way. In this work, multilevel models are used to analyze longitudinal data from a case study. Results from other, more commonly used methods are compared to multilevel models. Also, a comparison of output between two software packages, SAS and R, is made. Finally, a method consisting of fitting individual models for each individual and then performing an ANOVA-type analysis on the estimated parameters of the individual models is proposed, and its power for different sample sizes and effect sizes is studied by simulation.
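A small simulation sketch of the proposed two-stage approach (per-individual fits followed by ANOVA on the estimated slopes); the effect sizes, noise levels and group sizes below are invented for illustration:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)
times = np.arange(6, dtype=float)

def individual_slope(true_slope):
    y = 1.0 + true_slope * times + rng.normal(scale=0.5, size=times.size)
    slope, _ = np.polyfit(times, y, 1)        # stage 1: per-individual least-squares fit
    return slope

def one_trial(n_per_group, effect):
    g1 = [individual_slope(0.5) for _ in range(n_per_group)]
    g2 = [individual_slope(0.5 + effect) for _ in range(n_per_group)]
    return f_oneway(g1, g2).pvalue < 0.05      # stage 2: ANOVA-type test on the estimated slopes

power = np.mean([one_trial(n_per_group=15, effect=0.2) for _ in range(500)])
print("estimated power:", power)
```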
68

Statistical modeling and design in forestry : The case of single tree models

Berhe, Leakemariam January 2008 (has links)
Forest quantification methods have evolved from a simple graphical approach to complex regression models with stochastic structural components. Currently, mixed-effects model methodology is receiving attention in the forestry literature. However, the review work (Paper I) indicates a tendency to overlook appropriate covariance structures in the NLME modeling process.

A nonlinear mixed effects modeling process is demonstrated in Paper II using Cupressus lustanica tree merchantable volume data, comparing several models with and without covariance structures. For simplicity and clarity of the nonlinear mixed effects modeling, four phases of modeling were introduced. The nonlinear mixed effects model for C. lustanica tree merchantable volume with covariance structures for both the random effects and the within-group errors showed a significant improvement over the model with a simplified covariance matrix. However, this statistical significance has little bearing on the prediction performance of the model.

In Paper III, using several performance indicator statistics, tree taper models were compared in an effort to propose the best model for the forest management and planning purposes of the C. lustanica plantations. Kozak's (1988) tree taper model was found to be the best for estimating the C. lustanica taper profile.

Based on the Kozak (1988) tree taper model, a Ds-optimal experimental design study is carried out in Paper IV. In this study, a Ds-optimal (sub)replication-free design is suggested for the Kozak (1988) tree taper model.
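A generic, hypothetical sketch of the random-effects covariance choice discussed above, using an invented power-law volume model rather than the models fitted in the papers; the point is only the contrast between a correlated and a diagonal random-effects covariance:

```python
import numpy as np

rng = np.random.default_rng(5)
a_pop, b_pop = 0.05, 2.4                       # population (fixed) effects, illustrative only
G_corr = np.array([[0.020, 0.008],             # correlated random effects for (log a, b)
                   [0.008, 0.050]])
G_diag = np.diag(np.diag(G_corr))              # simplified, independent random effects

def simulate_volumes(G, n_trees=200):
    dbh = rng.uniform(10, 60, size=n_trees)                       # diameter at breast height, cm
    u = rng.multivariate_normal(np.zeros(2), G, size=n_trees)     # tree-level deviations
    a_i = a_pop * np.exp(u[:, 0])
    b_i = b_pop + u[:, 1]
    return a_i * dbh ** b_i * np.exp(0.05 * rng.normal(size=n_trees))  # multiplicative residual error

print("mean volume, correlated G:", simulate_volumes(G_corr).mean().round(1))
print("mean volume, diagonal G:  ", simulate_volumes(G_diag).mean().round(1))
```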
69

Mixed effects regression for snow distribution modelling in the central Yukon

Kasurak, Andrew January 2009 (has links)
To date, remote sensing estimates of snow water equivalent (SWE) in mountainous areas are very uncertain. To test passive microwave algorithm estimations of SWE, a validation data set must exist for a broad geographic area. This study aims to build such a data set through field measurements and statistical techniques, as part of the Canadian IPY observations theme, to help develop an improved algorithm. Field measurements were performed at GIS-based, pre-selected sites in the central Yukon. At each location a transect was taken, with sites measuring snow depth (SD), density, and structure. A mixed effects multiple regression was chosen to analyze and then predict these field measurements over the study area. This modelling strategy is best capable of handling the hierarchical structure of the field campaign. A regression model was developed to predict SD from elevation-derived variables and transformed Landsat data. The final model is: SD = horizontal curvature + cos(aspect) + log10(elevation range, 270 m) + tasseled cap greenness and brightness (from Landsat imagery) + interaction of elevation and land cover. This model is used to predict over the study area. A second, simpler regression links SD with density, giving the desired SWE measurements. The Root Mean Squared Error (RMSE) of this SD estimation is 25 cm over a domain of 200 x 200 km. This instantaneous end-of-season, peak-accumulation snow map will enable the validation of satellite remote sensing observations, such as passive microwave (AMSR-E), in a generally inaccessible area.
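A hypothetical sketch of evaluating the fixed-effects part of the reported formula at one location; all coefficient values and inputs are invented, and the random effects and the elevation-land cover interaction are omitted:

```python
import numpy as np

coef = {                                  # hypothetical coefficients, not the thesis estimates
    "intercept": 40.0,
    "horiz_curvature": -2.0,
    "cos_aspect": 5.0,
    "log10_elev_range_270m": 8.0,
    "tc_greenness": 0.3,
    "tc_brightness": -0.2,
}

def predict_snow_depth(curvature, aspect_deg, elev_range_270m, greenness, brightness):
    """Snow depth (cm) from the fixed-effects terms only; random effects and the
    elevation x land-cover interaction are left out of this sketch."""
    return (coef["intercept"]
            + coef["horiz_curvature"] * curvature
            + coef["cos_aspect"] * np.cos(np.radians(aspect_deg))
            + coef["log10_elev_range_270m"] * np.log10(elev_range_270m)
            + coef["tc_greenness"] * greenness
            + coef["tc_brightness"] * brightness)

print("predicted SD (cm):", predict_snow_depth(0.1, 180.0, 120.0, 30.0, 60.0))
```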
