11

Carry-over and interaction effects of different hand-milking techniques and milkers on milk

HE, Ran January 1986 (has links)
The main idea of this thesis is to study the importance of carry-over effects and interaction effects in statistical models. To investigate this, a hand-milking experiment in Burkina Faso was studied. In many countries without access to electricity, such as Burkina Faso, milk yield and milk composition still depend heavily on hand-milking techniques and on the individual milkers. Moreover, time effects also play an important role in the stockbreeding system. Therefore, by incorporating all effects, carry-over effects and interaction effects into a linear mixed-effects model, it is concluded that the carry-over effects of milker and hand-milking technique cannot be neglected, and that the interaction effects among hand-milking techniques, milkers, days and periods can be substantial.
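For readers who want to see the shape of such an analysis, the sketch below fits a linear mixed-effects model with treatment, carry-over, and interaction terms using Python's statsmodels. It is a minimal illustration, not the thesis's actual model; the column names (milk_yield, technique, prev_technique, milker, period, cow) are hypothetical placeholders for crossover data of this kind.

```python
# A minimal sketch (not the thesis's actual analysis) of a linear mixed-effects
# model with treatment, carry-over, and interaction terms.
# Column names are hypothetical placeholders for long-format crossover data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hand_milking.csv")  # assumed: one row per milking event

# Fixed effects: current technique, carry-over (previous period's technique),
# milker, period, and a technique-by-milker interaction; random intercept per cow.
model = smf.mixedlm(
    "milk_yield ~ C(technique) + C(prev_technique) + C(milker) "
    "+ C(period) + C(technique):C(milker)",
    data=df,
    groups=df["cow"],
)
result = model.fit()
print(result.summary())  # Wald tests indicate whether carry-over terms can be dropped
```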
12

Study Design and Dose Regimen Evaluation of Antibiotics based on Pharmacokinetic and Pharmacodynamic Modelling

Kristoffersson, Anders January 2015 (has links)
Current excessive use and abuse of antibiotics has resulted in increasing bacterial resistance to common treatment options, which threatens to deprive us of a pillar of modern medicine. In this work, methods to optimize the use of existing antibiotics and to support the development of new antibiotics were developed and applied. Semi-mechanistic pharmacokinetic-pharmacodynamic (PKPD) models were developed to describe the time course of the dynamic effect and interaction of combinations of antibiotics. The models were applied to illustrate that colistin combined with a high dose of meropenem may overcome meropenem-resistant P. aeruginosa infections. The results from an in vivo dose-finding study of meropenem were successfully predicted by the meropenem PKPD model in combination with a murine PK model, which supports model-based dosage selection. However, the traditional PK/PD index-based dose selection was predicted to have poor extrapolation properties from pre-clinical to clinical settings, and across patient populations. The precision of the model parameters, and hence of the model predictions, depends on the experimental design. A limited study design is dictated by cost and, for in vivo studies, ethical reasons. In this work, optimal design (OD) was demonstrated to reduce the experimental effort in time-kill curve experiments and was utilized to suggest the experimental design for identification and estimation of an interaction between antibiotics. OD methods to handle inter-occasion variability (IOV) in the optimization of individual PK parameter estimates were proposed. The strategy was applied in the design of a sparse sampling schedule that aims to estimate individual exposures of colistin in a multi-centre clinical study. Plasma concentration samples from the first 100 patients have been analysed and indicate that the performance of the design is close to that predicted. The methods described in this thesis hold promise to facilitate the development of new antibiotics and to improve the use of existing ones.
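As an illustration of the semi-mechanistic PKPD idea, the sketch below couples a one-compartment concentration profile to an Emax-type bacterial kill model. All parameter values and the model structure are assumptions for demonstration, not the thesis's fitted models.

```python
# A minimal semi-mechanistic PKPD time-kill sketch (assumed parameters):
# bacterial net growth is reduced by a drug effect driven by a
# one-compartment PK concentration profile.
import numpy as np
from scipy.integrate import solve_ivp

k_growth, k_max, ec50 = 1.0, 3.0, 1.5   # 1/h, 1/h, mg/L (hypothetical)
ke, V, dose = 0.5, 20.0, 500.0          # elimination rate (1/h), volume (L), IV bolus (mg)

def conc(t):
    return (dose / V) * np.exp(-ke * t)  # one-compartment IV bolus PK

def dNdt(t, N):
    c = conc(t)
    kill = k_max * c / (ec50 + c)        # Emax-type kill rate
    return (k_growth - kill) * N

sol = solve_ivp(dNdt, (0.0, 24.0), [1e6], dense_output=True)
print(sol.y[0, -1])                      # bacterial count after 24 h
```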
13

MRI Signal Intensity Analysis of Novel Protein-based MRI Contrast Agents

Qian, Yan 12 August 2014 (has links)
Contrast agents are of great importance in clinical applications of Magnetic Resonance Imaging (MRI) to improve the contrast of internal body structures and to obtain tissue-specific images. However, currently approved contrast agents still have limitations, including low relaxivity, low specificity and uncontrolled blood circulation time, which has motivated researchers to develop novel contrast agents with higher relaxivity, improved targeting abilities and optimal retention time. This thesis uses animal experimental data from Dr. Jenny J. Yang's lab at the Department of Chemistry, Georgia State University, to study the effects of a class of newly designed protein-based MRI contrast agents (ProCAs). Models for the longitudinal data on MRI intensity are constructed to evaluate the efficiency of different MRI contrast agents. Statistically significant results suggest that ProCA1B14 has great potential as a tumor-specific contrast agent and that ProCA32 could be a promising MRI contrast agent for liver imaging in clinical applications.
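A minimal sketch of one way such longitudinal intensity data could be modelled is shown below, using a mixed-effects model with a time-by-agent interaction; the column names and model form are assumptions, not the models actually used in the thesis.

```python
# A minimal sketch (hypothetical column names) of a longitudinal analysis of
# MRI signal intensity: mixed-effects model with a time-by-agent interaction
# and a random intercept per animal.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mri_intensity.csv")  # assumed columns: intensity, time, agent, animal

model = smf.mixedlm("intensity ~ time * C(agent)", data=df, groups=df["animal"])
result = model.fit()
print(result.summary())  # interaction terms compare enhancement trajectories across agents
```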
14

Model Selection via Minimum Description Length

Li, Li 10 January 2012 (has links)
The minimum description length (MDL) principle originated in the data compression literature and has been considered for deriving statistical model selection procedures. Most existing methods utilizing the MDL principle focus on models for independent data, particularly in the context of linear regression. The data considered in this thesis are in the form of repeated measurements, and the exploration of the MDL principle begins with classical linear mixed-effects models. We distinguish two kinds of research focus: one concerns the population parameters and the other concerns the cluster/subject parameters. When the research interest is at the population level, we propose a class of MDL procedures which incorporate the dependence structure within individuals or clusters with data-adaptive penalties and enjoy the advantages of Bayesian information criteria. When the number of covariates is large, the penalty term is adjusted by a data-adaptive structure to diminish the under-selection issue in BIC and to mimic the behaviour of AIC. Theoretical justifications are provided from both data compression and statistical perspectives. Extensions to categorical responses modelled by generalized estimating equations and to functional data modelled by functional principal components are illustrated. When the interest is at the cluster level, we use the group LASSO to set up a class of candidate models. We then derive an MDL criterion for this LASSO technique in a group manner to select the final model via the tuning parameters. Extensive numerical experiments are conducted to demonstrate the usefulness of the proposed MDL procedures at both the population level and the cluster level.
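For orientation, the standard two-part (crude) MDL code length for a model with k free parameters fitted to n observations is shown below; it coincides with half of BIC, which is the textbook baseline that the thesis's data-adaptive penalties modify.

```latex
% Two-part MDL code length for a model M with k free parameters and n observations.
% The (k/2)\log n model-description term is the piece the thesis replaces with
% data-adaptive penalties; this is only the standard baseline form.
\[
  \mathrm{MDL}(M) \;=\;
  \underbrace{-\log \hat{L}(M)}_{\text{data given model}}
  \;+\;
  \underbrace{\tfrac{k}{2}\log n}_{\text{model description}}
  \;=\; \tfrac{1}{2}\,\mathrm{BIC}(M),
\]
where $\hat{L}(M)$ is the maximized likelihood; the candidate model minimizing this code length is selected.
```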
16

A state-space approach in analyzing longitudinal neuropsychological outcomes

Chua, Alicia S. 06 October 2021 (has links)
Longitudinal assessments are crucial in evaluating the disease state and trajectory of patients with neurodegenerative diseases. Neuropsychological outcomes measured over time often have a non-linear trajectory with autocorrelated residuals and skewed distributions. Because of these issues, statistical analysis and interpretation of longitudinal cognitive outcomes can be a difficult and controversial task, and convenient transformations (e.g. logarithmic) are often unable to resolve the assumption violations of common statistical modelling techniques. We propose the Adjusted Local Linear Trend (ALLT) model, an extended state-space model, in lieu of the commonly used linear mixed-effects model (LMEM) for modelling longitudinal neuropsychological outcomes. Our model can utilize information from the stochasticity of the data while accounting for subject-specific trajectories through the inclusion of covariates and unequally spaced time intervals. The first step of model fitting is a likelihood maximization step that estimates the unknown variances in the model before passing these values into the Kalman filter and Kalman smoother recursive algorithms. Results from simulation studies showed that the ALLT model attains lower bias, lower standard errors and higher power than the LMEM, particularly in short longitudinal studies with equally spaced time intervals. The ALLT model also outperforms the LMEM when data are missing completely at random (MCAR), missing at random (MAR) and, in certain cases, even missing not at random (MNAR). In terms of model selection, likelihood-based inference is applicable to the ALLT model. Although a chi-square distribution with k degrees of freedom, where k is the number of parameters lost during estimation, is not the asymptotic distribution in the case of the ALLT, we were able to derive an approximate asymptotic distribution of the likelihood ratio test statistic using the power transformation method, exploiting a Gaussian distribution to facilitate model selection for the ALLT. In light of these findings, we believe that our proposed model will shed light on longitudinal data analysis not only for neuropsychological data but also, on a broader scale, for statistical analysis of longitudinal data.
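To make the state-space machinery concrete, the sketch below implements a plain Kalman filter for the local linear trend model that the ALLT extends. The variances are taken as known here, whereas in the ALLT they are estimated by likelihood maximization; the example data are hypothetical.

```python
# A plain Kalman filter for a local linear trend model (a simplified stand-in
# for the ALLT, not the ALLT itself). Variances are assumed known here.
import numpy as np

def kalman_filter_llt(y, sigma2_eps, sigma2_level, sigma2_slope):
    T = np.array([[1.0, 1.0], [0.0, 1.0]])     # state transition: level + slope
    Z = np.array([[1.0, 0.0]])                 # observation picks out the level
    Q = np.diag([sigma2_level, sigma2_slope])  # state disturbance covariance
    a = np.zeros(2)                            # vague initial state
    P = np.eye(2) * 1e6
    filtered = []
    for obs in y:
        # prediction step
        a = T @ a
        P = T @ P @ T.T + Q
        if not np.isnan(obs):                  # skip the update for missing visits
            v = obs - Z @ a                    # innovation
            F = Z @ P @ Z.T + sigma2_eps       # innovation variance
            K = P @ Z.T / F                    # Kalman gain
            a = a + (K * v).ravel()
            P = P - K @ Z @ P
        filtered.append(a.copy())
    return np.array(filtered)

y = np.array([25.0, 24.0, np.nan, 22.5, 22.0, 21.0])  # hypothetical cognitive scores
states = kalman_filter_llt(y, sigma2_eps=0.5, sigma2_level=0.1, sigma2_slope=0.01)
print(states[-1])  # filtered level and slope at the last visit
```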
17

Population Pharmacokinetics of Linezolid for Optimization of the Treatment for Multidrug Resistant Tuberculosis

Hansen, Viktor January 2022 (has links)
Tuberculosis is one of the leading causes of death globally and was, before the COVID-19 pandemic, the leading cause of death from a single infectious agent. Developing active tuberculosis is life-threatening, and the rise of drug-resistant tuberculosis is therefore alarming, as it risks rendering current treatments ineffective. Linezolid is a promising drug for the treatment of drug-resistant pulmonary tuberculosis, but its effect in pulmonary tuberculosis patients is still not well enough understood, and the World Health Organization has requested that this knowledge gap be filled. In this project we support the closing of this knowledge gap by describing the pharmacokinetics of linezolid for the treatment of pulmonary tuberculosis, using data collected from a phase two clinical trial in a South African population. A population PK (pop-PK) model was developed, and the PK of linezolid in pulmonary tuberculosis patients from South Africa was best described by a one-compartment model with a first-order absorption process preceded by a series of transit compartments and saturable elimination. However, the model diagnostics show that there is still room for improvement, and future work is needed to further optimize the model.
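As a rough illustration of the reported model structure, the sketch below simulates a one-compartment model with first-order absorption preceded by three transit compartments and Michaelis-Menten (saturable) elimination. The number of transit compartments and all parameter values are assumptions for demonstration, not the thesis's estimates.

```python
# One-compartment model with transit-compartment absorption and saturable
# (Michaelis-Menten) elimination; all values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

ktr, ka = 2.0, 1.5              # transit and absorption rate constants (1/h)
vmax, km, V = 100.0, 5.0, 40.0  # mg/h, mg/L, L (hypothetical)
dose = 600.0                    # mg, single oral dose

def rhs(t, x):
    a1, a2, a3, depot, central = x
    conc = central / V
    return [
        -ktr * a1,                               # first transit compartment
        ktr * (a1 - a2),                         # second transit compartment
        ktr * (a2 - a3),                         # third transit compartment
        ktr * a3 - ka * depot,                   # absorption depot
        ka * depot - vmax * conc / (km + conc),  # central compartment, saturable elimination
    ]

x0 = [dose, 0.0, 0.0, 0.0, 0.0]
sol = solve_ivp(rhs, (0.0, 24.0), x0, t_eval=np.linspace(0.0, 24.0, 97))
conc = sol.y[4] / V              # plasma concentration (mg/L) over one dosing interval
print(conc.max())
```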
18

Pavement Service Life Estimation And Condition Prediction

Yu, Jianxiong January 2005 (has links)
No description available.
19

Testing methods for calibrating Forest Vegetation Simulator (FVS) diameter growth predictions

Cankaya, Ergin Cagatay 20 September 2018 (has links)
The Forest Vegetation Simulator (FVS) is a growth and yield modeling system widely used for predicting stand and tree-level attributes for management and planning applications in North American forests. The accuracy of FVS predictions for a range of tree and stand-level attributes depends a great deal on the performance of the diameter increment model and its predictions of change in diameter at breast height (DBH) over time. To address the challenge of predicting growth in highly variable and geographically expansive forest systems, FVS was designed to include an internal calibration algorithm that makes use of growth observations, when available, from permanent inventory plots. The basic idea is that observed growth rates on a collection of remeasured trees are used to adjust or "calibrate" FVS diameter growth predictions. Therefore, DBH modeling was the focus of this investigation. Five methods were proposed for local calibration of individual tree DBH growth predictions and compared to two sets of results generated without calibration. Data from the US Forest Service's Forest Inventory and Analysis (FIA) program were used to test the methods for eleven widely distributed forest tree species in Virginia. Two calibration approaches were based on median prediction errors from locally observed DBH increments spanning a five-year average time interval. Two were based on simple linear regression models fitted to the locally observed prediction errors, and one method employed a mixed-effects regression model with a random intercept term estimated from locally observed DBH increments. Data withholding, specifically leave-one-out cross-validation, was used to compare results of the methods tested. Results showed that all of the calibration approaches tested generally led to improved accuracy of DBH growth predictions, with the median-based and regression-based methods performing better than the random-effects-based approach. Equivalence testing showed that the median- or regression-based local calibration methods met error tolerances within ± 12% of observed DBH increments for all species, with the random-effects approach meeting a larger tolerance of ± 17%. These results showed improvement over uncalibrated models, which failed to meet tolerances as high as ± 30% for some species in a newly fitted DBH growth model for Virginia, and as high as ± 170% for an existing model fitted to data from a much larger region of the Southeastern United States. Local calibration of regional DBH increment models provides an effective means of substantially reducing prediction errors when a relatively small set of observations is available from local sources such as permanent forest inventory plots or the FIA database. / MS / The Forest Vegetation Simulator (FVS) is a growth and yield model widely used for predicting stand dynamics, management and decision support in North American forests. Diameter increment is a major component in modeling tree growth. The system of integrated analytical tools in FVS is primarily based on the performance of the diameter increment model and the subsequent use of predicted change in diameter at breast height (DBH) over time in forecasting tree attributes. To address the challenge of predicting growth in highly variable and geographically expansive forest systems, FVS was designed to include an internal calibration algorithm that makes use of growth observations, when available, from permanent inventory plots.
The basic idea is that observed growth rates on a small set of remeasured trees are used to adjust or "calibrate" FVS growth predictions. The FVS internal calibration was the subject investigated here. Five alternative methods, attributed to a specific site or stand of interest and including approaches based on median prediction errors, were proposed and compared to two sets of results generated without calibration. Results illustrated that median-based and regression-based methods performed better than the random-effects-based approach, using independently observed growth data from Forest Service FIA re-measurements in Virginia. Local calibration of regional DBH increment models provides an effective means of substantially reducing prediction errors. The results of this study should also provide information to evaluate the efficiency of FVS calibration alternatives and a possible method for future implementation.
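A toy version of the median-based calibration idea is sketched below: the median error between observed and FVS-predicted DBH increments on local remeasured trees is applied as an additive adjustment and evaluated by leave-one-out cross-validation. The data are invented and FVS's internal calibration algorithm differs in detail; this only conveys the general approach.

```python
# Median-based local calibration sketch with leave-one-out evaluation.
# Data are invented; this is an illustration of the idea, not FVS's algorithm.
import numpy as np

observed = np.array([0.9, 1.2, 0.7, 1.0, 1.4, 0.8])   # local 5-year DBH increments (assumed)
predicted = np.array([1.3, 1.5, 1.1, 1.2, 1.6, 1.0])  # uncalibrated model predictions (assumed)

loo_errors = []
for i in range(len(observed)):
    mask = np.arange(len(observed)) != i
    adjustment = np.median(observed[mask] - predicted[mask])  # calibration from the other trees
    calibrated = predicted[i] + adjustment
    loo_errors.append(calibrated - observed[i])

print(np.mean(np.abs(loo_errors)))             # leave-one-out mean absolute error, calibrated
print(np.mean(np.abs(predicted - observed)))   # uncalibrated error for comparison
```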
20

Semi-mechanistic models of glucose homeostasis and disease progression in type 2 diabetes

Choy, Steve January 2016 (has links)
Type 2 diabetes mellitus (T2DM) is a metabolic disorder characterized by consistently high blood glucose, resulting from a combination of insulin resistance and a reduced capacity of β-cells to secrete insulin. While the exact causes of T2DM are still unknown, obesity is known to be a major risk factor as well as a co-morbidity of T2DM. As the global prevalence of obesity continues to increase, the association between obesity and T2DM warrants further study. Traditionally, mathematical models to study T2DM were mostly empirical and thus failed to capture the dynamic relationship between glucose and insulin. More recently, mechanism-based population models describing glucose-insulin homeostasis on a physiological basis were proposed and offered a substantial improvement over existing empirical models in terms of predictive ability. The primary objectives of this thesis are (i) to examine the predictive usefulness of semi-mechanistic models in T2DM by applying an existing population model to clinical data, and (ii) to explore the relationship between obesity and T2DM and describe it mathematically in a novel semi-mechanistic model that explains changes to glucose-insulin homeostasis and the disease progression of T2DM. Through the use of non-linear mixed-effects modelling, the primary mechanism of action of an antidiabetic drug was correctly identified using the integrated glucose-insulin model, reinforcing the predictive potential of semi-mechanistic models in T2DM. A novel semi-mechanistic model was developed that incorporates a relationship between weight change and insulin sensitivity to describe glucose, insulin and glycated hemoglobin simultaneously in a clinical setting. This model was also successfully adapted to a pre-clinical setting and was able to describe the pathogenesis of T2DM in rats transitioning from healthy to severely diabetic. This work has shown that a previously unutilized biomarker is significant in affecting glucose homeostasis and disease progression in T2DM, and that pharmacometric models accounting for the effects of obesity in T2DM offer a more complete physiological understanding of the disease.
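To convey the flavour of a glucose-insulin feedback model, the sketch below simulates a deliberately minimal two-equation system in which insulin secretion responds to glucose and insulin-dependent disposal lowers glucose; lowering the insulin-sensitivity parameter raises steady-state glucose, mimicking insulin resistance. All equations and values are illustrative assumptions, far simpler than the integrated glucose-insulin model referenced in the thesis.

```python
# Minimal coupled glucose-insulin feedback sketch (assumed values; not the
# integrated glucose-insulin model). Reduced insulin sensitivity raises
# steady-state glucose, illustrating insulin resistance.
import numpy as np
from scipy.integrate import solve_ivp

def glucose_insulin(t, y, s_i):
    G, I = y                          # glucose (mmol/L), insulin (mU/L)
    production = 2.0                  # hepatic glucose input (assumed constant)
    disposal = (0.01 + s_i * I) * G   # insulin-independent + insulin-dependent uptake
    secretion = 0.5 * G               # glucose-stimulated insulin secretion
    clearance = 0.3 * I               # insulin clearance
    return [production - disposal, secretion - clearance]

for s_i in (0.02, 0.005):             # normal vs reduced insulin sensitivity
    sol = solve_ivp(glucose_insulin, (0, 200), [5.0, 10.0], args=(s_i,))
    print(f"insulin sensitivity {s_i}: steady-state glucose ~ {sol.y[0, -1]:.1f} mmol/L")
```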
