31

Covariate Model Building in Nonlinear Mixed Effects Models

Ribbing, Jakob January 2007 (has links)
Population pharmacokinetic-pharmacodynamic (PK-PD) models can be fitted using nonlinear mixed effects modelling (NONMEM). This is an efficient way of learning about drugs and diseases from data collected in clinical trials. Identifying covariates that explain differences between patients is important for discovering patient subpopulations at risk of sub-therapeutic or toxic effects and for treatment individualization. Stepwise covariate modelling (SCM) is commonly used to this end. The aim of the current thesis work was to evaluate SCM and to develop alternative approaches. A further aim was to develop a mechanistic PK-PD model describing fasting plasma glucose, fasting insulin, insulin sensitivity and beta-cell mass. The lasso is a penalized estimation method that performs covariate selection simultaneously with shrinkage estimation. The lasso was implemented within NONMEM as an alternative to SCM and is discussed in comparison with that method. Further, various ways of incorporating information and propagating knowledge from previous studies into an analysis were investigated. In order to compare the different approaches, investigations were made under varying, replicated conditions; in the course of these investigations, more than one million NONMEM analyses were performed on simulated data. Due to selection bias, SCM performed poorly when analysing small datasets or rare subgroups. In these situations the lasso method in NONMEM performed better, was faster, and additionally validated the covariate model. Alternatively, the performance of SCM can be improved by propagating knowledge or incorporating information from previously analysed studies and by population optimal design. A model was also developed on a physiological/mechanistic basis to fit data from three phase II/III studies on the investigational drug tesaglitazar.
This model described fasting glucose and insulin levels well, despite heterogeneous patient groups ranging from non-diabetic insulin-resistant subjects to patients with advanced diabetes. The model's predictions of beta-cell mass and insulin sensitivity agreed well with values reported in the literature.
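The lasso's covariate selection can be illustrated outside NONMEM. The sketch below is a plain-Python/NumPy illustration, not the thesis's NONMEM implementation: it simulates five candidate covariates, of which only two truly affect a hypothetical log-scale PK parameter, and shows how the L1 penalty (applied via cyclic coordinate descent with soft-thresholding) zeroes out the uninformative covariates while estimating the rest.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent with soft-thresholding.
    lam is the (unscaled) L1 penalty weight."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding covariate j, then soft-threshold.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
    return beta

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))               # five candidate covariates
y = 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200)
beta = lasso_cd(X, y, lam=20.0)
print(beta)
```

With a sufficiently large penalty, the three null covariates are estimated as exactly zero, so selection and shrinkage happen in a single fit rather than in a stepwise search.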
32

Statistical inference for rankings in the presence of panel segmentation

Xie, Lin January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Paul Nelson / Panels of judges are often used to estimate consumer preferences for m items such as food products. Judges can either evaluate each item on several ordinal scales and indirectly produce an overall ranking, or directly report a ranking of the items. A complete ranking orders all the items from best to worst; a partial ranking, as we use the term, only reports the best q out of m items. Direct ranking, the subject of this report, does not require the widespread but questionable practice of treating ordinal measurements as though they were on ratio or interval scales. Here, we develop and study segmentation models in which the panel may consist of relatively homogeneous subgroups, the segments. Judges within a subgroup tend to agree among themselves and to differ from judges in the other subgroups. We develop and study the statistical analysis of mixture models where it is not known to which segment a judge belongs or, in some cases, how many segments there are. Viewing segment membership indicator variables as latent data, an E-M algorithm was used to find the maximum likelihood estimators of the parameters specifying a mixture of Mallows' (1957) distance models for complete and partial rankings. A simulation study was conducted to evaluate the behavior of the E-M algorithm with respect to the fraction of data sets for which the algorithm fails to converge, the sensitivity of the convergence rate to the choice of initial values, and the performance of the maximum likelihood estimators in terms of bias and mean square error, where applicable. A Bayesian approach was developed and credible set estimators were constructed; simulation was used to evaluate the performance of these credible sets as confidence sets. A method for predicting segment membership from covariates measured on a judge was derived using a logistic model applied to a mixture of Mallows distance models.
The effects of covariates on segment membership were assessed. Likelihood sets for parameters specifying mixtures of Mallows distance models were constructed and explored.
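The Mallows distance model at the core of the mixtures above can be sketched directly. The snippet below is an illustrative reconstruction, assuming Kendall's tau as the ranking distance and brute-force normalization over all m! permutations (feasible only for small m); the thesis's E-M machinery and partial-ranking extensions are omitted.

```python
import itertools
import math

def kendall_distance(r1, r2):
    """Number of item pairs that the two rankings order differently."""
    pos = {item: i for i, item in enumerate(r2)}
    return sum(1 for a, b in itertools.combinations(r1, 2) if pos[a] > pos[b])

def mallows_pmf(ranking, center, theta):
    """P(ranking) proportional to exp(-theta * d(ranking, center)),
    normalized here by enumerating all permutations (small m only)."""
    z = sum(math.exp(-theta * kendall_distance(p, center))
            for p in itertools.permutations(center))
    return math.exp(-theta * kendall_distance(ranking, center)) / z

center = (1, 2, 3, 4)
p_center = mallows_pmf(center, center, theta=1.0)
p_far = mallows_pmf((4, 3, 2, 1), center, theta=1.0)
print(p_center, p_far)
```

At theta = 0 the model reduces to the uniform distribution over rankings; as theta grows, probability concentrates on the central ranking, which is what makes the model useful for describing a segment of like-minded judges.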
33

Comparing methods for modeling longitudinal and survival data, with consideration of mediation analysis

Ngwa, Julius S. 14 March 2016 (has links)
Joint modeling of longitudinal and survival data has received much attention and is becoming increasingly useful. In clinical studies, longitudinal biomarkers are used to monitor disease progression and to predict survival. These longitudinal measures are often missing at failure times and may be prone to measurement error. In previous studies the two types of data have frequently been analyzed separately, with a mixed effects model for the longitudinal data and a survival model for the event outcomes. The argument in favor of a joint model is the more efficient use of the data: the survival information goes into modeling the longitudinal process and vice versa. In this thesis, we present joint maximum likelihood methods, a two stage approach, and time dependent covariate methods that link longitudinal data to survival data. First, we use simulation studies to explore and assess the performance of these methods in terms of bias, accuracy, and coverage probability. Then, we focus on four time dependent methods, considering models that are unadjusted and adjusted for time. Finally, we consider mediation analysis for longitudinal and survival data. Mediation analysis is introduced and applied in a research framework based on genetic variants, longitudinal measures, and disease risk. We implement accelerated failure time regression using the joint maximum likelihood approach (AFT-joint) and an accelerated failure time regression model using the observed longitudinal measures as time dependent covariates (AFT-observed) to assess the mediated effect. We found that the two stage approach (TSA) performed best at estimating the link parameter. The joint maximum likelihood methods that used the predicted values of the longitudinal measures, similar to the TSA, provided larger estimates. The time dependent covariate methods that used the observed longitudinal measures in the survival analysis underestimated the true estimates.
The mediation results showed that the AFT-joint and the AFT-observed underestimated the mediated effect. Comparison of the methods in Framingham Heart Study data revealed similar patterns. We recommend adjusting for time when estimating the association parameter in time dependent Cox and logistic models. Additional work is needed for estimating the mediated effect with longitudinal and survival data.
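One ingredient of the time dependent covariate methods is purely mechanical: expanding each subject's longitudinal record into counting-process (start, stop) intervals so a survival model can use the most recent biomarker value at each moment of follow-up. A minimal sketch, with hypothetical function and variable names (not the thesis's actual code):

```python
def to_start_stop(meas_times, values, event_time, status):
    """Expand one subject's longitudinal record into counting-process rows
    (start, stop, biomarker, status): each observed value is carried forward
    until the next measurement, and the event indicator is attached only to
    the interval ending at the event/censoring time."""
    times = [t for t in meas_times if t < event_time]
    rows = []
    for i, start in enumerate(times):
        last = i + 1 == len(times)
        stop = event_time if last else times[i + 1]
        rows.append((start, stop, values[i], status if last else 0))
    return rows

# A subject measured at months 0, 1 and 2 who fails at month 2.5:
print(to_start_stop([0, 1, 2], [5.0, 5.5, 6.1], 2.5, 1))
```

Using the observed values this way (rather than model-predicted values) is exactly the AFT-observed-style strategy that, per the results above, can underestimate the true association when the biomarker is measured with error.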
34

Palliative strategies for non-randomisation in mental health: propensity scores and related adjustment techniques. Methodology applied to confounder control in schizophrenia

Sarlon, Emmanuelle 09 January 2014 (has links)
Objective: To study several methods for handling confounders, measured or unmeasured, in observational data on psychotic or schizophrenic patients. Methods: Two methods were used: the propensity score (suited to measured confounders) and sensitivity analyses (for unmeasured information). The field of application is clinical epidemiology in psychiatry, and more specifically schizophrenia. The work proceeds in three parts. The first part raises the question of residual bias, drawing on the results of a cross-sectional study of exposure to a contextual factor (prison) in the presence of psychotic disorders (in the DSM-IV sense), analysed with a conventional adjustment methodology. The second part compares a classical adjustment technique with adjustment by propensity score, using results from a cohort of French schizophrenic patients in which the occurrence of an event (relapse) was studied according to treatment exposure (polypharmacy or not), with the propensity score used as the adjustment tool. The third part is a synthesis on modelling uncertainty and multiple unmeasured confounding biases; the theories and methods are described and then applied to the results of the two preceding studies. Results: The cross-sectional study, whose results had not previously been shown, poses the problem of adjustment quality for exposure to a factor in an observational setting. The cohort study allowed a classical adjustment technique to be compared with adjustment by propensity score (PS). Several adjustment methods were studied (standard multivariable adjustment, adjustment on the PS, matching on the PS), and we show that the results obtained differ according to the adjustment method used; stratification on the PS appeared to perform best at predicting relapse according to treatment exposure. Methods for handling unmeasured confounders were then studied: a first step reviews the contribution of probabilistic theories and related techniques, after which a combination of these theories is proposed, with a practical application to the two studies presented above. Conclusion: In the setting of observational studies, the objective of this work was to study, describe and apply modelling techniques to better account for baseline differences, a potential source of confounding. This work sits at the boundary between methodology, biostatistics and epidemiology. Drawing on difficulties encountered in practice in psychiatric epidemiology (mental disorders with multifactorial, interdependent aetiologies), we propose a pragmatic approach to the optimal handling of potential confounders, whether measured or unmeasured.
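The propensity-score stratification compared above can be sketched on simulated data. The snippet below is an illustrative reconstruction, not the thesis's analysis: a single confounder drives both treatment and outcome, the propensity score is fitted by Newton-Raphson logistic regression, and stratification on score quintiles removes most of the confounding that biases the naive contrast.

```python
import numpy as np

def fit_logit(x, t, iters=25):
    """Logistic regression of treatment on one covariate (Newton-Raphson);
    returns the fitted propensity scores."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.zeros(2)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        W = p * (1 - p)
        b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (t - p))
    return 1 / (1 + np.exp(-X @ b))

rng = np.random.default_rng(7)
n = 5000
x = rng.normal(size=n)                                  # confounder
t = (rng.random(n) < 1 / (1 + np.exp(-1.5 * x))).astype(float)
y = 2.0 * t + 3.0 * x + rng.normal(size=n)              # true effect = 2

ps = fit_logit(x, t)
naive = y[t == 1].mean() - y[t == 0].mean()

# Stratify on propensity-score quintiles, average within-stratum contrasts.
edges = np.quantile(ps, [0.2, 0.4, 0.6, 0.8])
stratum = np.digitize(ps, edges)
effects = [y[(stratum == s) & (t == 1)].mean() - y[(stratum == s) & (t == 0)].mean()
           for s in range(5)]
stratified = float(np.mean(effects))
print(round(naive, 2), round(stratified, 2))
```

The naive contrast is badly inflated by the confounder, while the quintile-stratified estimate lands much closer to the true effect of 2, illustrating why the choice of adjustment method changes the results.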
35

Analysis of sources of uncertainty in soil spatial modelling

SAMUEL-ROSA, Alessandro 24 February 2016 (has links)
Modern soil spatial modelling is based on statistical models to explore the empirical relationship between environmental conditions and soil properties. These models are a simplification of reality, and their outcome (the soil map) will always be in error. What a soil map conveys is what we expect the soil to be, acknowledging that we are uncertain about it. The objective of this thesis is to evaluate important sources of uncertainty in soil spatial modelling, with emphasis on soil and covariate data. Case studies were developed using data from a catchment located in southern Brazil. The soil spatial distribution in the study area is highly variable, being determined by geology and geomorphology (coarse spatial scales) and by agricultural practices (fine spatial scales). Four topsoil properties were explored: clay content, organic carbon content, effective cation exchange capacity, and bulk density. Five covariates, each with two levels of spatial detail, were used: area-class soil maps, digital elevation models, geologic maps, land use maps, and satellite images. These soil and covariate data constitute the Santa Maria dataset. Two packages for R were created in support of the case studies: the first (pedometrics) contains various functions for spatial exploratory data analysis and model calibration; the second (spsann) is designed for the optimization of spatial samples using simulated annealing. The case studies illustrated that existing covariates are suitable for calibrating soil spatial models, and that using more detailed covariates yields only a modest increase in prediction accuracy that may not outweigh the extra costs. More efficient means of increasing prediction accuracy should be explored, such as obtaining more soil observations. To this end, one should use objective means of selecting observation locations, so as to minimize the effects on the sampling design of soil modellers' psychological responses to conceptual and operational factors: conceptual and operational difficulties encountered in the field determine how the motivation of soil modellers shifts between learning/verifying soil-landscape relationships and maximizing the number of observations and geographic coverage. For the sole purpose of spatial trend estimation, it should suffice to optimize spatial samples aiming only at reproducing the marginal distribution of the covariates. For the joint purpose of optimizing sample configurations for spatial trend and variogram estimation and for spatial interpolation, one can formulate a sound multi-objective optimization problem using robust versions of existing sampling algorithms. Overall, we have learned that a single, universal recipe for reducing our uncertainty in soil spatial modelling cannot be formulated. Deciding upon efficient ways of reducing our uncertainty requires, first, that we explore the full potential of existing soil and covariate data using sound spatial modelling techniques.
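The idea of optimizing a sample so that it reproduces a covariate's marginal distribution can be sketched with a generic simulated-annealing loop, in the spirit of the spsann package. The misfit criterion (a Kolmogorov-Smirnov-type distance between the sample and full-coverage empirical distributions), the move scheme, and the tuning constants below are illustrative assumptions, not spsann's actual objective functions.

```python
import bisect
import math
import random

def anneal_sample(cov, n, steps=4000, t0=0.1, seed=3):
    """Choose n of len(cov) candidate locations so the sampled covariate
    values reproduce the covariate's marginal distribution, by simulated
    annealing on a KS-type misfit."""
    rng = random.Random(seed)
    pop, N = sorted(cov), len(cov)

    def misfit(idx):
        # Max gap between the sample ECDF and the full-coverage ECDF.
        samp = sorted(cov[i] for i in idx)
        return max(abs((j + 1) / n - bisect.bisect_right(pop, v) / N)
                   for j, v in enumerate(samp))

    cur = rng.sample(range(N), n)
    f_cur = misfit(cur)
    best, f_best = list(cur), f_cur
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-12
        cand = list(cur)
        cand[rng.randrange(n)] = rng.randrange(N)   # move one sample point
        if len(set(cand)) < n:                      # skip duplicate locations
            continue
        f_cand = misfit(cand)
        # Accept improvements always, uphill moves with Metropolis probability.
        if f_cand < f_cur or rng.random() < math.exp((f_cur - f_cand) / temp):
            cur, f_cur = cand, f_cand
            if f_cur < f_best:
                best, f_best = list(cur), f_cur
    return best, f_best

r0 = random.Random(0)
field = [r0.expovariate(1.0) for _ in range(300)]    # skewed covariate field
idx, f = anneal_sample(field, n=25)
print(len(idx), round(f, 3))
```

The annealed sample's empirical distribution tracks the covariate's marginal distribution far more closely than a typical random draw would, which is the stated goal when the sample is intended only for spatial trend estimation.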
36

Bayesian Semiparametric Models for Nonignorable Missing Data Mechanisms in Logistic Regression

Ozturk, Olcay 01 May 2011 (has links) (PDF)
In this thesis, Bayesian semiparametric models are developed for the missing data mechanisms of nonignorably missing covariates in logistic regression. In the missing data literature, a fully parametric approach is used to model nonignorable missing data mechanisms: a probit or logit link of the conditional probability of the covariate being missing is modeled as a linear combination of all variables, including the missing covariate itself. However, nonignorably missing covariates may not be linearly related to the probit (or logit) of this conditional probability. In our study, the relationship between the probit of the probability of the covariate being missing and the missing covariate itself is modeled using a semiparametric approach based on penalized spline regression. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm to estimate the parameters is established, and WinBUGS code is constructed to sample from the full conditional posterior distributions of the parameters using Gibbs sampling. Monte Carlo simulation experiments under different true missing data mechanisms are conducted to compare the bias and efficiency properties of the resulting estimators with those from the fully parametric approach. These simulations show that estimators for logistic regression using semiparametric missing data models have better bias and efficiency properties than those using fully parametric missing data models when the true relationship between the missingness and the missing covariate is nonlinear; the two are comparable when this relationship is linear.
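The penalized-spline ingredient can be sketched in a frequentist form (the thesis itself fits it by Gibbs sampling in WinBUGS). Assuming a truncated-power-line basis with a ridge penalty on the knot coefficients only, a smooth nonlinear trend of the kind a linear probit term would miss is recovered by a single linear solve:

```python
import numpy as np

def tp_basis(x, knots):
    """Truncated-power-line basis: intercept, x, and (x - k)_+ per knot."""
    return np.column_stack([np.ones_like(x), x] +
                           [np.clip(x - k, 0.0, None) for k in knots])

def fit_pspline(x, y, knots, lam):
    """Penalized least squares with a ridge penalty on the knot coefficients
    only, leaving the global intercept and slope unpenalized."""
    B = tp_basis(x, knots)
    P = np.eye(B.shape[1])
    P[0, 0] = P[1, 1] = 0.0
    beta = np.linalg.solve(B.T @ B + lam * P, B.T @ y)
    return B @ beta

x = np.linspace(-2.0, 2.0, 201)
y = np.tanh(2.0 * x)          # a smooth nonlinear trend on the probit scale
knots = np.linspace(-1.8, 1.8, 15)
smooth = fit_pspline(x, y, knots, lam=1e-3)   # light penalty: flexible fit
stiff = fit_pspline(x, y, knots, lam=1e6)     # heavy penalty: nearly linear
```

The penalty weight interpolates between an unpenalized piecewise-linear fit and a plain linear fit; in the Bayesian version this trade-off is governed by the prior on the knot coefficients rather than a fixed lambda.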
37

Measure of Dependence for Length-Biased Survival Data

Bentoumi, Rachid January 2017 (has links)
In epidemiological studies, subjects with disease (prevalent cases) differ from the newly diseased (incident cases): they tend to survive longer due to sampling bias, and related covariates will also be biased. Methods for regression analyses have recently been proposed to measure the potential effects of covariates on survival. The goal is to extend the dependence measure of Kent (1983), based on the information gain, to the context of length-biased sampling. To estimate the information gain and dependence measure for length-biased data, we propose two different methods: kernel density estimation combined with a regression procedure, and parametric copulas. We assess the consistency of all proposed estimators. Algorithms detailing how to generate length-biased data, using the kernel density estimation with regression procedure and the parametric copula approaches, are given. Finally, the performance of the estimated information gain and dependence measure under length-biased sampling is demonstrated through simulation studies.
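Length-biased data of the kind described above can be generated by a simple weighted resampling scheme (an illustrative algorithm, not necessarily the one derived in the thesis): resampling durations with probability proportional to their length turns draws from a density f(t) into draws from the length-biased density t·f(t)/E[T], mimicking how prevalent sampling over-represents long survivors.

```python
import random

rng = random.Random(11)
# Incident durations from an Exponential(1) distribution: mean 1.
incident = [rng.expovariate(1.0) for _ in range(20000)]

# Prevalent (length-biased) sampling intercepts long durations more often:
# resampling with probability proportional to duration mimics t * f(t).
prevalent = rng.choices(incident, weights=incident, k=20000)

print(sum(incident) / len(incident), sum(prevalent) / len(prevalent))
```

For the exponential case the length-biased mean is E[T^2]/E[T] = 2, double the incident mean of 1, which makes the bias easy to see and gives a sanity check for any estimator intended to correct for it.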
38

Visual Biofeedback Training Reduces Quantitative Drugs Index Scores Associated With Fall Risk

Anson, Eric, Thompson, Elizabeth, Karpen, Samuel C., Odle, Brian L., Seier, Edith, Jeka, John, Panus, Peter C. 22 October 2018 (has links)
Objective: Drugs increase fall risk and decrease performance on balance and mobility tests. Conversely, whether biofeedback training to reduce fall risk also decreases scores on a published drug-based fall risk index has not been documented. Forty-eight community-dwelling older adults underwent either treadmill gait training plus visual feedback (+VFB) or walked on a treadmill without feedback. The Quantitative Drug Index (QDI) was derived from each participant's drug list and is based upon all-cause drug-associated fall risk. Analysis of covariance assessed changes in the QDI during the study; data are presented as mean ± standard error of the mean. Results: The QDI scores decreased significantly (p = 0.031) for participants receiving treadmill gait training +VFB (-0.259 ± 0.207), compared to participants who walked on the treadmill without VFB (0.463 ± 0.246). Changes in participants' QDI scores depended in part upon their age, which was a significant covariate (p = 0.007). These preliminary results demonstrate that rehabilitation to reduce fall risk may also decrease use of drugs associated with falls. Determining which drugs or drug classes contribute to the reduction in QDI scores for participants receiving treadmill gait training +VFB, compared to treadmill walking only, will require a larger study. Trial Registration ISRNCT01690611, ClinicalTrials.gov #366151-1, initial 9/24/2012, completed 4/21/2016
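The analysis of covariance used here amounts to a linear model of QDI change on treatment group with age as the covariate. A minimal sketch on simulated, entirely hypothetical data (the group sizes mirror the study, but the effect sizes and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 48
group = np.repeat([1.0, 0.0], n // 2)        # 1 = treadmill training +VFB
age = rng.uniform(65.0, 85.0, size=n)
# Hypothetical QDI change: drifts up with age, training effect of -0.7.
delta_qdi = 0.02 * (age - 75.0) - 0.7 * group + rng.normal(scale=0.3, size=n)

# ANCOVA as a linear model: change ~ intercept + group + age.
X = np.column_stack([np.ones(n), group, age])
beta, *_ = np.linalg.lstsq(X, delta_qdi, rcond=None)
adjusted_effect = float(beta[1])
print(round(adjusted_effect, 2))
```

The group coefficient is the age-adjusted treatment effect, which is exactly the quantity the study's ANCOVA reports when it identifies age as a significant covariate.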
39

Application of pharmacometric methods to assess treatment related outcomes following the standard of care in multiple myeloma

Irby, Donald January 2020 (has links)
No description available.
40

Three essays on long run movements of real exchange rates

Park, Sungwook 25 June 2007 (has links)
No description available.
