
Análise de resíduos em modelos de tempo de falha acelerado com efeito aleatório (Residual analysis in accelerated failure time models with random effects)

Rodrigues, Elisângela da Silva, 15 April 2013
We present residual analysis techniques to assess the fit of correlated survival data by Accelerated Failure Time Models (AFTM) with random effects. We propose an imputation procedure for censored observations and consider three types of residuals to evaluate different model characteristics. We illustrate the proposal with the analysis of an AFTM with random effects fitted to a real data set involving times between failures of oil well equipment. Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
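The imputation procedure itself is not reproduced in this listing. As a hedged sketch of the general idea, the following assumes a Weibull failure-time model (an assumption for illustration, not the thesis's actual specification) and imputes right-censored observations by inverse-CDF sampling from the distribution conditional on surviving past the censoring time:

```python
import numpy as np

rng = np.random.default_rng(0)

def impute_censored_weibull(t_cens, shape, scale):
    """Impute event times for right-censored observations by sampling from
    a Weibull(shape, scale) distribution conditional on T > t_cens,
    via inverse-CDF sampling on the conditional survival function."""
    t_cens = np.asarray(t_cens, dtype=float)
    u = rng.uniform(size=t_cens.shape)
    s_c = np.exp(-(t_cens / scale) ** shape)  # S(c): survival at censoring time
    # Solve S(t) = u * S(c) for t; since u * S(c) <= S(c), the result exceeds c
    return scale * (-np.log(u * s_c)) ** (1.0 / shape)

# Toy data: three failure times right-censored at t = 2.0 (hypothetical values)
censored = np.array([2.0, 2.0, 2.0])
imputed = impute_censored_weibull(censored, shape=1.5, scale=3.0)
assert np.all(imputed > censored)  # every imputed time exceeds its censoring time
```

Under an AFTM with random effects, the same conditional-sampling step would use patient-specific scale parameters; residual checks then proceed on the completed data as for uncensored fits.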

Contribution à la sélection de variables en présence de données longitudinales : application à des biomarqueurs issus d'imagerie médicale / Contribution to variable selection in the presence of longitudinal data : application to biomarkers derived from medical imaging

Geronimi, Julia, 13 December 2016
Clinical studies allow us to measure many variables repeatedly over time. When the goal is to relate them to a clinical response of interest, regularisation methods such as the LASSO, extended to Generalized Estimating Equations (GEE), make it possible to select a subset of variables while accounting for intra-patient correlations. Databases often contain unrecorded values and measurement problems, resulting in inevitable missing data. The objective of this thesis is to handle such missing data when selecting variables in the presence of longitudinal data. We use multiple imputation and introduce a new imputation function for the specific case of variables subject to a detection limit. We propose a new variable selection method for correlated data that accommodates missing data: Multiple Imputation Penalized Generalized Estimating Equations (MI-PGEE). Our operator applies the group-LASSO penalty by treating the estimated regression coefficients of the same variable across the multiply-imputed datasets as one group. The method yields a consistent selection across imputations, with the regularisation parameter chosen by minimizing a BIC-like criterion. We present an application to knee osteoarthritis, where the goal is to select the subset of biomarkers that best explains differences in joint space width over time.
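The grouping device at the heart of MI-PGEE can be illustrated with a small sketch (hypothetical coefficients; the actual estimator solves penalized GEE, which is omitted here). The proximal step of the group-LASSO shrinks the column of coefficients that one covariate receives across the M imputed datasets as a single group, so a covariate is selected in all imputations or in none:

```python
import numpy as np

def group_soft_threshold(B, lam):
    """Group-LASSO proximal step for multiple-imputation selection.
    B has shape (M, p): row m holds coefficients estimated on the m-th
    imputed dataset; column j is the group for covariate j. Each column
    is shrunk jointly, so covariate j is kept or dropped in all
    imputations at once."""
    norms = np.linalg.norm(B, axis=0)  # Euclidean norm of each covariate's group
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return B * scale                   # broadcast the per-group factor over rows

B = np.array([[0.9, 0.05, -0.8],
              [1.1, -0.04, -0.7]])    # M=2 imputations, p=3 covariates (toy values)
B_sel = group_soft_threshold(B, lam=0.2)
selected = np.linalg.norm(B_sel, axis=0) > 0
# the weak covariate (index 1) is dropped in both imputations simultaneously
```

In the actual method this step would sit inside the penalized GEE iterations, with lam tuned by the BIC-like criterion mentioned above.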

Statistical Evaluation of Correlated Measurement Data in Longitudinal Setting Based on Bilateral Corneal Cross-Linking

Herber, Robert, Graehlert, Xina, Raiskup, Frederik, Veselá, Martina, Pillunat, Lutz E., Spoerl, Eberhard, 13 April 2023
Purpose: In ophthalmology, data from both eyes of a patient are frequently included in statistical evaluations. Because the two eyes are correlated, this violates the independence assumption of classical statistical tests (e.g. the t-test or analysis of variance (ANOVA)). Linear mixed models (LMM) offer a way to include data from both eyes in the statistical evaluation. Methods: LMMs are available in a variety of statistical software packages, such as SPSS or R. The approach was applied to a retrospective longitudinal analysis of accelerated corneal cross-linking (ACXL (9*10)) treatment in progressive keratoconus (KC) with a follow-up period of 36 months. Forty eyes of 20 patients were included; sequential bilateral CXL treatment was performed within 12 months. LMM and repeated-measures ANOVA were used for the statistical evaluation of topographic and tomographic data measured by Pentacam (Oculus, Wetzlar, Germany). Results: The eyes of each patient were classified into a worse and a better eye based on corneal topography. Visual acuity, keratometric values and minimal corneal thickness differed significantly between them at baseline (p < 0.05), and a significant correlation between the worse and better eye was found (p < 0.05). Consequently, analyzing the data at each follow-up visit using ANOVA partially overestimated the statistical effect, which could be avoided by using LMM. After 36 months, ACXL had significantly improved BCVA and flattened the cornea. Conclusion: Evaluating data from both eyes with classical statistical tests that ignore their correlation overestimates the statistical effect; this can be avoided by using LMM.
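Why ignoring the eye-to-eye correlation overstates precision can be checked with a short simulation (a numpy sketch with made-up parameters, not the study's data). For paired eyes with intra-patient correlation rho, the variance of the grand mean is sigma^2 (1 + rho) / (2n), larger than the i.i.d. formula sigma^2 / (2n) that classical tests implicitly use:

```python
import numpy as np

rng = np.random.default_rng(42)

def grand_mean_variance(n_patients, rho, n_sim=20000):
    """Empirical variance of the mean over 2*n_patients eyes when the two
    eyes of each patient are correlated with coefficient rho (sigma = 1)."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    L = np.linalg.cholesky(cov)
    # n_sim simulated studies, each with n_patients patients x 2 eyes
    eyes = rng.standard_normal((n_sim, n_patients, 2)) @ L.T
    return eyes.mean(axis=(1, 2)).var()

n, rho = 20, 0.7                 # hypothetical study size and eye correlation
empirical = grand_mean_variance(n, rho)
naive = 1.0 / (2 * n)            # sigma^2 / (2n): pretends 2n independent eyes
correct = (1 + rho) / (2 * n)    # accounts for within-patient correlation
# naive < correct (matched by the simulation): classical tests understate the
# true variance, hence overstate significance, as the abstract reports
```

An LMM with a per-patient random intercept recovers the `correct` variance automatically, which is the point of the paper.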

Spatially Correlated Data Accuracy Estimation Models in Wireless Sensor Networks

Karjee, Jyotirmoy, January 2013
One of the major applications of wireless sensor networks is to sense accurate and reliable data from the physical environment, with or without a priori knowledge of the data statistics. To extract accurate data, we investigate spatial data correlation among sensor nodes and develop data accuracy models. We propose three such models that assume a priori knowledge of data statistics: the Estimated Data Accuracy (EDA) model, the Cluster based Data Accuracy (CDA) model and the Distributed Cluster based Data Accuracy (DCDA) model. Because sensor nodes are deployed densely, the observed data are highly correlated among nodes, which form distributed clusters in space. We describe two clustering algorithms, the Deterministic Distributed Clustering (DDC) algorithm and the Spatial Data Correlation based Distributed Clustering (SDCDC) algorithm, implemented under the CDA and DCDA models respectively. Moreover, data correlation in the network introduces redundancy in the data collected by sensor nodes. It is therefore unnecessary for all sensor nodes to transmit their highly correlated data to the central node (sink node or cluster head); an optimal subset of sensor nodes suffices to measure accurate data and transmit it to the central node. This reduces data redundancy, energy consumption and data transmission cost, and thereby increases the lifetime of the sensor network. Finally, we propose a fourth accuracy model, the Adaptive Data Accuracy (ADA) model, which does not require a priori knowledge of data statistics. The ADA model senses a continuous data stream at regular time intervals to estimate accurate data from the environment and selects an optimal set of sensor nodes for data transmission.
Data transmission by these optimal sensor nodes can be reduced further by transmitting only a subset of the sensor data, using the Spatio-Temporal Data Prediction (STDP) model under data reduction strategies. Furthermore, we implement the data accuracy model when the network is under threat of a malicious attack.
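The redundancy argument can be sketched as follows (a toy greedy selection under an assumed exponential spatial-correlation model; the thesis's EDA/CDA/DCDA models are more elaborate). A node whose data are sufficiently correlated with an already-selected node need not transmit, so a small representative subset covers the field:

```python
import numpy as np

def select_representatives(positions, corr_len, threshold):
    """Greedy pick of a representative subset: a node is redundant if its
    spatial correlation (exponential decay in distance) with an already
    selected node exceeds `threshold`; otherwise it becomes a new
    representative that must transmit to the central node."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    corr = np.exp(-d / corr_len)  # assumed exponential correlation model
    selected = []
    for i in range(len(positions)):
        if not any(corr[i, j] >= threshold for j in selected):
            selected.append(i)
    return selected

# 1-D toy deployment (hypothetical): a dense cluster near 0 plus one far node
pos = np.array([[0.0], [0.1], [0.2], [5.0]])
reps = select_representatives(pos, corr_len=1.0, threshold=0.8)
# the three clustered nodes collapse to one representative; the far node stays
```

Only `len(reps)` of the four nodes transmit, which is the energy-saving effect the abstract describes.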
