
Caractérisation des performances minimales d'estimation pour des modèles d'observations non-standards / Minimal performance analysis for non standard estimation models

Ren, Chengfang, 28 September 2015
In the parametric estimation context, an estimator's performance can be characterized, inter alia, by its mean square error (MSE) and its resolution limit. The first quantifies the accuracy of the estimated values; the second defines the estimator's ability to resolve several closely spaced parameters. This thesis deals first with the prediction of the "optimal" MSE using lower bounds in the hybrid estimation context (i.e., when the parameter vector contains both random and non-random parameters), second with the extension of Cramér-Rao bounds to non-standard observation models, and finally with the characterization of estimator resolution. The manuscript is divided into three parts. First, we fill gaps in the literature on hybrid lower bounds on the MSE by using two Bayesian lower bounds: the Weiss-Weinstein bound and a particular form of the Ziv-Zakai family of bounds. We show that these extended bounds are tighter than the existing hybrid lower bounds for predicting the optimal MSE. Second, we extend Cramér-Rao lower bounds to less common estimation contexts, namely: (i) when the non-random parameters are subject to linear or nonlinear equality constraints (constrained estimation); (ii) for discrete-time filtering problems in which the evolution of the states (parameters) is governed by a Markov chain; (iii) when the assumed observation distribution differs from the true data distribution. Finally, we study the resolution and accuracy of estimators through a criterion based directly on the distribution of the estimates. This approach extends the work of Oh and Kashyap and of Clark to multidimensional parameter estimation problems.
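A minimal numerical illustration of how a lower bound predicts achievable MSE (a toy Gaussian example with assumed values, not taken from the thesis): for N i.i.d. observations with unknown mean and known variance, the Cramér-Rao bound is σ²/N and the sample mean attains it.

```python
# Toy check (not from the thesis): for N i.i.d. N(mu, sigma^2) observations
# with known sigma, the Cramer-Rao bound on the MSE of any unbiased
# estimator of mu is sigma^2 / N, and the sample mean attains it.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, N, runs = 2.0, 1.5, 50, 100_000

samples = rng.normal(mu, sigma, size=(runs, N))
estimates = samples.mean(axis=1)                 # ML estimator of mu
empirical_mse = np.mean((estimates - mu) ** 2)   # Monte Carlo MSE
crb = sigma**2 / N                               # Cramer-Rao lower bound

print(f"empirical MSE = {empirical_mse:.5f}, CRB = {crb:.5f}")
```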

Etude des délais de survenue des effets indésirables médicamenteux à partir des cas notifiés en pharmacovigilance : problème de l'estimation d'une distribution en présence de données tronquées à droite / Time to Onset of Adverse Drug Reactions : Spontaneously Reported Cases Based Analysis and Distribution Estimation From Right-Truncated Data

Leroy, Fanny, 18 March 2014
This work investigates parametric maximum likelihood estimation for right-truncated survival data when the truncation times are considered deterministic. It was motivated by the problem of modeling the time to onset of adverse drug reactions from spontaneous-reporting pharmacovigilance databases. The exponential, Weibull and log-logistic distributions were explored. Sometimes the right-truncated nature of spontaneous reports is ignored and a naive estimator is used instead of the truncation-based estimator. A first simulation study showed that, although both estimators may be positively biased, the bias of the truncation-based estimator is much smaller than that of the naive one, and the same holds for the mean squared error. Furthermore, as the sample size increases, the bias and mean squared error of the truncation-based estimator decrease clearly, whereas they remain almost constant for the naive estimator. The asymptotic properties of the truncation-based estimator were studied: under sufficient conditions, this parametric estimator is consistent and asymptotically normal, and the asymptotic covariance matrix is given in detail. When the time to onset is exponentially distributed, these sufficient conditions hold as soon as a condition ensuring the existence of the maximum likelihood estimate is satisfied; for the Weibull and log-logistic distributions, such an existence condition was conjectured. The asymptotic distribution of the maximum likelihood estimator makes it possible to derive Wald-type and profile-likelihood confidence intervals for the distribution parameters. A second simulation study showed that the coverage probability of the Wald-type intervals can fall well below the nominal level because of the bias of the parameter estimator, a departure from normality, and a bias in the estimator of the asymptotic variance; in these cases, the profile-likelihood intervals perform better. Goodness-of-fit procedures adapted to right-truncated data, both graphical procedures and formal tests, are presented; they make it possible to check the fit of the candidate parametric families to the data. Finally, a real dataset of 64 cases of lymphoma occurring after anti-TNF-α treatment and reported to the French pharmacovigilance system was analyzed, illustrating the interest of the developed methods. Although this work was carried out in a pharmacovigilance setting, the theoretical developments and simulation results can be used for any retrospective analysis of case registries in which time-to-onset data are right-truncated.
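A sketch of the core estimator for the exponential case (synthetic data and all parameter values are assumptions; this is not the author's code): a case with event time t is reported only if t does not exceed its truncation time τ, so the truncation-based estimator maximizes the likelihood conditional on that event, while the naive estimator ignores truncation.

```python
# Exponential time-to-onset with rate lam; deterministic right truncation.
# Truncation-based MLE maximizes f(t; lam) / F(tau; lam) over observed cases.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
lam_true, n_pool = 0.5, 20_000
tau = rng.uniform(0.5, 6.0, n_pool)          # truncation times
t = rng.exponential(1 / lam_true, n_pool)    # latent times to onset
t_obs, tau_obs = t[t <= tau], tau[t <= tau]  # only the truncated sample is seen

def neg_loglik(lam):
    # -sum of log[ lam * exp(-lam t) / (1 - exp(-lam tau)) ]
    return -np.sum(np.log(lam) - lam * t_obs - np.log1p(-np.exp(-lam * tau_obs)))

naive = 1 / t_obs.mean()                     # biased: short delays over-represented
fit = minimize_scalar(neg_loglik, bounds=(1e-6, 50.0), method="bounded")
print(f"true rate {lam_true}, naive {naive:.3f}, truncation-based {fit.x:.3f}")
```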

Développement et validation de stratégies de quantification lipidique par imagerie et spectroscopie proton à 3T : Application à l’étude de la surnutrition / Development and validation of lipid quantification strategies using proton magnetic resonance imaging and spectroscopy at 3T : Application to an overfeeding study

Nemeth, Angéline, 28 November 2018
Magnetic resonance imaging and spectroscopy (MRI and MRS) are non-invasive methods with the potential to estimate in vivo the quantity and quality of abdominal adipose tissue (AT). The scientific and clinical context of this thesis is the overfeeding study "Poly-Nut", one of whose main objectives is to analyze changes in adipose tissue during a rapid phase of weight gain. The originality and complexity of the thesis lie in the development, adaptation and comparison of several quantitative MRI and MRS methods for studying the lipid signal in a clinical context at 3T. The reliability and validation of the in vivo measurements obtained with these techniques are studied in depth. For the quantitative analysis of the spectroscopy signal, several existing methods were compared with the one developed specifically for our clinical study. Depending on the model function used, nonlinear least-squares parametric estimation applied to lipid NMR spectra can lead to an ill-posed nonlinear problem. We show that a simplified model based on the structure of a triglyceride chain, as recently used in quantitative imaging, is a valid solution with respect to the state of the art. Different methods (MRI, MRS, dual-energy X-ray absorptiometry, gas chromatography) were then used to characterize subcutaneous and visceral AT. We demonstrate the feasibility of following, by MRI, the lipid content of the liver as well as the volume and fatty-acid composition of AT from a single multiple gradient-echo acquisition. Finally, experimental developments carried out in parallel with the clinical study on a 4.7T preclinical system compare different chemical-shift encoding strategies in imaging and characterize MRS methods for estimating in vivo the proportion of omega-3 among fatty-acid chains.
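A heavily simplified sketch of multi-echo fat-water quantification in the spirit of the MRI part (a single fat peak stands in for a realistic multi-peak triglyceride model; echo times, relaxation and amplitudes are all assumed values):

```python
# Fit |(W + F exp(2j pi df TE))| exp(-R2* TE) to multi-echo magnitude data.
import numpy as np
from scipy.optimize import curve_fit

df = -3.4e-6 * 42.58e6 * 3.0                  # main fat-water shift at 3 T (Hz)
TE = np.arange(1.2e-3, 12e-3, 1.1e-3)         # 10 echo times (s)

def model(te, W, F, R2s):
    return np.abs(W + F * np.exp(2j * np.pi * df * te)) * np.exp(-R2s * te)

rng = np.random.default_rng(2)
y = model(TE, 70.0, 30.0, 40.0) + rng.normal(0, 0.5, TE.size)

# Magnitude fitting is symmetric in (W, F): start near the expected basin
# to avoid the classic water-fat swap.
(W, F, R2s), _ = curve_fit(model, TE, y, p0=[80.0, 20.0, 30.0],
                           bounds=([0, 0, 0], [np.inf, np.inf, 500.0]))
print(f"estimated fat fraction = {F / (W + F):.3f} (true 0.300)")
```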

Modelagem Paramétrica de Cubas Eletrolíticas para Predição do Efeito Anódico. / Parametric modeling of electrolytic smelter pot for anodic effect prediction.

SILVA, Antonio José da, 05 June 2009
The anode effect that occurs in electrolytic smelter pots is responsible for the emission of gases such as PFCs, which contribute to the greenhouse effect and also compromise the pot's productive capacity. From the voltage (output) and current (input) signals, ARX and OE models of the electrolytic pot are estimated using system identification theory; these models are built to represent both steady-state operation and the occurrence of the anode effect. After simulation, the models with the best fit to the measured output are selected, using criteria established over the course of the research. Based on real data, and via algebraic properties, each model yields its transfer function, which is validated against real data obtained from industry; the time response, frequency response and convergence speed are analyzed. From the transfer function, the normal operating stage of the pot is represented, and the properties of the estimated model are used to predict the anode effect by identifying the voltage increase in the validation stage. This work therefore investigates which ARX and OE parametric models best represent the operation of the electrolytic pot, so as to enable prediction of the anode effect in the aluminum production process. The dissertation proposes the development of models in the continuous- and discrete-time domains, with a study of their transient and steady-state responses, as well as their frequency response during normal operation and in the phase preceding the anode effect.
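An illustration of the ARX estimation step on simulated input-output data (the model orders, coefficients and noise level are assumptions for the demo, not values from the dissertation):

```python
# Least-squares fit of an ARX(2,2) model
# y[t] = -a1 y[t-1] - a2 y[t-2] + b1 u[t-1] + b2 u[t-2] + e[t].
import numpy as np

rng = np.random.default_rng(3)
n = 2000
u = rng.normal(size=n)                        # input: line current (stand-in)
a1, a2, b1, b2 = -1.5, 0.7, 1.0, 0.5          # stable "true" system for the demo
y = np.zeros(n)                               # output: pot voltage (stand-in)
for t in range(2, n):
    y[t] = -a1*y[t-1] - a2*y[t-2] + b1*u[t-1] + b2*u[t-2] + 0.05*rng.normal()

# Least squares on the regressors [-y[t-1], -y[t-2], u[t-1], u[t-2]]
Phi = np.column_stack([-y[1:-1], -y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("estimated [a1, a2, b1, b2]:", np.round(theta, 3))
```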

Structured anisotropic sparsity priors for non-parametric function estimation / Parcimonie structurée anisotrope pour l'estimation non paramétrique

Farouj, Younes, 17 November 2016
The problem of estimating a multivariate function from corrupted observations arises in many areas of engineering. In the particular field of medical signal and image processing, this task has attracted special attention and has even triggered new concepts and notions that have found applications in many other fields. This interest stems mainly from the fact that medical data analysis is often carried out in challenging conditions: one has to deal with noise, low contrast and undesirable transformations introduced by acquisition systems. On the other hand, the concept of sparsity has had a tremendous impact on data reconstruction and restoration over the last two decades. Sparsity stipulates that some signals and images have representations involving only a few non-zero coefficients, which turns out to hold in many practical problems. This dissertation introduces new constructions of sparsity priors for wavelets and total variation. These constructions harness a notion of generalized anisotropy that makes it possible to group variables with similar behaviour, where this behaviour may be related to the regularity of the unknown function, the physical meaning of the variables, or the observation model. We use these constructions for non-parametric estimation of multivariate functions. In the case of wavelet thresholding, we show the optimality of the procedure over the usual functional spaces before presenting applications to the denoising of image sequences, spectral and hyperspectral data, incompressible flows, and ultrasound images. We then study the problem of retrieving activity patterns from functional magnetic resonance imaging data without priors on the timing, durations or atlas-based spatial structure of the activations. We model this challenge as a spatio-temporal deconvolution problem, formulate it as a minimization problem with a space-time structured total-variation prior, and adapt the generalized forward-backward splitting algorithm to solve it.
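A toy version of the baseline wavelet soft-thresholding estimator the thesis builds on (isotropic, one-dimensional, universal threshold; it assumes the PyWavelets package and does not implement the anisotropic construction):

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
n, sigma = 1024, 0.3
x = np.linspace(0, 1, n)
signal = np.sin(8 * np.pi * x) * (x > 0.3)    # piecewise-smooth test function
noisy = signal + sigma * rng.normal(size=n)

coeffs = pywt.wavedec(noisy, "db4", level=5)
thr = sigma * np.sqrt(2 * np.log(n))          # universal threshold
den = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(den, "db4")

print("noisy MSE    :", np.mean((noisy - signal) ** 2))
print("denoised MSE :", np.mean((denoised - signal) ** 2))
```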

Dependence modeling between continuous time stochastic processes : an application to electricity markets modeling and risk management / Modélisation de la dépendance entre processus stochastiques en temps continu : une application aux marchés de l'électricité et à la gestion des risques

Deschatre, Thomas, 08 December 2017
This thesis studies dependence modeling problems between continuous-time stochastic processes, with applications to the modeling and risk management of electricity markets. In a first part, new copulas are proposed to model the dependence between two Brownian motions and to control the distribution of their difference. We show that the class of admissible copulas for Brownian motions contains asymmetric copulas, which allow the survival function of the difference between the two Brownian motions to take higher values in the right tail than under the Gaussian copula. The results are applied to the joint modeling of electricity prices and other energy commodity prices. In a second part, we consider a discretely observed stochastic process defined as the sum of a continuous semimartingale and a mean-reverting compound Poisson process. An estimation procedure for the mean-reversion parameter is proposed when this parameter is large, in a high-frequency framework with a finite time horizon. The results are used to model spikes in electricity price time series. In a third part, we consider a doubly stochastic Poisson process whose stochastic intensity is a function of a continuous semimartingale. To estimate this function, a local polynomial estimator is used, a bandwidth-selection method is proposed that leads to an oracle inequality, and a test is proposed to determine whether the intensity function belongs to a given parametric family. Using these results, we model the dependence between the intensity of electricity price spikes and exogenous factors such as wind power production.
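An illustrative simulation of the spike component from the second part (the dynamics and parameter values are assumptions, not the thesis model): a compound Poisson process whose jumps decay at a fast mean-reversion rate β, added to a smooth seasonal base price.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n = 1.0, 10_000                            # one year on a fine grid
t = np.linspace(0, T, n)
lam, beta = 40.0, 300.0                       # ~40 spikes/year, fast reversion

n_jumps = rng.poisson(lam * T)
jump_times = np.sort(rng.uniform(0, T, n_jumps))
jump_sizes = rng.exponential(30.0, n_jumps)   # spike heights (price units)

spikes = np.zeros(n)
for Ti, Ji in zip(jump_times, jump_sizes):
    m = t >= Ti
    spikes[m] += Ji * np.exp(-beta * (t[m] - Ti))   # each spike decays fast

price = 45.0 + 10.0 * np.sin(2 * np.pi * t) + spikes
print(f"{n_jumps} spikes, max {price.max():.1f}, median {np.median(price):.1f}")
```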

Estatística em confiabilidade de sistemas: uma abordagem Bayesiana paramétrica / Statistics on systems reliability: a parametric Bayesian approach

Rodrigues, Agatha Sacramento, 17 August 2018
The reliability of a system of components depends on the reliability of each component, so estimating the reliability function of every component in the system is of interest. This is not an easy task: when the system fails, the failure time of a given component may not be observed, i.e., the data are censored. In this work we propose parametric Bayesian models for estimating the reliability functions of systems and components in four scenarios. First, a Weibull model is proposed to estimate the component failure-time distribution from non-repairable coherent systems when both the system failure time and each component's status at the moment of system failure are available. The component lifetimes are not required to be identically distributed, but independence among them is needed, as stated in a theorem proved in this work; indeed, without the assumption that the components' lifetimes are mutually independent, a given set of sub-reliability functions does not identify the corresponding marginal reliability functions. In masked-cause-of-failure situations, the component statuses at the moment of system failure are not observed; for this second scenario, we propose a Bayesian Weibull model with latent variables in the estimation process. The two models above estimate the components' reliability functions marginally, when information about the other components is unavailable or unnecessary, and consequently require the independence assumption. To avoid imposing this assumption, Hougaard's multivariate Weibull model is proposed for estimating the reliability functions of components in non-repairable coherent systems. Finally, a Weibull model is proposed for estimating the component reliability functions of a repairable series system with masked cause of failure. For each scenario, simulation studies are carried out to evaluate the proposed models, always comparing them with the best solution previously available in the literature; in general, the proposed models give better results. To demonstrate their applicability, data analyses are performed on real problems not only from the reliability field but also from the social sciences.
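A heavily simplified sketch of the first scenario (exponential component lifetimes with conjugate Gamma priors instead of the thesis's Weibull models, so the posterior is closed-form; a series system, the simplest coherent system, is assumed): we observe each system failure time and which component caused it.

```python
import numpy as np

rng = np.random.default_rng(6)
lam_true = np.array([0.2, 0.5])               # component failure rates (assumed)
n_sys = 200
t_comp = rng.exponential(1 / lam_true, size=(n_sys, 2))
t_sys = t_comp.min(axis=1)                    # series system fails at the minimum
cause = t_comp.argmin(axis=1)                 # component status at system failure

a0, b0 = 1.0, 1.0                             # Gamma(a0, b0) prior on each rate
for j in range(2):
    a_post = a0 + np.sum(cause == j)          # failures attributed to component j
    b_post = b0 + t_sys.sum()                 # every run exposes both components
    print(f"component {j}: posterior mean rate {a_post/b_post:.3f} "
          f"(true {lam_true[j]})")
```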

Autocorrélation et stationnarité dans le processus autorégressif / Autocorrelation and stationarity in the autoregressive process

Proïa, Frédéric, 04 November 2013
This thesis is devoted to the study of asymptotic properties of the p-th order autoregressive process, i.e., a random sequence (Y_n) defined on ℕ or ℤ and completely described by a linear combination of its p most recent values and a white noise (ε_n). Throughout this manuscript, we address two main issues in the study of such processes: serial correlation and stationarity. By way of introduction, we give a necessary overview of the usual properties of the autoregressive process. The two following chapters are dedicated to the inferential consequences of a significant autoregression in the disturbance (ε_n), first for p = 1 and then for arbitrary p, in the stable framework. These results shed new and more rigorous light on well-known statistical procedures such as the Durbin-Watson test and the H-test. In this autocorrelated-noise framework, we complete the study with a set of moderate deviation principles for our estimates. We then turn to a continuous-time analogue of the autoregressive process, described by a stochastic differential equation whose solution is the well-known Ornstein-Uhlenbeck process. In the case where the Ornstein-Uhlenbeck process is itself driven by an Ornstein-Uhlenbeck process, we deal with the serial correlation issue for the continuous-time process and infer statistical properties of such models, keeping the parallel with the discrete-time framework of the previous chapters. Finally, the last chapter is entirely devoted to the stationarity issue. We consider the general autoregressive process with a polynomial trend of order r, driven by an integrated random walk of order d. The convergence results obtained in the unstable framework generalize the Leybourne-McCabe test and some aspects of the KPSS test. Numerous simulation graphs support the results established throughout the study.
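A quick numerical illustration of the issue studied in the first chapters (all values assumed): fitting an AR(1) by least squares when the driving noise is itself AR(1), then computing the Durbin-Watson statistic of the residuals.

```python
# With a lagged dependent variable, theta_hat is inconsistent and the DW
# statistic is biased toward 2, the weakness behind the H-test.
import numpy as np

rng = np.random.default_rng(7)
n, theta, rho = 5000, 0.6, 0.4
eps = np.zeros(n)
for t in range(1, n):                         # AR(1) disturbance
    eps[t] = rho * eps[t-1] + rng.normal()
y = np.zeros(n)
for t in range(1, n):                         # AR(1) process with that noise
    y[t] = theta * y[t-1] + eps[t]

theta_hat = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])    # least-squares AR(1) fit
res = y[1:] - theta_hat * y[:-1]
dw = np.sum(np.diff(res) ** 2) / np.sum(res ** 2)   # Durbin-Watson statistic
print(f"theta_hat = {theta_hat:.3f} (true {theta}), DW = {dw:.3f}")
```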

[en] NON-PARAMETRIC ESTIMATION OF INTEREST RATE CURVES: MODEL SELECTION CRITERION, PERFORMANCE DETERMINANT FACTORS AND BID-ASK SPREAD / [pt] ESTIMAÇÕES NÃO PARAMÉTRICAS DE CURVAS DE JUROS: CRITÉRIO DE SELEÇÃO DE MODELO, FATORES DETERMINANTES DE DESEMPENHO E BID-ASK SPREAD

ANDRE MONTEIRO DALMEIDA MONTEIRO, 11 June 2002
This thesis investigates interest-rate curve estimation from a non-parametric point of view, in two parts. The first part addresses the criterion used to select the best-performing method for interpolating the Brazilian interest-rate curve in a given sample. A selection criterion is proposed that measures out-of-sample performance by combining leave-k-out cross-validation resampling strategies applied to all sample curves, where 1 ≤ k ≤ K and K is a function of the number of contracts observed in each curve. Particularities of the problem substantially reduce the required computational effort, making the criterion feasible. The sample is daily, from January 1997 to February 2001. The proposed criterion selected the natural cubic spline, used as a perfect-fitting estimation method, as the best performer; considering trading-rate precision, this spline proved unbiased. However, quantitative analysis of the performance determinant factors revealed heteroskedasticity in the out-of-sample errors. From a specification of the conditional variance of these errors, and under some hypotheses, a security-interval scheme is proposed for interest rates estimated by the perfect-fitting natural cubic spline; a backtest suggests the proposed scheme is consistent, accommodating the underlying assumptions and approximations well. The second part estimates the US interest-rate curve built from fixed-for-floating dollar-Libor swap contracts using the Support Vector Machine (SVM), a method from statistical learning theory. SVM research has achieved important theoretical results, although implementations on real regression problems remain scarce. The SVM has attractive features for yield-curve modeling: it can incorporate, directly in the estimation, a priori information about the shape of the curve and about the rate formation and liquidity of each contract from which the curve is built, the latter quantified by each contract's bid-ask spread (BAS). The basic SVM formulation is modified to accommodate different BAS values without losing its properties, and special attention is given to extracting a priori information from the typical curve shape for selecting the SVM parameters. The sample is daily, from March 1997 to April 2001. The out-of-sample performance of several SVM specifications was compared with that of other estimation methods; the SVM achieved the best control of the trade-off between bias and variance of the out-of-sample errors.
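A small sketch of the selection criterion with k = 1 on a synthetic curve (the maturities and rates below are made up, not the thesis data): the natural cubic spline perfectly fits the retained points and is scored on the held-out one.

```python
import numpy as np
from scipy.interpolate import CubicSpline

maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10])        # years
rates = np.array([4.8, 5.0, 5.3, 5.6, 5.8, 6.0, 6.1, 6.15])  # % per annum

errors = []
for i in range(1, len(maturities) - 1):       # hold out interior points only
    m = np.delete(maturities, i)
    r = np.delete(rates, i)
    spline = CubicSpline(m, r, bc_type="natural")   # perfect fit to kept points
    errors.append(rates[i] - float(spline(maturities[i])))

rmse = np.sqrt(np.mean(np.square(errors)))
print(f"leave-one-out RMSE: {rmse:.4f} percentage points")
```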
