41

Modèles de caméras et algorithmes pour la création de contenu vidéo 3D / Camera models and algorithms for 3D video content creation

Pujades Rocamora, Sergi 14 October 2015 (has links)
Des optiques à longue focale ont souvent été utilisées dans le cinéma 2D et la télévision, soit dans le but de se rapprocher de la scène, soit dans le but de produire un effet esthétique grâce à la déformation de la perspective. Toutefois, dans le cinéma ou la télévision 3D, l'utilisation de longues focales crée le plus souvent un “effet carton” ou de la divergence oculaire. Pour résoudre ce problème, les méthodes de l'état de l'art utilisent des techniques de transformation de la disparité, qui sont une généralisation de l'interpolation de points de vue. Elles génèrent de nouvelles paires stéréoscopiques à partir des deux séquences d'images originales. Nous proposons d'utiliser plus de deux caméras pour résoudre les problèmes non résolus par les méthodes de transformation de la disparité. Dans la première partie de la thèse, nous passons en revue les causes de la fatigue visuelle et de l'inconfort visuel lors de la visualisation d'un film stéréoscopique. Nous modélisons alors la perception de la profondeur de la vision stéréoscopique d'une scène filmée en 3D avec deux caméras, et projetée dans une salle de cinéma ou sur un téléviseur 3D. Nous caractérisons mathématiquement cette distorsion 3D, et formulons les contraintes mathématiques associées aux causes de la fatigue visuelle et de l'inconfort. Nous illustrons ces distorsions 3D avec un nouveau logiciel interactif, la “salle de projection virtuelle”. Afin de générer les images stéréoscopiques souhaitées, nous proposons d'utiliser le rendu basé image. Ces techniques comportent généralement deux étapes. Tout d'abord, les images d'entrée sont transformées vers la vue cible, puis les images transformées sont mélangées. Les transformations sont généralement calculées à l'aide d'une géométrie intermédiaire (implicite ou explicite).
Le mélange d'images a été largement étudié dans la littérature et quelques heuristiques permettent d'obtenir de très bonnes performances. Cependant, la combinaison des heuristiques proposées n'est pas simple et nécessite le réglage manuel de nombreux paramètres. Dans cette thèse, nous proposons une nouvelle approche bayésienne du problème de synthèse de nouveaux points de vue, basée sur un modèle génératif. Le modèle génératif proposé tient compte de l'incertitude sur la transformation d'image. Le formalisme bayésien nous permet de déduire l'énergie du modèle génératif et de calculer les images désirées correspondant au maximum a posteriori. La méthode dépasse en termes de qualité les techniques de l'état de l'art du rendu basé image sur des jeux de données complexes. D'autre part, les équations de l'énergie fournissent une formalisation des heuristiques largement utilisées dans les techniques de rendu basé image. Le modèle génératif proposé aborde également le problème de la super-résolution, permettant de rendre des images à une résolution plus élevée que les images de départ. Dans la dernière partie de cette thèse, nous appliquons la nouvelle technique de rendu au cas du zoom stéréoscopique et nous montrons ses performances. / Optics with long focal length have been extensively used for shooting 2D cinema and television, either to virtually get closer to the scene or to produce an aesthetic effect through the deformation of perspective. However, in 3D cinema or television, the use of long focal length either creates a “cardboard effect” or causes visual divergence. To overcome this problem, state-of-the-art methods use disparity mapping techniques, which are a generalization of view interpolation, and generate new stereoscopic pairs from the two image sequences.
We propose to use more than two cameras to solve the issues left open by disparity mapping methods. In the first part of the thesis, we review the causes of visual fatigue and visual discomfort when viewing a stereoscopic film. We then model the depth perception from stereopsis of a 3D scene shot with two cameras and projected in a movie theater or on a 3DTV. We mathematically characterize this 3D distortion, and derive the mathematical constraints associated with the causes of visual fatigue and discomfort. We illustrate these 3D distortions with a new interactive software tool, “The Virtual Projection Room”. In order to generate the desired stereoscopic images, we propose to use image-based rendering. These techniques usually proceed in two stages. First, the input images are warped into the target view, and then the warped images are blended together. The warps are usually computed with the help of a geometric proxy (either implicit or explicit). Image blending has been extensively addressed in the literature, and a few heuristics have proven to achieve very good performance. Yet the combination of these heuristics is not straightforward and requires the manual adjustment of many parameters. In this thesis, we propose a new Bayesian approach to the problem of novel view synthesis, based on a generative model that takes into account the uncertainty of the image warps in the image formation model. The Bayesian formalism allows us to deduce the energy of the generative model and to compute the desired images as the Maximum a Posteriori estimate. The method outperforms state-of-the-art image-based rendering techniques on challenging datasets. Moreover, the energy equations provide a formalization of the heuristics widely used in image-based rendering techniques.
Besides, the proposed generative model also addresses the problem of super-resolution, allowing images to be rendered at a higher resolution than the input ones. In the last part of this thesis, we apply the new rendering technique to the case of the stereoscopic zoom and show its performance.
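A flavor of the blending stage described in this abstract: when each warped view is modeled as the target image plus zero-mean Gaussian noise whose variance encodes the warp uncertainty, the MAP estimate reduces to a precision-weighted average. The sketch below shows only that step under those simplifying assumptions; function and argument names are illustrative, and the thesis's actual model (visibility, super-resolution) is not reproduced here.

```python
import numpy as np

def map_blend(warped, variances):
    # Precision-weighted average of the warped views: the MAP estimate when
    # each warped image equals the target plus Gaussian noise of the given
    # per-view (or per-pixel) variance.
    w = 1.0 / np.asarray(variances, dtype=float)            # precisions
    num = (w * np.asarray(warped, dtype=float)).sum(axis=0)
    return num / w.sum(axis=0)
```

A view with high warp uncertainty (large variance) thus contributes little to the blended pixel, which is exactly the behavior the blending heuristics in the literature hand-tune.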
42

Família Kumaraswamy-G para analisar dados de sobrevivência de longa duração / The Kumaraswamy-G family for analyzing long-term survival data

Eudes, Amanda Morales 25 February 2015 (has links)
Universidade Federal de Minas Gerais / In survival analysis, one studies the time until the occurrence of a particular event of interest; in the literature, the most common approach is parametric, where the data are assumed to follow a specific probability distribution. Various known distributions may be used to accommodate failure time data; however, most of these distributions cannot accommodate non-monotone hazard functions. Kumaraswamy (1980) proposed a new probability distribution and, based on it, Cordeiro and de Castro (2011) recently proposed a new family of generalized distributions, the so-called Kumaraswamy generalized (Kum-G) family. In addition to its flexibility, this distribution also accommodates unimodal and bathtub-shaped hazard functions. The objective of this dissertation is to present the family of Kum-G distributions and its particular cases to analyze lifetime data of individuals at risk, considering that part of the population will never present the event of interest, and that covariates may influence the survival function and the cured proportion of the population. Some properties of these models are discussed, as well as appropriate estimation methods, in both the classical and Bayesian approaches. Finally, applications of such models to data sets from the literature are presented. / Em análise de sobrevivência estuda-se o tempo até a ocorrência de um determinado evento de interesse e, na literatura, uma abordagem muito utilizada é a paramétrica, em que os dados seguem uma distribuição de probabilidade. Diversas distribuições conhecidas são utilizadas para acomodar dados de tempos de falha, porém, grande parte destas distribuições não é capaz de acomodar funções de risco não monótonas.
Kumaraswamy (1980) propôs uma nova distribuição de probabilidade e, baseada nela, mais recentemente Cordeiro e de Castro (2011) propuseram uma nova família de distribuições generalizadas, a Kumaraswamy generalizada (Kum-G). Esta distribuição, além de ser flexível, contém distribuições com funções de risco unimodal e em forma de banheira. O objetivo deste trabalho é apresentar a família de distribuições Kum-G e seus casos particulares para analisar dados de tempo de vida de indivíduos em risco, considerando que uma parcela da população nunca apresentará o evento de interesse, além de considerarmos que covariáveis influenciam a função de sobrevivência e a proporção de curados da população. Algumas propriedades destes modelos serão abordadas, bem como métodos adequados de estimação, tanto na abordagem clássica quanto na bayesiana. Por fim, são apresentadas aplicações de tais modelos a conjuntos de dados existentes na literatura.
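The Kum-G family cited in this abstract has a simple closed form: for any baseline cdf G, F(x) = 1 − (1 − G(x)ᵃ)ᵇ (Cordeiro and de Castro, 2011). A minimal sketch, with a Weibull baseline chosen purely for illustration; the cure-fraction extension studied in the dissertation is not shown:

```python
import math

def kumg_cdf(x, a, b, G):
    # Kumaraswamy-G cdf: F(x) = 1 - (1 - G(x)^a)^b, with baseline cdf G
    # and extra shape parameters a, b > 0.
    return 1.0 - (1.0 - G(x) ** a) ** b

def kumg_survival(x, a, b, G):
    # Survival function of the susceptible part: S(x) = (1 - G(x)^a)^b.
    return (1.0 - G(x) ** a) ** b

def weibull_cdf(x, shape=1.5, scale=1.0):
    # Illustrative baseline; any valid cdf works as G.
    return 1.0 - math.exp(-((x / scale) ** shape))
```

Setting a = b = 1 recovers the baseline G, which is one way the family nests its particular cases.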
43

GARMA models, a new perspective using Bayesian methods and transformations / Modelos GARMA, uma nova perspectiva usando métodos Bayesianos e transformações

Breno Silveira de Andrade 16 December 2016 (has links)
Generalized autoregressive moving average (GARMA) models are a class of models developed to extend the univariate Gaussian ARMA time series model to a flexible observation-driven model for non-Gaussian time series data. This work presents GARMA models with discrete distributions and the application of resampling techniques to this class of models. We also propose a Bayesian approach to GARMA models. TGARMA (Transformed Generalized Autoregressive Moving Average) models are proposed, using the Box-Cox power transformation. Last but not least, we propose a Bayesian approach for TGARMA models. / Modelos autorregressivos e de médias móveis generalizados (GARMA) são uma classe de modelos que foi desenvolvida para estender os conhecidos modelos ARMA com distribuição Gaussiana para um cenário de séries temporais não Gaussianas. Este trabalho apresenta os modelos GARMA aplicados a distribuições discretas e alguns métodos de reamostragem aplicados neste contexto. É proposta neste trabalho uma abordagem Bayesiana para os modelos GARMA. O trabalho dá continuidade apresentando os modelos GARMA transformados, utilizando a transformação de Box-Cox. E, por último porém não menos importante, uma abordagem Bayesiana para os modelos GARMA transformados.
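To make the observation-driven idea concrete, here is a simulation sketch of a Poisson GARMA(1,1) with log link in the usual formulation (linear predictor fed back through past observations and past predictors). Parameter values and the truncation constant c are illustrative, not taken from the thesis:

```python
import math
import random

def simulate_poisson_garma(n, beta, phi, theta, c=0.1, seed=0):
    # GARMA(1,1), log link:
    #   eta_t = beta + phi*(log y*_{t-1} - beta) + theta*(log y*_{t-1} - eta_{t-1})
    #   y_t | past ~ Poisson(exp(eta_t)),   y* = max(y, c) to avoid log(0).
    rng = random.Random(seed)

    def poisson(mu):
        # Knuth's multiplication method; adequate for the moderate means here.
        L, k, p = math.exp(-mu), 0, 1.0
        while p > L:
            k += 1
            p *= rng.random()
        return k - 1

    eta_prev, y_prev, ys = beta, math.exp(beta), []
    for _ in range(n):
        z = math.log(max(y_prev, c))
        eta = beta + phi * (z - beta) + theta * (z - eta_prev)
        y = poisson(math.exp(eta))
        ys.append(y)
        eta_prev, y_prev = eta, y
    return ys
```

Replacing the Poisson draw with another discrete distribution (e.g. negative binomial) gives the other discrete GARMA variants the abstract refers to.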
44

Approche bayésienne de l'estimation des composantes périodiques des signaux en chronobiologie / A Bayesian approach for periodic components estimation for chronobiological signals

Dumitru, Mircea 25 March 2016 (has links)
La toxicité et l’efficacité de plus de 30 agents anticancéreux présentent de très fortes variations en fonction du temps de dosage. Par conséquent, les biologistes qui étudient le rythme circadien ont besoin d’une méthode très précise pour estimer le vecteur de composantes périodiques (CP) de signaux chronobiologiques. En outre, dans les développements récents, non seulement la période dominante ou le vecteur de CP présentent un intérêt crucial, mais aussi leurs stabilités ou variabilités. Dans les expériences effectuées en traitement du cancer, les signaux enregistrés correspondant à différentes phases de traitement sont courts, de sept jours pour le segment de synchronisation jusqu’à deux ou trois jours pour le segment après traitement. Lorsqu’on étudie la stabilité de la période dominante, nous devons considérer des signaux très courts par rapport à la connaissance a priori de la période dominante, placée dans le domaine circadien. Les approches classiques fondées sur la transformée de Fourier (TF) sont inefficaces (i.e. manquent de précision) compte tenu de la particularité des données (i.e. la courte longueur). Dans cette thèse, nous proposons une nouvelle méthode pour l’estimation du vecteur de CP des signaux biomédicaux, en utilisant les informations biologiques a priori et en considérant un modèle qui représente le bruit. Les signaux enregistrés dans le cadre d’expériences développées pour le traitement du cancer ont un nombre limité de périodes. Cette information a priori peut être traduite comme la parcimonie du vecteur de CP. La méthode proposée considère l’estimation du vecteur de CP comme un problème inverse, en utilisant l’inférence bayésienne générale afin de déduire toutes les inconnues de notre modèle, à savoir le vecteur de CP mais aussi les hyperparamètres (i.e. les variances associées). / The toxicity and efficacy of more than 30 anticancer agents present very high variations, depending on the dosing time.
Therefore, the biologists studying the circadian rhythm require a very precise method for estimating the Periodic Components (PC) vector of chronobiological signals. Moreover, in recent developments, not only the dominant period or the PC vector is of crucial interest, but also their stability or variability. In cancer treatment experiments, the recorded signals corresponding to different phases of treatment are short: from seven days for the synchronization segment down to two or three days for the after-treatment segment. When studying the stability of the dominant period, we have to consider very short signals relative to the prior knowledge of the dominant period, placed in the circadian domain. The classical approaches, based on Fourier Transform (FT) methods, are inefficient (i.e. lack precision) given the particularities of the data (i.e. their short length). In this thesis, we propose a new method for estimating the PC vector of biomedical signals, using prior biological information and considering a model that accounts for the noise. The experiments developed in the cancer treatment context record signals containing a limited number of periods. This prior information can be translated as sparsity of the PC vector. The proposed method treats the PC vector estimation as an Inverse Problem (IP), using general Bayesian inference to infer all the unknowns of the model, i.e. the PC vector but also the hyperparameters.
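The sparsity prior on the PC vector can be mimicked with an ℓ1-penalized fit of a sinusoid dictionary over candidate circadian periods. The sketch below uses plain ISTA (a Lasso solver) as a rough stand-in for the thesis's full Bayesian inference with hyperparameter estimation; the period grid, λ, and iteration count are illustrative:

```python
import numpy as np

def pc_vector(signal, dt=1.0, periods=None, lam=0.1, n_iter=500):
    # Fit signal ~ A x, where A stacks cos/sin pairs at candidate periods,
    # with an l1 penalty on x (ISTA) mimicking the sparsity prior on the
    # periodic-component vector.
    signal = np.asarray(signal, dtype=float)
    if periods is None:
        periods = np.arange(8.0, 32.0, 1.0)   # candidate periods (hours)
    t = np.arange(len(signal)) * dt
    cols = []
    for p in periods:
        cols.append(np.cos(2 * np.pi * t / p))
        cols.append(np.sin(2 * np.pi * t / p))
    A = np.column_stack(cols)
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        x = x - step * (A.T @ (A @ x - signal))
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold
    amps = np.hypot(x[0::2], x[1::2])         # amplitude per candidate period
    return periods, amps
```

On a three-day hourly recording dominated by a 24 h rhythm, the largest recovered amplitude sits at the 24 h candidate even though the signal is far too short for a precise FT-based estimate.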
45

Approche unifiée multidimensionnelle du problème d'identification acoustique inverse / Unified multidimensional approach to the inverse problem for acoustic source identification

Le Magueresse, Thibaut 11 February 2016 (has links)
La caractérisation expérimentale de sources acoustiques est l'une des étapes essentielles pour la réduction des nuisances sonores produites par les machines industrielles. L'objectif de la thèse est de mettre au point une procédure complète visant à localiser et à quantifier des sources acoustiques stationnaires ou non sur un maillage surfacique par la rétro-propagation d'un champ de pression mesuré par un réseau de microphones. Ce problème inverse est délicat à résoudre puisqu'il est généralement mal conditionné et sujet à de nombreuses sources d'erreurs. Dans ce contexte, il est capital de s'appuyer sur une description réaliste du modèle de propagation acoustique direct. Dans le domaine fréquentiel, la méthode des sources équivalentes a été adaptée au problème de l'imagerie acoustique dans le but d'estimer les fonctions de transfert entre les sources et l'antenne, en prenant en compte le phénomène de diffraction des ondes autour de l'objet d'intérêt. Dans le domaine temporel, la propagation est modélisée comme un produit de convolution entre la source et une réponse impulsionnelle décrite dans le domaine temps-nombre d'onde. Le caractère sous-déterminé du problème acoustique inverse implique d'utiliser toutes les connaissances a priori disponibles sur le champ source. Il a donc semblé pertinent d'employer une approche bayésienne pour résoudre ce problème. Des informations a priori disponibles sur les sources acoustiques ont été mises en équation et il a été montré que la prise en compte de leur parcimonie spatiale ou de leur rayonnement omnidirectionnel pouvait améliorer significativement les résultats. Dans les hypothèses formulées, la solution du problème inverse s'écrit sous la forme régularisée de Tikhonov. Le paramètre de régularisation a été estimé par une approche bayésienne empirique. Sa supériorité par rapport aux méthodes communément utilisées dans la littérature a été démontrée au travers d'études numériques et expérimentales.
En présence de fortes variabilités du rapport signal à bruit au cours du temps, il a été montré qu'il est nécessaire de mettre à jour sa valeur afin d'obtenir une solution satisfaisante. Finalement, l'introduction d'une variable manquante au problème, reflétant la méconnaissance partielle du modèle de propagation, a permis, sous certaines conditions, d'améliorer l'estimation de l'amplitude complexe des sources en présence d'erreurs de modèle. Les développements proposés ont permis de caractériser, in situ, la puissance acoustique rayonnée par composant d'un groupe motopropulseur automobile par la méthode de la focalisation bayésienne dans le cadre du projet Ecobex. Le champ acoustique cyclo-stationnaire généré par un ventilateur automobile a finalement été analysé par la méthode d'holographie acoustique de champ proche temps réel. / Experimental characterization of acoustic sources is one of the essential steps for reducing the noise produced by industrial machinery. The aim of the thesis is to develop a complete procedure to localize and quantify both stationary and non-stationary sound sources radiating on a surface mesh by the back-propagation of a pressure field measured by a microphone array. The inverse problem is difficult to solve because it is generally ill-conditioned and subject to many sources of error. In this context, it is crucial to rely on a realistic description of the direct sound propagation model. In the frequency domain, the equivalent source method has been adapted to the acoustic imaging problem in order to estimate the transfer functions between the sources and the antenna, taking into account wave scattering around the object of interest. In the time domain, the propagation is modeled as a convolution product between the source and an impulse response described in the time-wavenumber domain. Since the under-determined inverse problem calls for all the available prior knowledge about the sources, it seemed appropriate to use a Bayesian approach to solve it.
A priori information available about the acoustic sources has been formulated mathematically, and it has been shown that taking into account their spatial sparsity or their omnidirectional radiation can significantly improve the results. Under the stated assumptions, the inverse problem solution takes the regularized Tikhonov form. The regularization parameter has been estimated by an empirical Bayesian approach, whose superiority over the methods commonly used in the literature has been demonstrated through numerical and experimental studies. In the presence of high variability of the signal-to-noise ratio over time, it has been shown that this parameter must be updated to obtain a satisfactory solution. Finally, the introduction of a missing variable reflecting partial ignorance of the propagation model could improve, under certain conditions, the estimation of the complex amplitude of the sources in the presence of model errors. The proposed developments have been applied to the in-situ estimation of the sound power emitted per component of an automotive power train, using the Bayesian focusing method in the framework of the Ecobex project. The cyclo-stationary acoustic field generated by an automotive fan was finally analyzed by the real-time near-field acoustic holography method.
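The regularized back-propagation described here has the standard Tikhonov closed form q = (GᴴG + λI)⁻¹Gᴴp, where G maps source amplitudes q to array pressures p. A minimal sketch: the empirical-Bayes choice of λ from the thesis is replaced by a fixed value, and G below is a made-up transfer matrix for illustration.

```python
import numpy as np

def tikhonov_backpropagate(G, p, lam):
    # Minimize ||p - G q||^2 + lam * ||q||^2 over complex source amplitudes q;
    # closed-form solution: q = (G^H G + lam I)^{-1} G^H p.
    GhG = G.conj().T @ G
    return np.linalg.solve(GhG + lam * np.eye(GhG.shape[0]), G.conj().T @ p)
```

In the empirical Bayesian view, λ is the ratio of noise variance to source variance, which is what makes updating it necessary when the signal-to-noise ratio drifts over time.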
46

Approche bayésienne pour la localisation de sources en imagerie acoustique / Bayesian approach in acoustic source localization and imaging

Chu, Ning 22 November 2013 (has links)
L’imagerie acoustique est une technique performante pour la localisation et la reconstruction de puissance des sources acoustiques en utilisant des mesures limitées au réseau des microphones. Elle est largement utilisée pour évaluer l’influence acoustique dans l’industrie automobile et aéronautique. Les méthodes d’imagerie acoustique impliquent souvent un modèle direct de propagation acoustique et l’inversion de ce modèle direct. Cependant, cette inversion conduit généralement à un problème inverse mal posé. Par conséquent, les méthodes classiques ne permettent d’obtenir de manière satisfaisante ni une haute résolution spatiale, ni une dynamique large de la puissance acoustique. Dans cette thèse, nous avons tout d’abord créé un modèle direct discret de la puissance acoustique, à la fois linéaire et déterminé pour les puissances acoustiques. Nous y ajoutons les erreurs de mesure, que nous décomposons en trois parties : le bruit de fond du réseau de capteurs, l’incertitude du modèle causée par les propagations à multi-trajets et les erreurs d’approximation de la modélisation. Pour la résolution du problème inverse, nous avons tout d’abord proposé une approche d’hyper-résolution utilisant une contrainte de parcimonie, de sorte que nous pouvons obtenir une plus haute résolution spatiale, robuste aux erreurs de mesure à condition que le paramètre de parcimonie soit estimé attentivement. Ensuite, afin d’obtenir une dynamique large et une plus forte robustesse aux bruits, nous avons proposé une approche basée sur une inférence bayésienne avec un a priori parcimonieux. Toutes les variables et tous les paramètres inconnus peuvent être estimés par l’estimation du maximum a posteriori conjoint (JMAP). Toutefois, le JMAP, souffrant d’une optimisation non quadratique et d’importants coûts de calcul, nous a conduits à chercher des solutions d’accélération algorithmique : une approximation du modèle direct en utilisant une convolution 2D avec un noyau invariant.
Grâce à ce modèle, nos approches peuvent être parallélisées sur des Graphics Processing Units (GPU). Par ailleurs, nous avons affiné notre modèle statistique sur deux aspects : la prise en compte de la non-stationnarité spatiale des erreurs de mesure et la définition d’une loi a priori pour les puissances renforçant la parcimonie, en loi de Student-t. Ces raffinements nous ont poussés à mettre en place une Approximation Variationnelle Bayésienne (VBA). Cette approche permet non seulement d’obtenir toutes les estimations des inconnues, mais aussi de fournir des intervalles de confiance grâce aux paramètres cachés utilisés par les lois de Student-t. Pour conclure, nos approches ont été comparées avec des méthodes de l’état de l’art sur des données simulées, réelles (provenant d’essais en soufflerie chez Renault S2A) et hybrides. / Acoustic imaging is an advanced technique for acoustic source localization and power reconstruction using limited measurements at a microphone sensor array. This technique can provide meaningful insights into the performance, properties and mechanisms of acoustic sources. It has been widely used for evaluating acoustic influence in the automobile and aircraft industries. Acoustic imaging methods often involve two aspects: a forward model of acoustic signal (power) propagation, and its inverse solution. However, the inversion usually leads to a very ill-posed inverse problem, whose solution is not unique and is quite sensitive to measurement errors. Therefore, classical methods cannot easily obtain high spatial resolution between two close sources, nor achieve a wide dynamic range of acoustic source powers. In this thesis, we first build up a discrete forward model of acoustic signal propagation. This signal model is a linear but under-determined system of equations linking the measured data and the unknown source signals. Based on this signal model, we set up a discrete forward model of acoustic power propagation.
This power model is both linear and determined for the source powers. In the forward models, we consider the measurement errors to be mainly composed of background noise at the sensor array, model uncertainty caused by multi-path propagation, and model approximation errors. For the inverse problem of the acoustic power model, we first propose a robust super-resolution approach with a sparsity constraint, so that we can obtain very high spatial resolution even under strong measurement errors; the sparsity parameter, however, must be carefully estimated for effective performance. Then, for acoustic imaging with a wide dynamic range and robustness, we propose a robust Bayesian inference approach with a sparsity-enforcing prior: the double exponential law. This sparse prior embodies the sparsity of the source distribution better than the sparsity constraint. All the unknown variables and parameters can be alternately estimated by Joint Maximum A Posteriori (JMAP) estimation. However, JMAP involves a non-quadratic optimization and incurs a huge computational cost, so we improve the following two aspects. In order to accelerate the JMAP estimation, we investigate an invariant 2D convolution operator to approximate the acoustic power propagation model. Owing to this invariant convolution model, our approaches can be implemented in parallel on a Graphics Processing Unit (GPU). Furthermore, we consider that measurement errors are spatially variant (non-stationary) across sensors. In this more practical case, the distribution of measurement errors can be more accurately modeled by the Student's-t law, which can express the varying variances through hidden parameters. Moreover, the sparsity-enforcing distribution can be more conveniently described by the Student's-t law, which can be decomposed into multivariate Gaussian and Gamma laws. However, the JMAP estimation then has to handle many unknown variables and hidden parameters.
Therefore, we apply the Variational Bayesian Approximation (VBA) to overcome the drawbacks of JMAP. One key advantage of VBA is that it not only achieves the parameter estimation, but also offers confidence intervals for the parameters of interest, thanks to the hidden parameters used in the Student's-t priors. To conclude, the proposed approaches are validated on simulations, on real data from wind tunnel experiments at Renault S2A, and on hybrid data. Compared with typical state-of-the-art methods, the main advantages of the proposed approaches are robustness to measurement errors, super spatial resolution, a wide dynamic range, and no need to know the source number or the Signal-to-Noise Ratio (SNR) beforehand.
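The invariant-convolution approximation mentioned above replaces the full propagation matrix by a single shift-invariant point spread function, so the forward power model becomes an FFT-computable convolution. A sketch under those assumptions: the PSF here is a stand-in, and the circular wrap-around of FFT convolution is ignored for brevity.

```python
import numpy as np

def forward_power_map(source_power, psf):
    # Beamforming power map approximated as the 2D (circular) convolution of
    # the true source power map with a shift-invariant array PSF, computed via
    # FFT -- the structure that makes GPU parallelization straightforward.
    H, W = source_power.shape
    return np.real(np.fft.ifft2(np.fft.fft2(source_power) * np.fft.fft2(psf, s=(H, W))))
```

With this operator, each JMAP gradient step costs two FFTs instead of a dense matrix-vector product, which is where the acceleration comes from.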
47

Avaliação de testes diagnósticos na ausência de padrão ouro considerando relaxamento da suposição de independência condicional, covariáveis e estratificação da população: uma abordagem Bayesiana / Evaluation of diagnostic tests in the absence of a gold standard considering relaxation of the conditional independence assumption, covariates and population stratification: a Bayesian approach

Pereira, Gilberto de Araujo 16 December 2011 (has links)
Financiadora de Estudos e Projetos / For most diseases affecting humans, applying a gold standard reference test to all or part of the sample under investigation is often not feasible, whether because of a lack of consensus on which test may be considered a gold standard, the invasiveness of the gold standard technique, the high financial cost of large-scale application, or ethical questions; knowing the performance of the existing tests is therefore essential to the diagnostic process for these diseases. In statistical modeling aimed at obtaining robust estimates of the disease prevalence (x) and of the performance parameters of diagnostic tests (sensitivity (Se) and specificity (Sp)), various strategies have been considered, such as stratification of the population, relaxation of the conditional independence assumption, inclusion of covariates, the verification type (partial or total), and techniques to replace the gold standard. In this thesis we propose a new population stratification structure in which both the prevalence rates and the test performance parameters differ among strata (EHW). A Bayesian latent class model to estimate these parameters was developed for the general case of K diagnostic tests under investigation, with relaxation of the conditional independence assumption according to fixed-effect (FECD) and random-effect (RECD) formulations of dependence order (h ≤ K), and M covariates. The application of the models to two data sets on the performance of diagnostic tests used in screening for Chagas disease in blood donors showed results consistent with the sensitivity studies.
Overall, the proposed stratification structure (EHW) showed superior performance, with estimates closer to the nominal values, when compared to the stratification structure in which only the prevalence rates differ between strata (HW), even for data sets whose rates of Se, Sp and x are close among strata. Generally, in the latent class structure, the estimates of sensitivity and specificity have higher standard errors when the disease prevalence is low or high; however, when there is high concordance of positive or negative test results, these standard errors are reduced. Regardless of the stratification structure (EHW, HW), the sample size, and the different scenarios used to model the prior information, the conditional dependence models FECD and RECD showed, according to the information criteria (AIC, BIC and DIC), performance superior to the conditional independence (CI) structure, with FECD performing best and yielding estimates closest to the nominal values. Besides the logit link, derived from the logistic distribution with its symmetric shape, we find in the GEV link, derived from the generalized extreme value distribution, which accommodates symmetric and asymmetric shapes, an interesting alternative for constructing the conditional dependence structure of the RECD. As an alternative to the identifiability problem present in this type of model, the criteria adopted to elicit informative priors, combining descriptive analysis of the data with fits of models with simpler structures, were able to produce estimates with low standard errors and very close to the nominal values.
/ Na área da saúde a aplicação de teste de referência padrão ouro na totalidade ou parte da amostra sob investigação é, muitas vezes, impraticável devido à inexistência de consenso sobre o teste a ser considerado padrão ouro, ao elevado nível de invasão da técnica, ao alto custo da aplicação em grande escala ou por questões éticas. Contudo, conhecer o desempenho dos testes é fundamental no processo de diagnóstico. Na modelagem estatística voltada à estimação da taxa de prevalência da doença (x) e dos parâmetros de desempenho de testes diagnósticos (sensibilidade (S) e especificidade (E)), a literatura tem explorado: estratificação da população, relaxamento da suposição de independência condicional, inclusão de covariáveis, tipo de verificação pelo teste padrão ouro e técnicas para substituir o teste padrão ouro inexistente ou inviável de ser aplicado em toda a amostra. Neste trabalho, propomos uma nova estrutura de estratificação da população considerando taxas de prevalência e parâmetros de desempenho diferentes entre os estratos (HWE). Apresentamos uma modelagem bayesiana de classe latente para o caso geral de K testes diagnósticos sob investigação, relaxamento da suposição de independência condicional segundo as formulações de efeito fixo (DCEF) e efeito aleatório (DCEA) com dependência de ordem (h ≤ K) e inclusão de M covariáveis. A aplicação dos modelos a dois conjuntos de dados sobre avaliação do desempenho de testes diagnósticos utilizados na triagem da doença de Chagas em doadores de sangue apresentou resultados coerentes com os estudos de sensibilidade. Observamos, para a estrutura de estratificação proposta, HWE, desempenho superior e estimativas muito próximas dos valores nominais quando comparadas à estrutura de estratificação na qual somente as taxas de prevalência são diferentes entre os estratos (HW), mesmo quando consideramos dados com taxas de S, E e x muito próximas entre os estratos.
Geralmente, na estrutura de classe latente, quando temos baixa ou alta prevalência da doença, as estimativas das sensibilidades e especificidades apresentam, respectivamente, erro padrão mais elevado. No entanto, quando há alta concordância de resultados positivos ou negativos, tal erro diminui. Independentemente da estrutura de estratificação (HWE, HW), do tamanho amostral e dos diferentes cenários utilizados para modelar o conhecimento a priori, os modelos de DCEF e de DCEA apresentaram, a partir dos critérios de informação (AIC, BIC e DIC), desempenhos superiores à estrutura de independência condicional (IC), sendo o de DCEF com melhor desempenho e estimativas mais próximas dos valores nominais. Além da ligação logito, derivada da distribuição logística com forma simétrica, encontramos na ligação VEG , derivada da distribuição de valor extremo generalizada a qual acomoda formas simétricas e assimétricas, interessante alternativa para construir a estrutura de DCEA. Como alternativa ao problema de identificabilidade, neste tipo de modelo, os critérios para elicitar as prioris informativas, combinando análise descritiva dos dados com ajuste de modelos de estruturas mais simples, contribuíram para produzir estimativas com baixo erro padrão e muito próximas dos valores nominais.
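The latent class machinery above rests on Bayes' rule. The following minimal Python sketch is not from the thesis; it illustrates only the simplest conditional independence (CI) structure for two tests, with made-up prevalence, sensitivity and specificity values:

```python
# Sketch (illustrative, not the thesis model): joint test-result
# probabilities and the posterior probability of disease for K = 2
# binary diagnostic tests under conditional independence (CI).
# All numeric values below are assumptions, not Chagas-data estimates.

def joint_prob(results, pi, se, sp):
    """P(T1=r1, ..., TK=rK): sum over the diseased and healthy branches."""
    p_pos = pi       # P(D=1) branch
    p_neg = 1 - pi   # P(D=0) branch
    for r, se_k, sp_k in zip(results, se, sp):
        p_pos *= se_k if r == 1 else 1 - se_k
        p_neg *= (1 - sp_k) if r == 1 else sp_k
    return p_pos + p_neg

def post_disease(results, pi, se, sp):
    """P(D=1 | results) by Bayes' rule."""
    p_pos = pi
    for r, se_k, sp_k in zip(results, se, sp):
        p_pos *= se_k if r == 1 else 1 - se_k
    return p_pos / joint_prob(results, pi, se, sp)

pi, se, sp = 0.10, [0.95, 0.90], [0.98, 0.97]
print(round(post_disease((1, 1), pi, se, sp), 4))  # prints 0.9937
```

With two fairly accurate tests and 10% prevalence, two positive results push the posterior above 0.99; the FECD/RECD dependence structures discussed above would modify the per-branch products rather than multiply them independently.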
48

Diversidade, estrutura e relação genética de porta-enxertos de Prunus avaliados pela análise de caracteres morfológicos e de loci SSR / Diversity, structure and genetic relationship of Prunus rootstocks evaluated by analysis of morphological characters and SSR loci

Arge, Luis Willian Pacheco 31 August 2012 (has links)
Made available in DSpace on 2014-08-20T13:59:05Z (GMT). No. of bitstreams: 1 dissertacao_luis_willian_pacheco_arge.pdf: 2893589 bytes, checksum: d2e275d77781d5e9508058a77fa20f96 (MD5) Previous issue date: 2012-08-31 / This study aimed to assess the diversity, structure and genetic relationship, through phenotypic and molecular evaluation, of a collection of 75 Prunus rootstock accessions held by EMBRAPA Clima Temperado. The phenotypic analyses covered 21 qualitative and 26 quantitative traits of different plant organs, and the molecular analyses were based on 17 SSR loci. The quantitative traits were categorized by the Scott & Knott method and submitted, together with the qualitative data, to hierarchical clustering (Jaccard coefficient) and phenotypic relationship analyses. The molecular data were first converted to different formats and subjected to various statistical analyses. The UPGMA dendrogram obtained from the genetic distance matrix calculated with the Nei & Li coefficient on the SSR data could not distinguish the accessions Tsukuba-1, Tsukuba-2 and Tsukuba-3, whereas the phenotypic data distinguished them effectively. The dendrograms from both analyses, together with the Bayesian approach, allowed the identification of three pools highly related to the different groups that make up the collection. Principal coordinates analysis based on the phenotypic data proved more effective at detecting the three pools identified by the hierarchical and Bayesian approaches; with the molecular data, principal coordinates analysis partially corroborated the results obtained from the dendrogram and the Bayesian analysis. The accessions of the South Brazilian group, collected in orchards in the Pelotas region and lacking a known pedigree, largely showed low genetic and phenotypic distances (below 0.49 in both analyses) to the accessions Aldrighi and Capdeboscq, which were traditionally used in the past as rootstocks in the state of Rio Grande do Sul. In both the phenotypic and the molecular analyses, the group of accessions of other species contributed most to the genetic and phenotypic diversity, as expected, since they are distinct species with few shared characteristics. Phenotypic and genetic characterization proved effective for elucidating the diversity, structure, and the genetic and phenotypic relationships of this Prunus rootstock collection.
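The Jaccard-based hierarchical clustering described above can be sketched in a few lines. This toy example uses made-up binary marker profiles, not the thesis data; SciPy's average linkage is exactly UPGMA:

```python
# Sketch (illustrative, not the thesis pipeline): UPGMA clustering of
# binary marker profiles with the Jaccard distance. The three toy
# "accessions" below are invented for demonstration.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import average, fcluster

profiles = np.array([
    [1, 1, 0, 1, 0, 1],   # accession A
    [1, 1, 0, 1, 1, 1],   # accession B (close to A)
    [0, 0, 1, 0, 1, 0],   # accession C (distant)
], dtype=bool)

d = pdist(profiles, metric="jaccard")   # condensed pairwise distances
tree = average(d)                       # average linkage = UPGMA
groups = fcluster(tree, t=0.5, criterion="distance")
print(groups)
```

Cutting the tree at distance 0.5 places A and B in one pool and C in another, mirroring how the collection was partitioned into pools at a chosen similarity threshold.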
49

Sur le pronostic des systèmes stochastiques

Ouladsine, Radouane 09 December 2013 (has links)
This thesis addresses the problem of system prognosis. More precisely, it is dedicated to stochastic systems, and two main contributions are proposed. The first concerns prognosis based on expert knowledge and consists in assessing the availability of the system over a mission. The mission models the user profile, that is, the environment in which the system will evolve; information about this environment is available through expert knowledge but, because of random phenomena and the complexity of the systems considered, this knowledge may be only partial. The aim is to estimate the system's damage trajectory and analyse the success of the mission, through a three-step methodology. First, the probability distribution of the environment is estimated with a probabilistic method based on the Maximum Relative Entropy (MRE) principle. Second, the damage trajectories, modelling the impact of the random environment on the system, are built by stochastic accumulation using Markov Chain Monte Carlo (MCMC) simulation. Finally, the success of the mission is predicted; to remain realistic, the models describing the damage behaviour are assumed to be stochastic. 
The second contribution concerns model-based prognosis for nonlinear stochastic state models, using Bayesian filtering. Only the structure of the degradation function is assumed to be known, and this structure depends on the dynamics of an unknown parameter; the objective is to identify this parameter in order to determine the degradation dynamics. The proposed strategy combines two filters: the first, an Ensemble Kalman Filter (EnKF), identifies the unknown parameter, and the second, using the estimated value, tracks the degradation. The Remaining Useful Life (RUL) is then estimated from the damage propagation. A series of examples is treated to illustrate the contributions.
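As an illustration of the Bayesian-filtering idea in the second contribution, the following sketch runs a basic Ensemble Kalman Filter on an augmented state to identify an unknown degradation-rate parameter. It is purely illustrative: the scalar linear damage model and all numeric values are assumptions, not the thesis's models:

```python
# Sketch (illustrative only): joint state/parameter estimation with a
# basic stochastic EnKF. A scalar damage state d grows at an unknown
# rate theta; the ensemble estimates both from noisy measurements of d.
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 200, 60, 1.0
true_theta, obs_std, proc_std = 0.05, 0.02, 0.005

# Augmented ensemble: column 0 = damage d, column 1 = rate theta
ens = np.column_stack([np.zeros(N), rng.normal(0.0, 0.05, N)])

d_true = 0.0
for _ in range(T):
    # Truth and noisy measurement
    d_true += true_theta * dt
    y = d_true + rng.normal(0.0, obs_std)

    # Forecast: propagate each member with its own theta
    ens[:, 0] += ens[:, 1] * dt + rng.normal(0.0, proc_std, N)

    # Analysis: Kalman gain from the ensemble covariance, H = [1, 0]
    cov = np.cov(ens.T)
    gain = cov[:, 0] / (cov[0, 0] + obs_std**2)          # K = P H' / (H P H' + R)
    innov = y + rng.normal(0.0, obs_std, N) - ens[:, 0]  # perturbed observations
    ens += np.outer(innov, gain)

theta_hat = ens[:, 1].mean()
print(f"estimated rate: {theta_hat:.3f} (true {true_theta})")
```

Once theta is identified, propagating the damage model forward until a failure threshold is crossed yields an RUL estimate, which is the spirit of the approach described above.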
50

Modelling flood heights of the Limpopo River at Beitbridge Border Post using extreme value distributions

Kajambeu, Robert January 2016 (has links)
MSc (Statistics) / Department of Statistics / Haulage trucks and cross-border traders from landlocked countries such as Zimbabwe and Zambia cross through Beitbridge border post for the sake of trade. Because of global warming, South Africa has lately been experiencing extreme weather patterns in the form of very high temperatures and heavy rainfall. Notably, in 2013 traffic could not cross the Limpopo River because water was flowing above the bridge. For planning, it is important to predict the likelihood of such events occurring in the future, and extreme value models offer one way in which this can be achieved. This study identifies suitable distributions to model the annual maximum heights of the Limpopo River at Beitbridge border post. The maximum likelihood method and the Bayesian approach are used for parameter estimation, and the r-largest order statistics method is also used in this dissertation. For goodness of fit, probability and quantile-quantile plots are used. Finally, return levels are calculated from the fitted distributions. The dissertation reveals that the 100-year return level is 6.759 metres using both the maximum likelihood and Bayesian approaches to estimate parameters. Empirical results show that the Fréchet class of distributions fits the flood height data at Beitbridge border post well. The dissertation contributes positively by informing stakeholders about the socio-economic impacts brought by extreme flood heights of the Limpopo River at Beitbridge border post.
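The return-level computation described above can be sketched as follows. The data are synthetic, not the Limpopo series; the T-year return level is simply the quantile of probability 1 − 1/T of the fitted GEV distribution:

```python
# Sketch (illustrative, not the dissertation's data or code): fit a GEV
# to annual maxima and read off the 100-year return level. In SciPy's
# parameterization, shape c < 0 corresponds to the Frechet domain.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# Synthetic "annual maxima" (60 years) from a Frechet-type GEV
annual_max = genextreme.rvs(c=-0.1, loc=3.0, scale=0.8, size=60,
                            random_state=rng)

c, loc, scale = genextreme.fit(annual_max)
return_100 = genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)
print(f"shape={c:.3f}, 100-year return level={return_100:.2f} m")
```

The same recipe with the observed flood heights, plus probability and quantile-quantile plots for goodness of fit, reproduces the kind of analysis the dissertation reports.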
