About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. The service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
241

Bayesian stochastic differential equation modelling with application to finance

Al-Saadony, Muhannad January 2013 (has links)
In this thesis, we consider some popular stochastic differential equation models used in finance, such as the Vasicek Interest Rate model, the Heston model and a new fractional Heston model. We discuss how to perform inference about unknown quantities associated with these models in the Bayesian framework. We describe sequential importance sampling, the particle filter and the auxiliary particle filter. We apply these inference methods to the Vasicek Interest Rate model and the standard stochastic volatility model, both to sample from the posterior distribution of the underlying processes and to update the posterior distribution of the parameters sequentially, as data arrive over time. We discuss the sensitivity of our results to prior assumptions. We then consider the use of Markov chain Monte Carlo (MCMC) methodology to sample from the posterior distribution of the underlying volatility process and of the unknown model parameters in the Heston model. The particle filter and the auxiliary particle filter are also employed to perform sequential inference. Next we extend the Heston model to the fractional Heston model, by replacing the Brownian motions that drive the underlying stochastic differential equations by fractional Brownian motions, so allowing a richer dependence structure across time. Again, we use a variety of methods to perform inference. We apply our methodology successfully to simulated and real financial data. We then discuss how to make forecasts using both the Heston and the fractional Heston model. We make comparisons between the models and show that using our new fractional Heston model can lead to improved forecasts for real financial data.
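The sequential inference described above can be sketched with a minimal bootstrap particle filter for the Vasicek model. This is an illustrative sketch, not the thesis's implementation; the parameter values and observation-noise level are assumptions chosen for the example:

```python
import math, random

random.seed(1)

# Vasicek model: dr = kappa*(theta - r) dt + sigma dW  (illustrative parameters)
kappa, theta, sigma = 0.5, 0.04, 0.01
dt, T = 0.1, 100
obs_sd = 0.005  # assumed observation-noise standard deviation

# Simulate a latent rate path and noisy observations (Euler-Maruyama).
r, truth, obs = 0.03, [], []
for _ in range(T):
    r += kappa * (theta - r) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
    truth.append(r)
    obs.append(r + random.gauss(0, obs_sd))

# Bootstrap particle filter: propagate, weight by likelihood, resample.
N = 500
particles = [0.03] * N
est = []
for y in obs:
    # propagate each particle through the transition kernel
    particles = [p + kappa * (theta - p) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
                 for p in particles]
    # weight by the Gaussian observation likelihood
    w = [math.exp(-0.5 * ((y - p) / obs_sd) ** 2) for p in particles]
    s = sum(w)
    w = [wi / s for wi in w]
    est.append(sum(wi * p for wi, p in zip(w, particles)))
    # multinomial resampling
    particles = random.choices(particles, weights=w, k=N)

rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(est, truth)) / T)
print(f"filter RMSE: {rmse:.4f}  (observation noise sd: {obs_sd})")
```

The filtered estimate averages over particles, so its error falls below the raw observation noise; the auxiliary particle filter refines the same recursion by pre-selecting promising particles before propagation.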
242

Bayesian learning methods for modelling functional MRI

Groves, Adrian R. January 2009 (has links)
Bayesian learning methods are the basis of many powerful analysis techniques in neuroimaging, permitting probabilistic inference on hierarchical, generative models of data. This thesis primarily develops Bayesian analysis techniques for magnetic resonance imaging (MRI), which is a noninvasive neuroimaging tool for probing function, perfusion, and structure in the human brain. The first part of this work fits nonlinear biophysical models to multimodal functional MRI data within a variational Bayes framework. Simultaneously-acquired multimodal data contains mixtures of different signals and therefore may have common noise sources, and a method for automatically modelling this correlation is developed. A Gaussian process prior is also used to allow spatial regularization while simultaneously applying informative priors on model parameters, restricting biophysically-interpretable parameters to reasonable values. The second part introduces a novel data fusion framework for multivariate data analysis which finds a joint decomposition of data across several modalities using a shared loading matrix. Each modality has its own generative model, including separate spatial maps, noise models and sparsity priors. This flexible approach can perform supervised learning by using target variables as a modality. By inferring the data decomposition and multivariate decoding simultaneously, the decoding targets indirectly influence the component shapes and help to preserve useful components. The same framework is used for unsupervised learning by placing independent component analysis (ICA) priors on the spatial maps. Linked ICA is a novel approach developed to jointly decompose multimodal data, and is applied to combined structural and diffusion images across groups of subjects. This allows some of the benefits of tensor ICA and spatially-concatenated ICA to be combined, and allows model comparison between different configurations. 
This joint decomposition framework is particularly flexible because of its separate generative models for each modality and could potentially improve modelling of functional MRI, magnetoencephalography, and other functional neuroimaging modalities.
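The shared-loading idea behind the joint decomposition can be illustrated with a toy rank-1 version fit by alternating least squares: two simulated "modalities" share one subject-loading vector, while each keeps its own spatial map. This is only a sketch of the data-fusion structure, not the thesis's variational or ICA machinery; all dimensions and noise levels are assumptions:

```python
import random

random.seed(2)

# Two "modalities" sharing one subject-loading vector h (rank-1 linked model):
#   Y_m = h * a_m^T + noise,  m = 1, 2
n, p1, p2 = 100, 30, 40
h_true = [random.gauss(0, 1) for _ in range(n)]
a1 = [random.gauss(0, 1) for _ in range(p1)]
a2 = [random.gauss(0, 1) for _ in range(p2)]
noise = 0.5
Y1 = [[h_true[i] * a1[j] + random.gauss(0, noise) for j in range(p1)] for i in range(n)]
Y2 = [[h_true[i] * a2[j] + random.gauss(0, noise) for j in range(p2)] for i in range(n)]

# Alternating least squares: update per-modality maps, then the shared loading.
h = [random.gauss(0, 1) for _ in range(n)]
for _ in range(50):
    hh = sum(x * x for x in h)
    a1_est = [sum(Y1[i][j] * h[i] for i in range(n)) / hh for j in range(p1)]
    a2_est = [sum(Y2[i][j] * h[i] for i in range(n)) / hh for j in range(p2)]
    denom = sum(x * x for x in a1_est) + sum(x * x for x in a2_est)
    h = [(sum(Y1[i][j] * a1_est[j] for j in range(p1)) +
          sum(Y2[i][j] * a2_est[j] for j in range(p2))) / denom for i in range(n)]

def corr(u, v):
    """Pearson correlation, used to compare h with ground truth up to sign/scale."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

print(f"|corr(h_est, h_true)| = {abs(corr(h, h_true)):.3f}")
```

Because both modalities constrain the same loading vector, the shared component is recovered more reliably than from either modality alone, which is the motivation for linking the decompositions.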
243

On auxiliary variables and many-core architectures in computational statistics

Lee, Anthony January 2011 (has links)
Emerging many-core computer architectures provide an incentive for computational methods to exhibit specific types of parallelism. Our ability to perform inference in Bayesian statistics is often dependent upon our ability to approximate expectations of functions of random variables, for which Monte Carlo methodology provides a general purpose solution using a computer. This thesis is primarily concerned with exploring the gains that can be obtained by using many-core architectures to accelerate existing population-based Monte Carlo algorithms, as well as providing a novel general framework that can be used to devise new population-based methods. Monte Carlo algorithms are often concerned with sampling random variables taking values in X whose density is known up to a normalizing constant. Population-based methods typically make use of collections of interacting auxiliary random variables, each of which is in X, in specifying an algorithm. Such methods are good candidates for parallel implementation when the collection of samples can be generated in parallel and their interaction steps are either parallelizable or negligible in cost. The first contribution of this thesis is in demonstrating the potential speedups that can be obtained for two common population-based methods, population-based Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC). The second contribution of this thesis is in the derivation of a hierarchical family of sparsity-inducing priors in regression and classification settings. Here, auxiliary variables make possible the implementation of a fast algorithm for finding local modes of the posterior density. SMC, accelerated on a many-core architecture, is then used to perform inference for a range of prior specifications to gain an understanding of sparse association signal in the context of genome-wide association studies. 
The third contribution is in the use of a new perspective on reversible MCMC kernels that allows for the construction of novel population-based methods. These methods differ from most existing methods in that one can make the resulting kernels define a Markov chain on X. A further development is that one can define kernels in which the number of auxiliary variables is given a distribution conditional on the values of the auxiliary variables obtained so far. This is perhaps the most important methodological contribution of the thesis, and the adaptation of the number of particles used within a particle MCMC algorithm provides a general purpose algorithm for sampling from a variety of complex distributions.
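As a concrete, if simplified, instance of a population-based MCMC method with parallelizable local updates and a cheap interaction step, the sketch below runs parallel tempering on a bimodal target. The target density and temperature ladder are illustrative assumptions, not examples from the thesis:

```python
import math, random

random.seed(7)

# Target: bimodal density pi(x) proportional to exp(-(x^2 - 4)^2), modes near +-2.
def log_pi(x):
    return -(x * x - 4.0) ** 2

# Parallel tempering: a population of chains at temperatures 1 = t0 < t1 < ...
# The per-chain updates are independent (the parallelizable part); the only
# interaction is the cheap swap step between neighbouring temperatures.
temps = [1.0, 3.0, 9.0, 27.0]
chains = [random.uniform(-3, 3) for _ in temps]
samples = []
for _ in range(20000):
    # local random-walk Metropolis update for each chain (parallelizable)
    for i, t in enumerate(temps):
        prop = chains[i] + random.gauss(0, 0.5) * math.sqrt(t)
        if math.log(random.random()) < (log_pi(prop) - log_pi(chains[i])) / t:
            chains[i] = prop
    # swap move between a random pair of adjacent temperatures
    i = random.randrange(len(temps) - 1)
    a = (1 / temps[i] - 1 / temps[i + 1]) * (log_pi(chains[i + 1]) - log_pi(chains[i]))
    if math.log(random.random()) < a:
        chains[i], chains[i + 1] = chains[i + 1], chains[i]
    samples.append(chains[0])  # the temperature-1 chain targets pi itself

frac_left = sum(1 for x in samples[5000:] if x < 0) / len(samples[5000:])
print(f"mass in left mode: {frac_left:.2f}")  # symmetric target, so about 0.5
```

The hot chains cross the energy barrier freely and feed mode switches down to the cold chain through swaps; a single chain at temperature 1 would remain stuck in one mode for long stretches.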
244

Modeling the unobserved genetic susceptibility of an individual from his family history of cancer: applications to genome-wide identification studies and to cancer risk estimation in Lynch syndrome

Drouet, Youenn 09 October 2012 (has links)
Lynch syndrome is responsible for about 5% of colorectal cancer (CRC) cases. It corresponds to the transmission of a mutation, a rare genetic variant, that confers a high risk of CRC. Such a mutation is identified, however, in only one family out of two. In families without an identified mutation, called negative families, the risk of CRC is largely unknown; in particular, there is a lack of individualized risk estimates. This thesis has two main objectives. Obj. 1: to explore strategies that could reduce the required sample sizes of studies aiming to identify new susceptibility genes; and Obj. 2: to define a theoretical framework for estimating individualized CRC risk in negative families, using the individual's personal and family history of CRC. Our work is based on the theory of Mendelian models and the simulation of family data, from which it is possible to study the power of identification studies and to assess and compare in silico the predictive ability of risk estimation methods. The results provide new knowledge for designing future studies, and the methodological framework we propose allows a more precise estimate of individual risk, which might lead to more individualized cancer surveillance.
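The Mendelian-model idea of inferring an individual's unobserved carrier status from phenotype can be illustrated with a single Bayes-rule step. The penetrance figures below are illustrative assumptions for the example, not estimates from the thesis:

```python
# Posterior probability that an untested relative carries a high-risk
# mutation, given whether they developed CRC by a given age.
# Penetrance figures are illustrative assumptions, not Lynch-syndrome data.

prior = 0.5                  # e.g. child of a known carrier (Mendelian transmission)
p_cancer_carrier = 0.40      # assumed cumulative CRC risk for carriers
p_cancer_noncarrier = 0.02   # assumed cumulative risk for non-carriers

def posterior_carrier(affected: bool) -> float:
    """Bayes' rule: P(carrier | phenotype)."""
    like_c = p_cancer_carrier if affected else 1 - p_cancer_carrier
    like_n = p_cancer_noncarrier if affected else 1 - p_cancer_noncarrier
    num = prior * like_c
    return num / (num + (1 - prior) * like_n)

print(f"P(carrier | affected)   = {posterior_carrier(True):.3f}")   # -> 0.952
print(f"P(carrier | unaffected) = {posterior_carrier(False):.3f}")  # -> 0.380
```

A full Mendelian model chains such updates over an entire pedigree (e.g. with the Elston-Stewart peeling algorithm), but each step is this same likelihood-weighted update.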
245

Assessing the use of a semisubmersible oil platform as a motion-based sea wave sensor

Soler, Jordi Mas 11 December 2018 (has links)
This thesis assesses the use of the measured motions of a semisubmersible oil platform as a basis for estimating on-site wave spectra. The inference method is based on the wave buoy analogy, which aims at solving the linear inverse problem: estimate the sea state, given the measured motions and the transfer function of the platform. Directional wave inference from records of vessel motions is a technique whose application has grown significantly over the last years. Indeed, its applications to ships with forward speed and to ship-shaped moored platforms (such as FPSOs) have provided good results. However, little research has been done on the use of semisubmersible platforms as wave sensors, because these platforms are designed to present no significant responses when excited by waves. Notwithstanding this, semisubmersible platforms are characterized by measurable small motions. Moreover, compared with ship-shaped motion-based wave sensors, the responses of semisubmersibles agree better with the response characteristics estimated by linear hydrodynamic models. In addition, the eminently linear character of the responses often persists even in severe wave conditions. As a result, semisubmersible platforms stand as promising wave sensors even for extreme sea states, conditions in which other types of sensors (e.g. buoys, radars) may face difficulties. Throughout the text, the main results of this work are presented and discussed. These results are mainly based on a dedicated experimental campaign, carried out with a scaled model of the Åsgard B platform, a semisubmersible located in the Åsgard field offshore Norway. The sea states tested during the experimental campaign were estimated by means of a motion-based Bayesian inference method, which has been under development for more than ten years at EPUSP. To allow the adoption of semisubmersible platforms as motion-based wave sensors, this thesis provides two significant improvements to the method. First, a method to estimate the linearized equivalent external viscous damping is provided. This analytical methodology reduces the uncertainty of the platform's transfer function close to the motion resonances and, as a consequence, increases the accuracy of the inference approach. The second relevant contribution is the development of an alternative prior distribution, adopted to introduce prior beliefs about the sea state into the Bayesian inference approach. It is shown that, although some aspects of this novel approach require further evaluation in future work, the prior distribution developed has the potential to improve the accuracy of wave estimates while significantly simplifying the calibration procedures followed by other state-of-the-art Bayesian wave inference methods. Summing up, the inference approach proposed in this work provides the basis for using semisubmersible oil platforms, the most common type of oil platform operated offshore Brazil, as motion-based wave sensors, thus contributing to a possible broadening of the Brazilian oceanographic measurement network.
246

Integration of beliefs and affective values in human decision-making

Rouault, Marion 22 September 2015 (has links)
Executive control relates to the human ability to monitor and flexibly adapt behavior in relation to internal mental states. Specifically, executive control relies on evaluating action outcomes to adjust subsequent action. Actions can be reinforced or devalued given the affective value of their outcomes, notably involving the basal ganglia and medial prefrontal cortex. Additionally, outcomes convey information used to adapt behavior in relation to internal beliefs, involving the prefrontal cortex. Accordingly, action outcomes convey two major types of value signals: (1) affective values, representing the valuation of action outcomes given subjective preferences and stemming from reinforcement learning; (2) belief values about how actions map onto outcome contingencies, relating to Bayesian inference. However, how these two signals contribute to decisions remains unclear, and previous experimental paradigms confounded them. In this PhD thesis, we investigated whether their dissociation is behaviorally and neurally relevant. We present several behavioral experiments dissociating these two signals, in the form of probabilistic reversal-learning tasks involving stochastic and changing reward structures. We built a model establishing the functional and computational foundations of this dissociation. It combines two parallel systems: reinforcement learning, modulating affective values, and Bayesian inference, monitoring beliefs. The model accounted for behavior better than many alternative models. We then investigated whether beliefs and affective values have distinct neural bases using fMRI. The BOLD signal was regressed against choice-dependent and choice-independent beliefs and affective values. Ventromedial prefrontal cortex (VMPFC) and midcingulate cortex (MCC) activity correlated with both choice-dependent variables. However, we found a double dissociation for the choice-independent variables, with the VMPFC encoding choice-independent beliefs and the MCC encoding choice-independent affective values. Additionally, activity in the lateral prefrontal cortex (LPFC) increased when the decision values (i.e. mixtures of beliefs and affective values) got closer to each other and action selection became more difficult. These results suggest that, before a decision, the VMPFC and MCC separately encode beliefs and affective values respectively. The LPFC combines both signals to decide, then feeds back choice information to these medial regions, presumably to update the value signals according to action outcomes. These results provide new insight into the neural mechanisms of decision-making in the prefrontal cortex.
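The two parallel learning systems described here can be sketched side by side on a toy reversal: a delta-rule learner tracking affective value, and a Bayesian learner tracking the belief that an option is currently "good". This is an illustrative sketch, not the thesis's fitted model; the learning rate, reversal hazard, and reward contingencies are assumed values:

```python
# A fixed outcome stream for one option: rewarded for 20 trials, then
# unrewarded for 20 (a reversal). Two learners track it in parallel.
outcomes = [1] * 20 + [0] * 20

# Reinforcement learning: delta-rule affective value.
alpha = 0.2                    # assumed learning rate
# Bayesian inference: belief that the option is currently "good",
# with a reversal hazard and assumed reward probabilities.
hazard = 0.05                  # assumed per-trial reversal probability
p_r_good, p_r_bad = 0.8, 0.2   # assumed reward contingencies

Q, b = 0.5, 0.5
Q_hist, b_hist = [], []
for r in outcomes:
    # affective value: incremental update toward the outcome
    Q += alpha * (r - Q)
    # belief: hazard mixing, then Bayes' rule on the observed outcome
    b_mix = b * (1 - hazard) + (1 - b) * hazard
    like_good = p_r_good if r else 1 - p_r_good
    like_bad = p_r_bad if r else 1 - p_r_bad
    b = b_mix * like_good / (b_mix * like_good + (1 - b_mix) * like_bad)
    Q_hist.append(Q)
    b_hist.append(b)

print(f"before reversal: Q={Q_hist[19]:.2f}, belief={b_hist[19]:.2f}")
print(f"after  reversal: Q={Q_hist[-1]:.2f}, belief={b_hist[-1]:.2f}")
```

Both quantities flip after the reversal, but the belief incorporates the contingency structure (hazard and reward probabilities) explicitly, whereas the affective value only drifts with the outcomes; dissociating the two behaviorally requires tasks where they make different predictions.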
247

High dimensional Markov chain Monte Carlo methods: theory, methods and applications

Durmus, Alain 02 December 2016 (has links)
The subject of this thesis is the analysis of Markov chain Monte Carlo (MCMC) methods and the development of new methodologies to sample from high dimensional distributions. Our work is divided into three main topics. The first problem addressed in this manuscript is the convergence of Markov chains in Wasserstein distance. Geometric and sub-geometric convergence bounds, with explicit constants, are derived under appropriate conditions. These results are then applied to the study of MCMC algorithms. The first algorithm analyzed is an alternative scheme to the Metropolis Adjusted Langevin Algorithm (MALA), for which explicit geometric convergence bounds are established. The second method is the pre-conditioned Crank-Nicolson algorithm. It is shown that, under mild assumptions, the Markov chain associated with this algorithm is sub-geometrically ergodic in an appropriate Wasserstein distance. The second topic of this thesis is the study of the Unadjusted Langevin Algorithm (ULA). We are first interested in explicit convergence bounds in total variation under different kinds of assumptions on the potential associated with the target distribution. In particular, we pay attention to the dependence of the algorithm on the dimension of the state space. The case of fixed step sizes as well as the case of nonincreasing sequences of step sizes are dealt with. When the target density is strongly log-concave, explicit bounds in Wasserstein distance are established. These results are then used to derive new bounds in total variation distance which improve those previously derived under weaker conditions on the target density. The last part tackles new optimal scaling results for Metropolis-Hastings type algorithms. First, we extend the pioneering result on the optimal scaling of the random walk Metropolis algorithm to target densities which are differentiable in Lp mean for p ≥ 2. Then, we derive new Metropolis-Hastings type algorithms which have a better optimal scaling than MALA. Finally, the stability and the convergence in total variation of these new algorithms are studied.
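The discretization bias that distinguishes ULA from Metropolis-adjusted schemes can be seen on a one-dimensional Gaussian target, where the biased invariant variance is known in closed form. The step size below is an illustrative choice:

```python
import math, random

random.seed(3)

# Target: standard Gaussian, U(x) = x^2/2, so grad U(x) = x.
def grad_U(x):
    return x

# Unadjusted Langevin Algorithm: Euler discretization of the Langevin SDE,
# x_{k+1} = x_k - gamma * grad U(x_k) + sqrt(2 * gamma) * xi_k,
# with NO Metropolis correction, hence a step-size-dependent bias.
gamma = 0.1
x, samples = 0.0, []
for _ in range(200000):
    x = x - gamma * grad_U(x) + math.sqrt(2 * gamma) * random.gauss(0, 1)
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# For this target the ULA chain is AR(1); its stationary variance is
# 1 / (1 - gamma/2) ~ 1.053 rather than 1: the discretization bias
# that the total-variation bounds above quantify.
print(f"empirical variance: {var:.3f}  (biased target: {1 / (1 - gamma / 2):.3f})")
```

Shrinking gamma drives the invariant law toward the true target at the cost of slower mixing, which is exactly the trade-off behind the fixed versus decreasing step-size analyses.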
248

A Bayesian Approach for Inverse Problems in Synthetic Aperture Radar Imaging

Zhu, Sha 23 October 2012 (has links)
Synthetic Aperture Radar (SAR) imaging is a well-known technique in the domains of remote sensing, aerospace surveillance, geography and mapping. Obtaining high resolution images in the presence of noise, while accounting for the characteristics of the targets in the observed scene, the various measurement uncertainties and the modeling errors, has become a very important research direction. Conventional imaging methods are based on (i) over-simplified scene models, (ii) a simplified linear forward model (the mathematical relations between the transmitted signals, the received signals and the targets) and (iii) a very simplified inversion such as the Inverse Fast Fourier Transform (IFFT), resulting in low resolution, noisy images with unsuppressed speckle and high sidelobe artifacts. In this thesis, we propose a Bayesian approach to SAR imaging which overcomes many drawbacks of the classical methods and yields higher resolution, more stable images and more accurate parameter estimation for target recognition. The proposed unifying approach is applied to inverse problems in mono-, bi- and multi-static SAR imaging, as well as to the imaging of targets with micromotion. Appropriate priors for modelling different target scenes, enhancing target features during imaging, are proposed. Fast and effective estimation methods with simple and hierarchical priors are developed. The problem of hyperparameter estimation is also handled within this Bayesian framework. Results on synthetic, experimental and real data demonstrate the effectiveness of the proposed approach.
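The Bayesian inversion idea can be sketched on a toy 1-D deconvolution problem standing in for the SAR forward model: a MAP estimate under a Gaussian prior regularizes the ill-conditioned inversion that a naive inverse-filter would amplify. The blur kernel, noise level, and prior weight below are all illustrative assumptions:

```python
import random

random.seed(5)

n = 12
# Forward model: a simple 3-tap blur, a stand-in for the SAR forward operator.
H = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        if i == j: H[i][j] = 0.5
        elif abs(i - j) == 1: H[i][j] = 0.25

# Sparse "scene" with two point targets.
x_true = [0.0] * n
x_true[3], x_true[8] = 1.0, 0.7

noise_sd = 0.02
y = [sum(H[i][j] * x_true[j] for j in range(n)) + random.gauss(0, noise_sd)
     for i in range(n)]

def solve(A, b):
    """Gaussian elimination with partial pivoting (fine for small dense systems)."""
    m = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, m):
            f = M[r][c] / M[c][c]
            for k in range(c, m + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (M[r][m] - sum(M[r][k] * x[k] for k in range(r + 1, m))) / M[r][r]
    return x

# MAP estimate under a zero-mean Gaussian prior on x:
# argmin ||y - Hx||^2 + mu ||x||^2   =>   (H^T H + mu I) x = H^T y.
mu = 0.005  # illustrative prior weight
HtH = [[sum(H[k][i] * H[k][j] for k in range(n)) + (mu if i == j else 0.0)
        for j in range(n)] for i in range(n)]
Hty = [sum(H[k][i] * y[k] for k in range(n)) for i in range(n)]
x_map = solve(HtH, Hty)

err = max(abs(a - b) for a, b in zip(x_map, x_true))
print(f"max reconstruction error: {err:.3f}")
```

Hierarchical versions of the same idea replace the fixed Gaussian prior with sparsity-enforcing priors and infer the prior weight (the hyperparameter) jointly with the scene, rather than fixing mu by hand as here.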
249

Incorporation of genetic markers information in beef cattle breeding programs

Rezende, Fernanda Marcondes de 02 May 2012 (has links)
The availability of molecular-marker information created an opportunity to improve animal breeding programs by incorporating marker effects into genetic evaluations. In that context, the aims of this research were to compare genetic evaluation models that did or did not include marker effects in the estimation of breeding values, to estimate the allelic substitution effects of SNP markers using six different methodologies (Bayesian multiple regression, Bayesian ridge regression, Bayes A, Bayes B, Bayes Cπ and Bayesian LASSO), and to evaluate both the impact of these effects on the accuracy of the estimated breeding values and the selection conflicts that arise when animals are ranked by classical versus marker-assisted breeding values. Data from 83,404 animals belonging to a Nellore beef cattle (Bos indicus) selection program, measured for weaning weight, post-weaning weight gain, scrotal circumference and muscle score, and corresponding to 116,652 animals in the relationship matrix, were used. Of the animals with phenotypic and pedigree information available, 3,160 were genotyped for a panel of 106 SNP markers. 
Model comparison showed no clear advantage to fitting polygenic and marker effects jointly in the genetic evaluation models; however, models that included only marker effects had the worst fit and predictive ability. The differences observed among the marker-effect estimates produced by the six methodologies reflect the way each method shrinks those effects. Incorporating marker information into the genetic evaluations generally increased the accuracy of the estimated breeding values, especially for replacement young bulls. When the top 20% of animals ranked by classical breeding value were compared with the top 20% ranked by marker-assisted breeding value, the largest selection conflicts were observed for genotyped sires and young bulls. In summary, this research showed that although a very-low-density SNP panel did not change the predictive ability of the genetic evaluation models, it did affect the accuracy of the breeding value estimates.
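The abstract above contrasts six shrinkage methods for estimating SNP marker effects. As a minimal illustrative sketch (not the thesis code, and on simulated data rather than the Nellore records), the Bayesian ridge regression case is the simplest: with known variance components, the posterior mean of the marker effects under a common Gaussian prior is the ridge solution with penalty equal to the ratio of residual to marker variance. All variable names and numeric settings below are our assumptions.

```python
# Bayesian ridge regression sketch for SNP marker effects on simulated data.
# Posterior mean under beta_j ~ N(0, sigma2_marker), e ~ N(0, sigma2_e):
#   beta_hat = (X'X + lambda*I)^(-1) X'y,  lambda = sigma2_e / sigma2_marker.
import numpy as np

rng = np.random.default_rng(42)

n_animals, n_snps = 500, 106          # 106 SNPs, matching the panel size described
X = rng.binomial(2, 0.3, size=(n_animals, n_snps)).astype(float)
X -= X.mean(axis=0)                   # centre the 0/1/2 genotype codes

true_effects = rng.normal(0.0, 0.5, size=n_snps)
sigma2_e = 4.0
y = X @ true_effects + rng.normal(0.0, np.sqrt(sigma2_e), size=n_animals)

sigma2_marker = 0.25                  # assumed prior variance of each marker effect
lam = sigma2_e / sigma2_marker        # ridge penalty implied by the Gaussian prior

# Posterior mean of the marker effects (ridge solution)
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_snps), X.T @ y)

r = np.corrcoef(true_effects, beta_hat)[0, 1]
print(f"correlation(true, estimated) = {r:.2f}")
```

Bayes A, Bayes B, Bayes Cπ and the Bayesian LASSO differ from this sketch only in the prior placed on the marker effects (heavier-tailed or sparsity-inducing), which is exactly the "shrinkage process applied by each method" that the abstract credits for the differences among estimates.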
250

Adjustment of ruminal degradability models using the gas-production technique with classical and Bayesian methodologies

Souza, Gabriel Batalini de 15 March 2013 (has links)
Given the country's agricultural importance and the central role of pasture in animal nutrition, studying the mechanisms of ruminal digestion of forages is essential for a more rational use of pastures by the animals, providing optimal rumen fermentation and allowing better-balanced feed rations. This can be done through ruminal degradation models, which are nonlinear regression models. This work applies classical and Bayesian methodologies to fit models that describe the kinetics of ruminal degradation measured by the gas-production technique. In the classical approach, the non-sigmoidal model proposed by Orskov & McDonald (1979), the logistic model proposed by Schofield (1994) and the Gompertz model proposed by Lavrencic (1997) were considered, with the need for first- and second-order autoregressive error terms assessed by the likelihood ratio test (LRT); the models were compared using Akaike's information criterion (AIC), the adjusted coefficient of determination (R²adj) and the residual mean square (RMS). In a second stage, the non-sigmoidal model without an autoregressive term was fitted using the Bayesian approach, with chain convergence assessed by the criteria of Geweke (1992), Heidelberger & Welch (1993), Raftery & Lewis (1992) and the Monte Carlo error (MCE). Among the models considered, the one that best fitted the data was the non-sigmoidal model of Orskov & McDonald (1979) without the autoregressive term, yielding estimates consistent with the biology of the phenomenon. The results obtained with the Bayesian approach were also satisfactory, showing that, although still little used in ruminal degradation studies, the technique is viable and has much to contribute to the field.
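The non-sigmoidal model of Orskov & McDonald (1979) referenced above has the well-known form V(t) = a + b(1 − e^(−ct)), where a + b is the potential gas production and c the fractional degradation rate. A minimal sketch of the classical (least-squares) fit, on simulated incubation data rather than the thesis data, with all numeric settings assumed:

```python
# Least-squares fit of the Orskov & McDonald (1979) non-sigmoidal model
#   V(t) = a + b * (1 - exp(-c * t))
# to simulated cumulative gas-production data.
import numpy as np
from scipy.optimize import curve_fit

def orskov_mcdonald(t, a, b, c):
    """Cumulative gas production at incubation time t (hours)."""
    return a + b * (1.0 - np.exp(-c * t))

rng = np.random.default_rng(7)
t = np.array([0, 2, 4, 8, 12, 24, 48, 72, 96], dtype=float)
true = dict(a=2.0, b=60.0, c=0.05)                    # assumed "true" values
gas = orskov_mcdonald(t, **true) + rng.normal(0.0, 1.0, size=t.size)

popt, pcov = curve_fit(orskov_mcdonald, t, gas, p0=[1.0, 50.0, 0.1])
a_hat, b_hat, c_hat = popt
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}, c = {c_hat:.3f}")
```

The Bayesian alternative described in the abstract places priors on (a, b, c) and samples the posterior by MCMC, which is where the Geweke, Heidelberger & Welch and Raftery & Lewis convergence diagnostics come in.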
