11. Bayesian Model Uncertainty and Prior Choice with Applications to Genetic Association Studies. Wilson, Melanie Ann. January 2010.
The Bayesian approach to model selection allows for uncertainty in both model-specific parameters and in the models themselves. Much of the recent Bayesian model uncertainty literature has focused on defining these prior distributions in an objective manner and on providing conditions under which Bayes factors lead to the correct model selection, particularly in the situation where the number of variables, p, increases with the sample size, n. This is certainly the case in our area of motivation: the biological application of genetic association studies involving single nucleotide polymorphisms (SNPs). While the most common approach to this problem has been to apply a marginal test to all genetic markers, we employ analytical strategies that improve upon these marginal methods by modeling the outcome variable as a function of a multivariate genetic profile using Bayesian variable selection. In doing so, we perform variable selection on a large number of correlated covariates within studies involving modest sample sizes.
In particular, we present an efficient Bayesian model search strategy that searches over the space of genetic markers and their genetic parametrization. The resulting method, Multilevel Inference of SNP Associations (MISA), allows computation of multilevel posterior probabilities and Bayes factors at the global, gene, and SNP levels. We use simulated data sets to characterize MISA's statistical power and show that MISA has higher power to detect association than standard procedures. Using data from the North Carolina Ovarian Cancer Study (NCOCS), MISA identifies variants that were not identified by standard methods and that have been externally 'validated' in independent studies.
In the context of Bayesian model uncertainty for problems involving a large number of correlated covariates, we characterize commonly used prior distributions on the model space and investigate their implicit multiplicity-correction properties, first in the extreme case where the model includes an increasing number of redundant covariates and then in the case of full-rank design matrices. We provide conditions on the asymptotic (in n and p) behavior of the model space prior required to achieve consistent selection of the global hypothesis of at least one associated variable using global posterior probabilities (i.e., under 0-1 loss). In particular, under the assumption that the null model is true, we show that the commonly used uniform prior on the model space leads to inconsistent selection of the global hypothesis via global posterior probabilities (the posterior probability of at least one association goes to 1) when the rank of the design matrix is finite. In the full-rank case, we also show inconsistency when p goes to infinity faster than the square root of n. Alternatively, we show that any model space prior such that the global prior odds of association increase at a rate slower than the square root of n results in consistent selection of the global hypothesis in terms of posterior probabilities. / Dissertation
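The consistency condition above hinges on how fast the global prior odds of association are allowed to grow. A minimal arithmetic sketch (not code from the dissertation; the priors and sizes are illustrative) of how two common model-space priors scale those odds with the number of candidate markers p:

```python
# Illustrative only: global prior odds of "at least one covariate included"
# under two model-space priors, as the number of candidate variables p grows.
def global_prior_odds_uniform(p):
    # Uniform prior over all 2**p models: the null model keeps mass 2**-p,
    # so the odds of "at least one association" are 2**p - 1.
    return float(2 ** p - 1)

def global_prior_odds_betabinomial(p):
    # Beta-Binomial(1, 1) prior on model size: each size k gets mass 1/(p + 1),
    # so the null model (k = 0) keeps mass 1/(p + 1) and the odds equal p.
    return float(p)

for p in (10, 100, 1000):
    print(p, global_prior_odds_uniform(p), global_prior_odds_betabinomial(p))
```

Whether either prior satisfies the stated condition depends on how p grows relative to the square root of n; the sketch only shows how differently the two families scale the global prior odds.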
12. Bayesian Model Discrimination and Bayes Factors for Normal Linear State Space Models. Frühwirth-Schnatter, Sylvia. January 1993.
It is suggested to discriminate between different state space models for a given time series by means of a Bayesian approach that chooses the model minimizing the expected loss. Practical implementation of this procedure requires a fully Bayesian analysis of both the state vector and the unknown hyperparameters, which is carried out by Markov chain Monte Carlo methods. Application to some non-standard situations, such as testing hypotheses on the boundary of the parameter space, discriminating between non-nested models, and discriminating among more than two models, is discussed in detail. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
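For a concrete sense of the decision rule, here is a minimal sketch (with made-up numbers, not the paper's computations) of the final step once marginal likelihoods of the competing state space models have been obtained from the MCMC analysis: under 0-1 loss, minimizing expected loss amounts to picking the model with the highest posterior probability.

```python
# Minimal sketch of Bayesian model discrimination given (hypothetical) log marginal
# likelihoods for three candidate state space models.
import numpy as np

log_marginal_lik = np.array([-812.4, -809.7, -815.2])  # hypothetical values
log_prior = np.log(np.array([1 / 3, 1 / 3, 1 / 3]))    # equal prior model probabilities

log_post = log_marginal_lik + log_prior
log_post -= np.max(log_post)                            # stabilize before exponentiating
post_prob = np.exp(log_post) / np.exp(log_post).sum()   # posterior model probabilities

bayes_factor_21 = np.exp(log_marginal_lik[1] - log_marginal_lik[0])  # model 2 vs model 1
best_model = int(np.argmax(post_prob))                  # minimizes expected 0-1 loss
print(post_prob.round(3), round(float(bayes_factor_21), 2), best_model)
```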
13. Cost-Benefit Analysis of Implementing the Black-Litterman Model for Asset Allocation in Investment Portfolios (Análisis Costo Beneficio de la Implementación del Modelo de Black-Litterman para Asignación de Activos en Portafolios de Inversión). Gálvez Pinto, Rocío Magdalena. January 2008.
This undergraduate thesis presents the results of a comparison of two asset-allocation models for investment portfolios. The first corresponds to the traditional way of structuring portfolios, the Markowitz model, which is contrasted with the Black-Litterman model; the latter proposes incorporating the investor's views when estimating the expected returns of the assets involved.

This work responds to the need to allocate assets in portfolios so as to obtain maximum return for the chosen level of risk. The Markowitz model has drawbacks such as highly concentrated portfolios, failure to capture the investor's point of view, and little objectivity when estimating returns for the assets involved. As a solution, the Black-Litterman model is proposed; it shifts the efficient frontier by performing a new risk-return optimization, yielding portfolios that are less risky and consistent with the investor's prior intuition.

The general objective of this work is to analyze the cost and benefit of implementing the Black-Litterman model in investment portfolios. To this end, specific objectives are set out: comparing efficient investment portfolios and determining the cost of implementing these models, determining the confidence level of some issuers of stock recommendations, and analyzing the applicability of these models in real cases.

The models are compared intertemporally; that is, efficient investment frontiers are obtained at given points in time and compared to see which offers better investment opportunities. From each frontier, the minimum-variance portfolio and the portfolios at 4% and 5% risk are extracted, and their realized returns are checked against their expected returns, both period by period and over the cumulative series for the entire study period. To build the efficient frontiers, weekly closing prices are collected for the shares of the 19 companies that made up the national IPSA index during 2007 with 100% market presence. From these data, risk levels (betas) are computed to obtain expected returns on equity (CAPM), covariances between companies, and the other inputs required by the optimization procedures.

The main results show that the cumulative return over the period is higher under the Black-Litterman model for all risk levels studied. In addition, return trends are increasing in both cases, but the slopes increase with risk under Black-Litterman and decrease under Markowitz. Therefore, the more risk-loving the investor, the more advantageous the implementation of the Black-Litterman model. For the minimum-risk portfolios, the results are similar, so the cost of implementing a model such as this one does not appear to be justified.
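As a rough sketch of the mechanics being compared (not code from the thesis; the covariance matrix, market weights, view, and parameter values below are invented), the Black-Litterman step blends equilibrium-implied returns with the investor's views before the usual mean-variance optimization:

```python
# Toy Black-Litterman update for three assets with one relative view.
import numpy as np

Sigma = np.array([[0.040, 0.012, 0.010],
                  [0.012, 0.030, 0.008],
                  [0.010, 0.008, 0.020]])      # asset return covariance (assumed)
w_mkt = np.array([0.5, 0.3, 0.2])              # market-cap weights (assumed)
delta, tau = 2.5, 0.05                         # risk aversion, prior scaling (assumed)

Pi = delta * Sigma @ w_mkt                     # equilibrium-implied excess returns

P = np.array([[1.0, -1.0, 0.0]])               # one view: asset 1 outperforms asset 2 ...
Q = np.array([0.02])                           # ... by 2% per year
Omega = P @ (tau * Sigma) @ P.T                # view uncertainty (a common default)

A = np.linalg.inv(tau * Sigma) + P.T @ np.linalg.inv(Omega) @ P
b = np.linalg.inv(tau * Sigma) @ Pi + P.T @ np.linalg.inv(Omega) @ Q
mu_bl = np.linalg.solve(A, b)                  # Black-Litterman posterior mean returns

w_markowitz = np.linalg.solve(delta * Sigma, Pi)   # unconstrained mean-variance weights
w_bl = np.linalg.solve(delta * Sigma, mu_bl)       # same step with blended returns
print(mu_bl.round(4), w_markowitz.round(3), w_bl.round(3))
```

With the view uncertainty Omega tied to tau*Sigma, as is commonly done, the blended weights tilt toward the view while tending to avoid the extreme corner allocations that plain Markowitz optimization often produces from raw return estimates.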
14. Investigation of tomographic reconstruction using distributed ultrasound transducers (Investigação da reconstrução tomográfica utilizando transdutores distribuídos de ultrassom). Diego Armando Cardona Cardenas. 17 January 2018.
Ultrasonography is a tool that has been widely used by medical teams to diagnose and monitor different kinds of diseases, which can be explained by characteristics such as being non-invasive and free of ionizing radiation. Ultrasound computed tomography (USCT), one class of ultrasonography, is presented as a promising, low-cost alternative for the evaluation of pathologies and tumors in the breast. However, the efficiency of USCT algorithms depends both on their initial parameters and on the characteristics of the objects within the propagation medium (reflectivity, size, contrast). To improve the results of USCT algorithms, it is common to initialize them with a priori anatomical information about the region to be reconstructed (priors). Although priors improve the results for low contrasts, their effects on these algorithms are not clear, and there are no studies on the generation and use of priors for high contrasts. In this work, quantitative ultrasound tomographic reconstruction was investigated using information from the reflection, transmission and scattering of ultrasound waves, in order to reduce the error of USCT algorithms and to generate better priors for multiple contrasts. For this purpose, the following were studied through simulations: techniques that use reflection to differentiate regions (reflection mask) or to infer object borders within the propagation medium (synthetic transmit aperture, STA); techniques that assume linear sound transmission and provide an estimate of the velocity inside the propagation medium (algebraic reconstruction technique, ART); and algorithms that use sound diffraction (distorted Born iterative method, DBIM) to better infer both the edges and the velocity of objects within the propagation medium. How the DBIM behaves under different initializations (priors) was also analyzed. As results and conclusions, it was shown that increasing the contrast in the propagation medium produces the worst DBIM results; that, given a good initialization of the propagation medium, the DBIM tends to generate good reconstructions regardless of contrast; that strategies which delimit or reduce the number of unknown variables (reflection mask), used together with the DBIM, enable faster convergence and improve the DBIM's performance; that initializing the objects within the propagation medium with areas larger than expected provides better DBIM results than working with smaller areas; that qualitative information derived from reflection (STA) is relevant and becomes more important as the contrast increases; and that an initial delimitation of the objects within the propagation medium is possible for certain contrasts via transmission reconstruction. This quantitative information can be improved by running ART together with the variation of the Modified Median Filter proposed here.
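As a small illustration of the ART ingredient mentioned above (a generic Kaczmarz-style solver on synthetic data, not the author's implementation), travel times are modeled as line integrals of slowness along straight rays and the resulting linear system is solved row by row:

```python
# Kaczmarz/ART sketch: times-of-flight t are modeled as A @ s, where s is the
# slowness (1/velocity) in each pixel and A[i, j] the length of ray i in pixel j.
import numpy as np

def art(A, t, n_sweeps=50, lam=0.25):
    s = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            norm = a @ a
            if norm > 0:
                s += lam * (t[i] - a @ s) / norm * a   # project onto ray i's constraint
    return s

# Tiny synthetic example: stand-in ray matrix and a two-speed medium on a 4x4 grid.
rng = np.random.default_rng(0)
s_true = np.full(16, 1 / 1500.0)            # background water speed
s_true[5] = 1 / 1550.0                      # one faster inclusion
A = rng.uniform(0.0, 1.0, size=(40, 16))    # stand-in for ray lengths per pixel
t = A @ s_true + rng.normal(0, 1e-7, 40)    # slightly noisy times-of-flight
print(1 / art(A, t)[5])                     # recovered speed in the inclusion (approx.)
```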
15. DSGE Model Estimation and Labor Market Dynamics. Mickelsson, Glenn. January 2016.
Essay 1: Estimation of DSGE Models with Uninformative Priors. DSGE models are typically estimated using Bayesian methods, but because prior information may be lacking, a number of papers have developed methods for estimation with less informative priors (diffuse priors). This paper takes this development one step further and suggests a method that allows full information maximum likelihood (FIML) estimation of a medium-sized DSGE model. FIML estimation is equivalent to placing uninformative priors on all parameters. Inference is performed using stochastic simulation techniques. The results reveal that all parameters are identifiable and several parameter estimates differ from previous estimates that were based on more informative priors. These differences are analyzed.

Essay 2: A DSGE Model with Labor Hoarding Applied to the US Labor Market. In the US, some relatively stable patterns can be observed with respect to employment, production and productivity. An increase in production is followed by an increase in employment with lags of one or two quarters. Productivity leads both production and employment, especially employment. I show that it is possible to replicate this empirical pattern in a model with only one demand-side shock and labor hoarding. I assume that firms have organizational capital that depreciates if workers are utilized to a high degree in current production. When demand increases, firms can increase utilization, but over time, they have to hire more workers and reduce utilization to restore organizational capital. The risk shock turns out to be very dominant and explains virtually all of the dynamics.

Essay 3: Demand Shocks and Labor Hoarding: Matching Micro Data. In Swedish firm-level data, output is more volatile than employment, and in response to demand shocks, employment follows output with a one- to two-year lag. To explain these observations, we use a model with labor hoarding in which firms can change production by changing the utilization rate of their employees. Matching the impulse response functions, we find that labor hoarding in combination with increasing returns to scale in production and a very high price stickiness can explain the empirical pattern very well. Increasing returns to scale implies a larger percentage change in output than in employment. Price stickiness amplifies volatility in output because the price has a dampening effect on demand changes. Both of these explain the delayed reaction in employment in response to output changes.
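The FIML estimation in Essay 1 ultimately rests on the state-space likelihood that the Kalman filter delivers for the linearized model, maximized with no prior attached. A toy sketch on a one-state local-level model (a stand-in, not the medium-sized DSGE model of the essay) shows the mechanics:

```python
# Maximum likelihood for a linear state space model via the Kalman filter.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, y):
    q, r = np.exp(params)            # state and measurement noise variances (> 0)
    a, p = 0.0, 1e6                  # diffuse-ish initialization of the state
    ll = 0.0
    for yt in y:
        p = p + q                    # prediction step: random-walk state
        f = p + r                    # forecast error variance
        v = yt - a                   # forecast error
        ll += -0.5 * (np.log(2 * np.pi * f) + v ** 2 / f)
        k = p / f                    # Kalman gain
        a = a + k * v                # update step
        p = (1 - k) * p
    return -ll

rng = np.random.default_rng(1)
state = np.cumsum(rng.normal(0, 0.3, 200))   # true q = 0.09
y = state + rng.normal(0, 0.5, 200)          # true r = 0.25
fit = minimize(neg_loglik, x0=np.log([0.1, 0.1]), args=(y,), method="Nelder-Mead")
print(np.exp(fit.x))                          # FIML estimates of (q, r)
```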
16. Uncertainty, Identification, and Privacy: Experiments in Individual Decision-Making. Rivenbark, David. January 2010.
The alleged privacy paradox states that individuals report high values for personal privacy, while at the same time they report behavior that contradicts a high privacy value. This is a misconception. Reported privacy behaviors are explained by asymmetric subjective beliefs. Beliefs may or may not be uncertain, and non-neutral attitudes towards uncertainty are not necessary to explain behavior. This research was conducted in three related parts.

Part one presents an experiment in individual decision making under uncertainty. Ellsberg's canonical two-color choice problem was used to estimate attitudes towards uncertainty. Subjects believed bets on either color of ball drawn from Ellsberg's ambiguous urn were equally likely to pay. Estimated attitudes towards uncertainty were insignificant. Subjective expected utility explained subjects' choices better than uncertainty aversion and the uncertain priors model. A second treatment tested Vernon Smith's conjecture that preferences in Ellsberg's problem would be unchanged when the ambiguous lottery is replaced by a compound objective lottery. The use of an objective compound lottery to induce uncertainty did not affect subjects' choices.

The second part of this dissertation extended the concept of uncertainty to commodities where the quality, and the accuracy of a quality report, were potentially ambiguous. The uncertain priors model is naturally extended to allow for potentially different attitudes towards these two sources of uncertainty, quality and accuracy. As they relate to privacy, quality and accuracy of a quality report are seen as metaphors for online security and consumer trust in e-commerce, respectively. The results of parametric structural tests were mixed. Subjects made choices consistent with neutral attitudes towards uncertainty in both the quality and accuracy domains. However, allowing for uncertainty aversion in the quality domain and not the accuracy domain outperformed the alternative, which only allowed for uncertainty aversion in the accuracy domain.

Finally, part three integrated a public-goods game and punishment opportunities with the Becker-DeGroot-Marschak mechanism to elicit privacy values, replicating previously reported privacy behaviors. The procedures developed elicited punishment (consequence) beliefs and information confidentiality beliefs in the context of individual privacy decisions. Three contributions are made to the literature. First, by using cash rewards as a mechanism to map actions to consequences, the study eliminated hypothetical bias as a confounding behavioral factor, which is pervasive in the privacy literature. Econometric results support the 'privacy paradox' at levels greater than 10 percent. Second, the roles of asymmetric beliefs and attitudes towards uncertainty were identified using parametric structural likelihood methods. Subjects were, in general, uncertainty neutral and believed 'bad' events were more likely to occur when their private information was not confidential. A third contribution is a partial test to determine which uncertain process, loss of privacy or the resolution of consequences, is of primary importance to individual decision-makers. Choices were consistent with uncertainty-neutral preferences in both the privacy and consequences domains.
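Part three's elicitation relies on the Becker-DeGroot-Marschak mechanism. A small simulation sketch (illustrative values, not the experiment's software) of why stating one's true privacy value maximizes expected payoff under BDM:

```python
# BDM incentive compatibility: a subject whose privacy is truly worth v_true does best,
# in expectation, by stating exactly that amount, whatever the distribution of offers.
import numpy as np

rng = np.random.default_rng(42)
v_true = 3.0                                   # subject's true value of keeping data private
offers = rng.uniform(0.0, 10.0, 200_000)       # random BDM offers

def expected_payoff(stated):
    # If the offer meets the stated value, the subject sells (discloses) at the offered
    # price; otherwise the data stays private, which is worth v_true to the subject.
    sell = offers >= stated
    return np.where(sell, offers, v_true).mean()

grid = np.linspace(0.0, 10.0, 101)
payoffs = [expected_payoff(s) for s in grid]
print(grid[int(np.argmax(payoffs))])           # close to v_true: truth-telling is optimal
```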
17. Minimally Corrective, Approximately Recovering Priors to Correct Expert Judgement in Bayesian Parameter Estimation. May, Thomas Joseph. 23 July 2015.
Bayesian parameter estimation is a popular method for addressing inverse problems. However, since prior distributions are chosen based on expert judgement, the method can inherently introduce bias into the understanding of the parameters. This can be especially relevant in the case of distributed parameters, where it is difficult to check for error. To minimize this bias, we develop the idea of a minimally corrective, approximately recovering prior (MCAR prior) that generates a guide for the prior and corrects the expert-supplied prior according to that guide. We demonstrate this approach for the 1D elliptic partial differential equation and observe how the method behaves in cases with significant expert bias and with no expert bias. In the case of significant expert bias, the method substantially reduces the bias; in the case with no expert bias, the method introduces only minor errors. The cost of introducing these small errors under good judgement is worth the benefit of correcting major errors under bad judgement. This is particularly true when the prior is determined using only a heuristic or an assumed distribution. / Master of Science
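For readers unfamiliar with the setting, here is a stripped-down sketch of the baseline Bayesian inverse problem the thesis starts from: a constant coefficient in a 1D elliptic equation with a deliberately biased expert prior. It is not the MCAR construction itself, only an illustration of the prior-induced bias that construction is meant to correct.

```python
# Infer the constant k in -k u''(x) = 1 on [0, 1], u(0) = u(1) = 0, whose solution is
# u(x) = x (1 - x) / (2 k).  A confidently biased "expert" prior pulls the posterior
# away from the true value.
import numpy as np

k_true, noise_sd = 2.0, 0.005
x = np.linspace(0.05, 0.95, 19)
rng = np.random.default_rng(3)
u_obs = x * (1 - x) / (2 * k_true) + rng.normal(0, noise_sd, x.size)

k_grid = np.linspace(0.5, 5.0, 2000)
u_model = x[None, :] * (1 - x[None, :]) / (2 * k_grid[:, None])
loglik = -0.5 * np.sum((u_obs - u_model) ** 2, axis=1) / noise_sd ** 2

def posterior_mean(log_prior):
    w = np.exp(loglik + log_prior - np.max(loglik + log_prior))
    return np.sum(k_grid * w) / np.sum(w)

flat_prior = np.zeros_like(k_grid)                       # essentially no expert input
biased_prior = -0.5 * (k_grid - 4.0) ** 2 / 0.1 ** 2     # expert is confident k is near 4
print(posterior_mean(flat_prior), posterior_mean(biased_prior))
```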
18. Statistical Monitoring and Modeling for Spatial Processes. Keefe, Matthew James. 17 March 2017.
Statistical process monitoring and hierarchical Bayesian modeling are two ways to learn more about processes of interest. In this work, we consider two main components: risk-adjusted monitoring and Bayesian hierarchical models for spatial data. Usually, if prior information about a process is known, it is important to incorporate it into the monitoring scheme. For example, when monitoring 30-day mortality rates after surgery, the pre-operative risk of patients based on health characteristics is often an indicator of how likely the surgery is to succeed. In these cases, risk-adjusted monitoring techniques are used. In this work, the practical limitations of the traditional implementation of risk-adjusted monitoring methods are discussed and an improved implementation is proposed. A method to perform spatial risk adjustment based on the exact locations of concurrent observations, to account for spatial dependence, is also described. Furthermore, the development of objective priors for fully Bayesian hierarchical models for areal data is explored for Gaussian responses. Collectively, these statistical methods serve as analytic tools to better monitor and model spatial processes. / Ph. D. / Many current scientific applications involve data collection that has some type of spatial component. Within these applications, the objective could be to monitor incoming data in order to quickly detect any changes in real time. Another objective could be to use statistical models to characterize and understand the underlying features of the data.
In this work, we develop statistical methodology to monitor and model data that include a spatial component. Specifically, we develop a monitoring scheme that adjusts for spatial risk and present an objective way to quantify and model spatial dependence when data is measured for areal units. Collectively, the statistical methods developed in this work serve as analytic tools to better monitor and model spatial data.
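As context for the risk-adjusted monitoring component, here is a compact sketch of the standard risk-adjusted CUSUM for 30-day mortality (the textbook Steiner-style chart, not the improved implementation the dissertation proposes; the risks, odds ratio, and control limit below are arbitrary illustrative choices):

```python
# Risk-adjusted CUSUM: each patient's pre-operative risk p enters the likelihood-ratio
# score, so deaths among low-risk patients raise the alarm faster than among high-risk ones.
import numpy as np

def risk_adjusted_cusum(outcomes, risks, odds_ratio=2.0, threshold=4.5):
    """outcomes: 1 = death within 30 days, 0 = survival; risks: predicted mortality."""
    s, path = 0.0, []
    for y, p in zip(outcomes, risks):
        denom = 1.0 - p + odds_ratio * p              # normalizer of the shifted odds
        w = np.log(odds_ratio / denom) if y else np.log(1.0 / denom)
        s = max(0.0, s + w)                           # upper CUSUM of log-likelihood ratios
        path.append(s)
    signal = next((i for i, v in enumerate(path) if v > threshold), None)
    return np.array(path), signal

rng = np.random.default_rng(7)
risks = rng.uniform(0.01, 0.30, 300)                                 # hypothetical pre-op risks
outcomes = rng.binomial(1, 2.0 * risks / (1.0 - risks + 2.0 * risks))  # deaths under doubled odds
path, signal = risk_adjusted_cusum(outcomes, risks)
print(signal)   # index of the first patient at which the chart signals (None if it never does)
```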
19. Towards scalable, multi-view urban modeling using structure priors (Vers une modélisation urbaine 3D extensible intégrant des à priori de structure géométrique). Bourki, Amine. 21 December 2017.
In this thesis, we address the problem of 3D reconstruction from a sequence of calibrated street-level photographs, with a simultaneous focus on scalability and the use of structure priors in Multi-View Stereo (MVS). While both aspects have been studied broadly, existing scalable MVS approaches do not handle well the ubiquitous, yet simple, structural regularities of man-made environments. On the other hand, structure-aware 3D reconstruction methods are slow, scale poorly with the size of the input sequences, and may even require additional restrictive information. The goal of this thesis is to reconcile scalability and structure awareness within common MVS grounds using soft, generic priors that encourage (i) piecewise planarity, (ii) alignment of object boundaries with image gradients, (iii) alignment with vanishing directions (VDs), and (iv) object co-planarity. To do so, we present the novel “Patchwork Stereo” framework, which integrates photometric stereo from a handful of wide-baseline views and a sparse 3D point cloud, combining robust 3D plane extraction and top-down image partitioning from a unified 2D-3D analysis in a principled Markov Random Field energy minimization. We evaluate our contributions quantitatively and qualitatively on challenging urban datasets and illustrate results that are at least on par with state-of-the-art methods in terms of geometric structure, but achieved several orders of magnitude faster, paving the way for photo-realistic city-scale modeling.
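One ingredient named above, robust 3D plane extraction from a sparse point cloud, can be sketched with generic RANSAC (this is not the Patchwork Stereo pipeline, just an illustration of the kind of primitive it builds on):

```python
# RANSAC plane fitting on a synthetic sparse point cloud: a noisy vertical plane plus clutter.
import numpy as np

def ransac_plane(points, n_iters=500, inlier_tol=0.05, rng=np.random.default_rng(0)):
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(normal) < 1e-9:          # degenerate (collinear) sample
            continue
        normal = normal / np.linalg.norm(normal)
        dist = np.abs((points - p0) @ normal)      # point-to-plane distances
        inliers = dist < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, p0)
    return best_plane, best_inliers

rng = np.random.default_rng(1)
plane_pts = np.c_[rng.uniform(0, 10, 400),
                  np.full(400, 2.0) + rng.normal(0, 0.02, 400),
                  rng.uniform(0, 5, 400)]          # points near the plane y = 2
clutter = rng.uniform(0, 10, (200, 3))
plane, inliers = ransac_plane(np.vstack([plane_pts, clutter]))
print(plane[0].round(2), int(inliers.sum()))       # recovered normal ~ (0, 1, 0), inlier count
```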
20. Acquiring 3D Full-body Motion from Noisy and Ambiguous Input. Lou, Hui. May 2012.
Natural human motion is in high demand and widely used in a variety of applications such as video games and virtual reality. However, acquiring full-body motion remains challenging because the system must be capable of accurately capturing a wide variety of human actions without requiring a considerable amount of time and skill to assemble. For instance, commercial optical motion capture systems such as Vicon can capture human motion with high accuracy and resolution, but they often require post-processing by experts, which is time-consuming and costly. Microsoft Kinect, despite its high popularity and wide application, does not provide accurate reconstruction of complex movements when significant occlusions occur. This dissertation explores two different approaches that accurately reconstruct full-body human motion from noisy and ambiguous input data captured by commercial motion capture devices.

The first approach automatically generates high-quality human motion from noisy data obtained from commercial optical motion capture systems, eliminating the need for post-processing. The second approach accurately captures a wide variety of human motion even under significant occlusions by using color/depth data captured by a single Kinect camera. The common theme that underlies the two approaches is the use of prior knowledge embedded in a pre-recorded motion capture database to reduce the reconstruction ambiguity caused by noisy and ambiguous input and to constrain the solution to lie in the natural motion space. More specifically, the first approach constructs a series of spatial-temporal filter bases from pre-captured human motion data and employs them, along with robust statistics techniques, to filter noisy motion data corrupted by noise and outliers. The second approach formulates the problem in a Maximum a Posteriori (MAP) framework and generates the most likely pose that explains the observations and is consistent with the patterns embedded in the pre-recorded motion capture database. We demonstrate the effectiveness of our approaches through extensive numerical evaluations on synthetic data and comparisons against results created by commercial motion capture systems. The first approach can effectively denoise a wide variety of noisy motion data, including walking, running, jumping and swimming, while the second approach is shown to be capable of accurately reconstructing a wider range of motions than Microsoft Kinect.
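As a toy illustration of the common theme, prior knowledge from pre-recorded motion constraining noisy input, the sketch below learns a PCA basis from clean synthetic trajectories and projects noisy data onto it; the actual methods use spatial-temporal filter bases with robust statistics and a MAP formulation, so this only conveys the underlying idea.

```python
# Denoising a 1-D joint trajectory by projecting onto a basis learned from "clean" motion.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 2 * np.pi, 120)

# Stand-in "pre-captured database": smooth trajectories with varying amplitude/phase.
clean = np.stack([a * np.sin(t + ph) + 0.3 * a * np.sin(2 * t + ph)
                  for a, ph in zip(rng.uniform(0.5, 1.5, 200),
                                   rng.uniform(0, 2 * np.pi, 200))])

mean = clean.mean(axis=0)
_, _, Vt = np.linalg.svd(clean - mean, full_matrices=False)
basis = Vt[:8]                                          # keep the leading PCA components

truth = 1.1 * np.sin(t + 0.4) + 0.33 * np.sin(2 * t + 0.4)
noisy = truth + rng.normal(0, 0.2, t.size)
denoised = mean + basis.T @ (basis @ (noisy - mean))    # project onto the learned motion space

print(np.abs(noisy - truth).mean(), np.abs(denoised - truth).mean())  # error drops markedly
```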