191 |
Modeling survival after acute myocardial infarction using accelerated failure time models and space-varying regression. Yang, Aijun, 27 August 2009
Acute Myocardial Infarction (AMI), commonly known as heart attack, is a leading
cause of death for adult men and women in the world. Studying mortality after AMI
is therefore an important problem in epidemiology. This thesis develops statistical
methodology for examining geographic patterns in mortality following AMI. Specifically, we develop parametric Accelerated Failure Time (AFT) models for censored survival data, where space-varying regression is used to investigate spatial patterns of mortality after AMI. In addition to important covariates such as age and gender, the regression models proposed here also incorporate spatial random effects that describe the residual heterogeneity associated with different local health geographical units. We conduct model inference under a hierarchical Bayesian modeling framework, using Markov chain Monte Carlo algorithms for implementation. We compare an array of models and address the goodness-of-fit of the parametric AFT model through simulation studies and an application to a longitudinal AMI study in Quebec. The application of our AFT model to the Quebec AMI data yields interesting findings
concerning aspects of AMI, including spatial variability. This example serves as a
strong case for considering the parametric AFT model developed here as a useful tool
for the analysis of spatially correlated survival data.
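The model class described above can be illustrated with a small sketch. This is a hypothetical log-normal AFT specification with an additive regional random effect, log T = x'beta + W_region + sigma*eps, under right censoring; all parameter values and the iid random effects (the thesis uses spatially structured priors) are illustrative assumptions, not the thesis's exact model.

```python
import numpy as np
from math import erf

# Illustrative sketch of a log-normal AFT model with an additive regional
# random effect: log T_i = x_i' beta + W_{region(i)} + sigma * eps_i.
# All names and parameter values are assumptions for demonstration.
rng = np.random.default_rng(0)
n, n_regions = 200, 5
beta = np.array([2.0, -0.03, 0.4])            # intercept, age, gender effects
W = rng.normal(0.0, 0.2, n_regions)           # regional effects (iid here; the
                                              # thesis uses spatial priors)
X = np.column_stack([np.ones(n),
                     rng.uniform(40, 90, n),  # age
                     rng.integers(0, 2, n)])  # gender indicator
region = rng.integers(0, n_regions, n)
sigma = 0.5
T = np.exp(X @ beta + W[region] + sigma * rng.normal(size=n))

# Administrative right-censoring at a fixed follow-up time
C = 2.0
time = np.minimum(T, C)
event = (T <= C).astype(float)                # 1 = event observed, 0 = censored

def aft_loglik(beta, W, sigma):
    """Censored log-normal AFT log-likelihood: log-density for observed
    events, log-survival for censored observations."""
    z = (np.log(time) - (X @ beta + W[region])) / sigma
    log_pdf = -0.5 * z**2 - np.log(sigma * time * np.sqrt(2 * np.pi))
    Phi = np.array([0.5 * (1 + erf(v / np.sqrt(2))) for v in z])
    log_surv = np.log(np.clip(1.0 - Phi, 1e-300, None))
    return float(np.sum(event * log_pdf + (1 - event) * log_surv))

ll = aft_loglik(beta, W, sigma)
```

In a Bayesian fit, this likelihood would be combined with priors on beta, sigma, and the spatial effects W and explored by MCMC.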
|
192 |
A Fully Bayesian Analysis of Multivariate Latent Class Models with an Application to Metric Conjoint Analysis. Frühwirth-Schnatter, Sylvia; Otter, Thomas; Tüchler, Regina, January 2002
In this paper we head for a fully Bayesian analysis of the latent class model with an a priori unknown number of classes. Estimation is carried out by means of Markov chain Monte Carlo (MCMC) methods. We deal explicitly with the consequences that the unidentifiability of this type of model has for MCMC estimation. Joint Bayesian estimation of all latent variables, model parameters, and parameters determining the probability law of the latent process is carried out by a new MCMC method called permutation sampling. In a first run we use the random permutation sampler to sample from the unconstrained posterior. We demonstrate that a lot of important information, such as estimates of the subject-specific regression coefficients, is available from such an unidentified model. The MCMC output of the random permutation sampler is explored in order to find suitable identifiability constraints. In a second run we use the permutation sampler to sample from the constrained posterior by imposing identifiability constraints. The unknown number of classes is determined by formal Bayesian model comparison through exact model likelihoods. We apply a new method of computing model likelihoods for latent class models which is based on bridge sampling. The approach is applied to simulated data and to data from a metric conjoint analysis in the Austrian mineral water market. (author's abstract) / Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
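The random permutation step can be sketched in a few lines. After each MCMC sweep of a K-class mixture, the class labels are relabelled by a uniformly random permutation, so the sampler explores all K! symmetric modes of the unconstrained posterior. The parameter names below are illustrative, not the paper's.

```python
import numpy as np

# Sketch of one random-permutation step for a 3-class latent class model.
rng = np.random.default_rng(1)
K = 3
mu = np.array([-1.0, 0.0, 2.5])     # class-specific parameters (current draw)
weights = np.array([0.2, 0.3, 0.5]) # class probabilities
z = rng.integers(0, K, size=20)     # current class assignments

def random_permutation_step(mu, weights, z, rng):
    perm = rng.permutation(len(mu))  # perm[new] = old: random relabelling
    inv = np.argsort(perm)           # inv[old] = new
    return mu[perm], weights[perm], inv[z]

mu2, w2, z2 = random_permutation_step(mu, weights, z, rng)
```

The relabelling leaves the model invariant: each subject keeps the same class-specific parameter value, only the label attached to it changes.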
|
193 |
The Value of Branding in Two-sided Platforms. Sun, Yutec, 13 August 2013
This thesis studies the value of branding in the smartphone market. Measuring brand value with data available only at the product level potentially entails computational and econometric challenges due to data constraints. These issues motivate the three studies of the thesis. Chapter 2 studies the smartphone market to understand how operating-system platform providers can grow one of their most important intangible assets, brand value, by leveraging the indirect network between the two user groups of a two-sided platform. The main finding is that the iPhone achieved the greatest brand value growth by opening its platform to third-party developers, effectively connecting consumers and developers indirectly via its app store. Without the open app store, I find that the iPhone would have lost brand value by becoming a two-sided platform. These findings provide an important lesson: an open platform strategy is vital to the success of building platform brands. Chapter 3 solves a computational challenge in the structural estimation of aggregate demand. I develop a computationally efficient MCMC algorithm for the GMM estimation framework developed by Berry, Levinsohn and Pakes (1995) and Gowrisankaran and Rysman (forthcoming). I combine the MCMC method with the classical approach by transforming the GMM objective into a Laplace-type estimation framework, thereby avoiding the need to formulate a likelihood model. The proposed algorithm solves the two fixed-point problems, the market-share inversion and the dynamic programming, incrementally within the MCMC iterations. Hence the proposed approach achieves computational efficiency without compromising the advantages of the conventional GMM approach. Chapter 4 reviews recently developed econometric methods for controlling endogeneity bias when a random slope coefficient is correlated with treatment variables.
I examine how standard instrumental variables and control function approaches can solve the slope endogeneity problem under two general frameworks commonly used in the literature.
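The market-share inversion mentioned in Chapter 3 is one of the two fixed points. A minimal sketch of the BLP contraction, using a plain logit share function purely for illustration (the actual framework uses random-coefficient shares): iterate delta ← delta + log(s_obs) − log(s(delta)) until the model-implied shares match the observed ones.

```python
import numpy as np

# Sketch of the BLP contraction mapping for the market-share inversion.
# The logit share function below is an illustrative stand-in.
rng = np.random.default_rng(2)
J = 4
s_obs = rng.dirichlet(np.ones(J + 1))[:J]  # observed inside-good shares

def logit_shares(delta):
    e = np.exp(delta)
    return e / (1.0 + e.sum())             # outside good utility normalised to 0

delta = np.zeros(J)
for _ in range(500):                        # contraction iterations
    delta = delta + np.log(s_obs) - np.log(logit_shares(delta))
```

For pure logit the inversion also has a closed form, delta_j = log(s_j) − log(s_0), which the contraction recovers; in the random-coefficient case only the iterative inversion is available.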
|
195 |
Topics in Random Matrices: Theory and Applications to Probability and Statistics. Kousha, Termeh, 13 December 2011
In this thesis, we discuss some topics in random matrix theory which have applications to probability, statistics and quantum information theory. In Chapter 2, by relying on the spectral properties of an associated adjacency matrix, we find the distribution of the maximum of a Dyck path and show that it has the same distribution function as the unsigned Brownian excursion which was first derived in 1976 by Kennedy. We obtain a large and moderate deviation principle for the law of the maximum of a random Dyck path. Our result extends the results of Chung, Kennedy and Khorunzhiy and Marckert. In Chapter 3, we discuss a method of sampling called the Gibbs-slice sampler. This method is based on Neal's slice sampling combined with Gibbs sampling. In Chapter 4, we discuss several examples which have applications in physics and quantum information theory.
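The Gibbs-slice sampler of Chapter 3 combines Gibbs updates with Neal-style slice sampling. A sketch of one coordinate update: draw an auxiliary height under the conditional density, step out to bracket the horizontal slice, then sample by shrinkage. The standard normal target below is chosen purely for illustration.

```python
import numpy as np

# One-coordinate slice-sampling update in the spirit of Neal's method.
rng = np.random.default_rng(3)

def slice_update(x, logf, w=1.0, rng=rng):
    logy = logf(x) + np.log(rng.uniform())  # auxiliary slice level
    L = x - w * rng.uniform()               # random initial bracket
    R = L + w
    while logf(L) > logy:                   # stepping out
        L -= w
    while logf(R) > logy:
        R += w
    while True:                             # shrinkage sampling
        x1 = rng.uniform(L, R)
        if logf(x1) > logy:
            return x1
        if x1 < x:
            L = x1
        else:
            R = x1

logf = lambda x: -0.5 * x * x               # log-density of N(0, 1), up to a constant
x, draws = 0.0, []
for _ in range(2000):
    x = slice_update(x, logf)
    draws.append(x)
draws = np.array(draws)
```

Within a Gibbs scheme, each full conditional would be updated this way in turn, which is convenient when the conditionals are known only up to a constant.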
|
196 |
Actuarial Inference and Applications of Hidden Markov Models. Till, Matthew Charles, January 2011
Hidden Markov models have become a popular tool for modeling long-term investment guarantees. Many different variations of hidden Markov models have been proposed over the past decades for modeling indexes such as the S&P 500, and they capture the tail risk inherent in the market to varying degrees. However, goodness-of-fit testing, such as residual-based testing, for hidden Markov models is a relatively undeveloped area of research. This work focuses on hidden Markov model assessment, and develops a stochastic approach to deriving a residual set that is ideal for standard residual tests. This result allows hidden-state models to be tested for goodness-of-fit with the well developed testing strategies for single-state
models.
This work also focuses on parameter uncertainty for the popular long-term equity hidden Markov models. There is a special focus on underlying states that represent lower returns and higher volatility in the market, as these states can have the largest impact on investment guarantee valuation. A Bayesian approach for the hidden Markov models is applied to address the issue of parameter uncertainty and the impact it can have on investment guarantee models.
Also in this thesis, the areas of portfolio optimization and portfolio replication under a hidden Markov model setting are further developed. Different strategies for optimization and portfolio hedging under hidden Markov models are presented and compared using real-world data. The impact of parameter uncertainty, particularly for model parameters connected with higher market volatility, is once again a focus, and the effects of not taking parameter uncertainty into account when optimizing or hedging in a hidden Markov model are demonstrated.
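The residual idea behind hidden-state goodness-of-fit testing can be sketched as follows. This is an illustrative construction, not the thesis's exact one: for a two-state Gaussian HMM, run the forward filter and form probability-integral-transform (PIT) residuals from the one-step-ahead predictive distribution; under a correct model these are approximately Uniform(0,1), so standard single-state residual tests apply. All parameter values are assumptions.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(4)
P = np.array([[0.95, 0.05], [0.10, 0.90]])  # transition matrix (assumed)
mu = np.array([0.005, -0.01])                # state means: calm vs volatile
sd = np.array([0.01, 0.03])                  # state standard deviations

# Simulate returns from the HMM
T, s = 500, 0
y = np.empty(T)
for t in range(T):
    s = rng.choice(2, p=P[s])
    y[t] = rng.normal(mu[s], sd[s])

Phi = lambda z: 0.5 * (1 + erf(z / np.sqrt(2)))
alpha = np.array([0.5, 0.5])                 # filtered state probabilities
u = np.empty(T)
for t in range(T):
    pred = alpha @ P                         # one-step-ahead state probabilities
    u[t] = sum(pred[k] * Phi((y[t] - mu[k]) / sd[k]) for k in range(2))
    like = pred * np.exp(-0.5 * ((y[t] - mu) / sd) ** 2) / sd
    alpha = like / like.sum()                # forward-filter update
```

A histogram or Kolmogorov-Smirnov test on `u` then checks uniformity, turning a hidden-state fit into a standard residual-testing problem.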
|
197 |
Bayesian Analysis for Large Spatial Data. Park, Jincheol, August 2012
The Gaussian geostatistical model has been widely used in Bayesian modeling of spatial data. A core difficulty for this model lies in inverting the n x n covariance matrix, where n is the sample size. The computational cost of matrix inversion grows as O(n^3). This difficulty affects almost all inference approaches for the model, such as kriging and Bayesian modeling. In Bayesian inference, the inverse of the covariance matrix must be evaluated at each iteration of the posterior simulation, so the Bayesian approach is infeasible for large sample sizes n given current computational limits.
In this dissertation, we propose two approaches to address this computational issue, namely the auxiliary lattice model (ALM) approach and the Bayesian site selection (BSS) approach. The key feature of ALM is to introduce a latent regular lattice which links a Gaussian Markov random field (GMRF) with the Gaussian field (GF) of the observations. The GMRF on the auxiliary lattice represents an approximation to the Gaussian process. What distinguishes ALM from other approximations is that it avoids the matrix-inversion problem completely by using the analytical likelihood of the GMRF. The computational complexity of ALM is rather attractive, increasing only linearly with the sample size.
The second approach, Bayesian site selection (BSS), attempts to reduce the dimension of the data through a smart selection of a representative subset of the observations. The BSS method first splits the observations into two parts: the observations near the target prediction sites (part I) and the remainder (part II). Then, by treating the observations in part I as the response variable and those in part II as explanatory variables, BSS forms a regression model which relates all observations through a conditional likelihood derived from the original model. The dimension of the data can then be reduced by applying a stochastic variable selection procedure to the regression model, which selects only a subset of the part II data as explanatory data. BSS can also provide more insight into the underlying Gaussian process, as it works directly on the original process without any approximation.
The practical performance of ALM and BSS will be illustrated with simulated data and real data sets.
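The computational point behind ALM can be sketched concretely: a GMRF has a sparse precision matrix Q, so its log-density needs only a Cholesky factor of Q rather than an inverted dense n x n covariance. The lattice below (a 1-D first-order neighbour structure made positive definite with a small nugget) is illustrative, not the ALM construction itself.

```python
import numpy as np

# Evaluate a GMRF log-density directly from its (sparse, banded) precision Q.
n = 50
Q = np.zeros((n, n))
for i in range(n - 1):                 # first-order neighbour structure
    Q[i, i] += 1.0
    Q[i + 1, i + 1] += 1.0
    Q[i, i + 1] = Q[i + 1, i] = -1.0
Q += 0.1 * np.eye(n)                   # nugget: makes Q positive definite

rng = np.random.default_rng(5)
x = rng.normal(size=n)
Lc = np.linalg.cholesky(Q)             # sparse solvers would exploit the band
logdetQ = 2.0 * np.log(np.diag(Lc)).sum()
loglik = 0.5 * (logdetQ - n * np.log(2 * np.pi) - x @ Q @ x)
```

No covariance matrix is ever formed or inverted; for banded Q the Cholesky factorization costs O(n) rather than O(n^3), which is the source of ALM's linear scaling.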
|
198 |
Modelos lineares generalizados bayesianos para dados longitudinais / Bayesian generalized linear models for longitudinal data. Monfardini, Frederico [UNESP], 19 February 2016
Funding: Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq).
Generalized Linear Models (GLMs) were introduced in the early 1970s and had a great impact on the development of statistical theory. From a theoretical point of view, this class of models provides a unified treatment of many statistical models commonly used in applications, allowing the same inference procedures to be applied throughout. The computational advances of recent decades brought a remarkable development of extensions of this class and of methods for inference. In the Bayesian context, approximate inference methods such as the Laplace approximation and Gaussian quadrature were used until the 1980s. Markov chain Monte Carlo (MCMC) methods were popularized in the early 1990s and revolutionized Bayesian applications. Although these methods are highly efficient, convergence of the algorithms can be extremely slow for complex models, leading to high computational cost. The Integrated Nested Laplace Approximation (INLA) method, which seeks efficiency in both computational cost and accuracy of the estimates, appeared in 2009. Given the importance of this class of models, this work proposes to explore extensions of GLMs for longitudinal data and recent proposals in the literature for inference procedures. More specifically, it explores models for binary (binomial) and count (Poisson) data, allowing for extra variability, including overdispersion and random effects, through hierarchical models and dynamic hierarchical models. It also explores different Bayesian inference procedures in this context, including MCMC and INLA.
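The MCMC route for a Bayesian GLM can be sketched minimally. Below is a random-walk Metropolis sampler for a Poisson regression with a vague Gaussian prior (INLA would replace the sampler with nested Laplace approximations); the data, prior, and tuning values are illustrative assumptions.

```python
import numpy as np

# Random-walk Metropolis for a Poisson GLM: y_i ~ Poisson(exp(b0 + b1*x_i)).
rng = np.random.default_rng(6)
n = 200
x = rng.normal(size=n)
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(beta_true[0] + beta_true[1] * x))

def logpost(b):
    eta = b[0] + b[1] * x
    # Poisson log-likelihood (dropping log y!) plus a N(0, 10) prior on b
    return np.sum(y * eta - np.exp(eta)) - 0.5 * (b @ b) / 10.0

beta = np.zeros(2)
lp = logpost(beta)
draws = []
for _ in range(4000):
    prop = beta + 0.05 * rng.normal(size=2)   # symmetric random-walk proposal
    lp_prop = logpost(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        beta, lp = prop, lp_prop
    draws.append(beta)
post = np.array(draws[1000:])                 # discard burn-in
```

The posterior means of `post` should sit near the generating coefficients; extending this to overdispersion or random effects means adding latent variables and updating them in further Gibbs/Metropolis steps, which is where convergence can become slow and INLA attractive.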
|
199 |
Estimation of conditional auto-regressive models. Sha, Zhe, January 2016
Conditional auto-regressive (CAR) models are frequently used with spatial data. However, the likelihood of such a model is expensive to compute even for a moderately sized data set of around 1000 sites. For models involving latent variables, the likelihood is not usually available in closed form. In this thesis we use a Monte Carlo approximation to the likelihood (extending the approach of Geyer and Thompson (1992)), and develop two strategies for maximising this. One strategy is to limit the step size by defining an experimental region using a Monte Carlo approximation to the variance of the estimates. The other is to use response surface methodology. The iterative procedures are fully automatic, with user-specified options to control the simulation and convergence criteria. Both strategies are implemented in our R package mclcar. We demonstrate aspects of the algorithms on simulated data on a torus, and achieve similar results to others in a short computational time on two datasets from the literature. We then use the methods on a challenging problem concerning forest restoration with data from around 7000 trees arranged in transects within study plots. We modelled the growth rate of the trees by a linear mixed effects model with CAR spatial error and CAR random effects for study plots in an acceptable computational time. Our proposed methods can be used for similar models to provide a clearly defined framework for maximising Monte Carlo approximations to likelihoods and reconstructing likelihood surfaces near the maximum.
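The Geyer-Thompson idea behind the Monte Carlo likelihood approximation can be shown on a toy model. The intractable normalising-constant ratio between a parameter theta and a reference psi is estimated by an average over draws simulated at psi: Z(theta)/Z(psi) ≈ mean_i exp{(theta − psi) S(x_i)}. The exponential family below, with sufficient statistic S(x) = x and Gaussian base measure, is purely illustrative and has a known answer to check against.

```python
import numpy as np

# Monte Carlo approximation to a log normalising-constant ratio,
# for the density p_theta(x) ∝ exp(theta*x - x^2/2).
# With psi = 0, draws at psi are simply N(0, 1) samples.
rng = np.random.default_rng(7)
psi = 0.0
draws = rng.normal(loc=psi, scale=1.0, size=100_000)

def mc_log_ratio(theta):
    # log Z(theta) - log Z(psi) ≈ log mean exp{(theta - psi) * S(x)}
    return float(np.log(np.mean(np.exp((theta - psi) * draws))))

# Exact value here: Z(theta) = sqrt(2*pi) * exp(theta^2 / 2),
# so log Z(theta) - log Z(0) = theta^2 / 2.
approx = mc_log_ratio(0.5)
```

For a CAR model the sufficient statistics are quadratic forms in the field, and the thesis's strategies (experimental regions, response surfaces) control where this approximation is trustworthy enough to maximise.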
|
200 |
Filtro de partículas adaptativo para o tratamento de oclusões no rastreamento de objetos em vídeos / Adaptive MCMC particle filter for handling occlusions in object tracking in videos. Oliveira, Alessandro Bof de, January 2008
Object tracking in videos is an important problem in image processing, both for the large number of applications involved and for the degree of complexity it can present. Examples of applications range from mobile robotics, human-machine interfaces, medicine, and industrial process automation to more traditional uses such as surveillance and traffic monitoring. The complexity of tracking grows mainly through the interaction of the tracked object with other elements of the scene, especially in cases of partial or total occlusion. When an occlusion occurs, the information about the object's location is partially or completely lost. Stochastic filtering methods used for tracking, such as particle filters, do not give satisfactory results in the presence of total occlusions, where the object's trajectory becomes discontinuous; specific methods are therefore needed to handle the total-occlusion problem. In this work, we develop an approach to handling total occlusion in object tracking using a Markov chain Monte Carlo (MCMC) particle filter with an adaptive particle-generating function. While the object is visible and no occlusion occurs, we use a symmetric generating (proposal) density. When a total occlusion, i.e., a discontinuity in the trajectory, is detected, the generating function becomes asymmetric, creating an "inertia" or "drift" term in the direction of the object's motion. Once the object emerges from the occlusion, it is found again and the generating function becomes symmetric once more.
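The symmetric/asymmetric switch described above can be sketched in one dimension. This is an illustrative toy, not the thesis's implementation: particles normally spread with a symmetric Gaussian, but when a total occlusion is detected, a drift term along the object's last estimated motion direction is added and the noise is folded onto that side, making the proposal asymmetric.

```python
import numpy as np

# Toy 1-D particle propagation with an adaptive (occlusion-aware) proposal.
rng = np.random.default_rng(8)
N = 500
particles = rng.normal(0.0, 1.0, N)   # current particle positions
velocity = 2.0                        # last estimated displacement per frame

def propagate(particles, occluded, velocity, rng):
    noise = rng.normal(0.0, 1.0, particles.size)
    if occluded:
        # Asymmetric proposal: inertial drift plus one-sided spread
        # in the direction of motion.
        return particles + velocity + np.abs(noise) * np.sign(velocity)
    return particles + noise          # symmetric spread while visible

visible = propagate(particles, False, velocity, rng)
occl = propagate(particles, True, velocity, rng)
```

Under occlusion the particle cloud coasts forward along the estimated trajectory instead of diffusing in place, so when the object reappears the particles are already near its new position and the weight update can reacquire it.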
|