  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Achieving shrinkage in a time-varying parameter model framework

Bitto, Angela, Frühwirth-Schnatter, Sylvia January 2019 (has links) (PDF)
Shrinkage for time-varying parameter (TVP) models is investigated within a Bayesian framework, with the aim of automatically reducing time-varying parameters to static ones if the model is overfitting. This is achieved by placing the double gamma shrinkage prior on the process variances. An efficient Markov chain Monte Carlo scheme is developed, exploiting boosting based on the ancillarity-sufficiency interweaving strategy. The method is applicable to TVP models for both univariate and multivariate time series. Applications include a TVP generalized Phillips curve for EU area inflation modeling and a multivariate TVP Cholesky stochastic volatility model for joint modeling of the returns from the DAX-30 index.
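The collapse of a time-varying parameter to a static one when its process variance is shrunk to zero can be illustrated with a minimal simulation (a hypothetical NumPy sketch, not the authors' model or code; `theta` plays the role of the process variance that the double gamma prior shrinks):

```python
import numpy as np

def simulate_tvp(n=200, theta=0.1, beta0=1.0, sigma=0.5, seed=0):
    """Simulate y_t = x_t * beta_t + eps_t with a random-walk coefficient
    beta_t = beta_{t-1} + w_t, w_t ~ N(0, theta).  When theta = 0, the
    time-varying parameter collapses to a static coefficient, which is the
    limit toward which shrinking the process variance pulls the model."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    beta = beta0 + np.cumsum(rng.normal(scale=np.sqrt(theta), size=n))
    y = x * beta + rng.normal(scale=sigma, size=n)
    return x, beta, y

x, beta_dyn, y = simulate_tvp(theta=0.1)   # genuinely time-varying path
_, beta_stat, _ = simulate_tvp(theta=0.0)  # static limit: beta_t is constant
```

A shrinkage prior that decides between these two regimes automatically is exactly what lets the model reduce time-varying parameters to static ones when the data do not support variation.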
32

Estatística gradiente: teoria assintótica de alta ordem e correção tipo-Bartlett / Gradient statistic: higher order asymptotics and Bartlett-type correction

Vargas, Tiago Moreira 15 April 2013 (has links)
We obtain an asymptotic expansion for the null distribution function of the gradient statistic for testing composite null hypotheses in the presence of nuisance parameters. The expansion is derived using a Bayesian route based on the shrinkage argument described in Ghosh and Mukerjee (1991). Using this expansion, we propose a Bartlett-type corrected gradient statistic, which has a chi-square distribution up to an error of order o(n^-1) under the null hypothesis. From this, we derive matrix and algebraic formulas that assist in obtaining the Bartlett-type corrected statistic in generalized linear models with known and unknown dispersion. Monte Carlo simulations are presented. Finally, we obtain credible regions by inverting the gradient statistic, and we characterize the prior densities (matching priors) that ensure accurate frequentist coverage properties for these regions.
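For intuition, the gradient statistic S = U(theta0)' (thetahat - theta0) can be computed in closed form in the simplest case of testing the mean of a N(mu, 1) model, where it reduces to n(xbar - mu0)^2 and is chi-square with one degree of freedom under the null (an illustrative sketch only, not the thesis's generalized linear model setting):

```python
import numpy as np

def gradient_statistic(x, mu0):
    """Gradient statistic S = U(mu0) * (muhat - mu0) for testing
    H0: mu = mu0 in a N(mu, 1) model.  The score is
    U(mu0) = sum(x_i - mu0) and the MLE is the sample mean, so
    S = n * (xbar - mu0)**2, chi-square(1) under H0."""
    xbar = np.mean(x)
    score = np.sum(x - mu0)          # score function at the null value
    return score * (xbar - mu0)      # equals n * (xbar - mu0)**2

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, size=50)
S = gradient_statistic(x, mu0=0.0)
```

In this toy case the gradient, Wald, score, and likelihood-ratio statistics all coincide; the Bartlett-type correction of the thesis matters in the general composite-hypothesis setting, where they differ to higher order.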
33

Segmentation of 3D Carotid Ultrasound Images Using Weak Geometric Priors

Solovey, Igor January 2010 (has links)
Vascular diseases are among the leading causes of death in Canada and around the globe. A major underlying cause of most such medical conditions is atherosclerosis, a gradual accumulation of plaque on the walls of blood vessels. Particularly vulnerable to atherosclerosis is the carotid artery, which carries blood to the brain. Dangerous narrowing of the carotid artery can lead to embolism, a dislodgement of plaque fragments which travel to the brain and are the cause of most strokes. If this pathology can be detected early, such a deadly scenario can be potentially prevented through treatment or surgery. This not only improves the patient's prognosis, but also dramatically lowers the overall cost of their treatment. Medical imaging is an indispensable tool for early detection of atherosclerosis, in particular since the exact location and shape of the plaque need to be known for accurate diagnosis. This can be achieved by locating the plaque inside the artery and measuring its volume or texture, a process which is greatly aided by image segmentation. In particular, the use of ultrasound imaging is desirable because it is a cost-effective and safe modality. However, ultrasonic images depict sound-reflecting properties of tissue, and thus suffer from a number of unique artifacts not present in other medical images, such as acoustic shadowing, speckle noise and discontinuous tissue boundaries. A robust ultrasound image segmentation technique must take these properties into account. Prior to segmentation, an important pre-processing step is the extraction of a series of features from the image via application of various transforms and non-linear filters. A number of such features are explored and evaluated, many of them resulting in piecewise smooth images. It is also proposed to decompose the ultrasound image into several statistically distinct components. 
These components can then be used as features directly, or other features can be obtained from them instead of from the original image. The decomposition scheme is derived using a maximum-a-posteriori (MAP) estimation framework and is efficiently computable. Furthermore, this work presents and evaluates an algorithm for segmenting the carotid artery from other tissues in 3D ultrasound images. The algorithm incorporates information from different sources using an energy minimization framework. Using the ultrasound image itself, statistical differences between the region of interest and its background are exploited, and maximal overlap with strong image edges is encouraged. To aid convergence to anatomically accurate shapes, as well as to deal with the above-mentioned artifacts, prior knowledge is incorporated into the algorithm in the form of weak geometric priors. The performance of the algorithm is tested on a number of available 3D images, and encouraging results are obtained and discussed.
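As a toy illustration of the region-statistics idea behind such energy-minimization segmentation (not the thesis's actual energy, which also includes edge terms and weak geometric priors), a Chan-Vese-style data term models each region by its mean intensity and penalizes deviation from that mean:

```python
import numpy as np

def region_energy(image, mask, lam=1.0):
    """Chan-Vese-style data term: each region (inside / outside the mask)
    is summarized by its mean intensity, and the energy is the summed
    squared deviation from those means."""
    inside, outside = image[mask], image[~mask]
    c_in = inside.mean() if inside.size else 0.0
    c_out = outside.mean() if outside.size else 0.0
    return lam * (np.sum((inside - c_in) ** 2) + np.sum((outside - c_out) ** 2))

img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0   # bright square on dark background
good = np.zeros_like(img, bool); good[8:24, 8:24] = True   # aligned mask
bad = np.zeros_like(img, bool); bad[0:16, :] = True        # misplaced mask
e_good = region_energy(img, good)
e_bad = region_energy(img, bad)
```

Minimizing such an energy over candidate masks drives the segmentation toward statistically homogeneous regions, which is why the correct mask scores lower here.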
34

Essays in Financial Econometrics

Jeong, Dae Hee 14 January 2010 (has links)
I consider continuous time asset pricing models with stochastic differential utility that incorporate decision makers' concern with ambiguity about the true probability measure. To identify and estimate key parameters of these models, I use a novel econometric methodology recently developed by Park (2008) for statistical inference on continuous time conditional mean models. The methodology imposes only the condition that the pricing error is a continuous martingale to achieve identification, and it obtains consistent and asymptotically normal estimates of the unknown parameters. Under a representative agent setting, I empirically evaluate alternative preference specifications, including a multiple-prior recursive utility. My empirical findings are summarized as follows. Relative risk aversion is estimated at around 1.5-5.5 with ambiguity aversion and 6-14 without it. Relatedly, the estimated ambiguity aversion is both economically and statistically significant, and including ambiguity aversion clearly lowers relative risk aversion. The elasticity of intertemporal substitution (EIS) is higher than 1, around 1.3-22 with ambiguity aversion, and quite high without it. The identification of the EIS appears to be fairly weak, as observed by many previous authors, though other aspects of my empirical results seem quite robust. Next, I develop an approach to testing for a martingale in a continuous time framework. The approach yields various test statistics that are consistent against a wide class of nonmartingale semimartingales. A novel aspect of my approach is the use of a time change defined by the inverse of the quadratic variation of the semimartingale to be tested for the martingale hypothesis. With this time change, a continuous semimartingale reduces to Brownian motion if and only if it is a continuous martingale. This follows immediately from the celebrated theorem of Dambis, Dubins and Schwarz. 
To test for a martingale, I may therefore check whether the given process becomes Brownian motion after the time change. I use several existing tests for multivariate normality to test whether the time-changed process is indeed Brownian motion. I provide asymptotic theories for my test statistics under the assumption that the sampling interval decreases while the time horizon expands. The stationarity of the underlying process is not assumed, so the results are also applicable to nonstationary processes. A Monte Carlo study shows that the tests perform very well for a wide range of realistic alternatives and have greater power than other discrete time tests.
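The time-change idea can be sketched numerically: resample the path at the points where its realized quadratic variation crosses an equally spaced grid, then inspect the increments of the resampled path (a rough discrete illustration of the Dambis-Dubins-Schwarz construction, not the thesis's test statistics):

```python
import numpy as np

def dds_time_change(path, n_out=100):
    """Discrete sketch of the Dambis-Dubins-Schwarz time change: resample
    the path at the times where its realized quadratic variation crosses an
    equally spaced grid.  For a continuous martingale, the time-changed
    increments should look like i.i.d. Brownian (normal) increments."""
    qv = np.concatenate([[0.0], np.cumsum(np.diff(path) ** 2)])
    grid = np.linspace(0.0, qv[-1], n_out + 1)
    idx = np.clip(np.searchsorted(qv, grid), 0, len(path) - 1)
    return path[idx]

rng = np.random.default_rng(2)
bm = np.cumsum(rng.normal(scale=0.01, size=100_000))  # Brownian-like path
z = np.diff(dds_time_change(bm, n_out=200))
# By construction each increment carries (approximately) an equal share of
# the total quadratic variation, so sum(z**2) is close to qv_total.
```

A normality test applied to `z` would then serve as a proxy for the martingale hypothesis, which is the route the thesis formalizes.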
35

Non-local active contours

Appia, Vikram VijayanBabu 17 May 2012 (has links)
This thesis deals with image segmentation problems that arise in various computer vision related fields such as medical imaging, satellite imaging, video surveillance, recognition and robotic vision. More specifically, it deals with a special class of image segmentation techniques called snakes or active contour models. In active contour models, image segmentation is posed as an energy minimization problem, where an objective energy function (based on certain image related features) is defined on the segmenting curve (contour). Typically, a gradient descent energy minimization approach is used to drive the initial contour towards a minimum of the defined energy. The drawback of this approach is that the contour tends to get stuck at undesired local minima caused by subtle or spurious image features and edges. Thus, active contour based curve evolution approaches are very sensitive to initialization and noise. The central theme of this thesis is to develop techniques that make active contour models robust against certain classes of local minima by incorporating global information into the energy minimization. These techniques lead to energy minimization with global considerations; we call these models 'non-local active contours'. In this thesis, we consider three widely used active contour models: 1) an edge- and region-based segmentation model, 2) a prior shape knowledge based segmentation model, and 3) a motion segmentation model. We analyze the traditional techniques used for these models and establish the need for robust models that avoid local minima. We address the local minima problem for each model by adding global image considerations.
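A minimal sketch of one explicit gradient-descent step of a parametric snake illustrates the energy-minimization framework the thesis builds on (hypothetical code with only an elasticity internal force and a generic external force; the thesis's models are considerably more elaborate):

```python
import numpy as np

def snake_step(pts, ext_grad_r, ext_grad_c, alpha=0.2, tau=0.5):
    """One explicit gradient-descent step for a closed parametric snake.
    pts is an (N, 2) polygon in (row, col) coordinates.  The internal
    elasticity force pulls each point toward the midpoint of its two
    neighbors; the external force descends the image energy, whose
    per-pixel gradients are supplied in ext_grad_r / ext_grad_c."""
    left, right = np.roll(pts, 1, axis=0), np.roll(pts, -1, axis=0)
    internal = (left + right) / 2.0 - pts
    r = np.clip(pts[:, 0].astype(int), 0, ext_grad_r.shape[0] - 1)
    c = np.clip(pts[:, 1].astype(int), 0, ext_grad_r.shape[1] - 1)
    external = -np.stack([ext_grad_r[r, c], ext_grad_c[r, c]], axis=1)
    return pts + tau * (alpha * internal + external)

# With zero external energy, a step only smooths/shrinks the contour.
zero = np.zeros((20, 20))
square = np.array([[0.0, 0.0], [0.0, 10.0], [10.0, 10.0], [10.0, 0.0]])
smoothed = snake_step(square, zero, zero)
```

Because each point follows only the local energy gradient, iterating such steps can stall at spurious local minima, which is precisely the failure mode the non-local models in this thesis are designed to avoid.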
37

Un nouvel a priori de formes pour les contours actifs / A new shape prior for active contour model

Ahmed, Fareed 14 February 2014 (has links)
Active contours are among the most widely used image segmentation methods, and many implementations have appeared over the past 25 years. Among them, the greedy algorithm is regarded as one of the fastest and most stable. No matter which implementation is employed, however, segmentation results suffer greatly in the presence of occlusion, contextual noise, concavities, or abnormal deformation of the shape. If prior knowledge about the shape of the object is available, adding it to an existing model can greatly improve the segmentation results. This thesis incorporates such shape constraints into an explicit active contour model. The shape priors are introduced through robust Fourier-based descriptors, which make them invariant to translation, scaling, and rotation and enable the deformable model to converge toward the prior shape even in the presence of occlusion and contextual noise. Unlike most existing methods, which compare the reference shape and the evolving contour in the spatial domain by applying inverse transforms, the proposed method performs this comparison entirely in the descriptor space. This not only decreases the computation time but also makes the method independent of the number of control points chosen to describe the active contour. The formulation may, however, introduce anomalies in the phase of the descriptors, which affects rotation invariance. This problem is solved by an original algorithm. Experimental results clearly indicate that including these shape priors significantly improves the segmentation results of the active contour model.
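The invariances provided by Fourier descriptors can be sketched as follows (illustrative code; it sidesteps the phase problem by keeping only coefficient magnitudes, whereas the thesis retains phase and corrects it with a dedicated algorithm):

```python
import numpy as np

def fourier_descriptors(contour, k=16):
    """Normalized Fourier descriptors of a closed 2D contour of shape (N, 2).
    Translation invariance: drop the DC coefficient.  Scale invariance:
    divide by the magnitude of the first coefficient.  Rotation and
    starting-point invariance: keep only magnitudes."""
    z = contour[:, 0] + 1j * contour[:, 1]   # contour as complex signal
    F = np.fft.fft(z)
    F[0] = 0.0                               # remove translation
    F = F / np.abs(F[1])                     # remove scale
    return np.abs(F)[1:k + 1]                # magnitudes: rotation-invariant

theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
ellipse = np.stack([2 * np.cos(theta), np.sin(theta)], axis=1)
d1 = fourier_descriptors(ellipse)
# Same shape after rotation, scaling, and translation:
rot = np.array([[np.cos(0.7), -np.sin(0.7)], [np.sin(0.7), np.cos(0.7)]])
d2 = fourier_descriptors(3.0 * ellipse @ rot.T + np.array([5.0, -2.0]))
```

Comparing `d1` and `d2` directly in descriptor space, with no inverse transform back to the image domain, is the efficiency the thesis exploits; discarding phase loses information the thesis instead recovers via its phase-correction algorithm.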
39

Uma priori beta para distribuição binomial negativa / A beta prior for the negative binomial distribution

OLIVEIRA, Cícero Carlos Felix de 08 July 2011 (has links)
This dissertation deals with a discrete distribution based on Bernoulli trials, the negative binomial distribution. The main objective is to propose a new non-informative prior distribution for the negative binomial model, termed here the possible Beta(0, 0) prior, which is an improper distribution. For the binomial model this distribution is known as the Haldane prior, but for the negative binomial model there have been no studies to date. The behavior of this prior is studied in both Bayesian and classical contexts. The idea of using a non-informative prior is the desire to base statistical inference on as little subjective prior information as possible, which makes it possible to compare the results with classical inference that uses only sample information, for example the maximum likelihood estimator. When the Beta(0, 0) distribution is compared with the Bayes-Laplace and Jeffreys priors, based on the Bayesian estimators (posterior mean and posterior mode) and on the maximum likelihood estimator, the possible Beta(0, 0) prior turns out to be less informative than the others. It is also verified that this prior is a bounded distribution on the parameter space, an important feature for a non-informative prior. The main argument shows that the possible Beta(0, 0) prior is adequate when applied to the posterior predictive distribution for the negative binomial model, leading to a beta-negative binomial distribution (which corresponds to a hypergeometric multiplied by a probability). All these observations are supported by several studies, including the basic concepts of Bayesian inference and of the negative binomial and beta-negative binomial (a mixture of the beta and the negative binomial) distributions.
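The conjugacy that makes such priors easy to study can be sketched as follows (an illustrative sketch under the parameterization "failures counted before the r-th success", which is an assumption here; note that under the Haldane-type Beta(0, 0) prior the posterior mean reproduces the maximum likelihood estimate, consistent with it being the least informative of the three):

```python
import numpy as np

def nb_posterior(a, b, r, failures):
    """Beta(a, b) prior for the success probability p of a negative
    binomial model (failures counted before the r-th success).
    Conjugacy gives the posterior Beta(a + n*r, b + sum(failures));
    a = b = 0 is the improper Haldane-type Beta(0, 0) prior."""
    n = len(failures)
    a_post = a + n * r
    b_post = b + sum(failures)
    return a_post, b_post, a_post / (a_post + b_post)  # last: posterior mean

data = [3, 5, 2, 4]                       # failures before r = 2 successes
priors = {"Haldane Beta(0, 0)": (0.0, 0.0),
          "Jeffreys Beta(0, 1/2)": (0.0, 0.5),  # p^-1 (1-p)^-1/2 for this model
          "Bayes-Laplace Beta(1, 1)": (1.0, 1.0)}
for name, (a, b) in priors.items():
    print(name, nb_posterior(a, b, 2, data))
# Under Beta(0, 0) the posterior mean equals the MLE n*r / (n*r + sum(k)).
```

Even though Beta(0, 0) is improper, the posterior is proper as soon as at least one success and one failure are observed, which is what makes the prior usable in practice.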
40

HIGH SPEED IMAGING VIA ADVANCED MODELING

Soumendu Majee (10942896) 04 August 2021 (has links)
There is an increasing need to image objects accurately at high temporal resolution in order to analyze the underlying physical, chemical, or biological processes. In this thesis, we use advanced models that exploit the image structure and the measurement process to achieve improved temporal resolution. The thesis is divided into three chapters, each corresponding to a different imaging application.

In the first chapter, we propose a novel method to localize neurons in fluorescence microscopy images. Accurate localization of neurons enables us to scan only the neuron locations instead of the full brain volume and thus improve the temporal resolution of neuron activity monitoring. We formulate neuron localization as an inverse problem in which we reconstruct an image that encodes the locations of the neuron centers. The sparsity of the neuron centers serves as a prior model, while the forward model comprises shape models estimated from training data.

In the second chapter, we introduce multi-slice fusion, a novel framework for incorporating advanced prior models into inverse problems spanning many dimensions, such as 4D computed tomography (CT) reconstruction. State-of-the-art 4D reconstruction methods use model-based iterative reconstruction (MBIR), which depends critically on the quality of the prior model. Incorporating deep convolutional neural networks (CNNs) into the 4D reconstruction problem is difficult due to computational cost and the lack of high-dimensional training data. Multi-slice fusion integrates the tomographic forward model with multiple low-dimensional CNN denoisers along different planes to produce a 4D regularized reconstruction. The improved regularization allows each time frame to be reconstructed from fewer measurements, resulting in improved temporal resolution. Experimental results on sparse-view and limited-angle CT data demonstrate that multi-slice fusion can substantially improve reconstruction quality relative to traditional methods, while remaining practical to implement and train.

In the final chapter, we introduce CodEx, a synergistic combination of coded acquisition and non-convex Bayesian reconstruction for improving acquisition speed in computed tomography (CT). In an ideal "step-and-shoot" tomographic acquisition, the object is rotated to each desired angle and the view is taken. However, step-and-shoot acquisition is slow and can waste photons, so in practice the object typically rotates continuously in time, leading to blurry views. This blur can result in reconstructions with severe motion artifacts. CodEx works by encoding the acquisition with a known binary code that the reconstruction algorithm then inverts. The CodEx reconstruction method uses the alternating direction method of multipliers (ADMM) to split the inverse problem into iterative deblurring and reconstruction sub-problems, making reconstruction practical. CodEx allows for fast data acquisition, leading to good temporal resolution in the reconstruction.
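The benefit of a well-chosen code can be sketched with a one-dimensional deconvolution toy (hypothetical code, not the CodEx implementation: a ±1 Barker code stands in for the binary shutter code, and a simple regularized FFT inversion stands in for the ADMM reconstruction):

```python
import numpy as np

def code_blur(x, code):
    """Forward model sketch: the acquisition is a circular convolution of
    the signal with the known code."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(code, len(x))))

def decode(y, code, eps=1e-6):
    """Regularized FFT-domain deconvolution, standing in for CodEx's ADMM
    deblurring sub-problem.  A flat (all-ones) blur has near-zero spectral
    values and is ill-posed; a good code keeps |H| bounded away from zero,
    so the inversion is stable."""
    H = np.fft.fft(code, len(y))
    return np.real(np.fft.ifft(np.fft.fft(y) * np.conj(H) /
                               (np.abs(H) ** 2 + eps)))

rng = np.random.default_rng(3)
x = rng.normal(size=64)
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)
x_hat = decode(code_blur(x, barker13), barker13)
```

The Barker code's flat-ish spectrum is what makes `x_hat` recover `x` accurately; an uncoded continuous-rotation blur would wipe out some frequencies entirely, which is the motivation for encoding the acquisition in the first place.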
