21

Unit root test of limited time series: empirical analysis in exchange rate target zone and Japan interbank interest rate

Ho, Ya-chi 26 June 2006 (has links)
Much economic and financial data are restricted by bounds, such as expenditure shares, unemployment rates, nominal interest rates, or target zone exchange rates. This thesis develops ways to interpret and analyze time series whose behavior can be well approximated by integrated processes, I(1), but which are "limited" in the sense that their range is constrained by fixed bounds. One method used here to analyze bounded variables is the bounded unit root test proposed by Cavaliere (2005); the other uses Gibbs sampling simulation to try to recover the hidden part of the variables. We examine some empirical problems that have often been tackled in the literature, taking three time series as examples: the Danish krone/Deutsche mark exchange rate, the Belgian franc/Deutsche mark exchange rate, and the Japanese one-month interbank interest rate. We conclude that all three series are I(0) in the classical unit root test framework but I(1) in the bounded unit root test framework. The Gibbs sampling simulations indicate that the Danish krone/Deutsche mark and Belgian franc/Deutsche mark rates are I(0), while the Japanese one-month interbank interest rate is I(1).
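
The core phenomenon the thesis exploits can be reproduced in a few lines. The sketch below (an illustration under assumed parameters, not the author's code) simulates a driftless random walk reflected at fixed bounds and applies a classical augmented Dickey-Fuller test from statsmodels; the bounds tend to make the test reject the unit root even though the shocks are those of an I(1) process, which is exactly the distortion Cavaliere's bounded unit root test corrects for.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Illustrative sketch (not the thesis code): a driftless random walk
# reflected at fixed bounds looks mean-reverting to a classical ADF test,
# even though the underlying shocks are those of an I(1) process.
rng = np.random.default_rng(0)

def bounded_random_walk(n, lower=-1.0, upper=1.0, sigma=0.05):
    x = np.empty(n)
    x[0] = 0.0
    for t in range(1, n):
        step = x[t - 1] + sigma * rng.standard_normal()
        # Reflect at the bounds, mimicking a target-zone intervention.
        if step > upper:
            step = 2 * upper - step
        elif step < lower:
            step = 2 * lower - step
        x[t] = step
    return x

series = bounded_random_walk(1000)
stat, pvalue, *_ = adfuller(series)
print(f"ADF statistic: {stat:.3f}, p-value: {pvalue:.4f}")
# A small p-value here would (wrongly) suggest I(0); Cavaliere's bounded
# unit root test adjusts the null distribution for the bounds.
```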
22

Monte Carlo Integration Using Importance Sampling and Gibbs Sampling

Hörmann, Wolfgang, Leydold, Josef January 2005 (has links) (PDF)
To evaluate the expectation of a simple function with respect to a complicated multivariate density, Monte Carlo integration has become the main technique. Gibbs sampling and importance sampling are the most popular methods for this task. In this contribution we propose a new, simple, general-purpose importance sampling procedure. In a simulation study we compare its performance with that of Gibbs sampling and of importance sampling using a vector of independent variates. It turns out that the new procedure is much better than independent importance sampling; up to dimension five it is also better than Gibbs sampling. The simulation results indicate that for higher dimensions Gibbs sampling is superior. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
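
For readers unfamiliar with the setup, the following minimal sketch shows self-normalized importance sampling with a vector of independent variates, the baseline the paper compares against; the target, integrand, and proposal are assumptions chosen for illustration, not the authors' proposed procedure.

```python
import numpy as np
from scipy import stats

# Minimal importance-sampling sketch: estimate E[f(X)] for X ~ p using
# draws from a tractable proposal q, weighting each draw by p(x)/q(x).
rng = np.random.default_rng(1)

# Target: a correlated bivariate normal; integrand f(x) = x1 + x2**2.
target = stats.multivariate_normal(mean=[0.0, 0.0],
                                   cov=[[1.0, 0.8], [0.8, 1.0]])
f = lambda x: x[:, 0] + x[:, 1] ** 2

# Proposal: independent normals with inflated variance for safe tails.
proposal = stats.multivariate_normal(mean=[0.0, 0.0],
                                     cov=[[2.0, 0.0], [0.0, 2.0]])
x = proposal.rvs(size=100_000, random_state=rng)
w = target.pdf(x) / proposal.pdf(x)          # importance weights
estimate = np.sum(w * f(x)) / np.sum(w)      # self-normalized estimator
print(f"IS estimate of E[x1 + x2^2]: {estimate:.4f}  (exact: 1.0)")
```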
23

Speeding Up Gibbs Sampling in Probabilistic Optical Flow

Piao, Dongzhen 01 December 2014 (has links)
In today’s machine learning research, probabilistic graphical models are used extensively to model complicated systems with uncertainty, to aid understanding of problems, and to support inference and prediction of unknown events. For inference tasks, exact methods such as the junction tree algorithm exist, but they suffer from exponential growth in cluster size and thus cannot handle large, highly connected graphs. Approximate inference methods do not try to compute exact probabilities, but rather give results that improve as the algorithm runs. Gibbs sampling, one such approximate inference method, has gained a lot of traction and is used extensively in inference tasks due to its ease of understanding and implementation. However, as problem size grows, even this fast algorithm needs a speed boost to meet application requirements: the number of variables in an application graphical model can range from tens of thousands to billions, depending on the problem domain, and the original sequential Gibbs sampling may not return satisfactory results in limited time. In this thesis, we therefore investigate ways to speed up Gibbs sampling. We study better initialization, blocking variables to be sampled together, and simulated annealing; these methods modify the algorithm itself. We also investigate ways to parallelize the algorithm: an algorithm is parallelizable if some steps do not depend on other steps, and we identify such dependencies in Gibbs sampling. We discuss how choices of hardware and software architecture affect the parallelization results. We use the optical flow problem as an example to demonstrate the various speed-up methods investigated. An optical flow method tries to find the movements of small image patches between two images in a temporal sequence. We demonstrate how to model it with a probabilistic graphical model and solve it using Gibbs sampling. Results with sequential Gibbs sampling are presented, with comparisons against the various speed-up methods and other optical flow methods.
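
As background for the speed-up discussion, here is a minimal sequential Gibbs sampler for a bivariate normal (an assumed illustration, not the thesis model): each variable is drawn from its full conditional in turn, which is precisely the dependency structure that blocking and parallelization attack.

```python
import numpy as np

# Minimal Gibbs-sampling sketch: sample from a standard bivariate normal
# with correlation rho by drawing each coordinate from its full
# conditional in turn. Blocking would draw both coordinates jointly;
# parallel schemes update conditionally independent variables at once.
rng = np.random.default_rng(2)
rho = 0.9
n_samples = 10_000

x = np.zeros(2)
samples = np.empty((n_samples, 2))
for i in range(n_samples):
    # Full conditionals: x0 | x1 ~ N(rho * x1, 1 - rho^2), and
    # symmetrically for x1 | x0.
    x[0] = rho * x[1] + np.sqrt(1 - rho**2) * rng.standard_normal()
    x[1] = rho * x[0] + np.sqrt(1 - rho**2) * rng.standard_normal()
    samples[i] = x

print("empirical correlation:", np.corrcoef(samples.T)[0, 1].round(3))
```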
24

A Three-Category Qual VAR Model: An Application to Forecasting the US Business Cycle

蔡郁敏, Tsai, Yu-Min Unknown Date (has links)
Pursuing stable long-run economic growth is a goal of every country. In the course of economic development, external shocks often cause business cycle fluctuations, and large short-run fluctuations hinder the stable development of the economy, because household consumption, firms' investment decisions, and the planning and implementation of government policy are all deeply affected by changes in the business cycle. Accurately forecasting the direction of the business cycle is therefore of great concern to economists, governments, and the general public. Estimating the duration of expansions and recessions is not easy: according to data from the National Bureau of Economic Research (NBER), the longest post-World War II expansion in the United States lasted 106 months and the shortest only 12 months, while the shortest recession lasted 6 months and the longest 16 months; before the war the variation was even larger. Because the signs preceding a change in the business cycle are not very pronounced, many economists have examined and analyzed the business cycle from various angles. This thesis uses an ordered probit model to extend and apply Dueker (2005), expanding his original two categories (recession, expansion) to three (recession, indeterminate state, expansion), embedding them in a Qual VAR model and using Gibbs sampling to simulate the unknown parameters and variables. Through statistical analysis, we hope to offer a more detailed interpretation of the business cycle; relative to the two-category model, the aim is to provide a more complete and precise description of macroeconomic conditions. Using the business cycle turning points published by the NBER together with other indicators, the cycle is divided into three categories; the Qual VAR model is used to simulate a three-category business cycle indicator, forecasts are produced from this indicator, and the US business cycle under the two- and three-category classifications is compared. The results show that the three-category model successfully predicts the state of US economic expansion from the first quarter of 2002 to the third quarter of 2003, that it judges the state of the business cycle more precisely than the two-choice model, and that the added category provides a new interpretation of business cycle movements, helping people make more appropriate decisions.
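
To make the ordered probit extension concrete, the sketch below shows the data-augmentation step such a Gibbs sampler typically uses for three categories; the cutpoints, predictors, and category sequence are hypothetical values for illustration, not taken from the thesis.

```python
import numpy as np
from scipy.stats import truncnorm

# Sketch of one data-augmentation step for a three-category ordered
# probit, as used inside a Qual VAR Gibbs sampler. Given cutpoints
# c1 < c2 and the current linear predictor mu_t, the latent
# business-cycle index y*_t is drawn from a normal truncated to the
# interval implied by its observed category:
# recession (-inf, c1], indeterminate (c1, c2], expansion (c2, inf).

def draw_latent_index(mu, category, c1=-0.5, c2=0.5):
    bounds = {0: (-np.inf, c1), 1: (c1, c2), 2: (c2, np.inf)}
    lo, hi = bounds[category]
    # truncnorm takes bounds in standard-deviation units around loc.
    return truncnorm.rvs(lo - mu, hi - mu, loc=mu, scale=1.0)

categories = [0, 1, 2, 2, 1]               # example observed regimes
mu = np.array([-1.0, 0.0, 0.8, 1.2, 0.1])  # example linear predictors
latent = [draw_latent_index(m, c) for m, c in zip(mu, categories)]
print(np.round(latent, 3))
```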
25

Price discovery using a regime-sensitive cointegration approach

Hinterholz, Eduardo Mathias January 2015 (has links)
This work proposes a method to examine variations in the cointegration relation between preferred and common stocks in the Brazilian stock market via Markovian regime switches. It aims to contribute to future work in 'pairs trading' and, more specifically, to price discovery, given that, conditional on the state, the system is assumed stationary. This implies there exists a (conditional) moving average representation from which measures of 'information share' (IS) can be extracted. For identification purposes, the Markov error correction model is estimated within a Bayesian MCMC framework. Inference and the ability to detect regime changes are demonstrated using a Monte Carlo experiment. I also highlight the necessity of modeling financial effects of high-frequency data for reliable inference.
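
As a rough picture of what "regime-sensitive cointegration" means, the toy simulation below (an assumed illustration, not the dissertation's model) lets two prices track a common random-walk efficient price while the speed of error correction switches with a hidden two-state Markov chain.

```python
import numpy as np

# Toy two-regime Markov-switching error-correction pair: two prices
# share a random-walk "efficient price", and the speed at which they
# error-correct toward it switches with a hidden Markov state.
rng = np.random.default_rng(3)
P = np.array([[0.98, 0.02], [0.05, 0.95]])   # regime transition matrix
alpha = {0: 0.05, 1: 0.4}                    # adjustment speed per regime

n = 2000
s = 0
m = 0.0                                       # common efficient price
p1 = p2 = 0.0
data = np.empty((n, 3))
for t in range(n):
    s = rng.choice(2, p=P[s])                 # hidden regime
    m += 0.01 * rng.standard_normal()         # random-walk component
    # Each price error-corrects toward m at a regime-dependent speed.
    p1 += alpha[s] * (m - p1) + 0.005 * rng.standard_normal()
    p2 += alpha[s] * (m - p2) + 0.005 * rng.standard_normal()
    data[t] = (s, p1, p2)

print("time in fast-adjusting regime:", (data[:, 0] == 1).mean().round(3))
```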
26

Análise Bayesiana da área de olho do lombo e da espessura de gordura obtidas por ultrassom e suas associações com outras características de importância econômica na raça Nelore

Yokoo, Marcos Jun Iti [UNESP] 23 July 2009 (has links) (PDF)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / The objective of this work was to estimate genetic parameters for the traits longissimus muscle area (LMA), backfat thickness (BF) and rump fat thickness (RF), measured by real-time ultrasound at 12 (Y) and 18 (S) months of age. In addition, this study aimed to estimate the genetic correlations between these carcass traits measured by real-time ultrasound (CTUS) and other economically important traits in beef cattle, i.e., weight (W), hip height (HH) and scrotal circumference (SC450) at 18 months of age, age at first calving (AFC) and first calving interval (FCI). The genetic parameters were estimated in multi-trait analyses, with animal models, by Bayesian inference using the Gibbs sampling algorithm. The heritability estimates for LMA (Y and S), BF (Y and S) and RF (Y and S) were 0.46 and 0.33, 0.42 and 0.59, and 0.60 and 0.55, respectively, showing that if these traits are used as selection criteria they should respond quickly to individual selection, without causing antagonism in the selection of SC450, W (Y and S) and AFC. The a posteriori heritability estimates for AFC and FCI were moderate to low, 0.26 and 0.11, respectively. The HH showed negative genetic correlations (rg) with BF_S (-0.38) and RF_S (-0.32), suggesting that long-term selection for taller animals would tend to produce animals with less subcutaneous fat, i.e. later-maturing in terms of carcass finishing. Selection to improve CTUS, FCI and SC450 will not affect AFC; however, heavier and taller animals tend to be more sexually precocious (rg ranged between -0.22 and -0.44). Except for BF_S (rg = 0.40), selection for CTUS and growth traits will not affect FCI, by correlated response.
27

A Bayesian model of coincidences in listing processes

Reis, Juliana Coutinho dos 03 May 2006 (has links)
Financiadora de Estudos e Projetos / In this work we present a Bayesian methodology to estimate the number of coincident individuals in two lists, considering the occurrence of correct and incorrect entries in the registration information of each individual present in the lists. We adopt three different priors for the number of coincident pairs and study their performance on simulated data. Due to the difficulties found in choosing the hyperparameters of this model, we present a hierarchical Bayesian model as a solution to this problem and verify its adequacy through the estimates obtained for simulated data.
28

The application and interpretation of the two-parameter item response model in the context of replicated preference testing

Button, Zach January 1900 (has links)
Master of Science / Statistics / Suzanne Dubnicka / Preference testing is a popular method of determining consumer preferences for a variety of products in areas such as sensory analysis, animal welfare, and pharmacology. However, many prominent models for this type of data do not allow each individual consumer a different probability of preferring one product over the other, a source of overdispersion that intuitively exists in real-world situations. We investigate the Two-Parameter variation of the Item Response Model (IRM) in the context of replicated preference testing. Because the IRM is most commonly applied to multiple-choice testing, our primary focus is the interpretation of the model parameters with respect to preference testing and the evaluation of the model’s usefulness in this context. We fit a Bayesian version of the Two-Parameter Probit IRM (2PP) to two real-world datasets, Raisin Bran and Cola, as well as five hypothetical datasets constructed with specific parameter properties in mind. The parameter values are sampled via the Gibbs sampler and examined using various plots of the posterior distributions. Next, several different models and prior distribution specifications are compared over the Raisin Bran and Cola datasets using the Deviance Information Criterion (DIC). The Two-Parameter IRM is a useful tool in the context of replicated preference testing, due to its ability to accommodate overdispersion, its intuitive interpretation, and its flexibility in terms of parameterization, link function, and prior specification. However, we find that this model brings computational difficulties in certain situations, some of which require creative solutions. Although the IRM can be interpreted for replicated preference testing scenarios, these data typically contain few replications, while the model was designed for exams with many items. We conclude that the IRM may provide little evidence for marketing decisions, and it is better suited for exploring the nature of consumer preferences early in product development.
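
A minimal simulation makes the overdispersion point concrete. In the sketch below (assumed parameter values, not the thesis data), each consumer's latent preference theta_i enters the 2PP choice probability Phi(a_j * theta_i - b_j), and the resulting preference counts are more variable than a single-probability binomial model would predict.

```python
import numpy as np
from scipy.stats import norm

# Sketch of the Two-Parameter Probit (2PP) model for replicated
# preference testing. Consumer i has latent preference theta_i;
# replication j has discrimination a_j and threshold b_j, and the
# probability of choosing product A is Phi(a_j * theta_i - b_j).
# Varying theta across consumers produces overdispersion relative to
# a single-p binomial model.
rng = np.random.default_rng(4)
n_consumers, n_reps = 200, 4

theta = rng.normal(0.0, 1.0, size=n_consumers)   # consumer preferences
a = np.ones(n_reps)                              # discriminations
b = np.zeros(n_reps)                             # thresholds

p = norm.cdf(a * theta[:, None] - b)             # choice probabilities
choices = rng.random((n_consumers, n_reps)) < p  # replicated choices

counts = choices.sum(axis=1)
p_hat = counts.mean() / n_reps
binom_var = n_reps * p_hat * (1 - p_hat)         # binomial benchmark
print(f"observed variance of counts: {counts.var():.3f}, "
      f"binomial variance: {binom_var:.3f}")     # observed > binomial
```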
29

On the Construction of an Automatic Traffic Sign Recognition System

Jonsson, Fredrik January 2017 (has links)
This thesis proposes an automatic road sign recognition system, covering all steps from the initial detection of road signs in a digital image to the final recognition step that determines the class of the sign. We develop a Bayesian approach for image segmentation in the detection step using colour information in the HSV (hue, saturation, value) colour space. The image segmentation uses a probability model constructed from manually extracted data on road sign colours collected from real images. We show how the colour data are fitted using mixtures of multivariate normal distributions, with parameters estimated via Gibbs sampling. The fitted models are then used to find the (posterior) probability that a pixel colour belongs to a road sign using the Bayesian approach. Following the image segmentation, regions of interest (ROIs) are detected using the Maximally Stable Extremal Region (MSER) algorithm, followed by classification of the ROIs using a cascade of classifiers. Synthetic images are used to train the classifiers, by applying various random distortions to a set of template images covering most road signs in Sweden, and we demonstrate that such synthetic images provide satisfactory recognition rates. We focus on a large set of the signs on the Swedish road network, comprising almost 200 road signs. We use classification models such as the Support Vector Machine (SVM) and Random Forest (RF), with Histograms of Oriented Gradients (HOG) as features.
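
The segmentation idea can be sketched compactly with off-the-shelf mixture fitting. The example below (an assumed illustration using synthetic stand-in colours, not the thesis data or code) fits one Gaussian mixture to sign colours and one to background colours, then scores pixels by the posterior probability of belonging to a sign.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit one Gaussian mixture to HSV colours sampled from road signs and
# one to background colours, then score each pixel by the posterior
# probability that it belongs to a sign (Bayes rule in log space).
rng = np.random.default_rng(5)

# Stand-in training data; in the thesis these are colours extracted
# manually from real images.
sign_hsv = rng.normal([0.0, 0.8, 0.7], 0.05, size=(500, 3))  # red-ish signs
bg_hsv = rng.uniform(0.0, 1.0, size=(2000, 3))               # background

gmm_sign = GaussianMixture(n_components=3, random_state=0).fit(sign_hsv)
gmm_bg = GaussianMixture(n_components=5, random_state=0).fit(bg_hsv)
prior_sign = 0.05  # assumed prior fraction of sign pixels

def posterior_sign(pixels):
    log_ls = gmm_sign.score_samples(pixels) + np.log(prior_sign)
    log_lb = gmm_bg.score_samples(pixels) + np.log(1 - prior_sign)
    return 1.0 / (1.0 + np.exp(log_lb - log_ls))

test = np.array([[0.01, 0.82, 0.68], [0.5, 0.2, 0.9]])
print(posterior_sign(test).round(3))  # high for the sign-like colour
```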
30

Advances in computational Bayesian statistics and the approximation of Gibbs measures

Ridgway, James 17 September 2015 (has links)
This PhD thesis brings together several methods for computing estimators in Bayesian statistics. I start with problems stemming from the standard Bayesian paradigm, where estimators take the form of integrals with respect to the posterior distribution. Next, relaxing the assumptions made at the modeling stage, I study estimators that replicate the statistical properties of the minimizer of the theoretical classification or ranking risk, without modeling the data-generating process; this leads to a Gibbs posterior. Although different in aspect, both approaches require the numerical computation of high-dimensional integrals, and the greater part of this thesis is devoted to developing such methods in specific contexts.
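
Since the Gibbs posterior may be unfamiliar, the sketch below illustrates the idea on a toy problem (all details are assumptions for illustration, not the thesis's constructions): the likelihood is replaced by an exponentiated negative empirical classification risk, and the resulting quasi-posterior is sampled with random-walk Metropolis.

```python
import numpy as np

# Gibbs-posterior sketch: instead of a likelihood, exponentiate the
# negative empirical classification risk R_n(theta) and sample theta
# with random-walk Metropolis.
rng = np.random.default_rng(6)

# Toy 1-D classification data: label = 1 if x exceeds a true threshold.
x = rng.normal(size=200)
y = (x > 0.3).astype(int)

def empirical_risk(theta):
    return np.mean((x > theta).astype(int) != y)  # 0-1 loss

lam = 50.0      # temperature; larger concentrates the Gibbs posterior
theta, samples = 0.0, []
log_post = -lam * empirical_risk(theta)
for _ in range(5000):
    prop = theta + 0.1 * rng.standard_normal()
    lp = -lam * empirical_risk(prop)
    if np.log(rng.random()) < lp - log_post:
        theta, log_post = prop, lp
    samples.append(theta)

print("posterior mean threshold:", np.mean(samples[1000:]).round(3))
```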
