121

Essays on economic and econometric applications of Bayesian estimation and model comparison

Li, Guangjie January 2009 (has links)
This thesis consists of three chapters on economic and econometric applications of Bayesian parameter estimation and model comparison. The first two chapters study the incidental parameter problem, mainly under a linear autoregressive (AR) panel data model with fixed effects. The first chapter investigates the problem from a model comparison perspective. Its major finding is that consistency in parameter estimation and consistency in model selection are interrelated. The reparameterization of the fixed effect parameter proposed by Lancaster (2002) may not provide a valid solution to the incidental parameter problem if the wrong set of exogenous regressors is included. To estimate the model consistently and to measure its goodness of fit, the Bayes factor is found to be preferable for model comparison to the Bayesian information criterion based on the biased maximum likelihood estimates. When model uncertainty is substantial, Bayesian model averaging is recommended. The method is applied to study the relationship between financial development and economic growth. The second chapter proposes a correction function approach to solve the incidental parameter problem. The correction function is shown to exist for the linear AR panel model of order p when the model is stationary with strictly exogenous regressors. MCMC algorithms are developed for parameter estimation and for calculating the Bayes factor for model comparison. The last chapter studies how stock return predictability and model uncertainty affect a rational buy-and-hold investor's decision to allocate her wealth over investment horizons of different lengths in the UK market. The FTSE All-Share Index is treated as the risky asset, and the UK Treasury bill as the riskless asset, in forming the investor's portfolio. Bayesian methods are employed to identify the most powerful predictors while accounting for model uncertainty. It is found that although stock return predictability is weak, it can still affect the investor's optimal portfolio decisions over different investment horizons.
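As an illustration of the model-comparison machinery these chapters rely on, the sketch below converts log marginal likelihoods of candidate models into Bayes factors and Bayesian-model-averaging weights. It is a generic sketch, not code from the thesis; the numbers are invented.

```python
import numpy as np

def bma_weights(log_marglik, prior=None):
    """Posterior model probabilities from log marginal likelihoods.

    log_marglik : log m(y | M_k) for each candidate model
    prior       : prior model probabilities (uniform if None)
    """
    log_marglik = np.asarray(log_marglik, dtype=float)
    if prior is None:
        prior = np.full(log_marglik.shape, 1.0 / log_marglik.size)
    # Subtract the maximum before exponentiating for numerical stability.
    log_post = log_marglik + np.log(prior)
    log_post -= log_post.max()
    w = np.exp(log_post)
    return w / w.sum()

# Hypothetical log marginal likelihoods for three AR panel specifications.
logml = [-321.4, -319.8, -325.0]
print(bma_weights(logml))           # BMA weights over the three models
print(np.exp(logml[1] - logml[0]))  # Bayes factor of model 2 vs model 1
```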
122

Distribuição preditiva do preço de um ativo financeiro: abordagens via modelo de série de tempo Bayesiano e densidade implícita de Black & Scholes / Predictive distribution of a stock price: Bayesian time series model and Black & Scholes implied density approaches

Oliveira, Natália Lombardi de 01 June 2017 (has links)
We present two different approaches to obtain a probability density function for a stock's future price: a predictive distribution, based on a Bayesian time series model, and the implied distribution, based on the Black & Scholes option pricing formula. Considering the Black & Scholes model, we derive the necessary conditions to obtain the implied distribution of the stock price on the exercise date. Based on these predictive densities, we compare the market-implied model (Black & Scholes) with a historically based approach (the Bayesian time series model). Once the density functions are obtained, it is straightforward to evaluate the probability of one price being larger than another and to make a decision to sell or buy a stock. As an example, we also show how to use these distributions to build an option pricing formula.
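To make the implied-density idea concrete: under the Black & Scholes assumptions, the risk-neutral distribution of the price at the exercise date is lognormal, with log-mean ln S_0 + (r − σ²/2)τ and log-variance σ²τ. The sketch below evaluates that density. It is an illustration of this standard result, not the dissertation's code, and the inputs are invented.

```python
import numpy as np
from scipy.stats import lognorm

def bs_implied_density(s_t, s0, r, sigma, tau):
    """Lognormal risk-neutral density of S_T implied by Black & Scholes:
    ln S_T ~ N(ln s0 + (r - sigma**2 / 2) * tau, sigma**2 * tau)."""
    mu = np.log(s0) + (r - 0.5 * sigma**2) * tau
    return lognorm.pdf(s_t, s=sigma * np.sqrt(tau), scale=np.exp(mu))

# Hypothetical inputs: spot 100, 5% rate, 20% volatility, 6 months.
grid = np.linspace(50, 160, 500)
density = bs_implied_density(grid, s0=100.0, r=0.05, sigma=0.2, tau=0.5)
```

A Bayesian predictive density estimated from historical prices can be evaluated on the same grid, after which order probabilities such as P(S_T > K) follow by numerical integration of either density.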
123

Matching DSGE models to data with applications to fiscal and robust monetary policy

Kriwoluzky, Alexander 01 December 2009 (has links)
This thesis is concerned with three questions: first, how can the effects of macroeconomic policy on the economy be estimated in general? Second, what are the effects of a pre-announced increase in government expenditures? Third, how should monetary policy be conducted if the policymaker faces uncertainty about the economic environment? In the first chapter I suggest estimating the effects of an exogenous disturbance on the economy by considering the parameter distributions of a vector autoregression (VAR) model and a dynamic stochastic general equilibrium (DSGE) model jointly. This resolves the two major issues a researcher has to deal with when working with a VAR model and a DSGE model: the identification of the VAR model and the potential misspecification of the DSGE model. The second chapter applies the methodology of the first chapter to investigate the effects of a pre-announced change in government expenditure on private consumption and real wages. The shock is identified by exploiting its pre-announced nature, i.e., the different signs of the responses of endogenous variables during the announcement period and after the realization of the shock. Private consumption is found to respond negatively during the announcement period and positively after the realization. The reaction of real wages is positive on impact and remains positive for two quarters after the realization.
In the last chapter, ''Optimal Policy Under Model Uncertainty: A Structural-Bayesian Estimation Approach'', I investigate jointly with Christian Stoltenberg how policy should optimally be conducted when the policymaker faces uncertainty about the economic environment. The standard procedure is to specify a prior over the parameter space while ignoring the status of some sub-models. We propose a procedure that ensures that the specified set of sub-models is not discarded too easily. We find that optimal policy based on our procedure leads to welfare gains compared to the standard practice.
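The last chapter's idea, choosing policy against a posterior over sub-models rather than within a single extended model, can be caricatured in a few lines. Everything below is a toy illustration (the loss functions and probabilities are invented), not the authors' structural-Bayesian procedure.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical quadratic losses for two sub-models as a function of a
# single policy coefficient phi (e.g., an interest-rate response).
def loss_model_a(phi):
    return (phi - 1.5) ** 2          # model A prefers phi = 1.5

def loss_model_b(phi):
    return 2.0 * (phi - 0.8) ** 2    # model B prefers phi = 0.8

post = np.array([0.6, 0.4])          # posterior sub-model probabilities

def expected_loss(phi):
    """Posterior-weighted loss across the two sub-models."""
    return post @ np.array([loss_model_a(phi), loss_model_b(phi)])

opt = minimize_scalar(expected_loss, bounds=(0.0, 3.0), method="bounded")
print(opt.x)  # a compromise policy, pulled toward the likelier sub-model
```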
124

Estudos transversais em epidemiologia veterinária : utilização de modelos hierárquicos e revisão de métodos estatísticos para analise de desfechos binários / Cross-sectional studies in veterinary epidemiology : use of hierarchical models and review of statistical methods for binary outcomes

Martinez, Brayan Alexander Fonseca January 2016 (has links)
The most widespread observational design in veterinary epidemiology is the cross-sectional study. Its popularity rests on its low cost and speed compared with other study types; moreover, it estimates the prevalence of a disease (the outcome) and identifies factors associated with the outcome, which can later be confirmed as causal in other types of epidemiological studies. However, this design presents two major challenges: dependence between observations, very common given the typical structure of animal populations (animals clustered within herds or farms), and the choice of the measure of association for the binary outcomes frequently used in this type of study. To contribute to the understanding of the epidemiology of bovine abortion associated with Neospora caninum while accounting for population structure, a mixed model was built with data from a cross-sectional study conducted in two regions (northwest and southeast) of Rio Grande do Sul, covering 60 sampled dairy herds and 1256 cattle. The percentage of abortions within each herd ranged between 1% and 30%. Seropositive cows were 6.63 times more likely to have a history of abortion (95% CI: 4.41-13.20). The odds of a cow having a history of abortion were 5.18 times higher in the northwest than in the southeast region (95% CI: 1.83-20.80). An intraclass correlation coefficient of 16% was estimated, indicating that 16% of the variation in abortion occurrence not explained by the fixed effects was due to farms. In the second part of this work, a systematic review covering a diverse set of journals was conducted to verify the statistical methods used in cross-sectional studies in veterinary medicine and the adequacy of the interpretations of the estimated measures of association. A total of 62 articles were evaluated. The review showed that, regardless of the prevalence reported in the article, 96% of them employed logistic regression and therefore estimated odds ratios (OR). Among articles reporting prevalence above 10%, 23 interpreted the OR properly as an odds ratio or made no direct interpretation of it, while 23 interpreted the OR improperly as a risk or probability. Among articles reporting prevalence below 10%, only three interpreted the OR as an odds ratio, five interpreted it as a risk or probability, and one, despite estimating the prevalence ratio (PR), interpreted it improperly. In parallel, to exemplify statistical methods that estimate the PR directly, the measure of association most appropriate for cross-sectional studies, a data set from a cross-sectional study on the occurrence of antibodies against bovine viral diarrhea virus (BVDV) was used. Antibodies were measured in bulk-tank milk samples from dairy herds located in the state of Rio Grande do Sul, Brazil, and possible associated factors were evaluated. Among the methods compared, the largest discrepancies in the estimated measures of association were observed for logistic regression, taking log-binomial regression as the reference.
Finally, it is important that researchers undertaking cross-sectional studies meet these challenges: accounting for population structure in the analyses, choosing an appropriate statistical model for binary outcomes, and interpreting the estimates correctly.
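The contrast the review draws between odds ratios and prevalence ratios comes down to the link function of a binomial GLM. Below is a minimal sketch with entirely synthetic data and hypothetical variable names (positive, herd_size), not the thesis's analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic herd-level data: outcome = BVDV antibodies in the bulk tank.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "positive": rng.integers(0, 2, size=200),
    "herd_size": rng.normal(50, 15, size=200),
})

# Logistic regression: exponentiated coefficients are odds ratios...
logit = smf.glm("positive ~ herd_size", data=df,
                family=sm.families.Binomial()).fit()

# ...while the log link (log-binomial model) yields prevalence ratios.
logbin = smf.glm("positive ~ herd_size", data=df,
                 family=sm.families.Binomial(link=sm.families.links.Log())).fit()

print(np.exp(logit.params))   # odds ratios
print(np.exp(logbin.params))  # prevalence ratios
```

When the log-binomial fit fails to converge, which can happen when fitted prevalences approach 1, Poisson regression with robust standard errors is a common fallback for estimating the PR.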
125

Automated construction of generalized additive neural networks for predictive data mining / Jan Valentine du Toit

Du Toit, Jan Valentine January 2006 (has links)
In this thesis Generalized Additive Neural Networks (GANNs) are studied in the context of predictive Data Mining. A GANN is a novel neural network implementation of a Generalized Additive Model. Originally GANNs were constructed interactively by considering partial residual plots. This methodology involves subjective human judgment, is time consuming, and can produce suboptimal models. The newly developed automated construction algorithm overcomes these difficulties by performing model selection based on an objective model selection criterion. Partial residual plots are utilized only after the best model is found, to gain insight into the relationships between inputs and the target. Models are organized in a search tree, and a greedy search procedure identifies good models in a relatively short time. The automated construction algorithm, implemented in the powerful SAS® language, is nontrivial, effective, and comparable to other model selection methodologies found in the literature. This implementation, called AutoGANN, has a simple, intuitive, and user-friendly interface. The AutoGANN system is further extended with an approximation to Bayesian Model Averaging, which accounts for uncertainty about the variables that must be included in the model and about the model structure. Model averaging utilizes in-sample model selection criteria and creates a combined model with better predictive ability than any single model. In the field of Credit Scoring, the standard theory of scorecard building is not tampered with, but a pre-processing step is introduced to arrive at a more accurate scorecard that discriminates better between good and bad applicants. The pre-processing step exploits GANN models to achieve significant reductions in marginal and cumulative bad rates. The time it takes to develop a scorecard may be reduced by utilizing the automated construction algorithm. / Thesis (Ph.D. (Computer Science))--North-West University, Potchefstroom Campus, 2006.
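A stripped-down version of greedy, criterion-driven model selection is sketched below. To stay short it fits ordinary least squares models rather than GANNs, so it illustrates only the search strategy, not the AutoGANN system itself; the function and variable names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

def greedy_select(X, y, criterion="bic"):
    """Greedy forward selection driven by an objective criterion
    (a sketch in the spirit of AutoGANN's search, with OLS standing
    in for the neural-network fitting step)."""
    remaining = list(range(X.shape[1]))
    chosen, best_score = [], np.inf
    improved = True
    while improved and remaining:
        improved = False
        scores = []
        for j in remaining:
            model = sm.OLS(y, sm.add_constant(X[:, chosen + [j]])).fit()
            scores.append((model.bic if criterion == "bic" else model.aic, j))
        score, j_best = min(scores)          # best one-step extension
        if score < best_score:               # accept only if criterion improves
            best_score, improved = score, True
            chosen.append(j_best)
            remaining.remove(j_best)
    return chosen, best_score

# Synthetic check: y depends on columns 0 and 2 only.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = X[:, 0] - 0.5 * X[:, 2] + rng.normal(size=200)
print(greedy_select(X, y))  # expected to pick columns 0 and 2
```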
126

Mélanges bayésiens de modèles d'extrêmes multivariés, Application à la prédétermination régionale des crues avec données incomplètes. / Bayesian mixtures of multivariate extreme value models, applied to regional flood predetermination with incomplete data

Anne, Sabourin 24 September 2013 (has links) (PDF)
Univariate statistical extreme value theory generalizes to the multivariate case, but the absence of a natural parametric framework complicates inference for the joint distribution of extremes. Error margins for non-parametric estimators of the dependence structure are difficult to obtain beyond dimension three. Yet quantifying uncertainty is all the more important for applications because the scarcity of extreme data is a recurring problem, particularly in hydrology. The aim of this thesis is to develop models for dependence between extremes within a Bayesian framework that can represent this uncertainty. After an introduction to extreme value theory and Bayesian inference (Chapter 1), Chapter 2 explores the properties of models obtained by combining existing parametric models through Bayesian model averaging. A semi-parametric Dirichlet mixture model is studied in the following chapter: a new parameterization is introduced to remove a moment constraint characteristic of the dependence structure and to ease sampling from the posterior distribution. Chapter 4 is motivated by a hydrological application: estimating the spatial dependence structure of extreme floods in the Gardons region of the Cévennes using historical records from four gauging stations. The historical data enlarge the sample, but many of these observations are censored. A data augmentation method is introduced within the Dirichlet mixture framework to overcome the absence of an explicit expression for the censored likelihood. Perspectives are discussed in Chapter 5.
127

Contributions to quality improvement methodologies and computer experiments

Tan, Matthias H. Y. 16 September 2013 (has links)
This dissertation presents novel methodologies for five problem areas in modern quality improvement and computer experiments: selective assembly, robust design with computer experiments, multivariate quality control, model selection for split-plot experiments, and construction of minimax designs. Selective assembly has traditionally been used to achieve tight specifications on the clearance of two mating parts. Chapter 1 proposes generalizations of the selective assembly method to assemblies with any number of components and any assembly response function, called generalized selective assembly (GSA). Two variants of GSA are considered: direct selective assembly (DSA) and fixed bin selective assembly (FBSA). In DSA and FBSA, the problem of matching a batch of N components of each type to give N assemblies that minimize quality cost is formulated as an axial multi-index assignment problem and a transportation problem, respectively. Realistic examples are given to show that GSA can significantly improve the quality of assemblies. Chapter 2 proposes methods for robust design optimization with time-consuming computer simulations. Gaussian process models are widely employed for modeling responses as a function of control and noise factors in computer experiments. In these experiments, robust design optimization is often based on the average quadratic loss computed as if the posterior mean were the true response function, which can give misleading results. We propose optimization criteria derived by taking the expectation of the average quadratic loss with respect to the posterior predictive process, and methods based on the Lugannani-Rice saddlepoint approximation for constructing accurate credible intervals for the average loss. These quantities allow response surface uncertainty to be taken into account in the optimization process. Chapter 3 proposes a Bayesian method for identifying mean shifts in multivariate normally distributed quality characteristics. Multivariate quality characteristics are often monitored using a few summary statistics. However, to determine the causes of an out-of-control signal, information about which means shifted and in which directions is often needed. We propose a Bayesian approach that gives this information. For each mean, an indicator variable is introduced that indicates whether the mean shifted upwards, shifted downwards, or remained unchanged. Default prior distributions are proposed. Mean shift identification is based on the modes of the posterior distributions of the indicators, which are determined via Gibbs sampling. Chapter 4 proposes a Bayesian method for model selection in fractionated split-plot experiments. We employ a Bayesian hierarchical model that takes into account the split-plot error structure. Expressions are derived for computing the posterior model probability and other important posterior quantities that require evaluation of at most two one-dimensional integrals. A novel algorithm called combined global and local search is proposed to find models with high posterior probabilities and to estimate posterior model probabilities. The proposed method is illustrated with the analysis of three real robust design experiments. Simulation studies demonstrate that the method has good performance. The problem of choosing a design that is representative of a finite candidate set is an important problem in computer experiments.
The minimax criterion measures the degree of representativeness: it is the maximum distance from any candidate point to its nearest design point. Chapter 5 proposes algorithms for finding minimax designs for finite design regions. We establish the relationship between minimax designs and the classical set covering location problem in operations research, which is a binary linear program. We prove that the set of minimax distances is the set of discontinuities of the function that maps the covering radius to the optimal objective function value, and that optimal solutions at the discontinuities are minimax designs. These results are employed to design efficient procedures for finding globally optimal minimax and near-minimax designs.
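A cheap way to see the minimax criterion in action is a greedy heuristic on a finite candidate set: repeatedly add the candidate that most reduces the covering radius. This is only a heuristic sketch with invented inputs; the chapter's method instead solves the set covering binary program exactly.

```python
import numpy as np
from scipy.spatial.distance import cdist

def greedy_minimax_design(candidates, n):
    """Greedy heuristic for an n-point minimax design on a finite set.

    The covering radius is max_x min_d ||x - d|| over candidates x and
    design points d; each step adds the point that minimizes it.
    """
    dist = cdist(candidates, candidates)
    # Start from the best single point (the 1-point minimax design).
    design = [int(np.argmin(dist.max(axis=0)))]
    cover = dist[:, design[0]].copy()   # distance to nearest design point
    while len(design) < n:
        # Coverage if point j were added: min(cover, dist[:, j]), column-wise.
        radius_if_added = np.minimum(cover[:, None], dist).max(axis=0)
        j = int(np.argmin(radius_if_added))
        design.append(j)
        cover = np.minimum(cover, dist[:, j])
    return design, cover.max()

rng = np.random.default_rng(0)
pts = rng.random((200, 2))              # hypothetical candidate set
idx, radius = greedy_minimax_design(pts, n=10)
print(idx, radius)
```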
129

Essays on Bayesian and classical econometrics with small samples

Jarocinski, Marek 15 June 2006 (has links)
This thesis deals with the problems of econometric estimation with small samples, in the contexts of monetary VARs and growth empirics. First, it shows how to improve structural VAR analysis on short datasets. The first chapter adapts the exchangeable prior specification to the VAR context and obtains new findings about monetary transmission in the new member states of the European Union. The second chapter proposes a prior on the initial growth rates of the modeled variables, which tackles the classical small-sample bias in time series and reconciles the Bayesian and classical points of view on time series estimation. The third chapter studies the effect of measurement error in income data on growth empirics, and shows that econometric procedures that are robust to model uncertainty are very sensitive to measurement error of plausible size and properties.
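The exchangeable prior of the first chapter is, at heart, partial pooling across countries. A stylized normal-means version of that shrinkage (not the thesis's VAR implementation; all numbers are invented) looks like this:

```python
import numpy as np

def exchangeable_shrinkage(theta_hat, s2, tau2):
    """Posterior means under an exchangeable prior theta_i ~ N(mu, tau2),
    with a flat hyperprior on mu and known sampling variances.

    theta_hat : country-specific estimates of the same coefficient
    s2        : their sampling variances
    tau2      : prior cross-country variance (fixed here for simplicity)
    """
    theta_hat, s2 = map(np.asarray, (theta_hat, s2))
    w = 1.0 / (s2 + tau2)
    mu = np.sum(w * theta_hat) / np.sum(w)   # pooled common mean
    shrink = tau2 / (tau2 + s2)              # per-country weight on own data
    return shrink * theta_hat + (1.0 - shrink) * mu

# Hypothetical impulse-response coefficients for four countries:
# imprecise estimates are pulled harder toward the pooled mean.
print(exchangeable_shrinkage([0.9, 0.5, 0.7, 1.2],
                             [0.04, 0.09, 0.05, 0.16], tau2=0.02))
```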
130

Distribuições preditiva e implícita para ativos financeiros / Predictive and implied distributions of a stock price

Oliveira, Natália Lombardi de 01 June 2017 (has links)
We present two different approaches to obtain a probability density function for a stock's future price: a predictive distribution, based on a Bayesian time series model, and the implied distribution, based on the Black & Scholes option pricing formula. Considering the Black & Scholes model, we derive the necessary conditions to obtain the implied distribution of the stock price on the exercise date. Based on these predictive densities, we compare the market-implied model (Black & Scholes) with a historically based approach (the Bayesian time series model). Once the density functions are obtained, it is straightforward to evaluate the probability of one price being larger than another and to make a decision to sell or buy a stock. As an example, we also show how to use these distributions to build an option pricing formula. / Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
