501 |
Forecasting Global Equity Indices Using Large Bayesian VARs / Huber, Florian; Krisztin, Tamás; Piribauer, Philipp (PDF)
This paper proposes a large Bayesian Vector Autoregressive (BVAR) model with common stochastic volatility to forecast global equity indices. Using a dataset of monthly data on global stock indices, the BVAR model inherently incorporates co-movements in the stock markets. The time-varying specification of the covariance structure moreover accounts for sudden shifts in the level of volatility. In an out-of-sample forecasting application we show that the BVAR model with stochastic volatility significantly outperforms the random walk both in terms of root mean squared errors and Bayesian log predictive scores. The BVAR model without stochastic volatility, on the other hand, underperforms relative to the random walk. In a portfolio allocation exercise we moreover show that the forecasts obtained from our BVAR model with common stochastic volatility can be used to set up simple investment strategies. Our results indicate that these simple investment schemes outperform a naive buy-and-hold strategy. (authors' abstract) / Series: Department of Economics Working Paper Series
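For orientation, a generic BVAR with a single common stochastic volatility factor can be written as follows (a sketch of the standard specification only; the paper's exact priors, lag length, and volatility law of motion are not reproduced here):

\[
y_t = c + A_1 y_{t-1} + \cdots + A_p y_{t-p} + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}\!\left(0,\; e^{h_t}\,\Sigma\right),
\]
\[
h_t = \rho\, h_{t-1} + \eta_t, \qquad \eta_t \sim \mathcal{N}(0, \sigma_h^2),
\]

where y_t stacks the monthly index returns, the constant matrix \Sigma captures cross-market co-movements, and the common log-volatility h_t shifts the overall level of volatility over time. Forecast accuracy is then judged by root mean squared errors and by the log predictive score, i.e. \log p(y_{t+h} \mid \text{data up to } t) evaluated at the realized value.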
502 |
E-banking operational risk assessment: a soft computing approach in the context of the Nigerian banking industry / Ochuko, Rita Erhovwo, January 2012
This study investigates E-banking Operational Risk Assessment (ORA) to enable the development of a new ORA framework and methodology. The general view is that E-banking systems have modified some of the traditional banking risks, particularly Operational Risk (OR), as suggested by the Basel Committee on Banking Supervision in 2003. In addition, recent E-banking financial losses together with risk management principles and standards raise the need for an effective ORA methodology and framework in the context of E-banking. Moreover, evaluation tools and/or methods for ORA are highly subjective, are still in their infancy, and have not yet reached a consensus. Therefore, it is essential to develop valid and reliable methods for effective ORA and evaluation. The main contribution of this thesis is to apply a Fuzzy Inference System (FIS) and a Tree Augmented Naïve Bayes (TAN) classifier as standard tools for identifying OR and measuring the OR exposure level. In addition, a new ORA methodology is proposed which consists of four major steps: a risk model, an assessment approach, an analysis approach, and a risk assessment process. Further, a new ORA framework and measurement metrics are proposed with six factors: frequency of triggering event, effectiveness of avoidance barriers, frequency of undesirable operational state, effectiveness of recovery barriers before the risk outcome, approximate cost for Undesirable Operational State (UOS) occurrence, and severity of the risk outcome. The study results were reported based on surveys conducted with Nigerian senior banking officers and banking customers. The study revealed that the framework and assessment tools gave good predictions for risk learning and inference in such systems. Thus, the results obtained can be considered promising and useful for both E-banking system adopters and future researchers in this area.
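As a rough illustration of how a fuzzy inference system can map such factors to an exposure level, here is a toy Python sketch over two of the six factors (the factor names follow the abstract, but the membership functions, rules, and [0, 1] scaling are invented for illustration and are not taken from the thesis):

```python
# Minimal Mamdani-style fuzzy inference sketch for an operational-risk
# exposure level. Membership functions and rules are illustrative only.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def assess_exposure(trigger_freq, avoidance_effectiveness):
    """Both inputs are assumed to be pre-scaled to [0, 1]."""
    # Fuzzify the two inputs.
    freq_high = tri(trigger_freq, 0.4, 1.0, 1.6)            # "triggering events are frequent"
    freq_low = tri(trigger_freq, -0.6, 0.0, 0.6)
    barrier_weak = tri(avoidance_effectiveness, -0.6, 0.0, 0.6)
    barrier_strong = tri(avoidance_effectiveness, 0.4, 1.0, 1.6)

    # Two toy rules (min = fuzzy AND), each implying an exposure level in [0, 1].
    rules = [
        (min(freq_high, barrier_weak), 0.9),    # frequent triggers + weak barriers -> high exposure
        (min(freq_low, barrier_strong), 0.1),   # rare triggers + strong barriers -> low exposure
    ]

    # Weighted-average defuzzification.
    total = sum(strength for strength, _ in rules)
    if total == 0:
        return 0.5                              # no rule fires: neutral exposure
    return sum(strength * level for strength, level in rules) / total

print(assess_exposure(trigger_freq=0.8, avoidance_effectiveness=0.2))  # 0.9 (high exposure)
```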
503 |
Estimation bayésienne nonparamétrique de copules / Nonparametric Bayesian estimation of copulas / Guillotte, Simon, January 2008
Thesis digitized by the Division de la gestion de documents et des archives de l'Université de Montréal.
504 |
Influence des facteurs émotionnels sur la résistance au changement dans les organisations / Influence of emotional factors on resistance to change in organizations / Menezes, Ilusca Lima Lopes de, January 2008
Master's thesis digitized by the Division de la gestion de documents et des archives de l'Université de Montréal.
505 |
Cross-domain sentiment classification using grams derived from syntax trees and an adapted naive Bayes approach / Cheeti, Srilaxmi
Master of Science / Department of Computing and Information Sciences / Doina Caragea / There is an increasing amount of user-generated information in online documents, including user opinions on various topics and products such as movies, DVDs, kitchen appliances, etc. To make use of such opinions, it is useful to identify the polarity of the opinion, in other words, to perform sentiment classification. The goal of sentiment classification is to classify a given text/document as either positive, negative or neutral based on the words present in the document. Supervised learning approaches have been successfully used for sentiment classification in domains that are rich in labeled data. Some of these approaches make use of features such as unigrams, bigrams, sentiment words, adjective words, syntax trees (or variations of trees obtained using pruning strategies), etc. However, for some domains the amount of labeled data can be relatively small and we cannot train an accurate classifier using the supervised learning approach. Therefore, it is useful to study domain adaptation techniques that can transfer knowledge from a source domain that has labeled data to a target domain that has little or no labeled data, but a large amount of unlabeled data. We address this problem in the context of product reviews, specifically reviews of movies, DVDs and kitchen appliances. Our approach uses an Adapted Naive Bayes classifier (ANB) on top of the Expectation Maximization (EM) algorithm to predict the sentiment of a sentence. We use grams derived from complete syntax trees or from syntax subtrees as features when training the ANB classifier. More precisely, we extract grams from syntax trees corresponding to sentences in either the source or target domains. To be able to transfer knowledge from source to target, we identify generalized features (grams) using the frequently co-occurring entropy (FCE) method, and represent the source instances using these generalized features. The target instances are represented with all grams occurring in the target, or with a reduced grams set obtained by removing infrequent grams. We experiment with different types of grams in a supervised framework in order to identify the most predictive types of grams, and further use those grams in the domain adaptation framework. Experimental results on several cross-domain tasks show that domain adaptation approaches that combine source and target data (a small amount of labeled and some unlabeled data) can help learn classifiers for the target that are better than those learned from the labeled target data alone.
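A stripped-down sketch of the EM-plus-Naive-Bayes idea underlying the Adapted Naive Bayes classifier is shown below (a generic semi-supervised multinomial Naive Bayes loop over bag-of-words counts; the thesis's actual features are grams extracted from syntax trees together with FCE-based generalized features, which are not reproduced here, and the tiny example texts are invented):

```python
# Semi-supervised Naive Bayes via EM: initialize on labeled source reviews,
# then iteratively refit using unlabeled target reviews weighted by the
# model's own posterior probabilities. Illustrative sketch only.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

source_texts = ["great movie", "terrible plot", "loved it", "awful acting"]
source_labels = np.array([1, 0, 1, 0])            # 1 = positive, 0 = negative
target_unlabeled = ["this blender is great", "awful kitchen appliance"]

vec = CountVectorizer()
X_src = vec.fit_transform(source_texts).toarray()
X_tgt = vec.transform(target_unlabeled).toarray()

clf = MultinomialNB().fit(X_src, source_labels)   # initialize from labeled source

for _ in range(10):                               # EM iterations
    # E-step: soft class assignments for the unlabeled target documents.
    post = clf.predict_proba(X_tgt)               # columns follow clf.classes_ = [0, 1]
    # M-step: refit on source plus target, duplicating each target document
    # once per class and weighting it by its posterior probability.
    X_all = np.vstack([X_src, X_tgt, X_tgt])
    y_all = np.concatenate([source_labels,
                            np.zeros(len(target_unlabeled), dtype=int),
                            np.ones(len(target_unlabeled), dtype=int)])
    w_all = np.concatenate([np.ones(len(source_labels)), post[:, 0], post[:, 1]])
    clf = MultinomialNB().fit(X_all, y_all, sample_weight=w_all)

print(clf.predict(X_tgt))                         # predicted target sentiment
```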
506 |
Bayesian Inference for High-Dimensional Data with Applications to Portfolio Theory / Bauder, David, 06 December 2018
The weights of a portfolio are usually expressed as a combination of the product of the precision matrix and the mean vector. In practical applications these parameters have to be estimated, but describing the associated estimation uncertainty through the distribution of this product is a challenge. This thesis demonstrates that a suitable Bayesian model not only leads to an easily accessible posterior distribution, but also to easily interpretable descriptions of portfolio risk, such as the default probability of the whole portfolio at every relevant point in time.
To this end, the parameters are endowed with their conjugate priors. Using known results from the theory of multivariate distributions, stochastic representations are derived for the relevant quantities, such as the portfolio weights or the efficient frontier. These representations not only allow Bayes estimates of the parameters to be derived, but are also computationally highly efficient, since random numbers are drawn only from well-known and easily accessible distributions. In particular, Markov chain Monte Carlo methods are not required.
The methodology is applied to a multi-period portfolio model with an exponential utility function, to the tangent portfolio, to the estimation of the efficient frontier, to the global minimum-variance portfolio, and to the general mean-variance approach. For all portfolio models considered, stochastic representations or Bayes estimates are derived for the relevant quantities. The practicability and flexibility of the approach, as well as specific properties, are illustrated in applications with real data sets and in simulations.
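To illustrate the computational point above, namely that posterior quantities can be obtained by direct sampling from standard distributions rather than by MCMC, here is a small sketch for the uncertainty of global minimum-variance weights under a vague conjugate (normal-inverse-Wishart-type) posterior; the priors and the exact stochastic representations derived in the thesis are not reproduced:

```python
# Direct Monte Carlo from an inverse-Wishart posterior for Sigma, used to
# describe the uncertainty of global minimum-variance (GMV) portfolio weights
# w = Sigma^{-1} 1 / (1' Sigma^{-1} 1). Illustrative sketch with a vague
# conjugate setup, not the thesis's exact prior or stochastic representation.
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)
returns = rng.normal(0.005, 0.04, size=(120, 4))   # 10 years of monthly data, 4 assets
n, k = returns.shape
xbar = returns.mean(axis=0)
S = (returns - xbar).T @ (returns - xbar)           # scatter matrix

weights = []
for _ in range(5000):
    Sigma = invwishart.rvs(df=n - 1, scale=S)       # one posterior draw of Sigma
    ones = np.ones(k)
    w = np.linalg.solve(Sigma, ones)
    weights.append(w / (ones @ w))                  # GMV weights for this draw

weights = np.array(weights)
print("posterior mean of GMV weights:", weights.mean(axis=0).round(3))
print("posterior std of GMV weights: ", weights.std(axis=0).round(3))
```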
507 |
Localização baseada em método de Monte Carlo e algoritmos genéticos para robótica móvel / Monte Carlo localization and genetic algorithms for mobile robotics / Luis Fernando Almeida, December 2003
Autonomous mobile robotics is a research area whose primary focus is the ongoing search for ways to let a mobile robot operate without human intervention and as intelligently as possible. This effort can be divided into different emphases: action planning, environment mapping, and localization of the robot within the world it inhabits. More specifically, the problem of determining the robot's location is considered by some to be the most important factor in enabling the autonomy of a mobile robot. Much has already been proposed on localization techniques; among the most recent, the Monte Carlo localization algorithm stands out as an efficient technique for solving the various problems involved in estimating the position of a mobile robot. The work presented here implements a position estimation algorithm based on the Monte Carlo localization algorithm combined with a genetic algorithm. Here, the role of the latter is to minimize large localization errors caused by deficiencies in the probabilistic models that represent the robot's motion dynamics and sensor perception, which happens mainly with sonar-type sensors in the presence of corner-like obstacles. The result is the Genetic Monte Carlo localization method, which proved to be a possible solution for minimizing these localization errors. The main drawback observed in this approach, however, is the large number of parameters that must be configured. The challenge then becomes finding the ideal parameter settings to obtain the best performance from this method.
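A bare-bones version of the Monte Carlo localization loop, for a robot moving along a one-dimensional corridor with a noisy range sensor (a didactic sketch; the corridor, noise levels, and the small random "mutation" step standing in for the thesis's genetic operators are all illustrative assumptions):

```python
# One-dimensional Monte Carlo localization (particle filter) sketch.
# A robot moves along a corridor of known length and measures the distance
# to the far wall with Gaussian noise. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
CORRIDOR = 10.0
N = 500
particles = rng.uniform(0.0, CORRIDOR, N)          # initial belief: uniform
true_pos = 2.0

for step in range(8):
    # Motion update: the robot (and every particle) moves 0.5 m with noise.
    true_pos = (true_pos + 0.5) % CORRIDOR
    particles = (particles + 0.5 + rng.normal(0, 0.1, N)) % CORRIDOR

    # Sensor update: weight particles by the likelihood of the range reading.
    reading = (CORRIDOR - true_pos) + rng.normal(0, 0.2)
    expected = CORRIDOR - particles
    weights = np.exp(-0.5 * ((reading - expected) / 0.2) ** 2)
    weights /= weights.sum()

    # Resampling: draw particles in proportion to their weights.
    particles = rng.choice(particles, size=N, p=weights)

    # GA-flavoured mutation: perturb a small fraction of particles to keep
    # diversity and help escape bad localization estimates (a stand-in for
    # the genetic operators used in the thesis).
    mutate = rng.random(N) < 0.05
    particles[mutate] += rng.normal(0, 0.5, mutate.sum())
    particles %= CORRIDOR

    print(f"step {step}: true={true_pos:.2f}, estimate={particles.mean():.2f}")
```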
508 |
Uma comparação de métodos de classificação aplicados à detecção de fraude em cartões de crédito / A comparison of classification methods applied to credit card fraud detection / Gadi, Manoel Fernando Alonso, 22 April 2008
In recent years, many bio-inspired algorithms have emerged to solve classification problems. Confirming this, the journal Nature published an article in 2002 that already pointed to the commercial use, as early as 2003, of Artificial Immune Systems for fraud detection in financial institutions by a British company. Despite this, to the best of our knowledge, we have not seen any scientific publication with promising results since then. Our work applied Artificial Immune Systems (AIS) to credit card fraud detection. We compared AIS with Decision Trees (DT), Neural Networks (NN), Bayesian Networks (BN), and Naive Bayes (NB). For a fairer comparison among the methods, exhaustive search and a genetic algorithm (GA) were used to select an optimized parameter set, in the sense of minimizing the cost of fraud on a credit card database provided by a Brazilian credit card issuer. In addition to this optimization, we also carried out an analysis and a multi-resolution search for more robust parameters, which are presented in this work. Specific characteristics of fraud databases, such as class imbalance and the different costs of false positives and false negatives, were taken into account. All runs were carried out in Weka, a public, open-source software package, and test sets were always used to validate the classifiers. The results obtained are consistent with Maes et al., who show that BN are better than NN and that, although NN is one of the most widely used methods today, for our database and our implementations it is among the worst methods. Despite a poor result with default parameters, AIS obtained the best result with the parameters optimized by the GA, which led DT and AIS to present the best and most robust results among all tested methods. / On January 31st, 2002, the journal Nature, which has a strong impact on the scientific community, published some news about immune-based systems. Among the different applications considered, one can find the detection of fraudulent financial transactions, including the possibility of commercial use of such a system, in a British company, as early as 2003. In spite of that, we do not know of any scientific publication that uses Artificial Immune Systems in financial fraud detection. This work reports very satisfactory results on the application of Artificial Immune Systems (AIS) to credit card fraud detection. In fact, scientific publications on financial fraud detection are quite rare, as pointed out by Phua et al. [PLSG05], in particular for credit card transactions. Phua et al. point to the lack of any public database of fraudulent financial transactions available for public tests as the main cause of such a small number of publications. Two of the most important publications on this subject that report results about their implementations are the prize-winning Maes (2000), which compares Neural Networks and Bayesian Networks in credit card fraud detection, with a result favoring Bayesian Networks, and Stolfo et al. (1997), which proposed the AdaCost method. This thesis builds on both of these works and publishes results in credit card fraud detection.
Moreover, despite the non-availability of Maes' data and implementations, we reproduce their results and extend the set of comparisons so as to compare Neural Networks, Bayesian Networks, and also Artificial Immune Systems, Decision Trees, and even the simple Naïve Bayes. We also reproduce, in a certain way, the results of Stolfo et al. (1997), verifying that a cost-sensitive meta-heuristic, in effect a generalization of the step from AdaBoost to AdaCost, substantially improves the performance of all tested methods except Naive Bayes. Our analysis took into account the skewed nature of the dataset, as well as the need for parametric adjustment, sometimes through the use of genetic algorithms, in order to obtain the best results from each compared method.
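As a rough illustration of the kind of cost-based comparison described above, the following sketch evaluates a few of the mentioned classifiers on a synthetic imbalanced dataset with asymmetric misclassification costs (synthetic data and arbitrary costs for illustration; the thesis uses a real Brazilian credit card dataset, Weka implementations, AIS, and GA-tuned parameters, none of which are reproduced here):

```python
# Compare a few classifiers on an imbalanced synthetic "fraud" dataset using
# a cost function that penalizes missed frauds more than false alarms.
# Illustrative sketch only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.97, 0.03],
                           random_state=0)        # roughly 3% "fraud" class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

COST_FN = 100   # cost of letting a fraud through
COST_FP = 5     # cost of blocking a legitimate transaction

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
    "neural network": MLPClassifier(max_iter=500, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
    print(f"{name:15s} fraud cost = {COST_FN * fn + COST_FP * fp}")
```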
509 |
Estudo de expressão gênica em citros utilizando modelos lineares / Gene expression study in citrus using linear models / Ferreira Filho, Diógenes, 12 February 2010
This work presents a review of the methodology of microarray experiments, covering both their setup and the statistical analysis of the data obtained. This methodology is then applied to the analysis of gene expression data in citrus generated by a macroarray experiment, using fixed-effects linear models, considering the inclusion or exclusion of different effects, and fitting models both for each gene separately and for all genes simultaneously. Macroarray experiments are similar to microarray experiments but use a smaller number of genes; in general, they are adopted because of economic restrictions. Because only a few arrays were used in the experiment analyzed in this work, an empirical Bayes approach was employed, which uses more stable variance estimates and takes into account the correlation among replicates of a gene within an array. A nonparametric analysis method was also used to address the lack of normality for some genes. The results obtained with each of the described analysis methods were then compared.
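The variance-shrinkage idea behind the empirical Bayes analysis can be sketched as follows (a limma-style moderated t-statistic on simulated per-gene data; the dissertation's actual models also handle within-array correlation among replicate spots and fixed effects, which are omitted, and the prior degrees of freedom below are set by hand rather than estimated):

```python
# Empirical-Bayes moderated t-statistics: per-gene sample variances are
# shrunk toward a common prior variance before testing for differential
# expression between two conditions. Illustrative sketch on simulated data.
import numpy as np

rng = np.random.default_rng(2)
genes, reps = 200, 3
control = rng.normal(0.0, 0.5, size=(genes, reps))
treated = rng.normal(0.0, 0.5, size=(genes, reps))
treated[:10] += 1.5                      # the first 10 genes are truly changed

diff = treated.mean(axis=1) - control.mean(axis=1)
s2 = (treated.var(axis=1, ddof=1) + control.var(axis=1, ddof=1)) / 2
df = 2 * (reps - 1)

# Empirical Bayes step: estimate a prior variance from all genes and shrink
# each gene's variance toward it (d0 is the prior degrees of freedom; a full
# limma-style analysis would estimate d0 from the data as well).
s2_prior = np.median(s2)
d0 = 4
s2_mod = (d0 * s2_prior + df * s2) / (d0 + df)

t_mod = diff / np.sqrt(s2_mod * (2 / reps))
print("top genes by |moderated t|:", np.argsort(-np.abs(t_mod))[:10])
```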
510 |
Making Models with Bayes / Olid, Pilar, 01 December 2017
Bayesian statistics is an important approach to modern statistical analyses. It allows us to use our prior knowledge of the unknown parameters to construct a model for our data set. The foundation of Bayesian analysis is Bayes' Rule, which in its proportional form indicates that the posterior is proportional to the prior times the likelihood. We will demonstrate how we can apply Bayesian statistical techniques to fit a linear regression model and a hierarchical linear regression model to a data set. We will show how to apply different distributions to Bayesian analyses and how the use of a prior affects the model. We will also make a comparison between the Bayesian approach and the traditional frequentist approach to data analyses.
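As a minimal illustration of the Bayesian linear regression fit described above, the following sketch computes the closed-form posterior for the regression coefficients under a Gaussian prior with a known noise variance (simulated data; the hierarchical model and the specific priors used in the thesis are not reproduced):

```python
# Conjugate Bayesian linear regression: with prior beta ~ N(0, tau^2 I) and
# known noise variance sigma^2, the posterior for beta is Gaussian with the
# mean and covariance computed below. Illustrative sketch.
import numpy as np

rng = np.random.default_rng(3)
n = 50
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])            # intercept and slope
beta_true = np.array([1.0, 2.0])
sigma = 1.0
y = X @ beta_true + rng.normal(0, sigma, n)

tau2 = 10.0                                     # prior variance on coefficients
prior_prec = np.eye(2) / tau2
post_cov = np.linalg.inv(prior_prec + X.T @ X / sigma**2)
post_mean = post_cov @ (X.T @ y / sigma**2)

print("posterior mean:", post_mean.round(3))    # should be close to [1, 2]
print("posterior sd:  ", np.sqrt(np.diag(post_cov)).round(3))
```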