51

Flexible cross layer design for improved quality of service in MANETs

Kiourktsidis, Ilias January 2011 (has links)
Mobile Ad hoc Networks (MANETs) are becoming increasingly important because of their unique connectivity characteristics. Several delay-sensitive applications are starting to appear in these kinds of networks, so an issue of concern is how to guarantee Quality of Service (QoS) in such a constantly changing communication environment. The classical QoS-aware solutions used to date in wired and infrastructure wireless networks are unable to achieve the necessary performance in MANETs. The specialized protocols designed for multihop ad hoc networks offer basic connectivity with limited delay awareness, and the mobility factor in MANETs makes them even less suitable. Protocols and solutions have been emerging in almost every layer of the protocol stack, and the majority of research efforts agree that, in such a dynamic environment, optimizing protocol performance requires additional information about the status of the network to be available. Hence, many cross-layer design approaches have appeared. Cross-layer design has major advantages and the need for such a design is clear; however, it also conceals risks such as architectural instability and design inflexibility, and its aggressive use excessively increases the cost of deployment and complicates both maintenance and upgrade of the network. Autonomous protocols, such as bio-inspired mechanisms and algorithms that are resilient to the unavailability of cross-layer information, can reduce the dependence on cross-layer design. In addition, the ability to predict dynamic conditions and adapt to them is an important property. A routing decision algorithm based on Bayesian inference for predicting path quality is proposed here, and its accurate prediction capabilities and efficient use of the wealth of cross-layer information are presented. Furthermore, an adaptive mechanism based on a Genetic Algorithm (GA) is used to control the flow of data at the transport layer. This flow-control mechanism inherits the GA's optimization capabilities without needing to know any details about the network conditions, thus reducing the dependence on cross-layer information. Finally, it is illustrated how Bayesian inference can be used to suggest configuration parameter values to protocols at other layers in order to improve their performance.
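The routing idea sketched in this abstract, predicting path quality from observed link behaviour via Bayesian inference, can be illustrated with a toy example. The following Python sketch is not from the thesis: it assumes a simple Beta-Bernoulli model in which each link keeps a posterior over its packet-delivery probability and a candidate path is ranked by the product of the posterior means of its links.

```python
from math import prod

class LinkEstimator:
    """Beta-Bernoulli posterior over a link's packet-delivery probability."""
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta   # uniform prior

    def update(self, delivered: bool):
        # Bayesian update: a success increments alpha, a failure increments beta
        if delivered:
            self.alpha += 1
        else:
            self.beta += 1

    def mean(self) -> float:
        # Posterior mean of the delivery probability
        return self.alpha / (self.alpha + self.beta)

def path_quality(links):
    """Predicted end-to-end delivery probability of a multihop path."""
    return prod(link.mean() for link in links)

# Toy example: two candidate paths with per-link delivery observations
a, b, c = LinkEstimator(), LinkEstimator(), LinkEstimator()
for obs in (True, True, True, False):
    a.update(obs)
for obs in (True, False, False):
    b.update(obs)
for obs in (True, True):
    c.update(obs)

path1, path2 = [a, b], [c]
best = max((path1, path2), key=path_quality)
print(path_quality(path1), path_quality(path2), best is path2)
```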
52

A cortical model of object perception based on Bayesian networks and belief propagation

Durá-Bernal, Salvador January 2011 (has links)
Evidence suggests that high-level feedback plays an important role in visual perception by shaping the response in lower cortical levels (Sillito et al. 2006, Angelucci and Bullier 2003, Bullier 2001, Harrison et al. 2007). A notable example of this is reflected by the retinotopic activation of V1 and V2 neurons in response to illusory contours, such as Kanizsa figures, which has been reported in numerous studies (Maertens et al. 2008, Seghier and Vuilleumier 2006, Halgren et al. 2003, Lee 2003, Lee and Nguyen 2001). The illusory contour activity emerges first in lateral occipital cortex (LOC), then in V2 and finally in V1, strongly suggesting that the response is driven by feedback connections. Generative models and Bayesian belief propagation have been suggested to provide a theoretical framework that can account for feedback connectivity, explain psychophysical and physiological results, and map well onto the hierarchical distributed cortical connectivity (Friston and Kiebel 2009, Dayan et al. 1995, Knill and Richards 1996, Geisler and Kersten 2002, Yuille and Kersten 2006, Deneve 2008a, George and Hawkins 2009, Lee and Mumford 2003, Rao 2006, Litvak and Ullman 2009, Steimer et al. 2009). The present study explores the role of feedback in object perception, taking as a starting point the HMAX model, a biologically inspired hierarchical model of object recognition (Riesenhuber and Poggio 1999, Serre et al. 2007b), and extending it to include feedback connectivity. A Bayesian network that captures the structure and properties of the HMAX model is developed, replacing the classical deterministic view with a probabilistic interpretation. The proposed model approximates the selectivity and invariance operations of the HMAX model using the belief propagation algorithm. Hence, the model not only achieves successful feedforward recognition invariant to position and size, but is also able to reproduce modulatory effects of higher-level feedback, such as illusory contour completion, attention and mental imagery. Overall, the model provides a biophysiologically plausible interpretation, based on state-of-the-art probabilistic approaches and supported by current experimental evidence, of the interaction between top-down global feedback and bottom-up local evidence in the context of hierarchical object perception.
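To make the feedforward/feedback interaction concrete, here is a minimal Pearl-style belief propagation step on a two-level chain (an object node feeding a feature node). It is an illustrative Python sketch, not the proposed cortical model: the network, its conditional probability tables and the evidence values are all made-up assumptions, chosen only to show how a top-down pi (prior) message combines with a bottom-up lambda (likelihood) message.

```python
import numpy as np

# Tiny two-level chain: object -> edge-feature, with local evidence on the feature.
P_obj = np.array([0.5, 0.5])                 # prior over object = {present, absent}
P_feat_given_obj = np.array([[0.9, 0.1],     # P(feature | object present)
                             [0.2, 0.8]])    # P(feature | object absent)
P_evid_given_feat = np.array([0.7, 0.3])     # likelihood of the local evidence for feature = {on, off}

# Bottom-up: lambda message from the evidence to the feature node
lam_feat = P_evid_given_feat

# Top-down: pi message (feedback) from the object node to the feature node
pi_feat = P_obj @ P_feat_given_obj

# Belief at the feature node = normalized product of feedback and evidence
belief_feat = pi_feat * lam_feat
belief_feat /= belief_feat.sum()

# Lambda message sent upward lets the object node revise its own belief
lam_obj = P_feat_given_obj @ lam_feat
belief_obj = P_obj * lam_obj
belief_obj /= belief_obj.sum()

print("feature belief:", belief_feat, "object belief:", belief_obj)
```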
53

Modélisation d'éléments de structure en béton armé dégradés par corrosion : la problématique de l'interface acier/béton en présence de corrosion / Modelling of reinforced concrete structures subjected to corrosion : the specific case of the steel/concrete interface with corrosion.

Richard, Benjamin 14 September 2010 (has links)
A major source of loss of performance in reinforced concrete structures (excessive cracking, loss of load-carrying capacity) can be attributed to corrosion of the reinforcement, induced either by carbonation or by chloride ion ingress. Because the corrosion products are expansive, tensile stresses are generated that usually lead to cracking of the cover concrete once the tensile strength is exceeded. From a practical point of view, when the first observable signs of degradation are noticed on site, it is generally too late and maintenance actions have to be taken. This results in significant expenses that could have been avoided if a satisfactory prediction had been made. This thesis aims to propose some answers to that problem. Two main objectives are addressed. The first consists in formulating reliable constitutive models for a better understanding of the mechanical behaviour of existing concrete structures. The second aims to develop a probabilistic approach for updating the mechanical model according to experimental information available on site. A general constitutive framework, thermodynamically admissible, is proposed, coupling elasticity, isotropic damage and internal sliding. This framework is specialized into two constitutive models: one for the steel/concrete interface including corrosion, and one for the concrete behaviour. Both models are validated on several structural cases. They can be used for monotonic and cyclic loadings and account for nonlinear hysteretic effects, the quasi-unilateral effect, permanent strains, etc. Simplified versions of the proposed constitutive models are also proposed for engineering purposes within the framework of multifibre beam theory; in the case of the steel/concrete interface, although a Timoshenko-based kinematics is assumed, a non-perfect interface between steel and concrete can be considered locally. Material parameter identification is not always straightforward, so robust updating methods can improve the accuracy of mechanical models. A complete probabilistic approach, based on a joint use of Bayesian networks and reliability theory, is therefore proposed; it allows not only accounting for the uncertainties in the mechanical parameters but also reducing the gap between experimental measurements and numerical predictions. This study provides stakeholders with pertinent decision tools for predicting the structural behaviour of degraded reinforced concrete structures.
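As a toy illustration of the updating idea described above (and not the thesis framework itself), the following Python sketch performs a grid-based Bayesian update of a corrosion-level parameter from a single crack-width measurement; the measurement model and all numerical values are assumptions made for the example.

```python
import numpy as np

# Candidate values of the uncertain parameter: fraction of steel section lost to corrosion
corrosion = np.linspace(0.0, 0.5, 101)

# Prior belief before any field observation (assumed Gaussian shape)
prior = np.exp(-0.5 * ((corrosion - 0.10) / 0.08) ** 2)
prior /= prior.sum()

def predicted_crack_width(c):
    # Assumed simple mechanical relation between corrosion level and crack width (mm)
    return 2.0 * c

measured, sigma = 0.35, 0.05   # observed crack width on site and sensor noise (mm)

# Gaussian measurement likelihood evaluated on the whole grid
likelihood = np.exp(-0.5 * ((measured - predicted_crack_width(corrosion)) / sigma) ** 2)

# Posterior = normalized prior * likelihood; its mean feeds the mechanical model
posterior = prior * likelihood
posterior /= posterior.sum()
print("prior mean:", (prior * corrosion).sum(), "posterior mean:", (posterior * corrosion).sum())
```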
54

New Methods for Large-Scale Analyses of Social Identities and Stereotypes

Joseph, Kenneth 01 June 2016 (has links)
Social identities, the labels we use to describe ourselves and others, carry with them stereotypes that have significant impacts on our social lives. Our stereotypes, sometimes without us knowing, guide our decisions on whom to talk to and whom to stay away from, whom to befriend and whom to bully, whom to treat with reverence and whom to view with disgust. Despite these impacts of identities and stereotypes on our lives, existing methods used to understand them are lacking. In this thesis, I first develop three novel computational tools that further our ability to test and utilize existing social theory on identity and stereotypes. These tools include a method to extract identities from Twitter data, a method to infer affective stereotypes from newspaper data and a method to infer both affective and semantic stereotypes from Twitter data. Case studies using these methods provide insights into Twitter data relevant to the Eric Garner and Michael Brown tragedies and both Twitter and newspaper data from the “Arab Spring”. Results from these case studies motivate the need for not only new methods for existing theory, but new social theory as well. To this end, I develop a new sociotheoretic model of identity labeling - how we choose which label to apply to others in a particular situation. The model combines data, methods and theory from the social sciences and machine learning, providing an important example of the surprisingly rich interconnections between these fields.
55

Implementing Bayesian Networks for online threat detection

Pappaterra, Mauro José January 2018 (has links)
Cybersecurity threats have surged in the past decades. Experts agree that conventional security measures will soon not be enough to stop the propagation of more sophisticated and harmful cyberattacks. Recently, there has been growing interest in mastering the complexity of cybersecurity by adopting methods borrowed from Artificial Intelligence (AI) in order to support automation. Moreover, entire security frameworks, such as DETECT (Decision Triggering Event Composer and Tracker), are designed for the automatic and early detection of threats against systems, using model analysis and recognizing sequences of events and other patterns inherent to attacks. In this project, I concentrate on cybersecurity threat assessment through the translation of Attack Trees (AT) into probabilistic detection models based on Bayesian Networks (BN). I also show how these models can be integrated and dynamically updated as a detection engine in the existing DETECT framework for automated threat detection, hence enabling both offline and online threat assessment. Integration in DETECT is important to allow real-time model execution and evaluation for quantitative threat assessment. Finally, I apply the methodology to some real-world case studies, evaluate the models with sample data, perform data sensitivity analyses, and then present and discuss the results.
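A minimal Python sketch of the attack-tree-to-Bayesian-network translation follows. It is not the DETECT or thesis implementation: the leaf probabilities, gate types and event names are illustrative assumptions, and each AND/OR gate is simply mapped to a deterministic conditional probability table whose marginal is then computed under independent parents.

```python
from itertools import product

# Assumed leaf (basic attack step) probabilities
leaves = {"phishing": 0.3, "weak_password": 0.2, "unpatched_host": 0.4}

def gate_cpt(kind, n_parents):
    """Deterministic CPT for an AND/OR attack-tree gate with n_parents inputs."""
    cpt = {}
    for combo in product([False, True], repeat=n_parents):
        success = all(combo) if kind == "AND" else any(combo)
        cpt[combo] = 1.0 if success else 0.0
    return cpt

def node_probability(kind, parent_probs):
    """Marginal probability of a gate node, assuming independent parents."""
    cpt = gate_cpt(kind, len(parent_probs))
    p = 0.0
    for combo, p_success in cpt.items():
        weight = 1.0
        for value, prob in zip(combo, parent_probs):
            weight *= prob if value else (1.0 - prob)
        p += weight * p_success
    return p

# credential_theft = phishing OR weak_password; breach = credential_theft AND unpatched_host
p_cred = node_probability("OR", [leaves["phishing"], leaves["weak_password"]])
p_breach = node_probability("AND", [p_cred, leaves["unpatched_host"]])
print(round(p_cred, 3), round(p_breach, 3))
```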
56

Fusion de décisions dédiée à la surveillance des systèmes complexes / Decision fusion dedicated to the monitoring of complex systems

Tidriri, Khaoula 16 October 2018 (has links)
Nowadays, systems are becoming more and more complex and require new, effective methods for their supervision. Supervision includes a monitoring phase that aims to improve the system's performance and ensure safe production for people and equipment. This thesis deals with fault detection, diagnosis and prognosis, with a methodology based on decision fusion. The main issue concerns the integration of the different decisions emanating from individual monitoring methods in order to obtain more reliable results, with the aim of proposing a generic fusion approach that outperforms the individual methods composing it. The methodology is based on a theoretical learning of the Bayesian network parameters according to the monitoring performance objectives to be reached. The development leads to a multi-objective problem under constraints, which is solved with a lexicographic approach. The first step is offline and consists of defining the objectives to be achieved in order to improve the overall performance of the system; the Bayesian network parameters respecting these objectives are then deduced theoretically. Finally, the parametrized Bayesian network is used online to test the decision-fusion performance. This performance is evaluated in terms of Fault Diagnosis Rate (FDR) and False Alarm Rate (FAR) for the detection and diagnosis stage, and in terms of Remaining Useful Life (RUL) for the prognosis.
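The decision-fusion idea can be illustrated with a naive-Bayes combination of detector outputs. The Python sketch below is not the thesis method: the detector names, their FDR/FAR values and the fault prior are assumptions, and it only shows how individual binary decisions are fused into a posterior fault probability.

```python
# Each monitoring method is summarized by its fault detection rate (FDR)
# and false alarm rate (FAR); values are illustrative.
detectors = {
    "residual_test": {"FDR": 0.90, "FAR": 0.08},
    "pca_monitor":   {"FDR": 0.80, "FAR": 0.05},
}
prior_fault = 0.02   # assumed prior probability that a fault is present

def fused_posterior(decisions, prior=prior_fault):
    """Posterior fault probability given each detector's binary decision."""
    p_fault, p_ok = prior, 1.0 - prior
    for name, alarm in decisions.items():
        fdr, far = detectors[name]["FDR"], detectors[name]["FAR"]
        # Likelihood of the observed decision under each hypothesis
        p_fault *= fdr if alarm else (1.0 - fdr)
        p_ok    *= far if alarm else (1.0 - far)
    return p_fault / (p_fault + p_ok)

# Both detectors raise an alarm vs. only one of them
print(fused_posterior({"residual_test": True, "pca_monitor": True}))
print(fused_posterior({"residual_test": True, "pca_monitor": False}))
```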
57

Approche de diagnostic des défauts d’un produit par intégration des données de traçabilité unitaire produit/process et des connaissances expertes / Product defects diagnosis approach by integrating product / process unitary traceability data and expert knowledge

Diallo, Thierno M. L. 10 December 2015 (has links)
This thesis, carried out as part of the FUI Traçaverre project, aims to optimize product recalls for a non-batch production process with unit-level traceability of the produced items. The objective is to minimize the number of recalled items while ensuring that every defective item is recalled. To this end, an efficient recall procedure is proposed that exploits the possibilities offered by unit-level traceability and uses a diagnosis function, which has become indispensable before actually recalling products. For complex industrial systems for which human expertise is insufficient and no physical model is available, unit-level traceability provides an opportunity to better understand and analyse the manufacturing process by reconstructing the life of the product through the traceability data. The coupling of product and process unit-level traceability data represents a potential source of knowledge to be implemented and exploited. This thesis proposes a data model for this coupling, based on two standards, one dedicated to production and the other to traceability. After identifying and integrating the necessary data, a data-driven diagnosis function is developed; it is built by a learning approach and incorporates knowledge about the system in order to reduce the complexity of the learning algorithm. In the proposed recall procedure, once the equipment causing the fault is identified, the health status of that equipment in the neighbourhood of the manufacturing time of the defective product is evaluated in order to identify the other products likely to present the same defect. The overall approach is applied to two case studies: the first concerns the glass industry, and the second deals with the Tennessee Eastman benchmark process.
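As a toy illustration of the recall-scoping step (not the Traçaverre implementation), the following Python sketch joins unit traceability records with an equipment health indicator and recalls only the items produced near the defective item or while the equipment's health was degraded; the records, thresholds and time window are assumptions.

```python
from datetime import datetime, timedelta

traceability = [
    # (item_id, equipment_id, manufacturing time)
    ("A001", "mold_3", datetime(2015, 6, 1, 8, 0)),
    ("A002", "mold_3", datetime(2015, 6, 1, 9, 30)),
    ("A003", "mold_7", datetime(2015, 6, 1, 9, 45)),
    ("A004", "mold_3", datetime(2015, 6, 1, 14, 0)),
]
health = {  # equipment health indicator sampled over time (1.0 = healthy)
    "mold_3": [(datetime(2015, 6, 1, 8, 0), 0.95),
               (datetime(2015, 6, 1, 9, 0), 0.60),
               (datetime(2015, 6, 1, 13, 0), 0.92)],
}

def items_to_recall(defective_item, equipment, window=timedelta(hours=2), threshold=0.7):
    """Items made on `equipment` near the defect or near a degraded-health sample."""
    t_def = next(t for i, e, t in traceability if i == defective_item)
    suspect_times = [t for t, h in health[equipment] if h < threshold]
    recall = []
    for item, eq, t in traceability:
        if eq != equipment:
            continue
        if abs(t - t_def) <= window or any(abs(t - ts) <= window for ts in suspect_times):
            recall.append(item)
    return recall

print(items_to_recall("A002", "mold_3"))
```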
58

Redes Bayesianas aplicadas à análise do risco de crédito. / Bayesian networks applied to the analysis of credit risk.

Karcher, Cristiane 26 February 2009 (has links)
Credit scoring models are used to estimate the probability that a credit applicant will default within a given period, based on personal and financial information. In this work, the technique proposed for credit scoring is Bayesian networks (BN), and its results are compared with logistic regression. The BNs evaluated are Bayesian network classifiers with the following structures: Naive Bayes, Tree Augmented Naive Bayes (TAN) and General Bayesian Network (GBN). The BN structures were obtained by structure learning from a real database. Model performance was evaluated and compared through the hit rates obtained from the confusion matrix, the Kolmogorov-Smirnov statistic and the Gini coefficient. The development and validation samples were obtained by 10-fold cross-validation. The analysis of the fitted models showed that the BNs and logistic regression have similar performance with respect to the Kolmogorov-Smirnov statistic and the Gini coefficient. The TAN classifier was chosen as the best model because it performed best in predicting bad payers and allowed an analysis of interaction effects between variables.
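A minimal Python sketch of the kind of comparison described, a Naive Bayes scorer evaluated with the Kolmogorov-Smirnov statistic and the Gini coefficient, is given below. It is not the thesis experiment: the data, features and smoothing are made up, and the cross-validation protocol is omitted for brevity.

```python
import numpy as np

# Toy categorical credit data: 1 = defaulted ("bad" customer)
X = np.array([["young", "rent"], ["young", "own"], ["old", "own"],
              ["old", "own"], ["young", "rent"], ["old", "rent"]])
y = np.array([1, 0, 0, 0, 1, 0])

def naive_bayes_score(X, y, x_new, smoothing=1.0):
    """P(default | x_new) under class-conditional independence of the features."""
    posts = {}
    for c in (0, 1):
        Xc = X[y == c]
        p = (y == c).mean()
        for j, value in enumerate(x_new):
            levels = np.unique(X[:, j])
            count = (Xc[:, j] == value).sum()
            p *= (count + smoothing) / (len(Xc) + smoothing * len(levels))  # Laplace smoothing
        posts[c] = p
    return posts[1] / (posts[0] + posts[1])

scores = np.array([naive_bayes_score(X, y, x) for x in X])

def ks_and_gini(scores, y):
    """KS statistic and Gini coefficient of the score on the labelled sample."""
    order = np.argsort(-scores)                                     # riskiest first
    bad = np.concatenate(([0.0], np.cumsum(y[order]) / y.sum()))    # cumulative bad rate
    good = np.concatenate(([0.0], np.cumsum(1 - y[order]) / (1 - y).sum()))
    ks = np.max(np.abs(bad - good))
    auc = np.trapz(bad, good)                                       # approximate ROC area
    return ks, 2 * auc - 1                                          # Gini = 2*AUC - 1

print(ks_and_gini(scores, y))
```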
59

Sistema evolutivo eficiente para aprendizagem estrutural de redes Bayesianas / Efficient evolutionary system for learning BN structures

Villanueva Talavera, Edwin Rafael 21 September 2012 (has links)
Bayesian networks (BN) are probabilistic tools widely accepted for modeling and reasoning in domains under uncertainty. One of the most difficult tasks in the construction of a BN is determining its model structure, i.e. the interdependence structure of the problem variables. Exact estimation of the model structure from observed data is generally infeasible, since the number of possible structures grows super-exponentially with the number of variables. Efficient approximate methods are therefore essential for building credible BNs. This work presents the Efficient Evolutionary System for learning BN structures (EES-BN), which is composed of two learning phases. The first phase reduces the search space by estimating a superstructure; for this task two methods were developed (Opt01SS and OptHPC), both based on independence tests. The second phase of EES-BN is an evolutionary search that approximates the optimal model structure using the superstructure as the search space. Three main blocks compose this phase: recombination, mutation and diversity injection. To gain search efficiency, a new recombination operator (MergePop) was developed, improving the Merge operator of Wong and Leung (2004). The operators in the mutation and diversity-injection blocks were also chosen to maintain an appropriate balance between exploitation and exploration of the solutions. All blocks in EES-BN are structured to operate in a collaborative and self-regulating fashion. Through a series of experiments and comparisons on benchmark BNs of varied dimensionality, EES-BN was found to learn structures markedly closer to the gold-standard networks than various other representative methods (two evolutionary: CCGA and GAK2; two non-evolutionary: GS and MMHC). The computational times of EES-BN were also competitive, notably improving on the times of the other evolutionary methods and also on GS for the larger networks. The effectiveness of EES-BN was further verified on two real problems in bioinformatics: i) the reconstruction of a gene regulatory network from gene-expression data, and ii) the modeling of linkage disequilibrium structures from genotyped genetic-marker data of human populations. In both applications EES-BN was able to recover interesting relationships of established biological significance.
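To illustrate the idea of searching for a structure inside a learned superstructure, here is a deliberately simplified Python sketch: a (1+1)-style hill climb that toggles superstructure edges and keeps changes that improve a BIC-style score. It is not EES-BN (there is no recombination, MergePop or diversity injection), and the data, superstructure and scoring details are illustrative assumptions.

```python
import math, random

random.seed(0)
VARS = ["A", "B", "C"]
DATA = [dict(zip(VARS, bits)) for bits in
        [(0, 0, 0), (1, 1, 0), (1, 1, 1), (0, 0, 0), (1, 1, 1), (0, 1, 0), (1, 1, 1), (0, 0, 0)]]
SUPERSTRUCTURE = [("A", "B"), ("B", "C"), ("A", "C")]   # candidate edges (either orientation)

def is_acyclic(edges):
    children = {v: [c for p, c in edges if p == v] for v in VARS}
    def reaches(src, dst, seen=()):
        return any(c == dst or (c not in seen and reaches(c, dst, seen + (c,)))
                   for c in children[src])
    return not any(reaches(v, v) for v in VARS)

def score(edges):
    """Penalized log-likelihood (BIC-style) of DATA under the DAG `edges`."""
    total, n_params = 0.0, 0
    for v in VARS:
        parents = sorted(p for p, c in edges if c == v)
        n_params += 2 ** len(parents)
        for row in DATA:
            ctx = tuple(row[p] for p in parents)
            matching = [r for r in DATA if tuple(r[p] for p in parents) == ctx]
            p_v = (sum(r[v] == row[v] for r in matching) + 1) / (len(matching) + 2)
            total += math.log(p_v)
    return total - 0.5 * n_params * math.log(len(DATA))

def mutate(edges):
    """Toggle one superstructure edge in a random orientation; reject cycles."""
    u, w = random.choice(SUPERSTRUCTURE)
    if random.random() < 0.5:
        u, w = w, u
    new = set(edges)
    new.symmetric_difference_update({(u, w)})
    return new if is_acyclic(new) else edges

current, best = set(), score(set())
for _ in range(300):
    candidate = mutate(current)
    s = score(candidate)
    if s > best:
        current, best = candidate, s
print(sorted(current), round(best, 2))
```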
60

Redes Bayesianas: um método para avaliação de interdependência e contágio em séries temporais multivariadas / Bayesian Networks: a method for evaluation of interdependence and contagion in multivariate time series

Carvalho, João Vinícius de França 25 April 2011 (has links)
This work aims to identify the existence of financial contagion using the methodology of Bayesian networks. Besides Bayesian networks, the interdependence of international markets during the financial crises that occurred between 1996 and 2009 was also modeled with two other techniques, multivariate GARCH models and copulas, considering countries in which the effects could be assessed and that have been the subject of similar studies in the literature. With well-defined crisis periods and a methodology grounded in graph theory and Bayesian inference, a sequential analysis was carried out in which the situation preceding each crisis period was taken as the prior for the events of that period (the likelihood); their combination yields the new posterior, which serves as the prior for the subsequent period, and so on. The results pointed to strong interconnection between markets and to several instances of contagion during financial crises, with well-identified originating markets, largely in line with the literature. Moreover, the pairs of countries showing evidence of financial contagion in the Bayesian networks across the most crisis periods were the same ones that presented the highest parameter values estimated by the copulas and the most strongly significant parameters in the multivariate GARCH model. This agreement makes the Bayesian network results more compelling and suggests that the model fits the data set used in this study well. Finally, it was found that, after the successive crises, the markets were much more interconnected than in the initially adopted period.
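The sequential prior-to-posterior scheme described above can be illustrated with a toy Python sketch (not the thesis model): a single binary "markets are coupled" hypothesis is updated period by period, with each posterior becoming the prior for the next crisis window; the observation counts and likelihood values are assumptions.

```python
# Assumed joint-crash-day counts observed in each crisis window
periods = {
    "Asian crisis 1997":   {"joint_crash_days": 9,  "days": 60},
    "Russian crisis 1998": {"joint_crash_days": 14, "days": 60},
    "Subprime 2008":       {"joint_crash_days": 21, "days": 60},
}
# Assumed probability of a joint crash day under each hypothesis
P_JOINT_IF_COUPLED, P_JOINT_IF_INDEP = 0.25, 0.05

def binomial_likelihood(k, n, p):
    # Binomial kernel; the combinatorial factor cancels in the posterior ratio
    return (p ** k) * ((1 - p) ** (n - k))

prior = 0.5
for name, obs in periods.items():
    like_coupled = binomial_likelihood(obs["joint_crash_days"], obs["days"], P_JOINT_IF_COUPLED)
    like_indep = binomial_likelihood(obs["joint_crash_days"], obs["days"], P_JOINT_IF_INDEP)
    posterior = prior * like_coupled / (prior * like_coupled + (1 - prior) * like_indep)
    print(f"{name}: P(coupled | data so far) = {posterior:.3f}")
    prior = posterior            # today's posterior becomes tomorrow's prior
```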
