81 |
A Bayesian Network Approach to the Self-organization and Learning in Intelligent Agents
Sahin, Ferat 25 September 2000 (has links)
A Bayesian network approach to self-organization and learning is introduced for use with intelligent agents. Bayesian networks, with the help of influence diagrams, are employed to create a decision-theoretic intelligent agent; influence diagrams combine Bayesian networks with utility theory. In this research, an intelligent agent is modeled by its belief, preference, and capability attributes. Each agent is assumed to have its own belief about its environment, and this belief is represented by a Bayesian network. The goal of the agent constitutes its preference and is represented by a utility function, while its capabilities are represented by the set of possible actions available to it. In the influence diagram, utility nodes and decision nodes handle the agent's preferences and capabilities, respectively.
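For illustration, the belief-preference-capability decomposition described above comes down to choosing the action with the highest expected utility under the agent's current belief. The sketch below is not the dissertation's implementation: the states, probabilities, and utilities are invented, and the belief is a fixed distribution standing in for the output of Bayesian-network inference.

```python
# Minimal sketch of a decision-theoretic agent: pick the action that
# maximizes expected utility under the agent's current belief.
# All states, probabilities, and utilities here are illustrative only.

# Belief over environment states (would come from Bayesian-network inference).
belief = {"sheep_near_pen": 0.3, "sheep_far_from_pen": 0.7}

# Utility of each (action, state) pair, playing the role of the utility node.
utility = {
    ("push_toward_pen", "sheep_near_pen"): 10.0,
    ("push_toward_pen", "sheep_far_from_pen"): 4.0,
    ("wait", "sheep_near_pen"): 2.0,
    ("wait", "sheep_far_from_pen"): 0.0,
}

def expected_utility(action, belief):
    """Sum utility over states, weighted by the agent's belief."""
    return sum(p * utility[(action, state)] for state, p in belief.items())

def choose_action(actions, belief):
    """Capabilities are the action set; preference is encoded by the utilities."""
    return max(actions, key=lambda a: expected_utility(a, belief))

print(choose_action(["push_toward_pen", "wait"], belief))  # -> push_toward_pen
```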
Learning in the decision-theoretic intelligent agent is accomplished through its Bayesian network, and Bayesian network learning methods are discussed extensively in this work. Because intelligent agents explore and learn their environment as they act, the learning algorithm must be implemented online. None of the existing Bayesian network learning algorithms supports online learning, so an online Bayesian network learning method is proposed that allows the intelligent agent to learn during its exploration.
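A common way to obtain such online behaviour is to keep Dirichlet pseudo-counts for each conditional probability table and update them as each case arrives. The sketch below illustrates only that general idea, with invented variable names; it is not the specific online learning method proposed in the dissertation.

```python
from collections import defaultdict

# Illustrative online update of one CPT, P(node | parents), via Dirichlet counts.
# Each incoming case increments a count; probabilities are re-read on demand.

class OnlineCPT:
    def __init__(self, prior=1.0):
        self.prior = prior                    # Dirichlet pseudo-count per value
        self.counts = defaultdict(float)      # (parent_config, value) -> count
        self.values = set()                   # values of the node seen so far

    def update(self, parent_config, value):
        """Incorporate one new observation without revisiting old data."""
        self.values.add(value)
        self.counts[(parent_config, value)] += 1.0

    def prob(self, parent_config, value):
        """Posterior-mean estimate of P(value | parent_config)."""
        num = self.counts[(parent_config, value)] + self.prior
        den = sum(self.counts[(parent_config, v)] for v in self.values) \
              + self.prior * len(self.values)
        return num / den

cpt = OnlineCPT()
for obs in [("dog_close", "retreat"), ("dog_close", "retreat"), ("dog_far", "graze")]:
    cpt.update(*obs)
print(round(cpt.prob("dog_close", "retreat"), 3))  # 0.75 with a uniform prior
```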
Self-organization of the intelligent agents is achieved because each agent models the other agents by observing their behavior. Agents hold beliefs not only about the environment but also about one another, so an agent makes its decisions according to both its model of the environment and its models of the other agents. Even though each agent acts independently, it takes the other agents' behavior into account when making a decision, which allows the agents to organize themselves for a common task.
To test the proposed intelligent agent's learning and self-organizing abilities, a Windows application was written to simulate multi-agent systems. The software, IntelliAgent, lets the user design decision-theoretic intelligent agents both manually and automatically. It can also be used for knowledge discovery by applying Bayesian network learning to a database.
Additionally, we explored a well-known herding problem to obtain sound results for our intelligent agent design. In this problem, a dog tries to herd a sheep to a certain location (a pen), while the sheep tries to avoid the dog by retreating from it. The herding problem was simulated using the IntelliAgent software, and the simulations produced good results in terms of the dog's ability to learn and to organize its actions according to the behavior of the sheep (the other agent).
In summary, a decision-theoretic approach is applied to the self-organization and learning problems in intelligent agents. Software was written to simulate the learning and self-organization abilities of the proposed agent design. A user manual for the software and the simulation results are presented.
This research was supported by the Office of Naval Research under grant number N00014-98-1-0779. Their financial support is greatly appreciated. / Ph. D.
|
82 |
Design of Joint Verification-Correction Strategies for Engineered Systems
Xu, Peng 28 June 2022 (has links)
System verification is a critical process in the development of engineered systems. Engineers gain confidence in the correct functionality of the system by executing system verification. Traditionally, system verification is implemented by conducting a verification strategy (VS) consisting of verification activities (VA). A VS can be generated using industry standards, expert experience, or quantitative-based methods. However, two limitations exist in these previous studies. First, as an essential part of system verification, correction activities (CA) are used to correct system errors or defects identified by VAs. However, CAs are usually simplified and treated as a component associated with VAs instead of independent decisions. Even though this simplification may accelerate the VS design, it results in inferior VSs because the optimization of correction decisions is ignored. Second, current methods have not handled the issue of complex engineered systems. As the number of activities increases, the magnitude of the possible VSs becomes so large that finding the optimal VS is impossible or impractical. Therefore, these limitations leave room for improving the VS design, especially for complex engineered systems.
This dissertation presents a joint verification-correction model (JVCM) to address these gaps. The basic idea of this model is to provide an engineering paradigm for complex engineered systems that simultaneously considers decisions about VAs and CAs. The accompanying research problem is to develop a modeling and analysis framework to solve for joint verification-correction strategies (JVCS). This dissertation addresses this problem in three steps. First, verification processes (VP) are modeled mathematically to capture the impacts of VAs and CAs. Second, a JVCM with small strategy spaces is established with all conditions of a VP. A modified backward induction method is proposed to solve for an optimal JVCS in small strategy spaces. Third, a UCB-based tree search approach is designed to find near-optimal JVCSs in large strategy spaces. A case study is conducted and analyzed in each step to show the feasibility of the proposed models and methods. / Doctor of Philosophy / System verification is a critical step in the life cycle of system development. It is used to check that a system conforms to its design requirements. Traditionally, system verification is implemented by conducting a verification strategy (VS) consisting of verification activities (VA). A VS can be generated using industry standards, expert experience, or quantitative-based methods. However, two limitations exist in these methods. First, as an essential part of system verification, correction activities (CA) are used to correct system errors or defects identified by VAs. However, CAs are usually simplified and treated as remedial measures that depend on the results of VAs rather than as independent decision choices. Even though this simplification may accelerate the VS design, it results in inferior VSs because the optimization of correction decisions is ignored. Second, current methods have not handled the issue of large systems. As the number of activities increases, the total number of possible VSs becomes so large that it is impossible to find the optimal solution. Therefore, these limitations leave room for improving the VS design, especially for large systems.
This dissertation presents a joint verification-correction model (JVCM) to address these gaps. The basic idea of this model is to provide a paradigm for large systems that simultaneously considers decisions about VAs and CAs. The accompanying research problem is to develop a modeling and analysis framework to solve for joint verification-correction strategies (JVCS). This dissertation addresses this problem in three steps. First, verification processes (VP) are modeled mathematically to capture the impacts of VAs and CAs. Second, a JVCM with small strategy spaces is established with all conditions of a VP. A modified backward induction method is proposed to solve for an optimal JVCS in small strategy spaces. Third, a UCB-based tree search approach is designed to find near-optimal JVCSs in large strategy spaces. A case study is conducted and analyzed in each step to show the feasibility of the proposed models and methods.
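For readers unfamiliar with UCB-based search, the core of such an approach is typically the UCB1 selection rule, which balances exploiting activities that have scored well with exploring untried ones. The sketch below illustrates only that rule; the child nodes, value sums, and visit counts are hypothetical and are not taken from the dissertation.

```python
import math

# UCB1 child selection as used in Monte-Carlo-style tree search.
# Each child here stands for a candidate next activity (a VA or a CA);
# visit counts and value sums are illustrative placeholders.

def ucb1_score(total_value, visits, parent_visits, c=math.sqrt(2)):
    """Mean value plus an exploration bonus that shrinks as visits grow."""
    if visits == 0:
        return float("inf")          # always try an unvisited activity first
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children, parent_visits):
    """children: dict mapping activity name -> (total_value, visits)."""
    return max(children,
               key=lambda name: ucb1_score(*children[name], parent_visits))

children = {"verify_subsystem_A": (4.2, 10),
            "correct_defect_3": (2.5, 4),
            "verify_interface_B": (0.0, 0)}
print(select_child(children, parent_visits=14))   # -> verify_interface_B (unvisited)
```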
|
83 |
Designing and modeling high-throughput phenotyping data in quantitative genetics
Yu, Haipeng 09 April 2020 (has links)
Quantitative genetics aims to bridge the genome-to-phenome gap. The advent of high-throughput genotyping technologies has accelerated progress in genome-to-phenome mapping, but a challenge remains in phenotyping. Various high-throughput phenotyping (HTP) platforms have been developed recently to obtain economically important phenotypes in an automated fashion with less human labor and reduced costs. However, effective ways of designing HTP have not been investigated thoroughly. In addition, high-dimensional HTP data pose a major challenge for statistical analysis by increasing computational demands. A new strategy for modeling high-dimensional HTP data and elucidating the interrelationships among these phenotypes is needed. Previous studies used pedigree-based connectedness statistics to study the design of phenotyping. The availability of genetic markers provides a new opportunity to evaluate connectedness based on genomic data, which can serve as a means to design HTP. This dissertation first discusses the utility of connectedness across three studies. In the first study, I introduced genomic connectedness and compared it with traditional pedigree-based connectedness. The relationship between genomic connectedness and prediction accuracy based on cross-validation was investigated in the second study. The third study introduced a user-friendly connectedness R package, which provides a suite of functions to evaluate the extent of connectedness. In the last study, I proposed a new statistical approach to model high-dimensional HTP data by leveraging the combination of confirmatory factor analysis and Bayesian networks. Collectively, the results from the first three studies suggest the potential usefulness of applying genomic connectedness to the design of HTP. The statistical approach introduced in the last study provides a new avenue for modeling high-dimensional HTP data holistically, to further help us understand the interrelationships among phenotypes derived from HTP. / Doctor of Philosophy / Quantitative genetics aims to bridge the genome-to-phenome gap. With the advent of genotyping technologies, the genomic information of individuals can be included in a quantitative genetic model. A new challenge is to obtain sufficient and accurate phenotypes in an automated fashion with less human labor and reduced costs. High-throughput phenotyping (HTP) technologies have emerged recently, opening a new opportunity to address this challenge. However, there is a paucity of research on phenotyping design and on modeling high-dimensional HTP data. The main themes of this dissertation are 1) genomic connectedness, which could potentially be used as a means to design a phenotyping experiment, and 2) a novel statistical approach that aims to handle high-dimensional HTP data. In the first three studies, I first compared genomic connectedness with pedigree-based connectedness. This was followed by investigating the relationship between genomic connectedness and prediction accuracy derived from cross-validation. Additionally, I developed a connectedness R package that implements a variety of connectedness measures. The fourth study investigated a novel statistical approach that leverages the combination of dimension reduction and graphical models to understand the interrelationships among high-dimensional HTP data.
|
84 |
Reliability Assessment of IoT-enabled Systems using Fault Trees and Bayesian Networks
Abdulhamid, Alhassan; Kabir, Sohag; Ghafir, Ibrahim; Lei, Ci 18 January 2024 (has links)
The Internet of Things (IoT) has brought significant advancements in various domains, providing innovative and efficient solutions. However, ensuring the safe design and operation of IoT devices is crucial, as the consequences of component failure can range from system downtime to dangerous operating states. Several methods have been proposed to evaluate the failure behaviours of IoT-based systems, including Fault Tree Analysis (FTA), a methodology adopted from other safety-critical domains. This study integrated FTA and Bayesian Network (BN) models to assess IoT system reliability based on components' reliability data and other statistical information. The integrated model achieved efficient predictive failure analysis, considering combinations of 12 basic events to quantify the overall system's reliability. The model also enables criticality analysis, ranking basic events based on their contributions to system failure and providing a guide for design modification in order to enhance IoT safety. By comparing failure data in FTA and criticality indices obtained using the BN model, the proposed integration offers a probabilistic estimation of IoT system failure and a viable safety guide for designing IoT systems.
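As a rough illustration of the FTA-to-BN mapping described above, the sketch below encodes a much smaller fault tree (three invented basic events rather than the twelve used in the paper) as deterministic gate functions and computes the top-event probability by enumeration; it is not the authors' model.

```python
from itertools import product

# Toy fault tree mapped to a Bayesian network with deterministic gate nodes:
#   top = OR(power_failure, AND(sensor_fault, gateway_fault))
# Basic-event probabilities are invented for illustration.
p_basic = {"power_failure": 0.01, "sensor_fault": 0.05, "gateway_fault": 0.02}

def top_event(assignment):
    """Deterministic CPTs: each gate output is a function of its inputs."""
    and_gate = assignment["sensor_fault"] and assignment["gateway_fault"]
    return assignment["power_failure"] or and_gate

def prob_top():
    """Exact inference by enumerating basic-event states (fine for small trees)."""
    total = 0.0
    names = list(p_basic)
    for values in product([True, False], repeat=len(names)):
        assignment = dict(zip(names, values))
        weight = 1.0
        for name, value in assignment.items():
            weight *= p_basic[name] if value else 1.0 - p_basic[name]
        if top_event(assignment):
            total += weight
    return total

print(round(prob_top(), 6))   # ~0.01099 = 1 - (1-0.01)*(1 - 0.05*0.02)
```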
|
85 |
Fuzzy Bayesian estimation and consequence modeling of the domino effects of methanol storage tanks
Pouyakian, M.; Laal, F.; Jafari, M.J.; Nourai, F.; Kabir, Sohag 07 April 2022 (has links)
In this study, a fuzzy Bayesian network (FBN) approach was proposed to analyze the domino effects of pool fires in storage tanks. Failure probabilities were calculated using triangular fuzzy numbers, the combined centre-of-area (CoA)/sum-product method, and the BN approach. Consequence modeling, probit equations, and leaky noisy-OR (L-NOR) gates were used to analyze the domino effects and to modify the conditional probability tables (CPTs). Methanol storage tanks were selected to confirm the practical feasibility of the suggested method. The domino probabilities at the first and second levels obtained using bow-tie analysis (BTA) and the FBN were then compared, and the ratio of variation (RoV) was used for sensitivity analysis. The probability of the domino effect at the first and second levels (FBN) was 0.0071472631 and 0.0090630640, respectively. The results confirm that this method is a suitable tool for analyzing domino effects and that using an FBN with L-NOR gates is a good way to assess the reliability of tanks. / National Petrochemical Company (NPC) of Iran
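Two of the building blocks mentioned above, centre-of-area defuzzification of a triangular fuzzy number and a noisy-OR gate, can be sketched in a few lines. All numbers below are placeholders, and the sketch omits the consequence-modeling, probit, and expert-aggregation steps of the full method.

```python
# Illustrative pieces of a fuzzy Bayesian-network workflow (numbers invented).

def coa_triangular(a, m, b):
    """Centre of area (centroid) of a triangular fuzzy number (a, m, b)."""
    return (a + m + b) / 3.0

def noisy_or(parent_probs, link_probs):
    """P(child) under a noisy-OR gate: each active parent i causes the child
    independently with probability link_probs[i]."""
    p_none = 1.0
    for p_parent, p_link in zip(parent_probs, link_probs):
        p_none *= 1.0 - p_parent * p_link
    return 1.0 - p_none

# Expert opinion on a tank failure probability expressed as a triangular fuzzy
# number, then defuzzified to a crisp value for the Bayesian network.
p_leak = coa_triangular(0.001, 0.004, 0.010)

# Probability that a neighbouring tank escalates, given two hypothetical
# escalation paths (heat radiation, fragments) with their own probabilities.
p_escalation = noisy_or(parent_probs=[p_leak, 0.002], link_probs=[0.6, 0.3])
print(round(p_leak, 4), round(p_escalation, 6))
```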
|
86 |
Finding Causal Relationships Among Metrics In A Cloud-Native Environment / Att hitta orsakssamband bland mätvärden i en moln-native miljö
Rishi Nandan, Suresh January 2023 (has links)
Automatic Root Cause Analysis (RCA) systems aim to streamline the process of identifying the underlying cause of software failures in complex cloud-native environments. These systems employ graph-like structures to represent causal relationships between the different components of a software application, relationships that are typically learned from the performance and resource-utilization metrics of the microservices in the system. To accomplish this, many RCA systems use statistical algorithms, specifically those falling under the category of causal discovery, which have demonstrated their utility not only in RCA systems but also in a wide range of other domains and applications. Nonetheless, there is a research gap in exploring the feasibility and efficacy of multivariate time-series causal discovery algorithms for deriving causal graphs within a microservice framework. By harnessing metric time-series data from Prometheus and applying these algorithms, we aim to shed light on their performance in a cloud-native environment. Furthermore, we introduce an adaptation in the form of an ensemble causal discovery algorithm. Our experiments with this ensemble approach, conducted on datasets with known causal relationships, demonstrate its potential to improve the precision of the detected causal connections. Our ultimate objective was to ascertain reliable causal relationships within Ericsson's cloud-native system 'X', where the ground truth is unavailable; the ensemble approach overcomes the limitations of individual causal discovery algorithms and significantly increases confidence in the causal relationships that are uncovered. As a practical illustration of the utility of the ensemble causal discovery technique, we apply it to anomaly detection, leveraging the causal graphs obtained in our study to detect anomalies within the Ericsson system.
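A hedged sketch of the ensemble idea described in the abstract: run several causal discovery algorithms, collect the directed edges each proposes, and keep only the edges that reach a vote threshold. The individual algorithms are stubbed with fixed outputs and the metric names are invented; nothing below comes from the Ericsson system.

```python
from collections import Counter

# Ensemble causal discovery by majority voting over directed edges.
# Each "algorithm" is stubbed with a fixed edge set for illustration.

edge_sets = {
    "algo_granger_like": {("cpu_usage", "latency"), ("memory", "latency")},
    "algo_pc_like":      {("cpu_usage", "latency"), ("latency", "error_rate")},
    "algo_score_based":  {("cpu_usage", "latency"), ("memory", "latency"),
                          ("latency", "error_rate")},
}

def ensemble_edges(edge_sets, min_votes=2):
    """Keep an edge if at least `min_votes` algorithms proposed it."""
    votes = Counter(edge for edges in edge_sets.values() for edge in edges)
    return {edge for edge, n in votes.items() if n >= min_votes}

for cause, effect in sorted(ensemble_edges(edge_sets)):
    print(f"{cause} -> {effect}")
# cpu_usage -> latency (3 votes), latency -> error_rate (2), memory -> latency (2)
```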
|
87 |
L’évolution modulaire des protéines : un point de vue phylogénétique / A phylogenetic view of the modular evolution of proteins
Sertier, Anne-Sophie 12 September 2011 (has links)
The diversity of life derives mostly from the variety of proteins coded in genomes. How did evolution produce such a tremendous diversity? The classical theory postulates that this diversity results both from sequence divergence and from the combinatorial arrangement of a few thousand ancient protein domain types, but it does not account for the increasing number of entirely unique (orphan) proteins found in most genomes. In this thesis, we study the evolution of proteins from the point of view of their domain decomposition, relying on three databases: HOGENOM (homologous protein families), Pfam (manually curated protein domain families) and ProDom (automatically built protein module families). Each protein family from HOGENOM has thus been decomposed into Pfam domains or ProDom modules. We have modelled the evolution of these families using a Bayesian network based on the phylogenetic species tree. In the framework of this model, we can rigorously reconstruct the most likely evolutionary scenarios reflecting the presence or absence of each protein, domain or module in ancestral species. Comparing these scenarios allows us to analyse the emergence of new proteins in terms of ancestral domains or modules. The Pfam analysis suggests that the majority of protein innovations result from rearrangements of ancient domains, in agreement with the classical paradigm of modular protein evolution; however, a very significant part of protein diversity is then neglected. The ProDom analysis, on the other hand, suggests that the majority of new proteins have recruited novel protein modules. We discuss the respective biases of Pfam and ProDom underlying these contrasting views. We propose that the emergence of new protein modules may result from a fast turnover of coding sequences, and that this module innovation is essential to the emergence of numerous novel proteins throughout evolution.
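The reconstruction described above amounts to inference over a binary presence/absence character on a species tree. The sketch below applies Felsenstein-style pruning to a tiny invented tree with invented gain/loss probabilities; it illustrates the kind of computation involved rather than the thesis's actual model or data.

```python
# Likelihood of presence/absence observations at the leaves of a small species
# tree, under a two-state (absent = 0, present = 1) Markov model per branch.
# Tree, observations, and branch transition probabilities are invented.

# P(child_state | parent_state) on every branch: rare gain, more frequent loss.
T = {0: {0: 0.98, 1: 0.02},    # absent stays absent, is rarely gained
     1: {0: 0.10, 1: 0.90}}    # present is lost with probability 0.10

# Tree: root -> (internal, C), internal -> (A, B); domain observed in A and B only.
tree = {"root": ["internal", "C"], "internal": ["A", "B"]}
observed = {"A": 1, "B": 1, "C": 0}

def conditional_likelihood(node):
    """L[node][s] = P(observations below node | node is in state s)."""
    if node in observed:
        return {s: 1.0 if s == observed[node] else 0.0 for s in (0, 1)}
    left, right = (conditional_likelihood(c) for c in tree[node])
    return {s: sum(T[s][x] * left[x] for x in (0, 1)) *
               sum(T[s][x] * right[x] for x in (0, 1))
            for s in (0, 1)}

root_prior = {0: 0.9, 1: 0.1}          # domain assumed rare at the root
L = conditional_likelihood("root")
likelihood = sum(root_prior[s] * L[s] for s in (0, 1))
print(likelihood)   # mixes over the unknown ancestral state at the root
```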
|
88 |
Learning probabilistic relational models: a novel approach. / Aprendendo modelos probabilísticos relacionais: uma nova abordagem.
Mormille, Luiz Henrique Barbosa 17 August 2018 (has links)
While most statistical learning methods are designed to work with data stored in a single table, many large datasets are stored in relational database systems. Probabilistic relational models (PRMs) extend Bayesian networks by introducing relations and individuals, thus making it possible to represent information in a relational database. However, learning a PRM from relational data is a more complex task than learning a Bayesian network from "flat" data: the main difficulties are establishing which dependency structures are legal, searching the space of possible structures, and scoring them. This thesis focuses on the development of a novel approach to learning the structure of a PRM, describes a package in the R language that supports the learning framework, and applies it to a real, large-scale scenario: the city of Atibaia, in the state of São Paulo, Brazil. The research is based on a database combining three tables, each representing one class in the domain of study. The first table contains 27 attributes for 110,816 citizens of Atibaia, the second contains 9 attributes for 20,162 companies located in the city, and the third has 8 attributes for 327 census sectors (small territorial units that make up the city of Atibaia). The proposed framework is applied to learn a PRM structure and parameters from this database, and the model is used to verify whether a person's social class can be explained by the location where they live, their neighbors, and the companies nearby. Preliminary experiments were conducted and a paper was published at the 2017 Symposium on Knowledge Discovery, Mining and Learning (KDMiLe). The algorithm's performance was further evaluated through extensive experimentation, and a broader study using Serasa Experian data was conducted. Finally, the R package that supports our method was refined, along with proper documentation and a tutorial.
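One ingredient of relational structure learning that the abstract alludes to is building candidate parents through slot chains and aggregation, then scoring the resulting dependency. The sketch below shows that single step on a toy dataset; the tables, attribute names, and threshold are invented and unrelated to the Atibaia or Serasa Experian data.

```python
import math
from collections import Counter

# Sketch of one step in relational structure learning: build a candidate parent
# for person.social_class by aggregating an attribute over a slot chain
# (person -> sector -> companies in that sector), then score the dependency
# with a log-likelihood. All rows and values are invented.

people = [{"id": 1, "sector": "S1", "social_class": "B"},
          {"id": 2, "sector": "S1", "social_class": "B"},
          {"id": 3, "sector": "S2", "social_class": "D"}]
companies = [{"sector": "S1", "size": 40}, {"sector": "S1", "size": 60},
             {"sector": "S2", "size": 5}]

def aggregated_parent(person):
    """Slot chain person.sector -> companies.sector, aggregated by mean size."""
    sizes = [c["size"] for c in companies if c["sector"] == person["sector"]]
    mean = sum(sizes) / len(sizes) if sizes else 0.0
    return "large" if mean >= 20 else "small"     # discretised aggregate

def log_likelihood():
    """Score of the CPD P(social_class | aggregated parent) at its MLE."""
    joint = Counter((aggregated_parent(p), p["social_class"]) for p in people)
    parent_totals = Counter(aggregated_parent(p) for p in people)
    return sum(n * math.log(n / parent_totals[parent])
               for (parent, _), n in joint.items())

print(round(log_likelihood(), 4))   # 0.0 here: the parent predicts class perfectly
```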
|
89 |
APLICABILIDADE DE MEMÓRIA LÓGICA COMO FERRAMENTA COADJUVANTE NO DIAGNÓSTICO DAS DOENÇAS GENÉTICAS / Applicability of logical memory as a supporting tool in the diagnosis of genetic diseases
Leite Filho, Hugo Pereira 25 August 2006 (has links)
This study involved the interaction of quite distinct areas of knowledge, namely informatics, engineering and genetics, with emphasis on the methodology for building a decision-support system. Its aim was to develop a tool to assist in the diagnosis of chromosomal abnormalities, using Turner syndrome as a tutorial model. To this end, classification techniques based on decision trees, probabilistic networks (Naïve Bayes, TAN and BAN) and an MLP (multi-layer perceptron) neural network trained by error backpropagation were used. An algorithm and a tool were chosen that are capable of propagating evidence and of supporting efficient inference techniques able to combine expert knowledge with data defined in a database. We concluded that the best solution for the problem addressed in this study was the Naïve Bayes model, as it presented the highest accuracy. The ID3 decision tree, TAN and BAN models also provided solutions to the problem, but these were not as satisfactory as Naïve Bayes, and the neural network did not provide a satisfactory solution.
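For context, a categorical Naïve Bayes classifier of the kind favoured in this study can be written in a few lines. The features, values, and labels below are invented purely for illustration and are unrelated to the study's actual dataset.

```python
import math
from collections import Counter, defaultdict

# Minimal categorical Naive Bayes with Laplace smoothing.
# Features, values, and labels are invented for illustration only.

train = [({"short_stature": "yes", "webbed_neck": "yes"}, "suspected"),
         ({"short_stature": "yes", "webbed_neck": "no"},  "suspected"),
         ({"short_stature": "no",  "webbed_neck": "no"},  "other"),
         ({"short_stature": "no",  "webbed_neck": "no"},  "other")]

class_counts = Counter(label for _, label in train)
value_counts = defaultdict(Counter)           # (label, feature) -> value counts
for features, label in train:
    for name, value in features.items():
        value_counts[(label, name)][value] += 1

def predict(features, alpha=1.0, n_values=2):   # all features here are binary
    """Return the label with the highest Laplace-smoothed log-posterior."""
    scores = {}
    for label, n_label in class_counts.items():
        score = math.log(n_label / len(train))                 # log prior
        for name, value in features.items():
            counts = value_counts[(label, name)]
            score += math.log((counts[value] + alpha) /
                              (sum(counts.values()) + alpha * n_values))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict({"short_stature": "yes", "webbed_neck": "yes"}))   # -> "suspected"
```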
|
90 |
Um sistema multiagente para geração automática de uma rede Bayesiana para análise de riscos de tecnologia de informação / A multiagent system for the automatic generation of a Bayesian network for information technology risk analysis
Cruz, Anderson da 31 March 2011 (has links)
Every year, information becomes more important in the corporate world, and the use of information technology systems is increasingly common in companies across the most diverse segments. However, these systems are composed of applications that are subject to vulnerabilities which can compromise the confidentiality, integrity and availability, that is, the security, of this information. Technology vendors are constantly correcting flaws in their tools and releasing fixes so that their products become more secure, but the process of correcting a flaw takes time before the customer can update the system, and this window is often not enough to prevent a security incident, which makes workarounds necessary to reduce the risks posed by vulnerable applications. The analysis/assessment process in risk management prioritizes the actions taken to mitigate these risks. This process is arduous, involving the identification of vulnerabilities in the applications used by the company, and the number of vulnerabilities grows daily. To support information technology risk analysis, this work proposes a method for the automatic generation of a Bayesian network based on a multiagent system. The multiagent system comprises four agents: one monitors vulnerabilities in the National Vulnerability Database, one monitors the assets that make up the organization's business, one monitors incidents occurring in the organization's environment, and one gathers all of this information in order to determine a risk factor for the organization's assets, using a Bayesian network to do so. The proposed method proved effective at identifying changes in the organization's environment and at adjusting asset risk according to the various factors that influence its calculation, such as the appearance or modification of a vulnerability, changes in the organization's asset configuration database, and the identification or modification of reported security incidents in the company's environment. This result is due to the use of Bayesian networks to calculate asset risk, since they are able to take into account the causal relationships among the organization's assets.
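As a rough illustration of how a Bayesian network can propagate risk between related assets, the sketch below encodes a three-node chain with invented conditional probability tables and computes the risk of a downstream asset by enumeration. It is not the system described in the dissertation, whose probabilities would come from NVD data, the asset configuration base, and incident reports.

```python
from itertools import product

# Toy Bayesian network relating two assets through a causal dependency:
#   vuln_published -> app_server_at_risk -> database_at_risk
# All CPT numbers are invented placeholders.

p_vuln = 0.2                                 # a relevant vulnerability is published
p_app = {True: 0.6, False: 0.05}             # P(app at risk | vuln_published)
p_db = {True: 0.7, False: 0.02}              # P(db at risk | app at risk)

def joint(vuln, app, db):
    """Joint probability of one full assignment, CPT by CPT."""
    p = p_vuln if vuln else 1 - p_vuln
    p *= p_app[vuln] if app else 1 - p_app[vuln]
    p *= p_db[app] if db else 1 - p_db[app]
    return p

def prob_db_at_risk(vuln_observed):
    """P(database_at_risk | vuln_published = vuln_observed) by enumeration."""
    num = sum(joint(vuln_observed, app, True) for app in (True, False))
    den = sum(joint(vuln_observed, app, db)
              for app, db in product((True, False), repeat=2))
    return num / den

print(round(prob_db_at_risk(False), 4), round(prob_db_at_risk(True), 4))
# the database risk rises once the monitoring agent reports a new vulnerability
```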
|