61

Redes Bayesianas aplicadas à análise do risco de crédito. / Bayesian networks applied to the analysis of credit risk.

Cristiane Karcher 26 February 2009 (has links)
Credit Scoring models are used to estimate the probability that a credit applicant will default within a given period, based on the applicant's personal and financial information. In this work, Bayesian Networks (BN) are proposed as a Credit Scoring technique and their results are compared with those of Logistic Regression. The BNs evaluated were Bayesian Network Classifiers with the following structure types: Naive Bayes, Tree Augmented Naive Bayes (TAN) and General Bayesian Network (GBN). The BN structures were obtained by structure learning from a real database. Model performance was evaluated and compared through the hit rates obtained from the confusion matrix, the Kolmogorov-Smirnov statistic and the Gini coefficient. Development and validation samples were obtained by 10-fold cross-validation. The analysis of the fitted models showed that the BNs and Logistic Regression performed similarly with respect to the Kolmogorov-Smirnov statistic and the Gini coefficient. The TAN classifier was chosen as the best model because it gave the best performance in predicting bad payers and allowed an analysis of interaction effects between variables.
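A minimal sketch of how such a comparison could be scored with the Kolmogorov-Smirnov statistic and the Gini coefficient under 10-fold cross-validation (assuming scikit-learn; the synthetic dataset and the Gaussian Naive Bayes stand-in for the BN classifiers are illustrative, not the thesis's data or models):

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a real credit dataset (label 1 = defaulter).
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.8], random_state=0)

def ks_and_gini(y_true, scores):
    # KS: maximum distance between the score distributions of bad and good payers.
    ks = ks_2samp(scores[y_true == 1], scores[y_true == 0]).statistic
    gini = 2 * roc_auc_score(y_true, scores) - 1   # Gini = 2*AUC - 1
    return ks, gini

for name, model in [("Naive Bayes", GaussianNB()),
                    ("Logistic Regression", LogisticRegression(max_iter=1000))]:
    # Out-of-fold scores from 10-fold cross-validation.
    scores = cross_val_predict(model, X, y, cv=10, method="predict_proba")[:, 1]
    ks, gini = ks_and_gini(y, scores)
    print(f"{name}: KS={ks:.3f}  Gini={gini:.3f}")
```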
62

Modelagem probabilística de aspectos afetivos do aluno em um jogo educacional colaborativo / Probabilistic modeling of students' affective aspects in a collaborative educational game

Pontarolo, Edilson January 2008 (has links)
This work presents the construction of a model to infer the emotions a student feels towards other students during synchronous interaction in the context of a collaborative learning game. The emotion inference is psychologically grounded in cognitive appraisal theory, and relations between the student's goals and behavioral standards and aspects of their personality were investigated. Specifically, the OCC model of emotions and the Big-Five model of personality traits provided the theoretical basis for the modeling. The affective model represents the pride and shame the student feels in response to the cognitive appraisal of their own actions, and the admiration and reproach felt in response to the actions of their game partner, based on the appraisal of the partners' observable behavior (their interactions in the collaborative game) against the student's behavioral standards. To handle the uncertainty present in the student's affective and cognitive information, this knowledge was represented as a Bayesian Network. A partial qualitative refinement and the corresponding quantitative parameterization of the probabilistic model were carried out from the analysis of a case base obtained through experiments. To provide an experimental environment, a collaborative game was designed and prototyped in which two players join efforts to solve logic problems shared by the pair, through coordinated actions, simple negotiation and structured communication, while competing against other pairs.
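A minimal sketch of the kind of Bayesian-network inference such an affective model relies on (the variables, states and probabilities below are invented for illustration; they are not the thesis's actual network):

```python
from itertools import product

# Toy network (illustrative numbers only):
#   Success -> Pride,  Success -> PositiveMove (the observable game interaction).
p_success = {True: 0.5, False: 0.5}
p_pride   = {True: {True: 0.8, False: 0.2}, False: {True: 0.1, False: 0.9}}  # P(Pride | Success)
p_move    = {True: {True: 0.9, False: 0.1}, False: {True: 0.3, False: 0.7}}  # P(PositiveMove | Success)

def joint(success, pride, move):
    return p_success[success] * p_pride[success][pride] * p_move[success][move]

def posterior_pride(observed_move=True):
    # P(Pride = True | PositiveMove = observed_move) by enumerating the hidden Success variable.
    num = den = 0.0
    for success, pride in product([True, False], repeat=2):
        p = joint(success, pride, observed_move)
        den += p
        if pride:
            num += p
    return num / den

print(posterior_pride(True))   # pride becomes more likely after observing a positive move
```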
63

Sequential Quantum-Dot Cellular Automata Design And Analysis Using Dynamic Bayesian Networks

Venkataramani, Praveen 29 October 2008 (has links)
The increasing need for low-power, extremely fast devices in Complementary Metal Oxide Semiconductor Very Large Scale Integration (CMOS VLSI) circuits drives continued scaling of these circuits. At sub-micron and nano scales, however, quantum mechanical effects appear and limit further scaling of CMOS circuits. Researchers are therefore exploring new nano-regime technologies that could resolve this quandary. One promising nano-level technology is the quantum-dot cellular automaton (QCA). The basic operation of QCA is based on the transfer of charge rather than on the flow of electrons themselves. The wave nature of these electrons and the resulting uncertainty in device operation demand a probabilistic approach to studying their behavior. A bit is assigned to a QCA cell by positioning two electrons among four quantum dots. The sites on which the electrons settle are uncertain and depend on various factors. In the ideal state, Coulombic repulsion drives the electrons to occupy diagonally opposite dots, a low-energy configuration, and the cell is said to be polarized to +1 or -1 according to the alignment of the electrons. In this thesis, we put forth a probabilistic model for designing sequential QCA circuits using Bayesian networks. The timing constraints inherent in sequential circuits, due to their feedback paths, make it difficult to assign clock zones so that the outputs arrive at the same time instant; designing circuits with many sequential elements is therefore time consuming. The model presented here is fast and can be used to design sequential QCA circuits without the need to align clock zones. One of the major advantages of our model is its ability to accurately capture the polarization of each cell of a sequential QCA circuit. We discuss the architecture of some basic sequential circuits, such as a J-K flip-flop (FF), a RAM memory cell and the s27 benchmark circuit, designed in QCADesigner, and analyze them using state-of-the-art Dynamic Bayesian Networks (DBN). To our knowledge, this is the first time sequential circuits have been analyzed using DBNs, and the first time the Estimated Posterior Importance Sampling (EPIS) algorithm has been used to determine the probabilistic values needed to study the effect of variations in physical dimensions and operating temperature on output polarization in QCA circuits.
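An illustrative sketch of the kind of two-state calculation behind cell polarization (a generic two-level cell driven by one neighbor; the kink and tunneling energies and the Hartree-style driver coupling are assumptions made for illustration, not the thesis's model):

```python
import numpy as np

def cell_polarization(p_driver, e_kink=1.0, gamma=0.2):
    """Ground-state polarization of a two-state QCA cell driven by a neighbor
    with polarization p_driver (illustrative parameters, arbitrary energy units)."""
    # Basis: |P=+1>, |P=-1>; off-diagonal term models inter-dot tunneling.
    h = np.array([[-0.5 * e_kink * p_driver, -gamma],
                  [-gamma, 0.5 * e_kink * p_driver]])
    energies, states = np.linalg.eigh(h)
    ground = states[:, 0]                    # eigenvector of the lowest eigenvalue
    return ground[0] ** 2 - ground[1] ** 2   # <P> = prob(+1) - prob(-1)

for p in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(f"driver {p:+.1f} -> cell {cell_polarization(p):+.3f}")
```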
64

Causal learning techniques using multi-omics data for carcass and meat quality traits in Nelore cattle /

Bresolin, Tiago. January 2019 (has links)
Advisor: Lucia Galvão de Albuquerque / Abstract: Quantitative trait records and genotype information are collected for each animal and used to identify genome regions associated with phenotypic variation. However, these investigations are usually based on correlation or association tests, which do not imply causation. To explore this information more fully, powerful causal inference methods have been developed to estimate causal effects among the variables under study. Despite significant progress in this field, inferring causal effects among continuous random variables remains a challenge, and few studies have explored causal relationships in quantitative genetics and animal breeding. In this context, two studies were carried out with the following objectives: 1) to search for causal relationships among carcass and meat quality traits using a structural equation model (SEM) under a linear mixed model in Nelore cattle, and 2) to reconstruct gene-phenotype networks and perform causal network analysis by integrating phenotypic, genotypic and transcriptomic data in Nelore cattle. For the first study, a total of 4,479 animals with phenotypic records for hot carcass weight (HCW), longissimus muscle area (LMA), backfat thickness (BF), Warner-Bratzler shear force (WBSF) and marbling score (MB) were used. Animals were genotyped with the BovineHD BeadChip and GeneSeek Genomic Profiler Indicus HD - GGP75Ki panels. For causal inference using SEM, a multistep procedure was used as follows: ... (Complete abstract: click electronic access below) / Doctorate
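A minimal illustration of the idea behind a recursive structural equation model (fixed effects only; the trait names, simulated data and causal coefficient are invented for the example and are not the thesis's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulate an exogenous covariate x and a causal path carcass -> backfat.
x = rng.normal(size=n)
carcass = 1.0 + 0.5 * x + rng.normal(scale=0.5, size=n)                    # y1 = b10 + b11*x + e1
backfat = 0.3 + 0.8 * carcass + 0.2 * x + rng.normal(scale=0.5, size=n)    # y2 = b20 + lambda*y1 + b21*x + e2

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# A recursive SEM can be estimated equation by equation because the
# structure is acyclic and the residuals are independent in this toy setup.
b1 = ols(np.column_stack([np.ones(n), x]), carcass)
b2 = ols(np.column_stack([np.ones(n), carcass, x]), backfat)

print("structural coefficient carcass -> backfat:", round(b2[1], 3))  # close to the simulated 0.8
```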
65

Assessing the use of voting methods to improve Bayesian network structure learning

Abu-Hakmeh, Khaldoon Emad 27 August 2012 (has links)
Structure inference in learning Bayesian networks remains an active area of machine learning research because of the breadth of its applications across numerous disciplines. As newer algorithms emerge to better handle the task of inferring network structures from observational data, network and experiment sizes heavily affect their performance. Especially difficult is the task of accurately learning large networks from a limited number of observations, as often encountered in biological experiments. This study evaluates the performance of several leading structure learning algorithms on large networks. The selected algorithms serve as a committee that votes on the final network structure. The result is a more selective final network that contains fewer false positives, at the cost of a reduced ability to detect all network features.
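A minimal sketch of edge-level majority voting over the outputs of several structure learning runs (the edge sets below are placeholders standing in for the structures learned by the committee members):

```python
from collections import Counter

# Directed edges proposed by three (hypothetical) structure learning algorithms.
candidate_structures = [
    {("A", "B"), ("B", "C"), ("A", "D")},   # e.g. a score-based search
    {("A", "B"), ("B", "C"), ("C", "D")},   # e.g. a constraint-based method
    {("A", "B"), ("A", "D"), ("B", "D")},   # e.g. a hybrid method
]

def vote_edges(structures, min_votes=2):
    # Keep only edges proposed by at least `min_votes` committee members.
    counts = Counter(edge for s in structures for edge in s)
    return {edge for edge, c in counts.items() if c >= min_votes}

consensus = vote_edges(candidate_structures)
print(sorted(consensus))   # edges with majority support: fewer false positives, possibly missed edges
```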
66

Situation Assessment in a Stochastic Environment using Bayesian Networks / Situationsuppfattning med Bayesianska nätverk i en stokastisk omgivning.

Ivansson, Johan January 2002 (has links)
The mental workload of fighter pilots in modern air combat is extremely high. The pilot has to make fast, dynamic decisions under great uncertainty and time pressure. This is hard enough in close encounters, but it becomes even harder beyond visual range, where the aircraft's sensors become the pilot's eyes and ears. Although sensors provide good estimates of an opponent's position and speed, much is lost in the assessment of the overall situation. Important tactical events or situations can occur without the pilot noticing, which can completely change the outcome of a mission. This makes the development of an automated situation assessment system very important for future fighter aircraft. This Master's thesis investigates the possibility of designing and implementing an automated situation assessment system in a fighter aircraft. A fuzzy-Bayesian hybrid technique is used to cope with the stochastic environment and to keep the development of the tactical situation library as clear and simple as possible.
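A minimal sketch of one way a fuzzy front end can feed a probabilistic assessment (the membership breakpoints, linguistic terms and conditional probabilities below are invented for illustration and are not the thesis's model):

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_distance(km):
    # Memberships for linguistic distance terms; breakpoints are illustrative.
    m = {"close": triangular(km, -1, 0, 40),
         "medium": triangular(km, 20, 60, 100),
         "far": triangular(km, 80, 150, 151)}
    total = sum(m.values()) or 1.0
    return {k: v / total for k, v in m.items()}   # normalize to use as soft evidence

# Illustrative conditional probabilities P(hostile intent | distance term).
p_hostile_given = {"close": 0.7, "medium": 0.4, "far": 0.1}

def p_hostile(km):
    # Mix the conditional probabilities with the fuzzy memberships (soft evidence).
    weights = fuzzify_distance(km)
    return sum(weights[t] * p_hostile_given[t] for t in weights)

print(p_hostile(30.0))
```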
67

A Bayesian Framework for Software Regression Testing

Mir arabbaygi, Siavash January 2008 (has links)
Software maintenance reportedly accounts for much of the total cost associated with developing software. These costs occur because modifying software is a highly error-prone task. Changing software to correct faults or add new functionality can cause existing functionality to regress, introducing new faults. To avoid such defects, one can re-test software after modifications, a task commonly known as regression testing. Regression testing typically involves the re-execution of test cases developed for previous versions. Re-running all existing test cases, however, is often costly and sometimes even infeasible due to time and resource constraints. Re-running test cases that do not exercise changed or change-impacted parts of the program carries extra cost and gives no benefit. The research community has thus sought ways to optimize regression testing by lowering the cost of test re-execution while preserving its effectiveness. To this end, researchers have proposed selecting a subset of test cases according to a variety of criteria (test case selection) and reordering test cases for execution to maximize a score function (test case prioritization). This dissertation presents a novel framework for optimizing regression testing activities, based on a probabilistic view of regression testing. The proposed framework is built around predicting the probability that each test case finds faults in the regression testing phase, and optimizing the test suites accordingly. To predict such probabilities, we model regression testing using a Bayesian Network (BN), a powerful probabilistic tool for modeling uncertainty in systems. We build this model using information measured directly from the software system. Our proposed framework builds upon the existing research in this area in many ways. First, our framework incorporates different information extracted from software into one model, which helps reduce uncertainty by using more of the available information, and enables better modeling of the system. Moreover, our framework provides flexibility by enabling a choice of which sources of information to use. Research in software measurement has shown that dealing with different systems requires different techniques, and hence requires such flexibility. Using the proposed framework, engineers can customize their regression testing techniques to fit the characteristics of their systems using the measurements most appropriate to their environment. We evaluate the performance of our proposed BN-based framework empirically. Although the framework can help both test case selection and prioritization, we propose using it primarily as a prioritization technique. We therefore compare our technique against other prioritization techniques from the literature. Our empirical evaluation examines a variety of objects and fault types. The results show that the proposed framework can outperform other techniques in some cases and performs comparably in the others. In sum, this thesis introduces a novel Bayesian framework for optimizing regression testing and shows that the proposed framework can help testers improve the cost effectiveness of their regression testing tasks.
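A minimal sketch of the prioritization idea: score each test case with a probability of detecting a fault and order the suite by that score (the test names and probabilities here are placeholders; in the thesis such probabilities come from a Bayesian Network built from software measurements):

```python
# Toy fault-detection probabilities per test case; in practice these would be
# posterior probabilities queried from the Bayesian Network model.
fault_probability = {
    "test_login": 0.05,
    "test_checkout": 0.40,   # exercises recently changed code
    "test_search": 0.15,
    "test_profile": 0.02,
}

def prioritize(tests):
    # Run the tests most likely to reveal a regression first.
    return sorted(tests, key=fault_probability.get, reverse=True)

budget = 2  # e.g. only two test executions fit in the time budget
ordered = prioritize(fault_probability)
print("run order:", ordered)
print("selected under budget:", ordered[:budget])
```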
69

Exploiting Structure in Backtracking Algorithms for Propositional and Probabilistic Reasoning

Li, Wei January 2010 (has links)
Boolean propositional satisfiability (SAT) and probabilistic reasoning represent two core problems in AI. Backtracking-based algorithms have been applied to both problems. In this thesis, I investigate structure-based techniques for solving real-world SAT and Bayesian network problems, such as software testing and medical diagnosis instances. When solving a SAT instance using backtracking search, a sequence of decisions must be made as to which variable to branch on or instantiate next. Real-world problems are often amenable to a divide-and-conquer strategy where the original instance is decomposed into independent sub-problems. Existing decomposition techniques are based on pre-processing the static structure of the original problem. I propose a dynamic decomposition method based on hypergraph separators. Integrating this dynamic separator decomposition into the variable ordering of a modern SAT solver leads to speedups on large real-world SAT problems. Encoding a Bayesian network into a CNF formula and then performing weighted model counting is an effective method for exact probabilistic inference. I present two encodings for improving this approach with noisy-OR and noisy-MAX relations. In our experiments, our new encodings are more space efficient and can speed up the previous best approaches by over two orders of magnitude. The ability to solve similar problems incrementally is critical for many probabilistic reasoning problems. My aim is to exploit the similarity of these instances by forwarding structural knowledge learned during the analysis of one instance to the next instance in the sequence. I propose dynamic model counting and extend the dynamic decomposition and caching technique to multiple runs on a series of problems with similar structure. This allows us to perform Bayesian inference incrementally as the evidence, parameters, and structure of the network change. Experimental results show that my approach yields significant improvements over previous model counting approaches on multiple challenging Bayesian network instances.
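A minimal sketch of weighted model counting over a tiny CNF by brute-force enumeration (the formula and literal weights are invented for illustration; real encodings of Bayesian networks, and the decomposition and caching that make counting scale, are far more involved):

```python
from itertools import product

# CNF over variables 1..3: (x1 or not x2) and (x2 or x3), literals as signed ints.
cnf = [[1, -2], [2, 3]]
# Weight of each literal; in a Bayesian network encoding these come from the CPTs.
weight = {1: 0.7, -1: 0.3, 2: 0.5, -2: 0.5, 3: 0.2, -3: 0.8}

def weighted_model_count(cnf, n_vars):
    total = 0.0
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        # A model must satisfy at least one literal in every clause.
        if all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in cnf):
            w = 1.0
            for var, val in assignment.items():
                w *= weight[var if val else -var]
            total += w
    return total

print(weighted_model_count(cnf, 3))
```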
70

Online Learning of Non-Stationary Networks, with Application to Financial Data

Hongo, Yasunori January 2012 (has links)
In this paper, we propose a new learning algorithm for non-stationary Dynamic Bayesian Networks (DBNs). Although a number of effective learning algorithms for non-stationary DBNs have previously been proposed and applied in signal processing and computational biology, those algorithms are batch learning algorithms and cannot be applied to online time-series data. We therefore propose a learning algorithm based on a particle filtering approach, so that it can be applied to online time-series data. To evaluate our algorithm, we apply it to a simulated data set and a real-world financial data set. The results on the simulated data set show that our algorithm estimates accurately and detects changes. The results on the real-world financial data set reveal several features that have been suggested in previous research, which also supports the effectiveness of our algorithm. / Thesis
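A minimal sketch of the particle filtering idea for tracking a quantity that changes over time (a one-dimensional toy state-space model with an abrupt shift; the noise levels and particle count are illustrative and this is not the thesis's algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_particles = 200, 500

# Toy non-stationary signal: the latent mean shifts abruptly at t = 100.
true_mean = np.where(np.arange(T) < 100, 0.0, 3.0)
observations = true_mean + rng.normal(scale=1.0, size=T)

particles = rng.normal(scale=2.0, size=n_particles)   # initial belief over the latent mean
estimates = []
for y in observations:
    particles += rng.normal(scale=0.3, size=n_particles)    # random-walk transition
    log_w = -0.5 * (y - particles) ** 2                      # Gaussian observation log-likelihood
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)     # resample in proportion to the weights
    particles = particles[idx]
    estimates.append(particles.mean())

print("estimate before shift:", round(np.mean(estimates[80:100]), 2))
print("estimate after shift: ", round(np.mean(estimates[180:]), 2))
```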
