71

Measuring inconsistency in probabilistic knowledge bases / Medindo inconsistência em bases de conhecimento probabilístico

Glauber De Bona 22 January 2016 (has links)
In standard probabilistic reasoning, performing inference from a knowledge base normally requires guaranteeing the consistency of that base. When we come across an inconsistent set of probabilistic assessments, it interests us to know where the inconsistency is, how severe it is, and how to correct it. Inconsistency measures have recently been put forward as a tool to address these issues in the Artificial Intelligence community. This work investigates the problem of measuring inconsistency in probabilistic knowledge bases. Basic rationality postulates have driven the formulation of inconsistency measures within classical propositional logic. In the probabilistic case, the quantitative character of probabilities yields an extra desirable property: inconsistency measures should be continuous. To meet this requirement, inconsistency in probabilistic knowledge bases has been measured via distance minimisation. In this thesis, we prove that the continuity postulate is incompatible with basic desirable properties inherited from classical logic. Since minimal inconsistent sets are the basis for some of these desiderata, we look for more suitable ways of localising the inconsistency in probabilistic logic, while analysing the underlying consolidation processes. The AGM theory of belief revision is extended to encompass consolidation via probability adjustment. The new ways of characterising inconsistency that we propose are employed to weaken some postulates, restoring the compatibility of the whole set of desirable properties. Investigations in Bayesian statistics and formal epistemology have been concerned with measuring an agent's degree of incoherence. In these fields, probabilities are usually construed as an agent's degrees of belief, which determine her gambling behaviour. Incoherent agents hold inconsistent degrees of belief, which expose them to disadvantageous bet transactions, also known as Dutch books. Statisticians and philosophers suggest measuring an agent's incoherence through the guaranteed loss she is vulnerable to. We prove that these incoherence measures via Dutch books are equivalent to the inconsistency measures via distance minimisation from the AI community.
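The central notion of measuring inconsistency by distance minimisation can be made concrete with a small linear program. The sketch below is our own illustration, not code from the thesis: it takes a deliberately inconsistent set of probabilistic assessments and computes the minimal total L1 adjustment of the assessed values needed to make them coherent, i.e. satisfiable by some probability distribution over the valuations. The base, the numbers and the choice of L1 distance are all assumptions made for the example.

```python
# Illustrative sketch only (not code from the thesis): measuring the inconsistency
# of a tiny probabilistic knowledge base by distance minimisation.
# Assessments: P(a) = 0.7, P(b) = 0.8, P(a & b) = 0.1, which are jointly
# incoherent because coherence requires P(a & b) >= P(a) + P(b) - 1 = 0.5.
# The measure computed here is the minimal total L1 adjustment of the assessed
# values needed to reach a coherent assessment.
import itertools
import numpy as np
from scipy.optimize import linprog

worlds = list(itertools.product([0, 1], repeat=2))           # valuations of (a, b)
formulas = [lambda a, b: a, lambda a, b: b, lambda a, b: a and b]
assessed = np.array([0.7, 0.8, 0.1])

n_w, n_f = len(worlds), len(formulas)
# Variables: pi over the 4 worlds, then slacks e_i >= |q_i - assessed_i|,
# where q_i is the probability the distribution pi gives to formula i.
c = np.concatenate([np.zeros(n_w), np.ones(n_f)])            # minimise the total slack
A_ub, b_ub = [], []
for i, f in enumerate(formulas):
    row = np.array([1.0 if f(*w) else 0.0 for w in worlds])  # indicator of formula i
    e = np.zeros(n_f); e[i] = 1.0
    A_ub.append(np.concatenate([row, -e]));  b_ub.append(assessed[i])    #  q_i - e_i <= p_i
    A_ub.append(np.concatenate([-row, -e])); b_ub.append(-assessed[i])   # -q_i - e_i <= -p_i
A_eq = [np.concatenate([np.ones(n_w), np.zeros(n_f)])]       # pi must sum to 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n_w + n_f))
print("inconsistency (minimal L1 adjustment):", round(res.fun, 3))       # about 0.4
```

For this base the minimum is 0.4 (for instance, raising P(a & b) from 0.1 to 0.5 restores coherence), and a consistent base gets measure 0, matching the continuous, distance-based behaviour the abstract refers to.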
72

Abdução clássica e abdução probabilística: a busca pela explicação de dados reais / Classic and probabilistic abduction: the search for the explanation of real data

Alexandre Matos Arruda 16 April 2014 (has links)
The search for explanations of facts or phenomena is something that has always permeated human reasoning. Since antiquity, human beings have observed facts and, according to them and their present knowledge, created hypotheses that might explain them. A classic example is a medical consultation in which the doctor, after checking all the symptoms, discovers the disease and the means to treat it. This construction of explanations, given a set of evidence that points to them, is called \textit{abduction}. Traditional abduction for classical logic assumes that the goal datum is not derivable from the knowledge base, that is, given a knowledge base $\Gamma$ and a goal datum $A$ we have $\Gamma \not\vdash A$. Classical abduction methods aim to generate a new datum $H$ such that, together with the knowledge base $\Gamma$, we can infer $A$ ($\Gamma \cup H \vdash A$). Some traditional methods use analytic tableaux (see \cite) to generate the formula $H$. Here we deal with cut-based abduction, via KE-tableaux, which does not need to assume that the goal datum is not derivable from the knowledge base, and also with probabilistic logic, rediscovered by Nilsson (see \cite), in which probabilities are assigned to formulas. A probabilistic-logic instance is consistent if there is a consistent probability distribution over the valuations; determining such a distribution is what we call the PSAT problem. The aim of this work is to define and establish what an abduction in Probabilistic Logic (abduction for PSAT) is and, moreover, to provide abduction methods for PSAT: given a PSAT instance $\langle \Gamma, \Psi \rangle$ in atomic normal form \cite and a formula $A$ such that there is a probability distribution $\pi$ satisfying $\langle \Gamma, \Psi \rangle$ with $\pi(A) = 0$, each method is able to generate a formula $H$ such that $\langle \Gamma \cup H, \Psi \rangle \mathrel{|\!\approx} A$, where $\pi(A) > 0$ for every distribution $\pi$ satisfying $\langle \Gamma \cup H, \Psi \rangle$. We also demonstrate that some of the presented methods are correct and complete in generating formulas $H$ that satisfy the abduction conditions.
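Since the abstract defines PSAT consistency as the existence of a probability distribution over the valuations, a minimal feasibility check makes the idea concrete. The sketch below is only an illustration with invented formulas and numbers, not one of the thesis's abduction methods: it encodes a three-constraint PSAT instance over two atoms as a linear program and asks whether any distribution satisfies it.

```python
# Illustrative sketch only (not one of the thesis's abduction methods): deciding
# a tiny PSAT instance by linear programming. We ask whether the assessments
# P(a) = 0.7, P(a -> b) = 0.9 and P(b) = 0.8 admit a probability distribution pi
# over the four valuations of {a, b}. The formulas and numbers are invented.
import itertools
from scipy.optimize import linprog

worlds = list(itertools.product([False, True], repeat=2))   # valuations of (a, b)
constraints = [
    (lambda a, b: a,            0.7),   # P(a)      = 0.7
    (lambda a, b: (not a) or b, 0.9),   # P(a -> b) = 0.9
    (lambda a, b: b,            0.8),   # P(b)      = 0.8
]

A_eq = [[1.0] * len(worlds)]                                 # pi sums to 1
A_eq += [[1.0 if f(*w) else 0.0 for w in worlds] for f, _ in constraints]
b_eq = [1.0] + [p for _, p in constraints]

# Pure feasibility: any non-negative pi meeting the equalities witnesses consistency.
res = linprog(c=[0.0] * len(worlds), A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(worlds))
print("PSAT-consistent:", res.success)
if res.success:
    for w, p in zip(worlds, res.x):
        print(w, round(p, 3))
```

The abduction problem defined in the abstract sits one level above this check: it asks for a formula $H$ such that every distribution satisfying the extended instance assigns $A$ positive probability.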
73

Probabilistic Models for Spatially Aggregated Data / 空間集約データのための確率モデル

Tanaka, Yusuke 23 March 2020 (has links)
Kyoto University / 0048 / New system, course doctorate / Doctor of Informatics / 甲第22586号 / 情博第723号 / 新制||情||124 (University Library) / Kyoto University Graduate School of Informatics, Department of Systems Science / (Chief examiner) Prof. Toshiyuki Tanaka, Prof. Shin Ishii, Prof. Hidetoshi Shimodaira / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
74

Computer-Aided Synthesis of Probabilistic Models

Andriushchenko, Roman January 2020 (has links)
This thesis deals with the problem of automated synthesis of probabilistic systems: given a family of Markov chains, how can we efficiently identify the one that satisfies a given specification? Such families often arise in various areas of engineering when modelling systems under uncertainty, and deciding even the simplest synthesis questions is an NP-hard problem. In this work we examine existing techniques based on counterexample-guided inductive synthesis (CEGIS) and counterexample-guided abstraction refinement (CEGAR), and we propose a new integrated method for probabilistic synthesis. Experiments on relevant models demonstrate that the proposed technique is not only competitive with state-of-the-art methods, but in most cases is able to significantly outperform existing approaches, sometimes by several orders of magnitude.
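For readers unfamiliar with the setting, the toy sketch below (our own, not the CEGIS/CEGAR integration developed in the thesis) shows the naive baseline such methods improve upon: enumerating a small family of Markov chains and checking each member against a reachability specification by solving the standard linear equations. The chain structure, the parameters and the threshold are invented for illustration.

```python
# Toy illustration only (not the CEGIS/CEGAR integration proposed in the thesis):
# the naive baseline of enumerating a small family of Markov chains and checking
# each member against a reachability specification P(reach goal) >= 0.8.
import numpy as np

def reach_goal_probability(p: float, q: float) -> float:
    """Probability of eventually reaching 'goal' from s0 in a chain where
    s0 -> s1 with prob. p (else fail), s1 -> goal with prob. q (else back to s0);
    goal and fail are absorbing. Solved from the linear system x = A x + b."""
    A = np.array([[0.0, p],
                  [1.0 - q, 0.0]])      # transitions among the transient states s0, s1
    b = np.array([0.0, q])              # one-step probability of hitting goal
    x = np.linalg.solve(np.eye(2) - A, b)
    return float(x[0])

family = [(p, q) for p in (0.6, 0.8, 0.95) for q in (0.7, 0.9)]
for p, q in family:
    prob = reach_goal_probability(p, q)
    verdict = "satisfies the spec" if prob >= 0.8 else "violates the spec"
    print(f"p={p}, q={q}: P(reach goal) = {prob:.3f}  ->  {verdict}")
```

The contribution of the thesis is precisely avoiding this member-by-member enumeration, using counterexamples and abstraction to prune large parts of the family at once.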
75

Real-time probabilistic reasoning system using Lambda architecture

Anikwue, Arinze January 2019 (has links)
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2019 / The proliferation of data from sources such as social media and sensor devices has become overwhelming for traditional data storage and analysis technologies to handle. This has prompted radical improvements in data management techniques, tools and technologies to meet the increasing demand for the effective collection, storage and curation of large data sets. Most of these technologies are open-source. Big data is usually described as very large datasets; however, a major feature of big data is its velocity. Data flows in as a continuous stream and must be acted on in real time to yield meaningful, relevant value. Although there is an explosion of technologies to handle big data, they are usually targeted at processing large (historic) datasets and real-time big data independently, hence the need for a unified framework that handles both. This has resulted in the development of models such as the Lambda architecture. Effective decision-making requires processing of historic data as well as real-time data. Some decision-making involves complex processes, depending on the likelihood of events. To handle uncertainty, probabilistic systems were designed. Probabilistic systems use probabilistic models built on probability theory, such as hidden Markov models, together with inference algorithms to process data and produce probabilistic scores. However, the development of these models requires extensive knowledge of statistics and machine learning, making it an uphill task to model real-life circumstances. A new research area called probabilistic programming has been introduced to alleviate this bottleneck. This research proposes the combination of modern open-source big data technologies with probabilistic programming and the Lambda architecture, on readily available hardware, to develop a highly fault-tolerant and scalable processing tool that processes both historic and real-time big data in real time: a common solution. This system will empower decision-makers with the capacity to make better-informed decisions, especially in the face of uncertainty. The outcome of this research is a technology product, built and assessed using experimental evaluation methods. The research utilises the Design Science Research (DSR) methodology, as it provides guidelines for the effective and rigorous construction and evaluation of an artefact. Probabilistic programming in the big data domain is still in its infancy; however, the developed artefact demonstrates the important potential of probabilistic programming combined with the Lambda architecture in the processing of big data.
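As a rough illustration of the combination described above, the sketch below (our own, not the artefact built in this research) mimics the Lambda pattern with a toy probabilistic model: a batch view holds aggregates computed over historic data, a speed layer folds in streaming events as they arrive, and a serving step merges the two into a single Beta-Bernoulli posterior. In a real system, frameworks such as Hadoop or Spark (batch), Kafka or Spark Streaming (speed) and a probabilistic programming library would take the place of these in-memory stand-ins; all numbers here are invented.

```python
# Conceptual sketch only (not the thesis artefact): merging a batch view over
# historic data with a speed-layer update over streaming data, for a toy
# Beta-Bernoulli estimate of an event probability.
from dataclasses import dataclass

@dataclass
class BetaPosterior:
    alpha: float = 1.0      # prior pseudo-counts
    beta: float = 1.0

    def update(self, successes: float, failures: float) -> None:
        self.alpha += successes
        self.beta += failures

    def mean(self) -> float:
        return self.alpha / (self.alpha + self.beta)

# Batch layer: recomputed periodically from the full historic dataset.
batch_view = BetaPosterior()
batch_view.update(successes=4200, failures=5800)            # precomputed aggregates

# Speed layer: incremental updates from the stream since the last batch run.
speed_view = BetaPosterior(alpha=0.0, beta=0.0)
for event in [1, 0, 1, 1, 0]:                               # e.g. messages from a stream consumer
    speed_view.update(successes=event, failures=1 - event)

# Serving layer: merge both views to answer queries with an up-to-date estimate.
merged = BetaPosterior(alpha=batch_view.alpha + speed_view.alpha,
                       beta=batch_view.beta + speed_view.beta)
print("P(event) ≈", round(merged.mean(), 4))
```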
76

Evaluation of Stochastic Magnetic Tunnel Junctions as Building Blocks for Probabilistic Computing

Orchi Hassan (9862484) 17 December 2020 (has links)
Probabilistic computing has been proposed as an attractive alternative for bridging the computational gap between the classical computers of today and the quantum computers of tomorrow. It offers to accelerate the solution of many combinatorial optimization and machine learning problems of interest today, motivating the development of dedicated hardware. Similar to the 'bit' of classical computing or the 'q-bit' of quantum computing, the probabilistic bit or 'p-bit' serves as a fundamental building block for probabilistic hardware. p-bits are robust classical quantities, fluctuating rapidly between their two states, envisioned as three-terminal devices with a stochastic output controlled by their input. It is possible to implement fast and efficient hardware p-bits by modifying present-day magnetic random access memory (MRAM) technology. In this dissertation, we evaluate the design and performance of low-barrier magnet (LBM) based p-bit realizations.
LBMs can be realized from perpendicular magnets designed to be close to the in-plane transition or from circular in-plane magnets. Magnetic tunnel junctions (MTJs) built using these LBMs as free layers can be integrated with standard transistors to implement the three-terminal p-bit units. A crucial parameter that determines the response of these devices is the correlation time of magnetization. We show that for magnets with low energy barriers (Δ ≤ k_B T) circular disk magnets with in-plane magnetic anisotropy (IMA) can lead to correlation times on sub-ns timescales, two orders of magnitude smaller than in magnets with perpendicular magnetic anisotropy (PMA). We show that this striking difference is due to a novel precession-like fluctuation mechanism that is enabled by the large demagnetization field in mono-domain circular disk magnets. Our predictions on fast fluctuations in LBM magnets have recently received experimental confirmation as well.
We provide a detailed energy-delay performance evaluation of the stochastic MTJ (s-MTJ) based p-bit hardware. We analyze the hardware using benchmarked SPICE multi-physics modules and classify the necessary and sufficient conditions for designing them. We connect our device performance analysis to systems-level metrics by emphasizing problem- and substrate-independent figures of merit, such as flips per second and dissipated energy per flip, that can be used to classify probabilistic hardware.
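The three-terminal p-bit behaviour described above is commonly summarised by the update rule m_i = sgn(tanh(I_i) − r), with r drawn uniformly from (−1, 1). The sketch below is a purely behavioural illustration of that rule (it does not reproduce the dissertation's SPICE-based device models): two coupled p-bits are updated asynchronously and, with an antiferromagnetic coupling, the anti-parallel states dominate the sampled statistics. Coupling values, bias and sweep count are arbitrary choices for the example.

```python
# Behavioural sketch only (not the dissertation's SPICE device models):
# the p-bit update m_i = sgn(tanh(I_i) - r), r ~ U(-1, 1), sampling a
# two-p-bit network whose coupling favours anti-parallel states.
import numpy as np

rng = np.random.default_rng(0)
J = np.array([[0.0, -1.0],
              [-1.0, 0.0]])             # antiferromagnetic coupling between the two p-bits
h = np.zeros(2)                         # no bias
I0 = 2.0                                # interconnection (inverse-temperature-like) strength

m = rng.choice([-1, 1], size=2)         # random initial states
counts = {}
sweeps = 20000
for _ in range(sweeps):
    for i in rng.permutation(2):        # asynchronous, Gibbs-like p-bit updates
        I_i = I0 * (J[i] @ m + h[i])
        m[i] = 1 if np.tanh(I_i) > rng.uniform(-1.0, 1.0) else -1
    counts[tuple(m)] = counts.get(tuple(m), 0) + 1

for state, n in sorted(counts.items()):
    print(state, n / sweeps)            # (-1, +1) and (+1, -1) should dominate
```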
77

5th International Probabilistic Workshop: 28-29 November 2007, Ghent, Belgium

Taerwe, Luc, Proske, Dirk 10 December 2008 (has links)
These are the proceedings of the 5th International Probabilistic Workshop. Even though the 5th anniversary of a conference might not be of such importance, it is quite interesting to note the development of this probabilistic conference. Originally, the series started as the 1st and 2nd Dresdner Probabilistic Symposium, which were launched to present research and applications mainly dealt with at Dresden University of Technology. Since then, the conference has grown into an internationally recognised conference dealing with research on and applications of probabilistic techniques, mainly in the field of structural engineering. Other topics have also been addressed, such as ship safety and natural hazards. Whereas the first conferences in Dresden included about 12 presentations each, the conference in Ghent has attracted nearly 30 presentations. Moving from Dresden to Vienna (University of Natural Resources and Applied Life Sciences) to Berlin (Federal Institute for Material Research and Testing) and finally to Ghent, the conference has constantly evolved towards a truly international level. This can be seen in the language used. The first two conferences were held entirely in German. At the conference in Berlin, the change from German to English was especially apparent, as some presentations were given in German and others in English. Now, in Ghent, all papers will be presented in English. Participants now come not only from Europe but also from other continents. Although the conference will move back to Germany next year (2008), to Darmstadt, the international concept will remain, since so much work in the field of probabilistic safety evaluations is carried out internationally. In two years (2009) the conference will move to Delft, The Netherlands, and in 2010 it will probably be held in Szczecin, Poland. Coming back to the present: the editors wish all participants a successful conference in Ghent.
78

6th International Probabilistic Workshop - 32. Darmstädter Massivbauseminar: 26-27 November 2008 ; Darmstadt, Germany 2008 ; Technische Universität Darmstadt

Graubner, Carl-Alexander, Schmidt, Holger, Proske, Dirk 10 December 2008 (has links)
These are the proceedings of the 6th International Probabilistic Workshop, formerly known as the Dresden Probabilistic Symposium or International Probabilistic Symposium. The workshop was held twice in Dresden, then moved to Vienna, Berlin, Ghent and finally to Darmstadt in 2008. All of the conference cities feature some specialities, but Darmstadt has a very special one: element number 110, darmstadtium, is named after the city, and there are only very few cities worldwide after which a chemical element is named. The high atomic number of darmstadtium indicates that much research is still required and being carried out. This is also true for the issue of probabilistic safety concepts in engineering. Although the history of probabilistic safety concepts can be traced back nearly 90 years, a long way still remains for practical applications. This is not a disadvantage. Just as research chemists strive to discover new element properties, with the application of new probabilistic techniques we may advance the properties of structures substantially. (Excerpt from the preface)
79

Evaluation of Epistemic Uncertainties in Probabilistic Risk Assessments : Philosophical Review of Epistemic Uncertainties in Probabilistic Risk Assessment Models Applied to Nuclear Power Plants - Fukushima Daiichi Accident as a Case Study

Rawandi, Omed A. January 2020 (has links)
Safety and risk assessment are key priorities for nuclear power plants. Probabilistic risk assessment (PRA) is a method for quantitative evaluation of accident risk, in particular severe nuclear core damage and the associated release of radioactive materials into the environment. The reliability and certainty of PRA have at times been questioned, especially when real-world observations have indicated that the frequency of nuclear accidents is higher than the probabilities predicted by PRA. This thesis provides a philosophical review of the epistemic uncertainties in PRA, using the Fukushima Daiichi accident of March 2011 as a case study. The thesis provides an overview of the PRA model structure, its key elements, and possible sources of uncertainty, in an attempt to understand the deviation between the real frequency of nuclear core-melt accidents and the probabilities predicted by PRA.
The analyses in this thesis address several sources of epistemic uncertainty in PRA. Analyses of the PRA approach reveal the difficulty involved in covering all possible initiating events, all component and system failures, as well as their possible combinations in the risk evaluations. This difficulty represents the source of a characteristic epistemic uncertainty, referred to as completeness uncertainty. Analyses from the case study (the Fukushima Daiichi accident) illustrate this difficulty, as the PRA failed to identify a combined earthquake and tsunami, with the resultant flooding and consequent power failure and total blackout, as an initiating causal event in its logic structure.
The analyses further demonstrate how insufficient experience and knowledge, as well as a lack of empirical data, lead to incorrect assumptions, which are used by the model as input parameters to estimate the probabilities of accidents. With limited availability of input data, decision-makers rely upon the subjective judgements and individual experiences of experts, which adds a further source of epistemic uncertainty to the PRA, usually referred to as input parameter uncertainty. As a typical example from the case study, the Fukushima Daiichi accident revealed that the PRA had underestimated the height of a possible tsunami. Consequently, the risk mitigation systems (e.g. the barrier seawalls) built to protect the power plant were inadequate due to incorrect input data.
Poor assumptions may also result in improper modeling of failure modes and sequences in the PRA logic structure, which makes room for an additional source of epistemic uncertainty referred to as model uncertainty. For instance, the Fukushima Daiichi accident indicated insufficient backup of the power supply, because the possibility of simultaneous failure of several emergency diesel generators was assumed to be negligibly small. However, that was exactly what happened when 12 out of the 13 generators failed at the same time as a result of flooding.
Furthermore, the analyses highlight the difficulty of modeling human interventions and actions, in particular during the course of unexpected accidents, taking into account the physiological and psychological effects on the cognitive performance of humans, which result in uncertain operator interventions. This represents an additional source of epistemic uncertainty, usually referred to as uncertainty in modeling human interventions. As a result, there may be an increase in the probability of human error, characterized by a delay in making a diagnosis, formulating a response and taking action. This in itself confirms the complexity of modelling human errors. In the case of the Fukushima Daiichi accident, the lack of sufficient instructions for dealing with this "unexpected" accident made the coordination of operators' interventions almost impossible.
Given the existence of all these sources of epistemic uncertainty, it would be reasonable to expect the detected deviation between the real frequency of nuclear core-melt accidents and the probabilities predicted by PRA. It is, however, important to highlight that the occurrence of the Fukushima Daiichi accident could lie within the uncertainty distribution that the PRA model predicted prior to the accident. Hence, from the probabilistic point of view, the occurrence of a single unexpected accident should be interpreted with care, especially in political and commercial debates. Despite the limitations that have been highlighted in this thesis, the model can still provide valuable insights for the systematic examination of safety systems, risk mitigation approaches, and strategic plans aimed at protecting nuclear power plants against failures. Nevertheless, the PRA model does have potential for development, which deserves serious attention. The validity of calculated frequencies in PRA is restricted to the parameter under study. This validity can be improved by adding further relevant scenarios to the PRA, improving the screening approaches and collecting more input data through better collaboration between nuclear power plants worldwide. Lessons learned from the Fukushima Daiichi accident have initiated further studies aimed at covering additional scenarios. In subsequent IAEA safety report series, external hazards in multi-unit nuclear power plants have been considered. Such an action shows that PRA is a dynamic approach that needs continuous improvement toward better reliability.
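To make the discussion of input-parameter uncertainty concrete, here is a deliberately simplified numerical sketch with invented numbers (not taken from the thesis or from any real PRA) of a single accident sequence: the point estimate of core-damage frequency is the initiating-event frequency multiplied by the conditional failure probabilities of the successive barriers, so underestimating one epistemically uncertain input shifts the result by orders of magnitude.

```python
# Invented numbers, for illustration only (not from the thesis or any real PRA):
# a single cut-set estimate of core-damage frequency (CDF), showing how an
# underestimated input shifts the result by orders of magnitude.
def core_damage_frequency(tsunami_per_year: float,
                          p_seawall_overtopped: float,
                          p_all_diesel_generators_fail: float,
                          p_operators_fail_to_recover: float) -> float:
    """Initiating-event frequency times the conditional failure probabilities
    of the successive barriers in one accident sequence."""
    return (tsunami_per_year
            * p_seawall_overtopped
            * p_all_diesel_generators_fail
            * p_operators_fail_to_recover)

assumed = core_damage_frequency(1e-4, 0.1, 0.01, 0.5)   # design-basis assumptions
revised = core_damage_frequency(1e-3, 1.0, 1.0, 0.5)    # beyond-design-basis reality

print(f"assumed CDF ≈ {assumed:.1e} per reactor-year")  # about 5e-8
print(f"revised CDF ≈ {revised:.1e} per reactor-year")  # about 5e-4, four orders higher
```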
80

Reduced Order Modeling of Dynamic Systems for Decreasing Computational Burden in Uncertainty Quantification

Cohn, Brian E. 12 October 2018 (has links)
No description available.
