51. Functional inferences over heterogeneous data. Nuamah, Kwabena Amoako. January 2018.
Inference enables an agent to create new knowledge from old or to discover implicit relationships between concepts in a knowledge base (KB), provided that appropriate techniques are employed to deal with ambiguous, incomplete and sometimes erroneous data. The ever-increasing volume of KBs on the web, available for use by automated systems, presents an opportunity to leverage the available knowledge in order to improve the inference process in automated query answering systems. This thesis focuses on the FRANK (Functional Reasoning for Acquiring Novel Knowledge) framework, which responds to queries where no suitable answer is readily contained in any available data source, using a variety of inference operations. Most question answering and information retrieval systems assume that answers to queries are stored in some form in the KB, thereby limiting the range of answers they can find. We take an approach motivated by rich forms of inference, using techniques such as regression for prediction. For instance, FRANK can answer “What country in Europe will have the largest population in 2021?” by decomposing Europe geo-spatially, using regression on country populations for past years and selecting the country with the largest predicted value. Our technique, which we refer to as Rich Inference, combines heuristics, logic and statistical methods to infer novel answers to queries. It also determines what facts are needed for inference, searches for them, and then integrates the diverse facts and their formalisms into a local query-specific inference tree. Our primary contribution in this thesis is the inference algorithm on which FRANK is built. This includes (1) the process of recursively decomposing queries in a way that allows variables in the query to be instantiated by facts in KBs; (2) the use of aggregate functions to perform arithmetic and statistical operations (e.g.
prediction) to infer new values from child nodes; and (3) the estimation and propagation of uncertainty values into the returned answer based on errors introduced by noise in the KBs or errors introduced by aggregate functions. We also discuss many of the core concepts and modules that constitute FRANK. We explain the internal “alist” representation of FRANK that gives it the required flexibility to tackle different kinds of problems with minimal changes to its internal representation. We discuss the grammar for a simple query language that allows users to express queries in a formal way, such that we avoid the complexities of natural language queries, a problem that falls outside the scope of this thesis. We evaluate the framework with datasets from open sources.
52. Transitive inference and arbitrarily applicable comparative relations: a behaviour-analytic model of relational reasoning. Munnelly, Anita. January 2013.
The transitive inference (TI) problem (i.e., if A > B and B > C, then A > C) has traditionally been considered a hallmark of logical reasoning. However, considerable debate exists regarding the psychological processes involved when individuals perform TI tasks. The current thesis therefore sought to explore this issue further with adult humans as the population sample. Following a review of the literature, the first empirical chapter, Chapter 2, adopted a traditional TI task and exposed participants to training and testing with a simultaneous discrimination paradigm. In addition, the chapter sought to examine the potential facilitative effects of awareness and of repeated exposure to training and test phases on the emergence of TI. Results broadly demonstrated that awareness led to more accurate responses at test, and that for a number of participants, repeated exposure to training and test phases allowed the targeted performances to emerge over time. Chapter 3 developed, and determined the utility of, a novel behaviour-analytic account of TI as a form of derived comparative relational responding. For the most part, findings revealed that the model has the potential to generate arbitrarily applicable comparative responding in adults, comparable to TI. However, findings from Chapter 3 also revealed that, despite the implementation of a number of interventions, response accuracy was still weak on a number of the targeted relations. Chapter 4 developed a variant of the Relational Completion Procedure (RCP) to examine derived comparative responding to 'More-than' and 'Less-than' relations, as an extension of the behavioural account of TI adopted in Chapter 3. Findings revealed that, for the most part, the protocol was effective in establishing the targeted relations, and that the linearity (e.g., A < B, B < C) of training pairs was not found to affect the emergence of this pattern of responding.
Chapter 5 sought to explore the transformation of discriminative functions via a 5-member relational network of 'More-than' and 'Less-than' relations. Findings revealed that, across four experiments, approximately half of the participants displayed the predicted patterns of performance. That is, half of the participants responded 'less' to the stimuli ranked lower in the network (A and B) and 'more' to the stimuli ranked higher in the network (D and E), on the basis of training with stimulus C. The utility of the current behaviour-analytic approach to the study of TI is discussed.
53. A critical analysis of the thesis of the symmetry between explanation and prediction: including a case study of evolutionary theory. Lee, Robert Wai-Chung. January 1979.
One very significant characteristic of Hempel's covering-law models of scientific explanation, that is, the deductive-nomological model and the inductive-statistical model, is the supposed symmetry between explanation and prediction. In brief, the symmetry thesis asserts that explanation and prediction have the same logical structure; in other words, if an explanation of an event had been taken account of in time, then it could have served as a basis for predicting the event in question, and vice versa. The present thesis is a critical analysis of the validity of this purported symmetry between explanation and prediction.
The substance of the thesis begins with a defence against some common misconceptions of the symmetry thesis, for example, the idea that the symmetry concerns statements but not arguments. Specifically, Grünbaum's interpretation of the symmetry thesis as pertaining to the logical inferability rather than the epistemological symmetry between explanation and prediction is examined.
The first sub-thesis of the symmetry thesis, namely that "Every adequate explanation is a potential prediction," is then analyzed. Purported counterexamples such as evolutionary theory and the paresis case are critically
examined and consequently dismissed. Since there are conflicting views regarding the nature of explanation and prediction in evolutionary theory, a case study of the theory is also presented.
Next, the second sub-thesis of the symmetry thesis, namely that "Every adequate prediction is a potential explanation," is discussed. In particular, the barometer case is discharged as a counterexample to the second sub-thesis when the explanatory power of indicator laws is properly understood.
Finally, Salmon's current causal-relevance model of explanation, which claims to be an alternative to Hempel's inductive-statistical model, is critically analyzed. A modified inductive-statistical model of explanation is also proposed. This modified model retains the nomological ingredient of Hempel's original inductive-statistical model, but it is immune to criticisms raised against the latter.
In conclusion, I maintain that there is indeed a symmetry between explanation and prediction. But since deductive-nomological explanation and prediction are essentially different from inductive-statistical explanation and prediction, the form the symmetry takes between deductive-nomological explanation and prediction differs from the form it exhibits between inductive-statistical explanation and prediction. (Faculty of Arts, Department of Philosophy.)
54. Bayesian inference in parameter estimation of bioprocesses. Mathias, Nigel. January 2024.
The following thesis explores the use of Bayes’ theorem for modelling bioprocesses, specifically using a combination of data-driven modelling techniques and Bayesian inference, to address practical concerns that arise when estimating parameters. This thesis is divided into four chapters, including a novel contribution to the use of surrogate modelling and parameter estimation algorithms for noisy data.
The second chapter addresses the problem of high computational expense when estimating parameters using complex models. The main solution here is the use of surrogate modelling. This method was then applied to a high-fidelity model provided by Sartorius AG. In this, a simulated 3-batch run of the bioreactor was passed through the algorithm, and two influential parameters, the growth and death rates of the live cell cultures, were estimated.
The third chapter addresses other challenges that arise in parameter estimation problems. Specifically, the issue of having limited data on a new process can be addressed using historical data, a distinct feature of Bayesian learning. Finally, the problem of choosing the “right” model for a given process is studied through the use of a term in Bayesian inference known as the evidence. In this, the evidence is used to select between a series of models based on both model complexity and goodness-of-fit to the data. (Master of Applied Science (MASc) thesis.)
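The evidence mentioned above is the marginal likelihood p(D | M): the likelihood averaged over the prior, which automatically penalises needless model complexity. As a hedged illustration (a toy Beta-Bernoulli model, not the thesis's bioprocess models), the evidence has a closed form and can be used to compare two priors:

```python
from math import exp, lgamma

def log_beta(a, b):
    # log of the Beta function B(a, b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_evidence(heads, tails, a, b):
    # Marginal likelihood of a Bernoulli dataset under a Beta(a, b) prior:
    # p(D | M) = B(a + heads, b + tails) / B(a, b)
    return log_beta(a + heads, b + tails) - log_beta(a, b)

heads, tails = 8, 2
ev_flat = exp(log_evidence(heads, tails, 1, 1))    # vague Beta(1, 1) prior
ev_conc = exp(log_evidence(heads, tails, 10, 10))  # prior concentrated at 0.5
# The skewed data (8 heads in 10) are better explained under the vague prior.
```

The same ratio-of-evidences comparison carries over to the bioprocess setting, where the integrals are no longer analytic and must be approximated.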
55. Variational inference for Gaussian-jump processes with application in gene regulation. Ocone, Andrea. January 2013.
In the last decades, the explosion of data from quantitative techniques has revolutionised our understanding of biological processes. In this scenario, advanced statistical methods and algorithms are becoming fundamental to deciphering the dynamics of biochemical mechanisms such as those involved in the regulation of gene expression. Here we develop mechanistic models and approximate inference techniques to reverse engineer the dynamics of gene regulation from mRNA and/or protein time series data. We start from an existing variational framework for statistical inference in transcriptional networks. The framework is based on a continuous-time description of the mRNA dynamics in terms of stochastic differential equations, which are governed by latent switching variables representing the on/off activity of regulating transcription factors. The main contributions of this work are the following. We sped up the variational inference algorithm by developing a method to compute an approximate posterior distribution over the latent variables using a constrained optimisation algorithm. In addition to computational benefits, this method enabled the extension to statistical inference in networks with a combinatorial model of regulation. A limitation of this framework is that inference is possible only in transcriptional networks with a single-layer architecture (where single transcription factors, or pairs of them, directly regulate an arbitrary number of target genes). The second main contribution of this work is the extension of the inference framework to hierarchical structures, such as the feed-forward loop. In the last contribution we define a general structure for transcription-translation networks. This work is important because it provides a general statistical framework for modelling complex dynamics in gene regulatory networks.
The framework is modular and scalable to realistically large systems with general architecture, thus representing a valuable alternative to traditional differential equation models. All models are embedded in a Bayesian framework; inference is performed using a variational approach and compared to exact inference where possible. We apply the models to the study of different biological systems, from the metabolism in E. coli to the circadian clock in the picoalga O. tauri.
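A minimal sketch of the kind of model described here, assuming a single gene driven by one on/off switching variable: an Euler-Maruyama simulation of dx = (A·μ − λx) dt + σ dW, where μ is a two-state (telegraph) promoter. All parameter names and rate values are illustrative, not those used in the thesis.

```python
import random

def simulate(on_rate, off_rate, A, lam, sigma, T=50.0, dt=0.01, seed=0):
    # Euler-Maruyama simulation of mRNA level x, driven by a two-state
    # (telegraph) promoter mu in {0, 1}:  dx = (A*mu - lam*x) dt + sigma dW.
    # Parameter names and values are illustrative only.
    rng = random.Random(seed)
    mu, x, xs = 0, 0.0, []
    for _ in range(int(T / dt)):
        # Latent jump process: switch with (approximately) exponential rates.
        if mu == 0 and rng.random() < on_rate * dt:
            mu = 1
        elif mu == 1 and rng.random() < off_rate * dt:
            mu = 0
        x += (A * mu - lam * x) * dt + sigma * rng.gauss(0.0, dt ** 0.5)
        xs.append(x)
    return xs
```

Inference runs in the opposite direction: given an observed trajectory `xs`, the variational machinery recovers a posterior over the hidden switch path μ(t) and the kinetic parameters.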
56. Scalable Gaussian process inference using variational methods. Matthews, Alexander Graeme de Garis. January 2017.
Gaussian processes can be used as priors on functions. The need for a flexible, principled, probabilistic model of functional relations is common in practice. Consequently, such an approach is demonstrably useful in a large variety of applications. Two challenges of Gaussian process modelling are often encountered: the adverse scaling with the number of data points, and the lack of closed form posteriors when the likelihood is non-Gaussian. In this thesis, we study variational inference as a framework for meeting these challenges. An introductory chapter motivates the use of stochastic processes as priors, with a particular focus on Gaussian process modelling. A section on variational inference reviews the general definition of Kullback-Leibler divergence. The concept of prior conditional matching that is used throughout the thesis is contrasted with classical approaches to obtaining tractable variational approximating families. Various theoretical issues arising from the application of variational inference to the infinite-dimensional Gaussian process setting are settled decisively. From this theory we are able to give a new argument for existing approaches to variational regression that settles the debate about their applicability. This view on these methods justifies the principled extensions found in the rest of the work. The case of scalable Gaussian process classification is studied, both for its own merits and as a case study for non-Gaussian likelihoods in general. Using the resulting algorithms we find credible results on datasets of a scale and complexity that was not possible before our work. An extension to include Bayesian priors on model hyperparameters is studied alongside a new inference method that combines the benefits of variational sparsity and MCMC methods. The utility of such an approach is shown on a variety of example modelling tasks. We describe GPflow, a new Gaussian process software library that uses TensorFlow.
Implementations of the variational algorithms discussed in the rest of the thesis are included as part of the software. We discuss the benefits of GPflow when compared to other similar software. Increased computational speed is demonstrated in relevant, timed, experimental comparisons.
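The Kullback-Leibler divergence reviewed in the introductory chapter is the quantity variational inference minimises between the approximating distribution and the posterior. Between two one-dimensional Gaussians it has the closed form sketched below (an illustration only; the thesis works with divergences between infinite-dimensional process measures):

```python
from math import log

def kl_gauss(m1, s1, m2, s2):
    # KL( N(m1, s1^2) || N(m2, s2^2) ): the divergence that variational
    # inference minimises, here between one-dimensional Gaussians.
    return log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2 * s2 ** 2) - 0.5
```

The divergence is zero exactly when the two distributions coincide and is strictly positive otherwise, which is what makes it usable as a variational objective.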
57. Efficient Computation of Probabilities of Events Described by Order Statistics and Application to a Problem of Queues. Jones, Lee K.; Larson, Richard C.
Consider a set of N i.i.d. random variables in [0, 1]. When the experimental values of the random variables are arranged in ascending order from smallest to largest, one has the order statistics of the set of random variables. In this note an O(N³) algorithm is developed for computing the probability that the order statistics vector lies in a given rectangle. The new algorithm is then applied to a problem of statistical inference in queues. Illustrative computational results are included.
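The note's exact O(N³) algorithm is not reproduced here, but the quantity it computes is easy to state and to check by brute force: for the order statistics of N i.i.d. Uniform(0, 1) variables, a Monte Carlo estimate of the rectangle probability serves as a baseline.

```python
import random

def mc_rect_prob(a, b, trials=100_000, seed=1):
    # Monte Carlo estimate of P(a[i] <= X_(i) <= b[i] for all i), where
    # X_(1) <= ... <= X_(N) are the order statistics of N i.i.d.
    # Uniform(0, 1) variables.  A brute-force baseline against which an
    # exact O(N^3) algorithm could be checked.
    rng = random.Random(seed)
    n, hits = len(a), 0
    for _ in range(trials):
        xs = sorted(rng.random() for _ in range(n))
        if all(lo <= x <= hi for lo, x, hi in zip(a, xs, b)):
            hits += 1
    return hits / trials
```

Each trial costs O(N log N) for the sort, so the exact algorithm wins whenever high-precision probabilities are needed; the estimator above is useful mainly for validation.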
58. Efficient implementation of Markov chain Monte Carlo. Fan, Yanan. January 2001.
No description available.
59. Data contamination versus model deviation. Fonseca, Viviane Grunert da. January 1999.
No description available.
60. An investigation into the effects of right hemisphere brain damage on human communication. Mott, Natasha Liane. January 2001.
No description available.