271 |
Studies in the completeness and efficiency of theorem-proving by resolution / Kowalski, Robert Anthony, January 1970
Inference systems T and search strategies E for T are distinguished from proof procedures β = (T, E). The completeness of procedures is studied by studying separately the completeness of inference systems and of search strategies. Completeness proofs for resolution systems are obtained by the construction of semantic trees. These systems include minimal α-restricted binary resolution, minimal α-restricted M-clash resolution and maximal pseudo-clash resolution. Certain refinements of hyper-resolution systems with equality axioms are shown to be complete and equivalent to refinements of the paramodulation method for dealing with equality. The completeness and efficiency of search strategies for theorem-proving problems are studied in sufficient generality to include the case of search strategies for path-search problems in graphs. The notion of theorem-proving problem is defined abstractly so as to be dual to that of an and/or tree. Special attention is given to resolution problems and to search strategies which generate simpler before more complex proofs. For efficiency, a proof procedure (T, E) requires an efficient search strategy E as well as an inference system T which admits both simple proofs and relatively few redundant and irrelevant derivations. The theory of efficient proof procedures outlined here is applied to proving the increased efficiency of the usual method for deleting tautologies and subsumed clauses. Counter-examples are exhibited for both the completeness and efficiency of alternative methods for deleting subsumed clauses. The efficiency of resolution procedures is improved by replacing the single operation of resolving a clash by the two operations of generating factors of clauses and of resolving a clash of factors. Several factoring methods are investigated for completeness. Of these, the m-factoring method is shown to be always more efficient than the Wos-Robinson method.
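For readers unfamiliar with the basic inference step underlying these systems, the following sketch illustrates a single binary resolution step in the propositional (ground) case; it is an illustration only, not Kowalski's formulation, and omits the unification and factoring needed for first-order clauses.

```python
# Minimal propositional binary resolution: a clause is a frozenset of literals,
# and a literal is a (name, polarity) pair. Ground case only; first-order
# resolution would additionally require unification and factoring.

def resolve(c1, c2):
    """Return all binary resolvents of clauses c1 and c2."""
    resolvents = []
    for (name, pol) in c1:
        if (name, not pol) in c2:
            # Remove the complementary pair and merge the remaining literals.
            resolvents.append((c1 - {(name, pol)}) | (c2 - {(name, not pol)}))
    return resolvents

# Example: resolving {P, Q} with {not-P, R} yields the resolvent {Q, R}.
c1 = frozenset({("P", True), ("Q", True)})
c2 = frozenset({("P", False), ("R", True)})
print(resolve(c1, c2))  # one resolvent: frozenset containing ('Q', True) and ('R', True)
```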
|
272 |
Parallel algorithms for generalized N-body problem in high dimensions and their applications for Bayesian inference and image analysis / Xiao, Bo, 12 January 2015
In this dissertation, we explore parallel algorithms for general N-Body problems in high dimensions, and their applications in machine learning and image analysis on distributed infrastructures.
In the first part of this work, we propose and develop a set of basic tools built on top of the Message Passing Interface and OpenMP for massively parallel nearest neighbors search. In particular, we present a distributed tree structure to index data in an arbitrary number of dimensions, and a novel algorithm that eliminates the need for collective coordinate exchanges during tree construction. To the best of our knowledge, our nearest neighbors package is the first to scale to millions of cores in up to a thousand dimensions.
Based on our nearest neighbors search algorithms, we present "ASKIT", a parallel fast kernel summation tree code with a new near-far field decomposition and a new compact representation for the far field. Notably, our algorithm is kernel independent. The efficiency of the new near-far decomposition depends only on the intrinsic dimensionality of the data, and the new far field representation relies only on the rank of sub-blocks of the kernel matrix.
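As a rough illustration of the near-far idea (not the ASKIT algorithm itself, whose tree-based pruning and skeletonization are considerably more involved), the sketch below evaluates a Gaussian kernel sum exactly over a nearby block of sources and approximates the far block by a truncated SVD, so that the cost of the far field is governed by the numerical rank of that sub-block; all sizes, bandwidths, and tolerances are illustrative.

```python
# Toy near/far kernel summation u = K w: the near block is evaluated densely,
# the far block is compressed to its dominant singular vectors. Illustrative
# only; ASKIT uses a tree, neighbor-based pruning, and skeleton points.
import numpy as np

def gauss_kernel(X, Y, h=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * h * h))

rng = np.random.default_rng(0)
targets = rng.normal(size=(200, 3))
near_src = rng.normal(size=(100, 3))           # sources close to the targets
far_src = rng.normal(size=(1000, 3)) + 10.0    # a well-separated cluster of sources
w_near = rng.normal(size=100)
w_far = rng.normal(size=1000)

# Near field: exact dense evaluation.
u_near = gauss_kernel(targets, near_src) @ w_near

# Far field: low-rank approximation K_far ~ U_r diag(S_r) V_r^T.
K_far = gauss_kernel(targets, far_src, h=5.0)
U, S, Vt = np.linalg.svd(K_far, full_matrices=False)
r = max(1, int((S > 1e-8 * S[0]).sum()))       # numerical rank of the far sub-block
u_far = U[:, :r] @ (S[:r] * (Vt[:r] @ w_far))  # cost O(r(n+m)) instead of O(nm)

approx = u_near + u_far
exact = u_near + K_far @ w_far
print("far-field rank:", r)
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```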
In the second part, we develop a Bayesian inference framework and a variational formulation for MAP estimation of the label field in medical image segmentation. In particular, we propose new representations for both the likelihood and the prior probability functions, as well as methods for their fast calculation. A parallel, matrix-free optimization algorithm is then given to solve the MAP estimation problem. Our new prior function is suitable for a wide range of spatial inverse problems.
Experimental results show that our framework is robust to noise, variations in shape, and artifacts.
|
273 |
Measuring the causal effect of air temperature on violent crime / Söderdahl, Fabian; Hammarström, Karl, January 2015
This thesis applies the causal framework of potential outcomes to examine the causal effect of air temperature on reported violent crimes in Swedish municipalities. The Generalized Estimating Equations (GEE) method was applied to yearly, monthly, and July-only data for the period 2002-2014. One significant causal effect was established, but the majority of the results pointed to there being no causal effect of air temperature on reported violent crimes.
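A minimal sketch of this kind of GEE fit, using statsmodels on a hypothetical municipality-by-month panel; the file name, column names, Poisson family, and exchangeable working correlation are assumptions for illustration, not details taken from the thesis.

```python
# Hypothetical GEE fit: reported violent crimes regressed on mean air
# temperature, with observations clustered by municipality. Column names
# and model choices are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm

# Hypothetical panel: one row per municipality and month, with columns
# municipality, year, month, mean_temp, violent_crimes.
df = pd.read_csv("crime_temperature_panel.csv")

model = sm.GEE.from_formula(
    "violent_crimes ~ mean_temp + C(month) + C(year)",
    groups="municipality",
    data=df,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
print("estimated temperature coefficient:", result.params["mean_temp"])
```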
|
274 |
Message Passing Algorithms for Facility Location Problems / Lazic, Nevena, 09 June 2011
Discrete location analysis is one of the most widely studied branches of operations research, whose applications arise in a wide variety of settings. This thesis describes a powerful new approach to facility location problems - that of message passing inference in probabilistic graphical models. Using this framework, we develop new heuristic algorithms, as well as a new approximation algorithm for a particular problem type.
In machine learning applications, facility location can be seen as a discrete formulation of clustering and mixture modeling problems. We apply the developed algorithms to such problems in computer vision. In particular, we tackle the problem of motion segmentation in video sequences by formulating it as a facility location instance, and demonstrate the advantages of message passing algorithms over current segmentation methods.
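For a flavour of exemplar-based clustering by message passing, the sketch below runs affinity propagation, a standard max-sum message-passing algorithm closely related to uncapacitated facility location, on synthetic 2D data; this is a generic illustration, not the algorithms developed in the thesis, and the "preference" parameter plays the role of a negative facility opening cost.

```python
# Exemplar-based clustering via affinity propagation: points exchange
# "responsibility" and "availability" messages until a set of exemplars
# (open facilities) emerges. Generic illustration, not the thesis algorithms.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2))
               for c in ([0, 0], [3, 3], [0, 4])])

# preference acts like a negative facility opening cost:
# more negative values open fewer facilities (exemplars).
ap = AffinityPropagation(damping=0.9, preference=-50, random_state=0).fit(X)
print("open facilities (exemplars):", len(ap.cluster_centers_indices_))
print("assignments of the first 10 points:", ap.labels_[:10])
```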
|
275 |
Bayesian Inference for Stochastic Volatility Models / Men, Zhongxian, January 2012
Stochastic volatility (SV) models provide a natural framework for representing time series of financial asset returns. As a result, they have become increasingly popular in the finance literature, although they have also been applied in other fields such as signal processing, telecommunications, engineering, and biology.

In working with SV models, an important issue arises as to how to estimate their parameters efficiently and how to assess how well they fit real data. In the literature, commonly used estimation methods for SV models include the generalized method of moments, simulated maximum likelihood, quasi-maximum likelihood, and Markov Chain Monte Carlo (MCMC) methods. Among these approaches, MCMC methods are the most flexible in dealing with the complicated structure of the models. However, due to the difficulty of selecting the proposal distribution for Metropolis-Hastings methods, they are in general not easy to implement, and in some cases convergence problems may also be encountered at the implementation stage. In light of these concerns, we propose in this thesis new estimation methods for univariate and multivariate SV models. In the simulation of the latent states of heavy-tailed SV models, we recommend the slice sampler algorithm as the main tool for sampling the proposal distribution when the Metropolis-Hastings method is applied. For SV models without heavy tails, a simple Metropolis-Hastings method is developed for simulating the latent states. Since the slice sampler can adapt to the analytical structure of the underlying density, it is more efficient: a sample point can be obtained from the target distribution with a few iterations of the sampler, whereas in the original Metropolis-Hastings method many sampled values often need to be discarded.

In the analysis of multivariate time series, multivariate SV models with more general specifications have been proposed to capture the correlations between the innovations of the asset returns and those of the latent volatility processes. Due to restrictions on the variance-covariance matrix of the innovation vectors, estimation of the multivariate SV (MSV) model is challenging. To tackle this issue, for a very general MSV specification we propose a straightforward MCMC method in which a Metropolis-Hastings step is employed to sample the constrained variance-covariance matrix, with an inverse Wishart distribution as the proposal. Again, the log-volatilities of the asset returns can then be simulated via a single-move slice sampler.

Recently, factor SV models have been proposed to extract hidden market changes. Geweke and Zhou (1996) propose a factor SV model based on factor analysis to measure pricing errors in the context of arbitrage pricing theory by letting the factors follow the univariate standard normal distribution. Modifications of this model have been proposed, among others, by Pitt and Shephard (1999a) and Jacquier et al. (1999). The main feature of these factor SV models is that the factors follow univariate SV processes, and the loading matrix is a lower triangular matrix with unit entries on the main diagonal. Although factor SV models have been successful in practice, it has been recognized that the order of the components may affect the sample likelihood and the selection of the factors. Therefore, in applications, the component order has to be considered carefully; for instance, the factor SV model should be fitted to several permutations of the data to check whether the ordering affects the estimation results. In this thesis, a new factor SV model is proposed. Instead of setting the loading matrix to be lower triangular, we set it to be column-orthogonal and assume that each column has unit length. Our method removes the permutation problem, since the model does not need to be refitted when the order is changed. Because a strong assumption is imposed on the loading matrix, the estimation appears even harder than for the previous factor models; for example, we have to sample columns of the loading matrix while keeping them orthonormal. To tackle this issue, we use the Metropolis-Hastings method to sample the loading matrix one column at a time, while the orthonormality between the columns is maintained using the technique proposed by Hoff (2007): a von Mises-Fisher distribution is sampled, and the generated vector is accepted or rejected through the Metropolis-Hastings algorithm.

Simulation studies and applications to real data are conducted to examine our inference methods and test the fit of our models. Empirical evidence illustrates that our slice sampler within MCMC methods works well in terms of parameter estimation and volatility forecasting. Examples using financial asset return data are provided to demonstrate that the proposed factor SV model is able to characterize the hidden market factors that mainly govern the financial time series. Kolmogorov-Smirnov tests conducted on the estimated models indicate that the models do a reasonable job of describing real data.
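As a concrete illustration of the single-variable slice sampler used inside these MCMC schemes, here is a generic sketch following the stepping-out and shrinkage procedure of Neal (2003); it samples a heavy-tailed toy target rather than the thesis's actual full conditionals for the latent log-volatilities.

```python
# Univariate slice sampler with stepping-out and shrinkage (Neal, 2003).
# Generic sketch; in the SV setting, log_f would be the full conditional of a
# latent log-volatility given its neighbours and the observed return.
import numpy as np

def slice_sample(x0, log_f, w=1.0, max_steps=50, rng=np.random.default_rng()):
    """Draw one sample from the density proportional to exp(log_f), starting at x0."""
    log_y = log_f(x0) + np.log(rng.uniform())   # vertical level defining the slice
    left = x0 - w * rng.uniform()               # randomly position an initial interval
    right = left + w
    steps = max_steps
    while steps > 0 and log_f(left) > log_y:    # step out to the left
        left -= w
        steps -= 1
    steps = max_steps
    while steps > 0 and log_f(right) > log_y:   # step out to the right
        right += w
        steps -= 1
    while True:                                 # shrink until a point is accepted
        x1 = rng.uniform(left, right)
        if log_f(x1) > log_y:
            return x1
        if x1 < x0:
            left = x1
        else:
            right = x1

# Example: sample from a heavy-tailed Student-t(3) target.
log_f = lambda x: -2.0 * np.log1p(x * x / 3.0)
x, draws = 0.0, []
for _ in range(5000):
    x = slice_sample(x, log_f)
    draws.append(x)
print("sample mean and sd:", np.mean(draws), np.std(draws))
```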
|
276 |
Syllogistic inferencing in brain injured subjects / Droge, Janet, January 1987
No description available.
|
277 |
Decision Strategies: Something Old, Something New, and Something Borrowed / Kerimi, Neda, January 2011
In this thesis, some old decision strategies are investigated and a new one that furthers our understanding of how decisions are made is introduced. Three studies are presented. In Studies I and II, strategies are investigated in terms of inferences, and in Study III, strategies are investigated in terms of preferences. Inferences refer to decisions regarding facts, e.g., whether a patient has a heart disease or not. Preferences refer to decision makers’ personal preferences between different choice alternatives, e.g., which of many flats to choose. In all three studies, both non-compensatory and compensatory strategies were investigated. In non-compensatory strategies, a high value on one attribute cannot compensate for a low value on another, while in compensatory strategies such compensation is possible. Results from Study I showed that both compensatory (logistic regression) and non-compensatory (fast and frugal) strategies make inferences equally well, but the logistic regression strategies are more frugal (i.e., use fewer cues) than the fast and frugal strategies. Study II showed that these results were independent of the degree of expertise. The good inferential ability of both non-compensatory and compensatory strategies suggests there might be room for a strategy that combines the strengths of the two. Study III introduces such a strategy, the Concordant-ranks (CR) strategy. Results from Study III showed that choices and attractiveness evaluations followed this new strategy. The strategy dictates a choice of the alternative whose attribute values are ranked concordantly with the attribute weights when the alternatives are about equally attractive. CR also serves as a proxy for finding the alternative with the shortest distance to an ideal. The CR strategy combines the computational simplicity of non-compensatory strategies with the superior information-integration ability of compensatory strategies. / At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 1: Submitted. Paper 2: Submitted.
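One plausible reading of the CR strategy, offered purely as an illustration (the exact operationalization in the thesis may differ): rank the attribute weights, rank each alternative's attribute values, and among roughly equally attractive alternatives choose the one whose value ranks agree most with the weight ranks.

```python
# Illustrative Concordant-ranks (CR) sketch: pick the alternative whose
# attribute-value ranking is most concordant with the attribute-weight ranking.
# The weights and alternatives below are hypothetical.
import numpy as np
from scipy.stats import kendalltau

weights = np.array([0.4, 0.3, 0.2, 0.1])   # hypothetical attribute weights
alternatives = np.array([                  # rows: alternatives, columns: attributes
    [0.9, 0.6, 0.5, 0.2],
    [0.5, 0.6, 0.8, 0.9],
    [0.7, 0.7, 0.6, 0.4],
])

# Kendall's tau between the weight ranking and each alternative's value ranking.
concordance = [kendalltau(weights, alt)[0] for alt in alternatives]
best = int(np.argmax(concordance))
print("rank concordance per alternative:", np.round(concordance, 2))
print("CR choice: alternative", best)
```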
|
278 |
Valid estimation and prediction inference in analysis of a computer model / Nagy, Béla, 11 1900
Computer models or simulators are becoming increasingly common in many fields of science and engineering, powered by the phenomenal growth in computer hardware over the past decades. Many of these simulators implement a particular mathematical model as a deterministic computer code, meaning that running the simulator again with the same input gives the same output.

Often running the code involves computationally expensive tasks, such as numerically solving complex systems of partial differential equations. When simulator runs become too long, their usefulness is limited. In order to overcome time or budget constraints by making the most of limited computational resources, a statistical methodology has been proposed, known as the "Design and Analysis of Computer Experiments". The main idea is to run the expensive simulator only at relatively few, carefully chosen design points in the input space, and, based on the outputs, to construct an emulator (statistical model) that can emulate (predict) the output at new, untried locations at a fraction of the cost. This approach is useful provided that we can measure how much the predictions of the cheap emulator deviate from the real response surface of the original computer model.

One way to quantify emulator error is to construct pointwise prediction bands designed to envelope the response surface, and to assert that the true response (simulator output) is enclosed by these envelopes with a certain probability. Of course, to be able to make such probabilistic statements, one needs to introduce some kind of randomness. A common strategy, used here, is to model the computer code as a random function, also known as a Gaussian stochastic process. We concern ourselves with smooth response surfaces and use the Gaussian covariance function, which is ideal when the response function is infinitely differentiable.

In this thesis, we propose Fast Bayesian Inference (FBI), which is both computationally efficient and can be implemented as a black box. Simulation results show that it can achieve remarkably accurate prediction uncertainty assessments in terms of matching coverage probabilities of the prediction bands, and the associated reparameterizations can also help parameter uncertainty assessments.
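A minimal sketch of the emulation setup described above: standard Gaussian-process regression with a Gaussian (RBF) covariance fitted at a handful of design points, with pointwise bands at untried inputs. This illustrates the emulator and its prediction bands in general, not the thesis's Fast Bayesian Inference procedure; the toy simulator and all settings are assumptions.

```python
# Emulating a cheap stand-in for an expensive deterministic simulator with a GP:
# fit at a few design points, then predict with pointwise ~95% bands elsewhere.
# Standard GP regression sketch, not the thesis's FBI method.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator(x):                                # stand-in for an expensive computer code
    return np.sin(3 * x) + 0.5 * x

X_design = np.linspace(0, 3, 8).reshape(-1, 1)   # a few carefully chosen design points
y = simulator(X_design).ravel()

kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-10, normalize_y=True)
gp.fit(X_design, y)

X_new = np.linspace(0, 3, 200).reshape(-1, 1)    # untried input locations
mean, sd = gp.predict(X_new, return_std=True)
lower, upper = mean - 1.96 * sd, mean + 1.96 * sd
truth = simulator(X_new).ravel()
coverage = np.mean((truth >= lower) & (truth <= upper))
print("empirical coverage of the pointwise 95% band:", round(float(coverage), 3))
```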
|
279 |
Bayesian model of axon guidance / Duncan Mortimer, Unknown Date
An important mechanism during nervous system development is the guidance of axons by chemical gradients. The structure responsible for responding to chemical cues in the embryonic environment is the axonal growth cone -- a structure combining sensory and motor functions to direct axon growth. In this thesis, we develop a series of mathematical models for the gradient-based guidance of axonal growth cones, based on the idea that growth cones might be optimised for such a task. In particular, we study axon guidance within the framework of Bayesian decision theory, an approach that has recently proved very successful in understanding higher-level sensory processing problems. We build our models up in complexity, beginning with a one-dimensional array of chemoreceptors simply trying to decide whether an external gradient points to the right or to the left. Even with this highly simplified model, we can obtain a good fit of theory to experiment. Furthermore, we find that the information a growth cone can obtain about the locations of its receptors has a strong influence on the functional dependence of gradient sensing performance on average concentration. We find that the shape of the sensitivity curve is robust to changes in the precise inference strategy used to determine gradient detection, and depends only on the information the growth cone can obtain about the locations of its receptors. We then consider the optimal distribution of guidance cues for guidance over long range, and find that the same upper limit on guidance distance is reached regardless of whether only bound or only unbound receptors signal. We also discuss how information from multiple cues ought to be combined for optimal guidance. In chapters 5 and 6, we extend our model to two dimensions, and to explicitly include temporal dynamics. The two-dimensional case yields results essentially equivalent to the one-dimensional model. In contrast, explicitly including temporal dynamics in our model leads to some significant departures from the one-dimensional and two-dimensional models, depending on the timescales over which various processes operate. Overall, we suggest that decision theory, in addition to providing a useful normative approach to studying growth cone chemotaxis, might provide a framework for understanding some of the biochemical pathways involved in growth cone chemotaxis, and in the chemotaxis of other eukaryotic cells.
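To make the decision-theoretic framing concrete, here is a toy version of the left-versus-right decision for a one-dimensional receptor array, written under simple assumptions of my own (Bernoulli receptor binding, a shallow linear gradient, a flat prior); it is not the thesis's model, only an illustration of how the posterior direction probability follows from a likelihood ratio.

```python
# Toy Bayesian left-vs-right gradient decision for a 1D array of receptors.
# Binding at each receptor is Bernoulli with probability c/(c + Kd), where the
# local concentration c depends on the gradient direction. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_receptors = 100
x = np.linspace(-1, 1, n_receptors)      # receptor positions across the growth cone
c0, grad, Kd = 1.0, 0.05, 1.0            # mean concentration, fractional gradient, affinity

def binding_prob(direction):
    c = c0 * (1 + direction * grad * x)  # direction = +1 (right) or -1 (left)
    return c / (c + Kd)

# Simulate receptor binding with the true gradient pointing to the right.
bound = rng.uniform(size=n_receptors) < binding_prob(+1)

def log_lik(direction):
    p = binding_prob(direction)
    return np.sum(np.where(bound, np.log(p), np.log(1 - p)))

# With a flat prior over the two directions, posterior odds = likelihood ratio.
log_odds = log_lik(+1) - log_lik(-1)
p_right = 1.0 / (1.0 + np.exp(-log_odds))
print("posterior probability that the gradient points right:", round(float(p_right), 3))
```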
|
280 |
Security and privacy model for association databases / Kong, Yibing, January 2003
Thesis (M.Comp.Sc.)--University of Wollongong, 2003. Typescript. Bibliographical references: leaves 93-96.
|