1

Comparison of Sampling-Based Algorithms for Multisensor Distributed Target Tracking

Nguyen, Trang 16 May 2003 (has links)
Nonlinear filtering is central to estimation, since most real-world problems are nonlinear. Considerable progress in nonlinear filtering theory has recently been made in the area of sampling-based methods, including random (Monte Carlo) and deterministic (quasi-Monte Carlo) sampling and their combination. This work considers the problem of tracking a maneuvering target in a multisensor environment. A novel scheme for distributed tracking is employed that utilizes a nonlinear target model and estimates from local (sensor-based) estimators. The resulting estimation problem is highly nonlinear and thus quite challenging. To evaluate the performance of the architecture considered, advanced sampling-based nonlinear filters are implemented: the particle filter (PF), the unscented Kalman filter (UKF), and the unscented particle filter (UPF). Results from extensive Monte Carlo simulations using different configurations of these algorithms are obtained to compare their effectiveness for the distributed target tracking problem.
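As background for the filters compared in this abstract, the sketch below shows a generic bootstrap particle filter. It is a minimal illustration, not the thesis's implementation; the Gaussian noise model, the function names f and h, and all parameters are assumptions for the example.

```python
import numpy as np

def bootstrap_particle_filter(measurements, f, h, q_std, r_std, particles, rng=None):
    """Generic bootstrap particle filter (sequential importance resampling).

    measurements : (T, m) array of sensor measurements
    f, h         : vectorized state-transition and measurement functions
    q_std, r_std : process- and measurement-noise std devs (Gaussian assumed)
    particles    : (n, d) initial particle cloud
    Returns the (T, d) sequence of posterior-mean state estimates.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    estimates = []
    for y in measurements:
        # Propagate every particle through the nonlinear dynamics.
        particles = f(particles) + q_std * rng.standard_normal(particles.shape)
        # Weight particles by the Gaussian measurement likelihood.
        resid = y - h(particles)
        logw = -0.5 * np.sum((resid / r_std) ** 2, axis=1)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(w @ particles)
        # Multinomial resampling to counteract weight degeneracy.
        idx = rng.choice(len(particles), size=len(particles), p=w)
        particles = particles[idx]
    return np.array(estimates)
```

The UKF replaces the particle cloud with deterministically chosen sigma points, and the UPF uses a UKF to construct the particle proposal; both drop into the same predict-weight-resample loop structure.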
2

Data contamination versus model deviation

Fonseca, Viviane Grunert da January 1999 (has links)
No description available.
3

Approximation methods and inference for stochastic biochemical kinetics

Schnoerr, David Benjamin January 2016 (has links)
Recent experiments have shown the fundamental role that random fluctuations play in many chemical systems in living cells, such as gene regulatory networks. Mathematical models are thus indispensable for describing such systems and extracting relevant biological information from experimental data. Recent decades have seen considerable modelling effort devoted to this task. However, current methodologies still present outstanding mathematical and computational hurdles. In particular, models which retain the discrete nature of particle numbers necessarily incur severe computational overheads, greatly complicating the tasks of statistically characterising the noise in cells and inferring parameters from data.

In this thesis we study analytical approximations and inference methods for stochastic reaction dynamics. The chemical master equation is the accepted description of stochastic chemical reaction networks whenever spatial effects can be ignored. Unfortunately, for most systems no analytic solutions are known and stochastic simulations are computationally expensive, making analytic approximations appealing alternatives. In the case where spatial effects cannot be ignored, such systems are typically modelled by means of stochastic reaction-diffusion processes. As in the non-spatial case, an analytic treatment is rarely possible and simulations quickly become infeasible. In particular, the calibration of models to data constitutes a fundamental unsolved problem.

In the first part of this thesis we study two approximation methods for the chemical master equation: the chemical Langevin equation and moment closure approximations. The chemical Langevin equation approximates the discrete-valued process described by the chemical master equation by a continuous diffusion process. Despite being frequently used in the literature, it remains unclear how the boundary conditions behave under this transition from discrete to continuous variables. We show that this boundary problem renders the chemical Langevin equation mathematically ill-defined when posed in real space, owing to the occurrence of square roots of negative expressions. We show that this problem can be avoided by extending the state space from real to complex variables. We prove that this approach gives rise to real-valued moments and thus admits a probabilistic interpretation. Numerical examples demonstrate that the resulting complex chemical Langevin equation is more accurate than various real-valued implementations proposed in the literature.

Moment closure approximations aim at directly approximating the moments of a process, rather than its distribution. The chemical master equation gives rise to an infinite system of ordinary differential equations for the moments of a process. Moment closure approximations close this infinite hierarchy of equations by expressing moments above a certain order in terms of lower-order moments. This is an ad hoc approximation without any systematic justification, and the question arises whether the resulting equations always lead to physically meaningful results. We find that this is indeed not always the case. Rather, moment closure approximations may give rise to diverging time trajectories or otherwise unphysical behaviour, such as negative mean values or unphysical oscillations. In these cases they fail to admit a probabilistic interpretation, and care is needed when using them in order not to draw wrong conclusions.
In the second part of this work we consider systems where spatial effects have to be taken into account. In general, such stochastic reaction-diffusion processes are only defined in an algorithmic sense, without any analytic description, and it is hence not even conceptually clear how to define likelihoods of experimental data for such processes. Calibrating such models to experimental data thus constitutes a highly non-trivial task. We derive a novel inference method by establishing a basic relationship between stochastic reaction-diffusion processes and spatio-temporal Cox processes, two classes of models that had until now been considered distinct from each other. This novel connection makes it possible to compute approximate likelihoods and thus to perform inference for stochastic reaction-diffusion processes. The accuracy and efficiency of this approach are demonstrated by means of several examples. Overall, this thesis advances the state of the art of modelling methods for stochastic reaction systems. It deepens the understanding of several existing methods by elucidating their fundamental limitations, and it develops several novel approximation and inference methods.
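To make the boundary problem of the chemical Langevin equation concrete, here is a minimal Euler-Maruyama sketch for a birth-death system (rate constants and step size are illustrative assumptions). The clamp on the death propensity papers over exactly the square-root-of-a-negative issue analysed in the abstract; the thesis's complex-valued formulation avoids the need for such ad hoc fixes.

```python
import numpy as np

def cle_birth_death(k_birth=10.0, k_death=0.1, x0=50.0, dt=0.01, n_steps=5000, rng=None):
    """Euler-Maruyama simulation of the chemical Langevin equation for the
    birth-death system  0 -> X  (rate k_birth),  X -> 0  (rate k_death * x).

    The propensities a1 = k_birth and a2 = k_death * x enter under square
    roots; once x dips below zero, sqrt(a2) is undefined in real space
    (the ill-definedness discussed above). Clamping is a common ad hoc fix.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        a1 = k_birth
        a2 = max(k_death * x[t], 0.0)  # ad hoc clamp to keep the sqrt real
        drift = a1 - a2
        noise = (np.sqrt(a1 * dt) * rng.standard_normal()
                 - np.sqrt(a2 * dt) * rng.standard_normal())
        x[t + 1] = x[t] + drift * dt + noise
    return x
```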
4

Estimation and the Stress-Strength Model

Brownstein, Naomi 01 January 2007 (has links)
The paper considers statistical inference for R = P(X < Y) in the case when both X and Y have generalized gamma distributions. The maximum likelihood estimators for R are developed in the case when either all three parameters of the generalized gamma distributions are unknown or when the shape parameters are known. In addition, objective Bayes estimators based on noninformative priors are constructed when the shape parameters are known. Finally, the uniform minimum variance unbiased estimators (UMVUE) are derived in the case when only the scale parameters are unknown.
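A brute-force Monte Carlo estimate of R = P(X < Y) is a useful sanity check against such closed-form estimators. The sketch below uses SciPy's generalized gamma distribution; the parameter values are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.stats import gengamma

def mc_stress_strength(a_x, c_x, scale_x, a_y, c_y, scale_y, n=200_000, seed=0):
    """Monte Carlo estimate of R = P(X < Y) for generalized gamma X and Y."""
    rng = np.random.default_rng(seed)
    x = gengamma.rvs(a_x, c_x, scale=scale_x, size=n, random_state=rng)
    y = gengamma.rvs(a_y, c_y, scale=scale_y, size=n, random_state=rng)
    return np.mean(x < y)

# Example with arbitrary illustrative parameters.
print(mc_stress_strength(2.0, 1.5, 1.0, 3.0, 1.5, 1.0))
```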
5

BALLWORLD: A FRAMEWORK FOR LEARNING STATISTICAL INFERENCE AND STREAM PROCESSING

Ravali, Yeluri January 2017 (has links)
No description available.
6

Efficient Computation of Probabilities of Events Described by Order Statistics and Application to a Problem of Queues

Jones, Lee K., Larson, Richard C., 1943- 05 1900 (has links)
Consider a set of N i.i.d. random variables in [0, 1]. When the experimental values of the random variables are arranged in ascending order, one has the order statistics of the set of random variables. In this note an O(N³) algorithm is developed for computing the probability that the order statistics vector lies in a given rectangle. The new algorithm is then applied to a problem of statistical inference in queues. Illustrative computational results are included.
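The note's O(N³) recursion itself is not reproduced here, but the target quantity is easy to sketch as a Monte Carlo check (the bounds below are illustrative assumptions):

```python
import numpy as np

def mc_order_stat_rectangle(a, b, n_samples=200_000, seed=0):
    """Monte Carlo estimate of P(a_i <= X_(i) <= b_i for all i), where
    X_(1) <= ... <= X_(N) are the order statistics of N i.i.d. U[0, 1]
    variables. A brute-force check for any exact O(N^3) computation."""
    rng = np.random.default_rng(seed)
    n = len(a)
    x = np.sort(rng.random((n_samples, n)), axis=1)  # sorted rows = order stats
    inside = np.all((x >= a) & (x <= b), axis=1)
    return inside.mean()

# Example: N = 3 with illustrative bounds.
a = np.array([0.0, 0.2, 0.5])
b = np.array([0.4, 0.7, 1.0])
print(mc_order_stat_rectangle(a, b))
```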
7

Complex Feature Recognition: A Bayesian Approach for Learning to Recognize Objects

Viola, Paul 01 November 1996 (has links)
We have developed a new Bayesian framework for visual object recognition which is based on the insight that images of objects can be modeled as a conjunction of local features. This framework can be used to derive both an object recognition algorithm and an algorithm for learning the features themselves. The overall approach, called complex feature recognition or CFR, is unique for several reasons: it is broadly applicable to a wide range of object types, it makes constructing object models easy, it is capable of identifying either the class or the identity of an object, and it is computationally efficient, requiring time proportional to the size of the image. Instead of a single simple feature such as an edge, CFR uses a large set of complex features that are learned from experience with model objects. The response of a single complex feature contains much more class information than does a single edge. This significantly reduces the number of possible correspondences between the model and the image. In addition, CFR takes advantage of a type of image processing called 'oriented energy'. Oriented energy is used to efficiently pre-process the image to eliminate some of the difficulties associated with changes in lighting and pose.
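As a rough sketch of the 'oriented energy' idea, a common construction sums the squared responses of an even/odd quadrature filter pair, giving a measure that is insensitive to local phase. The Gabor-style filters and parameters below are assumptions for illustration; the report's actual filters may differ.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_pair(theta, freq=0.2, sigma=3.0, size=15):
    """Even/odd (cosine/sine) quadrature Gabor pair at orientation theta."""
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xx * np.cos(theta) + yy * np.sin(theta)  # coordinate along theta
    env = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * xr), env * np.sin(2 * np.pi * freq * xr)

def oriented_energy(image, theta):
    """Sum of squared quadrature responses: phase-invariant oriented energy."""
    even, odd = gabor_pair(theta)
    return convolve(image, even) ** 2 + convolve(image, odd) ** 2
```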
8

Essays on Efficiency Analysis

Asava-Vallobh, Norabajra 2009 May 1900 (has links)
This dissertation consists of four essays investigating efficiency analysis, especially when non-discretionary inputs exist. It provides a new multi-stage Data Envelopment Analysis (DEA) approach for non-discretionary inputs, a discussion of statistical inference, and applications. In the first essay, I propose a multi-stage DEA model to address the non-discretionary input issue and provide a simulation analysis that illustrates the implementation and potential advantages of the new approach relative to the leading existing multi-stage models for non-discretionary inputs, such as Ruggiero's 1998 model and Fried, Lovell, Schmidt, and Yaisawarng's 2002 model. The simulation results also suggest that the constant returns to scale assumption is preferable when observations are of similar size, while variable returns to scale may be more appropriate when their scales differ. In the second essay, I comment on Simar and Wilson's 2007 work. My simulation evidence shows that traditional statistical inference does not underperform the bootstrap procedure proposed by Simar and Wilson. Moreover, my results show that the truncated model they recommend does not outperform the tobit model in terms of statistical inference. Therefore, the traditional t-test and the tobit model should continue to be considered applicable tools for a multi-stage DEA model with non-discretionary inputs, despite contrary claims by Simar and Wilson. The third essay applies my new approach to data from Texas school districts. The results suggest that a lagged variable (e.g. students' performance in the previous year), which has been used in the literature, may not play an important role in determining efficiency scores. This implies that one may not need access to panel data on individual scores to study school efficiency. My final essay applies a standard DEA model and the Malmquist productivity index to commercial banks in Thailand in order to compare their efficiency and productivity before and after Thailand's Financial Sector Master Plan (FSMP), implemented in 2004.
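For readers unfamiliar with DEA, the sketch below solves the standard input-oriented, constant-returns (CCR) envelopment program for one decision-making unit with SciPy's linear-programming routine. It is a textbook baseline, not the essays' multi-stage model, and the toy data are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, j0):
    """Input-oriented CCR (constant-returns) DEA efficiency of DMU j0.

    X : (n_dmu, n_inputs) input matrix;  Y : (n_dmu, n_outputs) output matrix.
    Solves  min theta  s.t.  X.T @ lam <= theta * X[j0],  Y.T @ lam >= Y[j0],
    lam >= 0, over the decision vector [theta, lam_1, ..., lam_n].
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                    # minimize theta
    A_in = np.hstack([-X[j0][:, None], X.T])       # sum lam_j x_ij - theta x_i0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])    # -sum lam_j y_rj <= -y_r0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

# Toy example (invented data): 4 DMUs, 2 inputs, 1 unit output each.
X = np.array([[2.0, 3.0], [4.0, 2.0], [4.0, 6.0], [5.0, 4.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print([round(dea_ccr_efficiency(X, Y, j), 3) for j in range(4)])
```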
9

Compression and Classification of Imagery

Tabesh, Ali January 2006 (has links)
Problems at the intersection of compression and statistical inference recur frequently due to the concurrent use of signal and image compression and classification algorithms in many applications. This dissertation addresses two such problems: statistical inference on compressed data, and rate allocation for joint compression and classification.

Features of the JPEG2000 standard make possible the development of computationally efficient algorithms to achieve such a goal for imagery compressed using this standard. We propose the use of the information content (IC) of wavelet subbands, defined as the number of bytes that the JPEG2000 encoder spends to compress the subbands, for content analysis. Applying statistical learning frameworks for detection and classification, we present experimental results for compressed-domain texture image classification and cut detection in video. Our results indicate that reasonable performance can be achieved, while saving computational and bandwidth resources. IC features can also be used for preliminary analysis in the compressed domain to identify candidates for further analysis in the decompressed domain.

In many applications of image compression, the compressed image is to be presented to human observers and statistical decision-making systems. In such applications, the fidelity criterion with respect to which the image is compressed must be selected to strike an appropriate compromise between the (possibly conflicting) image quality criteria for the human and machine observers. We present tractable distortion measures based on the Bhattacharyya distance (BD) and a new upper bound on the quantized probability of error that make possible closed-form expressions for rate allocation to image subbands, and show their efficacy in maintaining the aforementioned balance between compression and classification. The new bound offers two advantages over the BD in that it yields closed-form solutions for rate allocation in problems involving correlated sources and more than two classes.
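For reference, the Bhattacharyya distance has a standard closed form for Gaussian class-conditional models, sketched below; the dissertation's subband statistics and its new bound may of course be modelled differently.

```python
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians:

        BD = 1/8 (mu1-mu2)^T S^{-1} (mu1-mu2)
             + 1/2 ln( det S / sqrt(det cov1 * det cov2) ),  S = (cov1+cov2)/2

    The classical bound on two-class error is P_e <= sqrt(p1*p2) * exp(-BD).
    """
    s = 0.5 * (cov1 + cov2)
    d = mu1 - mu2
    mahalanobis = 0.125 * d @ np.linalg.solve(s, d)
    log_det = 0.5 * np.log(np.linalg.det(s) /
                           np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return mahalanobis + log_det

# Illustrative two-class example.
print(bhattacharyya_gaussian(np.array([0.0, 0.0]), np.eye(2),
                             np.array([1.0, 1.0]), 2 * np.eye(2)))
```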
10

Elevers statistiska slutsatser : En kvalitativ studie som undersöker hur elever utan formell statistisk träning drar slutsatser utifrån statistiska data / Pupils' statistical conclusions : A qualitative study which investigates how students without formal statistical training draw conclusions based on statistical data

Abrahamsson, Gustav January 2022 (has links)
This study examines how students without formal statistical training draw conclusions based on statistical data. It considers how pupils draw conclusions from existing data, how they reason within statistics, and which aspects they may have difficulty perceiving. To investigate this, the following research question was used: How do Swedish students without formal statistical training express informal statistical inferences? This study is based on informal statistical inference, that is, drawing conclusions from already existing data about what will happen in the next step. Data were collected through focus-group interviews with students in year 5. The collected data were compiled, analyzed and compared with previous research. The results showed that students have difficulty drawing conclusions from the existing data because they do not see them in a larger context.
