11

Data Assimilation for Spatial Temporal Simulations Using Localized Particle Filtering

Long, Yuan, 15 December 2016
As sensor data becomes more and more available, there is an increasing interest in assimilating real-time sensor data into spatial temporal simulations to achieve more accurate simulation or prediction results. Particle Filters (PFs), also known as Sequential Monte Carlo methods, hold great promise in this area as they use Bayesian inference and stochastic sampling techniques to recursively estimate the states of dynamic systems from given observations. However, PFs face major challenges in working effectively for complex spatial temporal simulations due to the high-dimensional state space of the simulation models, which typically cover large areas and have a large number of spatially dependent state variables. As the state space dimension increases, the number of particles must increase exponentially in order to converge to the true system state. The purpose of this dissertation work is to develop localized particle filtering to support PF-based data assimilation for large-scale spatial temporal simulations. We develop a spatially dependent particle-filtering framework that breaks the system state and observation data into sub-regions and then carries out localized particle filtering based on these spatial regions. The framework exploits the spatial locality of system state and observation data, and employs the divide-and-conquer principle to reduce state dimension and data complexity. Within this framework, we propose a two-level automated spatial partitioning method that provides optimized and balanced spatial partitions with fewer boundary sensors. We also consider different types of data to effectively support data assimilation for spatial temporal simulations. These data include both hard data, which are measurements from physical devices, and soft data, which are information from messages, reports, and social networks. The developed framework and methods are applied to large-scale wildfire spread simulations and achieve improved results. Finally, we compare the proposed framework to existing particle-filtering-based data assimilation frameworks and evaluate the performance of each.
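The localized scheme described above builds on the standard bootstrap (sequential importance resampling) particle filter, run per spatial sub-region rather than over the full state. As a point of reference, here is a minimal generic sketch in Python/NumPy; the one-dimensional random-walk demo and the `transition`/`likelihood` callables are illustrative assumptions, not the wildfire model from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(observations, n_particles, init, transition, likelihood):
    """Generic bootstrap (SIR) particle filter: propagate, weight, resample.
    A localized variant runs one such filter per spatial sub-region."""
    particles = init(n_particles)
    means = []
    for z in observations:
        particles = transition(particles)            # predict through the model
        w = likelihood(z, particles)                 # weight by the observation
        w /= w.sum()
        keep = rng.choice(n_particles, n_particles, p=w)
        particles = particles[keep]                  # resample
        means.append(particles.mean())
    return np.array(means)

# Toy demo: scalar random-walk state observed in Gaussian noise.
truth = np.cumsum(rng.normal(0.0, 1.0, 100))
obs = truth + rng.normal(0.0, 2.0, 100)
est = bootstrap_pf(
    obs, 500,
    init=lambda n: rng.normal(0.0, 5.0, n),
    transition=lambda p: p + rng.normal(0.0, 1.0, p.shape),
    likelihood=lambda z, p: np.exp(-0.5 * ((z - p) / 2.0) ** 2),
)
```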
12

Probabilistic Models for Unmanned Aerial Vehicle Localization Tested on Real Data

Figura, Juraj, January 2014
The thesis addresses the dynamic state estimation problem in robotics, particularly for unmanned aerial vehicles (UAVs). Based on data collected from a UAV, we design several probabilistic models for estimating its state (mainly speed and rotation angles), including configurations where one of the sensors is not available. We use the Kalman filter and the particle filter, and focus on learning the model parameters with the EM algorithm. The EM algorithm is then adjusted to account for the non-Gaussian error densities of some sensors, and modified with model-complexity penalization terms for better generalization. We implement these methods in the MATLAB environment and evaluate them on separate datasets. We also analyze data from a ground robot and use our particle filter implementation to estimate its position.
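For the linear-Gaussian configurations in such a model, the Kalman filter gives a closed-form predict/update cycle. Below is a minimal generic sketch; the matrices A, Q, H, R are placeholders rather than the thesis's learned UAV model. In an EM setting, the E-step would run this filter (plus a smoother) over the data, and the M-step would re-estimate the model matrices from the smoothed moments.

```python
import numpy as np

def kalman_step(x, P, A, Q, H, R, z):
    """One Kalman cycle for x_t = A x_{t-1} + w_t,  z_t = H x_t + v_t."""
    x_pred = A @ x                         # predicted state mean
    P_pred = A @ P @ A.T + Q               # predicted state covariance
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)  # correct with the measurement
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```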
13

Nonparametric Message Passing Methods for Cooperative Localization and Tracking

Savic, Vladimir, January 2012
The objective of this thesis is the development of cooperative localization and tracking algorithms using nonparametric message passing techniques. Unlike the best-known techniques, the goal is to estimate the posterior probability density function (PDF) of the position of each sensor. This problem can be solved using a Bayesian approach, but it is intractable in the general case. Nevertheless, a particle-based approximation (via nonparametric representation) and an appropriate factorization of the joint PDFs (using message passing methods) make the Bayesian approach tractable for inference in sensor networks. The well-known method for this problem, nonparametric belief propagation (NBP), can lead to inaccurate beliefs and possible non-convergence in loopy networks. We therefore propose four novel algorithms that alleviate these problems: nonparametric generalized belief propagation (NGBP) based on junction trees (NGBP-JT), NGBP based on pseudo-junction trees (NGBP-PJT), NBP based on spanning trees (NBP-ST), and uniformly-reweighted NBP (URW-NBP). We also extend NBP to cooperative localization in mobile networks. In contrast to previous methods, we use optional smoothing, provide a novel communication protocol, and increase the efficiency of the sampling techniques. Moreover, we propose novel algorithms for distributed tracking, in which the goal is to track a passive object that cannot localize itself. In particular, we develop distributed particle filtering (DPF) based on three asynchronous belief consensus (BC) algorithms: standard belief consensus (SBC), broadcast gossip (BG), and belief propagation (BP). Finally, the last part of this thesis includes an experimental analysis of some of the proposed algorithms, in which we found that the results based on real measurements are very similar to the results based on theoretical models.
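The distributed-tracking part rests on belief consensus: nodes repeatedly fuse their local beliefs with their neighbours' until the network agrees on a common posterior. The sketch below is a synchronous, discrete-state simplification offered only to convey the mechanism; the thesis's SBC, BG, and BP variants are asynchronous.

```python
import numpy as np

def belief_consensus(log_beliefs, adjacency, n_iters=50, step=0.5):
    """Each node moves its log-belief toward the average of its neighbours'
    log-beliefs, driving all nodes toward a common (geometric-mean) fusion."""
    b = np.array(log_beliefs, dtype=float)        # shape: (nodes, states)
    deg = adjacency.sum(axis=1, keepdims=True)
    for _ in range(n_iters):
        b = (1 - step) * b + step * (adjacency @ b) / np.maximum(deg, 1)
    p = np.exp(b - b.max(axis=1, keepdims=True))  # back to probabilities
    return p / p.sum(axis=1, keepdims=True)
```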
14

Dynamic Factored Particle Filtering for Context-Specific Correlations

Mostinski, Dimitri, 03 May 2007
In order to control any system, one needs to know the system's current state. In many real-world scenarios the state cannot be determined with certainty because sensors are noisy or simply missing. In such cases one must use probabilistic inference to compute the likely states of the system, and because such cases are common, the field of Artificial Intelligence offers many techniques to choose from. Formally, we must compute a probability distribution over all possible states. Doing this exactly is difficult because the number of states is exponential in the number of variables in the system and because the joint PDF may not have a closed form. Many approximation techniques have been developed over the years, but none ideally suited the problem we faced. Particle filtering is a popular scheme that approximates the joint PDF over the variables in the system by a set of weighted samples. It works even when the joint PDF has no closed form, and the sample size can be adjusted to trade accuracy for computation time. However, with many variables the sample size required for a good approximation can still become prohibitively large. Factored particle filtering uses the structure of variable dependencies to split the problem into many smaller subproblems and scales better when such a decomposition is possible. Our problem was unusual, however, because some normally independent variables would become strongly correlated for short periods of time. This dynamically changing dependency structure was not handled effectively by existing techniques: treating the variables as always correlated meant the problem did not scale, while treating them as always independent introduced errors too large to tolerate. It was necessary to develop an approach that exploits the variables' independence whenever possible but does not introduce large errors when they become correlated. We have developed a new technique for monitoring the state of systems with context-specific correlations. It is based on the idea of caching the context in which correlations arise and otherwise keeping the variables independent. Our evaluation shows that the technique outperforms existing ones and is the first viable solution for the class of problems we consider.
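To make the factored-versus-joint trade-off concrete, the toy sketch below performs one filtering step that keeps two state variables in separate particle sets while they are independent, and pairs them into joint samples only when a correlation context is active. This is a generic illustration, not Mostinski's context-caching algorithm; note that splitting the joint sample back into marginals discards the induced correlation, which is roughly the error a real method must control.

```python
import numpy as np

rng = np.random.default_rng(0)

def factored_step(p1, p2, z, correlated, lik1, lik2, lik_joint):
    """One update of a toy factored particle filter over variables x1, x2."""
    n = len(p1)
    if not correlated:
        # Independent context: filter each factor with its own likelihood.
        w1 = lik1(z, p1); p1 = p1[rng.choice(n, n, p=w1 / w1.sum())]
        w2 = lik2(z, p2); p2 = p2[rng.choice(n, n, p=w2 / w2.sum())]
    else:
        # Correlated context: pair marginal particles into joint samples,
        # reweight with a joint likelihood, then split back into marginals.
        joint = np.stack([rng.permutation(p1), rng.permutation(p2)], axis=1)
        w = lik_joint(z, joint)
        joint = joint[rng.choice(n, n, p=w / w.sum())]
        p1, p2 = joint[:, 0].copy(), joint[:, 1].copy()
    return p1, p2
```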
16

Structural Estimation Using Sequential Monte Carlo Methods

Chen, Hao, January 2011
This dissertation introduces a new sequential Monte Carlo (SMC) based estimation framework for structural models used in macroeconomics and industrial organization. Current Markov chain Monte Carlo (MCMC) estimation methods for structural models suffer from slow Markov chain convergence, which means the parameter and state spaces of interest might not be properly explored unless huge numbers of samples are simulated. This can impose an insurmountable computational burden when estimating structural models that are expensive to solve. In contrast, SMC methods rely on the principle of sequential importance sampling to jointly evolve simulated particles, thus bypassing the dependence on Markov chain convergence altogether. This dissertation explores the feasibility of, and the potential benefits to, estimating structural models with SMC-based methods.

Chapter 1 casts the structural estimation problem as inference in hidden Markov models and demonstrates the approach with a simple growth model.

Chapter 2 presents the key ingredients, both conceptual and theoretical, of successful SMC parameter estimation strategies in the context of structural economic models.

Chapter 3, based on Chen, Petralia and Lopes (2010), develops SMC estimation methods for dynamic stochastic general equilibrium (DSGE) models. SMC algorithms allow simultaneous filtering of time-varying state vectors and estimation of fixed parameters. We first establish the empirical feasibility of the full SMC approach by comparing estimation results from MCMC batch estimation and SMC on-line estimation on a simple neoclassical growth model. We then estimate a large-scale DSGE model for the Euro area developed in Smets and Wouters (2003) with a full SMC approach, and revisit the ongoing debate over the merits of reduced-form versus structural models in macroeconomics by performing sequential model assessment between the DSGE model and various VAR/BVAR models.

Chapter 4 proposes an SMC estimation procedure and shows that it readily applies to the estimation of dynamic discrete games with serially correlated endogenous state variables. I apply this procedure to a dynamic oligopolistic game of entry using data from the generic pharmaceutical industry and demonstrate that the proposed SMC method can better explore the parameter posterior space while being more computationally efficient than MCMC estimation. In addition, I show how the unobserved endogenous cost paths can be recovered using particle smoothing, both with and without parameter uncertainty. Parameter estimates obtained with this SMC-based method largely concur with earlier findings that the spillover effect from market entry is significant and plays an important role in the generic drug industry, but suggest it may not be as large as previously thought once full model uncertainty is taken into account during estimation.
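The computational core of such SMC schemes is an unbiased particle-filter estimate of the likelihood p(y | θ), which can then drive importance sampling or sequential updating over the parameters. A minimal sketch for a toy linear-Gaussian state-space model follows; the model and its parameterization are illustrative assumptions, not the DSGE or entry-game specifications estimated in the dissertation.

```python
import numpy as np

def pf_loglik(y, theta, n=1000, seed=0):
    """Bootstrap-filter estimate of log p(y | theta) for the toy model
    x_t = rho x_{t-1} + sigma e_t,  y_t = x_t + tau u_t,  e, u ~ N(0, 1)."""
    rho, sigma, tau = theta                                # require |rho| < 1
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma / np.sqrt(1.0 - rho**2), n)  # stationary start
    ll = 0.0
    for z in y:
        x = rho * x + sigma * rng.normal(size=n)           # propagate particles
        w = np.exp(-0.5 * ((z - x) / tau) ** 2) / (tau * np.sqrt(2 * np.pi))
        ll += np.log(w.mean())                             # likelihood increment
        x = x[rng.choice(n, n, p=w / w.sum())]             # resample
    return ll
```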
17

Online Learning of Non-Stationary Networks, with Application to Financial Data

Hongo, Yasunori, January 2012
In this thesis, we propose a new learning algorithm for non-stationary Dynamic Bayesian Networks (DBNs). Although a number of effective learning algorithms for non-stationary DBNs have previously been proposed and applied in Signal Processing and Computational Biology, they are batch algorithms that cannot be applied to online time-series data. We therefore propose a particle-filtering-based learning algorithm that handles online time-series data. To evaluate the algorithm, we apply it to a simulated data set and a real-world financial data set. The results on the simulated data show that our algorithm estimates accurately and detects changes. The results on the financial data recover several features suggested in previous research, which further supports the effectiveness of our algorithm.
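As a concrete toy analogue of particle-based online learning of a non-stationary quantity, the sketch below tracks a time-varying coefficient b_t in y_t = b_t x_t + noise, letting b_t drift as a random walk between observations. The model and noise levels are assumptions for illustration; the thesis's algorithm learns network structure, not a single coefficient.

```python
import numpy as np

def track_coefficient(x, y, n=2000, drift=0.05, obs_sd=0.5, seed=1):
    """Bootstrap particle filter over a random-walk coefficient b_t."""
    rng = np.random.default_rng(seed)
    b = rng.normal(0.0, 1.0, n)
    path = []
    for xt, yt in zip(x, y):
        b = b + drift * rng.normal(size=n)          # let the coefficient drift
        w = np.exp(-0.5 * ((yt - b * xt) / obs_sd) ** 2)
        b = b[rng.choice(n, n, p=w / w.sum())]      # resample by fit
        path.append(b.mean())                       # online point estimate
    return np.array(path)
```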
18

Statistical methods for 2D image segmentation and 3D pose estimation

Sandhu, Romeil Singh, 26 October 2010
The field of computer vision focuses on developing techniques to exploit and extract information from underlying data that may represent images or other multidimensional data. Two well-studied problems in this field are the fundamental tasks of 2D image segmentation and 3D pose estimation from a 2D scene. In this thesis, we first introduce two novel methodologies that solve 2D image segmentation and 3D pose estimation independently of each other. Then, by leveraging the advantages of certain techniques from each problem, we couple both tasks in a variational and non-rigid manner through a single energy functional. The three theoretical contributions of this thesis are thus as follows. First, a new distribution metric for 2D image segmentation is introduced and employed within the geometric active contour (GAC) framework. Second, a novel particle filtering approach is proposed for estimating the pose between two point sets that differ by a rigid-body transformation. Third, the two techniques of image segmentation and pose estimation are coupled in a single energy functional for a class of 3D rigid objects. After laying this groundwork, we turn to the applicability of these contributions to real-world problems such as visual tracking; in particular, we develop a novel tracking scheme for 3-D Laser RADAR imagery. The proposed contributions are, however, solutions to general imaging problems and can therefore also be applied to medical imaging tasks such as extracting the prostate from MRI imagery.
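To convey the flavour of particle filtering for rigid point-set alignment, here is a small static sketch over 2-D poses (rotation θ plus translation): particles are jittered and resampled according to how well the transformed source points match the destination set. This is a generic illustration under assumed parameters, not the thesis's method, which addresses the full 3D problem and its coupling with segmentation.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit(pose, src, dst, beta=5.0):
    """Score one pose particle by nearest-neighbour fit of src onto dst."""
    th, tx, ty = pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    moved = src @ R.T + np.array([tx, ty])
    d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2).min(axis=1)
    return np.exp(-beta * d.mean())

def pose_pf(src, dst, n=500, iters=30, jitter=(0.05, 0.1, 0.1)):
    particles = np.column_stack([rng.uniform(-np.pi, np.pi, n),
                                 rng.normal(0.0, 2.0, (n, 2))])
    for _ in range(iters):
        particles += rng.normal(0.0, jitter, (n, 3))    # diffuse the poses
        w = np.array([fit(p, src, dst) for p in particles])
        particles = particles[rng.choice(n, n, p=w / w.sum())]
    return particles.mean(axis=0)                       # (theta, tx, ty)
```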
19

INTEGRATED DECISION MAKING FOR PLANNING AND CONTROL OF DISTRIBUTED MANUFACTURING ENTERPRISES USING DYNAMIC-DATA-DRIVEN ADAPTIVE MULTI-SCALE SIMULATIONS (DDDAMS)

Celik, Nurcin, January 2010
Discrete-event simulation has become one of the most widely used analysis tools for large-scale, complex, and dynamic systems such as supply chains, as it can take randomness into account and accommodate very detailed models. However, simulating such systems poses major challenges, especially when the simulations are used to support short-term decisions (e.g., the operational, maintenance, and scheduling decisions considered in this research). First, a detailed simulation requires significant computation time. Second, given the enormous amount of dynamically changing data in the system, information must be updated judiciously in the model to prevent unnecessary use of computing and networking resources. Third, methods allowing dynamic data updates during simulation execution are lacking. Overall, in a simulation-based planning and control framework, timely monitoring, analysis, and control are essential to avoid disrupting a dynamically changing system. To meet this temporal requirement and address the above challenges, a Dynamic-Data-Driven Adaptive Multi-Scale Simulation (DDDAMS) paradigm is proposed to adaptively adjust the fidelity of a simulation model against available computational resources by incorporating dynamic data into the executing model, which then steers the measurement process for selective data update. To the best of our knowledge, the proposed DDDAMS methodology is one of the first efforts to present a coherent, integrated decision-making framework for timely planning and control of distributed manufacturing enterprises. To this end, a comprehensive system architecture and methodologies are first proposed, whose components include 1) a real-time DDDAM-Simulation, 2) grid computing modules, 3) a Web Service communication server, 4) a database, 5) various sensors, and 6) the real system. Four algorithms are then developed and embedded into a real-time simulator to enable its DDDAMS capabilities: abnormality detection, fidelity selection, fidelity assignment, and prediction and task generation. As part of the developed algorithms, improvements are made to the resampling techniques for sequential Bayesian inference, and their performance is benchmarked in terms of resampling quality and computational efficiency. Grid computing and Web Services are used for computational resource management and interoperable communication among distributed software components, respectively. A prototype of the proposed DDDAM-Simulation was successfully implemented for preventive maintenance scheduling and part routing scheduling in a semiconductor manufacturing supply chain, with quite promising results.
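One of the standard schemes usually included in such resampling benchmarks is systematic resampling, which needs only a single uniform draw, runs in O(N), and has low variance. A minimal textbook sketch follows; it is not necessarily the exact variant improved in this work.

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Return particle indices chosen by systematic resampling: N evenly
    spaced pointers, offset by one uniform draw, swept through the CDF."""
    rng = rng or np.random.default_rng()
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cdf = np.cumsum(weights)
    cdf[-1] = 1.0                  # guard against floating-point drift
    return np.searchsorted(cdf, positions)
```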
20

An adaptive feature-based tracking system

Pretorius, Eugene, 03 1900
Thesis (MSc (Mathematical Sciences. Applied Mathematics))--University of Stellenbosch, 2008. / In this thesis, tracking tools based on object features are developed to robustly track an object using particle filtering. Automatic on-line initialisation techniques use motion detection and dynamic background modelling to extract the features of moving objects. Automatic adaptation of the feature models during tracking is implemented and tested.
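A minimal sketch of the motion-detection ingredient, a running-average background model with per-pixel differencing, is shown below. The threshold and update rate are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def detect_motion(frame, background, alpha=0.05, thresh=25.0):
    """Flag pixels that deviate from the background model, then update the
    model only where the scene is static so moving objects don't bleed in."""
    frame = frame.astype(float)
    mask = np.abs(frame - background) > thresh                 # moving pixels
    background = np.where(mask, background,
                          (1 - alpha) * background + alpha * frame)
    return mask, background
```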
