271 |
Gaussian Mixture Model Based SLAM: Theory and Application, Department of Aerospace Engineering. Turnowicz, Matthew Ryan, 08 December 2017 (has links)
This dissertation describes the development of a simultaneous localization and mapping (SLAM) algorithm suitable for high dimensional vehicle and map states. The goal of SLAM is to enable vehicles to navigate autonomously without external aiding sources. SLAM's combination of the localization and mapping problems makes it especially difficult to solve accurately and efficiently, due to the sheer size of the unknown state vector. The vehicle states are typically constant in number, while the map states increase with time. The growing number of unknowns in the map state makes traditional Kalman filters impractical: the covariance matrix grows too large and the computational complexity becomes overwhelming. Particle filters have proved beneficial for alleviating the complexity of the SLAM problem for low dimensional vehicle states, but little work has been done for higher dimensional states. This research provides a Gaussian Mixture Model based alternative to particle filtering SLAM methods, along with a further partition that alleviates the vehicle state dimensionality problem of the standard particle filter. The early chapters provide SLAM background and basic theory, followed by a detailed description of the new algorithm. Simulations demonstrate the performance of the algorithm, and an aerial SLAM platform is then developed for further testing. The aerial SLAM system uses an RGBD camera and an inertial measurement unit to collect SLAM data, with ground truth captured by an indoor optical motion capture system. Details on image processing and the inertial integration are provided. The performance of the algorithm is compared to a state-of-the-art particle filtering based SLAM algorithm, and the results are discussed. Further work performed in industry is described, involving SLAM for adding transponders onto long-baseline acoustic arrays and stereo-inertial SLAM for 3D reconstruction of deep-water sub-sea structures. Finally, a neatly packaged production line version of the stereo-inertial SLAM system is presented.
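For illustration, below is a minimal sketch of the Gaussian-sum (mixture) measurement update that underlies GMM filtering, applied to a toy one-landmark state; all names and values are hypothetical, and this is not the dissertation's partitioned algorithm.

```python
# Toy Gaussian-sum measurement update: Kalman-update each mixture component
# and reweight it by its measurement likelihood (hypothetical illustration).
import numpy as np

def gmm_update(weights, means, covs, z, H, R):
    """Condition each Gaussian component on measurement z and reweight."""
    new_w, new_m, new_P = [], [], []
    for w, m, P in zip(weights, means, covs):
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        innov = z - H @ m
        new_m.append(m + K @ innov)
        new_P.append((np.eye(len(m)) - K @ H) @ P)
        # likelihood of z under this component reweights the mixture
        lik = np.exp(-0.5 * innov @ np.linalg.inv(S) @ innov) / \
              np.sqrt(np.linalg.det(2 * np.pi * S))
        new_w.append(w * lik)
    new_w = np.array(new_w)
    return new_w / new_w.sum(), new_m, new_P

# state x = [vehicle, landmark]; measurement z = landmark - vehicle + noise
H = np.array([[-1.0, 1.0]])
R = np.array([[0.1]])
w, m, P = gmm_update(
    np.array([0.5, 0.5]),                      # two landmark hypotheses
    [np.array([0.0, 5.0]), np.array([0.0, 8.0])],
    [np.eye(2), np.eye(2)],
    z=np.array([5.2]), H=H, R=R)
print(w)   # posterior weight concentrates on the consistent hypothesis
```

Unlike a particle filter, each hypothesis here carries a full Gaussian, so far fewer components are needed to represent the posterior over a high dimensional state.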
|
272 |
Effectiveness of Crop Reflectance Sensors on Detection of Cotton (Gossypium Hirsutum L.) Growth and Nitrogen Status. Raper, Tyson Brant, 06 August 2011 (has links)
Cotton (Gossypium hirsutum L.) reflectance has potential to drive variable rate N (VRN) applications, but more precise definitions of the relationships between sensor-observed reflectance, plant height, and N status are necessary. The objectives of this study were to define the effectiveness of and relationships between three commercially available sensors, and to examine relationships of wavelengths and indices obtained by a spectrometer to plant height and N status. Field trials were conducted during the 2008-2010 growing seasons at Mississippi State, MS. Fertilizer N rates ranged from 0-135 kg N ha-1 to establish growth differences. Sensor effects were significant, but sensors monitoring the Normalized Difference Vegetation Index (NDVI) failed to correlate well with early-season N status. Wavelengths and indices utilizing the red edge correlated most strongly with N status. Both Guyot’s Red Edge Index (REI) and the Canopy Chlorophyll Content Index (CCCI) correlated consistently with N status, independent of biomass, early enough in the growing season to drive VRN.
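For reference, these are common forms of the indices discussed; the thesis-specific REI and CCCI formulations may differ from these standard definitions.

```latex
% Common index definitions (assumed standard forms, not necessarily the
% exact formulations used in the thesis).
\[
\mathrm{NDVI} = \frac{R_{\mathrm{NIR}} - R_{\mathrm{red}}}{R_{\mathrm{NIR}} + R_{\mathrm{red}}},
\qquad
\mathrm{NDRE} = \frac{R_{\mathrm{NIR}} - R_{\mathrm{red\ edge}}}{R_{\mathrm{NIR}} + R_{\mathrm{red\ edge}}},
\qquad
\mathrm{CCCI} = \frac{\mathrm{NDRE} - \mathrm{NDRE}_{\min}}{\mathrm{NDRE}_{\max} - \mathrm{NDRE}_{\min}}.
\]
```

Because the red-edge band responds to chlorophyll content rather than canopy cover alone, red-edge indices can separate N status from biomass where NDVI saturates.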
|
273 |
Are “remember” And “know” The Same Process?—A Perspective From Reaction Time Data. Zeng, Min, 01 January 2007 (has links) (PDF)
The remember-know paradigm is widely used in recognition memory research to explore the mechanisms underlying recognition judgments. The most intriguing question about the paradigm is: are the processes that underlie “remember” and “know” responses the same or different? The extant remember-know models provide different answers. The dual-process model (Yonelinas, 1994) assumes that “remember” and “know” judgments are made with qualitatively different underlying processes. The one-dimensional Signal Detection Theory (SDT) model (Donaldson, 1996; Hirshman & Master, 1997) and the Sum-difference Theory of Remembering and Knowing (STREAK) model (Rotello, Macmillan, & Reeder, 2004) assume that “remember” and “know” judgments are made with the same underlying processes but different response criteria. In this thesis, three experiments were conducted to evaluate these models. The remember-know models were fit to the accuracy data to see which model provides the best account of the ROC data. In addition, the reaction time data were fit with ex-Gaussian distributions, and the best-fit skew parameters were used to reveal whether the underlying strategic processes for “remember” and “know” judgments are the same or not.
The results of the remember-know model fits were mixed: in the first experiment, with list length manipulation, 6 out of 8 cases were best fit by the one-dimensional models and the other 2 cases by the dual-process models; in the second experiment, with list strength manipulation, 11 out of 18 cases were best fit by the one-dimensional models, another 6 cases by the dual-process models, and the remaining case by the STREAK model; in the third experiment, with response bias manipulation, 6 out of 16 cases were best fit by the one-dimensional models and the other 10 cases by the dual-process models.
The results of the ex-Gaussian fits to the RT data better supported the one-dimensional model: for the subjects who provided enough overlapping data to compare the distributions of hits followed by “remember” and “know” judgments, the values of the skew parameter did not differ between “remember” and “know” responses in 7 out of 8 cases. This indicates that the same process underlies “remember” and “know” responses.
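As a minimal sketch of the ex-Gaussian fitting step, SciPy's exponnorm distribution (an exponentially modified Gaussian) can recover the mu, sigma, and tau parameters; the simulated reaction times and parameter values below are illustrative, not the thesis data.

```python
# Fit an ex-Gaussian (exponentially modified Gaussian) to reaction times.
# exponnorm is parameterized by K = tau / sigma, loc = mu, scale = sigma.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# simulate RTs: Gaussian component (mu, sigma) plus exponential tail (tau)
mu, sigma, tau = 0.50, 0.08, 0.20
rt = rng.normal(mu, sigma, 500) + rng.exponential(tau, 500)

K_hat, mu_hat, sigma_hat = stats.exponnorm.fit(rt)
tau_hat = K_hat * sigma_hat   # the skew parameter compared across judgments
print(f"mu={mu_hat:.3f} sigma={sigma_hat:.3f} tau={tau_hat:.3f}")
```

Comparing the fitted tau for “remember” versus “know” trials is the kind of test used here: if the same process underlies both judgments, the tail parameter should not differ.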
|
274 |
Learning safe predictive control with Gaussian processes. Van Niekerk, Benjamin, January 2019 (has links)
A research report submitted in partial fulfillment of the requirements for the degree of Master of Science in the School of Computer Science and Applied Mathematics to the Faculty of Science, University of the Witwatersrand, 2019 / Learning-based methods have recently become popular in control engineering, achieving good performance on a number of challenging tasks. However, in complex environments where data efficiency and safety are critical, current methods remain unsatisfactory. As a step toward addressing these shortcomings, we propose a learning-based approach that combines Gaussian process regression with model predictive control. Using sparse spectrum Gaussian processes, we extend previous work by learning a model of the dynamics incrementally from a stream of sensory data. Utilizing the learned dynamics and model uncertainty, we develop a controller that can learn and plan in real-time under non-linear constraints. We test our approach on pendulum and cartpole swing-up problems and demonstrate the benefits of learning on a challenging autonomous racing task. Additionally, we show that learned dynamics models can be transferred to new tasks without any additional training.
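Below is a minimal sketch of sparse spectrum GP regression via random Fourier features, assuming an RBF kernel; it is a toy illustration of the model class, not the report's incremental dynamics learner, and all hyperparameter values are hypothetical.

```python
# Sparse spectrum GP: approximate an RBF kernel with M random Fourier
# features, then do Bayesian linear regression in feature space.
import numpy as np

rng = np.random.default_rng(1)
ell, sf2, sn2, M = 0.5, 1.0, 0.01, 100   # lengthscale, signal var, noise var, features

X = rng.uniform(-3, 3, (50, 1))
y = np.sin(2 * X[:, 0]) + rng.normal(0, np.sqrt(sn2), 50)

W = rng.normal(0, 1 / ell, (M, X.shape[1]))   # spectral frequencies of the RBF

def phi(X):
    # trig feature map whose inner product approximates the RBF kernel
    A = X @ W.T
    return np.sqrt(sf2 / M) * np.hstack([np.cos(A), np.sin(A)])

Phi = phi(X)
A = Phi.T @ Phi + sn2 * np.eye(2 * M)         # (scaled) posterior precision
w_mean = np.linalg.solve(A, Phi.T @ y)        # posterior mean weights

Xs = np.linspace(-3, 3, 5)[:, None]
Ps = phi(Xs)
mean = Ps @ w_mean
var = sn2 * np.sum(Ps * np.linalg.solve(A, Ps.T).T, axis=1) + sn2
print(np.c_[mean, np.sqrt(var)])              # predictive mean and std
```

Because the model is linear in the fixed features, new data points can be folded in with cheap rank-one updates of the precision matrix, which is what makes the incremental, streaming setting practical.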
|
275 |
Modeling Temperature Reduction in Tendons Using Gaussian Processes Within a Dynamic Linear Model. Wyss, Richard David, 02 July 2009 (has links) (PDF)
The time it takes an athlete to recover from an injury can be highly influenced by training procedures as well as by the medical care and physical therapy received. When an injury occurs to the muscles or tendons, it is desirable to cool them within the body to reduce inflammation and thereby shorten recovery time. Consequently, a treatment that is effective in reducing tendon temperature helps the athlete recover faster. In this project, Bayesian inference with Gaussian processes is used to model the effect that different treatments have in reducing tendon temperature within the ankle. Gaussian processes provide a powerful methodology for modeling data that exhibit complex characteristics, such as nonlinear behavior, while retaining mathematical simplicity.
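For background, these are the standard GP regression posterior equations at a test input, assuming noisy observations y = f(X) + ε with ε ~ N(0, σ²I); the project's dynamic linear model embedding is more elaborate than this generic form.

```latex
% Generic GP regression posterior (background form, not the project's
% specific dynamic linear model construction).
\[
\mu_* = k_*^{\top}\,(K + \sigma^2 I)^{-1} y,
\qquad
\sigma_*^2 = k(x_*, x_*) - k_*^{\top}\,(K + \sigma^2 I)^{-1} k_*,
\]
```

where K_ij = k(x_i, x_j) is the kernel matrix over the training inputs and (k_*)_i = k(x_i, x_*). The kernel choice is what lets the model capture nonlinear temperature curves without an explicit parametric form.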
|
276 |
An Applied Investigation of Gaussian Markov Random Fields. Olsen, Jessica Lyn, 26 June 2012 (has links) (PDF)
Recently, Bayesian methods have become central to modern statistics, particularly through the ability to incorporate hierarchical models. Correlated data, such as the data found in spatial and temporal applications, have benefited greatly from the development and application of Bayesian statistics. One particular application of Bayesian modeling is the Gaussian Markov Random Field (GMRF). These methods have proven very useful in providing a framework for correlated data. I demonstrate the power of GMRFs by applying the method to two data sets: a set of temporal data involving car accidents in the UK and a set of spatial data involving Provo area apartment complexes. For the first data set, I examine how including a seatbelt covariate affects the estimates of the number of car accidents. For the second, I scrutinize the effect of BYU approval on apartment complexes. In both applications, Laplace approximations are investigated for cases where normal distribution assumptions do not hold.
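As a sketch of the model class, the snippet below builds a first-order random-walk GMRF through its sparse precision matrix and draws a sample via the Cholesky factor; the length and hyperparameters are hypothetical, not the thesis models for the accident or apartment data.

```python
# First-order random-walk GMRF: sparse precision matrix Q, sampled via
# the Cholesky factor of Q (hypothetical illustration).
import numpy as np

n, kappa = 100, 10.0                      # series length, precision scale
D = np.diff(np.eye(n), axis=0)            # (n-1) x n first-difference matrix
Q = kappa * D.T @ D                       # RW1 precision (intrinsic, rank n-1)
Q += 1e-6 * np.eye(n)                     # small jitter makes Q proper

L = np.linalg.cholesky(Q)                 # Q = L L^T
z = np.random.default_rng(2).standard_normal(n)
x = np.linalg.solve(L.T, z)               # L^T x = z  =>  Cov(x) = Q^{-1}
print(x[:5])
```

Working with the precision (rather than covariance) matrix is the point of GMRFs: conditional independence shows up as zeros in Q, so the matrix stays sparse and computation stays cheap even for long series or large spatial lattices.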
|
277 |
ON RECONSTRUCTING GAUSSIAN MIXTURES FROM THE DISTANCE BETWEEN TWO SAMPLES: AN ALGEBRAIC PERSPECTIVE. Kindyl Lu Zhao King (15347239), 25 April 2023 (has links)
This thesis is concerned with the problem of characterizing the orbits of certain probability density functions under the action of the Euclidean group. Our motivating application is the recognition of a point configuration where the coordinates of the points are measured under noisy conditions. Consider a random variable X in R^d with probability density function ρ(x). Let x_1 and x_2 be independent random samples following ρ(x). Define Δ as the squared Euclidean distance between x_1 and x_2. It has previously been shown that two distributions ρ(x) and ρ̃(x) consisting of Dirac delta distributions in generic positions that have the same respective distributions of Δ are necessarily related by a rigid motion. That is, there exists some rigid motion g in the Euclidean group E(d) such that ρ̃(x) = ρ(g · x) for all x ∈ R^d. To account for noise in the measurements, we assume X is a random variable in R^d whose density is a k-component mixture of Gaussian distributions with means in generic position. We further assume that the covariance matrices of the Gaussian components are equal and of the form Σ = σ²I_d with 0 ≤ σ² ∈ R. In Theorem 3.1.1 and Theorem 3.2.1, we prove that, when σ² is known, generic k-component Gaussian mixtures are uniquely reconstructible up to a rigid motion from the density of Δ. A more general formulation is proven in Theorem 3.2.3. Similarly, when σ² is unknown, we prove in Theorem 4.1.1 and Theorem 4.1.2 that generic equally-weighted k-component Gaussian mixtures with k = 1 and k = 2 are uniquely reconstructible up to a rigid motion from the distribution of Δ. There are at most three non-equivalent equally-weighted 3-component Gaussian mixtures up to a rigid motion having the same distribution of Δ, as proven in Theorem 4.1.3. In Theorem 4.1.4, we present a test to check whether, for a given k and d, the number of non-equivalent equally-weighted k-component Gaussian mixtures in R^d having the same distribution of Δ is at most (k choose 2) + 1. Numerical computations showed that distributions with k = 4, 5, 6, 7 such that d ≤ k − 2, and with (k, d) = (8, 1), pass the test, and thus have a finite number of reconstructions up to a rigid motion. When σ² is unknown and the mixture weights are also unknown, we prove in Theorem 4.2.1 that there are at most four non-equivalent 2-component Gaussian mixtures up to a rigid motion having the same distribution of Δ.
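To make the role of Δ concrete, a standard computation consistent with this setup: for X_1, X_2 drawn independently from the mixture with weights w_i, the difference X_1 − X_2 is again a Gaussian mixture, so Δ is a mixture of scaled noncentral chi-square distributions.

```latex
% Distribution of Delta for the equal-covariance mixture (standard
% computation consistent with the setup above).
\[
X_1 - X_2 \sim \sum_{i,j} w_i w_j \,\mathcal{N}\!\bigl(\mu_i - \mu_j,\; 2\sigma^2 I_d\bigr),
\qquad
\frac{\Delta}{2\sigma^2} \,\Bigm|\, (i,j) \;\sim\; {\chi'}^{2}_{d}\!\left(\frac{\lVert \mu_i - \mu_j \rVert^2}{2\sigma^2}\right),
\]
```

so the distribution of Δ depends on the mixture only through σ², the weights, and the multiset of pairwise distances ‖μ_i − μ_j‖, which is the information the reconstruction results work from.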
|
278 |
Effects Of Atmospheric Turbulence On The Propagation Of Flattened Gaussian Optical Beams. Cowan, Doris, 01 January 2006 (links)
In an attempt to mitigate the effects of the atmosphere on the coherence of an optical (laser) beam, interest has recently been shown in changing the beam shape to determine whether a different power distribution at the transmitter will reduce the effects of random fluctuations in the refractive index. Here, a model is developed for the field of a flattened Gaussian beam (FGB) as it propagates through atmospheric turbulence, and the resulting effects upon the scintillation of the beam and upon beam wander are determined. These results are compared with the corresponding effects on a standard TEM00 Gaussian beam. The theoretical results are verified by comparison with a computer simulation model for the flattened Gaussian beam. Further, the probability of fade and the mean fade time under weak fluctuation conditions are determined using the widely accepted lognormal model. Although this model has been shown to be somewhat optimistic when compared to results obtained in field tests, it has value here in allowing us to compare the effects of atmospheric conditions on the fade statistics of the FGB with those of the lowest order Gaussian beam. The effective spot size of the beam, as it compares to the spot size of the lowest order Gaussian beam, is also analyzed using Carter's definition of spot size for higher order Gaussian beams.
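For reference, this is Gori's flattened Gaussian beam profile of order N at the waist plane, assuming this is the FGB family the dissertation builds on; the dissertation may use a normalized variant.

```latex
% Gori's (1994) flattened Gaussian beam of order N at the waist
% (assumed definition).
\[
U_N(r, 0) \;=\; \exp\!\left(-\frac{r^2}{w_0^2}\right)
\sum_{n=0}^{N} \frac{1}{n!}\left(\frac{r^2}{w_0^2}\right)^{n},
\]
```

which reduces to the TEM00 Gaussian for N = 0 and approaches a flat-topped profile as N grows, so a single parameter controls how far the transmitted power distribution departs from the standard Gaussian case.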
|
279 |
Development of an Integrated Gaussian Process Metamodeling Application for Engineering Design. Baukol, Collin R, 01 June 2009 (has links) (PDF)
As engineering technologies continue to grow and improve, the complexity of the engineering models which utilize these technologies also increases. This seemingly endless cycle of increasing computational power and demand has sparked the need to create representative models, or metamodels, which accurately reflect these complex design spaces in a computationally efficient manner. As research into advanced metamodeling techniques continues, it is important to remember the design engineers who need to use these advancements. Even experienced engineers may not be well versed in the material and mathematical background currently required to generate and fully comprehend advanced, complex metamodels. A metamodeling environment which utilizes an advanced metamodeling technique known as Gaussian Process is being developed to help bridge the gap that is currently growing between the research community and design engineers. This tool allows users to easily create, modify, query, and visually/numerically assess the quality of metamodels for a broad spectrum of design challenges.
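As an illustration of the create/query workflow such an environment supports, here is a minimal sketch using scikit-learn's GP regressor as a stand-in; the thesis application is its own tool, not scikit-learn, and the toy simulation function is hypothetical.

```python
# Build a GP metamodel of an expensive simulation from a few sampled design
# points, then query it with uncertainty (hypothetical illustration).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulation(x):
    # stand-in for an expensive engineering analysis
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1] ** 2)

rng = np.random.default_rng(3)
X_train = rng.uniform(-1, 1, (30, 2))            # sampled design points
y_train = simulation(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                              normalize_y=True)
gp.fit(X_train, y_train)                          # build the metamodel

X_query = rng.uniform(-1, 1, (5, 2))
mean, std = gp.predict(X_query, return_std=True)  # query with uncertainty
print(np.c_[mean, std])      # large std flags poorly sampled design regions
```

The predictive standard deviation is what makes GP metamodels attractive for design: it tells the engineer where the surrogate can be trusted and where more simulation runs are needed.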
|
280 |
Visual Analytics for High Dimensional Simulation Ensembles. Dahshan, Mai Mansour Soliman Ismail, 10 June 2021 (has links)
Recent advancements in data acquisition, storage, and computing power have enabled scientists from various scientific and engineering domains to simulate more complex and longer phenomena. Scientists are usually interested in understanding the behavior of a phenomenon under different conditions. To do so, they run multiple simulations with different configurations (i.e., parameter settings, boundary/initial conditions, or computational models), resulting in an ensemble dataset. An ensemble empowers scientists to quantify the uncertainty in the simulated phenomenon in terms of the variability between ensemble members, parameter sensitivity and optimization, and the characteristics and outliers within the ensemble members, which can lead to valuable insights about the simulated model.
The size, complexity, and high dimensionality (e.g., of simulation input and output parameters) of simulation ensembles pose a great challenge to their analysis and exploration. Ensemble visualization provides a convenient way to convey the main characteristics of the ensemble for an enhanced understanding of the simulated model. The majority of current ensemble visualization techniques focus on analyzing either the ensemble space or the parameter space. Most parameter space visualizations are not designed for high-dimensional data sets or do not show the intrinsic structures in the ensemble. Conversely, the ensemble space has been visualized either as a comparative visualization of a limited number of ensemble members or as an aggregation of multiple ensemble members that omits potential details of the original ensemble. Thus, to unfold the full potential of simulation ensembles, we designed and developed an approach to the visual analysis of high-dimensional simulation ensembles that merges sensemaking, human expertise, and intuition with machine learning and statistics.
In this work, we explore how semantic interaction and sensemaking can be used to build interactive and intelligent visual analysis tools for simulation ensembles. Specifically, we focus on the complex processes that derive meaningful insights from exploring and iteratively refining the analysis of high-dimensional simulation ensembles when prior knowledge about ensemble features and correlations is limited and/or unavailable. We first developed GLEE (Graphically-Linked Ensemble Explorer), an exploratory visualization tool that enables scientists to analyze and explore correlations and relationships between non-spatial ensembles and their parameters. Then, we developed Spatial GLEE, an extension to GLEE that explores spatial data while simultaneously considering the spatial characteristics (i.e., autocorrelation and spatial variability) and dimensionality of the ensemble. Finally, we developed Image-based GLEE to explore exascale simulation ensembles produced by in-situ visualization. We collaborated with domain experts to evaluate the effectiveness of GLEE using real-world case studies and experiments from different domains.
The core contributions of this work are a visual approach that enables the simultaneous exploration of parameter and ensemble spaces for 2D/3D high-dimensional ensembles, three interactive visualization tools that explore, search, filter, and make sense of non-spatial, spatial, and image-based ensembles, and the use of real-world cases from different domains to demonstrate the effectiveness of the proposed approach. The aim of the proposed approach is to help scientists gain insights by answering questions or testing hypotheses about different aspects of the simulated phenomenon and/or to facilitate knowledge discovery in complex datasets. / Doctor of Philosophy / Scientists run simulations to understand complex phenomena and processes that are expensive, difficult, or even impossible to reproduce in the real world. Current advancements in high-performance computing have enabled scientists from various domains, such as climate, computational fluid dynamics, and aerodynamics, to run more complex simulations than before. However, a single simulation run is not enough to capture all features of a simulated phenomenon. Therefore, scientists run multiple simulations using perturbed input parameters, initial and boundary conditions, or different models, resulting in what is known as an ensemble. An ensemble empowers scientists to understand the model's behavior by studying the relationships between and among ensemble members, the optimal parameter settings, and the influence of input parameters on the simulation output, which can lead to useful knowledge and insights about the simulated phenomenon.
To effectively analyze and explore simulation ensembles, visualization techniques play a significant role in facilitating knowledge discovery through graphical representations. Ensemble visualization offers scientists a better way to understand the simulated model. Most current ensemble visualization techniques are designed to analyze and/or explore either the ensemble space or the parameter space. Therefore, we designed and developed a visual analysis approach for exploring and analyzing high-dimensional parameter and ensemble spaces simultaneously by integrating machine learning and statistics with sensemaking and human expertise.
The contribution of this work is to explore how semantic interaction and sensemaking can be used to explore and analyze high-dimensional simulation ensembles. To do so, we designed and developed a visual analysis approach manifested in an exploratory visualization tool, GLEE (Graphically-Linked Ensemble Explorer), that allows scientists to explore, search, filter, and make sense of high-dimensional 2D/3D simulation ensembles. GLEE's visualization pipeline and interaction techniques use deep learning, feature extraction, spatial regression, and Semantic Interaction (SI) techniques to support the exploration of non-spatial, spatial, and image-based simulation ensembles. GLEE's different visualization tools were evaluated with domain experts from different fields using real-world case studies and experiments.
|