About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
131

Semi-Supervised Anomaly Detection and Heterogeneous Covariance Estimation for Gaussian Processes

Crandell, Ian C. 12 December 2017
In this thesis, we propose a statistical framework for estimating correlation between sensor systems measuring diverse physical phenomena. We consider systems that measure at different temporal frequencies and measure responses with different dimensionalities. Our goal is to provide estimates of correlation between all pairs of sensors and use this information to flag potentially anomalous readings. Our anomaly detection method consists of two primary components: dimensionality reduction through projection and Gaussian process (GP) regression. We use non-metric multidimensional scaling to project a partially observed and potentially non-definite covariance matrix into a low dimensional manifold. The projection is estimated in such a way that positively correlated sensors are close to each other and negatively correlated sensors are distant. We then fit a Gaussian process given these positions and use it to make predictions at our observed locations. Because of the large amount of data we wish to consider, we develop methods to scale GP estimation by taking advantage of the replication structure in the data. Finally, we introduce a semi-supervised method to incorporate expert input into a GP model. We are able to learn a probability surface defined over locations and responses based on sets of points labeled by an analyst as either anomalous or nominal. This allows us to discount the influence of points resembling anomalies without removing them based on a threshold. / Ph. D.
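
The two-stage pipeline the abstract describes — embed sensors with non-metric MDS so that correlation becomes proximity, then fit a GP over the embedding and flag readings far from its predictions — can be sketched compactly with scikit-learn. Everything below (sensor count, synthetic data, kernel, and the 3-sigma flagging rule) is an illustrative assumption, not the thesis's implementation.

```python
# Sketch: non-metric MDS embedding of a sensor correlation matrix,
# followed by GP prediction and residual-based anomaly flagging.
import numpy as np
from sklearn.manifold import MDS
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
n_sensors, n_obs = 20, 200
readings = rng.standard_normal((n_obs, n_sensors)).cumsum(axis=0)

# High positive correlation -> small dissimilarity, negative -> large.
corr = np.corrcoef(readings, rowvar=False)
dissim = 1.0 - corr

embedding = MDS(n_components=2, metric=False,
                dissimilarity="precomputed").fit_transform(dissim)

# Fit a GP over sensor positions for one time slice; flag 3-sigma outliers.
t = 100
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.1)
gp.fit(embedding, readings[t])
mean, std = gp.predict(embedding, return_std=True)
print("flagged sensors:", np.where(np.abs(readings[t] - mean) > 3 * std)[0])
```
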
132

Learning a Spatial Field in Minimum Time with a Team of Robots

Suryan, Varun January 2018
We study an informative path planning problem where the goal is to minimize the time required to learn a spatial field. Specifically, our goal is to ensure that the mean square error between the learned and actual fields is below a predefined value. We study three versions of the problem. In the placement version, the objective is to minimize the number of measurement locations. In the mobile robot version, we seek to minimize the total time required to visit and collect measurements from the measurement locations. In the multi-robot version, the objective is to minimize the time required by the last robot to return to a common starting location called the depot. By exploiting the properties of Gaussian process regression, we present constant-factor approximation algorithms that ensure the required guarantees. In addition to the theoretical results, we also compare the empirical performance against other baseline strategies using a real-world dataset. / M. S. / We solve the problem of accurately measuring a physical phenomenon in minimum time using a team of robots. Examples of such phenomena include the amount of nitrogen present in the soil within a farm and the concentration of harmful chemicals in a water body. Knowing the extent of such quantities accurately is important for a variety of economic and environmental reasons: knowing the content of various nutrients in the soil within a farm can help farmers improve yield and reduce fertilizer application, while the concentration of certain chemicals inside a water body may affect marine life in various ways. In this thesis, we present several algorithms that help robots be deployed efficiently to quantify such phenomena accurately. Traditionally, robots had to be teleoperated; the algorithms proposed in this thesis enable robots to work more autonomously.
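
For the placement version, a standard baseline (not the thesis's approximation algorithm, which carries provable guarantees) is greedy max-variance placement: repeatedly measure where the GP is least certain until the posterior variance target holds everywhere. A minimal sketch, assuming a one-dimensional field and a fixed RBF kernel:

```python
# Greedy GP placement: add measurements at the point of highest posterior
# std until the variance (a proxy for MSE) is below target everywhere.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

grid = np.linspace(0, 10, 200).reshape(-1, 1)   # candidate measurement sites
target_var = 0.05                               # illustrative accuracy target

def true_field(x):
    return np.sin(x).ravel()                    # stand-in for the unknown field

# Fixed kernel (optimizer=None) so variance reduction is deterministic.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3,
                              optimizer=None)
X = np.empty((0, 1))
y = np.empty(0)
while True:
    if len(y) == 0:
        std = np.ones(len(grid))                # prior std of the RBF kernel
    else:
        _, std = gp.predict(grid, return_std=True)
    worst = int(np.argmax(std))
    if std[worst] ** 2 < target_var:
        break
    X = np.vstack([X, grid[worst]])             # measure where the GP is
    y = np.append(y, true_field(grid[worst]))   # most uncertain
    gp.fit(X, y)

print(f"{len(y)} measurements reach posterior variance < {target_var}")
```

The mobile and multi-robot versions then become routing problems over the chosen locations, which is where the travel-time objective and the depot constraint enter.
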
133

Terrain Aided Navigation for Autonomous Underwater Vehicles with Local Gaussian Processes

Chowdhary, Abhilash 28 June 2017
Navigation of autonomous underwater vehicles (AUVs) in the subsea environment is particularly challenging due to the unavailability of GPS, because of the rapid attenuation of electromagnetic waves in water. As a result, the AUV requires alternative methods for position estimation. This thesis describes a terrain-aided navigation approach in which, with the help of a prior depth map, the AUV localizes itself using altitude measurements from a multibeam DVL (Doppler velocity log). The AUV simultaneously builds a probabilistic depth map of the seafloor as it moves to unmapped locations. The main contribution of this thesis is a new, scalable, online terrain-aided navigation solution for AUVs that does not require the assistance of a support surface vessel. Simulation results on synthetic data and experimental results from AUV field trials in Panama City, Florida are also presented. / Master of Science
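
The core localization idea — weight candidate positions by how well the altitude the sensor would measure there matches the prior depth map — is commonly realized with a particle filter, and a toy version fits in a few lines. The analytic depth map, motion model, and noise levels below are illustrative assumptions, not the thesis's implementation (which also extends the map online with local Gaussian processes).

```python
# Toy particle filter for terrain-aided localization against a depth map.
import numpy as np

rng = np.random.default_rng(2)

def depth_map(x, y):
    # Smooth synthetic seafloor standing in for the prior bathymetry.
    return 50 + 10 * np.sin(0.1 * x) * np.cos(0.1 * y)

n_particles = 1000
particles = rng.uniform(0, 100, size=(n_particles, 2))  # candidate positions
weights = np.full(n_particles, 1.0 / n_particles)
true_pos = np.array([40.0, 60.0])
sigma = 1.0                                             # altimeter noise (assumed)

for step in range(20):
    control = np.array([2.0, 1.0])                      # dead-reckoned motion
    true_pos = true_pos + control
    particles += control + rng.normal(0, 0.5, particles.shape)  # process noise

    measured = depth_map(*true_pos) + rng.normal(0, sigma)      # DVL altitude
    predicted = depth_map(particles[:, 0], particles[:, 1])
    weights *= np.exp(-0.5 * ((measured - predicted) / sigma) ** 2)
    weights += 1e-300                                   # guard against underflow
    weights /= weights.sum()

    # Resample when the effective sample size collapses. Note depth alone is
    # ambiguous; it is the sequence of measurements that disambiguates.
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

print("estimated position:", particles.T @ weights)
print("true position:     ", true_pos)
```
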
134

Control Design for Long Endurance Unmanned Underwater Vehicle Systems

Kleiber, Justin Tanner 24 May 2022
In this thesis we demonstrate a technique for robust controller design for an autonomous underwater vehicle (AUV) that explicitly handles the trade-off between reference tracking, agility, and energy-efficient performance. AUVs have many sources of modeling uncertainty, which in turn create uncertainty in maneuvering performance. A robust control design process is proposed to handle these uncertainties while meeting control system performance objectives. We investigate the relationships between linear system design parameters and the control performance of our vehicle in order to inform an H∞ controller synthesis problem with the objective of balancing these trade-offs. We evaluate the controller based on its reference tracking performance, agility, and energy efficiency, and show the efficacy of our control design strategy. / Master of Science / In this thesis we demonstrate a technique for autopilot design for an autonomous underwater vehicle (AUV) that explicitly handles the trade-off between three performance metrics. Mathematical models of AUVs are often unable to fully describe their many physical properties, and the discrepancies between the mathematical model and reality affect how certain we can be about an AUV's behavior. Robust controllers are a class of controllers designed to handle such uncertainty. A robust control design process is proposed to handle these uncertainties while meeting vehicle performance objectives. We investigate the relationships between design parameters and the performance of our vehicle, and then use these relationships to inform the design of a controller. We evaluate this controller based on its energy efficiency, agility, and ability to stay on course, and thus show the effectiveness of our control design strategy.
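
As a rough illustration of the synthesis step only, here is a mixed-sensitivity H∞ design in python-control (whose mixsyn routine requires the slycot backend). The plant and weights are toy stand-ins for the thesis's AUV model: W1 prices tracking error and W2 prices actuator effort, so reshaping them is precisely the tracking/agility/efficiency trade-off described above.

```python
import control as ct

# Toy single-axis model standing in for the AUV dynamics (assumed).
G = ct.tf([1.0], [1.0, 0.8, 0.5])

# W1 large at low frequency  -> good reference tracking;
# W2 large at high frequency -> limited actuator effort / energy use.
W1 = ct.tf([0.5, 1.0], [1.0, 0.001])
W2 = ct.tf([1.0, 0.1], [0.01, 1.0])

# Mixed-sensitivity synthesis: minimize || [W1*S; W2*K*S] ||_inf.
K, CL, info = ct.mixsyn(G, w1=W1, w2=W2)
print("synthesis info (gamma, rcond):", info)

# Inspect the resulting tracking behaviour with a closed-loop step response.
T = ct.feedback(G * K, 1)
t, y = ct.step_response(T)
print("steady-state output for a unit step:", y[-1])
```
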
135

Essays on Attention Allocation and Factor Models

Scanlan, Susannah January 2024
In the first chapter of this dissertation, I explore how forecaster attention, or the degree to which new information is incorporated into forecasts, is reflected in the lower-dimensional factor representation of multivariate forecast data. When information is costly to acquire, forecasters may pay more attention to some sources of information and ignore others. How much attention they pay will determine the strength of the forecast correlation (factor) structure. Using a factor model representation, I show that a forecast made by a rationally inattentive agent will include an extra shrinkage and thresholding "attention matrix" relative to a full information benchmark, and I propose an econometric procedure to estimate it. Differences in the degree of forecaster attentiveness can explain observed differences in empirical shrinkage in professional macroeconomic forecasts relative to a consensus benchmark. Forecasters share the same reduced-form model but differ in their measured attention, and better-performing forecasters have higher measured attention (lower shrinkage) than their poorly-performing peers. Measured forecaster attention to multiple dimensions of the information space can largely be captured by a single scalar cost parameter.

In the second chapter, I propose a new class of information cost functions for the classic multivariate linear-quadratic Gaussian tracking problem, called separable spectral cost functions. These functions are defined over the eigenvalues of the prior and posterior variance matrices. Separable spectral cost functions both nest known cost functions and are consistent with the definition of Uniformly Posterior Separable cost functions, which have desirable theoretical properties. The measure of attention, and the mapping from the theoretical model of attention allocation to factor structure, proposed in the first chapter remain valid for this set of cost functions.

The third chapter is coauthored work with Professor Serena Ng. We estimate higher-frequency values of monthly macroeconomic data using different factor-based imputation methods. Monthly and weekly economic indicators are often taken to be the largest common factor estimated from high- and low-frequency data, either separately or jointly. To incorporate mixed-frequency information without directly modeling it, we target a low-frequency diffusion index that is already available, and treat the high-frequency values as missing. We impute these values using multiple factors estimated from the high-frequency data. In the empirical examples considered, static matrix completion that does not account for serial correlation in the idiosyncratic errors yields imprecise estimates of the missing values, irrespective of how the factors are estimated. Single-equation and systems-based dynamic procedures that account for serial correlation yield imputed values that are closer to the observed low-frequency ones. This is the case in the counterfactual exercise that imputes the monthly values of the consumer sentiment series before 1978, when the data was released only on a quarterly basis. It is also the case for a weekly version of the CFNAI index of economic activity that is imputed using seasonally unadjusted data. The imputed series reveals episodes of increased variability of weekly economic information that are masked by the monthly data, notably around the 2014–15 collapse in oil prices.
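
The first chapter's measurement idea — attention shows up as shrinkage of individual forecasts relative to a benchmark — can be illustrated with a stylized simulation. The data-generating process and the regression-slope proxy below are assumptions for exposition, not the chapter's factor-level estimator.

```python
# Stylized "attention as shrinkage": each forecaster shrinks toward an
# uninformative prior by an attention weight a_i; a regression slope on
# the consensus recovers a_i, and higher measured attention lines up
# with lower forecast error, as the chapter's empirics suggest.
import numpy as np

rng = np.random.default_rng(3)
n_periods, n_forecasters = 200, 30

state = rng.standard_normal(n_periods)               # latent fundamentals
consensus = state + rng.normal(0, 0.2, n_periods)    # benchmark forecast

attention = rng.uniform(0.3, 1.0, n_forecasters)     # true attention a_i
forecasts = (attention[None, :] * state[:, None]
             + rng.normal(0, 0.3, (n_periods, n_forecasters)))

# Measured attention: slope of each forecaster's forecast on the consensus.
slopes = np.array([np.polyfit(consensus, forecasts[:, i], 1)[0]
                   for i in range(n_forecasters)])
mse = ((forecasts - state[:, None]) ** 2).mean(axis=0)

print("corr(measured, true attention):", np.corrcoef(slopes, attention)[0, 1].round(2))
print("corr(measured attention, MSE): ", np.corrcoef(slopes, mse)[0, 1].round(2))
```
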
136

The wild bootstrap resampling in regression imputation algorithm with a Gaussian Mixture Model

Mat Jasin, A., Neagu, Daniel, Csenki, Attila 08 July 2018
Unsupervised learning of a finite Gaussian mixture model (FGMM) is used to learn the distribution of population data. This paper proposes the use of wild bootstrapping to create variability in the imputed data in single missing-data imputation. We compare the performance and accuracy of the proposed method in single imputation against multiple imputation from the R package Amelia II using RMSE, R-squared, MAE and MAPE. The proposed method shows better performance when compared with multiple imputation (MI), which is widely regarded as the gold standard among missing-data imputation techniques.
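
A compact reading of the method — regression imputation under a fitted Gaussian mixture, with wild-bootstrap noise added to restore variability — might look like the sketch below. The two-component bivariate setup, the use of scikit-learn's GaussianMixture, and the resampling of observed residuals are assumptions for illustration, not the paper's exact algorithm.

```python
# Regression imputation under a fitted GMM plus wild-bootstrap noise.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
n = 500
x = rng.standard_normal(n)
y = 2.0 * x + rng.normal(0, 0.5, n)
missing = rng.random(n) < 0.2                       # ~20% of y missing

data = np.column_stack([x, y])
gmm = GaussianMixture(n_components=2, random_state=0).fit(data[~missing])

def cond_mean(x0):
    # E[y | x] under the mixture: responsibility-weighted conditional
    # means of the bivariate Gaussian components.
    w, mus, covs = gmm.weights_, gmm.means_, gmm.covariances_
    resp = np.array([wk * norm.pdf(x0, m[0], np.sqrt(c[0, 0]))
                     for wk, m, c in zip(w, mus, covs)])
    resp /= resp.sum()
    cm = np.array([m[1] + c[1, 0] / c[0, 0] * (x0 - m[0])
                   for m, c in zip(mus, covs)])
    return resp @ cm

fitted_obs = np.array([cond_mean(xi) for xi in x[~missing]])
resid = y[~missing] - fitted_obs

# Wild bootstrap: flip resampled residuals with Rademacher signs so the
# imputed values carry realistic variability instead of sitting exactly
# on the regression surface.
v = rng.choice([-1.0, 1.0], size=missing.sum())
imputed = (np.array([cond_mean(xi) for xi in x[missing]])
           + v * rng.choice(resid, size=missing.sum()))
print("imputed mean/std:", imputed.mean().round(2), imputed.std().round(2))
```
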
137

Variable selection for generalized linear mixed models and non-Gaussian Genome-wide associated study data

Xu, Shuangshuang 11 June 2024
Genome-wide association studies (GWAS) aim to identify single nucleotide polymorphisms (SNPs) associated with phenotypes. The number of SNPs ranges from hundreds of thousands to millions; if p is the number of SNPs and n is the sample size, GWAS is a p >> n variable selection problem. The common approach is single marker analysis (SMA). However, since SNPs are highly correlated, SMA identifies true causal SNPs with a high false discovery rate, and it does not consider interactions between SNPs. In this dissertation, we propose novel Bayesian variable selection methods, BG2 and IBG3, for non-Gaussian GWAS data. To address the ultra-high dimensionality and the high correlation among SNPs, BG2 and IBG3 proceed in two steps: a screening step and a fine-mapping step. In the screening step, BG2 and IBG3, like SMA, fit one SNP per model and screen to obtain a subset of the most associated SNPs. In the fine-mapping step, BG2 and IBG3 consider all possible combinations of the screened candidate SNPs to find the best model; this step helps reduce false positives. In addition, IBG3 iterates these two steps to detect more SNPs with small effect sizes. In simulation studies, we compare our methods with SMA and fine-mapping methods, and with different priors for the variables, including the nonlocal prior, unit information prior, Zellner-g prior, and Zellner–Siow prior. Our methods are applied to substance use disorder (alcohol consumption and cocaine dependence), human health (breast cancer), and plant science (the number of root-like structures). / Doctor of Philosophy / Genome-wide association studies (GWAS) aim to identify genomic variants associated with a targeted phenotype, such as a disease or trait. The genomic variants we are interested in are single nucleotide polymorphisms (SNPs); an SNP is a substitution mutation in the DNA sequence. GWAS asks which SNPs are associated with the phenotype. However, the number of possible SNPs ranges from hundreds of thousands to millions. The common method for GWAS, single marker analysis (SMA), considers only one SNP's association with the phenotype at a time. In this way, SMA avoids the problem posed by the large number of SNPs and small sample size; however, it does not consider interactions between SNPs, and SNPs that are close to each other in the DNA sequence may be highly correlated, causing SMA to have a high false discovery rate. To solve these problems, this dissertation proposes two variable selection methods (BG2 and IBG3) for non-Gaussian GWAS data. Compared with SMA, BG2 and IBG3 detect true causal SNPs with a low false discovery rate, and IBG3 can detect SNPs with small effect sizes. Our methods are applied to substance use disorder (alcohol consumption and cocaine dependence), human health (breast cancer), and plant science (the number of root-like structures).
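
A toy version of the screen-then-fine-map recipe conveys the structure: single-SNP scans to shortlist candidates, then an exhaustive subset search over the shortlist. Plain logistic regression and BIC below stand in for the dissertation's Bayesian GLMM machinery, and all sizes and thresholds are illustrative.

```python
# Screen-then-fine-map on simulated binary (non-Gaussian) GWAS data.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n, p = 300, 1000
snps = rng.binomial(2, 0.3, size=(n, p)).astype(float)
logits = snps[:, 10] - 1.2 * snps[:, 500]              # two causal SNPs
pheno = rng.binomial(1, 1 / (1 + np.exp(-logits)))     # binary phenotype

# Screening step: one SNP per model, keep the most associated candidates.
pvals = np.array([
    sm.Logit(pheno, sm.add_constant(snps[:, [j]])).fit(disp=0).pvalues[1]
    for j in range(p)
])
candidates = np.argsort(pvals)[:5]

# Fine-mapping step: search all subsets of the screened candidates,
# which is what reduces the false positives a marginal scan would keep.
best_bic, best_set = np.inf, ()
for k in range(1, len(candidates) + 1):
    for subset in itertools.combinations(candidates, k):
        fit = sm.Logit(pheno, sm.add_constant(snps[:, list(subset)])).fit(disp=0)
        if fit.bic < best_bic:
            best_bic, best_set = fit.bic, subset
print("selected SNPs:", sorted(best_set))   # ideally recovers 10 and 500
```
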
138

Beam spreading of higher order Gaussian modes propagating through the Earth's atmosphere

Gilchrest, Yadira Vellon 01 April 2000
No description available.
139

A sieve problem over the Gaussian integers

Schlackow, Waldemar January 2010
Our main result is that there are infinitely many primes of the form a² + b² such that a² + 4b² has at most 5 prime factors. We prove this by first developing the theory of L-functions for Gaussian primes by using standard methods. We then give an exposition of the Siegel–Walfisz Theorem for Gaussian primes and a corresponding Prime Number Theorem for Gaussian Arithmetic Progressions. Finally, we prove the main result by using the developed theory together with Sieve Theory and specifically a weighted linear sieve result to bound the number of prime factors of a² + 4b². For the application of the sieve, we need to derive a specific version of the Bombieri–Vinogradov Theorem for Gaussian primes which, in turn, requires a suitable version of the Large Sieve. We are also able to get the number of prime factors of a² + 4b² as low as 3 if we assume the Generalised Riemann Hypothesis.
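
A quick numerical illustration of the statement (no substitute for the sieve argument, of course): scan small a and b, keep pairs where a² + b² is prime, and count the prime factors of a² + 4b² with multiplicity. The search range is arbitrary.

```python
# Empirical scan of the theorem's statement for small a, b using sympy.
from sympy import factorint, isprime

examples = []
for a in range(1, 60):
    for b in range(1, 60):
        if isprime(a * a + b * b):
            # Omega(n): number of prime factors counted with multiplicity.
            omega = sum(factorint(a * a + 4 * b * b).values())
            if omega <= 5:
                examples.append((a, b, a * a + b * b, omega))

print(len(examples), "pairs found; first five:", examples[:5])
```
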
140

Economic Statistical Design of Inverse Gaussian Distribution Control Charts

Grayson, James M. (James Morris) 08 1900
Statistical quality control (SQC) is one technique companies are using in the development of a Total Quality Management (TQM) culture. Shewhart control charts, a widely used SQC tool, rely on an underlying normal distribution of the data, but data are often skewed. The inverse Gaussian distribution is a probability distribution that is well-suited to handling skewed data. This analysis develops models and a set of tools usable by practitioners for the constrained economic statistical design of control charts for inverse Gaussian distribution process centrality and process dispersion. The use of this methodology is illustrated by the design of an x-bar chart and a V chart for an inverse Gaussian distributed process.
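
The distributional piece of such a design is easy to sketch with scipy; the economic optimization of sample size, sampling interval, and limit width is the thesis's contribution and is not reproduced here. The parameters below are illustrative, and the chart limits use the standard fact that the mean of n i.i.d. IG(μ, λ) observations is again inverse Gaussian, IG(μ, nλ).

```python
# Probability-based x-bar limits for inverse Gaussian (skewed) data.
import numpy as np
from scipy import stats

mu, lam = 10.0, 40.0      # assumed process mean and shape for IG(mu, lam)
n = 5                     # subgroup size
alpha = 0.0027            # false-alarm rate matching 3-sigma practice

# scipy's invgauss(mu_s, scale=s) has mean mu_s * s and shape parameter s,
# so IG(mu, lam) corresponds to invgauss(mu / lam, scale=lam).
xbar = stats.invgauss(mu / (n * lam), scale=n * lam)   # distribution of x-bar
lcl, ucl = xbar.ppf([alpha / 2, 1 - alpha / 2])
print(f"IG-based x-bar limits: LCL={lcl:.2f}, UCL={ucl:.2f}")

# Contrast with normal-theory Shewhart limits, which ignore the skew.
sd = np.sqrt(mu ** 3 / lam)                            # IG standard deviation
print(f"normal-theory limits:  {mu - 3 * sd / np.sqrt(n):.2f}, "
      f"{mu + 3 * sd / np.sqrt(n):.2f}")
```

The asymmetry of the IG-based limits relative to the symmetric normal-theory ones is exactly why a skew-aware design matters for skewed processes.
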
