141

Envelopes of broad band processes

Van Dyke, Jozef Frans Maria January 1981 (has links)
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Civil Engineering, 1981. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. / Bibliography: leaf 93. / by Jozef Frans Maria Van Dyke. / M.S.
142

Comparative analysis of ordinary kriging and sequential Gaussian simulation for recoverable reserve estimation at Kayelekera Mine

Gulule, Ellasy Priscilla 16 September 2016 (has links)
A research report submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Science in Engineering. Johannesburg, 2016 / Minimizing the misclassification of ore and waste during grade control is of great importance to a mining operation. This research report compares two recoverable reserve estimation techniques for ore classification at Kayelekera Uranium Mine. The research was performed on two data sets taken from the pit, with different grade distributions. The two techniques evaluated were Sequential Gaussian Simulation (SGS) and Ordinary Kriging (OK), and their estimates were compared to determine which method is more accurate. Based on profit-and-loss results and grade-tonnage curves, the difference between the techniques is very small. The similarity in the estimates was attributed to the SGS estimates being averages of 100 simulations, which turn out to be close to the Ordinary Kriging estimates, and to the closely spaced blast-hole/sample data used. Whilst OK generally produced acceptable results like SGS, it did not adequately reproduce the local variability of grades. Consequently, if variability is not a major concern, for example if large blocks were to be mined, either technique can be used and will yield similar results. / M T 2016
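As a concrete illustration of the ordinary kriging estimator and its relationship to averaged conditional simulations, here is a minimal Python sketch. The grades, locations, and exponential covariance model are invented for illustration, and a single unsampled block stands in for the full grid-based SGS workflow used in the report.

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_cov(h, sill=1.0, range_m=50.0):
    """Exponential covariance model C(h) = sill * exp(-|h| / range)."""
    return sill * np.exp(-np.abs(h) / range_m)

# Hypothetical 1-D blast-hole grades (locations in metres, grades in ppm).
x = np.array([0.0, 10.0, 25.0, 40.0, 60.0])
z = np.array([320.0, 410.0, 275.0, 150.0, 390.0])
x0 = 32.0                                   # unsampled block centre

C = exp_cov(x[:, None] - x[None, :])        # data-to-data covariances
c0 = exp_cov(x - x0)                        # data-to-target covariances

# Ordinary kriging: solve for weights under the sum-to-one (unbiasedness) constraint.
A = np.block([[C, np.ones((len(x), 1))],
              [np.ones((1, len(x))), np.zeros((1, 1))]])
b = np.append(c0, 1.0)
weights = np.linalg.solve(A, b)[:-1]
ok_estimate = weights @ z

# Single-node analogue of SGS: average many conditional Gaussian draws at x0.
# (Real SGS simulates an entire grid sequentially; this is only the one-node case.)
sk_mean = z.mean() + c0 @ np.linalg.solve(C, z - z.mean())
sk_var = exp_cov(0.0) - c0 @ np.linalg.solve(C, c0)
sims = rng.normal(sk_mean, np.sqrt(sk_var), size=100)

print(f"ordinary kriging estimate: {ok_estimate:7.1f}")
print(f"mean of 100 simulations:   {sims.mean():7.1f}")   # close to the kriged value
```

Averaging the conditional draws smooths out the simulated variability, which is one way to see why the averaged SGS estimates in the report end up resembling the kriged ones.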
143

Resource-Efficient Methods in Machine Learning

Vodrahalli, Kiran Nagesh January 2022 (has links)
In this thesis, we consider resource limitations on machine learning algorithms in a variety of settings. In the first two chapters, we study how to learn nonlinear model classes (monomials and neural nets) that are structured in various ways -- we consider sparse monomials and deep neural nets whose weight matrices are low-rank, respectively. These kinds of restrictions on the model class lead to gains in resource efficiency -- sparse and low-rank models are computationally easier to deploy and train. We prove that sparse nonlinear monomials are easier to learn (smaller sample complexity) while still remaining computationally efficient to both estimate and deploy, and we give both theoretical and empirical evidence for the benefit of novel nonlinear initialization schemes for low-rank deep networks. In both cases, we showcase a blessing of nonlinearity -- sparse monomials are in some sense easier to learn compared to a linear class, and the prior state-of-the-art linear low-rank initialization methods for deep networks are inferior to our proposed nonlinear method for initialization. To achieve our theoretical results, we often make use of the theory of Hermite polynomials -- an orthogonal function basis over the Gaussian measure. In the last chapter, we consider resource limitations in an online streaming setting. In particular, we consider how many data points from an oblivious adversarial stream we must store from one pass over the stream to output an additive approximation to the Support Vector Machine (SVM) objective, and prove stronger lower bounds on the memory complexity.
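For readers unfamiliar with the Hermite basis mentioned in the abstract, the short sketch below (not from the thesis) checks numerically that the probabilists' Hermite polynomials are orthogonal under the standard Gaussian measure, which is the property such analyses lean on.

```python
import math
import numpy as np

def hermite_prob(n, x):
    """Probabilists' Hermite polynomial He_n(x) via the recurrence
    He_0 = 1, He_1 = x, He_{k+1}(x) = x*He_k(x) - k*He_{k-1}(x)."""
    if n == 0:
        return np.ones_like(x)
    h_prev, h = np.ones_like(x), x.copy()
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

# Orthogonality under the standard Gaussian: E[He_m(X) He_n(X)] = n! if m == n, else 0.
rng = np.random.default_rng(1)
X = rng.standard_normal(2_000_000)
for m, n in [(2, 2), (3, 3), (2, 3)]:
    est = np.mean(hermite_prob(m, X) * hermite_prob(n, X))
    exact = math.factorial(n) if m == n else 0
    print(f"E[He_{m}(X) He_{n}(X)] ≈ {est:6.2f}   (exact: {exact})")
```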
144

Exact simulation algorithms with applications in queueing theory and extreme value analysis

Liu, Zhipeng January 2020 (has links)
This dissertation focuses on the development and analysis of exact simulation algorithms with applications in queueing theory and extreme value analysis. We first introduce the first algorithm that samples max_{𝑛≥0} {𝑆_𝑛 − 𝑛^α}, where 𝑆_𝑛 is a mean zero random walk and 𝑛^α with α ∈ (1/2, 1) defines a nonlinear boundary. We apply this algorithm to construct the first exact simulation method for the steady-state departure process of a 𝐺𝐼/𝐺𝐼/∞ queue where the service time distribution has infinite mean. Next, we consider the random field 𝑀(𝑡) = sup_{𝑛≥1} {−log 𝑨_𝑛 + 𝑋_𝑛(𝑡)}, 𝑡 ∈ 𝑇, for a set 𝑇 ⊂ ℝ^𝑚, where (𝑋_𝑛) is an iid sequence of centered Gaussian random fields on 𝑇 and 0 < 𝑨₁ < 𝑨₂ < … are the arrivals of a general renewal process on (0, ∞), independent of 𝑋_𝑛. In particular, a large class of max-stable random fields with Gumbel marginals has such a representation. Assume that the number of function evaluations needed to sample 𝑋_𝑛 at 𝑑 locations 𝑡₁, …, 𝑡_𝑑 ∈ 𝑇 is 𝑐(𝑑). We provide an algorithm which samples 𝑀(𝑡₁), …, 𝑀(𝑡_𝑑) with complexity 𝑂(𝑐(𝑑)^{1+𝑜(1)}), as measured in the 𝐿_𝑝 norm sense for any 𝑝 ≥ 1. Moreover, if 𝑋_𝑛 has an a.s. converging series representation, then 𝑀 can be a.s. approximated with error δ uniformly over 𝑇 and with complexity 𝑂(1/(δ log(1/δ))^{1/α}), where α relates to the Hölder continuity exponent of the process 𝑋_𝑛 (so, if 𝑋_𝑛 is Brownian motion, α = 1/2). In the final part, we introduce a class of unbiased Monte Carlo estimators for multivariate densities of max-stable fields generated by Gaussian processes. Our estimators take advantage of recent results on the exact simulation of max-stable fields combined with identities studied in the Malliavin calculus literature and ideas developed in the multilevel Monte Carlo literature. Our approach allows estimating multivariate densities of max-stable fields with precision 𝜀 at a computational cost of order 𝑂(𝜀⁻² log log log(1/𝜀)).
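To make the representation 𝑀(𝑡) = sup_{𝑛≥1} {−log 𝑨_𝑛 + 𝑋_𝑛(𝑡)} concrete, here is a naive truncated simulation in Python. It assumes unit-rate Poisson arrivals and an exponential covariance for the Gaussian fields, and it is only an approximation for intuition; the dissertation's contribution is to sample 𝑀 exactly, which this sketch does not do.

```python
import numpy as np

rng = np.random.default_rng(2)

# Naive truncation of M(t) = sup_{n>=1} { -log A_n + X_n(t) } on a grid of t values.
# A_n: arrivals of a unit-rate Poisson process (one admissible renewal process).
# X_n: iid centred Gaussian fields with an exponential covariance (Hoelder exponent
# alpha = 1/2, like Brownian motion).  Cutting the supremum off at N terms is only
# meant to make the representation itself concrete, not to reproduce the exact method.
t = np.linspace(0.0, 1.0, 50)
C = np.exp(-np.abs(t[:, None] - t[None, :]) / 0.3)
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(t)))

N = 2000
A = np.cumsum(rng.exponential(1.0, size=N))        # 0 < A_1 < A_2 < ...
M = np.full(len(t), -np.inf)
for n in range(N):
    X_n = L @ rng.standard_normal(len(t))          # one centred Gaussian field
    M = np.maximum(M, -np.log(A[n]) + X_n)

print("truncated M(t) at t = 0, 0.5, 1:", M[[0, 25, 49]].round(2))
```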
145

Continuous-time Trajectory Estimation and its Application to Sensor Calibration and Differentially Flat Systems

Johnson, Jacob C. 14 August 2023 (has links) (PDF)
State estimation is an essential part of any robotic autonomy solution. Continuous-time trajectory estimation is an attractive method because continuous trajectories can be queried at any time, allowing for fusion of multiple asynchronous, high-frequency measurement sources. This dissertation investigates various continuous-time estimation algorithms and their application to a handful of mobile robot autonomy and sensor calibration problems. In particular, we begin by analyzing and comparing two prominent continuous-time trajectory representations from the literature: Gaussian processes and splines, both on vector spaces and Lie groups. Our comparisons show that the two methods give comparable results so long as the same measurements and motion model are used. We then apply spline-based estimation to the problem of calibrating the extrinsic parameters between a camera and a GNSS receiver by fusing measurements from these two sensors and an IMU in continuous time. Next, we introduce a novel estimation technique that uses the differential flatness property of dynamic systems to model the continuous-time trajectory of a robot on its flat output space, and show that estimating in the flat output space can provide better accuracy and computation time than estimating on the configuration manifold. We use this new flatness-based estimation technique to perform pose estimation for velocity-constrained vehicles using only GNSS and IMU measurements, and show that modeling on the flat output space renders the global heading of the system observable, even when the motion of the system is insufficient to observe attitude from the measurements alone. We then show how flatness-based estimation can be used to calibrate the transformation between the dynamics coordinate frame and the coordinate frame of a sensor, along with other sensor-to-dynamics parameters, and use this calibration to improve the performance of flatness-based estimation when six-degree-of-freedom measurements are involved. Our final contribution involves nonlinear control of a quadrotor aerial vehicle. We use Lie-theoretic concepts to develop a geometric attitude controller that utilizes logarithmic rotation error and prove that this controller is globally asymptotically stable. We then demonstrate the ability of this controller to track highly aggressive quadrotor trajectories.
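The appeal of a queryable continuous-time trajectory can be pictured with a toy spline example. This is a 1-D stand-in for the Lie-group splines used in the dissertation; the knot values and sensor rates below are invented.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical sparse 1-D position knots for a robot trajectory (times in seconds).
knot_t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
knot_x = np.array([0.0, 0.4, 1.1, 1.8, 2.0])
traj = CubicSpline(knot_t, knot_x)

# A continuous-time representation can be evaluated at the (asynchronous) timestamps
# of any sensor -- say a 200 Hz IMU and a 5 Hz GNSS receiver -- without resampling.
imu_t = np.arange(0.0, 2.0, 1.0 / 200.0)
gnss_t = np.arange(0.0, 2.0, 1.0 / 5.0)
imu_pos = traj(imu_t)
imu_acc = traj(imu_t, 2)          # second derivative: what an IMU residual would use
gnss_pos = traj(gnss_t)

print(imu_pos.shape, imu_acc.shape, gnss_pos.shape)   # (400,) (400,) (10,)
```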
146

Bayesian Uncertainty Quantification while Leveraging Multiple Computer Model Runs

Walsh, Stephen A. 22 June 2023 (has links)
In the face of spatially correlated data, Gaussian process regression is a very common modeling approach. Given observational data, the kriging equations provide the best linear unbiased predictor for the mean at unobserved locations. However, when a computer model provides a complete grid of forecasted values, kriging does not apply. To develop an approach to quantify uncertainty of computer model output in this setting, we leverage information from a collection of computer model runs (e.g., historical forecast and observation pairs for tropical cyclone precipitation totals) through a Bayesian hierarchical framework. This framework allows us to combine information and account for the spatial correlation within and across computer model output. Maximum likelihood estimates and the corresponding Hessian matrices for the Gaussian process parameters are input to a Gibbs sampler, which provides posterior distributions for the parameters of interest. These samples are used to generate predictions which provide uncertainty quantification for a given computer model run (e.g., a tropical cyclone precipitation forecast). We then extend this framework using deep Gaussian processes to allow for nonstationary covariance structure, applied to multiple computer model runs from a cosmology application. We also perform sensitivity analyses to understand which parameter inputs most greatly impact cosmological computer model output. / Doctor of Philosophy / A crucial theme when analyzing spatial data is that locations that are closer together are more likely to have similar output values (for example, daily precipitation totals). For a particular event, a common modeling approach for spatial data is to observe data at numerous locations and make predictions for locations that were unobserved. In this work, we extend this within-event modeling approach by additionally learning about the uncertainty across different events. Through this extension, we are able to quantify uncertainty for a particular computer model (which may be modeling tropical cyclone precipitation, for example) that does not provide any uncertainty on its own. This framework can be utilized to quantify uncertainty across a vast array of computer model outputs where more than one event or model run has been obtained. We also study how inputting different values into a computer model can influence the values it produces.
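The Gibbs-sampling step mentioned above follows the usual alternate-between-full-conditionals recipe. The toy sketch below applies that recipe to a simple normal model with invented data, not the spatial hierarchical model of the dissertation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy Gibbs sampler for y_i ~ N(mu, sigma^2) with conjugate priors
# mu ~ N(0, tau^2) and sigma^2 ~ Inv-Gamma(a0, b0).  This only shows the
# "alternate between full conditionals" mechanic; the thesis's sampler
# targets Gaussian process parameters in a spatial hierarchical model.
y = rng.normal(2.0, 1.5, size=200)
n, a0, b0, tau2 = len(y), 2.0, 2.0, 100.0

mu, sig2 = 0.0, 1.0
draws = []
for _ in range(5000):
    # mu | sigma^2, y  ~  Normal
    post_var = 1.0 / (n / sig2 + 1.0 / tau2)
    post_mean = post_var * (y.sum() / sig2)       # prior mean is 0
    mu = rng.normal(post_mean, np.sqrt(post_var))
    # sigma^2 | mu, y  ~  Inverse-Gamma
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * np.sum((y - mu) ** 2)
    sig2 = 1.0 / rng.gamma(a_n, 1.0 / b_n)
    draws.append((mu, sig2))

draws = np.array(draws)[1000:]                    # discard burn-in
print("posterior mean of mu:     ", draws[:, 0].mean().round(3))
print("posterior mean of sigma^2:", draws[:, 1].mean().round(3))
```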
147

Adaptive Design for Global Fit of Non-stationary Surfaces

Frazier, Marian L. 03 September 2013 (has links)
No description available.
148

Statistically and Computationally Efficient Resampling and Distributionally Robust Optimization with Applications

Liu, Zhenyuan January 2024 (has links)
Uncertainty quantification via the construction of confidence regions has long been studied in statistics. While these existing methods are powerful and commonly used, some modern problems that require expensive model fitting, or that elicit convoluted interactions between statistical and computational noise, can challenge their effectiveness. To remedy some of these challenges, this thesis proposes novel approaches that not only guarantee statistical validity but also are computationally efficient. We study two main methodological directions: resampling-based methods in the first half (Chapters 2 and 3) and optimization-based methods, in particular so-called distributionally robust optimization, in the second half (Chapters 4 to 6) of this thesis. The first half focuses on the bootstrap, a common approach for statistical inference. This approach resamples data and hinges on the principle of using the resampling distribution as an approximation to the sampling distribution. However, implementing the bootstrap often demands extensive resampling and model refitting effort to wash away the Monte Carlo error, which can be computationally expensive for modern problems. Chapters 2 and 3 study bootstrap approaches that use fewer resamples while maintaining coverage validity, as well as the quantification of uncertainty for models with both statistical and Monte Carlo computation errors. In Chapter 2, we investigate bootstrap-based construction of confidence intervals using minimal resampling. We use a “cheap” bootstrap perspective based on sample-resample independence that yields valid coverage with as few as one resample, even when the problem dimension grows closely with the data size. We validate our theoretical findings and assess our approach against other benchmarks through various large-scale or high-dimensional problems. In Chapter 3, we focus on the so-called input uncertainty problem in stochastic simulation, which refers to the propagation of the statistical noise in calibrating input models to impact output accuracy. Unlike most existing literature that focuses on real-valued output quantities, we aim at constructing confidence bands for the entire output distribution function, which can contain more holistic information. We develop a new test statistic that generalizes the Kolmogorov-Smirnov statistic to construct confidence bands that account for input uncertainty on top of Monte Carlo errors via an additional asymptotic component formed by a mean-zero Gaussian process. We also demonstrate how subsampling can be used to estimate the covariance function of this Gaussian process in a computationally cheap fashion. The second part of the thesis is devoted to optimization-based methods, in particular distributionally robust optimization (DRO). Originally built to tackle the uncertainty of the underlying distribution in stochastic optimization, DRO adopts a worst-case perspective and seeks decisions that optimize under the worst-case scenario over the so-called ambiguity set that represents the distributional uncertainty. In this thesis, we turn DRO broadly into a statistical tool (still referred to as DRO) by optimizing targets of interest over the ambiguity set and transforming the coverage guarantee of the ambiguity set into confidence bounds for the targets. The flexibility of ambiguity sets advantageously allows the injection of prior distributional knowledge, so that the approach operates with less data than existing methods require.
In Chapter 4, motivated by the bias-variance tradeoff and other technical complications in conventional multivariate extreme value theory, we propose a shape-constrained DRO called orthounimodality DRO (OU-DRO) as a vehicle to incorporate natural and verifiable information into the tail. We study its statistical guarantees and tractability, especially in the bivariate setting, via a new Choquet representation in convex analysis. Chapter 5 further studies a general approach that applies to higher dimensions via sample average approximation (SAA) and importance sampling. We establish a convergence guarantee for the SAA optimal value of OU-DRO in any dimension under regularity conditions. We also argue that the resulting SAA problem is a linear program that can be solved by off-the-shelf algorithms. In Chapter 6, we study the connection between the out-of-sample errors of data-driven stochastic optimization and DRO via large deviations theory. We propose a special type of DRO formulation which uses an ambiguity set based on a Kullback-Leibler divergence smoothed by the Wasserstein or Lévy-Prokhorov distance. We relate large deviations theory to the performance of the proposed DRO and show that it achieves nearly optimal out-of-sample performance in terms of the exponential decay rate of the generalization error. Furthermore, the computation of the proposed DRO is no harder than DRO problems based on f-divergences or Wasserstein distances, which leads to a statistically optimal and computationally tractable DRO formulation.
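One way to picture the few-resample bootstrap of Chapter 2: centre an interval at the full-sample estimate and scale it by the spread of a handful of resampled estimates, using a t critical value with B degrees of freedom. The sketch below is one reading of that "sample-resample independence" construction; the data, the statistic, and the exact interval form are assumptions for illustration, not taken from the thesis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Few-resample bootstrap interval, sketched under the assumption that the interval
# is psi_hat +/- t_{B, 1-alpha/2} * S with S^2 = mean_b (psi*_b - psi_hat)^2.
data = rng.exponential(scale=2.0, size=500)     # invented sample
estimator = np.median                           # any statistic of interest
psi_hat = estimator(data)

B, alpha = 3, 0.05                              # as few as a handful of resamples
psi_star = np.array([
    estimator(rng.choice(data, size=len(data), replace=True)) for _ in range(B)
])
S = np.sqrt(np.mean((psi_star - psi_hat) ** 2))
half_width = stats.t.ppf(1 - alpha / 2, df=B) * S

print(f"95% interval for the median: ({psi_hat - half_width:.3f}, {psi_hat + half_width:.3f})")
```

The point of such a construction is that the Monte Carlo cost is only B model refits, instead of the hundreds or thousands a conventional bootstrap would require.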
149

The GW Approximation and Bethe-Salpeter Equation for Molecules and Extended Systems

Bintrim, Sylvia Joy January 2024 (has links)
In the first two chapters, we provide a new way to think about the Green’s function-based GW approximation and Bethe-Salpeter equation (BSE). The former is the most popular beyond-mean-field method for band structures of solids and an increasingly popular one for ionization potentials and electron affinities of molecules. The latter is widely used to compute neutral excitation energies and spectra for solids as well as, increasingly, molecules. Inspired by quantum chemistry approaches, we obtain a computational scaling reduction and avoid approximating certain dynamical quantities. The new formalism suggests further improvements to the GW and BSE methods. In chapters four and five, we derive and test a cheap, approximate version of the GW and BSE for large molecules and then extend the strategy to periodic systems. In chapter six, we assess another Green’s function-based method, the constrained random phase approximation with exact diagonalization, usually applied to solids. This method allows one to treat electron correlation within an active space of important orbitals while also including some of the external orbital space effects. In chapters seven and eight, we implement the BSE in the PySCF software package for periodic systems using Gaussian density fitting and then apply it to a challenging system, the superatomic solid Re₆Se₈Cl₂.
150

Real-time whole organism neural recording with neural identification in freely behaving Caenorhabditis elegans

Yan, Wenwei January 2024 (has links)
How does the brain integrate information from individual neurons? One efficient way to investigate systems neuroscience is to record the whole brain down to the single-neuron level. Caenorhabditis elegans, a 1 mm long, transparent nematode species, is ideally suited as a starting point. Every C. elegans hermaphrodite has a fixed set of 302 neurons, and all neuron connections have been fully characterized by electron microscopy. Despite its small and simple nervous system, C. elegans exhibits a wide range of behaviors, ranging from foraging and sleep to sexual activity. Recently, Yemini et al. genetically engineered a C. elegans strain (NeuroPAL) in which each neuron can be uniquely identified by its color code. This greatly facilitates comparison of neural recordings with the literature as well as with the underlying connectomics. However, it is a daunting task to record the whole nervous system of a freely moving worm at cellular resolution. The imaging system needs to achieve high 3D imaging speed (10+ volumes per second) to avoid motion blur while also maintaining single-cell resolution and a reasonable field of view. Over the past decade, light sheet microscopy has emerged as a promising technique with great spatial resolution and reduced phototoxicity. Swept, confocally-aligned, planar excitation (SCAPE) microscopy, a single-objective light sheet modality developed by the Hillman lab, has the advantages of an open-top geometry and fast 3D imaging speed. In this proposal, I detail my work towards imaging and tracking the whole C. elegans nervous system at cellular resolution using SCAPE and the NeuroPAL strain. The first chapter introduces fundamental concepts that link the microscopy field with the C. elegans community. The second chapter involves building a new SCAPE system that incorporates new optical components and a high-speed intensified camera. The goal is to construct a workhorse system capable of capturing real-time volumetric recordings with improved resolution; the improvements stem from an improved optical design as well as careful selection of magnification and scan parameters. While the new imaging system is capable of capturing high-speed volumetric images of freely moving NeuroPAL worms with single-cell resolution, there is no suitable neuron tracking algorithm to robustly extract neural activities from the data; indeed, the density of the neurons as well as the vigorous movement of the worm is unprecedented. Chapters 3 and 4 constitute two parts of a broader neuron tracking algorithm. In Chapter 3, I introduce an iterative neural-network-based algorithm for unsupervised 3D image registration. In Chapter 4, a Gaussian Mixture Model based algorithm is proposed that models the raw data as a mixture of 3D Gaussian functions. Chapter 5 is the finale, where I integrate all proposed imaging and tracking methods to record neural activity from the whole nervous system in freely behaving NeuroPAL worms. Three applications are demonstrated, spanning from whole nervous system recording to investigation of class-dependent ventral nerve cord motor neurons during locomotion. In Chapter 6, I report progress towards building the next-generation SCAPE with higher resolution/collection efficiency. A custom-designed zero-working-distance objective is demonstrated, which uses an off-the-shelf objective with a novel refractive-index-matched material to achieve high collection numerical aperture without sacrificing field of view (FOV).
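The Gaussian-mixture idea of Chapter 4 can be pictured with a toy example: treat detected fluorescence points as samples from a mixture of 3-D Gaussians, one component per neuron. The sketch below uses invented neuron centres and scikit-learn's GaussianMixture, and it ignores the color channels and worm deformation the thesis has to handle.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

# Three hypothetical neuron centres (x, y, z in micrometres) and noisy detections
# scattered around them, standing in for segmented fluorescence voxels.
centres = np.array([[10.0, 5.0, 2.0], [12.0, 7.0, 3.0], [15.0, 4.0, 2.5]])
points = np.vstack([c + 0.3 * rng.standard_normal((200, 3)) for c in centres])

# Fit one 3-D Gaussian component per putative neuron.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(points)

print("recovered centres (one row per mixture component):")
print(np.round(gmm.means_, 2))
```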
