141

Comparative analysis of ordinary kriging and sequential Gaussian simulation for recoverable reserve estimation at Kayelekera Mine

Gulule, Ellasy Priscilla 16 September 2016 (has links)
A research report submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Science in Engineering. Johannesburg, 2016 / Minimizing the misclassification of ore and waste during grade control is of great importance to a mining operation. This research report compares two recoverable reserve estimation techniques for ore classification at Kayelekera Uranium Mine. The research was performed on two data sets taken from the pit with different grade distributions. The two techniques evaluated were Sequential Gaussian Simulation (SGS) and Ordinary Kriging (OK), and their estimates were compared to investigate which method gives more accurate results. Based on profit-and-loss results and grade-tonnage curves, the difference between the techniques is very small. The similarity in the estimates arises because the SGS estimates were averages of 100 simulations, which converge towards the Ordinary Kriging estimate, and because the blast hole/sample data used were closely spaced. Whilst OK generally produced results as acceptable as those of SGS, it did not adequately reproduce the local variability of grades. Consequently, if variability is not a major concern, for example when large blocks are to be mined, either technique can be used and will yield similar results. / M T 2016
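
To make the observation above concrete, the following is a minimal sketch (not the thesis code) contrasting an ordinary kriging estimate with the average of conditional Gaussian simulations on synthetic one-dimensional grade data; the exponential covariance model, its parameters, and the data are illustrative assumptions.

```python
# Sketch: ordinary kriging vs. the average of 100 conditional Gaussian simulations.
import numpy as np

rng = np.random.default_rng(0)

def exp_cov(a, b, sill=1.0, rng_par=50.0):
    """Exponential covariance between two coordinate arrays."""
    d = np.abs(a[:, None] - b[None, :])
    return sill * np.exp(-d / rng_par)

# Synthetic blast-hole-like data: locations and observed grades.
n = 25
x_obs = np.sort(rng.uniform(0, 300, n))
z_obs = 0.8 + np.linalg.cholesky(exp_cov(x_obs, x_obs) + 1e-8 * np.eye(n)) @ rng.standard_normal(n)
x_new = np.array([150.0])                         # block / node to estimate

# Ordinary kriging: solve the OK system with a Lagrange multiplier.
A = np.zeros((n + 1, n + 1))
A[:n, :n] = exp_cov(x_obs, x_obs)
A[:n, n] = 1.0
A[n, :n] = 1.0
b = np.append(exp_cov(x_obs, x_new).ravel(), 1.0)
weights = np.linalg.solve(A, b)[:n]
z_ok = weights @ z_obs

# Conditional Gaussian simulation (known mean, i.e. simple-kriging conditioning),
# repeated 100 times and averaged -- mirroring the averaging discussed above.
m = z_obs.mean()
C_oo = exp_cov(x_obs, x_obs) + 1e-8 * np.eye(n)
C_no = exp_cov(x_new, x_obs)
C_nn = exp_cov(x_new, x_new)
K = C_no @ np.linalg.inv(C_oo)
cond_mean = m + K @ (z_obs - m)
cond_cov = C_nn - K @ C_no.T
sims = cond_mean + np.linalg.cholesky(cond_cov + 1e-10) @ rng.standard_normal((1, 100))
print(f"OK estimate: {z_ok:.3f}, mean of 100 simulations: {sims.mean():.3f}")
```

The averaged simulations reproduce the kriging-type estimate while the individual simulations retain the local variability that a smoothed kriging surface loses.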
142

Resource-Efficient Methods in Machine Learning

Vodrahalli, Kiran Nagesh January 2022 (has links)
In this thesis, we consider resource limitations on machine learning algorithms in a variety of settings. In the first two chapters, we study how to learn nonlinear model classes (monomials and neural nets) which are structured in various ways -- we consider sparse monomials and deep neural nets whose weight-matrices are low-rank respectively. These kinds of restrictions on the model class lead to gains in resource efficiency -- sparse and low-rank models are computationally easier to deploy and train. We prove that sparse nonlinear monomials are easier to learn (smaller sample complexity) while still remaining computationally efficient to both estimate and deploy, and we give both theoretical and empirical evidence for the benefit of novel nonlinear initialization schemes for low-rank deep networks. In both cases, we showcase a blessing of nonlinearity -- sparse monomials are in some sense easier to learn compared to a linear class, and the prior state-of-the-art linear low-rank initialization methods for deep networks are inferior to our proposed nonlinear method for initialization. To achieve our theoretical results, we often make use of the theory of Hermite polynomials -- an orthogonal function basis over the Gaussian measure. In the last chapter, we consider resource limitations in an online streaming setting. In particular, we consider how many data points from an oblivious adversarial stream we must store from one pass over the stream to output an additive approximation to the Support Vector Machine (SVM) objective, and prove stronger lower bounds on the memory complexity.
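
The parameter savings behind low-rank weight matrices can be illustrated with a short sketch. This is not the thesis's initialization scheme, only a truncated-SVD compression of an arbitrary dense layer; all sizes are chosen for illustration, and trained networks typically have faster-decaying spectra than the random matrix used here.

```python
# Illustrative sketch: rank-r factorization of a dense layer via truncated SVD.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 512, 512, 32

W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)   # a dense layer weight
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]            # d_out x r factor
B = Vt[:r, :]                   # r x d_in factor

full_params = W.size
lowrank_params = A.size + B.size
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"params: {full_params} -> {lowrank_params} "
      f"({lowrank_params / full_params:.1%}), relative error {rel_err:.2f}")
```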
143

Exact simulation algorithms with applications in queueing theory and extreme value analysis

Liu, Zhipeng January 2020 (has links)
This dissertation focuses on the development and analysis of exact simulation algorithms with applications in queueing theory and extreme value analysis. We first introduce the first algorithm that samples max_{n≥0} {S_n − n^α}, where S_n is a mean-zero random walk and n^α with α ∈ (1/2, 1) defines a nonlinear boundary. We apply this algorithm to construct the first exact simulation method for the steady-state departure process of a GI/GI/∞ queue where the service time distribution has infinite mean. Next, we consider the random field M(t) = sup_{n≥1} {−log A_n + X_n(t)}, t ∈ T, for a set T ⊂ ℝ^m, where (X_n) is an iid sequence of centered Gaussian random fields on T and 0 < A_1 < A_2 < ... are the arrivals of a general renewal process on (0, ∞), independent of (X_n). In particular, a large class of max-stable random fields with Gumbel marginals has such a representation. Assume that the number of function evaluations needed to sample X_n at d locations t_1, ..., t_d ∈ T is c(d). We provide an algorithm which samples M(t_1), ..., M(t_d) with complexity O(c(d)^{1+o(1)}), as measured in the L_p norm sense for any p ≥ 1. Moreover, if X_n has an a.s. converging series representation, then M can be a.s. approximated with error δ uniformly over T and with complexity O(1/(δ log(1/δ))^{1/α}), where α relates to the Hölder continuity exponent of the process X_n (so, if X_n is Brownian motion, α = 1/2). In the final part, we introduce a class of unbiased Monte Carlo estimators for multivariate densities of max-stable fields generated by Gaussian processes. Our estimators take advantage of recent results on the exact simulation of max-stable fields combined with identities studied in the Malliavin calculus literature and ideas developed in the multilevel Monte Carlo literature. Our approach allows estimating multivariate densities of max-stable fields with precision ε at a computational cost of order O(ε^{−2} log log log(1/ε)).
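
The defining representation of M can be rendered with a naive truncated simulation. The dissertation's contribution is an exact algorithm that removes the truncation bias, so the sketch below (arbitrary kernel, grid, and truncation level) only illustrates the object being sampled, not the exact method.

```python
# Naive truncated Monte Carlo rendering of M(t) = sup_n { -log A_n + X_n(t) } on a grid.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 101)                 # locations t_1, ..., t_d in T
cov = np.exp(-np.abs(t[:, None] - t[None, :])) # a stationary Gaussian kernel
L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(t)))

N = 10_000                                     # truncation level (source of bias)
A = np.cumsum(rng.exponential(size=N))         # renewal (here Poisson) arrivals
X = (L @ rng.standard_normal((len(t), N))).T   # iid centered Gaussian fields X_n
M = np.max(-np.log(A)[:, None] + X, axis=0)    # pointwise supremum over n

print("approximate M(t) at t = 0, 0.5, 1:", M[[0, 50, 100]].round(3))
```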
144

Continuous-time Trajectory Estimation and its Application to Sensor Calibration and Differentially Flat Systems

Johnson, Jacob C. 14 August 2023 (has links) (PDF)
State estimation is an essential part of any robotic autonomy solution. Continuous-time trajectory estimation is an attractive method because continuous trajectories can be queried at any time, allowing for fusion of multiple asynchronous, high-frequency measurement sources. This dissertation investigates various continuous-time estimation algorithms and their application to a handful of mobile robot autonomy and sensor calibration problems. In particular, we begin by analyzing and comparing two prominent continuous-time trajectory representations from the literature: Gaussian processes and splines, both on vector spaces and Lie groups. Our comparisons show that the two methods give comparable results so long as the same measurements and motion model are used. We then apply spline-based estimation to the problem of calibrating the extrinsic parameters between a camera and a GNSS receiver by fusing measurements from these two sensors and an IMU in continuous time. Next, we introduce a novel estimation technique that uses the differential flatness property of dynamic systems to model the continuous-time trajectory of a robot on its flat output space, and show that estimating in the flat output space can provide better accuracy and computation time than estimating on the configuration manifold. We use this new flatness-based estimation technique to perform pose estimation for velocity-constrained vehicles using only GNSS and IMU and show that modeling on the flat output space renders the global heading of the system observable, even when the motion of the system is insufficient to observe attitude from the measurements alone. We then show how flatness-based estimation can be used to calibrate the transformation between the dynamics coordinate frame and the coordinate frame of a sensor, along with other sensor-to-dynamics parameters, and use this calibration to improve the performance of flatness-based estimation when six-degree-of-freedom measurements are involved. Our final contribution involves nonlinear control of a quadrotor aerial vehicle. We use Lie theoretic concepts to develop a geometric attitude controller that utilizes logarithmic rotation error and prove that this controller is globally asymptotically stable. We then demonstrate the ability of this controller to track highly aggressive quadrotor trajectories.
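
A minimal vector-space illustration of the "query at any time" property of continuous-time representations: a smoothing cubic spline is fit to asynchronous, noisy 1-D positions and evaluated (and differentiated) at arbitrary query times. The dissertation's Lie-group splines, IMU fusion, and calibration are not reproduced here; the trajectory, noise level, and smoothing factor are assumptions.

```python
# Sketch: continuous-time (spline) trajectory from asynchronous noisy samples.
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(2)
t_meas = np.sort(rng.uniform(0.0, 10.0, 80))          # asynchronous timestamps
truth = lambda t: np.sin(t) + 0.1 * t**2              # unknown true trajectory
sigma = 0.05
y_meas = truth(t_meas) + sigma * rng.standard_normal(t_meas.size)

tck = splrep(t_meas, y_meas, k=3, s=t_meas.size * sigma**2)   # smoothing cubic spline

t_query = np.array([1.234, 5.0, 9.876])               # arbitrary query times
pos = splev(t_query, tck)                             # position at query times
vel = splev(t_query, tck, der=1)                      # analytic first derivative
print("position error:", np.abs(pos - truth(t_query)).round(3))
print("velocity estimate:", np.round(vel, 3))
```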
145

Bayesian Uncertainty Quantification while Leveraging Multiple Computer Model Runs

Walsh, Stephen A. 22 June 2023 (has links)
In the face of spatially correlated data, Gaussian process regression is a very common modeling approach. Given observational data, kriging equations will provide the best linear unbiased predictor for the mean at unobserved locations. However, when a computer model provides a complete grid of forecasted values, kriging does not apply. To develop an approach to quantify uncertainty of computer model output in this setting, we leverage information from a collection of computer model runs (e.g., historical forecast and observation pairs for tropical cyclone precipitation totals) through a Bayesian hierarchical framework. This framework allows us to combine information and account for the spatial correlation within and across computer model output. Maximum likelihood estimates and the corresponding Hessian matrices for the Gaussian process parameters are input to a Gibbs sampler, which provides posterior distributions for the parameters of interest. These samples are used to generate predictions which provide uncertainty quantification for a given computer model run (e.g., a tropical cyclone precipitation forecast). We then extend this framework using deep Gaussian processes to allow for nonstationary covariance structure, applied to multiple computer model runs from a cosmology application. We also perform sensitivity analyses to understand which parameter inputs most greatly impact cosmological computer model output. / Doctor of Philosophy / A crucial theme when analyzing spatial data is that locations that are closer together are more likely to have similar output values (for example, daily precipitation totals). For a particular event, a common modeling approach for spatial data is to observe data at numerous locations and make predictions for locations that were not observed. In this work, we extend this within-event modeling approach by additionally learning about the uncertainty across different events. Through this extension, we are able to quantify uncertainty for a particular computer model (which may be modeling tropical cyclone precipitation, for example) that does not provide any uncertainty on its own. This framework can be utilized to quantify uncertainty across a vast array of computer model outputs where more than one event or model run has been obtained. We also study how inputting different values into a computer model can influence the values it produces.
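
A toy, non-spatial Gibbs sampler for a hierarchical normal model conveys how a Bayesian hierarchy pools information across events. The actual framework above works with Gaussian-process parameters, their MLEs, and Hessians; everything below (priors, data sizes, conjugate model) is an illustrative assumption.

```python
# Toy Gibbs sampler: pooling several "events" through a hierarchical normal model.
import numpy as np

rng = np.random.default_rng(3)
m, n_i = 6, 40                                    # events, observations per event
true_theta = rng.normal(2.0, 0.7, m)              # per-event means
y = true_theta[:, None] + 0.5 * rng.standard_normal((m, n_i))

n_iter = 2000
theta, mu, tau2, sig2 = y.mean(axis=1), y.mean(), 1.0, 1.0
a0, b0 = 2.0, 1.0                                 # weak inverse-gamma priors
draws_mu = np.empty(n_iter)

for it in range(n_iter):
    # Per-event means: conjugate normal update.
    prec = n_i / sig2 + 1.0 / tau2
    mean = (n_i * y.mean(axis=1) / sig2 + mu / tau2) / prec
    theta = mean + rng.standard_normal(m) / np.sqrt(prec)
    # Hyper-mean (flat prior) and between/within variances (inverse gamma).
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / m))
    tau2 = 1.0 / rng.gamma(a0 + m / 2, 1.0 / (b0 + 0.5 * np.sum((theta - mu) ** 2)))
    sig2 = 1.0 / rng.gamma(a0 + y.size / 2, 1.0 / (b0 + 0.5 * np.sum((y - theta[:, None]) ** 2)))
    draws_mu[it] = mu

print("posterior mean of hyper-mean:", draws_mu[500:].mean().round(3))
```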
146

Adaptive Design for Global Fit of Non-stationary Surfaces

Frazier, Marian L. 03 September 2013 (has links)
No description available.
147

Statistically and Computationally Efficient Resampling and Distributionally Robust Optimization with Applications

Liu, Zhenyuan January 2024 (has links)
Uncertainty quantification via construction of confidence regions has been long studied in statistics. While these existing methods are powerful and commonly used, some modern problems that require expensive model fitting, or those that elicit convoluted interactions between statistical and computational noises, could challenge the effectiveness of these methods. To remedy some of these challenges, this thesis proposes novel approaches that not only guarantee statistical validity but also are computationally efficient. We study two main methodological directions: resampling-based methods in the first half (Chapters 2 and 3) and optimization-based methods, in particular so-called distributionally robust optimization, in the second half (Chapters 4 to 6) of this thesis. The first half focuses on the bootstrap, a common approach for statistical inference. This approach resamples data and hinges on the principle of using the resampling distribution as an approximation to the sampling distribution. However, implementing the bootstrap often demands extensive resampling and model refitting effort to wash away the Monte Carlo error, which can be computationally expensive for modern problems. Chapters 2 and 3 study bootstrap approaches using fewer resamples while maintaining coverage validity, and also the quantification of uncertainty for models with both statistical and Monte Carlo computation errors. In Chapter 2, we investigate bootstrap-based construction of confidence intervals using minimal resampling. We use a “cheap” bootstrap perspective based on sample-resample independence that yields valid coverage with as small as one resample, even when the problem dimension grows closely with the data size. We validate our theoretical findings and assess our approach against other benchmarks through various large-scale or high-dimensional problems. In Chapter 3, we focus on the so-called input uncertainty problem in stochastic simulation, which refers to the propagation of the statistical noise in calibrating input models to impact output accuracy. Unlike most existing literature that focuses on real-valued output quantities, we aim at constructing confidence bands for the entire output distribution function that can contain more holistic information. We develop a new test statistic that generalizes the Kolmogorov-Smirnov statistic to construct confidence bands that account for input uncertainty on top of Monte Carlo errors via an additional asymptotic component formed by a mean-zero Gaussian process. We also demonstrate how subsampling can be used to efficiently estimate the covariance function of this Gaussian process in a computationally cheap fashion. The second part of the thesis is devoted to optimization-based methods, in particular distributionally robust optimization (DRO). Originally built to tackle the uncertainty of the underlying distribution in a stochastic optimization, DRO adopts a worst-case perspective and seeks decisions that optimize under the worst-case scenario, over the so-called ambiguity set that represents the distributional uncertainty. In this thesis, we turn DRO broadly into a statistical tool (still referred to as DRO) by optimizing targets of interest over the ambiguity set and transforming the coverage guarantee of the ambiguity set into confidence bounds for targets. The flexibility of ambiguity sets advantageously allows the injection of prior distribution knowledge that operates with less data requirement than existing methods. 
In Chapter 4, motivated by the bias-variance tradeoff and other technical complications in conventional multivariate extreme value theory, we propose a shape-constrained DRO called orthounimodality DRO (OU-DRO) as a vehicle to incorporate natural and verifiable information into the tail. We study its statistical guarantees and tractability, especially in the bivariate setting, via a new Choquet representation in convex analysis. Chapter 5 further studies a general approach that applies to higher dimensions via sample average approximation (SAA) and importance sampling. We establish a convergence guarantee for the SAA optimal value of OU-DRO in any dimension under regularity conditions. We also argue that the resulting SAA problem is a linear program that can be solved by off-the-shelf algorithms. In Chapter 6, we study the connection between the out-of-sample errors of data-driven stochastic optimization and DRO via large deviations theory. We propose a special type of DRO formulation which uses an ambiguity set based on a Kullback-Leibler divergence smoothed by the Wasserstein or Lévy-Prokhorov distance. We relate large deviations theory to the performance of the proposed DRO and show it achieves nearly optimal out-of-sample performance in terms of the exponential decay rate of the generalization error. Furthermore, the computation of the proposed DRO is no harder than that of DRO problems based on f-divergences or Wasserstein distances, which leads to a statistically optimal and computationally tractable DRO formulation.
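
A sketch of a cheap-bootstrap-style interval in the spirit of Chapter 2 is given below, assuming the small-B construction with a t critical value on B degrees of freedom; the exact statement and its validity conditions are in the thesis, and the statistic, data, and choice of B here are illustrative.

```python
# Sketch: confidence interval from only a handful of bootstrap resamples.
import numpy as np
from scipy.stats import t as t_dist

rng = np.random.default_rng(4)
data = rng.lognormal(size=500)
psi_hat = np.median(data)                      # target statistic on the full data

B, alpha = 3, 0.05                             # as few as a handful of resamples
psi_star = np.array([
    np.median(rng.choice(data, size=data.size, replace=True)) for _ in range(B)
])
S = np.sqrt(np.mean((psi_star - psi_hat) ** 2))
q = t_dist.ppf(1 - alpha / 2, df=B)            # t quantile absorbs the small-B noise
print(f"95% interval: [{psi_hat - q * S:.3f}, {psi_hat + q * S:.3f}]")
```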
148

Signatures of Gaussian processes and SLE curves

Boedihardjo, Horatio S. January 2014 (has links)
This thesis contains three main results. The first result states that, outside a slim set associated with a Gaussian process with long time memory, paths can be canonically enhanced to geometric rough paths. This allows us to apply the powerful Universal Limit Theorem in rough path theory to study the quasi-sure properties of the solutions of stochastic differential equations driven by Gaussian processes. The key idea is to use a norm, invented by B. Hambly and T. Lyons, which dominates the p-variation distance, together with the fact that the roughness of a Gaussian sample path is evenly distributed over time. The second result is the almost-sure uniqueness of the signatures of SLE kappa curves for kappa less than or equal to 4. We prove this by first expressing the Fourier transform of the winding angle of the SLE curve in terms of its signature. This formula also gives us a relation between the expected signature and the n-point functions studied in the SLE and Statistical Physics literature. It is important that the Chordal SLE measure in D is supported on simple curves from -1 to 1 for kappa between 0 and 4, and hence the image of the curve determines the curve up to reparametrisation. The third result is a formula for the expected signature of Gaussian processes generated by strictly regular kernels. The idea is to approximate the expected signature of this class of processes by the expected signature of their piecewise linear approximations. This reduces the problem to computing the moments of Gaussian random variables, which can be done using Wick's formula.
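
The signature itself can be made concrete with a short sketch: the level-2 truncated signature of a piecewise-linear path, assembled segment by segment with Chen's identity. This is only a toy computation on an arbitrary planar path, not the rough-path enhancement of Gaussian or SLE sample paths studied in the thesis.

```python
# Sketch: level-1 and level-2 signature of a piecewise-linear path via Chen's identity.
import numpy as np

def signature_level2(path):
    """Level-1 and level-2 signature of a piecewise-linear path (n x d array)."""
    d = path.shape[1]
    S1, S2 = np.zeros(d), np.zeros((d, d))
    for delta in np.diff(path, axis=0):                 # increment of each segment
        # Chen's identity: concatenate the path so far with one linear segment,
        # whose own level-2 signature is 0.5 * (delta outer delta).
        S2 += 0.5 * np.outer(delta, delta) + np.outer(S1, delta)
        S1 += delta
    return S1, S2

# A toy planar path; the antisymmetric part of the level-2 term is its Levy area.
path = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
S1, S2 = signature_level2(path)
print("level 1 (total increment):", S1)
print("Levy area:", 0.5 * (S2[0, 1] - S2[1, 0]))
```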
149

Calibration and Model Risk in the Pricing of Exotic Options Under Pure-Jump Lévy Dynamics

Mboussa Anga, Gael 12 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2015 / AFRIKAANSE OPSOMMING : Die groeiende belangstelling in kalibrering en modelrisiko is ’n redelik resente ontwikkeling in finansiële wiskunde. Hierdie proefskrif fokusseer op hierdie sake, veral in verband met die prysbepaling van vanielje- en eksotiese opsies, en vergelyk die prestasie van verskeie Lévy modelle. ’n Nuwe metode om modelrisiko te meet word ook voorgestel (hoofstuk 6). Ons kalibreer eers verskeie Lévy modelle aan die log-opbrengs van die S&P500 indeks. Statistiese toetse en grafiese voorstellings toon albei aan dat suiwer sprongmodelle (VG, NIG en CGMY) die verdeling van die opbrengs beter beskryf as die Black-Scholes model. Daarna kalibreer ons hierdie vier modelle aan S&P500 indeks opsie data en ook aan "CGMY-wêreld" data (’n gesimuleerde wêreld wat beskryf word deur die CGMY-model) met behulp van die wortel van gemiddelde kwadraat fout. Die CGMY model vaar beter as die VG, NIG en Black-Scholes modelle. Ons waarneem ook ’n effense verskil tussen die nuwe parameters van die CGMY model en sy wisselende parameters, ten spyte van die feit dat die CGMY model gekalibreer is aan die "CGMY-wêreld" data. Versperrings- en terugblik opsies word daarna geprys, deur gebruik te maak van die gekalibreerde parameters vir ons modelle. Hierdie pryse word dan vergelyk met die "ware" pryse (bereken met die ware parameters van die "CGMY-wêreld"), en ’n beduidende verskil tussen die modelpryse en die "ware" pryse word waargeneem. Ons eindig met ’n poging om hierdie modelrisiko te kwantiseer. / ENGLISH ABSTRACT : The growing interest in calibration and model risk is a fairly recent development in financial mathematics. This thesis focuses on these issues, particularly in relation to the pricing of vanilla and exotic options, and compares the performance of various Lévy models. A new method to measure model risk is also proposed (Chapter 6). We first calibrate several Lévy models to the log-returns of S&P500 index data. Statistical tests and graphical representations both show that the pure jump models (VG, NIG and CGMY) describe the distribution of the returns better than the Black-Scholes model. We then calibrate these four models to S&P500 index option data and also to "CGMY-world" data (a simulated world described by the CGMY model) using the root mean square error. The CGMY model outperforms the VG, NIG and Black-Scholes models. We also observe a slight difference between the new parameters of the CGMY model and its varying parameters, despite the fact that the CGMY model is calibrated to the "CGMY-world" data. Barrier and lookback options are then priced, making use of the calibrated parameters for our models. These prices are then compared with the "real" prices (calculated with the true parameters of the "CGMY world"), and a significant difference between the model prices and the "real" prices is observed. We end with an attempt to quantify this model risk.
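
The root-mean-square-error calibration step can be sketched as follows; a one-parameter Black-Scholes pricer stands in for the VG/NIG/CGMY pricers so the example stays self-contained, and the "market" quotes are synthetic. The design point is only that calibration reduces to minimizing a pricing-error objective over model parameters.

```python
# Sketch: calibrate a pricer to option quotes by minimizing root mean square error.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price (stand-in pricer for this sketch)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

S0, r, T = 100.0, 0.02, 0.5
strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
market = bs_call(S0, strikes, T, r, 0.25) + np.random.default_rng(5).normal(0, 0.05, 5)

def rmse(sigma):
    return np.sqrt(np.mean((bs_call(S0, strikes, T, r, sigma) - market) ** 2))

res = minimize_scalar(rmse, bounds=(0.01, 1.0), method="bounded")
print(f"calibrated sigma: {res.x:.4f}, RMSE: {res.fun:.4f}")
```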
150

Comparative study of a time diversity scheme applied to G3 systems for narrowband power-line communications

Rivard, Yves-François January 2016 (has links)
A dissertation submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Engineering (Electrical). Johannesburg, 2016 / Power-line communications can be used for the transfer of data across electrical networks in applications such as automatic meter reading in smart grid technology. As the power-line channel is harsh and plagued with non-Gaussian noise, robust forward error correction schemes are required. This research is a comparative study in which a Luby transform code is concatenated with power-line communication systems provided by an up-to-date standard, named G3 PLC, published by Électricité Réseau Distribution France. Decoding using both Gaussian elimination and belief propagation is implemented to investigate and characterise their behaviour through computer simulations in MATLAB. Results show that a bit error rate performance improvement is achievable under non-worst-case channel conditions using a Gaussian elimination decoder. An adaptive system is thus recommended which decodes using Gaussian elimination and which has the appropriate data rate. The added complexity can be well tolerated, especially on the receiver side in automatic meter reading systems, due to the network structure being built around a centralised agent which possesses more resources. / MT2017
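
A sketch of a Luby-transform-style encoder with a GF(2) Gaussian-elimination decoder makes the decoding alternative concrete. The uniform degree distribution below is a crude stand-in for the robust soliton distribution, and no channel noise, erasures, or G3 PLC framing is modelled; block sizes are arbitrary. (The thesis works in MATLAB; Python is used here only to keep the sketch self-contained.)

```python
# Sketch: LT-style encoding and Gauss-Jordan decoding over GF(2).
import numpy as np

rng = np.random.default_rng(6)
K, N = 32, 64                                   # source bits, received encoded symbols
source = rng.integers(0, 2, K, dtype=np.uint8)

# Encode: each output symbol is the XOR of a random subset of source bits.
A = np.zeros((N, K), dtype=np.uint8)
for i in range(N):
    deg = int(rng.integers(1, 9))               # crude stand-in for robust soliton degrees
    A[i, rng.choice(K, size=deg, replace=False)] = 1
y = (A @ source) % 2

def gf2_solve(A, y):
    """Gauss-Jordan elimination over GF(2) for A x = y (recoverable iff full column rank)."""
    A, y = A.copy(), y.copy()
    row = 0
    for col in range(A.shape[1]):
        piv = next((r for r in range(row, A.shape[0]) if A[r, col]), None)
        if piv is None:
            return None                         # not enough independent symbols received
        A[[row, piv]] = A[[piv, row]]
        y[[row, piv]] = y[[piv, row]]
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                y[r] ^= y[row]
        row += 1
    return y[:A.shape[1]]

decoded = gf2_solve(A, y)
print("decoded correctly:", decoded is not None and np.array_equal(decoded, source))
```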
