301

Bayesian Optimization for Engineering Design and Quality Control of Manufacturing Systems

AlBahar, Areej Ahmad 14 April 2022 (has links)
Manufacturing systems are usually nonlinear, nonstationary, highly corrupted with outliers, and oftentimes constrained by physical laws. Modeling and approximation of their underlying response surface functions are extremely challenging. Bayesian optimization is a powerful statistical tool, based on Bayes' rule, used to optimize and model these expensive-to-evaluate functions. Bayesian optimization comprises two components: a surrogate model, often a Gaussian process, and an acquisition function, often the expected improvement. The Gaussian process, known for its outstanding modeling and uncertainty quantification capabilities, is used to represent the underlying response surface function, while the expected improvement is used to select the next point to be evaluated by trading off exploitation and exploration. Although Bayesian optimization has been extensively used in optimizing unknown and expensive-to-evaluate functions and in hyperparameter tuning of deep learning models, modeling highly outlier-corrupted, nonstationary, and stress-induced response surface functions hinders the use of conventional Bayesian optimization models in manufacturing systems. To overcome these limitations, we propose a series of systematic methodologies to improve Bayesian optimization for engineering design and quality control of manufacturing systems. Specifically, the contributions of this dissertation can be summarized as follows. 1. A novel asymmetric robust kernel function, called AEN-RBF, is proposed to model highly outlier-corrupted functions. Two new hyperparameters are introduced to improve the flexibility and robustness of the Gaussian process model. 2. A nonstationary surrogate model that utilizes deep multi-layer Gaussian processes, called MGP-CBO, is developed to improve the modeling of complex anisotropic constrained nonstationary functions. 3. A Stress-Aware Optimal Actuator Placement framework is designed to model and optimize stress-induced nonlinear constrained functions. Through extensive evaluations, the proposed methodologies have shown outstanding and significant improvements when compared to state-of-the-art models. Although these proposed methodologies have been applied to certain manufacturing systems, they can be easily adapted to a broad range of other problems. / Doctor of Philosophy / Modeling advanced manufacturing systems, such as engineering design and quality monitoring and control, is extremely challenging. The underlying response surface functions of these manufacturing systems are often nonlinear, nonstationary, and expensive to evaluate. Bayesian optimization, a statistical modeling approach based on Bayes' rule, is used to represent and model those complex (i.e., black-box) objective functions. A Bayesian optimization model consists of a surrogate model, often the Gaussian process, and an acquisition function, often the expected improvement. Conventional Bayesian optimization models do not accurately represent nonstationary and outlier-corrupted functions. To overcome these limitations, we propose a new asymmetric robust kernel function to improve the modeling capabilities of the Gaussian process model in process quality control through improved defect detection and classification. We also propose a nonstationary surrogate model to improve the performance of Bayesian optimization in aerospace process design problems.
Finally, we develop a new optimization framework that correctly models and optimizes stress-induced constrained aerospace manufacturing systems. Our extensive experiments show significant improvements of these three proposed models when compared to state-of-the-art methodologies.
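To make the Bayesian optimization loop described in this abstract concrete, below is a minimal sketch of a Gaussian-process surrogate paired with an expected-improvement acquisition function. The toy objective, kernel choice, and all parameter values are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):                      # stand-in for an expensive simulation
    return np.sin(3 * x) + 0.5 * x**2

def expected_improvement(mu, sigma, f_best):
    # EI for minimization: trades off exploitation (low predicted mean)
    # against exploration (high predictive uncertainty).
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(4, 1))    # small initial design
y = objective(X).ravel()
grid = np.linspace(-2, 2, 400).reshape(-1, 1)

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                  normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best x:", X[np.argmin(y)].item(), "best f:", y.min())
```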
302

Non-Gaussian estimation using a modified Gaussian sum adaptive filter

Caputi, Mauro J. 28 July 2008 (has links)
This investigation is concerned with effective state estimation of a system driven by an unknown non-Gaussian input with additive white Gaussian noise, and observed by measurements containing feedthrough of the same non-Gaussian input and corrupted by additional white Gaussian noise. A Gaussian sum (GS) approach has previously been developed [6-8] which can cope with the non-Gaussian nature of the input signal. Due to a serious growing memory problem in this approach, a modified Gaussian sum (MGS) estimation technique is developed that avoids the growing memory problem while providing effective state estimation. Several differences between the MGS and GS algorithms are examined. An MGS adaptive filter is derived for a general system and a modal system, with simulation examples performed using a non-Gaussian input signal. The modal system simulation results are compared to those produced from an augmented Kalman filter based on an augmented modal system model assuming a narrowband Gaussian input signal. A necessary condition for effective MGS estimation is derived. Alternate estimation procedures are developed to compensate for situations when this condition is not met. Several configurations are simulated and their performance results are analyzed and compared. Two methods of monitoring and updating key parameters of the MGS filter are developed. Simulation results are analyzed to investigate the performance of these methods. / Ph. D.
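As an illustration of the Gaussian sum idea underlying the GS and MGS filters, the sketch below performs one measurement update on a scalar system: each mixture component is updated by its own Kalman filter, and the component weights are rescaled by the measurement likelihood. The scalar model and all numbers are illustrative assumptions, not the dissertation's system.

```python
import numpy as np

def gaussian_sum_update(weights, means, variances, z, h=1.0, r=0.1):
    """One scalar measurement update z = h*x + v, v ~ N(0, r)."""
    new_w, new_m, new_p = [], [], []
    for w, m, p in zip(weights, means, variances):
        s = h * p * h + r                 # innovation variance
        k = p * h / s                     # Kalman gain
        resid = z - h * m
        new_m.append(m + k * resid)
        new_p.append((1 - k * h) * p)
        # weight scaled by this component's likelihood of the measurement
        new_w.append(w * np.exp(-0.5 * resid**2 / s) / np.sqrt(2 * np.pi * s))
    new_w = np.array(new_w)
    new_w /= new_w.sum()                  # renormalize the mixture
    return new_w, np.array(new_m), np.array(new_p)

# The overall state estimate is the weight-averaged component mean.
w = np.array([0.5, 0.5]); m = np.array([-1.0, 1.0]); p = np.array([0.2, 0.2])
w, m, p = gaussian_sum_update(w, m, p, z=0.8)
print("state estimate:", float(w @ m))
```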
303

Investigation of real-time optical scanning holography

Duncan, Bradley Dean 28 July 2008 (has links)
Real-time holographic recording using an optical heterodyne scanning technique was proposed by Poon in 1985. The first part of this dissertation provides a detailed theoretical treatment of the technique, based on a Gaussian beam analysis. Topics addressed include the derivations of the optical transfer function (OTF) and impulse response of the scanning holographic recording system, reconstructed image resolution and magnification, methods of carrier frequency hologram generation, and experimental verification of the recording technique based on careful measurements of a hologram corresponding to a simple transmissive slit. Furthermore, computer simulations are presented pertaining to the incoherent nature of the scanning holographic process, and it is shown that this new technique can be used to reduce the effects of bias buildup common in conventional incoherent holographic methods. The reconstruction of holograms generated by the heterodyne scanning technique is then considered in the second part of the dissertation. The primary focus is on real-time reconstruction using an electron beam addressed spatial light modulator (EBSLM). For comparison, experimental coherent reconstruction methods are presented as well. Additional topics addressed are the spatial frequency limitations of the EBSLM and the derivation of the overall incoherent point spread function (PSF) for the holographic imaging (recording/reconstruction) system. Based upon the derived overall PSF, the reconstructed real image of a simple slit object is formulated, compared with experimental observations, and shown to be consistent with them. / Ph. D.
304

Integration and Validation of Flow Image Quantification (Flow-IQ) System

Carneal, Jason Bradley 25 October 2004 (has links)
The first aim of this work was to integrate, validate, and document a digital particle image quantification (Flow-IQ) software package developed in conjunction with and supported by Aeroprobe Corporation. The system is tailored towards experimental fluid mechanics applications. The second aim of this work was to test the performance of DPIV algorithms in wall shear flows, and to test the performance of several particle sizing algorithms for use in spray sizing and average diameter calculation. Several particle sizing algorithms which assume a circular particle profile were tested with DPIV data on spray atomization, including three-point Gaussian, four-point Gaussian, and least squares algorithms. A novel elliptical diameter estimation scheme was developed which does not limit the measurement to circular patterns. The elliptic estimator developed in this work is able to estimate the diameter of a particle with an elliptic shape, and assumes that the particle is axisymmetric about the x or y axis. Two elliptical schemes, the true and averaged elliptical estimators, were developed and compared to the traditional three-point Gaussian diameter estimator using theoretical models. For theoretical elliptical particles, the elliptical sizing schemes perform drastically better than the traditional scheme, which is limited to diameter measurements in the x-direction. The error of the traditional method in determining the volume of an elliptical particle increases dramatically with the eccentricity. Monte Carlo simulations were also used to characterize the error associated with wall shear measurements using DPIV. Couette flow artificial images were generated with various shear rates at the wall. DPIV analysis was performed on these images using PIV algorithms developed by other researchers, including the traditional multigrid method, a dynamically adaptive DPIV scheme, and a control set with no discrete window offset. The error at the wall was calculated for each data set. The dynamically adaptive scheme was found to estimate the velocity near the wall with less error than the no discrete window offset and traditional multigrid algorithms. The shear rate was found to be the main factor in the error in the velocity measurement. In wall shear velocity measurement, the mean (bias) error was an order of magnitude greater than the RMS (random) error. A least squares scheme was used to correct for this bias error with favorable results. The major contributions of this effort are a novel elliptical particle sizing scheme for use in DPIV and a quantification of the error associated with wall shear measurements using several DPIV algorithms. A test bed and comprehensive user's manual for Flow-IQ v2.2 were also developed in this work. / Master of Science
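For reference, the three-point Gaussian estimator mentioned above can be sketched as follows: a parabola is fit to the logarithm of three adjacent pixel intensities around a peak, giving a subpixel peak offset and a Gaussian width, from which an e^-2 particle-image diameter follows as d = 4*sigma. The synthetic particle below is an illustrative assumption.

```python
import numpy as np

def three_point_gaussian(i_m, i_0, i_p):
    """i_m, i_0, i_p: intensities at pixels k-1, k, k+1 (peak at k)."""
    lm, l0, lp = np.log([i_m, i_0, i_p])
    curv = lm - 2 * l0 + lp               # = -1/sigma^2 for a true Gaussian
    offset = (lm - lp) / (2 * curv)       # subpixel shift of the peak
    sigma = np.sqrt(-1.0 / curv)
    return offset, 4.0 * sigma            # (peak offset, e^-2 diameter)

# Synthetic Gaussian particle image: true center 10.3 px, sigma 1.2 px.
x = np.arange(21)
img = np.exp(-(x - 10.3) ** 2 / (2 * 1.2 ** 2))
k = img.argmax()
off, d = three_point_gaussian(img[k - 1], img[k], img[k + 1])
print("center:", k + off, "diameter:", d)   # ~10.3 and ~4.8
```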
305

Reinforcement Learning with Gaussian Processes for Unmanned Aerial Vehicle Navigation

Gondhalekar, Nahush Ramesh 03 August 2017 (has links)
We study the problem of Reinforcement Learning (RL) for Unmanned Aerial Vehicle (UAV) navigation with the smallest number of real-world samples possible. This work is motivated by applications of learning autonomous navigation for aerial robots in structural inspection. A naive RL implementation suffers from the curse of dimensionality in large continuous state spaces. Gaussian Processes (GPs) exploit the spatial correlation to approximate state-action transition dynamics or the value function in large state spaces. By incorporating GPs into naive Q-learning, we achieve better performance with a smaller number of samples. The evaluation is performed using simulations with an aerial robot. We also present a Multi-Fidelity Reinforcement Learning (MFRL) algorithm that leverages Gaussian Processes to learn the optimal policy in a real-world environment using samples gathered from a lower-fidelity simulator. In MFRL, an agent uses multiple simulators of the real environment to perform actions. With multiple levels of fidelity in a simulator chain, the number of samples used in successively higher simulators can be reduced. / Master of Science / Increasing development in the field of infrastructure inspection using Unmanned Aerial Vehicles (UAVs) has been seen in recent years. This thesis presents work related to UAV navigation using Reinforcement Learning (RL) with the smallest number of real-world samples. A naive RL implementation suffers from the curse of dimensionality in large continuous state spaces. Gaussian Processes (GPs) exploit the spatial correlation to approximate state-action transition dynamics or the value function in large state spaces. By incorporating GPs into naive Q-learning, we achieve better performance with a smaller number of samples. The evaluation is performed using simulations with an aerial robot. We also present a Multi-Fidelity Reinforcement Learning (MFRL) algorithm that leverages Gaussian Processes to learn the optimal policy in a real-world environment using samples gathered from a lower-fidelity simulator. In MFRL, an agent uses multiple simulators of the real environment to perform actions. With multiple levels of fidelity in a simulator chain, the number of samples used in successively higher simulators can be reduced. By developing a bidirectional simulator chain, we aim to provide a learning platform for robots to safely learn required skills with the smallest possible number of real-world samples.
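A minimal sketch of the GP-plus-Q-learning idea follows: a Gaussian process regresses the Q-function over a continuous state so that nearby states share value estimates, reducing the number of samples needed. The one-dimensional toy environment (move left or right toward a goal) and all parameters are illustrative assumptions, not the thesis's UAV simulator.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
gamma, actions = 0.9, (-0.1, 0.1)        # discount factor, action set

def step(s, a):                          # toy 1-D environment
    s2 = float(np.clip(s + a + 0.01 * rng.standard_normal(), 0.0, 1.0))
    return s2, (1.0 if s2 > 0.95 else 0.0)   # reward for reaching the goal

# Gather random transitions once, then run GP-based fitted Q-iteration.
S = rng.uniform(0, 1, 200)
A = rng.choice(actions, 200)
pairs = [step(s, a) for s, a in zip(S, A)]
S2 = np.array([p[0] for p in pairs])
R = np.array([p[1] for p in pairs])

gps = {a: GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-3,
                                   optimizer=None) for a in actions}
Q2 = np.zeros((2, 200))                  # Q(s', a') for both actions
for _ in range(20):                      # fitted Q-iteration sweeps
    targets = R + gamma * Q2.max(axis=0)
    for a in actions:
        m = (A == a)
        gps[a].fit(S[m][:, None], targets[m])
    Q2 = np.array([gps[a].predict(S2[:, None]) for a in actions])

print("greedy action at s=0.5:",
      actions[int(np.argmax([gps[a].predict([[0.5]]) for a in actions]))])
```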
306

A Complexity-Theoretic Perspective on Convex Geometry

Nadimpalli, Shivam January 2024 (has links)
This thesis considers algorithmic and structural aspects of high-dimensional convex sets with respect to the standard Gaussian measure. Among our contributions, (i) we introduce a notion of "influence" for convex sets that yields the first quantitative strengthening of Royen's celebrated Gaussian correlation inequality; (ii) we investigate the approximability of general convex sets by intersections of halfspaces, where the approximation quality is measured with respect to the standard Gaussian distribution; and (iii) we give the first lower bounds for testing convexity and estimating the distance to convexity of an unknown set in the black-box query model. Our results and techniques are inspired by a number of fundamental ingredients from the analysis of Boolean functions in complexity theory, such as the influence of variables, noise sensitivity, and various extremal constructions.
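For intuition, the Gaussian correlation inequality that contribution (i) strengthens states that gamma(K ∩ L) >= gamma(K) * gamma(L) for symmetric convex sets K, L under the standard Gaussian measure gamma. The Monte Carlo sketch below checks this numerically for two illustrative sets (a slab and a centered ball); the sets and dimension are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 5, 200_000
X = rng.standard_normal((N, n))          # samples from the standard Gaussian

in_K = np.abs(X[:, 0]) <= 1.0            # K: symmetric slab
in_L = np.linalg.norm(X, axis=1) <= np.sqrt(n)   # L: centered Euclidean ball

print("gamma(K and L):", (in_K & in_L).mean())
print("gamma(K)*gamma(L):", in_K.mean() * in_L.mean())
# The first estimate should dominate the second, as the inequality asserts.
```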
307

Stochastic Computer Model Calibration and Uncertainty Quantification

Fadikar, Arindam 24 July 2019 (has links)
This dissertation presents novel methodologies in the field of stochastic computer model calibration and uncertainty quantification. Simulation models are widely used in studying physical systems, which are often represented by a set of mathematical equations. Inference on the true physical system (unobserved or partially observed) is drawn based on observations from the corresponding computer simulation model. These computer models are calibrated based on limited ground truth observations in order to produce realistic predictions and associated uncertainties. A stochastic computer model differs from a traditional computer model in that repeated execution results in different outcomes from the simulation. This additional uncertainty in the simulation model must be handled accordingly in any calibration setup. A Gaussian process (GP) emulator replaces the actual computer simulation when it is expensive to run and the budget is limited. However, a traditional GP interpolator models the mean and/or variance of the simulation output as a function of the input. For a simulation where the marginal Gaussianity assumption is not appropriate, it does not suffice to emulate only the mean and/or variance. We present two different approaches to address the non-Gaussian behavior of an emulator: (1) incorporating quantile regression in a GP for multivariate output, and (2) approximating the output with a finite mixture of Gaussians. These emulators are also used to calibrate and make forward predictions in the context of an agent-based disease model of the 2014 Ebola epidemic outbreak in West Africa. The third approach employs a sequential scheme which periodically updates the uncertainty in the computer model input as data become available in an online fashion. Unlike the other two methods, which use an emulator in place of the actual simulation, the sequential approach relies on repeated runs of the actual, potentially expensive simulation. / Doctor of Philosophy / Mathematical models are versatile and often provide accurate descriptions of physical events. Scientific models are used to study such events in order to gain understanding of the true underlying system. These models are often complex in nature and require advanced algorithms to solve their governing equations. Outputs from these models depend on external information (also called model input) supplied by the user. Model inputs may or may not have a physical meaning, and can sometimes be specific only to the scientific model. More often than not, optimal values of these inputs are unknown and need to be estimated from a few actual observations. This process is known as the inverse problem, i.e., inferring the input from the output. The inverse problem becomes challenging when the mathematical model is stochastic in nature, i.e., multiple executions of the model result in different outcomes. In this dissertation, three methodologies are proposed that address the calibration and prediction of a stochastic disease simulation model which simulates contagion of an infectious disease through human-human contact. The motivating examples are taken from the Ebola epidemic in West Africa in 2014 and seasonal flu in New York City, USA.
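As a sketch of the quantile-based emulation idea in approach (1), the code below runs a toy stochastic simulator several times per input, fits one GP per empirical quantile, and reads an emulated output distribution off the predicted quantile curves. The toy simulator with skewed, input-dependent noise is an illustrative assumption, not the dissertation's disease model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

def simulator(x, reps=50):               # stochastic, non-Gaussian output
    return np.sin(2 * x) + rng.gamma(shape=2.0, scale=0.2 * (1 + x), size=reps)

X = np.linspace(0, 2, 15)
qs = (0.1, 0.5, 0.9)                     # quantile levels to emulate
Y = np.array([np.quantile(simulator(x), qs) for x in X])   # shape (15, 3)

# One GP per quantile of the replicated simulation output.
gps = [GaussianProcessRegressor(kernel=RBF(0.5), normalize_y=True)
       .fit(X[:, None], Y[:, j]) for j in range(len(qs))]

x_new = np.array([[1.3]])
print({q: gp.predict(x_new).item() for q, gp in zip(qs, gps)})
```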
308

BER Modeling for Interference Canceling Adaptive NLMS Equalizer

Roy, Tamoghna 13 January 2015 (has links)
Adaptive LMS equalizers are widely used in digital communication systems for their simplicity of implementation. Conventional adaptive filtering theory suggests that the upper bound on the performance of such an equalizer is determined by the performance of a Wiener filter of the same structure. However, in the presence of a narrowband interferer the performance of the LMS equalizer is better than that of its Wiener counterpart. This phenomenon, termed a non-Wiener effect, has been observed before, and substantial work has been done in explaining the underlying reasons. In this work, we focus on the Bit Error Rate (BER) performance of LMS equalizers. First, a model, the Gaussian Mixture (GM) model, is presented to estimate the BER performance of a Wiener filter operating in an environment dominated by a narrowband interferer. Simulation results show that the model predicts BER accurately for a wide range of SNR, ISR, and equalizer length. Next, a similar model, termed the Gaussian Mixture using Steady State Weights (GMSSW) model, is proposed to model the BER behavior of the adaptive NLMS equalizer. Simulation results show unsatisfactory performance of this model. A detailed discussion is presented that points out the limitations of the GMSSW model, thereby providing some insight into the non-Wiener behavior of (N)LMS equalizers. An improved model, the Gaussian with Mean Square Error (GMSE) model, is then proposed. Simulation results show that the GMSE model is able to capture the non-Wiener characteristics of the NLMS equalizer when the normalized step size is between 0 and 0.4. A brief discussion is provided on why the model is inaccurate for larger step sizes. / Master of Science
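For context, the NLMS equalizer whose BER these models predict updates its weights with a step normalized by the instantaneous input power. The sketch below trains such an equalizer against known symbols in the presence of a strong narrowband interferer; the signal parameters are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

def nlms_equalize(x, d, n_taps=11, mu=0.2, eps=1e-8):
    """x: received samples, d: desired symbols (training); returns outputs."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # tap-delay-line input vector
        y[n] = w @ u
        e = d[n] - y[n]                     # error vs. desired symbol
        w += mu * e * u / (u @ u + eps)     # power-normalized LMS step
    return y

rng = np.random.default_rng(3)
sym = rng.choice([-1.0, 1.0], 5000)         # BPSK symbols
t = np.arange(5000)
rx = sym + 3.0 * np.cos(0.3 * np.pi * t)    # strong narrowband interferer
rx += 0.1 * rng.standard_normal(5000)       # white Gaussian noise
out = nlms_equalize(rx, sym)
ber = np.mean(np.sign(out[100:]) != sym[100:])   # skip initial transient
print("BER after convergence:", ber)
```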
309

Multiple plane wave analysis of acousto-optic diffraction of Gaussian shaped light beams

Horger, John 01 August 2012 (has links)
A short history of acousto-optics research is presented along with a general description of how light and sound interact. The Multiple Scattering model is derived and used with a Gaussian light beam to observe the distortion of the light beam profile within the sound field. Numerical results are presented for comparison to previous studies using thick holograms and two orders of light. The results from using two light orders are compared to those from four light orders. A Hamming sound amplitude distribution is introduced as a possible way to reduce the amount of light beam profile distortion. / Master of Science
310

Non-Wiener Characteristics of LMS Adaptive Equalizers: A Bit Error Rate Perspective

Roy, Tamoghna 12 February 2018 (has links)
Adaptive Least Mean Square (LMS) equalizers are widely used in digital communication systems primarily for their ease of implementation and lack of dependence on a priori knowledge of input signal statistics. LMS equalizers exhibit non-Wiener characteristics in the presence of a strong narrowband interference and can outperform the optimal Wiener equalizer in terms of both mean square error (MSE) and bit error rate (BER). There has been significant work in the past related to the analysis of the non-Wiener characteristics of the LMS equalizer, which includes the discovery of the shift in the mean of the LMS weights from the corresponding Wiener weights and the modeling of steady state MSE performance. BER performance is ultimately a more practically relevant metric than MSE for characterizing system performance. The present work focuses on modeling the steady state BER performance of the normalized LMS (NLMS) equalizer operating in the presence of a strong narrowband interference. Initial observations showed that a 2 dB improvement in MSE may result in two orders of magnitude improvement in BER. However, some differences in the MSE and BER behavior of the NLMS equalizer were also seen, most notably the significant dependence (one order of magnitude variation) of the BER behavior on the interference frequency, a dependence not seen in MSE. Thus, MSE cannot be used as a predictor of BER performance; the latter further motivates the pursuit of a separate BER model. The primary contribution of this work is the derivation of the probability density of the output of the NLMS equalizer conditioned on a particular symbol having been transmitted, which can then be leveraged to predict its BER performance. The analysis of the NLMS equalizer, operating in a strong narrowband interference environment, resulted in a conditional probability density function in the form of a Gaussian Sum Mixture (GSM). Simulation results verify the efficacy of the GSM expression for a wide range of system parameters, such as signal-to-noise ratio (SNR), interference-to-signal ratio (ISR), interference frequency, and step sizes over the range of mean-square stable operation of NLMS. Additionally, a low-complexity approximate version of the GSM model is also derived and can be used to give a conservative lower bound on BER performance. A thorough analysis of the MSE and BER behavior of the Bi-scale NLMS (BNLMS) equalizer, a variant of the NLMS equalizer, constitutes another important contribution of this work. Prior results indicated a 2 dB MSE improvement of BNLMS over NLMS in the presence of a strong narrowband interference. A closed-form MSE model is derived for the BNLMS algorithm. Additionally, BNLMS BER behavior was studied and showed the potential of two orders of magnitude improvement over NLMS. Analysis led to a BER model in the form of a GSM similar to the NLMS case but with different parameters. Simulation results verified that both models, for MSE and BER, provided accurate prediction of system performance for different combinations of SNR, ISR, interference frequency, and step size. An enhanced GSM (EGSM) model to predict the BER performance of the NLMS equalizer is also introduced, specifically to address certain cases (low-ISR cases) where the original GSM expression (derived for high ISR) was less accurate. Simulation results show that the EGSM model is more accurate in the low-ISR region than the GSM expression. For the situations where the derived GSM expression was accurate, the BER estimates provided by the heuristic EGSM model coincided with those computed from the GSM expression. Finally, the two-interferer problem is introduced, where NLMS equalizer performance is studied in the presence of two narrowband interferers. Initial results show the presence of non-Wiener characteristics for the two-interferer case. Additionally, experimental results indicate that the BER performance of the NLMS equalizer operating in the presence of a single narrowband interferer may be improved by purposeful injection of a second narrowband interferer. / Ph. D. / Every practical communication system requires effective interference mitigation schemes that are able to nullify unwanted signals without distorting the desired signal. Adaptive equalizers are among the prevalent systems used to cancel interfering signals. In particular, mitigation of narrowband interference (a particular class of interference) with normalized least mean square (NLMS) equalizers has been found to be extremely effective. In fact, in a narrowband-interference-dominated environment, NLMS equalizers have been found to work better than the solution with the same structure that is optimal according to linear filtering theory. This departure from linear filtering theory is a result of the non-Wiener characteristics of NLMS-type equalizers. This work investigates the bit error rate (BER) behavior, a common metric used to characterize the performance of wireless communication systems, of the NLMS equalizer in the presence of a strong narrowband interference. The major contribution of this dissertation is the derivation of an accurate expression that links the BER performance of the NLMS equalizer with the system parameters and signal statistics. Another variant of the NLMS equalizer, known as the Bi-scale LMS (BLMS) equalizer, was also studied. Similar to the NLMS case, an accurate BER expression for the BLMS equalizer was also derived. Additionally, situations were investigated where the non-Wiener characteristics of NLMS equalizers can be leveraged. Overall, this dissertation hopes to add to the existing body of work that pertains to the analysis of non-Wiener effects of NLMS equalizers and thus, in general, to the work related to the analysis of adaptive equalizers.
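As an illustration of how a Gaussian Sum Mixture conditional density yields a BER prediction: if the equalizer output given a transmitted +1 is distributed as a mixture sum_k w_k N(mu_k, sigma_k^2) and the decision threshold is zero, the error probability is sum_k w_k Q(mu_k / sigma_k). The mixture parameters below are illustrative, not taken from the dissertation.

```python
import numpy as np
from scipy.stats import norm

def gsm_ber(weights, means, sigmas):
    """P(output < 0 | +1 sent) for a Gaussian-sum conditional density."""
    w, m, s = map(np.asarray, (weights, means, sigmas))
    return float(np.sum(w * norm.sf(m / s)))   # Q(x) = norm.sf(x)

# Example: two mixture components induced by a narrowband interferer.
print(gsm_ber(weights=[0.6, 0.4], means=[1.1, 0.7], sigmas=[0.15, 0.25]))
```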
