291

Sequential Calibration Of Computer Models

Kumar, Arun 11 September 2008 (has links)
No description available.
292

New Directions in Gaussian Mixture Learning and Semi-supervised Learning

Sinha, Kaushik 01 November 2010 (has links)
No description available.
293

Robust Heart Rate Variability Analysis using Gaussian Process Regression

Shah, Siddharth S. 10 January 2011 (has links)
No description available.
294

Error estimates for Gauss-Jacobi quadrature formula and Padé approximants of Stieltjes series /

Al-Jarrah, Radwan Abdul-Rahman January 1980 (has links)
No description available.
295

An Experimental and Theoretical Analysis of a Laser Beam Propagating Through Multiple Phase Screens

Weeks, Arthur R. 01 January 1987 (has links) (PDF)
An experimental and a theoretical analysis of a laser beam propagating through multiple phase screens were performed. The theoretical analysis showed that the statistics of the intensity fluctuations, which can be predicted by the HK and the I-K distributions, could be derived from a multiplicative process using statistical distributions derived from Gaussian statistics. For the single phase screen experiment, the experimental normalized moments were compared with the normalized moments of both the HK and I-K distributions. In addition, the intensity data were lowpass filtered to yield moments that are predicted by the gamma distribution. The single phase screen data were segmented into small time intervals, and all time segments with approximately the same variance were grouped together into bins to yield normalized moments for each bin that are predicted by the Rician distribution. Also, the normalized moments for two and three phase screen experiments were measured. Finally, a computer program was written to simulate K-distributed noise from two independent Gaussian noise sources.
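The closing sentence of this abstract describes simulating K-distributed noise from two independent Gaussian noise sources via a multiplicative process. A minimal sketch of that standard construction follows; the shape parameter `alpha` and sample size are illustrative assumptions, not values taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
alpha = 2.0  # gamma shape parameter (assumed value for illustration)

# Speckle term: intensity of a complex circular Gaussian field built
# from two independent real Gaussian noise sources.
x = rng.normal(size=n)
y = rng.normal(size=n)
speckle = (x**2 + y**2) / 2.0            # unit-mean exponential intensity

# Slowly varying local mean: unit-mean gamma variate.
local_mean = rng.gamma(alpha, 1.0 / alpha, size=n)

# Product of the two yields a K-distributed intensity.
intensity = speckle * local_mean

# The normalized second moment <I^2>/<I>^2 of the K distribution
# is 2*(1 + 1/alpha); compare it with the sample estimate.
m2 = np.mean(intensity**2) / np.mean(intensity)**2
print(m2, 2 * (1 + 1 / alpha))
```

The sample moment should fall close to the theoretical value, which is the kind of normalized-moment comparison the abstract describes.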
296

Dimensionality Reduction with Non-Gaussian Mixtures

Tang, Yang 11 1900 (has links)
Broadly speaking, cluster analysis is the organization of a data set into meaningful groups, and mixture model-based clustering has recently received wide interest in statistics. Historically, the Gaussian mixture model has dominated the model-based clustering literature. When model-based clustering is performed on a large number of observed variables, it is well known that Gaussian mixture models can represent an over-parameterized solution. To this end, this thesis focuses on the development of novel non-Gaussian mixture models for high-dimensional continuous and categorical data. We developed a mixture of joint generalized hyperbolic models (JGHM), which exhibits different marginal amounts of tail-weight. Moreover, it takes into account the cluster-specific subspace and, therefore, limits the number of parameters to estimate. This is a novel approach, applicable to high-, and potentially very-high-, dimensional spaces with arbitrary correlation between dimensions. Three different mixture models are developed using forms of the mixture of latent trait models to realize model-based clustering of high-dimensional binary data. A family of mixtures of latent trait models with common slope parameters is developed to reduce the number of parameters to be estimated. This approach facilitates a low-dimensional visual representation of the clusters. We further developed penalized latent trait models to accommodate ultra-high-dimensional binary data, which also perform automatic variable selection. For all models and families of models developed in this thesis, the algorithms used for model-fitting and parameter estimation are presented. Real and simulated data sets are used to assess the clustering ability of the models. / Thesis / Doctor of Philosophy (PhD)
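The Gaussian mixture model that this abstract takes as its baseline is typically fitted by expectation-maximization. A minimal pure-NumPy EM fit of a two-component one-dimensional mixture sketches the idea; the data, initial values, and iteration count are illustrative assumptions, not the thesis's (high-dimensional, non-Gaussian) setting:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two 1-D Gaussian clusters with assumed true means -2 and 3.
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])

# Minimal EM for a two-component Gaussian mixture.
pi = np.array([0.5, 0.5])              # mixing weights
mu = np.array([-1.0, 1.0])             # initial means
sigma = np.array([1.0, 1.0])           # initial standard deviations
for _ in range(100):
    # E-step: responsibility of each component for each point.
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
              / (sigma * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and standard deviations.
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(np.sort(mu))  # estimated means approach the true cluster means
```

In d dimensions with full covariances, each component adds d(d+1)/2 covariance parameters, which is the over-parameterization that motivates the reduced-parameter families developed in the thesis.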
297

Mode Matching sensing in Frequency Dependent Squeezing Source for Advanced Virgo plus

Grimaldi, Andrea 07 February 2023 (has links)
Since the first detection of a gravitational wave, the LIGO-Virgo Collaboration has worked to improve the sensitivity of its detectors. This continuous effort paid off in the last scientific run, in which the collaboration detected an average of one gravitational wave per week and collected 74 candidates in less than one year. This result was also possible due to the Frequency Independent Squeezing (FIS) implementation, which improved the Virgo detection range for the coalescence of binary neutron stars (BNS) by 5-8%. However, this result was dramatically limited by several technical issues, among which the most harmful was the mismatch between the squeezed vacuum beam and the resonance mode of the cavities. To first approximation, the mismatch can be modelled as a simple optical loss: if the beam shape of the squeezed vacuum does not match the resonance mode, part of its amplitude is lost and replaced with incoherent vacuum. However, this model is valid only in simple setups, e.g. when studying the effect inside a single resonant cavity or the transmission of a mode cleaner. In a more complicated system, such as a gravitational wave interferometer, the squeezed vacuum amplitude rejected by the mismatch still travels inside the optical setup. This component accumulates an extra phase shift defined by the characteristics of the mismatch, and it can recouple into the main beam, reducing the effect of the quantum noise reduction technique. This issue will become more critical with the implementation of Frequency Dependent Squeezing, an upgrade of the Frequency Independent Squeezing technique whose new setup will increase the complexity of the squeezed beam path. The characterisation of this degradation mechanism requires a dedicated wavefront sensing technique; the simpler approach based on studying the resonance peak of the cavity is not enough.
This method can only estimate the total optical loss generated by the mismatch, but it cannot characterise the phase shift generated by the decoupling. Without this information it is impossible to estimate how the mismatched squeezed vacuum is recoupled into the main beam, which limits the ability to foresee the degradation of the quantum noise reduction technique. For this reason, the Padova-Trento Group studied different techniques for characterising mode matching. In particular, we proposed implementing the Mode Converter technique developed at Syracuse University. This technique can fully characterise the mismatch of a spherical beam, and it can be a first approach to monitoring the mismatch. However, this method is not enough for the Frequency Dependent Squeezing source, since it cannot detect the mismatch generated by the astigmatism of the incoming beam. The Frequency Dependent Squeezing source uses off-axis reflective telescopes to reduce the power losses generated by transmissive optics. This setup uses curved mirrors that induce small astigmatic aberrations as a function of the beam incidence angle. These aberrations are present by design, and the standard Mode Converter technique will not detect them. To overcome this issue, I proposed an upgrade of the Mode Converter technique that extends the detection to this kind of aberration.
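The abstract's first-approximation picture of mismatch as an optical loss can be made concrete with the textbook mode-overlap formula for two fundamental Gaussian beams of different waist sizes. The coincident-waist, flat-phase-front simplification below is an illustrative assumption, not a result of the thesis:

```python
def power_coupling(w1, w2):
    """Fraction of power coupled between two fundamental Gaussian
    modes with waist radii w1 and w2, assuming coincident waist
    positions and flat phase fronts (standard overlap-integral result)."""
    return (2 * w1 * w2 / (w1**2 + w2**2)) ** 2

print(power_coupling(1.0, 1.0))   # perfectly matched beams: all power couples
print(power_coupling(1.0, 1.1))   # ~1% of the power lost to waist mismatch
```

The uncoupled fraction is what the simple loss model replaces with incoherent vacuum; as the abstract explains, in a full interferometer that rejected component also carries a phase and can recouple.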
298

The Kusuoka Approximation In The Gatheral Model

Al-Sammarraie, Safa, Yang, Qixin January 2024 (has links)
This thesis aims to address the Kusuoka approximation (K-approximation) within the Gatheral model using Yamada’s method, also known as the Gaussian K-approximation. Our approach begins by transforming the original Gatheral model into a model with independent Wiener processes through Cholesky decomposition. Subsequently, the system is reformulated into its Stratonovich form, facilitating the definition of vector fields and their exponentials. We will assess whether the system satisfies the Uniformly Finitely Generated (UFG) condition. Additionally, based on our calculations, a simulation code will be developed to compare our results with those obtained by Yamada.
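The first step this abstract describes, relating correlated Wiener processes to independent ones through Cholesky decomposition, can be sketched in the forward direction. The correlation matrix below is purely illustrative, not a Gatheral-model calibration:

```python
import numpy as np

rng = np.random.default_rng(2)
# Assumed correlation matrix between three Wiener processes
# (illustrative values only).
rho = np.array([[1.0, 0.5, 0.3],
                [0.5, 1.0, 0.4],
                [0.3, 0.4, 1.0]])
L = np.linalg.cholesky(rho)            # rho = L @ L.T

# Build correlated increments from independent standard normals.
n, dt = 200_000, 1e-2
z = rng.normal(size=(3, n))            # independent Wiener increments
dw = np.sqrt(dt) * (L @ z)             # correlated increments
# (The reverse map, np.linalg.solve(L, dw), recovers independent ones,
# which is the direction used to simplify the model.)

# The empirical correlation of dw should recover rho.
print(np.corrcoef(dw))
```

Working with the independent increments `z` is what allows the drift and diffusion to be rewritten in Stratonovich form with cleanly defined vector fields.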
299

Communication-Aware, Scalable Gaussian Processes for Decentralized Exploration

Kontoudis, Georgios Pantelis 25 January 2022 (has links)
In this dissertation, we propose decentralized and scalable algorithms for Gaussian process (GP) training and prediction in multi-agent systems. The first challenge is to compute a spatial field that represents underwater acoustic communication performance from a set of measurements. We compare kriging to cokriging with vehicle range as a secondary variable, using a simple approximate linear-log model of the communication performance. Next, we propose a model-based learning methodology for the prediction of underwater acoustic performance using a realistic propagation model. The methodology consists of two steps: i) estimation of the covariance matrix by evaluating candidate functions with estimated parameters; and ii) prediction of communication performance. Covariance estimation is addressed with a multi-stage iterative training method that produces unbiased and robust results with nested models. The efficiency of the framework is validated with simulations and experimental data from field trials. The second challenge is to perform predictions at unvisited locations with a team of agents and limited inter-agent information exchange. To decentralize the implementation of GP training, we employ the alternating direction method of multipliers (ADMM). A closed-form solution of the decentralized proximal ADMM is provided for the case of GP hyper-parameter training with maximum likelihood estimation. Multiple aggregation techniques for GP prediction are decentralized with the use of iterative and consensus methods. In addition, we propose a covariance-based nearest neighbor selection strategy that enables a subset of agents to perform predictions. Empirical evaluations illustrate the efficiency of the proposed methods. / Doctor of Philosophy / In this dissertation, we propose decentralized and scalable algorithms for collaborative multi-agent learning.
Mobile robots, such as autonomous underwater vehicles (AUVs), can use predictions of communication performance to anticipate where they are likely to be connected to the communication network. The first challenge is to predict the acoustic communication performance of AUVs from a set of measurements. We compare two methodologies using a simple model of communication performance. Next, we propose a model-based learning methodology for the prediction of underwater acoustic performance using a realistic model. The methodology first estimates the covariance matrix and then predicts the communication performance. The efficiency of the framework is validated with simulations and experimental data from field trials. The second challenge concerns the efficient execution of Gaussian processes with multiple agents while communicating as little as possible. We propose decentralized algorithms that favor local computations over inter-agent communication. Moreover, we propose a nearest neighbor selection strategy that enables a subset of agents to participate in the prediction. Illustrative examples with real-world data are provided to validate the efficiency of the algorithms.
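The GP prediction step that both abstracts above build on can be sketched in its centralized, single-agent form with a squared-exponential kernel. The latent field, lengthscale, and noise level below are illustrative assumptions, not the dissertation's acoustic model:

```python
import numpy as np

rng = np.random.default_rng(3)

def sq_exp(a, b, ell=1.0):
    """Squared-exponential covariance between two 1-D input sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

# Noisy observations of an assumed latent field (sin is illustrative).
X = np.linspace(0, 6, 30)
y = np.sin(X) + 0.1 * rng.normal(size=X.size)

# Standard GP posterior mean and variance at test locations.
Xs = np.linspace(0, 6, 100)
K = sq_exp(X, X) + 0.1**2 * np.eye(X.size)   # kernel matrix + noise
Ks = sq_exp(Xs, X)
mean = Ks @ np.linalg.solve(K, y)
var = np.diag(sq_exp(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T))

print(np.max(np.abs(mean - np.sin(Xs))))     # prediction error stays small
```

In the decentralized setting of the dissertation, the training (hyper-parameter estimation) and this prediction step are split across agents via ADMM and consensus-based aggregation rather than computed on one node.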
300

Bayesian Methods for Mineral Processing Operations

Koermer, Scott Carl 07 June 2022 (has links)
Increases in demand have driven the development of complex processing technology for separating mineral resources from exceedingly low-grade multi-component resources. Low mineral concentrations and variable feedstocks can make separating signal from noise difficult, while high process complexity and the multi-component nature of a feedstock can make testwork, optimization, and process simulation difficult or infeasible. A prime example of such a scenario is the recovery and separation of rare earth elements (REEs) and other critical minerals from acid mine drainage (AMD) using a solvent extraction (SX) process. In this process the REE concentration found in an AMD source can vary site to site and season to season. SX processes take a non-trivial amount of time to reach steady state. The separation of numerous individual elements from gangue metals is a high-dimensional problem, and SX simulators can have a prohibitive computation time. Bayesian statistical methods intrinsically quantify the uncertainty of model parameters and predictions given a set of data and prior distributions on the model parameters. The uncertainty quantification possible with Bayesian methods lends itself well to statistical simulation, model selection, and sensitivity analysis. Moreover, Bayesian models utilizing Gaussian process priors can be used for active learning tasks, which allow for prediction, optimization, and simulator calibration while reducing data requirements. However, literature on Bayesian methods applied to separations engineering is sparse. The goal of this dissertation is to investigate, illustrate, and test the use of a handful of Bayesian methods applied to process engineering problems. First, further details of the background and motivation are provided in the introduction. The literature review provides further information regarding critical minerals, solvent extraction, Bayesian inference, data reconciliation for separations, and Gaussian process modeling.
The body of work contains four chapters with a mixture of novel applications of Bayesian methods and a novel statistical method derived for use with the motivating problem. Chapter topics include Bayesian data reconciliation for processes, Bayesian inference for a model intended to aid engineers in deciding whether a process has reached steady state, Bayesian optimization of a process with unknown dynamics, and a novel active learning criterion for reducing the computation time required for the Bayesian calibration of simulations to real data. In closing, the utility of a handful of Bayesian methods is demonstrated. However, the work presented is not intended to be complete, and suggestions for further improvements to the application of Bayesian methods to separations are provided. / Doctor of Philosophy / Rare earth elements (REEs) are a set of elements used in the manufacture of supplies used in green technologies and defense. Demand for REEs has prompted the development of technology for recovering REEs from unconventional resources. One unconventional resource for REEs under investigation is acid mine drainage (AMD) produced from the exposure of certain geologic strata as part of coal mining. REE concentrations found in AMD are significant, although low compared to REE ore, and can vary from site to site and season to season. Solvent extraction (SX) processes are commonly utilized to concentrate and separate REEs from contaminants using the differing solubilities of specific elements in water- and oil-based liquid solutions. The complexity and variability of the processes used to concentrate REEs from AMD with SX motivate the use of modern statistical and machine learning approaches for filtering noise, uncertainty quantification, and design of experiments for testwork, in order to recover the underlying truth and make accurate process performance comparisons. Bayesian statistical methods intrinsically quantify uncertainty.
Bayesian methods can be used to quantify uncertainty for predictions as well as to select which model better explains a data set. The uncertainty quantification available with Bayesian models can be used for decision making. As a particular example, the uncertainty quantification provided by Gaussian process regression lends itself well to deciding which experiments to conduct, given an already obtained data set, to improve prediction accuracy or to find an optimum. However, literature is sparse on Bayesian statistical methods applied to separation processes. The goal of this dissertation is to investigate, illustrate, and test the use of a handful of Bayesian methods applied to process engineering problems. First, further details of the background and motivation are provided in the introduction. The literature review provides further information regarding critical minerals, solvent extraction, Bayesian inference, data reconciliation for separations, and Gaussian process modeling. The body of work contains four chapters with a mixture of novel applications of Bayesian methods and a novel statistical method derived for use with the motivating problem. Chapter topics include Bayesian data reconciliation for processes, Bayesian inference for a model intended to aid engineers in deciding whether a process has reached steady state, Bayesian optimization of a process with unknown dynamics, and a novel active learning criterion for reducing the computation time required for the Bayesian calibration of simulations to real data. In closing, the utility of a handful of Bayesian methods is demonstrated. However, the work presented is not intended to be complete, and suggestions for further improvements to the application of Bayesian methods to separations are provided.
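The idea this abstract leans on, using Gaussian process uncertainty to decide which experiment to run next, can be sketched with the simplest active-learning criterion: query where posterior variance is largest. The kernel lengthscale, design grid, and budget are illustrative assumptions; this generic criterion is not the novel one developed in the dissertation:

```python
import numpy as np

def kern(a, b, ell=0.5):
    """Squared-exponential kernel on 1-D inputs (unit prior variance)."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def posterior_var(X, grid, noise=1e-4):
    """GP posterior variance at grid points given design points X."""
    K = kern(X, X) + noise * np.eye(X.size)
    Ks = kern(grid, X)
    return 1.0 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)

grid = np.linspace(0, 1, 201)
X = np.array([0.5])                       # start with one design point
for _ in range(7):
    v = posterior_var(X, grid)
    X = np.append(X, grid[np.argmax(v)])  # query where variance is largest

print(np.sort(X))  # queried points spread out to cover the design space
```

Because the variance collapses near visited points, the criterion naturally spaces experiments across the design region, reducing the data needed for a given prediction accuracy.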
