101

Variational Method Applied To The Contact Knight Shift / Variational Method Applied to Knight Shift

Vanderhoff, John 10 1900 (has links)
This thesis presents a study of applications of the variational principle to periodic lattices. A calculation of the conduction-electron Knight shift in the alkali metals is chosen as an example of the calculations possible with this method. The Knight shift is discussed with reference to the contributions of both the core and conduction electrons. The approximation of neglecting the effect of the core electrons, as found in previous calculations, is discussed and its validity questioned. / Thesis / Master of Science (MS)
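For context, the variational principle invoked here is the standard Rayleigh–Ritz bound, and the contact Knight shift is conventionally tied to the conduction-electron density at the nucleus; the equations below are textbook statements, not formulas taken from the thesis:

```latex
E[\psi] \;=\; \frac{\langle \psi \,|\, \hat H \,|\, \psi \rangle}{\langle \psi \,|\, \psi \rangle} \;\ge\; E_0,
\qquad
K \;\propto\; \chi_s \,\big\langle |\psi_{\mathbf k}(0)|^2 \big\rangle_{E_F}.
```

Minimizing E[ψ] over a family of trial wavefunctions gives the periodic-lattice states; the Knight shift K then follows from the spin susceptibility χ_s and the Fermi-surface average of the electron probability density at the nucleus.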
102

Machine Learning and Field Inversion approaches to Data-Driven Turbulence Modeling

Michelen Strofer, Carlos Alejandro 27 April 2021 (has links)
There is still a practical need for improved closure models for the Reynolds-averaged Navier-Stokes (RANS) equations. This dissertation explores two different approaches for using experimental data to provide improved closure for the Reynolds stress tensor field. The first approach uses machine learning to learn a general closure model from data. A novel framework is developed to train deep neural networks using experimental velocity and pressure measurements. The sensitivity of the RANS equations to the Reynolds stress, required for gradient-based training, is obtained by means of both variational and ensemble methods. The second approach is to infer the Reynolds stress field for a flow of interest from limited velocity or pressure measurements of the same flow. Here, this field inversion is done using a Monte Carlo Bayesian procedure, and the focus is on improving the inference by enforcing known physical constraints on the inferred Reynolds stress field. To this end, a method for enforcing boundary conditions on the inferred field is presented. The two data-driven approaches explored and improved upon here demonstrate the potential for improved practical RANS predictions. / Doctor of Philosophy / The Reynolds-averaged Navier-Stokes (RANS) equations are widely used to simulate fluid flows in engineering applications despite their known inaccuracy in many flows of practical interest. The uncertainty in the RANS equations is known to stem from the Reynolds stress tensor, for which no universally applicable turbulence model exists. The computational cost of more accurate methods for fluid flow simulation, however, means RANS simulations will likely continue to be a major tool in engineering applications, and there is still a need for improved RANS turbulence modeling. This dissertation explores two different approaches to use available experimental data to improve RANS predictions by improving the uncertain Reynolds stress tensor field. The first approach is using machine learning to learn a data-driven turbulence model from a set of training data. This model can then be applied to predict new flows in place of traditional turbulence models. To this end, this dissertation presents a novel framework for training deep neural networks using experimental measurements of velocity and pressure. When using velocity and pressure data, gradient-based training of the neural network requires the sensitivity of the RANS equations to the learned Reynolds stress. Two different methods, the continuous adjoint and ensemble approximation, are used to obtain the required sensitivity. The second approach explored in this dissertation is field inversion, whereby available data for a flow of interest is used to infer a Reynolds stress field that leads to improved RANS solutions for that same flow. Here, the field inversion is done via ensemble Kalman inversion (EKI), a Monte Carlo Bayesian procedure, and the focus is on improving the inference by enforcing known physical constraints on the inferred Reynolds stress field. To this end, a method for enforcing boundary conditions on the inferred field is presented. While further development is needed, the two data-driven approaches explored and improved upon here demonstrate the potential for improved practical RANS predictions.
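Since the abstract names ensemble Kalman inversion only in passing, a minimal sketch of the generic EKI update may help; this is the textbook ensemble step under an assumed forward model, not the dissertation's implementation (`forward`, the dimensions, and the noise model are placeholders):

```python
import numpy as np

def eki_update(ensemble, forward, y_obs, obs_cov, rng):
    """One generic ensemble Kalman inversion step.

    ensemble : (n_members, n_params) array of candidate parameter fields
    forward  : function mapping a parameter vector to predicted observations
    y_obs    : (n_obs,) measured data, e.g. sparse velocity measurements
    obs_cov  : (n_obs, n_obs) observation-noise covariance
    """
    G = np.array([forward(u) for u in ensemble])   # predicted observations (n_members, n_obs)
    du = ensemble - ensemble.mean(axis=0)          # parameter anomalies
    dg = G - G.mean(axis=0)                        # output anomalies
    n = len(ensemble)
    C_ug = du.T @ dg / (n - 1)                     # parameter-output cross-covariance
    C_gg = dg.T @ dg / (n - 1)                     # output covariance
    # perturb the data so the updated ensemble keeps a statistically consistent spread
    y_pert = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), obs_cov, size=n)
    innov = y_pert - G                             # innovations (n_members, n_obs)
    return ensemble + np.linalg.solve(C_gg + obs_cov, innov.T).T @ C_ug.T
```

Iterating this update drives the ensemble of parameter fields toward values whose predicted observations match the measurements, while the ensemble spread supplies the sensitivity information that would otherwise require an adjoint solver.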
103

Novel Quantum Chemistry Algorithms Based on the Variational Quantum Eigensolver

Grimsley, Harper Rex 03 February 2023 (has links)
The variational quantum eigensolver (VQE) approach is currently one of the most promising strategies for simulating chemical systems on quantum hardware. In this work, I will describe a new quantum algorithm and a new set of classical algorithms based on VQE. The quantum algorithm, ADAPT-VQE, shows promise in mitigating many of the known limitations of VQEs: Ansatz ambiguity, local minima, and barren plateaus are all addressed to varying degrees by ADAPT-VQE. The classical algorithm family, O2DX-UCCSD, draws inspiration from VQEs, but is classically solvable in polynomial time. This group of algorithms yields equations similar to those of the linearized coupled cluster theory (LCCSD) but is more systematically improvable and, for X = 3 or X = ∞, can break single bonds, which LCCSD cannot do. The overall aim of this work is to showcase the richness of the VQE algorithm and the breadth of its derivative applications. / Doctor of Philosophy / A core goal of quantum chemistry is to compute accurate ground-state energies for molecules. Quantum computers promise to simulate quantum systems in ways that classical computers cannot. It is believed that quantum computers may be able to characterize molecules that are too large for classical computers to treat accurately. One approach to this is the variational quantum eigensolver, or VQE. The idea of a VQE is to use a quantum computer to measure the molecular energy associated with a quantum state which is parametrized by some classical set of parameters. A classical computer will use a classical optimization scheme to update those parameters before the quantum computer measures the energy again. This loop is expected to minimize the quantum resources needed for a quantum computer to be useful, since much of the work is outsourced to classical computers. In this work, I describe two novel algorithms based on the VQE which solve some of its problems.
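The VQE feedback loop described above is, schematically, a classical optimizer wrapped around a quantum energy estimate. In the sketch below, `measure_energy` is a hypothetical stand-in for the hardware step that prepares |ψ(θ)⟩ and estimates ⟨ψ(θ)|H|ψ(θ)⟩; a cheap classical surrogate is used so the loop runs as written:

```python
import numpy as np
from scipy.optimize import minimize

def measure_energy(theta):
    # Stand-in for the quantum step: on hardware, prepare |psi(theta)>
    # with the ansatz circuit and estimate <psi|H|psi> from repeated
    # measurements. Here a toy classical surrogate is used instead.
    return np.cos(theta[0]) + 0.5 * np.cos(2 * theta[1]) + 1.5

# Classical outer loop: update ansatz parameters until the energy converges.
result = minimize(measure_energy, x0=np.zeros(2), method="COBYLA")
print("estimated ground-state energy:", result.fun)
```

ADAPT-VQE modifies this loop by growing the ansatz one operator at a time, at each step appending the pool operator with the largest energy gradient, which is how it sidesteps a fixed and possibly ill-chosen ansatz.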
104

A Distributed Active Vibration Absorber (DAVA) for Active-Passive Vibration and Sound Radiation Control

Cambou, Pierre E. 13 November 1998 (has links)
This thesis presents a new active-passive treatment developed to reduce structural vibrations and their associated radiated sound. It is a contribution to the research of efficient and low-cost devices that implement the advantages of active and passive noise control techniques. A theoretical model has been developed to investigate the potential of this new "active-passive distributed absorber". The model integrates new functions that make it extremely stable numerically. Using this model, a genetic algorithm has been used to optimize the shape of the active-passive distributed absorber. Prototypes have been designed and their potential investigated. The device subsequently developed can be described as a skin that can be mechanically and electrically tuned to reduce unwanted vibration and/or sound. It is constructed from the piezoelectric material polyvinylidene fluoride (PVDF) and thin layers of lead. The tested device is designed to weigh less than 10% of the main structure and has a resonance frequency around 1000 Hz. Experiments have been conducted on a simply supported steel beam (24"x2"x1/4"). Preliminary results show that the new treatment outperforms active-passive point absorbers and conventional constrained-layer damping material. The compact design and its efficiency make it suitable for many applications, especially in the transportation industry. This new type of distributed absorber is original and represents a potential breakthrough in the field of acoustics and vibration control. / Master of Science
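A genetic algorithm of the kind used for the absorber-shape optimization can be sketched generically; the fitness function below is a placeholder (the thesis would score a candidate shape by its predicted vibration and sound reduction), so this illustrates the method, not the author's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(shape):
    # Placeholder: in the thesis this would evaluate the distributed-absorber
    # model and score the shape by the predicted vibration/sound reduction.
    return -np.sum((shape - 0.3) ** 2)

pop = rng.random((40, 16))                      # 40 candidate shapes, 16 genes each
for _ in range(200):
    scores = np.array([fitness(s) for s in pop])
    parents = pop[np.argsort(scores)[-20:]]     # keep the fitter half
    kids = parents[rng.integers(0, 20, 20)].copy()
    cut = rng.integers(1, 16)                   # one-point crossover
    kids[:, cut:] = parents[rng.integers(0, 20, 20)][:, cut:]
    kids += rng.normal(0, 0.02, kids.shape)     # small mutations
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(s) for s in pop])]
```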
105

Towards scalable solid-state spin qubits and quantum simulation of thermal states

Warren, Ada Meghan 12 June 2024 (has links)
The last forty years have seen an astounding level of progress in the field of quantum computing. Rapidly improving techniques for fabricating and controlling devices, increasingly refined theoretical models, and innovative quantum computing algorithms have allowed us to pass a number of important milestones on the path towards fault-tolerant, general-purpose quantum computing. There remains, however, uncertainty regarding the feasibility and logistics of scaling quantum computing platforms to useful sizes. A great deal of work remains to be done in developing sophisticated control techniques, designing scalable quantum information processing architectures, and creating resource-efficient algorithms. This dissertation is a collection of seven manuscripts organized into three sections which aim to contribute to these efforts. In the first section, we explore quantum control techniques for exchange-coupled solid-state electronic spin qubits in arrays of gate-defined quantum dots. We start by demonstrating theoretically the existence of a discrete time crystal phase in finite Heisenberg spin chains. We present driving pulses that can be used to induce time crystalline behavior and probe the conditions under which this behavior can exist, finding that it should be realizable with current experimental capabilities. Next, we use a correspondence between quantum time evolution and geometric space curves to design fast, high-fidelity entangling gates in two-spin double quantum dots. In the second section, we study systems of quantum dot spin qubits coupled to one another via mutual coupling to superconducting microwave resonators. We start with two qubits, developing and refining an effective model of resonator-mediated entangling interactions, and then use that model to design fast, long-distance, high-fidelity entangling gates which are robust to environmental noise. We then extend the model to a system of three qubits coupled by a combination of short-range exchange interactions and long-range resonator-mediated interactions, and numerically demonstrate that previously-developed protocols can be used to realize both short- and long-range entangling operations. The final section investigates adaptive variational algorithms for efficient preparation of thermal Gibbs states on a quantum computer, a difficult task with a number of important applications. We suggest a novel objective function which can be used for variational Gibbs state preparation, but which requires fewer resources to measure than the often-used Gibbs free energy. We then introduce and characterize two variational algorithms using this objective function which adaptively construct variational ansätze for Gibbs state preparation. / Doctor of Philosophy / The computers we have now are able to perform computations by storing information in bits (units of memory which can take on either of two values, e.g. 0 or 1) and then comparing and modifying the values of these bits according to a simple set of logical rules. The logic these computers use is suited to a universe that obeys the laws of classical mechanics, which was our best theory of physics prior to the 20th century, but the last 120 years have seen a radical shift in our understanding of nature. We now know that nature is much better described by the laws of quantum mechanics, which include a great deal of surprising and unintuitive non-classical phenomena.
The aim of quantum computing is to use our improved understanding of nature to design and build a new kind of computer which stores information in the states of quantum bits ("qubits") and then compares and modifies the combined state of these qubits using a logic adapted to the laws of quantum mechanics. By leveraging the quantum nature of reality, these quantum computers are capable of performing certain computations faster and more efficiently than is possible using classical computers. The prospect of faster computing has inspired a massive effort to develop useful quantum computers, and the last forty years have seen impressive progress towards this goal, but there is a great deal left to do. Current quantum computing devices are too sensitive to their surroundings and far too error-prone to do useful computations. To reach tolerable error rates, we need to develop better devices and better methods for controlling those devices. Meanwhile, although several different device platforms are being continually developed, none of them currently operates with a collection of qubits anywhere near as large as the billions of bits our classical computers are able to use. It is not yet clear that practical scaling of these platforms up to that level is even possible, let alone how we can do so. Furthermore, only a handful of promising quantum algorithms have been discovered, and the efficiency of many is questionable at best. We have much that we still need to learn about what quantum computers can do and how best to use them. This dissertation is a collection of seven papers arranged into three sections, all attempting to help address some of these issues. In the first two sections, we focus on one promising type of quantum computing platform -- solid-state electronic spin qubits. We introduce new methods for quickly performing quantum logic operations in these platforms, we suggest protocols for making these systems exhibit novel and potentially useful behavior, and we characterize and design control methods for a device design which might facilitate scaling up to large numbers of qubits. In the final section, we turn our attention to quantum software, and present two algorithms for using quantum computers to efficiently simulate physical systems at a fixed temperature.
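The Gibbs-state preparation problem in the final section rests on the standard variational characterization of the thermal state, stated here for context (the thesis's novel objective replaces the free energy F with something cheaper to measure):

```latex
\rho_\beta \;=\; \frac{e^{-\beta H}}{\operatorname{Tr}\, e^{-\beta H}}
\;=\; \arg\min_{\rho} \, F(\rho),
\qquad
F(\rho) \;=\; \operatorname{Tr}(\rho H) \;-\; \beta^{-1} S(\rho),
\qquad
S(\rho) \;=\; -\operatorname{Tr}(\rho \ln \rho).
```

A variational algorithm prepares a parametrized state ρ(θ) and drives the chosen objective down; the difficulty is that the entropy term S(ρ) is expensive to estimate on hardware, which motivates objectives that avoid measuring it directly.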
106

Deep generative models for natural language processing

Miao, Yishu January 2017 (has links)
Deep generative models are essential to Natural Language Processing (NLP) due to their outstanding ability to use unlabelled data, to incorporate abundant linguistic features, and to learn interpretable dependencies among data. As the structure becomes deeper and more complex, having an effective and efficient inference method becomes increasingly important. In this thesis, neural variational inference is applied to carry out inference for deep generative models. While traditional variational methods derive an analytic approximation for the intractable distributions over latent variables, here we construct an inference network conditioned on the discrete text input to provide the variational distribution. Powerful neural networks are able to approximate complicated non-linear distributions and open up possibilities for more interesting and complicated generative models. Therefore, we develop the potential of neural variational inference and apply it to a variety of models for NLP with continuous or discrete latent variables. This thesis is divided into three parts. Part I introduces a generic variational inference framework for generative and conditional models of text. For continuous or discrete latent variables, we apply a continuous reparameterisation trick or the REINFORCE algorithm to build low-variance gradient estimators. To further explore Bayesian non-parametrics in deep neural networks, we propose a family of neural networks that parameterise categorical distributions with continuous latent variables. Using the stick-breaking construction, an unbounded categorical distribution is incorporated into our deep generative models, which can be optimised by stochastic gradient back-propagation with a continuous reparameterisation. Part II explores continuous latent variable models for NLP. Chapter 3 discusses the Neural Variational Document Model (NVDM): an unsupervised generative model of text which aims to extract a continuous semantic latent variable for each document. In Chapter 4, neural topic models modify the neural document models by parameterising categorical distributions with continuous latent variables, so that the topics are explicitly modelled by discrete latent variables. The models are further extended to neural unbounded topic models with the help of the stick-breaking construction, and a truncation-free variational inference method is proposed based on a Recurrent Stick-breaking construction (RSB). Chapter 5 describes the Neural Answer Selection Model (NASM) for learning a latent stochastic attention mechanism to model the semantics of question-answer pairs and predict their relatedness. Part III discusses discrete latent variable models. Chapter 6 introduces latent sentence compression models. The Auto-encoding Sentence Compression Model (ASC), as a discrete variational auto-encoder, generates a sentence by a sequence of discrete latent variables representing explicit words. The Forced Attention Sentence Compression Model (FSC) incorporates a combined pointer network biased towards the usage of words from the source sentence, which significantly improves the performance when jointly trained with the ASC model in a semi-supervised learning fashion. Chapter 7 describes the Latent Intention Dialogue Models (LIDM), which employ a discrete latent variable to learn underlying dialogue intentions. Additionally, the latent intentions can be interpreted as actions guiding the generation of machine responses, which could be further refined autonomously by reinforcement learning. Finally, Chapter 8 summarises our findings and directions for future work.
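The two gradient estimators mentioned in Part I can be contrasted on a toy integrand. The sketch below (a Gaussian variational distribution and a quadratic f, chosen purely for illustration and unrelated to the thesis's models) computes the same gradient both ways:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.5, 1.0                    # parameters of q(z) = N(mu, sigma^2)
f = lambda z: (z - 2.0) ** 2            # integrand whose expectation we differentiate
eps = rng.standard_normal(100_000)
z = mu + sigma * eps                    # reparameterised samples

# Reparameterisation: push d/dmu inside the expectation via z = mu + sigma*eps,
# so the estimator is f'(z) * dz/dmu with dz/dmu = 1.
grad_reparam = np.mean(2 * (z - 2.0))

# REINFORCE (score function): E[f(z) * d log q(z)/d mu],
# with d log q/d mu = (z - mu) / sigma^2.
grad_reinforce = np.mean(f(z) * (z - mu) / sigma**2)

print(grad_reparam, grad_reinforce)     # both estimate d E[f(z)]/d mu = 2*(mu - 2)
```

Both estimators are unbiased, but the score-function (REINFORCE) estimate typically has much higher variance, which is why the reparameterisation trick is preferred whenever the latent variable is continuous and REINFORCE is reserved for discrete latents.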
107

Utilisation de l'élargissement d'opérateurs maximaux monotones pour la résolution d'inclusions variationnelles / Using the enlargement of maximal monotone operators for solving variational inclusions

Nagesseur, Ludovic 30 October 2012 (has links)
This thesis is devoted to solving a basic problem of variational analysis: the search for zeros of maximal monotone operators in a Hilbert space. We first focus on the case of the extended sum of two maximal monotone operators; finding a zero of this operator is a problem with a sparse bibliography. We propose a modified version of the forward-backward splitting algorithm that uses, at each iteration, the epsilon-enlargement of a maximal monotone operator in order to construct a solution. We then study the convergence of a new bundle algorithm for constructing a zero of an arbitrary maximal monotone operator in a finite-dimensional space. This algorithm involves a double polyhedral approximation of the epsilon-enlargement of the operator considered.
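The classical forward-backward iteration that the thesis modifies alternates an explicit step on one operator with a resolvent step on the other. A minimal sketch for the special case A = ∇f with f smooth and B = ∂(λ‖·‖₁), where the resolvent is the soft-thresholding operator; the epsilon-enlargement modification studied in the thesis is not shown:

```python
import numpy as np

def forward_backward(grad_f, prox_g, x0, step, n_iter=500):
    """Find a zero of A + B, taking A = grad_f via a forward (explicit)
    step and B via its resolvent prox_g (the backward step)."""
    x = x0
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Example: lasso, f(x) = 0.5*||M x - b||^2 and g(x) = lam*||x||_1.
rng = np.random.default_rng(0)
M, b, lam = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.1
grad_f = lambda x: M.T @ (M @ x - b)
# prox of t*lam*||.||_1 is componentwise soft-thresholding
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)
step = 1.0 / np.linalg.norm(M, 2) ** 2   # step size below 2/L, L = Lipschitz constant of grad_f
x_star = forward_backward(grad_f, soft, np.zeros(5), step)
```

In the thesis's setting, the exact operator evaluations above are relaxed to points of the epsilon-enlargement, which tolerates inexact computations at each iteration.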
108

Credit Card Transaction Fraud Detection Using Neural Network Classifiers / Detektering av bedrägliga korttransaktioner m.h.a neurala nätverk

Nazeriha, Ehsan January 2023 (has links)
With the increasing use of credit card payments, credit card fraud has also been increasing, so a fast and accurate fraud detection system is vital for banks. To address the fraud detection problem, different machine learning classifiers were designed and trained on a credit card transaction dataset. However, the dataset is heavily imbalanced, which degrades classifier performance. To resolve this issue, the generative methods Generative Adversarial Network (GAN), Variational Autoencoder (VAE) and Synthetic Minority Oversampling Technique (SMOTE) were used to generate synthetic samples for the minority class in order to achieve a more balanced dataset. The main purpose of this study is to evaluate the generative methods and investigate the impact of their generated minority samples on the classifiers. The results indicate that the GAN does not outperform the other generative methods: the samples generated by the VAE were the most effective for three out of five classifiers. The validation and histograms of the generated samples also indicate that the VAE captured the distribution of the data better than SMOTE and the GAN. A suggestion for improving on this work is to perform feature engineering on the dataset, for instance using correlation analysis to determine which features have the greatest impact on classification, dropping the less important features, and training the generative methods and classifiers on the trimmed-down samples.
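Of the three generators compared, SMOTE is simple enough to sketch in full: each synthetic fraud sample interpolates between a minority-class point and one of its k nearest minority neighbours. A minimal version, assuming a purely numeric feature matrix (the study's exact configuration is not reproduced here):

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples from X_min of shape (n, d)."""
    if rng is None:
        rng = np.random.default_rng()
    # pairwise squared distances within the minority class
    d2 = ((X_min[:, None, :] - X_min[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    neighbours = np.argsort(d2, axis=1)[:, :k]     # k nearest minority neighbours
    i = rng.integers(0, len(X_min), n_new)         # pick a base sample
    j = neighbours[i, rng.integers(0, k, n_new)]   # and one of its neighbours
    gap = rng.random((n_new, 1))
    return X_min[i] + gap * (X_min[j] - X_min[i])  # interpolate between the two
```

GAN- and VAE-based oversampling replace this interpolation with samples drawn from a learned generative model, which is what allowed the VAE to track the minority-class distribution more closely in this study.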
109

Variational Inference for Data-driven Stochastic Programming

Prateek Jaiswal (11210091) 30 July 2021 (has links)
Stochastic programs are standard models for decision-making under uncertainty and have been extensively studied in the operations research literature. In general, stochastic programming involves minimizing an expected cost function, where the expectation is with respect to fully specified stochastic models that quantify the aleatoric or 'inherent' uncertainty in the decision-making problem. In practice, however, the stochastic models are unknown but can be estimated from data, introducing an additional epistemic uncertainty into the decision-making problem. The Bayesian framework provides a coherent way to quantify the epistemic uncertainty through the posterior distribution by combining prior beliefs of the decision-makers with the observed data. Bayesian methods have been used for data-driven decision-making in various applications such as inventory management, portfolio design, machine learning, optimal scheduling, and staffing.

Bayesian methods are challenging to implement, mainly because the posterior is computationally intractable, necessitating the computation of approximate posteriors. Broadly speaking, there are two classes of methods in the literature for approximate posterior inference. The first are sampling-based methods such as Markov chain Monte Carlo. Sampling-based methods are theoretically well understood, but they suffer from issues like high variance and poor scalability to high-dimensional problems, and they have complex diagnostics. Consequently, we propose to use optimization-based methods, collectively known as variational inference (VI), that use information projections to compute an approximation to the posterior. Empirical studies have shown that VI methods are computationally faster and easily scalable to higher-dimensional problems and large datasets. However, the theoretical guarantees of these methods are not well understood, and VI methods are empirically and theoretically less explored in the decision-theoretic setting.

In this thesis, we first propose a novel VI framework for risk-sensitive data-driven decision-making, which we call risk-sensitive variational Bayes (RSVB). In RSVB, we jointly compute a risk-sensitive approximation to the 'true' posterior and the optimal decision by solving a minimax optimization problem. The RSVB framework includes the naive approach of first computing a VI approximation to the true posterior and then using it in place of the true posterior for decision-making. We show that the RSVB approximate posterior and the corresponding optimal value and decision rules are asymptotically consistent, and we also compute their rate of convergence. We illustrate our theoretical findings in both parametric and nonparametric settings with the help of three examples: the single- and multi-product newsvendor models and Gaussian process classification. Second, we present the Bayesian joint chance-constrained stochastic program (BJCCP) for modeling decision-making problems with epistemically uncertain constraints. We discover that using VI methods for posterior approximation can ensure the convexity of the feasible set in the BJCCP, unlike sampling-based methods, and thus propose a VI approximation for the BJCCP. We also show that the optimal value computed using the VI approximation of the BJCCP is statistically consistent. Moreover, we derive the rate of convergence of the optimal value and compute the rate at which a VI approximate solution of the BJCCP is feasible under the true constraints. We demonstrate the utility of our approach on an optimal staffing problem for an M/M/c queue. Finally, this thesis also contributes to the growing literature on the statistical performance of VI methods. In particular, we establish the frequentist consistency of an approximate posterior computed using a well-known VI method that computes an approximation to the posterior distribution by minimizing the Rényi divergence from the 'true' posterior.
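The 'naive approach' folded into the RSVB framework, i.e. compute an approximate posterior and then optimize the decision against it, is easy to illustrate on the newsvendor example. The sketch below uses hypothetical costs and a stand-in gamma-exponential posterior predictive in place of a fitted VI approximation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for samples from a (variational) posterior predictive of demand:
# exponential demand with a posterior-uncertain rate parameter.
rates = rng.gamma(shape=20, scale=1 / 100, size=2000)   # posterior samples of the rate
demand = rng.exponential(1 / rates)                     # posterior predictive demand samples

under, over = 4.0, 1.0    # hypothetical cost per unit short / per unit unsold

def expected_cost(q):
    return np.mean(under * np.maximum(demand - q, 0)
                   + over * np.maximum(q - demand, 0))

# For this piecewise-linear cost the minimizer is the critical fractile
# under/(under + over) of the (approximate) predictive demand distribution.
q_star = np.quantile(demand, under / (under + over))
print(q_star, expected_cost(q_star))
```

RSVB improves on this plug-in recipe by folding the decision loss into the posterior approximation itself, so the variational fit is tilted toward the regions of parameter space that matter for the decision.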
110

A duality approach to gap functions for variational inequalities and equilibrium problems

Lkhamsuren, Altangerel 25 July 2006 (has links)
This work aims to investigate some applications of the conjugate duality for scalar and vector optimization problems to the construction of gap functions for variational inequalities and equilibrium problems. The basic idea of the approach is to reformulate variational inequalities and equilibrium problems into optimization problems depending on a fixed variable, which allows us to apply duality results from optimization problems. Based on some perturbations, first we consider the conjugate duality for scalar optimization. As applications, duality investigations for the convex partially separable optimization problem are discussed. Afterwards, we concentrate our attention on some applications of conjugate duality for convex optimization problems in finite and infinite-dimensional spaces to the construction of a gap function for variational inequalities and equilibrium problems. To verify the properties in the definition of a gap function weak and strong duality are used. The remainder of this thesis deals with the extension of this approach to vector variational inequalities and vector equilibrium problems. By using the perturbation functions in analogy to the scalar case, different dual problems for vector optimization and duality assertions for these problems are derived. This study allows us to propose some set-valued gap functions for the vector variational inequality. Finally, by applying the Fenchel duality on the basis of weak orderings, some variational principles for vector equilibrium problems are investigated.
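For readers new to gap functions, the prototype that the duality construction generalizes is Auslender's gap function for the variational inequality VI(F, K), i.e. find x ∈ K with ⟨F(x), y − x⟩ ≥ 0 for all y ∈ K. The defining properties that the thesis verifies via weak and strong duality are (standard definitions, stated for context):

```latex
\gamma(x) \;=\; \sup_{y \in K} \, \langle F(x),\, x - y \rangle,
\qquad
\gamma(x) \ge 0 \quad \forall x \in K,
\qquad
\gamma(x^\ast) = 0,\; x^\ast \in K \;\Longleftrightarrow\; x^\ast \text{ solves } \mathrm{VI}(F,K).
```

Reformulating the variational inequality as an optimization problem in a fixed variable, as the abstract describes, lets conjugate duality produce alternative gap functions of this type whose properties follow from weak and strong duality rather than direct verification.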
