  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1111

Novel Quantum Chemistry Algorithms Based on the Variational Quantum Eigensolver

Grimsley, Harper Rex 03 February 2023 (has links)
The variational quantum eigensolver (VQE) approach is currently one of the most promising strategies for simulating chemical systems on quantum hardware. In this work, I will describe a new quantum algorithm and a new set of classical algorithms based on VQE. The quantum algorithm, ADAPT-VQE, shows promise in mitigating many of the known limitations of VQEs: Ansatz ambiguity, local minima, and barren plateaus are all addressed to varying degrees by ADAPT-VQE. The classical algorithm family, O2DX-UCCSD, draws inspiration from VQEs, but is classically solvable in polynomial time. This group of algorithms yields equations similar to those of the linearized coupled cluster theory (LCCSD) but is more systematically improvable and, for X = 3 or X = ∞, can break single bonds, which LCCSD cannot do. The overall aim of this work is to showcase the richness of the VQE algorithm and the breadth of its derivative applications. / Doctor of Philosophy / A core goal of quantum chemistry is to compute accurate ground-state energies for molecules. Quantum computers promise to simulate quantum systems in ways that classical computers cannot. It is believed that quantum computers may be able to characterize molecules that are too large for classical computers to treat accurately. One approach to this is the variational quantum eigensolver, or VQE. The idea of a VQE is to use a quantum computer to measure the molecular energy associated with a quantum state which is parametrized by some classical set of parameters. A classical computer will use a classical optimization scheme to update those parameters before the quantum computer measures the energy again. This loop is expected to minimize the quantum resources needed for a quantum computer to be useful, since much of the work is outsourced to classical computers. In this work, I describe two novel algorithms based on the VQE which solve some of its problems.
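The quantum-classical loop the abstract describes — a parametrized trial state whose energy is measured and then fed to a classical optimizer — can be sketched entirely classically on a toy two-level Hamiltonian. The matrix values, the one-parameter ansatz, and the use of SciPy's optimizer are illustrative assumptions, not the thesis's algorithm:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy 2x2 "molecular" Hamiltonian (hypothetical numbers, not from the thesis).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def ansatz(theta):
    # One-parameter trial state spanning all real unit vectors in 2D.
    return np.array([np.cos(theta), np.sin(theta)])

def energy(theta):
    # The "quantum" step: the expectation value <psi|H|psi>.
    psi = ansatz(theta)
    return psi @ H @ psi

# The "classical" step: an outer optimizer updates theta between measurements.
result = minimize_scalar(energy, bounds=(0, np.pi), method="bounded")
exact_ground = np.linalg.eigvalsh(H)[0]
print(result.fun, exact_ground)  # variational minimum matches the exact ground energy
```

Because the ansatz here spans every real state of the two-level system, the variational minimum coincides with the exact ground energy; on real hardware the interesting questions are exactly the ones the abstract names, such as how to choose the ansatz and avoid local minima.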
1112

The Efficient Computation of Field-Dependent Molecular Properties in the Frequency and Time Domains

Peyton, Benjamin Gilbert 31 May 2022 (has links)
The efficient computation of dynamic (time-dependent) molecular properties is a broad field with numerous applications in aiding molecular synthesis and design, with a particular prevalence in spectroscopic predictions. Typical methods for computing the response of a molecular system to an electromagnetic field (EMF) consider a quantum mechanical description of the molecule and a classical approximation for the EMF. Methods for describing light-matter interactions with high-accuracy electronic structure methods, such as coupled cluster (CC), are discussed, with a focus on improving the efficiency of such methods. The CC method suffers from high-degree polynomial scaling. In addition to the ground-state calculation, computing dynamic properties requires the description of sensitive excited-state effects. The cost of such methods often prohibits the accurate calculation of response properties for systems of significant importance, such as large-molecule drug candidates or chiral species present in biological systems. While the literature is ripe with reduced-scaling methods for CC ground-state calculations, considerably fewer approaches have been applied to excited-state properties, with even fewer still providing adequate results for realistic systems. This work presents three studies on the reduction of the cost of molecular property evaluations, in the hopes of closing this gap in the literature and widening the scope of current theoretical methods. There are two main ways of simulating time-dependent light-matter interactions: one may consider these effects in the frequency domain, where the response of the system to an EMF is computed directly; or, the response may be considered explicitly in the time domain, where wave function (or density) parameters can be propagated in time and examined in detail. Each methodology has unique advantages and computational bottlenecks.
The first two studies focus on frequency-domain calculations, and employ fragmentation and machine-learning techniques to reduce the cost of single-molecule calculations or sets of calculations across a series of geometric conformations. The third study presents a novel application of the local correlation technique to real-time CC calculations, and highlights deficiencies and possible solutions to the approach. / Doctor of Philosophy / Theoretical chemistry plays a key role in connecting experimental results with physical interpretation. Paramount to the success of theoretical methods is the ability to predict molecular properties without the need for costly high-throughput synthesis, aiding in the determination of molecular structure and the design of new materials. Light-matter interactions, which govern spectroscopic techniques, are particularly complicated, and sensitive to the theoretical tools employed in their prediction. Compounding the issue of accuracy is one of efficiency: accurate theoretical methods typically incur steep scaling of computational cost (memory and processor time) with respect to the size of the system. An important aspect in improving the efficiency of these methods is understanding the nature of light-matter interactions at a quantum level. Many unanswered questions remain, such as, "Can light-matter interactions be thought of as a sum of interactions between smaller fragments of the system?" and "Can conventional methods of accelerating ground-state calculations be expected to perform well for spectroscopic properties?" The present work seeks to answer these questions through three studies, focusing on improving the efficiency of these techniques, while simultaneously addressing their fundamental flaws and providing reasonable alternatives.
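The complementarity of the frequency- and time-domain pictures described above can be illustrated with a toy two-level system: propagating the wave function explicitly in time and Fourier-transforming the resulting dipole signal recovers the excitation energy that a frequency-domain calculation would target directly. All parameters here are hypothetical, not taken from the dissertation:

```python
import numpy as np

# Two-level toy system (hypothetical parameters): excitation energy omega.
omega = 2.0
E = np.array([0.0, omega])               # eigen-energies
c0 = np.array([1.0, 1.0]) / np.sqrt(2)   # superposition prepared by a weak pulse

dt, nsteps = 0.05, 4096
t = np.arange(nsteps) * dt
# Time-domain route: propagate the amplitudes and record the dipole signal.
ct = c0[None, :] * np.exp(-1j * E[None, :] * t[:, None])
mu = 2 * np.real(np.conj(ct[:, 0]) * ct[:, 1])  # <psi|sigma_x|psi>, unit dipole matrix element

# Frequency-domain view: the spectrum of mu(t) peaks at the excitation energy.
spec = np.abs(np.fft.rfft(mu))
freqs = 2 * np.pi * np.fft.rfftfreq(nsteps, d=dt)
peak = freqs[np.argmax(spec)]
print(peak)  # close to omega = 2.0
```

The trade-off the abstract points to is visible even here: the time-domain route needs many propagation steps to resolve a single frequency sharply, while a frequency-domain calculation targets that response directly.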
1113

Integrative Perspectives of Academic Motivation

Chittum, Jessica Rebecca 17 March 2015 (has links)
My overall objective in this dissertation was to develop more integrative perspectives of several aspects of academic motivation. Rarely have researchers and theorists examined a more comprehensive model of academic motivation that pools multiple constructs that interact in a complex and dynamic fashion (Kaplan, Katz, and Flum, 2012; Turner, Christensen, Kackar-Cam, Trucano, and Fulmer, 2014). The more common trend in motivation research and theory has been to identify and explain only a few motivation constructs and their linear relationships rather than examine complex relationships involving 'continuously emerging systems of dynamically interrelated components' (Kaplan et al., 2014, para. 4). In this dissertation, my co-author and I focused on a more integrative perspective of academic motivation by first reviewing varying characterizations of one motivation construct (Manuscript 1) and then empirically testing dynamic interactions among multiple motivation constructs using a person-centered methodological approach (Manuscript 2). Within the first manuscript (Chapter 2), a theoretical review paper, we summarized multiple perspectives of the need for autonomy and similar constructs in academic motivation, primarily autonomy in self-determination theory, autonomy supports, and choice. We provided an integrative review and extrapolated practical teaching implications. We concluded with recommendations for researchers and instructors, including a call for more integrated perspectives of academic motivation and autonomy that focus on complex and dynamic patterns in individuals' motivational beliefs. Within the second manuscript (Chapter 3), we empirically investigated students' motivation in science class as a complex, dynamic, and context-bound phenomenon that incorporates multiple motivation constructs. 
Following a person-centered approach, we completed cluster analyses of students' perceptions of five well-known motivation constructs (autonomy, utility value, expectancy, interest, and caring) in science class to determine whether the students grouped into meaningful 'motivation profiles.' Five stable profiles emerged: (1) low motivation; (2) low value and high support; (3) somewhat high motivation; (4) somewhat high empowerment and values, and high support; and (5) high motivation. As this study serves as a proof of concept, we concluded by describing the five clusters. Together, these studies represent a focus on more integrative and person-centered approaches to studying and understanding academic motivation. / Ph. D.
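The person-centered clustering step can be sketched minimally: rather than relating constructs linearly across the sample, students are grouped by their joint pattern across all constructs at once. The synthetic ratings and the hand-rolled k-means below are illustrative stand-ins for the cluster analyses actually used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic student ratings on five constructs (hypothetical data):
# autonomy, utility value, expectancy, interest, caring.
low  = rng.normal(2.0, 0.3, size=(40, 5))   # a low-motivation group
high = rng.normal(5.0, 0.3, size=(40, 5))   # a high-motivation group
X = np.vstack([low, high])

def kmeans(X, k=2, iters=50):
    # Farthest-point initialization keeps the sketch deterministic.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min(((X[:, None] - np.array(centers)[None]) ** 2).sum(-1), axis=1)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each student to the nearest profile center, then update centers.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(X)
print(sorted(c.mean() for c in centers))  # one low- and one high-motivation profile
```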
1114

Cluster-Based Profile Monitoring in Phase I Analysis

Chen, Yajuan 26 March 2014 (has links)
Profile monitoring is a well-known approach used in statistical process control where the quality of the product or process is characterized by a profile, or a relationship between a response variable and one or more explanatory variables. Profile monitoring is conducted over two phases, labeled as Phase I and Phase II. In Phase I profile monitoring, regression methods are used to model each profile and to detect the possible presence of out-of-control profiles in the historical data set (HDS). The out-of-control profiles can be detected by using the statistic. However, previous methods of calculating the statistic are based on using all the data in the HDS, including the data from the out-of-control process. Consequently, the performance of this method can be distorted if the HDS contains data from the out-of-control process. This work provides a new profile monitoring methodology for Phase I analysis. The proposed method, referred to as the cluster-based profile monitoring method, incorporates a cluster analysis phase before calculating the statistic. Before introducing our proposed cluster-based method in profile monitoring, this cluster-based method is demonstrated to work efficiently in robust regression, referred to as cluster-based bounded influence regression, or CBI. It will be demonstrated that the CBI method provides a robust, efficient, and high-breakdown regression parameter estimator. The CBI method first represents the data space via a special set of points, referred to as anchor points. Then a collection of single-point-added ordinary least squares regression estimators forms the basis of a metric used in defining the similarity between any two observations. Cluster analysis then yields a main cluster containing at least half the observations, with the remaining observations comprising one or more minor clusters.
An initial regression estimator arises from the main cluster, with a group-additive DFFITS argument used to carefully activate the minor clusters through a bounded influence regression framework. CBI achieves a 50% breakdown point, is regression, scale, and affine equivariant, and is asymptotically normal in distribution. Case studies and Monte Carlo results demonstrate the performance advantage of CBI over other popular robust regression procedures regarding coefficient stability, scale estimation, and standard errors. The cluster-based method in Phase I profile monitoring first replaces the data from each sampled unit with an estimated profile, using some appropriate regression method. The estimated parameters for the parametric profiles are obtained from parametric models, while the estimated parameters for the nonparametric profiles are obtained from the p-spline model. The cluster phase clusters the profiles based on their estimated parameters, and this yields an initial main cluster which contains at least half the profiles. The initial estimated parameters for the population average (PA) profile are obtained by fitting a mixed model (parametric or nonparametric) to those profiles in the main cluster. Profiles that are not contained in the initial main cluster are iteratively added to the main cluster provided their statistics are "small," and the mixed model (parametric or nonparametric) is used to update the estimated parameters for the PA profile. Those profiles contained in the final main cluster are considered as resulting from the in-control process, while those not included are considered as resulting from an out-of-control process. This cluster-based method has been applied to monitor both parametric and nonparametric profiles.
A simulated example, a Monte Carlo study, and an application to a real data set demonstrate the details of the algorithm, and the performance advantage of this proposed method over a non-cluster-based method is demonstrated with respect to more accurate estimates of the PA parameters and improved classification performance criteria. When the profiles can be represented by vectors, the profile monitoring process is equivalent to the detection of multivariate outliers. For this reason, we also compared our proposed method to a popular method used to identify outliers when dealing with a multivariate response. Our study demonstrated that when the out-of-control process corresponds to a sustained shift, the cluster-based method using the successive difference estimator is clearly the superior method, among those methods we considered, based on all performance criteria. In addition, the influence of accurate Phase I estimates on the performance of Phase II control charts is presented to show the further advantage of the proposed method. A simple example and Monte Carlo results show that more accurate estimates from Phase I provide more efficient Phase II control charts. / Ph. D.
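The core Phase I idea — fit each profile, cluster the fitted parameters, and let the largest cluster define the in-control set — can be sketched simply. This assumes linear profiles, ordinary least squares fits, and Ward clustering via SciPy; the data are invented for illustration and the iterative reassignment step of the full method is omitted:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 20)

# Synthetic HDS (hypothetical numbers): 8 in-control linear profiles
# and 2 out-of-control profiles with a shifted slope.
profiles = [2.0 * x + 1.0 + rng.normal(0, 0.05, x.size) for _ in range(8)]
profiles += [5.0 * x + 1.0 + rng.normal(0, 0.05, x.size) for _ in range(2)]

# Step 1: replace each profile by its estimated regression parameters.
params = np.array([np.polyfit(x, y, 1) for y in profiles])  # (slope, intercept)

# Step 2: cluster the parameter estimates; the largest cluster holds
# at least half the profiles and seeds the in-control (PA) estimate.
Z = linkage(params, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
main = np.argmax(np.bincount(labels))          # label of the largest cluster
in_control = np.where(labels == main)[0]
print(len(in_control), params[in_control].mean(axis=0))  # 8 profiles, slope near 2
```

Estimating the population average profile only from the main cluster is exactly what protects the Phase I estimates from contamination by the two shifted profiles.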
1115

Models and algorithms for a flexible manufacturing system

Desai, Rajendra January 1987 (has links)
This thesis considers a Flexible Manufacturing System (FMS) composed of programmable machine tools which are capable of performing multiple operations and which are interconnected by computer-controlled automated material handling equipment. The specific problem addressed is a job-shop type of situation in which at least a given number of each type of job needs to be performed on a given set of machines. The flexibility in the system arises in the form that each job can be performed in a variety of ways, with each possible manner of performing it, called an Alternate Routing Combination (ARC), being defined by specifying the number of operations needed and the associated machine sequence. The problem is to select a set of jobs and their associated ARCs to be performed, and to schedule their operations on the machines so as to optimize various objectives, such as minimizing makespan, maximizing machine utilization, or minimizing total flowtime. This problem is mathematically modeled, and heuristic algorithms are presented along with computational results for the case of minimizing the makespan. / M.S.
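A toy brute-force sketch of the ARC-selection decision follows. The instance data are invented, and operation sequencing is ignored in favor of a maximum-machine-workload lower bound on the makespan, which is a deliberate simplification of the thesis's scheduling model:

```python
from itertools import product

# Toy instance (hypothetical data): each job has alternate routing
# combinations (ARCs), given as {machine: processing_time} requirements.
jobs = [
    [{"M1": 3}, {"M2": 4}],           # job 1: run on M1, or on M2
    [{"M1": 3}, {"M2": 2}],           # job 2
    [{"M1": 2, "M2": 1}, {"M2": 4}],  # job 3: two-operation ARC or one-operation ARC
]

def makespan_lower_bound(choice):
    # Ignore sequencing: the busiest machine's workload bounds the makespan.
    load = {}
    for arc in choice:
        for machine, t in arc.items():
            load[machine] = load.get(machine, 0) + t
    return max(load.values())

# Brute-force ARC selection over the (small) combinatorial space.
best = min(product(*jobs), key=makespan_lower_bound)
print(makespan_lower_bound(best))  # 5 for this instance
```

Even this stripped-down version shows why heuristics are needed: the number of ARC combinations grows multiplicatively with the number of jobs, so exhaustive enumeration quickly becomes infeasible.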
1116

Local Correlation Approaches and Coupled Cluster Linear Response Theory

McAlexander, Harley R. 15 June 2015 (has links)
Quantum mechanical methods are becoming increasingly useful and applicable tools to complement and support experiment. Nonetheless, some barriers to further applications of theoretical models still remain. A coupled cluster singles and doubles (CCSD) calculation, a reliable ab initio method, scales approximately on the order of 𝑂(𝑁⁶), where 𝑁 is a measure of the system size. This unfortunately limits the use of such high-accuracy methods to relatively small systems. Coupled cluster property calculations must be used in conjunction with reduced-scaling methods in order to broaden the range of applications to larger systems. In this work, we introduce some of the underlying theory behind such calculations and test the performance of several local correlation techniques for polarizabilities, optical rotations, and excited state properties. In general, when the computational cost is significantly reduced, the necessary accuracy is lost. Polarizabilities are less sensitive to the truncation schemes than optical rotations, and the excitation data is often only in agreement with the canonical result for the first few excited states. Additionally, we present a novel application of equation-of-motion coupled cluster singles and doubles to simulated circularly polarized luminescence spectra of eight chiral ketones. Both the absorption in the ground state and emission from the excited states were examined. Extensive geometry analyses were performed, revealing that structures optimized at the density functional theory level were adequate for the calculation of accurate coupled cluster excitation data. / Ph. D.
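The practical weight of 𝑂(𝑁⁶) scaling is easy to make concrete with arithmetic: doubling the system size multiplies the cost of a CCSD calculation by 2⁶ = 64, and tripling it by 3⁶ = 729, which is why reduced-scaling methods are essential for larger systems.

```python
# Back-of-envelope consequence of O(N^6) scaling for CCSD:
# relative cost when the system size grows by a factor n_ratio.
def relative_cost(n_ratio, power=6):
    return n_ratio ** power

print(relative_cost(2))  # 64
print(relative_cost(3))  # 729
```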
1117

Accurate Prediction of Chiroptical Properties

Mach, Taylor Joseph 16 June 2014 (has links)
Accurate theoretical predictions of optical rotation are of substantial utility to the chemical community, enabling the determination of absolute configuration without the need for potentially lengthy total synthesis. The requirements for robust calculation of gas-phase optical rotation are well understood, but too expensive for routine use. In an effort to reduce this cost we have examined the performance of the LPol and ORP basis sets, created for use in density functional theory calculations of optical rotation, finding that at the coupled cluster level of theory they perform the same or better than comparably sized general basis sets that are often used. We have also examined the performance of a perturbational approach to inclusion of explicit solvent molecules in an effort to extend the calculation of response properties from the gas phase to the condensed phase. This N-body approach performs admirably for interaction energies and even dipole moments but breaks down for optical rotation, exhibiting large basis set superposition errors and requiring higher-order terms in the expansion to provide reasonable accuracy. In addition, we have begun the process of implementing a gauge invariant version of coupled cluster response properties to address the fundamentally unphysical lack of gauge invariance in coupled cluster optical rotations. Correcting this problem, which arises from the non-variational nature of the coupled cluster wavefunction, involves reformulating the response amplitude and function expressions and solving for all necessary amplitudes simultaneously. / Ph. D.
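The N-body expansion mentioned above, truncated at two-body terms, can be sketched on a toy pairwise-additive model. All energies below are invented numbers; for such a model the two-body truncation is exact by construction, which makes it a clean baseline — the breakdown the abstract describes appears only when higher-order couplings (absent here) matter:

```python
from itertools import combinations

# Toy cluster (hypothetical numbers): monomer energies and pair
# interaction energies for a solute (0) plus three solvent molecules.
E1 = {0: -1.0, 1: -0.9, 2: -0.9, 3: -0.9}
pair_int = {(0, 1): -0.10, (0, 2): -0.08, (0, 3): -0.05,
            (1, 2): -0.02, (1, 3): -0.01, (2, 3): -0.01}

def dimer_energy(i, j):
    # Supermolecular dimer energy for a strictly pairwise-additive model.
    return E1[i] + E1[j] + pair_int[(min(i, j), max(i, j))]

# Two-body (truncated N-body) expansion:
# E ~ sum_i E_i + sum_{i<j} [E_ij - E_i - E_j]
frags = list(E1)
E2body = sum(E1.values()) + sum(dimer_energy(i, j) - E1[i] - E1[j]
                                for i, j in combinations(frags, 2))
E_exact = sum(E1.values()) + sum(pair_int.values())  # exact for this model
print(E2body, E_exact)  # identical: the 2-body expansion is exact here
```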
1118

Statistical Analysis of ATM Call Detail Records

Hager, Creighton Tsuan-Ren 11 February 2000 (has links)
Network management is a problem that faces designers and operators of any type of network. Conventional methods of capacity planning or configuration management are difficult to apply directly to networks that dynamically allocate resources, such as Asynchronous Transfer Mode (ATM) networks and emerging Internet Protocol (IP) networks employing Differentiated Services (DiffServ). This work shows a method to generically classify traffic in an ATM network such that capacity planning may be possible. These methods are generally applicable to other networks that support dynamically allocated resources. In this research, Call Detail Records (CDRs) captured from a "live" ATM network were successfully classified into three traffic categories. The traffic categories correspond to three different video speeds (1152 kbps, 768 kbps, and 384 kbps) in the network. Further statistical analysis was used to characterize these traffic categories and found them to fit deterministic distributions. The statistical analysis methods were also applied to several different network planning and management functions. Three specific potential applications related to network management were examined: capacity planning, traffic modeling, and configuration management. / Master of Science
1119

A Statistical Examination of the Climatic Human Expert System, The Sunset Garden Zones for California

Logan, Ben 11 January 2008 (has links)
Twentieth Century climatology was dominated by two great figures: Wladimir Köppen and C. Warren Thornthwaite. The first carefully developed climatic parameters to match the larger world vegetation communities. The second developed complex formulas of "Moisture Factors" that provided an efficient understanding of how evapotranspiration influences plant growth and health, both for native and non-native communities. In the latter half of the Twentieth Century, the Sunset Magazine Corporation developed a purely empirical set of Garden Zones, first for California, then for the thirteen states of the West, and now for the entire nation in the National Garden Maps. The Sunset Garden Zones are well recognized and respected in the Western States for illustrating the several factors of climate that distinguish zones. But the Sunset Garden Zones have never before been digitized and examined statistically for validation of their demarcations. This thesis examines the digitized zones with reference to PRISM climate data. Variable coverages resembling those described by Sunset are extracted from the PRISM data. These variable coverages are collected for two buffered areas, one in northern California and one in southern California. The coverages are exported from ArcGIS 9.1 to SAS® where they are processed first through a Principal Component Analysis, and then the first five principal components are entered into a Ward's Hierarchical Cluster Analysis. The resulting clusters are translated back into ArcGIS as a raster coverage, where the clusters represent climatic regions. This process is quite amenable to further examination of other regions of California. / Master of Science
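The PCA-then-Ward pipeline can be sketched with NumPy and SciPy on synthetic stand-in data (the variables and values below are hypothetical, not PRISM data):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)

# Synthetic stand-in for PRISM-style climate variables (hypothetical data):
# rows are grid cells; columns might be mean temperature (C), annual
# precipitation (mm), and July diurnal range (C) for two climate regions.
coastal = rng.normal([15, 800, 5], [1, 50, 0.5], size=(50, 3))
inland  = rng.normal([25, 200, 15], [1, 50, 0.5], size=(50, 3))
X = np.vstack([coastal, inland])

# Step 1: Principal Component Analysis via SVD on standardized variables.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
scores = Xs @ Vt.T  # principal component scores

# Step 2: Ward's hierarchical clustering on the leading components.
Z = linkage(scores[:, :2], method="ward")
regions = fcluster(Z, t=2, criterion="maxclust")
print(np.bincount(regions)[1:])  # two climatic regions of 50 cells each
```

In the thesis the cluster labels are then joined back to the grid cells in ArcGIS to form a raster of climatic regions; here the labels simply recover the two synthetic regions.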
1120

A clustering model for item selection in visual search

McIlhagga, William H. January 2013 (has links)
In visual search experiments, the subject looks for a target item in a display containing different distractor items. The reaction time (RT) to find the target is measured as a function of the number of distractors (set size). RT is either constant, or increases linearly, with set size. Here we suggest a two-stage model for search in which items are first selected and then recognized. The selection process is modeled by (a) grouping items into a hierarchical cluster tree, in which each cluster node contains a list of all the features of items in the cluster, called the object file, and (b) recursively searching the tree by comparing target features to the cluster object file to quickly determine whether the cluster could contain the target. This model is able to account for both constant and linear RT versus set size functions. In addition, it provides a simple and accurate account of conjunction searches (e.g., looking for a red N among red Os and green Ns), in particular the variation in search rate as the distractor ratio is varied.
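The selection stage of this model can be sketched directly: each node's object file is the union of the features of the items beneath it, and a cluster is pruned whenever a target feature is missing from that union. Class and item names are hypothetical, chosen to mirror the conjunction-search example in the abstract:

```python
# Minimal sketch of the two-stage selection model: leaves are display
# items; internal nodes carry an "object file" (union of item features).

class Node:
    def __init__(self, items=None, children=None):
        self.items = items or []          # leaf payload: (name, feature set)
        self.children = children or []
        contents = self.items + [c for ch in self.children for c in ch.all_items()]
        self.object_file = set().union(*(f for _, f in contents))

    def all_items(self):
        return self.items + [c for ch in self.children for c in ch.all_items()]

def search(node, target_features):
    # Prune any cluster whose object file lacks a target feature; recurse otherwise.
    if not target_features <= node.object_file:
        return None
    for name, feats in node.items:
        if feats == target_features:
            return name
    for child in node.children:
        found = search(child, target_features)
        if found:
            return found
    return None

# Conjunction search: a red N among red Os and green Ns.
red_os   = Node(items=[(f"redO{i}", {"red", "O"}) for i in range(3)])
green_ns = Node(items=[(f"greenN{i}", {"green", "N"}) for i in range(3)])
target   = Node(items=[("redN", {"red", "N"})])
display  = Node(children=[red_os, green_ns, target])

print(search(display, {"red", "N"}))  # finds "redN"
```

Both distractor clusters are rejected in a single object-file comparison each, which is how the model yields fast conjunction searches without item-by-item inspection.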
