351

Exploring the Reductive Pathway for the Hydrometallurgical Production of Copper from Chalcopyrite

Vardner, Jonathan Thomas January 2021 (has links)
The high demand for copper coincides with a sharp decline in the grade of copper reserves, and as a result copper scarcities are expected to arise in the coming decades. In this work, a transformative hydrometallurgical process is developed to lower the costs of copper production and thereby sustain the use of copper throughout the global transition to renewable energy technologies. The focal point of the hydrometallurgical process is the reductive treatment of chalcopyrite, in contrast to the oxidative treatment more commonly pursued in the literature. Chalcopyrite may be reduced directly at the cathode of an electrochemical reactor; the reaction is monitored by atomic absorption spectroscopy (AAS), X-ray diffraction (XRD), and X-ray photoelectron spectroscopy (XPS). The efficiency of the electrochemical reaction is optimized by adjusting the electrode materials, applied current density, and reactor design. Chalcopyrite may also be reduced by reaction with the vanadium(II) ion, which circumvents engineering challenges associated with slurry electrodes but requires the separation and electrochemical regeneration of the vanadium(II) ion. A preliminary technoeconomic analysis suggests that both reduction pathways may be competitive with the pyrometallurgical standard for copper production. The performance of vanadium redox flow batteries (VRFBs) is hindered by the diffusion and migration of vanadium species across the separator; however, the migration of vanadium species has not been accurately measured or characterized with values of the transference numbers. In this work, models based on dilute solution theory and concentrated solution theory are developed to introduce the dimensionless ratio of migration to diffusion (M/D) to the literature. It is shown that transference numbers may be measured with high accuracy and precision for experiments conducted in the migration-dominated regime. An experimental procedure is designed to measure vanadium crossover as a function of current density for vanadium-containing electrolytes of various states of charge (SOC), states of discharge (SOD), and sulfuric acid concentrations. Model-guided design of experiment is used to estimate the transference number of the vanadium species in Nafion 117 with minimal uncertainty related to unknown or unmeasured physical properties. Markov chain Monte Carlo simulations show the relative uncertainties of the transference number estimates to be consistently less than five percent. The transference number estimates are related to the faradaic efficiency loss and capacity fade of working VRFBs operating in the migration-dominated regime. The technique used in this work may be generalized to measure salt transference numbers in novel electrochemical systems and membrane separators to inform their rational design.
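The final step above, MCMC-based uncertainty quantification of a transference number, can be illustrated with a minimal sketch. The linearized crossover model below (a diffusive baseline plus a migration term proportional to current density), the parameter values, and the flat priors are all illustrative assumptions, not the thesis's dilute- or concentrated-solution models:

```python
import numpy as np

# Simplified crossover model, an illustrative assumption rather than the
# thesis's transport models: total vanadium flux across the membrane is a
# diffusive baseline plus a migration term proportional to current density,
#   N(i) = N_diff + (t_V / (z * F)) * i
F = 96485.0   # Faraday constant, C/mol
z = 1.0       # effective charge number (illustrative)

def flux_model(i, n_diff, t_v):
    return n_diff + (t_v / (z * F)) * i

# Synthetic "measurements" in the migration-dominated regime
rng = np.random.default_rng(0)
i_data = np.linspace(100.0, 1000.0, 10)       # current densities, A/m^2
true_n_diff, true_t_v, sigma = 2e-6, 0.05, 2e-5
y = flux_model(i_data, true_n_diff, true_t_v) + rng.normal(0.0, sigma, i_data.size)

def log_post(theta):
    n_diff, t_v = theta
    if n_diff < 0.0 or not 0.0 < t_v < 1.0:   # flat priors with physical bounds
        return -np.inf
    r = y - flux_model(i_data, n_diff, t_v)
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis sampling of the posterior
theta = np.array([1e-5, 0.10])
step = np.array([1e-5, 2e-3])
lp, samples = log_post(theta), []
for _ in range(20000):
    prop = theta + step * rng.normal(size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[5000:])               # discard burn-in
t_est, t_sd = post[:, 1].mean(), post[:, 1].std()
print(f"t_V ~ {t_est:.4f} +/- {t_sd:.4f} ({100 * t_sd / t_est:.1f}% relative)")
```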
352

Large deviations of the KPZ equation, Markov duality and SPDE limits of the vertex models

Lin, Yier January 2021 (has links)
The Kardar-Parisi-Zhang (KPZ) equation is a stochastic PDE describing various objects in statistical mechanics, such as random interface growth, directed polymers, and interacting particle systems. We study large deviations of the KPZ equation in both the short-time and long-time regimes. We prove the first short-time large deviation principle for the KPZ equation and detect a crossover from a Gaussian law to a 5/2 power law in the lower tail rate function. In the long-time regime, we study the upper tail large deviations of the KPZ equation starting from a wide range of initial data and explore how the rate function depends on the initial data. The KPZ equation also arises as the weak scaling limit of various models in the KPZ universality class. We show that the stochastic higher spin six vertex model, a class of models sitting at the top of the KPZ integrable systems, converges weakly to the KPZ equation under a certain scaling; this extends the weak universality of the KPZ equation. On the other hand, we show that under a different scaling, the stochastic higher spin six vertex model converges to a hyperbolic stochastic PDE called the stochastic telegraph equation. One key tool behind the proof of these two stochastic PDE limits is a property called Markov duality.
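For reference, the KPZ equation studied above is, in one standard normalization (the abstract does not spell it out):

```latex
% KPZ equation for a height function h(t,x) driven by space-time white noise \xi:
\partial_t h = \tfrac{1}{2}\,\partial_x^2 h + \tfrac{1}{2}\,(\partial_x h)^2 + \xi(t,x),
\qquad
\mathbb{E}\big[\xi(t,x)\,\xi(s,y)\big] = \delta(t-s)\,\delta(x-y).
```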
353

Modernizing Markov Chains Monte Carlo for Scientific and Bayesian Modeling

Margossian, Charles Christopher January 2022 (has links)
The advent of probabilistic programming languages has galvanized scientists to write increasingly diverse models to analyze data. Probabilistic models use a joint distribution over observed and latent variables to describe at once elaborate scientific theories, non-trivial measurement procedures, information from previous studies, and more. To effectively deploy these models in a data analysis, we need inference procedures which are reliable, flexible, and fast. In a Bayesian analysis, inference boils down to estimating the expectation values and quantiles of the unnormalized posterior distribution. This estimation problem also arises in the study of non-Bayesian probabilistic models, a prominent example being the Ising model of statistical physics. Markov chains Monte Carlo (MCMC) algorithms provide a general-purpose sampling method which can be used to construct sample estimators of moments and quantiles. Despite MCMC's compelling theory and empirical success, many models continue to frustrate MCMC, as well as other inference strategies, effectively limiting our ability to use these models in a data analysis. These challenges motivate new developments in MCMC. The term "modernize" in the title refers to the deployment of methods which have revolutionized computational statistics and machine learning in the past decade, including: (i) hardware accelerators to support massive parallelization, (ii) approximate inference based on tractable densities, (iii) high-performance automatic differentiation, and (iv) continuous relaxations of discrete systems. The growing availability of hardware accelerators such as GPUs has in recent years motivated a general MCMC strategy whereby we run many chains in parallel with a short sampling phase, rather than a few chains with a long sampling phase. Unfortunately, existing convergence diagnostics are not designed for the "many short chains" regime. This is notably the case for the popular R̂ statistic, which claims convergence only if the effective sample size per chain is large. We present the nested R̂, denoted nR̂, a generalization of R̂ which does not conflate short chains and poor mixing, and which offers a useful diagnostic provided we run enough chains and meet certain initialization conditions. Combined with nR̂, the short-chain regime presents us with the opportunity to identify optimal lengths for the warmup and sampling phases, as well as the optimal number of chains, tuning parameters of MCMC which are otherwise chosen using heuristics or trial and error. We next focus on semi-specialized algorithms for latent Gaussian models, arguably the most widely used class of hierarchical models. It is well understood that MCMC often struggles with the geometry of the posterior distribution generated by these models. Using a Laplace approximation, we marginalize out the latent Gaussian variables and then integrate the remaining parameters with Hamiltonian Monte Carlo (HMC), a gradient-based MCMC. This approach combines MCMC and a distributional approximation, and offers a useful alternative to pure MCMC or pure approximation methods such as variational inference. We compare the three paradigms across a range of general linear models which admit a sophisticated prior, e.g. a Gaussian process or a horseshoe prior. To implement our scheme efficiently, we derive a novel automatic differentiation method called the adjoint-differentiated Laplace approximation. This differentiation algorithm propagates the minimal information needed to construct the gradient of the approximate marginal likelihood, and yields a scalable differentiation method that is orders of magnitude faster than state-of-the-art differentiation for high-dimensional hyperparameters. We next discuss the application of our algorithm to models with an unconventional likelihood, going beyond the classical setting of general linear models. This necessitates a non-trivial generalization of the adjoint-differentiated Laplace approximation, which we implement using higher-order adjoint methods; the generalized method proves both more general and more efficient than its predecessor. We apply the resulting method to an unconventional latent Gaussian model, identifying promising features and highlighting persistent challenges. The final chapter of this dissertation focuses on a specific but rich problem: the Ising model of statistical physics, and its generalization as the Potts and spin glass models. These models are challenging because they are discrete, precluding the immediate use of gradient-based algorithms, and exhibit multiple modes, notably at low temperatures. We propose a new class of MCMC algorithms to draw samples from Potts models by augmenting the target space with a carefully constructed auxiliary Gaussian variable. In contrast to existing methods of a similar flavor, our algorithm can take advantage of the low-rank structure of the coupling matrix and scales linearly with the number of states in a Potts model. The method is applied to a broad range of coupling and temperature regimes and compared to several sampling methods, allowing us to paint a nuanced algorithmic landscape.
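For a rough sense of the diagnostic being generalized, the classic R̂ can be computed from a batch of parallel chains in a few lines. This is a simplified textbook version for illustration, not the nested nR̂ developed in the thesis:

```python
import numpy as np

def rhat(chains):
    # Classic Gelman-Rubin R-hat for an (n_chains, n_samples) array.
    # The thesis's nested R-hat generalizes this so that it remains
    # informative when every individual chain is short.
    n = chains.shape[1]
    chain_means = chains.mean(axis=1)
    b = n * chain_means.var(ddof=1)           # between-chain variance
    w = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_plus = (n - 1) / n * w + b / n        # pooled variance estimate
    return np.sqrt(var_plus / w)

# "Many short chains" regime: 512 chains, only 50 samples each
rng = np.random.default_rng(1)
mixed = rng.normal(size=(512, 50))            # every chain targets N(0, 1)
stuck = mixed + rng.normal(size=(512, 1))     # chain-dependent offsets: poor mixing
print(f"well mixed:  R-hat = {rhat(mixed):.3f}")   # close to 1.00
print(f"poor mixing: R-hat = {rhat(stuck):.3f}")   # noticeably above 1
```

With 512 chains of only 50 samples each, the per-chain effective sample size is small, which is precisely the regime where the classic R̂ becomes uninformative and the nested generalization is needed.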
354

Cis-regulatory modules clustering from sequence similarity

Handfield, Louis-François. January 2007 (has links)
No description available.
355

Unsupervised Machine-Learning Applications in Seismology

Sawi, Theresa January 2024 (has links)
Catalogs of seismic source parameters (hypocenter locations, origin times, and magnitudes) are vital for studying various Earth processes, greatly enhancing our understanding of the nature of seismic events, the structure of the Earth, and the dynamics of fault systems. Modern seismic analyses utilize supervised machine learning (ML) to build enhanced catalogs based on millions of examples of analyst-picked phase arrivals in waveforms, yet the ability to characterize the time-varying spectral content of the waveforms underlying those catalogs remains lacking. Unsupervised machine learning (UML) methods provide powerful tools for inferring patterns from musical spectrograms with little a priori information, yet they have been relatively underutilized in the field of seismology. In this thesis, I leverage advanced tools from UML to analyze the temporal spectral content of large sets of spectrograms generated by different mechanisms in two distinct geologic settings: icequakes and tremors at Gornergletscher (a Swiss temperate glacier) and repeating earthquakes from a 10-km-long creeping segment of the San Andreas Fault. The core algorithm in this work, now known as Spectral Unsupervised Feature Extraction, or SpecUFEx, extracts time-varying frequency patterns from spectrograms and reduces them into low-dimensional fingerprints via a combination of non-negative matrix factorization and hidden Markov modeling (Holtzman et al. 2018), optimized for large data sets via stochastic variational inference. This work describes the SpecUFEx algorithm and the suite of preprocessing, clustering, and visualization tools developed to create a UML workflow, SpecUFEx+, that is widely accessible and applicable to many seismic settings. I apply the SpecUFEx+ workflow to single- and multi-station seismic data from Gornergletscher and demonstrate how some fingerprint clusters track diurnal tremor related to subglacial water flow, while others correspond to the onset of the subglacial and englacial components of a glacial lake outburst flood. I also discover periods of harmonic tremor localized near the ice-bed interface that may be related to glacial stick-slip sliding. I additionally apply the SpecUFEx+ workflow to earthquakes on the San Andreas Fault to unveil far more repeating earthquake sequences than previously inferred, leading to enhanced slip-rate estimates at seismogenic depths and providing a more detailed image of seismic gaps along the fault interface. Unsupervised feature extraction is a tool new to the field of seismology. This work demonstrates how scientific insight can be gained through the characterization of the spectral-temporal patterns of large seismic datasets within a UML framework.
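The factorization stage of such a pipeline can be sketched with standard tools. This is a generic spectrogram NMF on synthetic data, not the SpecUFEx implementation, which couples the factorization with an HMM stage and fits both via stochastic variational inference:

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import NMF

# Synthetic stand-in for a seismic trace sampled at 100 Hz
fs = 100.0
rng = np.random.default_rng(0)
t = np.arange(0.0, 600.0, 1.0 / fs)
trace = np.sin(2 * np.pi * 5.0 * t) + 0.5 * rng.normal(size=t.size)

# Short-time power spectrogram: rows are frequencies, columns are time windows
freqs, windows, sxx = spectrogram(trace, fs=fs, nperseg=256, noverlap=192)

# Non-negative matrix factorization, Sxx ~ W @ H: the columns of W are
# spectral patterns and the rows of H are their time-varying activations,
# which act as a compact representation of the signal's spectral content.
model = NMF(n_components=8, init="nndsvd", max_iter=500, random_state=0)
w = model.fit_transform(sxx)   # (n_freqs, 8) spectral building blocks
h = model.components_          # (8, n_windows) activations over time
print(w.shape, h.shape)
```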
356

Analysis of variances in electric power system simulation for production cost

Smith, William Corbett January 1991 (has links)
No description available.
357

Bayesian collocation tempering and generalized profiling for estimation of parameters from differential equation models

Campbell, David Alexander. January 2007 (has links)
No description available.
358

Tracking maneuvering targets via semi-Markov maneuver modeling

Gholson, Norman Hamilton 02 March 2010 (has links)
Adaptive algorithms for state estimation are currently of tremendous interest. Such estimation techniques have particular military usefulness in automatic gunfire control systems. The conventional Kalman filter, developed by Kalman and Bucy, optimally solves the state estimation problem for linear systems with Gaussian disturbance and error processes. The maneuvering target tracking problem generally involves nonlinear system properties as well as non-Gaussian disturbance processes. The study presented here explores several solutions to this problem. An adaptive state estimator centered about the familiar Kalman filter has been developed for applications in three-dimensional maneuvering target tracking. Target maneuvers are modeled in a general manner by a semi-Markov process. The semi-Markov modeling is based on very intuitively appealing assumptions. Specifically, target maneuvers are randomly selected from a range (possibly infinite) of maneuver commands. The selected command is sustained for a random holding time before another command is selected. Dynamics of the selection and holding process may be stationary or time-varying. By incorporating the semi-Markov modeling into a Bayesian estimation scheme, an adaptive state estimator can be designed to identify the particular maneuver command influencing the target. The algorithm has the distinct advantages of requiring only one Kalman filter and non-growing computer storage requirements. Several techniques of implementing the adaptive algorithm have been developed. The merits of rectangular and spherical modeling have been explored. Most importantly, the planar discrete-level semi-Markov algorithm, originally developed for sonar applications, has been extended to a continuum of levels, as well as to three-dimensional tracking. The developed algorithms have been fully evaluated by computer simulations. Emphasis has been placed on computational burden as well as overall tracking performance. Results are presented that show that the developed estimators largely eliminate the severe tracking errors that occur when more simplistic target models are incorporated. / Ph. D.
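The maneuver model described above, with commands drawn at random and held for random durations, is straightforward to simulate. A minimal sketch, assuming exponential holding times and Gaussian command levels (both distributions are illustrative choices, not taken from the dissertation):

```python
import numpy as np

def semi_markov_commands(duration, rng, mean_hold=5.0, cmd_sd=9.8):
    """Simulate a one-dimensional maneuver-command process: a command is
    drawn at random, held for a random time, then redrawn. Exponential
    holding times and Gaussian command levels are illustrative choices."""
    t, times, commands = 0.0, [0.0], [rng.normal(0.0, cmd_sd)]
    while t < duration:
        t += rng.exponential(mean_hold)            # random holding time
        times.append(min(t, duration))
        commands.append(rng.normal(0.0, cmd_sd))   # next acceleration command
    return np.array(times), np.array(commands)

rng = np.random.default_rng(42)
times, cmds = semi_markov_commands(60.0, rng)
for t0, a in zip(times, cmds):
    print(f"t = {t0:5.1f} s   command = {a:+7.2f} m/s^2")
```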
359

A Markov chain approach for analyzing Palmer drought severity index

Tchaou, Marcel Kossi 19 September 2009 (has links)
Drought is perceived differently by different people, but in general it is conceived as a period of below-normal precipitation or moisture deficiency that affects the social and economic activities of a region. Many numerical indices are used to quantify the effects of drought. The Palmer Drought Severity Index (PDSI) is the most widely used drought indicator in recent applications. The PDSI takes into account precipitation, temperature, and soil moisture, and depicts prolonged abnormal dryness or wetness. A Markov chain model was developed to analyze the likelihood of occurrence of the seven types of weather spells defined by the National Oceanic and Atmospheric Administration (NOAA). The spells are classified using the PDSI, computed monthly by the NOAA. The model predicts both short- and long-term drought status over an entire climatic division. Twelve monthly transition matrices and one annual transition matrix were computed. The matrices show the transition patterns between months and between drought states. The model was applied to the Tidewater area (climatic division 1) and the Southwest Mountains (climatic division 6) of Virginia. The model predictions reflect reality and compare very well with the observed data for these two climatic divisions. This model can potentially be used as a tool for water resource planning and the design of drought assistance plans by water resource managers. / Master of Science
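The core computation, estimating a transition matrix from an observed sequence of drought states and examining its long-run behavior, can be sketched as follows. The seven states and the randomly generated sample sequence are placeholders, not NOAA's actual PDSI classifications:

```python
import numpy as np

N_STATES = 7  # seven weather-spell classes, e.g. extreme drought ... extreme wet

def transition_matrix(states, n=N_STATES):
    # Row-stochastic matrix estimated by counting month-to-month transitions;
    # rows for states never visited fall back to a uniform distribution.
    counts = np.zeros((n, n))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1.0
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.full((n, n), 1.0 / n), where=rows > 0)

def stationary(p):
    # Long-run state probabilities: left eigenvector of P for eigenvalue 1
    vals, vecs = np.linalg.eig(p.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

rng = np.random.default_rng(7)
states = rng.integers(0, N_STATES, size=600)   # stand-in for 50 years of monthly classes
p = transition_matrix(states)
print("stationary distribution:", np.round(stationary(p), 3))
```

In practice one such matrix would be estimated for each calendar month, matching the twelve monthly matrices described in the abstract.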
360

A software tool to support more efficient computation of the stationary distribution for Markov chain usage models

Guo, Hongyan 01 April 2002 (has links)
No description available.
