601

Steady State and Dynamical Properties of an Impurity in a BEC in a Double Well Potential

Mumford, Jesse D. 10 1900 (has links)
The subject of this work is the theoretical analysis of the mean-field and many-body properties of an impurity in a Bose-Einstein condensate (BEC) in a double well potential. By investigating the stationary mean-field properties we show that a critical value of the boson-impurity interaction energy, W_c, corresponds to a pitchfork bifurcation in the number-difference variable of the mean-field theory. Comparing W_c to the value of W where the many-body ground state wave function begins to split reveals a direct correlation, signaling a connection between the many-body and mean-field theories. Investigation of the mean-field dynamics shows that chaos emerges for W > W_c in the vicinity of an unstable equilibrium point generated by the pitchfork bifurcation. An entropy is defined to quantify the chaos and compared to the entanglement entropy between the BEC and the impurity. The mean-field entropy shows a large gradient at W_c, whereas the entanglement entropy shows no apparent features around the same value of W. An increase in correlations between nearest-neighbour many-body eigenvalues is seen as W is increased, providing evidence for "quantum chaos". / Master of Science (MSc)
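As a point of orientation for readers unfamiliar with the bifurcation language, a pitchfork bifurcation in a number-difference variable z can be sketched with the standard normal form (a generic textbook form, not the thesis's actual two-mode equations of motion):

```latex
\dot{z} = (W - W_c)\,z - z^{3},
\qquad
z^{*} =
\begin{cases}
0, & W \le W_c \\
0,\ \pm\sqrt{W - W_c}, & W > W_c
\end{cases}
```

Below W_c the symmetric state z* = 0 is the unique stable equilibrium; above W_c it turns unstable and two symmetry-broken branches appear, which is consistent with the abstract's picture of chaos developing near the unstable equilibrium for W > W_c.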
602

Wave Chaos and Enhancement of Coherent Radiation with Rippled Electrodes in a Photoconductive Antenna

Kim, Christopher Yong Jae January 2016 (has links)
Time-domain terahertz spectroscopy is now a well-established technique. Of the many available terahertz sources for such spectroscopy, the most widely used may be the GaAs-based photoconductive antenna, as it provides relatively high power at terahertz frequencies (commercially available devices reach 150 µW) and a wide bandwidth, approximately 70 GHz to 3.5 THz. One of the limitations for developing more accurate and sensitive terahertz interrogation techniques is the lack of higher-power sources. Because of our research interests in terahertz spectroscopy, we investigated detailed design and fabrication parameters of the photoconductive antenna, which exploits the surface plasma oscillation to produce a wideband pulse. The investigation enabled us to develop a new photoconductive antenna capable of generating a terahertz beam at least twenty times stronger than those currently available. During this research, it was discovered that antenna electrodes with particular geometries could produce superradiance, also known as the Dicke effect. Chaotic electrodes with a predisposition to lead charge carriers into chaotic trajectories, e.g. a rippled geometry, were exploited to reduce undesirable heat effects by driving thermal electrons away from the terahertz generation site, i.e. the location of the surface plasma, while concentrating the removed charge carriers at separate locations slightly away from the surface plasma. Spontaneous emission of coherent terahertz radiation may then occur when the terahertz pulse generated by the surface plasma stimulates the concentrated carriers. This emission enhances the total coherent terahertz beam strength, as it occurs almost simultaneously with the primary terahertz beam. In principle, the emitted power increases as N^2, where N is the number of dipole moments resulting from the concentrated charge carriers. Hence, if the design parameters are optimized, it may be possible to increase the strength of the coherent terahertz beam by more than one order of magnitude with a photoconductive antenna containing rippled electrodes. However, as the parameters are yet to be optimized, we have so far demonstrated only a 10-20% enhancement with our current photoconductive antennas. Photoconductive antennas were fabricated via photolithography and characterized by time-domain terahertz spectroscopy and pyroelectric detection. In addition to chaotic electrodes, a variety of other parameters were characterized, including GaAs substrate thickness, GaAs crystal lattice orientation, trench depth for electrodes, metal-semiconductor contact, and bias voltage across electrodes. Nearly all parameters were found to play a crucial role in terahertz beam emission and carrier dynamics. By exploiting wave chaos and the other antenna parameters, we developed a new photoconductive antenna capable of continuous operation at terahertz power twenty times larger than that of conventional photoconductive antennas, improving from 150 µW to 3 mW. With further optimization of the parameters, we expect more dramatic improvement of the photoconductive antenna in the near future. / Physics
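The N^2 scaling invoked above is the standard superradiance result; as a generic illustration (textbook form, not a derivation from this antenna's geometry), N dipoles of moment p radiating in phase give

```latex
P_{\text{coherent}} \propto N^{2}\,|p|^{2},
\qquad
P_{\text{incoherent}} \propto N\,|p|^{2},
```

so phase-locking the N concentrated carriers buys up to a factor-of-N enhancement over the same carriers emitting independently.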
603

Bounded Surfatron Acceleration in the Presence of Random Fluctuations

Ruiz Mora, Africa January 2015 (has links)
The mechanisms of acceleration and transport of collisionless plasma in the presence of electromagnetic turbulence (EMT) still remain not fully understood. The particle-EMT interaction can be modelled as the interaction of a particle with a particular wave in the presence of random noise. It has been shown that in such a model the acceleration of charged particles can be almost free. This effect is known as resonance and can be explained by the so-called "surfatron" mechanism. We have conducted several numerical simulations for models with and without the presence of EMT. The turbulence has been modeled as small random fluctuations on the background magnetic field. Particle dynamics consist of two regimes of motion: (i) almost free (Larmor) rotation and (ii) captured (resonance) propagation, which are governed by two different sets of invariants. We have determined the necessary conditions for capture into and release from resonance in the model without fluctuations, as well as the intrinsic structure of the domain of initial conditions for which particles are captured. We observed a difference of orders of magnitude in the dispersion of the adiabatic invariant due to the effects of the added fluctuations at the resonance. These results are important for describing the mixing of the different energy levels in the presence of EMT. To understand the impact of the EMT on the system dynamics, we have performed statistical analysis of the effects that different characteristics of the random fluctuations have on the system. The particles' energy gain can be viewed as a random walk over the energy levels, which can be described in terms of a diffusion partial differential equation for the probability distribution function. This problem can be reverse-engineered to understand the nature and structure of the EMT, knowing beforehand the energy distribution of a set of particles. / Mechanical Engineering
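The diffusion description mentioned at the end can be sketched in the standard Fokker-Planck form (a generic form; the drift A(E) and diffusion D(E) coefficients would have to be extracted from the fluctuation statistics and are not the thesis's specific results):

```latex
\frac{\partial f(E,t)}{\partial t}
= -\frac{\partial}{\partial E}\bigl[A(E)\,f(E,t)\bigr]
+ \frac{1}{2}\frac{\partial^{2}}{\partial E^{2}}\bigl[D(E)\,f(E,t)\bigr],
```

where f(E, t) is the probability distribution function over particle energy. Inverting this equation given an observed energy distribution is the "reverse-engineering" step the abstract alludes to.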
604

GT-CHES and DyCon: Improved Classification for Human Evolutionary Systems

Johnson, Joseph S. 13 March 2024 (has links) (PDF)
The purpose of this work is to rethink the process of learning in human evolutionary systems. We take a sober look at how game theory, network theory, and chaos theory pertain specifically to the modeling, data, and training components of generalization in human systems. The value of our research is three-fold. First, our work is a direct approach to aligning machine learning generalization with core behavioral theories. We made our best effort to directly reconcile the axioms of these heretofore incompatible disciplines, rather than moving from AI/ML towards the behavioral theories while building exclusively on AI/ML intuition. Second, this approach simplifies the learning process and makes it more intuitive for non-technical domain experts. We see increasing complexity in the models introduced in the academic literature and, hence, increasing reliance on abstract hidden states learned by automatic feature engineering. The result is less understanding of how the models work and how they can be interpreted. These increasingly complex models are effective on the particular benchmark datasets they were designed for, but they do not generalize. Our research highlights why these models are not generalizable and why behavioral-theoretic intuition must take priority over black-box reliance on automatic feature engineering. Third, we introduce two novel methods that can be applied off-the-shelf: graph transformation for classification in human evolutionary systems (GT-CHES) and dynamic contrastive learning (DyCon). These models are most effective in mixed-motive human systems. While GT-CHES is most suitable for tasks that involve event-based data, DyCon can be used on any temporal task.
605

Uncertainty Quantification and Uncertainty Reduction Techniques for Large-scale Simulations

Cheng, Haiyan 03 August 2009 (has links)
Modeling and simulations of large-scale systems are used extensively not only to better understand natural phenomena, but also to predict future events. Accurate model results are critical for design optimization and policy making; they can be used effectively to reduce the impact of a natural disaster or even prevent it from happening. In reality, model predictions are often affected by uncertainties in input data and model parameters, and by incomplete knowledge of the underlying physics. A deterministic simulation assumes one set of input conditions and generates one result without considering uncertainties, so it is of great interest to include uncertainty information in the simulation. By "uncertainty quantification" we denote the ensemble of techniques used to model probabilistically the uncertainty in model inputs, to propagate it through the system, and to represent the resulting uncertainty in the model result. This added information provides a confidence level for the model forecast. For example, in environmental modeling, the model forecast, together with the quantified uncertainty information, can assist policy makers in interpreting simulation results and in making decisions accordingly. Another important goal in modeling and simulation is to improve model accuracy and to increase model prediction power. By merging real observation data into the dynamic system through the data assimilation (DA) technique, the overall uncertainty in the model is reduced. With the expansion of human knowledge and the development of modeling tools, simulation size and complexity are growing rapidly. This poses great challenges to uncertainty analysis techniques: many conventional uncertainty quantification algorithms, such as the straightforward Monte Carlo method, become impractical for large-scale simulations, and new algorithms need to be developed to quantify and reduce uncertainties in this setting. This research explores novel uncertainty quantification and reduction techniques suitable for large-scale simulations. In the uncertainty quantification part, the non-sampling polynomial chaos (PC) method is investigated. An efficient implementation is proposed to reduce the high computational cost of the linear algebra involved in the PC Galerkin approach applied to stiff systems. A collocation least-squares method is proposed to compute the PC coefficients more efficiently. A novel uncertainty apportionment strategy is proposed to attribute the uncertainty in model results to different uncertainty sources; the apportionment results provide guidance for uncertainty reduction efforts. The uncertainty quantification and source apportionment techniques are implemented in the 3-D Sulfur Transport Eulerian Model (STEM-III), predicting pollutant concentrations in the northeast region of the United States. Numerical results confirm the efficacy of the proposed techniques for large-scale systems and the potential impact for environmental protection policy making. "Uncertainty reduction" describes the range of systematic techniques used to fuse information from multiple sources in order to increase the confidence one has in model results. Two DA techniques are widely used in current practice: the ensemble Kalman filter (EnKF) and the four-dimensional variational (4D-Var) approach. Each method has its advantages and disadvantages.
By exploring the error reduction directions generated in the 4D-Var optimization process, we propose a hybrid approach to construct the error covariance matrix and to improve the static background error covariance matrix used in current 4D-Var practice. The updated covariance matrix between assimilation windows effectively reduces the root mean square error (RMSE) in the solution. The success of the hybrid covariance updates motivates the hybridization of EnKF and 4D-Var to further reduce uncertainties in the simulation results. Numerical tests show that the hybrid method improves the model accuracy and increases the model prediction quality. / Ph. D.
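As a hedged illustration of the collocation least-squares idea for computing PC coefficients, the sketch below fits a Legendre expansion to a toy one-parameter model; it is a generic example of the technique, not the STEM-III implementation described above, and the model function is hypothetical.

```python
# Collocation least-squares polynomial chaos (PC) on a toy scalar model.
import numpy as np
from numpy.polynomial import legendre

def model(xi):
    """Toy 'black-box' model with one uniform uncertain input xi in [-1, 1]."""
    return np.exp(0.5 * xi) + 0.1 * xi**3

order = 5                                   # PC truncation order
n_coll = 20                                 # more collocation points than coefficients
xi = np.cos(np.pi * (np.arange(n_coll) + 0.5) / n_coll)  # Chebyshev-like points
y = model(xi)

# Vandermonde matrix of Legendre polynomials P_0..P_order at the points,
# then solve for the PC coefficients in the least-squares sense.
V = legendre.legvander(xi, order)
coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)

# For a uniform input, Legendre orthogonality gives E[P_k] = 0 (k > 0)
# and E[P_k^2] = 1/(2k+1), so mean and variance follow directly.
mean = coeffs[0]
var = sum(c**2 / (2 * k + 1) for k, c in enumerate(coeffs) if k > 0)
print(f"PC mean ~ {mean:.4f}, PC variance ~ {var:.4f}")

# Brute-force Monte Carlo check of the same statistics.
samples = model(np.random.uniform(-1.0, 1.0, 100_000))
print(f"MC mean ~ {samples.mean():.4f}, MC variance ~ {samples.var():.4f}")
```

The appeal of the collocation approach is visible even in this toy: six coefficients recovered from twenty model runs reproduce statistics that the Monte Carlo check needs tens of thousands of runs to estimate.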
606

Topological chaos and chaotic mixing of viscous flows

Gheisarieha, Mohsen 20 May 2011 (has links)
Since it is difficult or impossible to generate turbulent flow in a highly viscous fluid or a microfluidic system, efficient mixing becomes a challenge. However, it is possible in a laminar flow to generate chaotic particle trajectories (well known as chaotic advection) that can lead to effective mixing. This dissertation studies mixing in the limiting case of zero-Reynolds-number flows, called Stokes flows, and illustrates the practical use of different theories, namely topological chaos theory, set-oriented analysis, and lobe dynamics, in the analysis, design, and optimization of different laminar-flow mixing systems. In a recent development, topological chaos theory has been used to explain the chaos built into the flow based only on the topology of boundary motions. Without considering any details of the fluid dynamics, this novel method uses the Thurston-Nielsen (TN) classification theorem to predict and describe the stretching of material lines both qualitatively and quantitatively. The practical application of this theory to the design and optimization of a viscous-flow mixer, and the important role of periodic orbits as "ghost rods," are studied. The relationship between stretching of material lines (chaos) and the homogenization of a scalar (mixing) in chaotic Stokes flows is examined in this work; this study helps determine the extent to which stretching can represent real mixing. Using a set-oriented approach to describe the stirring in the flow, the invariance or leakiness of the Almost Invariant Sets (AIS) playing the role of ghost rods is found to be in a direct relationship with the rate of homogenization of a scalar. The mixing caused by these AIS and the variations of their structure are explained from the point of view of geometric mechanics using transport through lobes. These lobes are made of segments of invariant manifolds of the periodic points that are generators of the ghost rods. A variety of concentration-based measures, the important parameters of their calculation, and the implicit effect of diffusion are described. The studies, measures, and methods of this dissertation help in the evaluation and understanding of chaotic mixing systems in nature and in industrial applications. They provide theoretical and numerical grounds for the selection of an appropriate mixing protocol and the design and optimization of mixing systems, examples of which can be seen throughout the dissertation. / Ph. D.
607

Distinguishing Dynamical Kinds: An Approach for Automating Scientific Discovery

Shea-Blymyer, Colin 02 July 2019 (has links)
The automation of scientific discovery has been an active research topic for many years. The promise of a formalized approach to developing and testing scientific hypotheses has attracted researchers from the sciences, machine learning, and philosophy alike. Leveraging the concept of dynamical symmetries, a new paradigm is proposed for the collection of scientific knowledge, and algorithms are presented for the development of EUGENE, an automated scientific discovery tool-set. These algorithms have direct applications in model validation, time series analysis, and system identification. Further, the EUGENE tool-set provides a novel metric of dynamical similarity that allows a system to be clustered into its dynamical regimes. This dynamical distance is sensitive to the presence of chaos, effective order, and nonlinearity. I discuss the history and background of these algorithms, provide examples of their behavior, and present their use for exploring system dynamics. / Master of Science / Determining why a system exhibits a particular behavior can be a difficult task. Some turn to causal analysis to show which particular variables lead to which outcomes, but this can be time-consuming, requires precise knowledge of the system's internals, and often abstracts poorly to salient behaviors. Others attempt to build models from the principles of the system, or try to learn models from observations of the system, but these models can miss important interactions between variables and often have difficulty recreating high-level behaviors. To help scientists understand systems better, an algorithm has been developed that estimates how similar the causes of one system's behaviors are to the causes of another. This similarity between two systems is called their "dynamical distance" from each other, and can be used to validate models, detect anomalies in a system, and explore how complex systems work.
608

Practical Analysis Tools for Structures Subjected to Flow-Induced and Non-Stationary Random Loads

Scott, Karen Mary Louise 14 July 2011 (has links)
There is a need to investigate and improve upon existing methods for predicting the response of sensors to flow-induced vibrations in pipe flow. The aim was to develop a tool that would enable an engineer to quickly evaluate the suitability of a particular design for a certain pipe-flow application, without sacrificing fidelity. The primary methods for simple sensor response prediction, found in guides published by the American Society of Mechanical Engineers (ASME), were found to be lacking in several key areas, which prompted development of the tool described herein. A particular limitation of the existing guidelines concerns complex stochastic stationary and non-stationary modeling and required much further study, providing direction for the second portion of this body of work. A tool for response prediction of fluid-induced vibrations of sensors was developed which allowed for analysis of low-aspect-ratio sensors. Results from the tool were compared to experimental lift and drag data recorded for a range of flow velocities. The model was found to perform well over the majority of the velocity range, showing superiority in response prediction as compared to the ASME guidelines. The tool was then applied to a design problem given by an industrial partner, showing several of their designs to be inadequate for the proposed flow regime. This immediate identification of unsuitable designs no doubt saved significant time in the product development process. Work to investigate stochastic modeling in structural dynamics was undertaken to understand the reasons for the limitations found in fluid-structure interaction models. A particular weakness, non-stationary forcing, was found to be the most lacking in terms of use in the design stage of structures. A method was developed using the Karhunen-Loève expansion as its base to close the gap between prohibitively simple (stationary-only) models and those which require too much computation time. Models were developed from single-degree-of-freedom (SDOF) through continuous systems and shown to perform well at each stage. Further work is needed in this area to bring this work full circle such that the lessons learned can improve design-level turbulent response calculations. / Ph. D.
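As a hedged illustration of the Karhunen-Loève machinery named above, the sketch below builds a discrete KL expansion from a covariance matrix and synthesizes sample paths; the exponential kernel is a generic stand-in (the same eigendecomposition applies to a non-stationary covariance), not the thesis's loading model.

```python
# Discrete Karhunen-Loeve (KL) expansion of a random process.
import numpy as np

n = 200
t = np.linspace(0.0, 1.0, n)

# Example covariance: exponential kernel with correlation length ell.
ell = 0.1
C = np.exp(-np.abs(t[:, None] - t[None, :]) / ell)

# KL modes are the eigenvectors of the covariance matrix (symmetric -> eigh),
# sorted by decreasing eigenvalue ("energy").
eigvals, eigvecs = np.linalg.eigh(C)
idx = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]

# Truncate to the m dominant modes and synthesize one sample path:
# X(t) ~= sum_k sqrt(lambda_k) * xi_k * phi_k(t),  xi_k ~ N(0, 1).
m = 10
xi = np.random.standard_normal(m)
sample = eigvecs[:, :m] @ (np.sqrt(eigvals[:m]) * xi)
print("captured variance fraction:", eigvals[:m].sum() / eigvals.sum())
```

The truncation is what closes the gap the abstract describes: a handful of modes captures most of the process variance, so response statistics can be computed far more cheaply than with a full-resolution random field.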
609

A Polynomial Chaos Approach for Stochastic Modeling of Dynamic Wheel-Rail Friction

Lee, Hyunwook 12 October 2010 (has links)
Accurate estimation of the coefficient of friction (CoF) is essential to accurately modeling railroad dynamics, reducing maintenance costs, and increasing safety factors in rail operations. The assumption of a constant CoF is popular in simulation studies for ease of implementation; however, much evidence demonstrates that the CoF depends on various dynamic parameters and instantaneous conditions. In the real world, accurately estimating the CoF is difficult due to the effects of various uncertain parameters, such as wheel and rail materials, rail roughness, and contact patch, among others. In this study, a newly developed 3-D nonlinear CoF model for the dry rail condition is introduced, and the CoF variation is tested using this model with dynamic parameters estimated from the wheel-rail simulation model. To account for uncertain parameters, a stochastic analysis using polynomial chaos (poly-chaos) theory is performed using the CoF and wheel-rail dynamics models. The wheel-rail system at a right traction wheel is modeled as a mass-spring-damper system to simulate the basic wheel-rail dynamics and the CoF variation. The wheel-rail model accounts for wheel-rail contact, the creepage effect, and creep force, among others. Simulations are performed at a train speed of 20 m/s for 4 s using rail roughness as the unique excitation source. The dynamic simulation has been performed for the deterministic model and for the stochastic model; the dynamics results of the deterministic model provide the starting point for the uncertainty analysis. Six uncertain parameters have been studied, with an assumption of 50% uncertainty intentionally imposed to test extreme conditions. These parameters are: the maximum amplitude of rail roughness (MARR), the wheel lateral displacement, the track stiffness and damping coefficients, the sleeper distance, and the semi-elliptical contact lengths. A symmetric beta distribution is assumed for these six uncertain parameters. The PDF of the CoF has been obtained for each uncertain parameter individually, for combinations of two different uncertain parameters, and for combinations of three different uncertain parameters. The results from the deterministic model show acceptable vibration results for the body, the wheel, and the rail. The introduced CoF model demonstrates the nonlinear variation of the total CoF, the stick component, and the slip component; in addition, it successfully captures the maximum CoF value (initial peak). The stochastic analysis results show that the total CoF PDF before 1 s is dominated by the stick phenomenon, while slip dominates the total CoF PDF after 1 s. Although a symmetric distribution has been used for the uncertain parameters considered, the uncertainty in the response displayed a skewed distribution for some of the situations investigated. The CoF PDFs obtained from simulations with combinations of two and three uncertain parameters have wider ranges than those obtained for a single uncertain parameter. FFT analysis of the rail displacement has been performed for qualitative validation of the stochastic simulation results in the absence of experimental data. The FFT analysis of the deterministic rail displacement and of the stochastic rail displacement with uncertainties demonstrates trends consistent with loss of tractive efficiency, such as bandwidth broadening, peak frequency shifts, and side-band occurrence.
Thus, the FFT analysis qualitatively validates that the stochastic modeling with various uncertainties is well executed and reflects observable, real-world behavior. In conclusion, the development of an effective model which helps in understanding the nonlinear nature of wheel-rail friction is critical to the progress of railroad component technology and rail safety. In the real world, accurate estimation of the CoF at the wheel-rail interface is very difficult, since it is influenced by several uncertain parameters, as illustrated in this study. Using a deterministic CoF value can cause underestimation or overestimation of the CoF, leading to inaccurate decisions in the design of the wheel-rail system. Thus, the possible PDF ranges of the CoF according to key uncertain parameters must be considered in the design of the wheel-rail system. / Ph. D.
610

Estimating Uncertainties in the Joint Reaction Forces of Construction Machinery

Allen, James Brandon 05 June 2009 (has links)
In this study we investigate the propagation of uncertainties in the input forces through a mechanical system. The system of interest was a wheel loader, but the methodology developed can be applied to any multibody system. The modeling technique implemented focused on efficiently modeling stochastic systems for which the equations of motion are not available. The analysis targeted the reaction forces in joints of interest. The modeling approach developed in this thesis builds a foundation for determining the uncertainties in a Caterpillar 980G II wheel loader. The study begins with constructing a simple multibody deterministic system. This simple mechanism is modeled using differential algebraic equations in Matlab. Next, the model is compared with the CAD model constructed in ProMechanica. The stochastic model of the simple mechanism is then developed using a Monte Carlo approach and a linear/quadratic transformation method. The Collocation Method was developed for the simple case study for both the Matlab and ProMechanica models. After the Collocation Method was validated on the simple case study, it was applied to the full 980G II wheel loader CAD model in ProMechanica. This study developed and implemented an efficient computational method to propagate uncertainties through "black-box" models of mechanical systems. The method also proved to be reliable and easier to implement than traditional methods. / Master of Science
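As a hedged sketch of the baseline Monte Carlo approach mentioned above, the example below propagates an uncertain input load through a "black-box" response function to get joint reaction statistics; the response function and load values are hypothetical stand-ins, not the ProMechanica wheel-loader model.

```python
# Monte Carlo propagation of uncertain input forces through a black box.
import numpy as np

rng = np.random.default_rng(42)

def joint_reaction(load):
    """Hypothetical black-box: applied load -> joint reaction force.
    In practice this would be a call out to the multibody/CAD solver."""
    return 1.8 * load + 0.002 * load**2

nominal_load = 50.0e3                     # N, assumed nominal load
loads = rng.normal(nominal_load, 0.1 * nominal_load, 10_000)  # 10% scatter

# One solver call per sample -- the cost that collocation methods reduce.
reactions = np.array([joint_reaction(L) for L in loads])

print(f"mean reaction: {reactions.mean():.3e} N")
print(f"std reaction : {reactions.std():.3e} N")
print(f"95% interval : {np.percentile(reactions, [2.5, 97.5])}")
```

The per-sample solver call is exactly why plain Monte Carlo becomes expensive for a full CAD model, which motivates the Collocation Method the thesis validates on the simple mechanism first.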
