About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Nonequilibrium Statistical Models: Guided Network Growth Under Localized Information and Perspectives on Electron Diffusion in Conductors

Trevelyan, Alexander 31 October 2018 (has links)
The ability to probe many-particle systems on a microscopic level has revolutionized the way we do statistical physics. As computational capabilities continue to grow exponentially, larger and more complex systems come within reach of microscopic analysis. In the field of network growth, the classical model has given way to competitive processes, in which networks are guided by some criteria at every step of their formation. We develop and analyze a new competitive growth process that permits intervention on growing networks using only local properties of the network when evaluating how to add new connections. We establish the critical behavior of this new method and explore potential uses in guiding the development of real-world networks. The classical system of electrons diffusing within a conductor similarly permits a microscopic analysis, though to date studies of the macroscopic properties have dominated the literature. In order to extend our understanding of the theory that governs this diffusion, the fluctuation-dissipation theorem, we construct a physical model of the Johnson-Nyquist system of electrons embedded in the bulk of a conductor. Constructing the model involves deriving how the motion of each individual electron arises via scattering processes in the conductor, then connecting this collective motion to the macroscopic observables of voltage and current that define Johnson-Nyquist noise. Once the equilibrium properties have been fully realized, an external perturbation can be applied to probe the behavior of the model as it deviates from equilibrium. In much the same way that competitive network growth revolutionized classical network theory, we aim to establish a model that can guide future research into nonequilibrium fluctuation-dissipation by providing a method for interacting with the system in a precise and well-controlled manner as it evolves over time. This model is presented in Chapter 3. Chapter 2, which covers this work, has been published in Physical Review E as a Rapid Communication [1]. The writing and analysis were performed by me as the primary author. Eric Corwin and Georgios Tsekenis are listed as co-authors for their contribution to the analysis and for advisement on the work. This dissertation includes previously published and unpublished co-authored material.
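The Johnson-Nyquist noise the abstract builds on is governed by a standard textbook result, V_rms = sqrt(4 k_B T R Δf). A minimal sketch of that relation (the resistor values are illustrative, not taken from the dissertation):

```python
import math

def johnson_nyquist_vrms(resistance_ohms: float, temperature_k: float,
                         bandwidth_hz: float) -> float:
    """RMS thermal-noise voltage across a resistor: sqrt(4 * k_B * T * R * df)."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    return math.sqrt(4.0 * k_b * temperature_k * resistance_ohms * bandwidth_hz)

# Example: a 1 kOhm resistor at room temperature over a 10 kHz bandwidth.
print(johnson_nyquist_vrms(1e3, 300.0, 1e4))  # ~4.07e-7 V, i.e. about 0.4 uV
```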
2

The use of in situ gamma radiation measurements as a method of determining radon potential in urban environments

Berens, Andrew S 07 May 2016 (has links)
Radon is a radioactive gas that is the leading cause of lung cancer among non-smokers. While radon is natural and ubiquitous, higher concentrations greatly increase cancer risk. As such, understanding the spatial distribution of radon potential is key to planning and public health efforts. This project tests a method of determining radon potential using in situ measurements of gamma radiation. The in situ measurements were used to create a raster of gamma emissions in the study region using kriging. The resulting model showed that the operational scale of gamma radiation in the study region was 4.5 km. Indoor radon concentrations were then assigned gamma emission rates from the raster, and the two were compared. While there was evidence of an association between higher gamma emissions and high radon concentrations, the gamma readings were not quantitatively predictive. As such, only categorical predictions of radon potential and risk could be made.
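As a rough illustration of the kriging step described above, a sketch using the pykrige package; the variogram model, grid spacing, and readings are assumptions for illustration, not the thesis's actual parameters:

```python
import numpy as np
from pykrige.ok import OrdinaryKriging  # assumes the pykrige package is installed

rng = np.random.default_rng(0)
# Hypothetical in situ gamma readings: x, y in km, z in counts per second.
x = rng.uniform(0, 10, 50)
y = rng.uniform(0, 10, 50)
z = 5.0 + 0.3 * x + rng.normal(0, 0.5, 50)

ok = OrdinaryKriging(x, y, z, variogram_model="spherical")
gridx = np.arange(0.0, 10.0, 0.5)
gridy = np.arange(0.0, 10.0, 0.5)
z_grid, z_var = ok.execute("grid", gridx, gridy)  # interpolated raster + kriging variance
```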
3

Modeling the Effect of Shocks and Stresses on the Reliability of Networks with Radial Topologies

Mangal, Kunal, Larsen, Alexandra, Chryst, Breanne, Rojo, Javier 04 November 2011 (has links)
We consider the impact that various shocks and stresses have on the reliability of networks with radial topology, such as an electrical power grid. We incorporate the effects of aging, geographical risk, and local dependence between components into a model of overall system reliability. We also simulate how the system fares under extreme weather events, such as hurricanes. Our model gives a flexible and general understanding of how outside forces affect network reliability and can be adapted to a range of specific uses. We run a simulation using this model, which yields realistic results.
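A hedged sketch of the kind of Monte Carlo reliability simulation the abstract describes, for a toy radial (tree-structured) network with an aging hazard and an additive shock term; the topology, hazard form, and numbers are illustrative assumptions:

```python
import random

random.seed(0)
# Toy radial feeder: child -> parent; node 0 is the substation (root).
parent = {1: 0, 2: 1, 3: 1, 4: 2, 5: 2, 6: 3}

def fails(age_years: float, shock: float) -> bool:
    # Weibull-like aging hazard plus an additive shock term (e.g., a hurricane).
    p = min(1.0, 0.02 * (age_years / 10.0) ** 1.5 + shock)
    return random.random() < p

def served_fraction(ages, shock, trials=10_000):
    """Monte Carlo estimate of the expected fraction of nodes still connected."""
    total = 0.0
    for _ in range(trials):
        down = {n for n in parent if fails(ages[n], shock)}
        ok = 0
        for n in parent:
            m, alive = n, True
            while m != 0:          # a node is served only if its whole path is up
                if m in down:
                    alive = False
                    break
                m = parent[m]
            ok += alive
        total += ok / len(parent)
    return total / trials

ages = {n: 15.0 for n in parent}
print(served_fraction(ages, shock=0.0))   # baseline reliability
print(served_fraction(ages, shock=0.10))  # under an extreme-weather shock
```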
4

A Statistical Approach for Assessing Seismic Transitions Associated with Fluid Injections

Wang, Pengyun 01 December 2016 (has links)
The wide application of fluid injection has raised concern about the potential critical risks associated with induced seismicity. To help address this concern, this dissertation proposes a statistical approach for assessing seismic transitions associated with fluid injections by scientifically analyzing instrumental measures of seismic events. The assessment problem is challenging due to the uncertain effects of wastewater injections on regional seismicity, along with the limited availability of seismic and injection data. To overcome these challenges, three statistical methods are developed, each focused on a different aspect of the problem. Specifically, the first method provides early detection of induced seismicity, with the potential to allow site managers and regulators to act promptly and to prepare communities for increased seismic risk; the second method addresses the further need to quantitatively assess the transition of induced seismicity, which can reveal the underlying process of induced seismicity and provide data to support probabilistic seismic hazard analysis; and the third method goes further to characterize the spatial distribution of induced seismicity, accounting for its spatial evolution. All the proposed methods are built on Bayesian principles, which provide a flexible inference framework for incorporating domain expertise and data uncertainty. The effectiveness of the proposed methods is demonstrated using the earthquake dataset for the state of Oklahoma, with promising results: the detection method is able to issue warnings of induced seismicity well before the occurrence of severe consequences; the transition model provides a significantly better fit to the dataset than the classical model and sheds light on the underlying transition of induced seismicity in Oklahoma; and the spatio-temporal model provides the most comprehensive characterization of the dataset in terms of its spatial and temporal properties and shows much better short-term forecasting performance than naïve methods. The proposed methods can be used in combination as a decision-support tool to identify areas with increasing levels of seismic risk in a quantitative manner, supporting a comprehensive assessment of which risk-mitigation strategy should be recommended.
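The abstract does not specify the detection machinery; one simple Bayesian building block consistent with its description is a change-point posterior over monthly Poisson event counts, with Gamma priors on the rates before and after the shift. A self-contained sketch (the priors and counts are illustrative assumptions, not the dissertation's data):

```python
import numpy as np
from scipy.special import gammaln

def changepoint_posterior(counts, a=1.0, b=1.0):
    """Posterior over the split point at which a Poisson rate shifts.

    Gamma(a, b) priors on the rate before and after the split; the factorial
    terms are common to every split and cancel, so they are omitted.
    """
    counts = np.asarray(counts, dtype=float)
    n = len(counts)

    def log_marginal(seg):
        s, k = seg.sum(), len(seg)
        return (a * np.log(b) - gammaln(a)
                + gammaln(a + s) - (a + s) * np.log(b + k))

    logp = np.array([log_marginal(counts[:t]) + log_marginal(counts[t:])
                     for t in range(1, n)])
    logp -= logp.max()
    p = np.exp(logp)
    return p / p.sum()  # posterior over split points t = 1 .. n-1

# Hypothetical monthly M3+ counts: quiet background, then elevated activity.
counts = [2, 3, 1, 2, 4, 2, 9, 11, 8, 12, 10, 13]
post = changepoint_posterior(counts)
print(int(np.argmax(post)) + 1)  # most probable split (expect 6: rate jumps in month 7)
```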
5

Functional and Effective Connectivity of Effortful Emotion Regulation

McRae, Kateri Lynne January 2007 (has links)
Emotion regulation plays an important role in emotional well-being, as well as in protection against and recovery from mood and anxiety disorders. Previous studies of the functional neuroanatomy of emotion regulation have reported greater activity in prefrontal control-related regions during active regulation. These activations are accompanied by decreases in activity in emotion-responsive regions such as the amygdala and insula. These findings are widely interpreted as consistent with models of cognitive control that implicate top-down, negative influences from the prefrontal cortex upon emotion-related processing in other regions. However, no studies to date have used measures of effective connectivity to investigate the likely influence of prefrontal control regions upon emotion-responsive regions in the context of effortful emotion regulation. In the present study, participants alternated between responding naturally to negative emotional stimuli and reinterpreting the negative stimuli with the goal of reducing their experienced negative affect. Functional magnetic resonance imaging (fMRI) was used to measure the whole-brain blood-oxygen-level-dependent signal throughout the task. fMRI data were analyzed using partial least squares (PLS) and structural equation modeling (SEM) to test for differences in effective connectivity between natural and regulated emotional responding. Results indicate that three paths significantly distinguish the regulation and non-regulation negative conditions. The path from the inferior frontal gyrus (IFG) to the anterior cingulate cortex (ACC) was significantly less positive during regulation than during natural responding. In addition, the reciprocal paths between the ACC and insula were more negative during regulation than during natural responding. Taken as a whole, these changes in effective connectivity are consistent with assumptions of top-down modulation during effortful emotion regulation, and they suggest a pivotal role for the influence of the IFG upon the ACC and the ACC-insula loop in emotion regulation. The processes represented by these changes and implications for future research are discussed.
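As a crude stand-in for comparing one directed path between conditions (the study itself used PLS and full structural equation modeling on whole-brain data), a sketch that estimates a standardized path weight per condition from ROI time series; the series and effect sizes are synthetic, purely for illustration:

```python
import numpy as np

def path_coefficient(source: np.ndarray, target: np.ndarray) -> float:
    """Standardized regression weight of `target` on `source` (a crude
    one-path stand-in for a coefficient in a structural equation model)."""
    s = (source - source.mean()) / source.std()
    t = (target - target.mean()) / target.std()
    return float(np.dot(s, t) / len(s))

rng = np.random.default_rng(0)
# Hypothetical ROI time series (e.g., IFG and ACC) under two conditions.
ifg_nat = rng.standard_normal(200)
acc_nat = 0.6 * ifg_nat + rng.standard_normal(200)   # stronger positive path
ifg_reg = rng.standard_normal(200)
acc_reg = 0.2 * ifg_reg + rng.standard_normal(200)   # weaker during regulation

print(path_coefficient(ifg_nat, acc_nat), path_coefficient(ifg_reg, acc_reg))
```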
6

Statistical Modeling Of Effective Temperature With Cosmic Ray Flux

Zhang, Xiaohang 12 August 2016 (has links)
The increasing frequency of sporadic weather patterns in the last decade, especially major winter storms, demands improvements in current weather forecasting techniques. Recently, there has been growing interest in stratospheric forecasting because of its potential to enhance weather forecasts. The dominant factor in the northern-hemisphere wintertime variation of the general circulation in the stratosphere is a phenomenon called the stratospheric sudden warming (SSW) event. Multiple studies have shown that SSW events and cosmic ray muon flux variations are strongly correlated with effective atmospheric temperature changes, which suggests that cosmic ray detectors could potentially be used for meteorological applications, especially for monitoring SSW events. A method for determining the effective temperature from cosmic ray flux measurements is studied in this work using statistical modeling techniques, such as k-fold cross-validation and partial least squares regression. The method requires measurements of the vertical profile of atmospheric temperature, typically obtained by radiosonde, for training the model. In this study, cosmic ray flux measured in Atlanta and Yakutsk is used to demonstrate this novel technique. The results show the possibility of real-time monitoring of the effective temperature through simultaneous measurement of cosmic ray muon and neutron flux. The technique can also be used to study historical SSW events using past worldwide cosmic ray data.
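A minimal sketch of the modeling recipe named in the abstract, partial least squares regression with k-fold cross-validation, using scikit-learn; the features and the toy response are assumptions, not the study's data:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(1)
# Hypothetical daily features: muon flux, neutron flux, barometric pressure.
X = rng.standard_normal((365, 3))
# Toy effective temperature (K) driven mostly by the flux channels.
t_eff = 220.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0, 0.5, 365)

model = PLSRegression(n_components=2)
scores = cross_val_score(model, X, t_eff, scoring="r2",
                         cv=KFold(n_splits=5, shuffle=True, random_state=0))
print(scores.mean())  # cross-validated fit of the effective-temperature model
```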
7

Statistical Performance Modeling of SRAMs

Zhao, Chang December 2009 (has links)
Yield analysis is a critical step in memory design, given the variety of performance constraints involved. Traditional circuit-level Monte Carlo simulation for yield estimation of a Static Random Access Memory (SRAM) cell is quite time-consuming because of the cell's characteristically low failure rate, whereas statistical yield sensitivity analysis is attractive for its high efficiency. This thesis proposes a novel statistical model to conduct yield sensitivity prediction on SRAM cells at the simulation level, which offers a significant runtime speedup over regular circuit simulation. Based on the Kriging method, which is widely used in geostatistics, we develop a series of statistical model-building and updating strategies to obtain satisfactory accuracy and efficiency in SRAM yield sensitivity analysis. In general, the model applies to yield and sensitivity evaluation with varying design parameters under constraints on most SRAM performance metrics. Moreover, it is potentially suitable for any designated distribution of the process variation, regardless of the sampling method.
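Kriging is essentially Gaussian-process regression, so the surrogate-based yield idea can be sketched with scikit-learn's GP tools: fit a cheap model on a few sampled corners, then run the Monte Carlo on the surrogate instead of the circuit simulator. The toy static-noise-margin response and pass threshold below are assumptions, not the thesis's circuit model:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

def snm(dvth):
    # Toy stand-in for a SPICE measurement: read margin vs. two Vth shifts (V).
    return 0.30 - 120.0 * (dvth[:, 0] ** 2 + dvth[:, 1] ** 2)

X_train = rng.uniform(-0.05, 0.05, size=(40, 2))   # a few simulated corners
y_train = snm(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.05)).fit(X_train, y_train)

# Cheap surrogate Monte Carlo instead of thousands of circuit simulations.
X_mc = rng.normal(0.0, 0.02, size=(100_000, 2))
yield_est = (gp.predict(X_mc) > 0.25).mean()  # fraction meeting the margin spec
print(yield_est)
```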
8

Modeling and mitigation of interference in wireless receivers with multiple antennae

Chopra, Aditya 31 January 2012 (has links)
Recent wireless communication research faces the challenge of meeting a predicted 1000x increase in demand for wireless Internet data over the next decade. Key reasons for this explosive increase in demand include the evolution of the Internet into a provider of high-definition video entertainment and two-way video communication, accessed via mobile wireless devices. One way to meet some of this demand is to use multiple antennae at the transmitter and receiver in a wireless device. For example, a system with 4 transmit and 4 receive antennae can provide up to a 4x increase in data throughput. Another key aspect of the overall solution requires sharing radio frequency spectral resources among users, causing severe interference to wireless systems. Consequently, wireless receivers with multiple antennae will be deployed in network environments that are rife with interference, primarily due to wireless resource sharing among users. Other significant sources of interference include computational platform subsystems, signal leakage, and external electronics. Interference causes severe degradation in the communication performance of wireless receivers. Accurate statistical models of interference are a key requirement for designing, and analyzing the communication performance of, multi-antenna wireless receivers in the presence of interference. Prior work on statistical modeling of interference in multi-antenna receivers utilizes either the Gaussian distribution or non-Gaussian distributions exhibiting either statistical independence or spherical isotropy. This dissertation proposes a framework, based on the underlying statistical-physical mechanisms of interference generation and propagation, for modeling multi-antenna interference in various network topologies. The framework can model interference that is spherically isotropic, statistically independent, or somewhere on a continuum between these two extremes. The dissertation then utilizes the derived statistical models to analyze the communication performance of multi-antenna receivers in interference-limited wireless networks. Accurate communication performance analysis can highlight the tradeoffs between communication performance and computational complexity of various multi-antenna receiver designs. Finally, using interference statistics, this dissertation proposes receiver algorithms that best mitigate the impact of interference on communication performance. The proposed algorithms include multi-antenna combining strategies as well as antenna selection algorithms for cooperative communications.
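A sketch contrasting the two limiting interference models the abstract names, statistically independent versus spherically isotropic; the impulsive-mixture parameters are illustrative assumptions, not the dissertation's derived statistics:

```python
import numpy as np

rng = np.random.default_rng(3)
n, ants = 10_000, 4

# Statistically independent heavy-tailed interference: each antenna draws its
# own impulsive state (two-term Gaussian mixture per antenna).
impulsive = rng.random((n, ants)) < 0.05
indep = rng.standard_normal((n, ants)) * np.where(impulsive, 10.0, 1.0)

# Spherically isotropic interference: one shared amplitude scales a random
# direction, so impulses hit all antennas simultaneously.
direction = rng.standard_normal((n, ants))
direction /= np.linalg.norm(direction, axis=1, keepdims=True)
shared = np.where(rng.random(n) < 0.05, 10.0, 1.0) * np.abs(rng.standard_normal(n))
iso = direction * shared[:, None]

# Squared amplitudes co-occur across antennas only in the isotropic model.
print(np.corrcoef(indep[:, 0] ** 2, indep[:, 1] ** 2)[0, 1])  # near 0
print(np.corrcoef(iso[:, 0] ** 2, iso[:, 1] ** 2)[0, 1])      # clearly positive
```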
9

Modeling Pavement Performance based on Data from the Swedish LTPP Database : Predicting Cracking and Rutting

Svensson, Markus January 2013 (has links)
The roads in our society are in a state of constant degradation. The causes are many, and roads are therefore constructed to have a certain lifetime before being reconstructed. To minimize the cost of maintaining the important transport road network, high-quality prediction models are needed. This report presents new models for flexible pavement structures covering the initiation and propagation of fatigue cracks in the bound layers and rutting for the whole structure. The models are based on observations from the Swedish Long Term Pavement Performance (LTPP) database. The intention is to use them for planning maintenance as part of a pavement management system (PMS). A statistical approach is used for the modeling, where both cracking and rutting are related to traffic data, climate conditions, and subgrade characteristics as well as the pavement structure.
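A hedged sketch of the statistical approach described above, regressing a crack-initiation time on traffic, climate, and structural covariates; the features and coefficients are synthetic stand-ins for LTPP observations:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

# Hypothetical LTPP-style covariates: annual traffic load (MESALs),
# freeze-thaw cycles per year, and asphalt layer thickness (mm).
X = np.column_stack([
    rng.uniform(0.1, 2.0, 200),    # traffic
    rng.uniform(0, 80, 200),       # climate
    rng.uniform(40, 200, 200),     # structure
])
# Years until fatigue cracks initiate (toy relationship with noise).
years = (12 - 3.0 * X[:, 0] - 0.05 * X[:, 1] + 0.02 * X[:, 2]
         + rng.normal(0, 1.0, 200))

model = LinearRegression().fit(X, years)
print(model.coef_)  # sign and size of each factor's effect on crack initiation
```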
10

Modeling the power consumption of computing systems and applications through machine learning techniques

Fontoura Cupertino, Leandro 17 July 2015 (has links)
The number of computing systems has increased continuously over recent years, and the popularity of data centers has turned them into some of the most power-demanding facilities.
The use of data centers is divided between high-performance computing (HPC) and Internet services, or Clouds. Computing speed is crucial in HPC environments, while on Cloud systems it may vary according to service-level agreements. Some data centers even offer hybrid environments; all of them are energy hungry. The present work is a study of power models for computing systems. These models allow a better understanding of the energy consumption of computers and can be used as a first step towards better monitoring and management policies for such systems, either to enhance their energy savings or to account for the energy used so that end users can be charged. Energy management and control policies are subject to many limitations. Most energy-aware scheduling algorithms use restricted power models that have a number of open problems. Previous work on power modeling of computing systems proposed the use of system information to monitor the power consumption of applications. However, these models are either too specific to a given kind of application or lack accuracy. This work presents techniques to enhance the accuracy of power models by tackling issues ranging from the acquisition of power measurements to the definition of a generic workload, enabling the creation of a generic model, i.e., one that can be used for heterogeneous workloads. To achieve such models, the use of machine learning techniques is proposed. Machine learning models adapt readily to the target architecture and form the core of this research. More specifically, this work evaluates the use of artificial neural networks (ANN) and linear regression (LR) as machine learning techniques to perform non-linear statistical modeling. Such models are created through a data-driven approach, enabling adaptation of their parameters based on information collected while running synthetic workloads. The use of machine learning techniques aims to achieve highly accurate application- and system-level estimators. The proposed methodology is architecture independent and can easily be reproduced in new environments. The results show that the use of artificial neural networks enables the creation of highly accurate estimators. However, this approach cannot be applied at the process level due to modeling constraints; for such cases, predefined models can be calibrated to achieve fair results.
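A minimal sketch of the ANN approach the abstract evaluates: training a small neural network to map utilization counters to system power. The counters and the toy power response are assumptions, not the thesis's instrumentation:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Hypothetical per-second samples: CPU utilization, memory bandwidth, and
# disk I/O rate -- the kind of counters collected during synthetic workloads.
X = rng.uniform(0, 1, size=(5000, 3))
watts = (80 + 120 * X[:, 0] + 30 * X[:, 1] ** 2 + 10 * X[:, 2]
         + rng.normal(0, 2, 5000))  # nonlinear toy power response

X_tr, X_te, y_tr, y_te = train_test_split(X, watts, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
print(ann.score(X_te, y_te))  # R^2 of the system-level power estimator
```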
