231 |
Multi-layer Methods for Quantum Chemistry in the Condensed Phase: Combining Density Functional Theory, Molecular Mechanics, and Continuum Solvation Models
Lange, Adrian W. 18 June 2012 (has links)
No description available.
|
232 |
Statistical Analysis of Geolocation Fundamentals Using Stochastic Geometry
O'Lone, Christopher Edward 22 January 2021 (has links)
The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, require a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS.
In the literature, benchmarking localization performance in these networks has traditionally been done in a deterministic manner. That is, for a fixed setup of anchors (nodes with known location) and a target (a node with unknown location) a commonly used benchmark for localization error, such as the Cramer-Rao lower bound (CRLB), can be calculated for a given localization strategy, e.g., time-of-arrival (TOA), angle-of-arrival (AOA), etc. While this CRLB calculation provides excellent insight into expected localization performance, its traditional treatment as a deterministic value for a specific setup is limited.
Rather than trying to gain insight into a specific setup, network designers are more often interested in aggregate localization error statistics within the network as a whole. Questions such as: "What percentage of the time is localization error less than x meters in the network?" are commonplace. In order to answer these types of questions, network designers often turn to simulations; however, these come with many drawbacks, such as lengthy execution times and the inability to provide fundamental insights due to their inherent "black box" nature. Thus, this dissertation presents the first analytical solution with which to answer these questions. By leveraging tools from stochastic geometry, anchor positions and potential target positions can be modeled by Poisson point processes (PPPs). This allows for the CRLB of position error to be characterized over all setups of anchor positions and potential target positions realizable within the network. This leads to a distribution of the CRLB, which can completely characterize localization error experienced by a target within the network, and can consequently be used to answer questions regarding network-wide localization performance. The particular CRLB distribution derived in this dissertation is for fourth-generation (4G) and fifth-generation (5G) sub-6GHz networks employing a TOA localization strategy.
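The pipeline described here can be illustrated numerically. The sketch below (a simplified stand-in for the dissertation's analytical derivation; the anchor intensity, range-noise level, and network radius are invented for illustration) draws anchor sets from a Poisson point process, computes the TOA position-error CRLB for each realization, and builds the empirical CRLB distribution used to answer the percentage-of-time question:

```python
import numpy as np

rng = np.random.default_rng(0)

def toa_crlb(anchors, target, sigma=1.0):
    # Fisher information for TOA ranging with i.i.d. Gaussian range errors (meters)
    diffs = anchors - target
    units = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    fim = units.T @ units / sigma**2
    return np.trace(np.linalg.inv(fim))   # lower bound on mean squared position error

def sample_ppp(intensity, radius):
    # Homogeneous Poisson point process on a disk of the given radius
    n = rng.poisson(intensity * np.pi * radius**2)
    r = radius * np.sqrt(rng.uniform(size=n))
    th = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return np.column_stack([r * np.cos(th), r * np.sin(th)])

target = np.zeros(2)
bounds = []
for _ in range(2000):
    anchors = sample_ppp(intensity=5e-4, radius=200.0)   # ~63 anchors on average
    if len(anchors) >= 3:   # at least 3 anchors for an unambiguous 2-D TOA fix
        bounds.append(toa_crlb(anchors, target))
rmse_bound = np.sqrt(np.array(bounds))
print(f"P(RMSE bound < 5 m) ~ {np.mean(rmse_bound < 5.0):.2f}")
```

Denser anchor deployments or smaller range noise shift the empirical distribution toward lower error bounds, mirroring the trends the analytical CRLB distribution captures in closed form.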
Recognizing the tremendous potential that stochastic geometry has in gaining new insight into localization, this dissertation continues by further exploring the union of these two fields. First, the concept of localizability, which is the probability that a mobile is able to obtain an unambiguous position estimate, is explored in a 5G, millimeter wave (mm-wave) framework. In this framework, unambiguous single-anchor localization is possible with either a line-of-sight (LOS) path between the anchor and mobile or, if blocked, then via at least two non-line-of-sight (NLOS) paths. Thus, for a single anchor-mobile pair in a 5G, mm-wave network, this dissertation derives the mobile's localizability over all environmental realizations this anchor-mobile pair is likely to experience in the network. This is done by: (1) utilizing the Boolean model from stochastic geometry, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment, (2) considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and (3) considering the possibility that reflectors can either facilitate or block reflections. In addition to the derivation of the mobile's localizability, this analysis also reveals that unambiguous localization, via reflected NLOS signals exclusively, is a relatively small contributor to the mobile's overall localizability.
Lastly, using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time delay of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. Due to the random nature of the propagation environment, the NLOS bias is a random variable, and as such, its distribution is sought. As before, assuming NLOS propagation is due to first-order reflections, and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor time-of-flight (TOF) range measurements. This distribution is shown to match exceptionally well with commonly assumed gamma and exponential NLOS bias models in the literature, which were only attained previously through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model.
In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over the entire ensemble of infrastructure or environmental realizations that a target is likely to experience in a network. / Doctor of Philosophy / The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, require a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS.
When speaking in terms of localization, the network infrastructure consists of what are called anchors, which are simply nodes (points) with a known location. These can be base stations, WiFi access points, or designated sensor nodes, depending on the network. In trying to determine the position of a target (i.e., a user, or a mobile), various measurements can be made between this target and the anchor nodes in close proximity. These measurements are typically distance (range) measurements or angle (bearing) measurements. Localization algorithms then process these measurements to obtain an estimate of the target position.
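As a concrete sketch of that last step, a simple range-based localization algorithm can be written as a Gauss-Newton least-squares fit to the range residuals; the anchor layout and noise level below are invented for illustration and are not from the dissertation:

```python
import numpy as np

def trilaterate(anchors, ranges, iters=20):
    # Gauss-Newton least squares on the range residuals ||x - a_i|| - d_i
    x = anchors.mean(axis=0)               # initial guess: anchor centroid
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)
        J = (x - anchors) / d[:, None]     # Jacobian of the predicted ranges
        r = d - ranges
        x = x - np.linalg.solve(J.T @ J, J.T @ r)
    return x

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([30.0, 60.0])
rng = np.random.default_rng(1)
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0.0, 0.5, 4)
est = trilaterate(anchors, ranges)
print(est)   # close to the true position (30, 60)
```

The residual positioning error of such an estimator is exactly the quantity the CRLB discussed next bounds from below.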
The performance of a given localization algorithm (i.e., estimator) is typically evaluated by examining the distance, in meters, between the position estimates it produces and the actual (true) target position. This is called the positioning error of the estimator. There are various benchmarks that bound the best (lowest) error that these algorithms can hope to achieve; however, these benchmarks depend on the particular setup of anchors and the target. The benchmark of localization error considered in this dissertation is the Cramer-Rao lower bound (CRLB). To determine how this benchmark of localization error behaves over the entire network, all of the various setups of anchors and the target that would arise in the network must be considered. Thus, this dissertation uses a field of statistics called stochastic geometry to model all of these random placements of anchors and the target, which represent all the setups that can be experienced in the network. Under this model, the probability distribution of this localization error benchmark across the entirety of the network is then derived. This distribution allows network designers to examine localization performance in the network as a whole, rather than just for a specific setup, and allows one to obtain answers to questions such as: "What percentage of the time is localization error less than x meters in the network?"
Next, this dissertation examines a concept called localizability, which is the probability that a target can obtain a unique position estimate. Oftentimes localization algorithms can produce position estimates that congregate around different potential target positions, and thus, it is important to know when algorithms will produce estimates that congregate around a unique (single) potential target position; hence the importance of localizability. In fifth generation (5G), millimeter wave (mm-wave) networks, only one anchor is needed to produce a unique target position estimate if the line-of-sight (LOS) path between the anchor and the target is unimpeded. If the LOS path is impeded, then a unique target position can still be obtained if two or more non-line-of-sight (NLOS) paths are available. Thus, over all possible environmental realizations likely to be experienced in the network by this single anchor-mobile pair, this dissertation derives the mobile's localizability, or in this case, the probability the LOS path or at least two NLOS paths are available. This is done by utilizing another analytical tool from stochastic geometry known as the Boolean model, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment. Under this model, considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and considering the possibility that reflectors can either facilitate or block reflections, the mobile's localizability is derived. This result reveals the roles that the LOS path and the NLOS paths play in obtaining a unique position estimate of the target.
Using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time-of-flight (TOF) of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. As before, assuming NLOS propagation is due to first-order reflections and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) (or first-arriving "reflection path") is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor TOF range measurements. This distribution is shown to match exceptionally well with commonly assumed NLOS bias distributions in the literature, which were only attained previously through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution yields the probability that, for a specific angle, the first-arriving reflection path arrives at the mobile at this angle. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model.
In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over all of the possible infrastructure or environmental realizations that a target is likely to experience in a network.
|
233 |
Insights into molecular recognition and reactivity from molecular simulations of protein-ligand interactions using MD and QM/MM
Bowleg, Jerrano L. 13 May 2022 (has links) (PDF)
In this thesis, we have employed two computational methods, molecular dynamics (MD) and hybrid quantum mechanics/molecular mechanics (QM/MM) MD simulations with umbrella sampling (US), to gain insights into the molecular mechanism governing the molecular recognition and reactivity in several protein-ligand complexes. Three systems involving protein-ligand interactions are examined in this dissertation utilizing well-established computational methodologies and mathematical modeling. The three proteins studied here are acetylcholinesterase (AChE), butyrylcholinesterase (BChE), and peptidyl-prolyl cis-trans isomerase NIMA-interacting 1 (PIN1). These enzymes are known to interact with a variety of ligands. AChE dysfunction caused by organophosphorus (OP) chemicals is a severe hazard since AChE is a critical enzyme in neurotransmission. Oximes are chemical compounds that can reactivate inhibited AChE; hence, in the development of better oximes, it is critical to understand the mechanism through which OPs block AChE. We have described the covalent inhibition mechanism between AChE and the OP insecticide phorate oxon and its more potent metabolites and established their free energy profiles using QM/MM MD-US for the first time. Our results suggest a concerted mechanism and provide insights into the challenges in reactivating phorate oxon-inhibited AChE. Reactivating BChE is another therapeutic approach to detoxifying circulating OP molecules before reaching the target AChE. We explored the covalent modification of BChE with phorate oxon and its metabolites using hybrid quantum mechanics/molecular mechanics (QM/MM) umbrella sampling simulations (PM6/ff14SB) for the inhibition process. Our results reveal that the mechanism is distinct between the inhibitors. The PM6 methodology is a good predictor of these compounds' potency, which may efficiently help study OPs like phorate oxon with larger leaving groups.
Finally, we investigated the interactions between PIN1, which consists of a peptidyl-prolyl isomerase (PPIase) domain flexibly tethered to a smaller Trp-Trp (WW) protein-binding domain, and chimeric peptides based on the human histone H1.4 sequence (KATGAApTPKKSAKW), as well as the effects on inter-domain dynamics. Using explicit solvent MD simulations, simulated annealing, and native contact analysis, our modeling suggests that the residues in the N-terminal region immediately adjacent to the pSer/Thr-Pro site connect the PPIase and WW domains via a series of hydrogen bonds and native contacts.
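The umbrella-sampling machinery used throughout these studies can be sketched on a toy one-dimensional double well; the potential, force constant, window spacing, and Metropolis sampler below are illustrative assumptions (the thesis's actual simulations use QM/MM forces in full enzyme systems), and the biased histograms are recombined with a basic WHAM iteration:

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 1.0
V = lambda x: x**4 - 2.0 * x**2         # model double-well potential (assumed)
k = 20.0                                # umbrella force constant
centers = np.linspace(-1.6, 1.6, 17)    # window centers along the reaction coordinate

def sample_window(x0, n=20000, step=0.2):
    # Metropolis Monte Carlo under the biased potential V(x) + (k/2)(x - x0)^2
    x, out = x0, []
    e = V(x) + 0.5 * k * (x - x0)**2
    for _ in range(n):
        xn = x + rng.normal(0.0, step)
        en = V(xn) + 0.5 * k * (xn - x0)**2
        if rng.uniform() < np.exp(-beta * (en - e)):
            x, e = xn, en
        out.append(x)
    return np.array(out[n // 4:])       # discard equilibration

samples = [sample_window(c) for c in centers]

# WHAM: self-consistently iterate window free energies f_i and density p(x)
edges = np.linspace(-2.0, 2.0, 101)
mid = 0.5 * (edges[:-1] + edges[1:])
hist = np.array([np.histogram(s, bins=edges)[0] for s in samples])
N = hist.sum(axis=1)
bias = np.exp(-beta * 0.5 * k * (mid[None, :] - centers[:, None])**2)
f = np.zeros(len(centers))
for _ in range(500):
    denom = (N[:, None] * np.exp(beta * f)[:, None] * bias).sum(axis=0)
    p = hist.sum(axis=0) / denom
    f = -np.log((bias * p[None, :]).sum(axis=1)) / beta
    f -= f[0]
F = -np.log(np.clip(p, 1e-12, None)) / beta
F -= F.min()
barrier = F[np.argmin(np.abs(mid))]
print(f"recovered barrier height ~ {barrier:.2f} kT (analytic value is 1.0)")
```

The same unbiasing logic applies unchanged when the sampled coordinate is a bond-forming distance in a QM/MM enzyme simulation; only the force provider differs.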
|
234 |
QM/MM Applications and Corrections for Chemical Reactions
Bryant J Kim (15322279) 18 May 2023 (has links)
In this thesis, we present novel computational methods and frameworks to address the challenges associated with the determination of free energy profiles for condensed-phase chemical reactions using combined quantum mechanical and molecular mechanical (QM/MM) approaches. We focus on overcoming issues related to force matching, molecular polarizability, and convergence of free energy profiles. First, we introduce a method called Reaction Path-Force Matching in Collective Variables (RP-FM-CV) that efficiently carries out ab initio QM/MM free energy simulations through mean force fitting. This method provides accurate and robust simulations of solution-phase chemical reactions by significantly reducing deviations in the forces on the collective variables, thereby bringing simulated free energy profiles closer to experimental and benchmark AI/MM results. Second, we explore the role of pairwise repulsive correcting potentials in generating converged free energy profiles for chemical reactions using QM/MM simulations. We develop a free energy correcting model that sheds light on the behavior of repulsive pairwise potentials with large force deviations in collective variables. Our findings contribute to a deeper understanding of force matching models, paving the way for more accurate predictions of free energy profiles in chemical reactions. Next, we address the underpolarization problem in semiempirical (SE) molecular orbital methods by introducing a hybrid framework called doubly polarized QM/MM (dp-QM/MM). This framework improves the response property of SE/MM methods through high-level molecular polarizability fitting using machine learning (ML)-derived corrective polarizabilities, referred to as chaperone polarizabilities.
We demonstrate the effectiveness of the dp-QM/MM method in simulating the Menshutkin reaction in water, showing that ML chaperones significantly reduce the error in solute molecular polarizability, bringing simulated free energy profiles closer to experimental results. In summary, this thesis presents a series of novel methods and frameworks that improve the accuracy and reliability of free energy profile estimations in condensed-phase chemical reactions using QM/MM simulations. By addressing the challenges of force matching, molecular polarizability, and convergence, these advancements have the potential to impact various fields, including computational chemistry, materials science, and drug design.
|
235 |
QM/MM Applications and Corrections for Chemical Reactions
Kim, Bryant 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In this thesis, we present novel computational methods and frameworks to address the challenges associated with the determination of free energy profiles for condensed-phase chemical reactions using combined quantum mechanical and molecular mechanical (QM/MM) approaches. We focus on overcoming issues related to force matching, molecular polarizability, and convergence of free energy profiles. First, we introduce a method called Reaction Path-Force Matching in Collective Variables (RP-FM-CV) that efficiently carries out ab initio QM/MM free energy simulations through mean force fitting. This method provides accurate and robust simulations of solution-phase chemical reactions by significantly reducing deviations in the forces on the collective variables, thereby bringing simulated free energy profiles closer to experimental and benchmark AI/MM results. Second, we explore the role of pairwise repulsive correcting potentials in generating converged free energy profiles for chemical reactions using QM/MM simulations. We develop a free energy correcting model that sheds light on the behavior of repulsive pairwise potentials with large force deviations in collective variables. Our findings contribute to a deeper understanding of force matching models, paving the way for more accurate predictions of free energy profiles in chemical reactions. Next, we address the underpolarization problem in semiempirical (SE) molecular orbital methods by introducing a hybrid framework called doubly polarized QM/MM (dp-QM/MM). This framework improves the response property of SE/MM methods through high-level molecular polarizability fitting using machine learning (ML)-derived corrective polarizabilities, referred to as chaperone polarizabilities.
We demonstrate the effectiveness of the dp-QM/MM method in simulating the Menshutkin reaction in water, showing that ML chaperones significantly reduce the error in solute molecular polarizability, bringing simulated free energy profiles closer to experimental results. In summary, this thesis presents a series of novel methods and frameworks that improve the accuracy and reliability of free energy profile estimations in condensed-phase chemical reactions using QM/MM simulations. By addressing the challenges of force matching, molecular polarizability, and convergence, these advancements have the potential to impact various fields, including computational chemistry, materials science, and drug design.
|
236 |
Performance evaluation of 4.75-mm NMAS Superpave mixture
Rahman, Farhana January 1900 (links)
Doctor of Philosophy / Department of Civil Engineering / Mustaque Hossain / A Superpave asphalt mixture with 4.75-mm nominal maximum aggregate size (NMAS) is a promising, low-cost pavement preservation treatment for agencies such as the Kansas Department of Transportation (KDOT). The objective of this research study is to develop an optimized 4.75-mm NMAS Superpave mixture in Kansas. In addition, the study evaluated the residual tack coat application rate for the 4.75-mm NMAS mix overlay.
Two hot-in-place recycling (HIPR) projects in Kansas, on US-160 and K-25, were overlaid with a 15- to 19-mm thick layer of 4.75-mm NMAS Superpave mixture in 2007. The field tack coat application rate was measured during construction. Cores were collected from each test section for Hamburg wheel tracking device (HWTD) and laboratory bond tests performed after construction and after one year in service. Test results showed no significant effect of the tack coat application rate on the rutting performance of rehabilitated pavements. The number of wheel passes to rutting failure observed during the HWTD test was dependent on the aggregate source as well as on in-place density of the cores. Laboratory pull-off tests showed that most cores were fully bonded at the interface of the 4.75-mm NMAS overlay and the HIPR layer, regardless of the tack application rate. The failure mode during pull-off tests at the HMA interface was highly dependent on the aggregate source and mix design of the existing layer material. This study also confirmed that overlay construction with a high tack coat application rate may result in bond failure at the HMA interface.
Twelve different 4.75-mm NMAS mix designs were developed using materials from the aforementioned projects with two binder grades and three different percentages of natural (river) sand. Laboratory performance tests were conducted to assess mixture performance. Results show that rutting and moisture damage potential in the laboratory depend on aggregate type irrespective of binder grade. Anti-stripping agent affects moisture sensitivity test results. Fatigue performance is significantly influenced by river sand content and binder grade. Finally, an optimized 4.75-mm NMAS mixture design was developed and verified based on statistical analysis of performance data.
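The closing statistical-analysis step can be sketched with a toy one-way ANOVA comparing rut depth across aggregate sources; the data below are simulated placeholders, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy rut-depth data (mm) for three aggregate sources, 12 cores each
# (group means and spread are invented for illustration).
groups = [rng.normal(mu, 0.8, 12) for mu in (5.0, 6.5, 5.3)]

k = len(groups)                              # number of aggregate sources
n = sum(len(g) for g in groups)              # total observations
grand = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand)**2 for g in groups)
ss_within = sum(((g - g.mean())**2).sum() for g in groups)
F = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"one-way ANOVA F({k - 1},{n - k}) = {F:.2f}")
```

An F statistic well above the critical value indicates that aggregate source significantly affects rutting, the kind of conclusion the study draws from its laboratory performance data.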
|
237 |
Monte Carlo radiation transfer studies of protoplanetary environments
Walker, Christina H. January 2007 (links)
Monte Carlo radiation transfer provides an efficient modelling tool for probing the dusty local environment of young stars. Within this thesis, such theoretical models are used to study the disk structure of objects across the mass spectrum: young, low-mass brown dwarfs; solar-mass T Tauri stars; intermediate-mass Herbig Ae stars; and candidate B stars with massive disks. A Monte Carlo radiation transfer code is used to model images and photometric data in the UV to mm wavelength range. These models demonstrate how modelling techniques have been updated in an attempt to reduce the number of unknown parameters and extend the diversity of objects that can be studied.
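The core of such a code, photon packets random-walking through a dusty medium, can be sketched for the simplest case of isotropic scattering in a uniform sphere (the optical depth and packet count below are illustrative; the thesis's models treat full disk geometries, dust opacities, and wavelength dependence):

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_scatterings(tau_max, n_packets=5000):
    # Photon packets emitted at the centre of a uniform sphere whose radial
    # optical depth is tau_max; free paths are exponentially distributed in
    # optical depth, and each interaction isotropically re-emits the packet.
    counts = []
    for _ in range(n_packets):
        pos = np.zeros(3)
        d = np.array([0.0, 0.0, 1.0])
        n = 0
        while True:
            pos = pos + (rng.exponential() / tau_max) * d
            if pos @ pos >= 1.0:
                break                        # packet escaped the sphere
            n += 1
            mu = rng.uniform(-1.0, 1.0)      # draw a new isotropic direction
            phi = rng.uniform(0.0, 2.0 * np.pi)
            st = np.sqrt(1.0 - mu * mu)
            d = np.array([st * np.cos(phi), st * np.sin(phi), mu])
        counts.append(n)
    return float(np.mean(counts))

mean_n = mean_scatterings(5.0)
print(f"mean scatterings before escape at tau = 5: {mean_n:.1f}")
```

Tracking where and how often packets interact is what lets such codes build synthetic images and spectral energy distributions for comparison with observations.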
|
238 |
Do Dividend Yields Affect a Stock Price's Volatility? : Does the Miller & Modigliani Theorem apply to the Euronext and London Stock Exchange?
Hoffmann, Joe, Marriott, Nicholas January 2019 (links)
Background: Investors around the globe have debated, for more than 40 years, whether the dividend yield has an influence on a stock's price. There are different theories supporting both sides. These theories, however, often simplify the real world and therefore may not apply fully. Purpose: The purpose of this paper is to conduct empirical research on the complicated dividend policy topic and find out whether the dividend yield influences a stock's price by testing for its effect on stock price volatility. This result provides evidence of whether investors regard or disregard dividend payments and whether dividends influence investors' decisions when purchasing stock. Method: We take the top-valued companies in the non-financial sector from the LSE and the Euronext between the years 2008 and 2017. We then run a Fixed Effect Model regression on some of their reported values, including their dividend yield and their stock price volatility. Conclusion: Our results indicate that the dividend yield a company pays stockholders has a positive influence on stock price volatility, thus affecting stock prices. These results run counter to the MM Theorem and do not conclusively support the main principles of the Bird in Hand Theorem of Gordon (1960) and Lintner (1962).
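The fixed-effects estimation can be sketched on simulated panel data; the firm count, true coefficient, and noise levels below are invented, and the within (demeaning) transformation stands in for the paper's Fixed Effect Model regression:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated panel: n_firms firms over n_years years, with volatility driven by
# dividend yield plus an unobserved firm fixed effect (all values illustrative).
n_firms, n_years, beta_true = 50, 10, 0.8
firm_fe = rng.normal(0.0, 2.0, n_firms)                  # firm-specific intercepts
dy = rng.uniform(0.0, 6.0, (n_firms, n_years))           # dividend yield (%)
vol = firm_fe[:, None] + beta_true * dy + rng.normal(0.0, 1.0, (n_firms, n_years))

# Within transformation: demeaning each firm's series removes the fixed effect
dy_w = dy - dy.mean(axis=1, keepdims=True)
vol_w = vol - vol.mean(axis=1, keepdims=True)
beta_hat = (dy_w * vol_w).sum() / (dy_w**2).sum()
print(f"fixed-effects estimate of beta: {beta_hat:.3f}")  # close to beta_true
```

Because the demeaning step absorbs anything constant within a firm, the estimated coefficient isolates the within-firm relationship between dividend yield and volatility, which is the design choice that motivates a fixed-effects model here.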
|
239 |
Computational Studies of ThDP-Dependent Enzymes
Paulikat, Mirko 18 December 2018 (links)
No description available.
|
240 |
On the water/anion interaction: structure-making and structure-breaking characters, and the symmetry breaking of nitrate
Boisson, Jean 19 December 2008 (links) (PDF)
We study the hydration of the fluoride and iodide ions, paradigms of structure-making and structure-breaking ions, through a structural and dynamical analysis. Concerning structure, we observe the opposite of what the classical structure-maker/structure-breaker definition suggests (F- distorts the hydrogen-bond network while I- improves it). We then compute the lifetimes of the halide-water hydrogen bonds, as well as the residence and reorientation times of water molecules in the first shell of the anions. These measurements confirm the great stability, induced by the strength of the hydrogen bond, of the hydration shell of F-, and the high mobility of water in the hydration shell of I-. Next, we apply the extended jump model (GJM) to a water molecule in the hydration shell of the ions. For F-, the unusual situation in which the two reorientation mechanisms (jump and diffusive) contribute equally is due to the strength of the hydrogen bond, which inhibits jumps of the OH bond. In a second part, we apply the preceding methods to the hydration of nitrate using QM/MM simulations. The structural and dynamical results are similar to those for iodide, and the jump model reveals two jump mechanisms for a water molecule initially bonded to a nitrate oxygen: a jump toward another water molecule or a jump toward another nitrate oxygen. Finally, using bond orders and atomic charges, we demonstrate the water-induced symmetry breaking of nitrate, as well as the fast interconversion dynamics between the nitrate's states.
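The hydrogen-bond lifetime analysis used in this kind of study can be illustrated on a toy trajectory in which the bond indicator h(t) is a two-state Markov chain; the break/form rates below are invented placeholders rather than values for F- or I-:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy 0/1 hydrogen-bond indicator h(t): a two-state Markov chain with
# per-step break and (re)form probabilities (rates are illustrative only).
p_break, p_form, n_steps = 0.02, 0.01, 200000
h = np.empty(n_steps, dtype=int)
h[0] = 1
for t in range(1, n_steps):
    if h[t - 1] == 1:
        h[t] = 0 if rng.uniform() < p_break else 1
    else:
        h[t] = 1 if rng.uniform() < p_form else 0

# Intermittent H-bond correlation function c(t) = <h(0)h(t)> / <h>
max_lag = 300
c = np.array([np.mean(h[:-lag if lag else None] * h[lag:])
              for lag in range(max_lag)]) / h.mean()
tau = float((c - c[-1]).sum())   # crude integrated relaxation time (in steps)
print(f"estimated H-bond relaxation time ~ {tau:.0f} steps")
```

In a real simulation, h(t) is computed from geometric hydrogen-bond criteria along the MD or QM/MM trajectory, and the decay of c(t) distinguishes the stable F- shell from the mobile I- shell.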
|