101.
A Self-Consistent-Field Perturbation Theory of Nuclear Spin Coupling Constants. Blizzard, Alan Cyril, 05 1900 (has links)
Scope and Content statement provided in place of the abstract. / The principal methods of calculating nuclear spin coupling constants
by applying perturbation theory to molecular orbital wavefunctions for the
electronic structure of molecules are discussed. A new method employing a
self-consistent-field perturbation theory (SCFPT) is then presented and compared
with the earlier methods.
In self-consistent-field (SCF) methods, the interaction of an
electron with other electrons in a molecule is accounted for by treating the
other electrons as an average distribution of negative charge. However, this
charge distribution cannot be calculated until the electron-electron interactions
themselves are known. In the SCF method, an initial charge distribution
is assumed and then modified in an iterative calculation until the
desired degree of self-consistency is attained. In most previous perturbation
methods, these electron interactions are not taken into account in a self-consistent
manner in calculating the perturbed wavefunction even when SCF
wavefunctions are used to describe the unperturbed molecule.
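The iterative SCF cycle described above can be sketched in a few lines. The following toy two-orbital model is purely illustrative (the matrices, the coupling constant J, and the occupation are invented, and a simple diagonal mean-field term stands in for the real two-electron integrals), but it shows the fixed-point structure of an SCF calculation: guess a density, build the mean-field operator from it, re-solve, and repeat until self-consistent.

```python
import numpy as np

# Toy two-orbital illustration of the SCF cycle.  H, J, and n_occ are
# invented for illustration; real SCF codes build the Fock matrix from
# two-electron integrals.
H = np.array([[-1.0, -0.5],
              [-0.5, -0.3]])   # one-electron ("core") Hamiltonian
J = 0.4                        # strength of the mock electron repulsion
n_occ = 1                      # one doubly occupied orbital

def fock(D):
    # Mean field: each orbital feels the average charge in the other one.
    return H + J * np.diag(np.diag(D)[::-1])

D = np.zeros((2, 2))                      # initial guess: no charge
for it in range(100):
    eps, C = np.linalg.eigh(fock(D))      # solve the one-electron problem
    D_new = 2.0 * C[:, :n_occ] @ C[:, :n_occ].T   # rebuild the density
    if np.linalg.norm(D_new - D) < 1e-10:         # self-consistency reached
        break
    D = D_new
print(f"self-consistent after {it} iterations; tr(D) = {np.trace(D_new):.2f}")
```

The loop converges in a handful of iterations here; in a perturbation treatment the same self-consistency requirement is imposed on the perturbed density as well, which is the point of the SCFPT approach.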
The main advantage of the new SCFPT approach is that it treats the interactions between electrons with the same degree of self-consistency
in the perturbed wavefunction as in the unperturbed wavefunction. The
SCFPT method offers additional advantages due to its computational
efficiency and the direct manner in which it treats the perturbations.
This permits the theory to be developed for the orbital and dipolar contributions
to nuclear spin coupling as well as for the more commonly
treated contact interaction.
In this study, the SCFPT theory is used with the Intermediate
Neglect of Differential Overlap (INDO) molecular orbital approximation to
calculate a number of coupling constants involving 13C and 19F. The
usually neglected orbital and dipolar terms are found to be very important
in FF and CF coupling. They can play a decisive role in explaining the
experimental trend of JCF among a series of compounds. The orbital interaction
is found to play a significant role in certain CC couplings.
Generally good agreement is obtained between theory and experiment
except for JCF and JFF in oxalyl fluoride and the incorrect signs obtained
for cis JFF in fluorinated ethylenes. The nature of the theory permits
the latter discrepancy to be rationalized in terms of computational details.
The value of JFF in difluoroacetic acid is predicted to be -235 Hz.
The SCFPT method is used with a theory of dπ - pπ bonding to predict
in agreement with experiment that JCH in acetylene will decrease when that
molecule is bound in a transition metal complex. / Thesis / Doctor of Philosophy (PhD)
102.
Sparse Matrices in Self-Consistent Field Methods. Rubensson, Emanuel, January 2006 (has links)
This thesis is part of an effort to enable large-scale Hartree-Fock/Kohn-Sham (HF/KS) calculations. The objective is to model molecules and materials containing thousands of atoms at the quantum mechanical level. HF/KS calculations are usually performed with the Self-Consistent Field (SCF) method. This method involves two computationally intensive steps: the construction of the Fock/Kohn-Sham potential matrix from a given electron density, and the subsequent update of the electron density, usually represented by the so-called density matrix. In this thesis the focus lies on the representation of potentials and electron density and on the density matrix construction step in the SCF method. Traditionally a diagonalization has been used for the construction of the density matrix. This diagonalization method is, however, not appropriate for large systems since the time complexity of this operation is O(n³). Three types of alternative methods are described in this thesis: energy minimization, Chebyshev expansion, and density matrix purification. The efficiency of these methods relies on fast matrix-matrix multiplication. Since the occurring matrices become sparse when the separation between atoms exceeds some value, the matrix-matrix multiplication can be performed with complexity O(n). A hierarchic sparse matrix data structure is proposed for the storage and manipulation of matrices. This data structure allows for easy development and implementation of algebraic matrix operations, particularly those needed for the density matrix construction, but also for other parts of the SCF calculation. The thesis also addresses truncation of small elements to enforce sparsity, permutation and blocking of matrices, and calculation of the HOMO-LUMO gap and a few surrounding eigenpairs when density matrix purification is used instead of the traditional diagonalization method.
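Density matrix purification replaces diagonalization with repeated matrix-matrix multiplication, which is what makes sparse O(n) algorithms possible. The following dense sketch uses McWeeny's purification polynomial; the random "Fock" matrix, its size, and the chemical-potential guess mu are invented for illustration, and a real linear-scaling implementation would run the same iteration on a sparse hierarchic structure of the kind the thesis proposes.

```python
import numpy as np

# Sketch of density matrix purification (McWeeny's polynomial).  The mock
# Fock matrix and mu are illustrative; mu is assumed to lie in the
# HOMO-LUMO gap.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
F = (A + A.T) / 2                      # mock symmetric Fock matrix
mu = 0.0

# Initial guess: map the spectrum of F linearly into [0, 1], with levels
# below mu landing above 1/2.
lam = np.linalg.norm(F - mu * np.eye(6), 2)    # bound on the spectral radius
D = 0.5 * np.eye(6) - (F - mu * np.eye(6)) / (2 * lam)

# McWeeny iteration D <- 3D^2 - 2D^3 drives every eigenvalue to 0 or 1,
# using only matrix-matrix multiplications.
for _ in range(60):
    D2 = D @ D
    D = 3 * D2 - 2 * D2 @ D
    if np.linalg.norm(D @ D - D) < 1e-13:      # idempotent: converged
        break
print("occupied levels below mu:", round(float(np.trace(D))))
```

The trace of the converged, idempotent D counts the occupied levels, so no eigenvalues ever need to be computed explicitly.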
103.
Dimensionally Compatible System of Equations for Tree and Stand Volume, Basal Area, and Growth. Sharma, Mahadev, 17 November 1999 (has links)
A dimensionally compatible system of equations for stand basal area, volume, and basal area and volume growth was derived using dimensional analysis. These equations are analytically and numerically consistent with dimensionally compatible individual tree volume and taper equations and share parameters with them. Parameters for the system can be estimated by fitting individual tree taper and volume equations or by fitting stand level basal area and volume equations. In either case the parameters are nearly identical. Therefore, parameters for the system can be estimated at the tree or stand level without changing the results.
Data from a thinning study in loblolly pine (Pinus taeda L.) plantations established on cutover site-prepared lands were used to estimate the parameters. However, the developed system of equations is general and can be applied to other tree species in other locales. / Ph. D.
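The parameter-estimation idea can be illustrated with a generic combined-variable volume equation. The form V = a·D^b·H^c and the synthetic measurements below are invented for illustration and are not the thesis's dimensionally compatible system; the sketch only shows how taking logarithms turns such a fit into ordinary least squares.

```python
import numpy as np

# Illustrative fit of V = a * D**b * H**c to synthetic tree data.
# The equation form, constants, and data are invented for this sketch.
rng = np.random.default_rng(42)
D = rng.uniform(10, 50, 200)            # diameter at breast height (cm)
H = rng.uniform(8, 30, 200)             # total height (m)
V = 4e-5 * D**2 * H * rng.lognormal(0.0, 0.05, 200)   # "observed" volumes

# Taking logs linearizes the model: log V = log a + b log D + c log H.
X = np.column_stack([np.ones_like(D), np.log(D), np.log(H)])
coef, *_ = np.linalg.lstsq(X, np.log(V), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
print(f"fitted: V = {a:.2e} * D^{b:.2f} * H^{c:.2f}")
```

With exponents near b = 2 and c = 1 this is the classic constant-form-factor equation, which is also dimensionally consistent (area times length gives volume).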
104.
The Relative Importance of Selected Variables on the Employment Consistency of Virginia Ex-Offenders. Onyewu, Chinonyerem Nonye Chidozie, 18 March 2009 (has links)
To decrease the steady rise in the prison population, we must deter ex-offenders from re-offending and recidivating once they have been released. For ex-offenders, finding employment is critical to successful post-release re-integration, which can help reduce the chances of recidivating. Ex-offenders who are consistent in their employment patterns are less likely to return to a life of crime. This study investigated the relative importance and significance of 11 selected variables on four separate levels of employment consistency. The selected variables were chosen based on what the literature has identified as affecting employment patterns of ex-offenders and the general population, and on which data were reliable and available. The study group consisted of 2,314 male Virginia ex-offenders released in fiscal year 2001. The results revealed that the variables of time served, career and technical education program completions, educational level, age at release, race, and being convicted of a violent offense were positive predictors of employment consistency. On the other hand, having a record of minor infractions and being a repeat offender were associated with decreasing employment consistency in the analysis. The findings of the study suggest that it is important for offenders to change their ways of thinking and their attitudes. This can be accomplished by taking advantage of opportunities in prison to participate in rehabilitative services and educational programs. In addition, as offenders get older they tend to abandon criminal ways of thinking, and once released they are more apt to stay employed. Furthermore, the influence of the race variable did not affect the study group of ex-offenders as anticipated. / Ph. D.
105.
Bases de Datos NoSQL: escalabilidad y alta disponibilidad a través de patrones de diseño / NoSQL Databases: scalability and high availability through design patterns. Antiñanco, Matías Javier, 09 June 2014 (has links)
This work presents a catalog of techniques and design patterns currently applied in NoSQL databases. The proposed approach consists of a presentation of the state of the art of NoSQL databases, an exposition of the related key concepts, and a subsequent exhibition of a set of techniques and design patterns aimed at scalability and high availability.
To that end:
• The main characteristics of NoSQL databases are briefly described, along with the factors that motivated their appearance and their differences from their relational counterparts; the CAP theorem is presented and the ACID properties are contrasted with BASE.
• The problems motivating the techniques and design patterns to be described are introduced.
• Techniques and design patterns that address these problems are presented.
• Finally, the work concludes with an integrative analysis, and other relevant research topics are indicated.
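One classic pattern from the scalability family such catalogs survey is consistent hashing, which keeps data movement small when nodes join or leave a cluster. The sketch below is a generic illustration (the node names and virtual-node count are invented, and it is not claimed to be a pattern from this particular thesis):

```python
import bisect
import hashlib

# Minimal consistent-hashing ring with virtual nodes.  Keys and nodes are
# hashed onto the same ring; a key belongs to the next node clockwise.
def _h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=100):
        self._ring = sorted((_h(f"{n}#{i}"), n)
                            for n in nodes for i in range(vnodes))
        self._keys = [h for h, _ in self._ring]

    def node_for(self, key: str) -> str:
        i = bisect.bisect(self._keys, _h(key)) % len(self._ring)
        return self._ring[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
before = {k: ring.node_for(k) for k in map(str, range(1000))}
ring2 = Ring(["node-a", "node-b", "node-c", "node-d"])
moved = sum(before[k] != ring2.node_for(k) for k in before)
print(f"keys remapped after adding one node: {moved} of 1000")
```

Adding a fourth node remaps only roughly a quarter of the keys, whereas naive modulo hashing would remap most of them; this is the property that makes the pattern attractive for horizontally scaled stores.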
106.
A theoretical study of creep deformation mechanisms of Type 316H stainless steel at elevated temperatures. Hu, Jianan, January 2015 (has links)
The currently operating Generation II Advanced Gas-Cooled Reactors (AGR) in UK nuclear power stations, mainly built in the 1960s and 1970s, are approaching their design life. Besides developing the new generation of reactors, the government is also seeking to extend the life of some AGRs. Creep and failure properties of the Type 316H austenitic stainless steels used in some AGR components at elevated temperature are under investigation at EDF Energy Ltd. However, the current empirical creep models used and examined at EDF Energy are deficient and show poor agreement with experimental data under the complex thermal/mechanical conditions encountered in operation. The overall objective of the present research is to improve our general understanding of the creep behaviour of Type 316H stainless steels under various conditions by undertaking theoretical studies and developing a physically based multiscale state-variable model that takes into account the evolution of different microstructural elements and a range of internal mechanisms, in order to make realistic life predictions. A detailed review shows that different microstructural elements are responsible for the internal deformation mechanisms in engineering alloys such as 316H stainless steel. These include strengthening effects, associated with forest dislocation junctions, solute atoms and precipitates, and softening effects, associated with recovery of the dislocation structure and coarsening of precipitates. All the mechanisms involve interactions between dislocations and different types of obstacles. Thus a change in the microstructural state will change the material's internal state and influence its mechanical/creep properties.
Based on this understanding, a multiscale self-consistent model for a polycrystalline material is established, consisting of a continuum, crystal plasticity framework and a dislocation link-length model that allows the detailed dislocation distribution and its evolution during deformation to be incorporated. The model captures the interaction between individual slip planes (self- and latent hardening) and between individual grains and the surrounding matrix (plastic mismatch, leading to residual stress). The state variables associated with the microstructural elements are identified as the mean spacings between each type of obstacle. The evolution of these state variables is described by a number of physical processes, including dislocation multiplication, climb-controlled network coarsening, and phase transformation (nucleation, growth and coarsening of different phases). Enhancements to the deformation kinetics at elevated temperature are also presented. Further, several simulations are carried out to validate the established model and to evaluate and interpret various available data measured for 316H stainless steel. Specimens are divided into two groups: ex-service plus laboratory aged (EXLA), with a considerable population of precipitates, and solution treated (ST), where precipitates are not present. For the EXLA specimens, the model is used to evaluate the microscopic lattice response, either parallel or perpendicular to the loading direction, under uniaxial tensile and/or compressive loading at ambient temperature, and the macroscopic Bauschinger effect, taking into account the effect of pre-loading and pre-creep history. For the ST specimens, the model is used to evaluate the phase transformation in the specimen head volume subjected to pure thermal ageing, and the multiple secondary stages observed during uniaxial tensile creep in the specimen gauge volume at various temperatures and stresses.
The results and analysis in this thesis improve the fundamental understanding of the relationship between the evolution of microstructure and the creep behaviour of the material. They also benefit the assessment of the material's internal state and further investigation of deformation mechanisms over a broader range of temperatures and stresses.
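As a caricature of the state-variable idea, a single internal variable that slows the creep rate until hardening and recovery balance already reproduces a primary-to-secondary creep transition. All constants below are invented for illustration; the thesis's multiscale model tracks many such variables per slip system and grain rather than this one-equation sketch.

```python
# Toy single-state-variable creep law.  rho stands in for an evolving
# dislocation-structure scale; all constants are invented.
A, n_exp = 1e-12, 4.0        # Norton-law prefactor and stress exponent
h, r = 50.0, 0.05            # hardening and recovery coefficients
sigma = 100.0                # applied stress (MPa)

dt, steps = 1.0, 5000        # explicit time stepping (s)
rho, eps = 1.0, 0.0
for _ in range(steps):
    rate = A * sigma**n_exp / rho              # creep rate, slowed by rho
    rho += (h * rate - r * (rho - 1.0)) * dt   # state-variable evolution
    eps += rate * dt
print(f"creep strain after {steps} s: {eps:.3e}, state rho = {rho:.3f}")
```

The strain rate starts high (primary creep) and settles to a steady value once the hardening term h·rate balances the recovery term r·(rho - 1), mimicking a secondary stage.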
107.
Verzerrter Recall als potentielles Hindernis für Synergie bei Gruppenentscheidungen / Biased Recall as a potential obstacle for the achievement of synergy in decision-making groups. Giersiepen, Annika Nora, 20 December 2016 (has links)
In hidden profiles, groups often fail to fulfill their potential to make better decisions than any of their individual members. Several causes of this phenomenon have already been identified, in particular biases in the content of the group discussion and in the group members' evaluation of decision-relevant information. The present work examines a further aspect of individual information processing whose distortion could have a detrimental influence on the decision quality of discussion groups: individual recall of task-relevant information. Two biases are postulated: a recall advantage for information supporting the respective group member's initial preference, and a bias in favor of information already available before the discussion. Both biases are assumed to have a negative influence on the decision quality of the individual and thus of the whole group. These assumptions were examined in a series of four experiments and a reanalysis of two earlier studies. Overall, evidence was found for a recall advantage of one's own information, known before the discussion, over information newly learned during the discussion. Evidence for a recall advantage of preference-consistent information, by contrast, appeared only sporadically and was not significant in a meta-analytic summary. An experimental manipulation of the recall biases provided no indication of a relationship between these factors and decision quality in hidden-profile situations.
According to the results of this work, a bias in individual recall of decision-relevant information is therefore not a useful extension of the existing explanations for the failure of decision-making groups to realize synergies.
108.
The regular histories formulation of quantum theory. Priebe, Roman, January 2012 (has links)
A measurement-independent formulation of quantum mechanics called ‘regular histories’ (RH) is presented, able to reproduce the predictions of the standard formalism without the need for a quantum-classical divide or the presence of an observer. It applies to closed systems and features no wave-function collapse. Weights are assigned only to histories satisfying a criterion called ‘regularity’. As the set of regular histories is not closed under the Boolean operations, this requires a new concept of weight, called ‘likelihood’. Remarkably, this single change is enough to overcome many of the well-known obstacles to a sensible interpretation of quantum mechanics. For example, Bell’s theorem, which makes essential use of probabilities, places no constraints on the locality properties of a theory based on likelihoods. Indeed, RH is both counterfactually definite and free from action-at-a-distance. Moreover, in RH the meaningful histories are exactly those that can be witnessed at least in principle. Since it is especially difficult to make sense of the concept of probability for histories whose occurrence is intrinsically indeterminable, this makes likelihoods easier to justify than probabilities. Interaction with the environment causes the kinds of histories relevant at the macroscopic scale of human experience to be witnessable and indeed to generate Boolean algebras of witnessable histories, on which likelihoods reduce to ordinary probabilities. Furthermore, a formal notion of inference defined on regular histories satisfies, when restricted to such Boolean algebras, the classical axioms of implication, explaining our perception of a largely classical world. Even in the context of general quantum histories the rules of reasoning in RH are remarkably intuitive. Classical logic must only be amended to reflect the fundamental premise that one cannot meaningfully talk about the occurrence of unwitnessable histories.
Crucially, different histories with the same ‘physical content’ can be interpreted in the same way and independently of the family in which they are expressed. RH thereby rectifies a critical flaw of its inspiration, the consistent histories (CH) approach, which requires either an as yet unknown set selection rule or a paradigm shift towards an unconventional picture of reality whose elements are histories-with-respect-to-a-framework. It can be argued that RH compares favourably with other proposed interpretations of quantum mechanics in that it resolves the measurement problem while retaining an essentially classical worldview without parallel universes, a framework-dependent reality or action-at-a-distance.
109.
Ion cyclotron resonance heating in toroidal plasmas. Hedin, Johan, January 2000 (has links)
110.
Approche cartésienne pour le calcul du vent en terrain complexe avec application à la propagation des feux de forêt / A Cartesian approach to wind computation over complex terrain with application to wildfire spread. Proulx, Louis-Xavier, 01 1900 (has links)
The projection method and Sasaki's variational technique are two methods for extracting a divergence-free vector field from an arbitrary vector field. From a high-altitude wind speed, a velocity field is generated on a staggered grid over a topography given by an analytical function. The Cartesian-grid Embedded Boundary method is used to solve a Poisson equation, obtained from the projection, on an irregular domain with mixed boundary conditions. The solution of this equation gives the correction that makes the initial velocity field satisfy conservation of mass while taking into account the effects of the terrain. The incompressible velocity field will be used to spread a wildfire over the topography with the Level Set method. The algorithm is described for the two- and three-dimensional cases and convergence tests are performed.
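The projection step itself is easy to demonstrate on a simple domain. The sketch below substitutes a spectral Poisson solve on a periodic grid for the thesis's embedded-boundary solver on irregular terrain (the grid size and the random initial field are illustrative): solve a Poisson equation for a potential phi whose Laplacian equals the divergence of the field, then subtract its gradient.

```python
import numpy as np

# Projection of an arbitrary 2D field onto its divergence-free part,
# using FFTs on a periodic grid purely for illustration.
n = 63                                      # odd size avoids Nyquist-mode bookkeeping
k = 2 * np.pi * np.fft.fftfreq(n)           # wavenumbers for unit grid spacing
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                              # dummy value; mean mode handled below

rng = np.random.default_rng(1)
u = rng.standard_normal((n, n))             # arbitrary initial wind field
v = rng.standard_normal((n, n))

# Solve  laplacian(phi) = div(u, v)  in Fourier space, then subtract grad(phi).
uh, vh = np.fft.fft2(u), np.fft.fft2(v)
div_h = 1j * kx * uh + 1j * ky * vh
phi_h = div_h / (-k2)
phi_h[0, 0] = 0.0                           # fix the arbitrary constant in phi
u_proj = np.real(np.fft.ifft2(uh - 1j * kx * phi_h))
v_proj = np.real(np.fft.ifft2(vh - 1j * ky * phi_h))

# The corrected field satisfies conservation of mass (zero divergence).
div_after = np.fft.ifft2(1j * kx * np.fft.fft2(u_proj)
                         + 1j * ky * np.fft.fft2(v_proj))
print("max |div| after projection:", np.abs(div_after).max())
```

On the irregular terrain-following domains of the thesis the same correction is obtained from the embedded-boundary Poisson solve with mixed boundary conditions rather than from FFTs.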