41 |
Conformational Transitions in Polymer Brushes. Romeis, Dirk, 07 April 2014
A polymer brush is formed by densely grafting the chain ends of polymers onto a surface. This tethering of the long macromolecules has considerable influence on the surface properties, which can be additionally modified by changing the environmental conditions. In this context it is of special interest to understand and control the behavior of the grafted layer and to create surfaces that display a desired response to external stimulation.
The present work studies densely grafted polymer brushes and the effects that such an environment imposes on an individual chain molecule in the grafted layer. For this purpose we developed a new self-consistent field approach to describe mixtures of heterogeneous chains composed of differently sized hard spheres. Applying this method to polymer brushes, we consider a fraction of the grafted molecules to differ from the majority brush chains. The modification of these chains includes a variation in the degree of polymerization, a different solvent selectivity, and a variable size of the free end-monomer. Owing to the computational efficiency of the present approach compared, for example, to direct simulation methods, we can study the conformations of the modified 'guest' chains systematically as a function of the relevant parameters. With respect to the brush profile and the distribution of free chain ends, the new method shows very good quantitative agreement with corresponding simulation results. We also confirm the observation that these 'guest' chains can undergo a conformational transition depending on the type of modification and the solvent quality.
For the cases studied in the present work we analyze the conditions under which this conformational switching is most sensitive. In addition, an analytical model is proposed to describe this effect. We compare its predictions with the numerical results and find good agreement.
|
42 |
Numerical Reaction-transport Model of Lake Dynamics and Their Eutrophication Processes. Stojanovic, Severin, 22 September 2011
A 1D numerical reaction-transport model (RTM) that is a coupled system of partial differential equations is created to simulate prominent physical and biogeochemical processes and interactions in limnological environments. The prognostic variables considered are temperature, horizontal velocity, salinity, and turbulent kinetic energy of the water column, and the concentrations of phytoplankton, zooplankton, detritus, phosphate (H3PO4), nitrate (NO3-), ammonium (NH4+), ferrous iron (Fe2+), iron(III) hydroxide (Fe(OH)3(s)), and oxygen (O2) suspended within the water column. Turbulence is modelled using the k-e closure scheme as implemented by Gaspar et al. (1990) for oceanic environments. The RTM is used to demonstrate how it is possible to investigate limnological trophic states by considering the problem of eutrophication as an example. A phenomenological investigation of processes leading to and sustaining eutrophication is carried out. A new indexing system that identifies different trophic states, the so-called Self-Consistent Trophic State Index (SCTSI), is proposed. This index does not rely on empirical measurements that are then compared to existing tables for classifying limnological environments into particular trophic states, for example, the concentrations of certain species at certain depths to indicate the trophic state, as is commonly done in the literature. Rather, the index is calculated using dynamic properties of only the limnological environment being considered and examines how those properties affect the sustainability of the ecosystem. Specifically, the index is calculated from a ratio of light attenuation by the ecosystem’s primary biomass to that of total light attenuation by all particulate species and molecular scattering throughout the entire water column. The index is used to probe various simulated scenarios that are believed to be relevant to eutrophication: nutrient loading, nutrient limitation, overabundance of phytoplankton, solar-induced turbulence, and wind-induced turbulence.
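As an illustration of how an index of this kind could be evaluated from discretized attenuation profiles, a minimal Python sketch follows. The function name, the toy profiles, and the simple depth integration are assumptions for illustration only; the thesis's exact formulation of the SCTSI may differ.

```python
import numpy as np

def sctsi(dz, k_biomass, k_particulates, k_molecular):
    """Illustrative Self-Consistent Trophic State Index (assumed form).

    dz              : uniform grid spacing of the water column (m)
    k_biomass       : attenuation profile of the primary biomass (1/m)
    k_particulates  : total attenuation profile of all particulate species (1/m)
    k_molecular     : attenuation profile of molecular scattering (1/m)
    Returns the ratio of depth-integrated light attenuation by primary biomass
    to the depth-integrated total attenuation over the whole water column.
    """
    biomass_attenuation = np.sum(k_biomass) * dz
    total_attenuation = np.sum(k_particulates + k_molecular) * dz
    return biomass_attenuation / total_attenuation

# Toy 20 m column with a phytoplankton maximum near 5 m depth.
z = np.linspace(0.0, 20.0, 201)
dz = z[1] - z[0]
k_bio = 0.3 * np.exp(-((z - 5.0) / 3.0) ** 2)   # biomass attenuation peak
k_part = k_bio + 0.05                            # biomass plus detritus and iron particles
k_mol = np.full_like(z, 0.02)                    # background molecular scattering

print(f"SCTSI ~ {sctsi(dz, k_bio, k_part, k_mol):.2f}")
```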
|
43 |
N-representable density matrix perturbation theory. Dianzinga, Mamy Rivo, 07 December 2016
Whereas standard approaches for solving the electronic structure problem have a computational cost that scales with the cube of the number of atoms, solutions to overcome this cubic wall are now well established for ground-state properties and allow the asymptotic linear-scaling regime, O(N), to be reached. These solutions are based on the nearsightedness of the density matrix and on the development of a theoretical framework that bypasses the standard eigenvalue problem to solve directly for the density matrix. Density matrix purification theory constitutes one branch of this framework. As with earlier O(N) methods applied to the ground state, the perturbation theory needed to compute electronic response functions must be revised to circumvent expensive routines such as matrix diagonalization and sums over states. The key point is to develop a robust method based only on the search for the perturbed density matrix, for which, ideally, only sparse matrix multiplications are required. In the first part of this work, we derive a canonical purification that respects the N-representability conditions of the one-particle density matrix for both unperturbed and perturbed electronic structure calculations. We show that this purification polynomial is self-consistent and converges systematically to the right solution. In the second part, using a Hartree-Fock type approach, we apply the method to the computation of the static non-linear response tensors measured in optical spectroscopy. Beyond the linear-scaling calculations achieved, we demonstrate that the N-representability conditions are a prerequisite for ensuring the reliability of the results.
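The abstract does not reproduce the purification polynomial itself. As a generic illustration of the idea that the density matrix can be obtained with sparse matrix products alone, the sketch below applies the classic McWeeny purification at fixed chemical potential, a grand-canonical scheme that is simpler than the N-representable canonical purification derived in the thesis. The function names, the tight-binding toy Hamiltonian, and the crude spectral bound are assumptions.

```python
import numpy as np
import scipy.sparse as sp

def mcweeny_density_matrix(H, mu, n_iter=50, tol=1e-10):
    """Grand-canonical McWeeny purification (illustrative sketch).

    Builds the zero-temperature density matrix projecting onto eigenstates of
    H below the chemical potential mu, using only sparse matrix-matrix
    products, as in linear-scaling purification schemes.
    """
    n = H.shape[0]
    I = sp.identity(n, format="csr")
    # Map the spectrum of H into [0, 1] around mu; twice the largest absolute
    # row sum is a crude but sufficient bound for this toy example with mu = 0.
    lam = 2.0 * abs(H).sum(axis=1).max()
    P = 0.5 * I - (H - mu * I) / lam
    for _ in range(n_iter):
        P2 = P @ P                              # sparse matrix-matrix products only
        P_new = 3.0 * P2 - 2.0 * (P2 @ P)       # McWeeny step: 3P^2 - 2P^3
        if (P_new - P).power(2).sum() < tol:
            return P_new
        P = P_new
    return P

# Toy 1D tight-binding chain: nearest-neighbour hopping t = -1, half filling.
n, t = 100, -1.0
H = sp.diags([t, 0.0, t], offsets=[-1, 0, 1], shape=(n, n), format="csr")
P = mcweeny_density_matrix(H, mu=0.0)
print("Tr P (number of occupied states) ~", P.diagonal().sum())
```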
|
44 |
Parallelization of semiconductor band structure calculations using High Performance Fortran. Rodrigo Daniel Malara, 14 January 2005
The use of multiprocessor systems to solve problems that demand great computational power has become more and more common, but converting sequential programs into concurrent ones is still not a trivial task. Among the factors that make this task difficult, we highlight the absence of a single, consolidated paradigm for building parallel computer systems and the existence of several programming platforms for developing concurrent programs. It is still impossible to relieve the programmer of specifying how the problem will be partitioned among the processors. For a parallel program to be efficient, the programmer must know in depth the aspects that guide the construction of parallel hardware, the architecture on which the software will run, and the chosen concurrent programming platform. This cannot be changed yet; the gain to be had lies in the implementation of the parallel software. That task can be laborious and demand much debugging time, because the programming platforms do not allow the programmer to abstract away from the hardware.
There has been a great effort to create tools that ease this task, allowing the programmer to express the parallelization of a program more easily and succinctly. The present work evaluates the aspects involved in implementing concurrent software using a portability platform called High Performance Fortran, applied to a specific physics problem: the calculation of the band structure of semiconductor heterostructures. The outcome of using this platform was positive. We obtained a performance gain greater than expected and found that the compiler can be even more efficient than the programmer in parallelizing a program. The initial development cost was not very high and can be spread over future projects that build on this knowledge, because after the learning phase the parallelization of programs becomes quick and practical. The chosen parallelization platform does not allow all kinds of problems to be parallelized, only those that follow the data-parallelism paradigm, which nevertheless represent a considerable share of typical physics problems.
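As an illustration of the data-parallelism paradigm mentioned above (the same computation applied independently to partitions of the data, here different k-points of a toy band structure), a short Python sketch using multiprocessing follows. It is unrelated to the thesis's actual HPF implementation; the two-band model Hamiltonian and all names are assumptions.

```python
import numpy as np
from multiprocessing import Pool

def bands_at_k(k):
    """Eigenvalues of a toy two-band Hamiltonian at wave vector k.

    The 2x2 model (gap of 1.0, off-diagonal coupling 0.5*k) is illustrative only.
    """
    H = np.array([[0.5 * k ** 2,        0.5 * k],
                  [0.5 * k,      -0.5 * k ** 2 - 1.0]])
    return np.linalg.eigvalsh(H)

if __name__ == "__main__":
    k_points = np.linspace(-1.0, 1.0, 401)
    # Data parallelism: each worker applies the same routine to its own chunk
    # of k-points, analogous to operating on an HPF-distributed array.
    with Pool(processes=4) as pool:
        bands = np.array(pool.map(bands_at_k, k_points))   # shape (401, 2)
    print("lower band range :", bands[:, 0].min(), "to", bands[:, 0].max())
    print("upper band range :", bands[:, 1].min(), "to", bands[:, 1].max())
```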
|
46 |
A Self-Consistent-Field Perturbation Theory of Nuclear Spin Coupling Constants. Blizzard, Alan Cyril, 05 1900
Scope and Content stated in the place of the abstract. / The principal methods of calculating nuclear spin coupling constants by applying perturbation theory to molecular orbital wavefunctions for the electronic structure of molecules are discussed. A new method employing a self-consistent-field perturbation theory (SCFPT) is then presented and compared with the earlier methods.
In self-consistent-field (SCF) methods, the interaction of an electron with the other electrons in a molecule is accounted for by treating the other electrons as an average distribution of negative charge. However, this charge distribution cannot be calculated until the electron-electron interactions themselves are known. In the SCF method, an initial charge distribution is assumed and then modified in an iterative calculation until the desired degree of self-consistency is attained. In most previous perturbation methods, these electron interactions are not taken into account in a self-consistent manner in calculating the perturbed wavefunction, even when SCF wavefunctions are used to describe the unperturbed molecule.
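As a minimal illustration of the iterative SCF procedure described above (not of the SCFPT or INDO machinery used in the thesis), the following Python sketch repeatedly rebuilds an effective one-electron operator from the current density matrix until self-consistency is reached. The toy mean-field repulsion term, the matrix sizes, and all names are assumptions; real SCF codes also use convergence accelerators such as damping or DIIS.

```python
import numpy as np

def scf_loop(h_core, repulsion_u, n_occ, max_iter=100, tol=1e-8):
    """Generic SCF fixed-point iteration on a toy mean-field model.

    h_core      : one-electron (core) Hamiltonian matrix
    repulsion_u : on-site mean-field repulsion strength (toy two-electron term)
    n_occ       : number of doubly occupied orbitals
    """
    n = h_core.shape[0]
    P = np.zeros((n, n))                       # initial guess: empty density
    for iteration in range(max_iter):
        # Effective Fock-like operator: core part plus an averaged
        # electron-electron term built from the current density.
        F = h_core + repulsion_u * np.diag(np.diag(P)) / 2.0
        eigvals, C = np.linalg.eigh(F)
        C_occ = C[:, :n_occ]
        P_new = 2.0 * C_occ @ C_occ.T          # closed-shell density matrix
        if np.linalg.norm(P_new - P) < tol:    # self-consistency reached
            return eigvals, P_new, iteration
        P = P_new
    return eigvals, P, max_iter

# Toy 4-site chain with 2 doubly occupied orbitals.
h = -1.0 * (np.eye(4, k=1) + np.eye(4, k=-1))
energies, density, n_it = scf_loop(h, repulsion_u=2.0, n_occ=2)
print(f"converged in {n_it} iterations; orbital energies: {energies}")
```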
The main advantage of the new SCFPT approach is that it treats the interactions between electrons with the same degree of self-consistency in the perturbed wavefunction as in the unperturbed wavefunction. The SCFPT method offers additional advantages due to its computational efficiency and the direct manner in which it treats the perturbations. This permits the theory to be developed for the orbital and dipolar contributions to nuclear spin coupling as well as for the more commonly treated contact interaction.
In this study, the SCFPT theory is used with the Intermediate Neglect of Differential Overlap (INDO) molecular orbital approximation to calculate a number of coupling constants involving 13C and 19F. The usually neglected orbital and dipolar terms are found to be very important in FF and CF coupling. They can play a decisive role in explaining the experimental trend of JCF among a series of compounds. The orbital interaction is found to play a significant role in certain CC couplings.
Generally good agreement is obtained between theory and experiment, except for JCF and JFF in oxalyl fluoride and the incorrect signs obtained for cis JFF in fluorinated ethylenes. The nature of the theory permits the latter discrepancy to be rationalized in terms of computational details. The value of JFF in difluoroacetic acid is predicted to be -235 Hz.
The SCFPT method is used with a theory of dπ-pπ bonding to predict, in agreement with experiment, that JCH in acetylene will decrease when that molecule is bound in a transition metal complex. / Thesis / Doctor of Philosophy (PhD)
|
47 |
Sparse Matrices in Self-Consistent Field Methods. Rubensson, Emanuel, January 2006
This thesis is part of an effort to enable large-scale Hartree-Fock/Kohn-Sham (HF/KS) calculations. The objective is to model molecules and materials containing thousands of atoms at the quantum mechanical level. HF/KS calculations are usually performed with the Self-Consistent Field (SCF) method. This method involves two computationally intensive steps. These steps are the construction of the Fock/Kohn-Sham potential matrix from a given electron density and the subsequent update of the electron density, usually represented by the so-called density matrix. In this thesis the focus lies on the representation of potentials and electron density and on the density matrix construction step in the SCF method. Traditionally a diagonalization has been used for the construction of the density matrix. This diagonalization method is, however, not appropriate for large systems since the time complexity for this operation is O(n³). Three types of alternative methods are described in this thesis: energy minimization, Chebyshev expansion, and density matrix purification. The efficiency of these methods relies on fast matrix-matrix multiplication. Since the occurring matrices become sparse when the separation between atoms exceeds some value, the matrix-matrix multiplication can be performed with complexity O(n). A hierarchic sparse matrix data structure is proposed for the storage and manipulation of matrices. This data structure allows for easy development and implementation of algebraic matrix operations, particularly needed for the density matrix construction, but also for other parts of the SCF calculation. The thesis also addresses truncation of small elements to enforce sparsity, permutation and blocking of matrices, and furthermore the calculation of the HOMO-LUMO gap and a few surrounding eigenpairs when density matrix purification is used instead of the traditional diagonalization method.
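A minimal sketch of the block-quadtree idea behind a hierarchic sparse matrix representation is given below: zero blocks are not stored, dense blocks appear only at the lowest level, and matrix multiplication recurses over sub-blocks while skipping zero branches, which is where the reduced complexity for sparse matrices comes from. The data layout, leaf block size, and drop tolerance are assumptions; the thesis's data structure is considerably more elaborate.

```python
import numpy as np

LEAF = 4   # leaf block size; real implementations use much larger blocks

def build(A, tol=1e-12):
    """Build a block quadtree: None for (numerically) zero blocks,
    a dense array at leaf size, otherwise a 2x2 list of sub-blocks."""
    if np.abs(A).max() <= tol:
        return None
    n = A.shape[0]
    if n <= LEAF:
        return A.copy()
    h = n // 2
    return [[build(A[:h, :h], tol), build(A[:h, h:], tol)],
            [build(A[h:, :h], tol), build(A[h:, h:], tol)]]

def add(x, y):
    if x is None:
        return y
    if y is None:
        return x
    if isinstance(x, np.ndarray):
        return x + y
    return [[add(x[i][j], y[i][j]) for j in range(2)] for i in range(2)]

def mul(x, y):
    """Recursive block multiplication; zero (None) blocks are skipped entirely."""
    if x is None or y is None:
        return None
    if isinstance(x, np.ndarray):
        return x @ y
    return [[add(mul(x[i][0], y[0][j]), mul(x[i][1], y[1][j]))
             for j in range(2)] for i in range(2)]

def to_dense(x, n):
    if x is None:
        return np.zeros((n, n))
    if isinstance(x, np.ndarray):
        return x
    h = n // 2
    return np.block([[to_dense(x[0][0], h), to_dense(x[0][1], h)],
                     [to_dense(x[1][0], h), to_dense(x[1][1], h)]])

# Check against dense multiplication on a banded (hence sparse) test matrix.
n = 32
A = np.eye(n) + 0.5 * np.eye(n, k=1) + 0.5 * np.eye(n, k=-1)
HA = build(A)
print(np.allclose(to_dense(mul(HA, HA), n), A @ A))
```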
|
48 |
A theoretical study of creep deformation mechanisms of Type 316H stainless steel at elevated temperatures. Hu, Jianan, January 2015
The currently operating Generation II Advanced Gas-Cooled Reactors (AGR) in the UK nuclear power stations, mainly built in the 1960s and 1970s, are approaching their design life. Besides developing a new generation of reactors, the government is also seeking to extend the life of some AGRs. The creep and failure properties of the Type 316H austenitic stainless steels used in some AGR components at elevated temperature are under investigation at EDF Energy Ltd. However, the current empirical creep models used and examined at EDF Energy have deficiencies and show poor agreement with the experimental data under the complex thermal/mechanical conditions encountered in operation. The overall objective of the present research is to improve our general understanding of the creep behaviour of Type 316H stainless steels under various conditions by undertaking theoretical studies and developing a physically based, multiscale state-variable model that takes into account the evolution of different microstructural elements and a range of internal mechanisms, in order to make realistic life predictions. A detailed review shows that different microstructural elements are responsible for the internal deformation mechanisms in engineering alloys such as 316H stainless steel. These include strengthening effects, associated with forest dislocation junctions, solute atoms and precipitates, and softening effects, associated with recovery of the dislocation structure and coarsening of precipitates. All of these mechanisms involve interactions between dislocations and different types of obstacles. A change in the microstructural state therefore changes the material's internal state and influences its mechanical/creep properties. Based on this understanding, a multiscale self-consistent model for a polycrystalline material is established, consisting of a continuum crystal-plasticity framework and a dislocation link-length model that allows the detailed dislocation distribution structure and its evolution during deformation to be incorporated. The model captures the interaction between individual slip planes (self- and latent hardening) and between individual grains and the surrounding matrix (plastic mismatch, leading to residual stress). The state variables associated with all the microstructural elements are identified as the mean spacings between each type of obstacle. The evolution of these state variables is described by a number of physical processes, including dislocation multiplication, climb-controlled network coarsening, and phase transformation (nucleation, growth and coarsening of different phases). The enhancements to the deformation kinetics at elevated temperature are also presented. Further, several simulations are carried out to validate the established model and to evaluate and interpret various available data measured for 316H stainless steels. Specimens are divided into two groups: ex-service plus laboratory-aged (EXLA), with a considerable population of precipitates, and solution-treated (ST), where precipitates are not present. For the EXLA specimens, the model is used to evaluate the microscopic lattice response, either parallel or perpendicular to the loading direction, under uniaxial tensile and/or compressive loading at ambient temperature, and the macroscopic Bauschinger effect, taking into account the effect of pre-loading and pre-creep history.
For the ST specimens, the model is used to evaluate the phase transformation in the specimen head volume subjected to pure thermal ageing, and the multiple secondary stages observed during uniaxial tensile creep in the specimen gauge volume at various temperatures and stresses. The results and analysis in this thesis improve the fundamental understanding of the relationship between the evolution of the microstructure and the creep behaviour of the material. They are also beneficial to the assessment of the material's internal state and to further investigation of deformation mechanisms over a broader range of temperatures and stresses.
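The multiscale model itself is not reproduced in the abstract. Purely as an illustration of the state-variable idea (a creep rate controlled by internal variables that evolve with deformation), the sketch below integrates a single-variable, Kocks-Mecking-type dislocation-density law with an Arrhenius/power-law rate equation. All parameter values and names are placeholders and are not fitted to Type 316H data.

```python
import numpy as np

def creep_curve(stress, temperature, t_end=1e6, n_steps=200000):
    """Illustrative single-state-variable creep integration.

    Strain rate follows an Arrhenius/power-law kinetic equation reduced by an
    internal (back) stress that grows with dislocation density rho (Taylor
    hardening); rho evolves by a Kocks-Mecking-type storage/recovery balance.
    """
    kB = 8.617e-5                 # Boltzmann constant, eV/K
    Q = 3.0                       # activation energy, eV (placeholder)
    A = 1e10                      # rate prefactor, 1/s (placeholder)
    alpha_mu_b = 1.0e-5           # Taylor factor * shear modulus * Burgers vector, MPa*m (placeholder)
    k1, k2 = 1e8, 30.0            # storage and dynamic-recovery coefficients (placeholders)
    n = 5.0                       # stress exponent (placeholder)

    dt = t_end / n_steps
    rho = 1e12                    # initial dislocation density, 1/m^2
    strain = 0.0
    t = np.linspace(dt, t_end, n_steps)
    strains = np.empty(n_steps)
    for i in range(n_steps):
        back_stress = alpha_mu_b * np.sqrt(rho)             # MPa
        eff = max(stress - back_stress, 0.0)
        rate = A * (eff / 100.0) ** n * np.exp(-Q / (kB * temperature))
        strain += rate * dt
        rho += (k1 * np.sqrt(rho) - k2 * rho) * rate * dt    # strain-driven evolution
        strains[i] = strain
    return t, strains

t, eps = creep_curve(stress=250.0, temperature=823.0)   # roughly 550 C
print(f"creep strain after {t[-1]:.0f} s: {eps[-1]:.3e}")
```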
|
49 |
Ion cyclotron resonance heating in toroidal plasmas. Hedin, Johan, January 2000
|
50 |
Electronic Structure and Lattice Dynamics of Elements and Compounds. Souvatzis, Petros, January 2007
The elastic constants of Mg(1-x)AlxB2 have been calculated in the regime 0 < x < 0.25. The calculations show that the ratio B/G between the bulk and shear modulus stays well below the empirical ductility limit, 1.75, for all concentrations, indicating that the introduction of Al will not change the brittle behaviour of the material considerably. Furthermore, the tetragonal elastic constant C' has been calculated for the transition metal alloys Fe-Co, Mo-Tc and W-Re, showing that with a suitable tuning of the alloying these materials have a vanishingly low C'. Thermal expansion calculations of the 4d transition metals have also been performed, showing good agreement with experiment with the exception of Nb and Mo. The calculated phonon dispersions of the 4d metals all give reasonable agreement with experiment. First-principles calculations of the thermal expansion of hcp Ti have been performed, showing that this element has a negative thermal expansion along the c-axis, which is linked to the closeness of the Fermi level to an electronic topological transition. Calculations of the equation of state of fcc Au give support to the suggestion that the ruby pressure scale might underestimate pressures by ~10 GPa at pressures of ~150 GPa. The high-temperature bcc phase of the group IV metals has been calculated with the novel self-consistent ab initio lattice dynamical (SCAILD) method. The results show good agreement with experiment, and the free-energy resolution of < 1 meV suggests that this method might be suitable for calculating free-energy differences between different crystallographic phases as a function of temperature.
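The ductility criterion quoted above is Pugh's B/G ratio. As a short illustration of how polycrystalline B and G follow from single-crystal elastic constants, the sketch below evaluates the standard Voigt averages for a hexagonal stiffness matrix; the numerical C_ij values are placeholders of roughly MgB2-like magnitude and are not the values computed in the thesis, which may also use a different averaging scheme (e.g. Hill).

```python
import numpy as np

def voigt_moduli(C):
    """Voigt-averaged bulk and shear moduli from a 6x6 stiffness matrix (GPa)."""
    B_V = (C[0, 0] + C[1, 1] + C[2, 2]
           + 2.0 * (C[0, 1] + C[0, 2] + C[1, 2])) / 9.0
    G_V = (C[0, 0] + C[1, 1] + C[2, 2]
           - C[0, 1] - C[0, 2] - C[1, 2]
           + 3.0 * (C[3, 3] + C[4, 4] + C[5, 5])) / 15.0
    return B_V, G_V

# Placeholder hexagonal stiffness constants (GPa), roughly MgB2-like in magnitude.
C11, C12, C13, C33, C44 = 410.0, 70.0, 40.0, 270.0, 80.0
C66 = 0.5 * (C11 - C12)               # hexagonal symmetry relation
C = np.zeros((6, 6))
C[0, 0] = C[1, 1] = C11
C[2, 2] = C33
C[0, 1] = C[1, 0] = C12
C[0, 2] = C[2, 0] = C[1, 2] = C[2, 1] = C13
C[3, 3] = C[4, 4] = C44
C[5, 5] = C66

B, G = voigt_moduli(C)
print(f"B = {B:.0f} GPa, G = {G:.0f} GPa, B/G = {B / G:.2f} (ductile if > 1.75)")
```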
|