141 |
Intra-units cost charging practices across countries, challenges faced: Creation of a framework with relevant improvements. Uwineza Nzabonimana, Clarisse. January 2023 (has links)
Aim: The research analyzes intra-unit cost charging across different countries, projects, and functions in order to establish transparency, fairness, and accountability in cost charging. Furthermore, it suggests a framework seeking to ensure smooth and consistent intra-unit cost charging across the countries, projects, and functions of an MNC. Methodology: The study followed a qualitative strategy with an element of action research. A total of 7 interviews were conducted with 5 people; 2 of them were interviewed twice because the action-research design required gathering their feedback. Findings: The analysis revealed variances in how different units of this MNC determine the rate used, which creates a perception of unfairness and makes it cumbersome to determine the costs to be charged between units. A desire for a harmonized cost-charging framework was noted, and recommendations for potential ways to achieve it were laid out. Conclusion: It is essential to deal with intra-unit cost charging on a continuous basis as the MNC moves towards harmonizing this process. There is a need to be flexible and adaptable to imminent changes.
|
142 |
Modeling Hydrogen-Bonding in Diblock Copolymer/Homopolymer Blends. Dehghan, Kooshkghazi Ashkan. 10 1900 (has links)
The phase behavior of AB diblock copolymers mixed with C homopolymers (AB/C), in which A and C are capable of forming hydrogen bonds, is examined using self-consistent field theory. The study focuses on the modeling of hydrogen bonding in polymers. Specifically, we examine two models for the formation of hydrogen bonds between polymer chains. The first, commonly used, model assumes a large attractive interaction parameter between the A/C monomers. This model reproduces the correct phase transition sequences as compared with experiments, but it fails to correctly describe the change of lamellar spacing induced by the addition of the C homopolymers. The second model is based on the fact that hydrogen bonding leads to A/C complexation. We show that the interpolymer complexation model correctly predicts the order-order phase transition sequences and the decrease of lamellar spacing for strong hydrogen bonding. Our analysis demonstrates that hydrogen bonding of polymers should be modeled by interpolymer complexation. / Master of Science (MSc)
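As a schematic contrast between the two modeling routes named in this abstract (the notation below is ours, not the thesis's), the first route encodes hydrogen bonding as a strongly attractive Flory-Huggins interaction, while the second treats it as a reversible A + C complexation equilibrium:

```latex
% Route 1: effective-interaction model -- a large negative
% Flory-Huggins parameter between A and C monomers
\frac{f_{\text{int}}}{k_{B}T}
  = \chi_{AB}\,\phi_{A}\phi_{B} + \chi_{BC}\,\phi_{B}\phi_{C}
  + \chi_{AC}\,\phi_{A}\phi_{C},
  \qquad \chi_{AC}\ll 0.

% Route 2: interpolymer complexation -- bound AC pairs formed by a
% local reversible reaction with equilibrium constant K
A + C \;\rightleftharpoons\; AC,
\qquad
\frac{\phi_{AC}}{\phi_{A}\,\phi_{C}} = K \propto e^{-\Delta G_{\mathrm{hb}}/k_{B}T}.
```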
|
143 |
Explorations of a Pi-Striped, d-Wave Superconductor. Bazak, Jonathan D. 10 1900 (has links)
The pi-striped, d-wave superconducting (SC) state, which is a type of pair density wave wherein the SC order is spatially modulated, has recently been shown to generate the key ingredients for quantum oscillations consistent with experimental observations (Zelli et al., 2011, 2012). This was accomplished with a phenomenological approach using non-self-consistent Bogoliubov-de Gennes (BdG) theory. The objective of this thesis is to explore two aspects of this approach: the addition of a charge density wave (CDW) order to the previous non-self-consistent calculations, and an attempt at stabilizing the pi-striped state in fully self-consistent BdG theory. It was found that the CDW order had a minimal effect on the Fermi surface characteristics of the pi-striped state, but that a sufficiently strong CDW degrades the Landau levels which are essential for the formation of quantum oscillations. The self-consistent mean-field calculations were unable to stabilize the pi-striped state under a range of modifications to the Hamiltonian. Free energy calculations with the modulated SC order treated as a parameter demonstrate that the pi-striped state is always less energetically favourable than the normal state for the scenarios which were considered. The results of this study constitute a basis for future, more comprehensive studies, using the BdG approach, of the stability of possible pi-striped SC phases. / Master of Science (MSc)
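Schematically (our notation; the thesis's precise parameter choices may differ), the pi-striped state enters a lattice BdG Hamiltonian as a d-wave bond pairing whose amplitude oscillates in space with zero mean:

```latex
% Tight-binding BdG Hamiltonian with a modulated d-wave gap
H = -t\sum_{\langle ij\rangle\sigma} c^{\dagger}_{i\sigma}c_{j\sigma}
    - \mu\sum_{i\sigma} n_{i\sigma}
    + \sum_{\langle ij\rangle}\bigl(\Delta_{ij}\,
      c^{\dagger}_{i\uparrow}c^{\dagger}_{j\downarrow} + \text{h.c.}\bigr),

% pi-striped modulation: d-wave form factor \eta_{ij} (+1 on x-bonds,
% -1 on y-bonds) times a zero-mean cosine, e.g. period-8 stripes
\Delta_{ij} = \Delta_{0}\,\eta_{ij}\,
  \cos\!\Bigl(\mathbf{Q}\cdot\tfrac{\mathbf{r}_{i}+\mathbf{r}_{j}}{2}\Bigr),
\qquad \mathbf{Q} = \bigl(\tfrac{2\pi}{8},\,0\bigr).
```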
|
144 |
The embedding of gauged N = 8 supergravity into 11 dimensions. Krüger, Olaf. 16 December 2016 (has links)
This thesis presents the complete embedding of the bosonic sector of gauged N = 8 supergravity into 11 dimensions. The fields of 11-dimensional supergravity are reformulated in a non-linear way, such that their supersymmetry transformations can be compared to the four-dimensional ones. In this way, non-linear relations between the redefined higher-dimensional fields and the fields of N = 8 supergravity were already found in the literature. This is the basis for finding direct uplift Ansätze for the bosonic fields of 11-dimensional supergravity in terms of the four-dimensional ones. This work gives the scalar Ansätze for the internal fields. First, the well-known uplift formulae for the inverse metric, the three-form potential with mixed index structure and the six-form potential are summarized. Secondly, new embedding formulae for the explicit internal metric, the full three-form potential and the warp factor are presented. Additionally, two subsequent non-linear Ansätze for the full internal four-form field strength and the Freund-Rubin term are found. Finally, the vector uplift follows simply from the obtained scalar fields. The second part of this thesis uses the obtained embedding formulae to construct group-invariant solutions of 11-dimensional supergravity. In such cases, the higher-dimensional fields can be written solely in terms of certain group-invariant tensors that are adapted to the particular geometry of the internal space. Two such examples are discussed in detail. The first is the well-known uplift of G2 gauged supergravity; furthermore, a new SO(3)×SO(3) invariant solution of 11-dimensional supergravity is found. The consistency of both solutions is explicitly checked for a maximally symmetric spacetime. The results may be generalized to other compactifications, e.g. the non-compact CSO(p, q, r) gaugings or the reduction of type IIB supergravity to five dimensions.
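For orientation (one common convention for the warped 4+7 split; the warp-factor powers in the thesis may be normalized differently), the uplift is organized around a warped product metric and the Freund-Rubin part of the four-form flux:

```latex
% Warped 4+7 split of the 11-dimensional metric: x^\mu are the 4d
% coordinates, y^m the internal ones, \Delta(x,y) the warp factor
ds^{2}_{11} = \Delta^{-1}(x,y)\, g_{\mu\nu}(x)\, dx^{\mu}dx^{\nu}
            + g_{mn}(x,y)\, dy^{m}dy^{n},

% Freund-Rubin Ansatz for the purely four-dimensional components
% of the four-form field strength
F_{\mu\nu\rho\sigma} = f_{\mathrm{FR}}(x,y)\,
  \sqrt{-g}\;\epsilon_{\mu\nu\rho\sigma}.
```

The scalar uplift Ansätze summarized in the abstract then express Δ, g_mn and f_FR in terms of the four-dimensional N = 8 scalars.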
|
145 |
Ab initio study of work function modification at organic/metal interfaces. Kim, Jongmin. 23 May 2024 (has links)
Charge injection (extraction) at an interface plays a crucial role in organic electronics because it heavily affects device performance. One of the most efficient ways to optimize the energy barriers for injection (extraction) is to modify the work function of the electrodes. In this dissertation, we investigate the modification of the work function of Au(111) and Ag(111) induced by dithiol-terminated polyethylene glycol (PEG(thiol)), as well as the dependence of the work function change on the number of PEG repeat units. We find that the work function of Au(111) is reduced by a monolayer of PEG(thiol) molecules. Overall, our calculations indicate that the work function change is mainly induced by (i) the charge rearrangement due to chemisorption and (ii) the intrinsic dipole moment of the PEG(thiol) monolayer. The magnitude of the latter contribution depends noticeably on the number of repeat units and thus causes a variation in the reduction of the work function. The oscillatory behavior reflects a pronounced odd-even effect; as a result, the work function of the metal electrode can be controlled by exploiting this effect. Unfortunately, the convergence of the self-consistent field iteration is not guaranteed for the investigated systems. To achieve smooth convergence, a mixing algorithm applicable to the FP-LAPW method is devised: we add the Kerker preconditioner, as well as further improvements, to Pulay's direct inversion in the iterative subspace. Using this method, one can avoid charge sloshing and noise in the exchange-correlation potential. The method is implemented in the exciting code. In the Ag(111)-based system, a structure with three vacancies in the substrate layer turns out to be particularly stable, and the work function of Ag(111) is found to decrease continuously. Similar to the Au(111) case, an odd-even effect is revealed, arising from the dipole moment of the molecular layer.
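A minimal sketch of the mixing strategy described above, written for a plane-wave-style reciprocal-space density rather than the (L)APW representation actually used in exciting; the parameter values alpha and q0 and the function names are our own illustrative choices:

```python
import numpy as np

def kerker(residual_g, q2, alpha=0.8, q0=1.5):
    """Kerker-precondition a residual given in reciprocal space.

    residual_g : Fourier components of rho_out - rho_in
    q2         : |q|^2 for each Fourier component
    Damps the long-wavelength (small-q) components that cause
    charge sloshing in metallic slabs; the q = 0 component is zeroed.
    """
    return alpha * q2 / (q2 + q0**2) * residual_g

def pulay_mix(rho_hist, res_hist):
    """One Pulay/DIIS step from stored densities and residuals.

    Solves min |sum_i c_i R_i|^2 subject to sum_i c_i = 1 via the
    bordered linear system, then returns the optimal combinations.
    """
    m = len(res_hist)
    B = np.zeros((m + 1, m + 1))
    for i in range(m):
        for j in range(m):
            B[i, j] = np.vdot(res_hist[i], res_hist[j]).real
    B[m, :m] = 1.0   # constraint row: sum of coefficients = 1
    B[:m, m] = 1.0   # constraint column
    rhs = np.zeros(m + 1)
    rhs[m] = 1.0
    c = np.linalg.solve(B, rhs)[:m]
    rho_opt = sum(ci * r for ci, r in zip(c, rho_hist))
    res_opt = sum(ci * r for ci, r in zip(c, res_hist))
    return rho_opt, res_opt
```

One iteration then reads `rho_next = rho_opt + kerker(res_opt, q2)`: the constrained Pulay combination supplies the history information, while the Kerker factor damps the small-q components responsible for charge sloshing.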
|
146 |
Programming Model and Protocols for Reconfigurable Distributed Systems. Arad, Cosmin. January 2013 (has links)
Distributed systems are everywhere. From large datacenters to mobile devices, an ever richer assortment of applications and services relies on distributed systems, infrastructure, and protocols. Despite their ubiquity, testing and debugging distributed systems remains notoriously hard. Moreover, aside from inherent design challenges posed by partial failure, concurrency, or asynchrony, there remain significant challenges in the implementation of distributed systems. These programming challenges stem from the increasing complexity of the concurrent activities and reactive behaviors in a distributed system on the one hand, and the need to effectively leverage the parallelism offered by modern multi-core hardware, on the other hand. This thesis contributes Kompics, a programming model designed to alleviate some of these challenges. Kompics is a component model and programming framework for building distributed systems by composing message-passing concurrent components. Systems built with Kompics leverage multi-core machines out of the box, and they can be dynamically reconfigured to support hot software upgrades. A simulation framework enables deterministic execution replay for debugging, testing, and reproducible behavior evaluation for large-scale Kompics distributed systems. The same system code is used for both simulation and production deployment, greatly simplifying the system development, testing, and debugging cycle. We highlight the architectural patterns and abstractions facilitated by Kompics through a case study of a non-trivial distributed key-value storage system. CATS is a scalable, fault-tolerant, elastic, and self-managing key-value store which trades off service availability for guarantees of atomic data consistency and tolerance to network partitions. We present the composition architecture for the numerous protocols employed by the CATS system, as well as our methodology for testing the correctness of key CATS algorithms using the Kompics simulation framework. Results from a comprehensive performance evaluation attest that CATS achieves its claimed properties and delivers a level of performance competitive with similar systems which provide only weaker consistency guarantees. More importantly, this testifies that Kompics admits efficient system implementations. Its use as a teaching framework as well as its use for rapid prototyping, development, and evaluation of a myriad of scalable distributed systems, both within and outside our research group, confirm the practicality of Kompics. / Kompics / CATS / REST
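As a rough illustration of this component style (a hypothetical mini-model in Python; Kompics itself is a Java framework and its real API differs), components interact only through events triggered on ports and delivered by a scheduler, which is what makes single-queue deterministic replay possible:

```python
import queue

class Port:
    """A typed channel connecting components."""
    def __init__(self):
        self.subscribers = []  # (component, handler) pairs

class Component:
    """Base class: reacts to events delivered on subscribed ports."""
    def __init__(self, scheduler):
        self.scheduler = scheduler

    def subscribe(self, port, handler):
        port.subscribers.append((self, handler))

    def trigger(self, event, port):
        # deliver asynchronously via the scheduler, never directly
        for comp, handler in port.subscribers:
            self.scheduler.put((comp, handler, event))

class Scheduler:
    """Single-queue scheduler; draining events in FIFO order gives a
    deterministic, replayable execution -- the property exploited for
    simulation-based testing of the same unmodified system code."""
    def __init__(self):
        self.q = queue.Queue()
    def put(self, item):
        self.q.put(item)
    def run(self):
        while not self.q.empty():
            comp, handler, event = self.q.get()
            handler(event)

# usage sketch: a Pong component handles an event triggered by another
sched = Scheduler()
net = Port()
class Pong(Component):
    def __init__(self, s):
        super().__init__(s)
        self.subscribe(net, lambda e: print("pong:", e))
pong = Pong(sched)
Component(sched).trigger("ping", net)
sched.run()  # prints: pong: ping
```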
|
148 |
Incremental Scheme for Open-Shell Systems. Anacker, Tony. 22 February 2016 (has links) (PDF)
In this thesis, the implementation of the incremental scheme for open-shell systems with unrestricted Hartree-Fock reference wave functions is described. The implemented scheme is tested for robustness and performance with respect to both the accuracy of the energies and the computation times.
New approaches are discussed to implement a fully automated incremental scheme in combination with the domain-specific basis set approximation. The alpha Domain Partitioning and Template Equalization are presented to handle unrestricted wave functions for the local correlation treatment. Both orbital schemes are analyzed with a test set of structures and reactions. As a further goal, the DSBSenv orbital basis sets and auxiliary basis sets are optimized to be used as the environmental basis in the domain-specific basis set approach. The performance with respect to accuracy and computation times is analyzed with a test set of structures and reactions. In another project, a scheme for the optimization of auxiliary basis sets for uranium is presented. This scheme was used to optimize the MP2Fit auxiliary basis sets for uranium. These auxiliary basis sets enable density fitting in quantum chemical methods and the application of the incremental scheme to systems containing uranium. Another project was the systematic analysis of the binding energies of four water dodecamers. The incremental scheme in combination with the CCSD(T) and CCSD(T)(F12*) methods was used to calculate benchmark energies for these large clusters.
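For context, the incremental scheme referred to here expands the correlation energy over (localized) orbital domains; written in our notation, truncating the series at low order is what yields the computational savings:

```latex
% Many-body expansion of the correlation energy over orbital
% domains i, j, k, ... (truncated at low order in practice)
E_{\mathrm{corr}} = \sum_{i} \varepsilon_{i}
  + \sum_{i<j} \Delta\varepsilon_{ij}
  + \sum_{i<j<k} \Delta\varepsilon_{ijk} + \cdots,

% increments: correlation energy of the joint domain minus all
% lower-order contributions
\Delta\varepsilon_{ij} = \varepsilon_{ij} - \varepsilon_{i} - \varepsilon_{j},
\qquad
\Delta\varepsilon_{ijk} = \varepsilon_{ijk}
  - \Delta\varepsilon_{ij} - \Delta\varepsilon_{ik} - \Delta\varepsilon_{jk}
  - \varepsilon_{i} - \varepsilon_{j} - \varepsilon_{k}.
```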
|
149 |
Performance of supertree methods for estimating species trees. Wang, Yuancheng. January 2010 (has links)
Phylogenetics is the study of ancestor-descendant relationships among different groups of organisms, for example, species or populations of interest. The datasets involved are usually sequence alignments for various genes, each covering some subset of the taxa.
A major task of phylogenetics is to combine gene trees estimated from many sampled loci into an overall estimate of the species tree topology. Eventually, one can construct the tree of life that depicts the ancestor-descendant relationships for all known species around the world. If there is missing data or incomplete sampling in the datasets, then supertree methods can be used to assemble gene trees with different subsets of taxa into an estimated overall species tree topology.
In this study, we assume that gene tree discordance is solely due to incomplete lineage sorting under the multispecies coalescent model (Degnan and Rosenberg, 2009). We examine the performance of the most commonly used supertree method (Wilkinson et al., 2009), namely matrix representation with parsimony (MRP), to explore its statistical properties in this setting. In particular, we show that MRP is not statistically consistent: as the number of gene trees increases, MRP becomes more likely to return an estimated species tree topology other than the true one. For some situations, using longer branch lengths, randomly deleting taxa, or even introducing mutation can improve the performance of MRP so that the matching species tree topology is recovered more often.
In conclusion, MRP is a supertree method that is able to handle large amounts of conflict in the input gene trees. However, MRP is not statistically consistent when gene trees arising from the multispecies coalescent model are used to estimate species trees.
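To make the MRP step concrete (a schematic sketch under our own simplified representation of gene trees as sets of clades, not code from the thesis), each internal clade of each gene tree contributes one binary character, with '?' for taxa absent from that tree; the resulting matrix is then analyzed with a standard parsimony search:

```python
def mrp_matrix(gene_trees, all_taxa):
    """Baum-Ragan matrix representation of a set of gene trees.

    gene_trees : list of (taxa_in_tree, clades), where each clade is a
                 frozenset of taxa spanning one internal edge of the tree
    all_taxa   : iterable of every taxon appearing in any gene tree
    Returns one row of 0/1/? characters per taxon; the matrix is then
    handed to a parsimony program to obtain the MRP supertree.
    """
    all_taxa = sorted(all_taxa)
    rows = {t: [] for t in all_taxa}
    for taxa_in_tree, clades in gene_trees:
        for clade in clades:
            for t in all_taxa:
                if t not in taxa_in_tree:
                    rows[t].append("?")   # taxon missing from this tree
                elif t in clade:
                    rows[t].append("1")   # inside the clade
                else:
                    rows[t].append("0")   # in the tree, outside the clade
    return {t: "".join(chars) for t, chars in rows.items()}

# usage sketch: two small gene trees on overlapping taxon sets
trees = [
    ({"A", "B", "C"}, [frozenset({"A", "B"})]),
    ({"B", "C", "D"}, [frozenset({"C", "D"})]),
]
print(mrp_matrix(trees, {"A", "B", "C", "D"}))
# {'A': '1?', 'B': '10', 'C': '01', 'D': '?1'}
```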
|
150 |
Multiscale modeling of viscoelastic heterogeneous materials: application to the identification and estimation of the basic creep of concrete of containment vessels of nuclear power plants. Le, Quoc Viet. 15 January 2008 (has links)
Concrete can be considered as a multiscale heterogeneous material in which elastic inclusions, i.e., aggregates, are embedded in a viscoelastic matrix which corresponds to the hardened cement paste. At the cement paste scale, the hydration process leads to a partially saturated porous multiphase medium whose solid skeleton is mainly constituted of the C-S-H gel, the other hydrates being crystalline solids. The gel-like properties of the C-S-H are essentially related to the existence of strongly adsorbed structural water in the interlayer nanospace. It is well admitted that this feature is the microscopic origin of the creep mechanism of C-S-H, and therefore of concrete at the macroscale: the viscoelastic behavior can be explained by a rearrangement of the nanostructure of the C-S-H component under a macroscopically applied loading. Macroscopic modeling of concrete creep cannot explain the variability of creep from one concrete mix to another. Indeed, the identification of the viscoelastic parameters of macroscopic models requires creep tests performed at the concrete scale, so the validity of the identified parameters is limited to the concrete mix under consideration, and the identification process does not make it possible to assess the influence of the microstructure and of the mechanical properties of the constituents on the macroscopic creep. In this work, we assume that there exists a microscopic scale at which the driving creep mechanisms are not affected by the concrete mix. At this scale, that of the C-S-H, the viscoelastic properties can be considered intrinsic; the concrete mix essentially influences only the volume fractions of the hydrates. Analytical, semi-analytical and numerical approaches are developed in order to derive, by multiscale homogenization, the macroscopic viscoelastic properties of concrete from the microscopic properties of the constituents together with the mix-dependent microstructure. These approaches consist in extending elastic homogenization schemes to viscoelasticity with the help of the correspondence principle, using the Laplace-Carson transform. The effective properties are thus obtained in a straightforward manner in the Carson transform space, and their time evolution is recovered by inverting the transform. The proposed models meet this requirement both under some restrictive assumptions (constant Poisson's ratio at either the microscopic or the macroscopic scale, constant microscopic bulk modulus) and in the general case. At the theoretical level, two homogenization schemes are investigated: the Mori-Tanaka scheme, based on the Eshelby inclusion solution, and the generalized self-consistent scheme, based on the energetic neutrality condition of the inclusion.
The obtained results show that, under the aforementioned restrictive conditions, the macroscopic creep spectrum appears as a family of sets of retardation times which are bounded by successive microscopic characteristic times. Furthermore, the monotonic-increase and concavity properties of the macroscopic creep compliance, which derive from thermodynamic considerations, are not always preserved by homogenization; they hold only if some compatibility conditions are fulfilled by the microscopic spectra. From the practical point of view, the developed methods are used to establish the macroscopic basic creep compliance function of concrete from the knowledge of data common to all types of concrete, together with the parameters that characterize a given formulation. Hence, by a back-analysis of creep tests on cement paste or on concrete, the intrinsic basic creep properties of the C-S-H gel are estimated.
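Schematically (standard correspondence-principle relations in our notation; the thesis also treats the generalized self-consistent variant), the viscoelastic homogenization replaces each modulus of an elastic scheme by its Laplace-Carson transform and inverts the result:

```latex
% Laplace-Carson transform of a relaxation (or creep) function
\hat{f}(p) = p \int_{0}^{\infty} f(t)\, e^{-pt}\, dt .

% Correspondence principle: apply the elastic scheme in Carson space,
% e.g. Mori-Tanaka for a viscoelastic matrix (m) with elastic spherical
% inclusions (i) of volume fraction c:
\hat{k}_{\mathrm{hom}}(p) = \hat{k}_{m}(p) +
  \frac{c\,\bigl(k_{i} - \hat{k}_{m}(p)\bigr)}
       {1 + (1-c)\,\dfrac{k_{i} - \hat{k}_{m}(p)}
                         {\hat{k}_{m}(p) + \tfrac{4}{3}\hat{\mu}_{m}(p)}} ,

% then recover k_hom(t) by inverting the Carson transform.
```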
|