231 |
Liquid-Crystalline Ordering in Semiflexible Polymer Melts and Blends: A Monte Carlo Simulation Study
Khanal, Kiran, 26 August 2013
No description available.
|
232 |
Allocation of Alternative Investments in Portfolio Management: A Quantitative Study Considering Investors' Liquidity Preferences / Allokering av alternativa investeringar i portföljförvaltning: En kvantitativ studie med hänsyn till investerarnas likviditetspreferenser
Espahbodi, Kamyar; Roumi, Roumi, January 2021
Although illiquid assets pose several difficulties for portfolio allocation, more and more investors are increasing their allocation towards them. Alternative assets are characterized as being harder to value and trade because of their illiquidity, which raises the question of how they should be managed from an allocation optimization perspective. In an attempt to demystify the illiquidity conundrum, shadow allocations are attached to the classical mean-variance framework to account for liquidity activities. The framework is further improved by replacing the variance with the coherent risk measure conditional value at risk (CVaR). This framework is then used to first stress test and optimize a theoretical portfolio and then analyze real-world data in a case study. The investors' liquidity preferences are based on common institutional investors such as Foundations & Charities, Pension Funds, and Unions. The theoretical results support previous findings of the shadow allocations framework and decrease the allocation towards illiquid assets, while the results of the case study do not support the shadow allocations framework. / Trots det faktum att illikvida tillgångar medför flera svårigheter när det gäller portföljallokeringsproblem för investerare, så ökar allt fler investerare sin allokering mot dem. Alternativa tillgångar kännetecknas av att de är svårare att värdera och handla på grund av sin illikviditet, vilket väcker frågan om hur de ska hanteras ur ett allokeringsoptimeringsperspektiv. I ett försök att avmystifiera illikviditetsproblemet adderas skuggallokeringar till det klassiska ramverket för modern portföljteori för att ta hänsyn till likviditetsaktiviteter. Ramverket förbättras ytterligare genom att ersätta variansen mot det koherenta riskmåttet CVaR. Detta ramverk används sedan för att först stresstesta och optimera en teoretisk portfölj, och sedan analysera verkliga data i en fallstudie. Investerarnas likviditetspreferenser baseras på vanliga institutionella investerare såsom stiftelser & välgörenhetsorganisationer, pensionsfonder samt fackföreningar. De teoretiska resultaten stödjer tidigare forskning om ramverket för skuggallokeringer och sänker allokeringen mot illikvida tillgångar, samtidigt som resultaten från fallstudien inte stödjer ramverket för skuggallokeringar.
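As a rough illustration of the scenario-based mean-CVaR optimization discussed above, here is a minimal Python sketch of the Rockafellar-Uryasev linear-programming formulation. It does not include the shadow-allocation adjustment, and the asset universe, parameter values, and toy scenario generator are illustrative assumptions rather than anything taken from the thesis.

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_weights(returns, alpha=0.95, target_return=0.0):
    """Minimise portfolio CVaR_alpha over Monte Carlo return scenarios
    using the Rockafellar-Uryasev LP. `returns` is an (S, n) array of
    scenario returns for n assets; weights are long-only and sum to one."""
    S, n = returns.shape
    # Decision vector: [w_1..w_n, zeta, u_1..u_S]
    c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - alpha) * S))])
    # Scenario losses: u_s >= -r_s.w - zeta  <=>  -r_s.w - zeta - u_s <= 0
    A_ub = np.hstack([-returns, -np.ones((S, 1)), -np.eye(S)])
    b_ub = np.zeros(S)
    # Floor on expected return: -mu.w <= -target_return
    mu = returns.mean(axis=0)
    A_ub = np.vstack([A_ub, np.concatenate([-mu, [0.0], np.zeros(S)])])
    b_ub = np.append(b_ub, -target_return)
    # Fully invested portfolio: sum of weights = 1
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)]).reshape(1, -1)
    bounds = [(0.0, 1.0)] * n + [(None, None)] + [(0.0, None)] * S
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=bounds, method="highs")
    return res.x[:n], res.fun          # optimal weights, optimal CVaR

# Toy usage: three assets, 5000 simulated return scenarios
rng = np.random.default_rng(0)
scenarios = rng.multivariate_normal([0.05, 0.04, 0.07],
                                    np.diag([0.02, 0.01, 0.08]), size=5000)
weights, cvar = min_cvar_weights(scenarios, alpha=0.95, target_return=0.04)
```

Replacing the toy scenarios with output from a calibrated return model, and adding the liquidity (shadow) terms, would bring the sketch closer to the framework described in the abstract.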
|
233 |
Propagation Prediction Over Random Rough Surface By Zeroth Order Induced Current Density
Balu, Narayana Srinivasan, 07 November 2014
Electromagnetic wave propagation over random sea surfaces is a classical problem of interest for the Navy, and significant research has been done over the years. Here we make use of numerical and analytical methods to predict the propagation of microwaves over a random rough surface. The numerical approach uses the direct solution (a Volterra integral equation of the second kind) for the currents induced on the rough surface by the forward-propagating waves to compute the scattered field. The mean scattered field is computed using the Monte Carlo method. Since the exact solution for the induced current density (an infinite series) is computationally intensive, there is a need to predict the propagation efficiently, for time-varying multiple realizations of a random rough surface, using the zeroth-order induced current (the first term of the series), which closely approximates the exact solution. The wind-speed-dependent, fully developed Pierson-Moskowitz sea spectrum is used to model the rough sea surface, although other partially developed roughness spectra may also be used. For comparison, an analytical solution based on the zeroth-order current density is obtained by deriving the mean scattered field as a function of range and vertical height directly from the Parabolic Equation (PE) approximation and the resulting Green's function. The analytical solution takes into account the diffuse component of the scattered field.
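A minimal Python sketch of the Monte Carlo averaging step described above: Gaussian rough-surface realizations are synthesized from a Pierson-Moskowitz spectrum (quoted here in its commonly used wavenumber form, with the usual constants), and a user-supplied field functional, standing in for the zeroth-order induced-current solution, is averaged over realizations. The electromagnetic computation itself is not shown; the grid sizes, wind speed, and placeholder functional are illustrative assumptions.

```python
import numpy as np

def pierson_moskowitz_k(k, U=10.0, g=9.81, alpha=8.1e-3, beta=0.74):
    """Pierson-Moskowitz elevation spectrum in wavenumber form for a fully
    developed sea, with U the wind speed at 19.5 m above the surface."""
    return alpha / (2.0 * k**3) * np.exp(-beta * g**2 / (k**2 * U**4))

def surface_realization(x, k, dk, rng, U=10.0):
    """One Gaussian rough-surface realization by spectral synthesis."""
    amp = np.sqrt(2.0 * pierson_moskowitz_k(k, U) * dk)
    a = rng.standard_normal(k.size)
    b = rng.standard_normal(k.size)
    phase = np.outer(x, k)
    return (amp * (a * np.cos(phase) + b * np.sin(phase))).sum(axis=1)

def mean_scattered_field(field_of_surface, n_real=200, L=200.0,
                         nx=1024, nk=256, U=10.0, seed=0):
    """Monte Carlo estimate of the mean scattered field: average a field
    functional (e.g. the zeroth-order current solution) over independent
    surface realizations."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, L, nx)
    k = np.linspace(2.0 * np.pi / L, 3.0, nk)   # truncated wavenumber grid (rad/m)
    dk = k[1] - k[0]
    acc = 0.0
    for _ in range(n_real):
        z = surface_realization(x, k, dk, rng, U)
        acc = acc + field_of_surface(x, z)       # placeholder for the EM solver
    return acc / n_real

# Toy usage: averaging the rms surface height instead of a scattered field
print(mean_scattered_field(lambda x, z: np.sqrt(np.mean(z**2))))
```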
|
234 |
Improving the light yield and timing resolution of scintillator-based detectors for positron emission tomography
Thalhammer, Christof, 06 July 2015
Positronen-Emissions-Tomographie (PET) ist eine funktionelle medizinische Bildgebungstechnik. Die Lichtausbeute und Zeitauflösung Szintillator basierter PET Detektoren wird von diversen optischen Prozessen begrenzt. Dazu gehört die Lichtauskopplung aus Medien mit hohem Brechungsindex sowie Sensitivitätsbegrenzungen der Photodetektoren. Diese Arbeit studiert mikro- und nano-optische Ansätze um diese Einschränkungen zu überwinden mit dem Ziel das Signal-Rausch Verhältnis sowie die Bildqualität zu verbessern. Dafür wird ein Lichtkonzentrator vorgeschlagen um die Sensitivität von Silizium Photomultipliern zu erhöhen sowie dünne Schichten photonischer Kristalle um die Lichtauskopplung aus Szintillatoren zu verbessern. Die Ansätze werden mit optischen Monte Carlo Simulationen studiert, wobei die Beugungseigenschaften phot. Kristalle hierbei durch eine neuartige kombinierte Methode berücksichtigt werden. Proben der phot. Kristalle und Lichtkonzentratoren wurden mit Fertigungsprozessen der Halbleitertechnologie hergestellt und mit Hilfe eines Goniometer Aufbaus charakterisiert. Die simulierten Eigenschaften konnten hiermit sehr gut experimentell reproduziert werden. Daraufhin wurden Simulationen durchgeführt um den Einfluss beider Konzepte auf die Charakteristika eines PET Detektors zu untersuchen. Diese sagen signifikante Verbesserungen der Lichtausbeute und Zeitauflösung voraus. Darüber hinaus zeigen sie, dass sich auch die Kombination beider Ansätze positiv auf die Detektoreigenschaften auswirken. Diese Ergebnisse wurden in Lichtkonzentrator-Experimenten mit einzelnen Szintillatoren bestätigt. Da die Herstellung phot. Kristalle eine große technologische Herausforderung darstellt, wurde eine neue Fertigungstechnik namens "direct nano imprinting" entwickelt. Dessen Machbarkeit wurde auf Glasswafern demonstriert. Die Arbeit endet mit einer Diskussion der Vor- und Nachteile von Lichtkonzentratoren und phot. Kristallen und deren Implikationen für zukünftige PET Systeme. / Positron emission tomography (PET) is a powerful medical imaging methodology to study functional processes. The light yield and coincident resolving time (CRT) of scintillator-based PET detectors are constrained by optical processes. These include light trapping in high refractive index media and incomplete light collection by photosensors. This work proposes the use of micro and nano optical devices to overcome these limitations, with the ultimate goal of improving the signal-to-noise ratio and overall image quality of PET acquisitions. For this, a light concentrator (LC) to improve the light collection of silicon photomultipliers on the Geiger-cell level is studied. Further, two-dimensional photonic crystals (PhCs) are proposed to reduce light trapping in scintillators. The concepts are studied in detail using optical Monte Carlo simulations. To account for the diffractive properties of PhCs, a novel combined simulation approach is presented that integrates results of a Maxwell solver into a ray tracing algorithm. Samples of LCs and PhCs are fabricated with various semiconductor technologies and evaluated using a goniometer setup. A comparison between measured and simulated angular characteristics reveals very good agreement. Simulation studies of implementing LCs and PhCs into a PET detector module predict significant improvements of the light yield and CRT. Also, combining both concepts indicates no adverse effects but rather a cumulative benefit for the detector performance.
Concentrator experiments with individual scintillators confirm these simulation results. Realizing the challenges of transferring PhCs to scintillators, a novel fabrication method called direct nano imprinting is evaluated. The feasibility of this approach is demonstrated on glass wafers. The work concludes with a discussion of the benefits and drawbacks of LCs and PhCs and their implications for future PET systems.
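To make the "combined" simulation idea above concrete, here is a toy Python Monte Carlo sketch: photon directions inside the scintillator are sampled isotropically, and an angle-dependent transmittance, which in the combined approach would be tabulated from a Maxwell solver for the photonic-crystal slab, decides whether each photon is extracted. The refractive index, the plain-interface baseline, and the sampling details are illustrative assumptions, not the detector model used in the thesis.

```python
import numpy as np

def extraction_efficiency(transmission_lut, n_photons=100_000, seed=1):
    """Toy Monte Carlo estimate of light extraction through a scintillator
    exit face. `transmission_lut(theta)` returns the transmittance at
    internal incidence angle theta, e.g. interpolated from Maxwell-solver
    results for a photonic-crystal coating."""
    rng = np.random.default_rng(seed)
    cos_t = rng.random(n_photons)          # isotropic directions: cos(theta) ~ U(0, 1]
    theta = np.arccos(cos_t)
    extracted = rng.random(n_photons) < transmission_lut(theta)
    return extracted.mean()

# Assumed baseline: bare LYSO/air interface (n ~ 1.82), full transmission below
# the critical angle and total internal reflection above it.
theta_c = np.arcsin(1.0 / 1.82)
bare_interface = lambda th: (th < theta_c).astype(float)
print(extraction_efficiency(bare_interface))   # ~0.16 for this crude model
```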
|
235 |
Fundamental parameters of QCD from non-perturbative methods for two and four flavors
Marinkovic, Marina, 25 March 2014
Die nicht perturbative Formulierung der Quantenchromodynamik (QCD) auf dem vierdimensionalen euklidischen Gitter in Zusammenhang mit der sogenannten Finite-Size-Scaling Methode ermoeglicht die nicht-perturbative Renormierung der QCD-Parameter. Um praezise Vorhersagen aus der Gitter-QCD zu erhalten, ist es noetig, die dynamischen Fermion-Freiheitsgrade in den Gitter-QCD-Simulationen zu beruecksichtigen. Wir betrachten QCD mit zwei und vier O(a)-verbesserten Wilson-Quark-Flavours, wobei deren Masse degeneriert ist. In dieser Dissertation verbessern wir die vorhandenen Bestimmungen des fundamentalen Parameters der Zwei- und Vier-Flavor-QCD. In der Vier-Flavor-Theorie berechnen wir den praezisen Wert des Lambda-Parameters in Einheiten der Skale Lmax, welche im hadronischen Bereich definiert ist. Zudem geben wir auch die praezise Bestimmung der laufenden Schroedinger-Funktional-Kopplung in Vier-Flavor-Theorie an sowie deren Vergleich zu perturbativen Resultaten. Die Monte-Carlo Simulationen der Gitter-QCD in der Schroedinger-Funktional-Formulierung wurden mittels der plattformunabhaengigen Software Schroedinger-Funktional-Mass-Preconditioned-Hybrid-Monte-Carlo (SF-MP-HMC) durchgefuehrt, die als Teil dieses Projektes entwickelt wurde. Schliesslich berechnen wir die Masse des Strange-Quarks und den Lambda-Parameter in Zwei-Flavor-Theorie, wobei die voll-kontrollierte Kontinuums- und chirale Extrapolation zum physikalischen Punkt durchgefuehrt wurden. Um dies zu erreichen, entwickeln wir eine universale Software fuer Simulationen der zwei Wilson-Fermionen-Flavor mit periodischen Randbedingungen, namens Mass-Preconditioned-Hybrid-Monte-Carlo (MP-HMC). Die MP-HMC wird verwendet um Simulationen mit kleinen Gitterabstaenden und in der Naehe der physikalischen Pionmasse ausfuehrlich zu untersuchen. / The non-perturbative formulation of Quantum Chromodynamics (QCD) on a four-dimensional Euclidean space-time lattice, together with finite-size scaling techniques, enables us to renormalize the QCD parameters non-perturbatively. In order to obtain precise predictions from lattice QCD, one needs to include the dynamical fermions in the lattice QCD simulations. We consider QCD with two and four mass-degenerate flavors of O(a)-improved Wilson quarks. In this thesis, we improve the existing determinations of the fundamental parameters of two- and four-flavor QCD. In the four-flavor theory, we compute a precise value of the Lambda parameter in units of the scale Lmax defined in the hadronic regime. We also give a precise determination of the Schroedinger functional running coupling in the four-flavor theory and compare it to perturbative results. The Monte Carlo simulations of lattice QCD within the Schroedinger functional framework were performed with a platform-independent program package, Schroedinger Funktional Mass Preconditioned Hybrid Monte Carlo (SF-MP-HMC), developed as part of this project. Finally, we compute the strange quark mass and the Lambda parameter in the two-flavor theory, performing a well-controlled continuum limit and chiral extrapolation. To achieve this, we developed a universal program package for simulating two flavors of Wilson fermions, Mass Preconditioned Hybrid Monte Carlo (MP-HMC), which we used to run large-scale simulations at small lattice spacings and at pion masses close to the physical value.
|
236 |
Simulation Monte-Carlo de la radiolyse du dosimètre de Fricke par des neutrons rapides / Monte-Carlo simulation of fast neutron radiolysis in the Fricke dosimeter
Tippayamontri, Thititip, January 2009
Monte Carlo calculations are used to simulate the stochastic effects of fast neutron-induced chemical changes in the radiolysis of the ferrous sulfate (Fricke) dosimeter. To study the dependence of the yield of ferric ions, G(Fe³⁺), on fast neutron energy, we have simulated, at 25 °C, the oxidation of ferrous ions in aerated aqueous 0.4 M H₂SO₄ (pH 0.46) solutions subjected to ~0.5-10 MeV incident neutrons, as a function of time up to ~50 s. The radiation effects due to fast neutrons are estimated on the basis of track segment (or "escape") yields calculated for the first four recoil protons, with appropriate weighting according to the energy deposited by each of these protons. For example, a 0.8-MeV neutron generates recoil protons of 0.505, 0.186, 0.069, and 0.025 MeV, with linear energy transfer (LET) values of ~41, 69, 82, and 62 keV/µm, respectively. In doing so, we consider that further recoils make only a negligible contribution to radiation processes. Our results show that the radiolysis of dilute aqueous solutions by fast neutrons produces smaller radical yields and larger molecular yields (relative to the corresponding yields for the radiolysis of water by ⁶⁰Co γ-rays or fast electrons) due to the high LET associated with fast neutrons. The effect of recoil oxygen ions, which is also taken into account in the calculations, is shown to decrease G(Fe³⁺) by about 10%. Our calculated values of G(Fe³⁺) are found to increase slightly with increasing neutron energy over the energy range covered in this study, in good agreement with available experimental data. We have also simulated the effect of temperature on the G(Fe³⁺) values in the fast neutron radiolysis of the Fricke dosimeter from 25 to 300 °C. Our results show an increase of G(Fe³⁺) with increasing temperature, which is readily explained by an increase in the yields of free radicals and a decrease in those of molecular products. For 0.8-MeV incident neutrons (the only case for which experimental data are available in the literature), there is a ~23% increase in G(Fe³⁺) on going from 25 to 300 °C. Although these results are in reasonable agreement with experiment, more experimental data, in particular for different incident neutron energies, would be needed to test our Fe³⁺ ion yield results at elevated temperatures more rigorously.
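The weighting described above amounts to an energy-weighted average of the per-proton track-segment yields; a small Python illustration for the 0.8-MeV neutron example follows. The recoil energies come from the abstract, but the per-proton G(Fe³⁺) values are hypothetical placeholders, not results from the thesis.

```python
# Energy-weighted ferric-ion yield for a 0.8 MeV incident neutron, combining
# the track-segment yields of its first four recoil protons in proportion to
# the energy each deposits. The G values below are illustrative placeholders.
energies = [0.505, 0.186, 0.069, 0.025]   # MeV deposited by each recoil proton
g_values = [9.5, 8.0, 7.5, 8.0]           # hypothetical G(Fe3+) per proton, molecules/100 eV
g_neutron = sum(e * g for e, g in zip(energies, g_values)) / sum(energies)
print(f"G(Fe3+) ~ {g_neutron:.2f} molecules/100 eV")
```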
|
237 |
The computation of Greeks with multilevel Monte Carlo
Burgos, Sylvestre Jean-Baptiste Louis, January 2014
In mathematical finance, the sensitivities of option prices to various market parameters, also known as the “Greeks”, reflect the exposure to different sources of risk. Computing these is essential to predict the impact of market moves on portfolios and to hedge them adequately. This is commonly done using Monte Carlo simulations. However, obtaining accurate estimates of the Greeks can be computationally costly. Multilevel Monte Carlo offers complexity improvements over standard Monte Carlo techniques. However, the idea has never been used for the computation of Greeks. In this work we answer the following questions: can multilevel Monte Carlo be useful in this setting? If so, how can we construct efficient estimators? Finally, what computational savings can we expect from these new estimators? We develop multilevel Monte Carlo estimators for the Greeks of a range of options: European options with Lipschitz payoffs (e.g. call options), European options with discontinuous payoffs (e.g. digital options), Asian options, barrier options and lookback options. Special care is taken to construct efficient estimators for non-smooth and exotic payoffs. We obtain numerical results that demonstrate the computational benefits of our algorithms. We discuss the issues of convergence of pathwise sensitivities estimators. We show rigorously that the differentiation of common discretisation schemes for Ito processes does result in satisfactory estimators of the exact solutions’ sensitivities. We also prove that pathwise sensitivities estimators can be used under some regularity conditions to compute the Greeks of options whose underlying asset’s price is modelled as an Ito process. We present several important results on the moments of the solutions of stochastic differential equations and their discretisations as well as the principles of the so-called “extreme path analysis”. We use these to develop a rigorous analysis of the complexity of the multilevel Monte Carlo Greeks estimators constructed earlier. The resulting complexity bounds appear to be sharp and prove that our multilevel algorithms are more efficient than those derived from standard Monte Carlo.
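As a hedged illustration of the idea, the Python sketch below implements a textbook multilevel Monte Carlo estimator for the pathwise delta of a European call under an Euler-discretised geometric Brownian motion; it follows the standard Giles construction rather than the specific estimators developed in the thesis, and all model parameters and sample sizes are illustrative.

```python
import numpy as np

def mlmc_delta_level(l, n_samples, S0=100.0, K=100.0, r=0.05, sigma=0.2,
                     T=1.0, M=2, rng=None):
    """One MLMC level correction for the pathwise delta of a European call
    under GBM: Euler scheme with M**l steps on the fine path and M**(l-1)
    steps on the coarse path, both driven by the same Brownian increments."""
    if rng is None:
        rng = np.random.default_rng(l)
    nf = M**l
    hf = T / nf
    dW = rng.standard_normal((n_samples, nf)) * np.sqrt(hf)

    Sf = np.full(n_samples, S0)
    for n in range(nf):                                  # fine path
        Sf = Sf * (1.0 + r * hf + sigma * dW[:, n])

    disc = np.exp(-r * T)
    Y = disc * (Sf > K) * Sf / S0                        # pathwise delta, fine
    if l > 0:
        nc, hc = nf // M, T * M / nf
        Sc = np.full(n_samples, S0)
        for n in range(nc):                              # coarse path, coupled increments
            Sc = Sc * (1.0 + r * hc + sigma * dW[:, n * M:(n + 1) * M].sum(axis=1))
        Y = Y - disc * (Sc > K) * Sc / S0
    return Y.mean(), Y.var()

# Telescoping sum over levels: Delta ~ sum_l E[Y_l]
delta = sum(mlmc_delta_level(l, 40_000 // 2**l)[0] for l in range(5))
print(f"MLMC pathwise delta estimate: {delta:.4f}")
```

The coarse and fine paths on each level share the same Brownian increments; this is what keeps the level corrections small and produces the complexity gain over standard Monte Carlo.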
|
238 |
Methods, rules and limits of successful self-assembly
Williamson, Alexander James, January 2011
The self-assembly of structured particles into monodisperse clusters is a challenge on the nano-, micro- and even macro-scale. While biological systems are able to self-assemble with comparative ease, many aspects of this self-assembly are not fully understood. In this thesis, we look at the strategies and rules that can be applied to encourage the formation of monodisperse clusters. Though much of the inspiration is biological in nature, the simulations use a simple minimal patchy particle model and are thus applicable to a wide range of systems. The topics that this thesis addresses include: Encapsulation: We show how clusters can be used to encapsulate objects and demonstrate that such "templates" can be used to control the assembly mechanisms and enhance the formation of more complex objects. Hierarchical self-assembly: We investigate the use of hierarchical mechanisms in enhancing the formation of clusters. We find that, while we are able to extend the ranges where we see successful assembly by using a hierarchical assembly pathway, it does not straightforwardly provide a route to enhance the complexity of structures that can be formed. Pore formation: We use our simple model to investigate a particular biological example, namely the self-assembly and formation of heptameric alpha-haemolysin pores, and show that pore insertion is key to rationalising experimental results on this system. Phase re-entrance: We look at the computation of equilibrium phase diagrams for self-assembling systems, particularly focusing on the possible presence of an unusual liquid-vapour phase re-entrance that has been suggested by dynamical simulations, using a variety of techniques.
|
239 |
Caractérisation de la composante toxicocinétique du facteur d’ajustement pour la variabilité interindividuelle utilisé en analyse du risque toxicologique
Valcke, Mathieu, 11 1900
Un facteur d’incertitude de 10 est utilisé par défaut lors de l’élaboration des valeurs toxicologiques de référence en santé environnementale, afin de tenir compte de la variabilité interindividuelle dans la population. La composante toxicocinétique de cette variabilité correspond à racine de 10, soit 3,16. Sa validité a auparavant été étudiée sur la base de données pharmaceutiques colligées auprès de diverses populations (adultes, enfants, aînés). Ainsi, il est possible de comparer la valeur de 3,16 au Facteur d’ajustement pour la cinétique humaine (FACH), qui constitue le rapport entre un centile élevé (ex. : 95e) de la distribution de la dose interne dans des sous-groupes présumés sensibles et sa médiane chez l’adulte, ou encore à l’intérieur d’une population générale. Toutefois, les données expérimentales humaines sur les polluants environnementaux sont rares. De plus, ces substances ont généralement des propriétés sensiblement différentes de celles des médicaments. Il est donc difficile de valider, pour les polluants, les estimations faites à partir des données sur les médicaments. Pour résoudre ce problème, la modélisation toxicocinétique à base physiologique (TCBP) a été utilisée pour simuler la variabilité interindividuelle des doses internes lors de l’exposition aux polluants. Cependant, les études réalisées à ce jour n’ont que peu permis d’évaluer l’impact des conditions d’exposition (c.-à-d. voie, durée, intensité), des propriétés physico/biochimiques des polluants, et des caractéristiques de la population exposée sur la valeur du FACH et donc la validité de la valeur par défaut de 3,16. Les travaux de la présente thèse visent à combler ces lacunes.
À l’aide de simulations de Monte-Carlo, un modèle TCBP a d’abord été utilisé pour simuler la variabilité interindividuelle des doses internes (c.-à-d. chez les adultes, ainés, enfants, femmes enceintes) de contaminants de l’eau lors d’une exposition par voie orale, respiratoire, ou cutanée. Dans un deuxième temps, un tel modèle a été utilisé pour simuler cette variabilité lors de l’inhalation de contaminants à intensité et durée variables. Ensuite, un algorithme toxicocinétique à l’équilibre probabiliste a été utilisé pour estimer la variabilité interindividuelle des doses internes lors d’expositions chroniques à des contaminants hypothétiques aux propriétés physico/biochimiques variables. Ainsi, les propriétés de volatilité, de fraction métabolisée, de voie métabolique empruntée ainsi que de biodisponibilité orale ont fait l’objet d’analyses spécifiques. Finalement, l’impact du référent considéré et des caractéristiques démographiques sur la valeur du FACH lors de l’inhalation chronique a été évalué, en ayant recours également à un algorithme toxicocinétique à l’équilibre. Les distributions de doses internes générées dans les divers scénarios élaborés ont permis de calculer dans chaque cas le FACH selon l’approche décrite plus haut. Cette étude a mis en lumière les divers déterminants de la sensibilité toxicocinétique selon le sous-groupe et la mesure de dose interne considérée. Elle a permis de caractériser les déterminants du FACH et donc les cas où ce dernier dépasse la valeur par défaut de 3,16 (jusqu’à 28,3), observés presqu’uniquement chez les nouveau-nés et en fonction de la substance mère. Cette thèse contribue à améliorer les connaissances dans le domaine de l’analyse du risque toxicologique en caractérisant le FACH selon diverses considérations. / First, a probabilistic PBTK model was used to simulate, by means of Monte Carlo simulations, the interindividual variability in internal dose metrics (i.e. in adults, children, elderly, pregnant women) following oral, inhalation or dermal exposure to drinking water contaminants, each route taken separately. Second, a similar model was used to simulate this variability following inhalation exposures of various durations and intensities to air contaminants. Then, a probabilistic steady-state algorithm was used to estimate interindividual variability in internal dose metrics for chronic exposures to hypothetical contaminants exhibiting different physico/biochemical properties. These include volatility, the fraction metabolized, the metabolic pathway by which they are biotransformed and oral bioavailability. Finally, the impact of a population’s demographic characteristics and the referent considered on the HKAF for chronic inhalation exposure was studied, also using a probabilistic steady-state algorithm.
First, a probabilistic PBTK model was used to simulate, by means of Monte Carlo simulations, the interindividual variability in internal dose metrics (i.e. in adults, children, elerly, pregnant women) following the oral, inhalation or dermal exposure to drinking water contaminants, taken separately. Second, a similar model was used to simulate this variability following inhalation exposures of various durations and intensities to air contaminants. Then, a probabilistic steady-state algorithm was used to estimate interindividual variability in internal dose metrics for chronic exposures to hypothetical contaminants exhibiting different physico/biochemical properties. These include volatility, the fraction metabolized, the metabolic pathway by which they are biotransformed and oral bioavailability. Finally, the impact of a population’s demographic characteristics and the referent considered on the HKAF for chronic inhalation exposure was studied, also using a probabilistic steady-state algorithm. The distributions of internal dose metrics that were generated for every scenario simulated were used to compute the HKAF as described above. This study has pointed out the determinants of the toxicokinetic sensitivity considering a given subpopulation and dose metric. It allowed identifying determinants of the numeric value of the HKAF, thus cases for which it exceeded the default value of 3,16. This happened almost exclusively in neonates and on the basis of the parent compound. Overall, this study has contributed to the field of toxicological risk assessment by characterizing the HKAF as a function of various considerations.
|
240 |
Exposants géométriques des modèles de boucles dilués et idempotents des TL-modules de la chaîne de spins XXZ
Provencher, Guillaume, 12 1900
Cette thèse porte sur les phénomènes critiques survenant dans les modèles bidimensionnels sur réseau. Les résultats sont l'objet de deux articles : le premier porte sur la mesure d'exposants critiques décrivant des objets géométriques du réseau et, le second, sur la construction d'idempotents projetant sur des modules indécomposables de l'algèbre de Temperley-Lieb pour la chaîne de spins XXZ.
Le premier article présente des expériences numériques Monte Carlo effectuées pour une famille de modèles de boucles en phase diluée. Baptisés "dilute loop models (DLM)", ceux-ci sont inspirés du modèle O(n) introduit par Nienhuis (1990). La famille est étiquetée par les entiers relativement premiers p et p' ainsi que par un paramètre d'anisotropie. Dans la limite thermodynamique, il est pressenti que le modèle DLM(p,p') soit décrit par une théorie logarithmique des champs conformes de charge centrale c(\kappa)=13-6(\kappa+1/\kappa), où \kappa=p/p' est lié à la fugacité du gaz de boucles \beta=-2\cos\pi/\kappa, pour toute valeur du paramètre d'anisotropie. Les mesures portent sur les exposants critiques représentant la loi d'échelle des objets géométriques suivants : l'interface, le périmètre externe et les liens rouges. L'algorithme Metropolis-Hastings employé, pour lequel nous avons introduit de nombreuses améliorations spécifiques aux modèles dilués, est détaillé. Un traitement statistique rigoureux des données permet des extrapolations coïncidant avec les prédictions théoriques à trois ou quatre chiffres significatifs, malgré des courbes d'extrapolation aux pentes abruptes.
Le deuxième article porte sur la décomposition de l'espace de Hilbert \otimes^nC^2 sur lequel la chaîne XXZ de n spins 1/2 agit. La version étudiée ici (Pasquier et Saleur (1990)) est décrite par un hamiltonien H_{XXZ}(q) dépendant d'un paramètre q\in C^\times et s'exprimant comme une somme d'éléments de l'algèbre de Temperley-Lieb TL_n(q). Comme pour les modèles dilués, le spectre de la limite continue de H_{XXZ}(q) semble relié aux théories des champs conformes, le paramètre q déterminant la charge centrale. Les idempotents primitifs de End_{TL_n}\otimes^nC^2 sont obtenus, pour tout q, en termes d'éléments de l'algèbre quantique U_qsl_2 (ou d'une extension) par la dualité de Schur-Weyl quantique. Ces idempotents permettent de construire explicitement les TL_n-modules indécomposables de \otimes^nC^2. Ceux-ci sont tous irréductibles, sauf si q est une racine de l'unité. Cette exception est traitée séparément du cas où q est générique.
Les problèmes résolus par ces articles nécessitent une grande variété de résultats et d'outils. Pour cette raison, la thèse comporte plusieurs chapitres préparatoires. Sa structure est la suivante. Le premier chapitre introduit certains concepts communs aux deux articles, notamment une description des phénomènes critiques et de la théorie des champs conformes. Le deuxième chapitre aborde brièvement la question des champs logarithmiques, l'évolution de Schramm-Loewner ainsi que l'algorithme de Metropolis-Hastings. Ces sujets sont nécessaires à la lecture de l'article "Geometric Exponents of Dilute Loop Models" au chapitre 3. Le quatrième chapitre présente les outils algébriques utilisés dans le deuxième article, "The idempotents of the TL_n-module \otimes^nC^2 in terms of elements of U_qsl_2", constituant le chapitre 5. La thèse conclut par un résumé des résultats importants et la proposition d'avenues de recherche qui en découlent. / This thesis is concerned with the study of critical phenomena for two-dimensional models on the lattice. Its results are contained in two articles: A first one, devoted to measuring geometric exponents, and a second one to the construction of idempotents for the XXZ spin chain projecting on indecomposable modules of the Temperley-Lieb algebra.
The first article presents Monte Carlo experiments for a family of loop models in their dilute phase. Coined "dilute loop models (DLM)", this family is based upon an O(n) model introduced by Nienhuis (1990). It is defined by two coprime integers p and p' and an anisotropy parameter. In the continuum limit, DLM(p,p') is expected to yield a logarithmic conformal field theory of central charge c(κ) = 13 - 6(κ + 1/κ), where the ratio κ = p/p' is related to the loop-gas fugacity β = -2cos(π/κ). Critical exponents pertaining to geometrical objects of interest, namely the hull, external perimeter and red bonds, were measured. The Metropolis-Hastings algorithm, as well as several methods improving its efficiency, is presented. Despite extrapolation curves with steep slopes, values agreeing with the theoretical predictions to three or four significant digits were attained through rigorous statistical analysis.
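For readers unfamiliar with the sampling method, the following Python sketch shows a bare-bones Metropolis-Hastings sweep on the two-dimensional Ising model, used here only as a stand-in: for the dilute loop models above, the single-spin flip would be replaced by a local plaquette update and the Boltzmann ratio by the ratio of loop-gas weights (including the fugacity β per closed loop), and none of the efficiency improvements mentioned in the abstract are included.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep (L*L attempted single-spin flips) on an L x L
    periodic Ising lattice; acceptance probability min(1, exp(-beta*dE))."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # Energy change of flipping spin (i, j), periodic boundary conditions
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(0)
lattice = rng.choice([-1, 1], size=(32, 32))
for _ in range(200):                     # equilibration sweeps
    lattice = metropolis_sweep(lattice, beta=0.44, rng=rng)
print("magnetisation per spin:", lattice.mean())
```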
The second article describes the decomposition of the XXZ spin chain Hilbert space \otimes^nC^2 using idempotents. The model of interest (Pasquier & Saleur (1990)) is described by a parameter-dependent Hamiltonian H_{XXZ}(q), q\in C^\times, expressible as a sum of elements of the Temperley-Lieb algebra TL_n(q). The spectrum of H_{XXZ}(q) in the continuum limit is also believed to be related to conformal field theories whose central charge is set by q. Using the quantum Schur-Weyl duality, an expression for the primitive idempotents of End_{TL_n}\otimes^nC^2, involving U_qsl_2 elements, is obtained. These idempotents allow for the explicit construction of the indecomposable TL_n-modules of \otimes^nC^2, all of which are irreducible except when q is a root of unity. This case, and the case where q is generic, are treated separately.
Since a wide variety of results and tools are required to tackle the problems stated above, this thesis contains several introductory chapters. Its layout is as follows. The first chapter introduces theoretical concepts common to both articles, in particular an overview of critical phenomena and conformal field theory. Before proceeding to the article "Geometric Exponents of Dilute Loop Models", which constitutes Chapter 3, the second chapter deals briefly with logarithmic conformal fields, Schramm-Loewner evolution and the Metropolis-Hastings algorithm. The fourth chapter defines some algebraic concepts used in the second article, "The idempotents of the TL_n-module \otimes^nC^2 in terms of elements of U_qsl_2", which constitutes Chapter 5. A summary of the main results, as well as paths to unexplored questions, is given in a final chapter.
|