241 |
Hexabromcyclododecan in Biota. Köppen, Robert, 28 July 2008 (has links)
Ziel dieser Arbeit war es, ein enantiomerenspezifisches Analysenverfahren für die Bestimmung von Hexabromcyclododecan (HBCD) in Biota-Proben zu entwickeln und die bei erhöhten Temperaturen auftretende Isomerisierung der HBCD-Stereoisomere zu untersuchen. Als erstes wurden die sechs HBCD-Enantiomere isoliert, mittels Einkristallstrukturanalyse, NMR- und IR-Spektroskopie charakterisiert und erstmals die spezifischen Drehwinkel der reinen Enantiomere mit den absoluten Konfigurationen und der Elutionsreihenfolge auf einer chiralen beta-PM-Cyclodextrin-Phase korreliert. Die Untersuchungen der HBCD-Enantiomere in Biota-Proben wurden mit einem HPLC-Tandem-MS-System unter Verwendung einer Kombination aus einer C18- und einer chiralen beta-PM-Cyclodextrin-Phase durchgeführt. Das entwickelte Analysenverfahren wurde validiert und ein Messunsicherheitsbudget erstellt. Die mittlere Wiederfindung für die internen Standards lag im Bereich von 96 - 104 % und die Nachweisgrenzen lagen zwischen 6 und 21 pg/g. Mit Hilfe dieses Analysenverfahrens wurden sowohl marine als auch Süßwasser-Biota-Proben von verschiedenen Probenahmepunkten in Europa untersucht. Die ermittelten Enantiomeren-Verhältnisse, die in allen Fällen vom (±)-alpha-HBCD dominiert wurden, zeigten signifikante Abweichungen von den racemischen Zusammensetzungen. Auffällig hierbei war, dass eine bevorzugte Anreicherung der zuerst eluierenden HBCD-Enantiomere ((-)-alpha-, (-)-beta- und (+)-gamma-HBCD) stattfand. Im Ergebnis der Untersuchungen zur thermisch induzierten intramolekularen Isomerisierung der HBCD-Stereoisomere konnten die verschiedenen Isomerisierungsreaktionen eindeutig aufgeklärt und die jeweiligen Geschwindigkeitskonstanten bei einer Temperatur von 160 °C ermittelt werden. Ergänzend wurde die Isomerisierung mit Hilfe der statistischen Thermodynamik unter Verwendung eines neuen Ansatzes für die klassische Hybrid-Monte-Carlo-Simulation untersucht. / The major objectives of this thesis were the development of an analytical procedure for the enantio-specific determination of hexabromocyclododecane (HBCD) in biota samples and the investigation of the interconversion of the individual HBCD isomers at elevated temperatures. The six HBCD enantiomers were isolated, characterised by X-ray diffractometry, NMR and IR spectroscopy, and the sense of rotation was correlated for the first time with the absolute configurations of the HBCD enantiomers as well as their order of elution on a chiral beta-PM-cyclodextrin phase. Trace quantification of the individual HBCD enantiomers was achieved by means of high performance liquid chromatography coupled to tandem mass spectrometry, equipped with a combination of a C18 and a chiral analytical column. Validation data and an uncertainty budget were determined. The mean recoveries of the different enantiomeric internal standards ranged from 96 to 104 % and the limits of detection were in the range of 6 to 21 pg/g. The analytical procedure was successfully applied to marine and freshwater biota samples from different European sites. The enantiomeric pattern of the six HBCD enantiomers, with (±)-alpha-HBCD as the dominant diastereomer, was determined for all biota samples and showed in most cases a significant deviation from the technical racemate. In these cases, a preferential enrichment of the first eluted enantiomers ((-)-alpha-, (-)-beta- and (+)-gamma-HBCD) could be observed.
The individual isomerisation reactions were unambiguously elucidated and all respective rate constants for the interconversion of the HBCD stereoisomers at 160 °C were quantified. A mechanistic explanation was given for the differences between the rate constants that govern the composition of the HBCD diastereomers at equilibrium. Additionally, the interconversion was investigated by means of statistical thermodynamics using a new approach to classical hybrid Monte Carlo simulations.
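As a companion to the kinetics described above, the following is a minimal sketch of how a set of first-order interconversion rate constants translates into a diastereomer composition over time. The three-species scheme and all rate constants are illustrative placeholders, not the values determined in this work.

```python
import numpy as np

# Toy first-order interconversion scheme alpha <-> beta <-> gamma at elevated temperature.
# All rate constants (1/h) are illustrative placeholders, not values from the thesis.
k_ab, k_ba = 0.3, 0.8   # alpha -> beta, beta -> alpha
k_bg, k_gb = 0.2, 0.5   # beta -> gamma, gamma -> beta

# Rate matrix M so that dc/dt = M @ c, with c = [alpha, beta, gamma].
M = np.array([
    [-k_ab,           k_ba,        0.0 ],
    [ k_ab, -(k_ba + k_bg),        k_gb],
    [ 0.0,            k_bg,       -k_gb],
])

c = np.array([0.0, 0.0, 1.0])   # start from pure gamma-HBCD (technical HBCD is gamma-rich)
dt, t_end = 1e-3, 40.0          # hours
for _ in range(int(t_end / dt)):
    c = c + dt * (M @ c)        # explicit Euler integration step

fractions = dict(zip(("alpha", "beta", "gamma"), np.round(c / c.sum(), 3)))
print(f"diastereomer composition after {t_end:.0f} h:", fractions)
```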
|
242 |
Processos de polimerização e transição de colapso em polímeros ramificados. / Polymerization processes and collapse transition of branched polymers. Neves, Ubiraci Pereira da Costa, 13 March 1997 (has links)
Estudamos o diagrama de fases e o ponto tricrítico da transição de colapso em um modelo de animais na rede quadrada, a partir da expansão em série da compressibilidade isotérmica KT do sistema. Como função das variáveis x (fugacidade) e y = e^{1/T} (T é a temperatura reduzida), a série KT é analisada utilizando-se a técnica dos aproximantes diferenciais parciais. Determinamos o padrão de fluxo das trajetórias características de um típico aproximante diferencial parcial com ponto fixo estável. Obtemos estimativas satisfatórias para a fugacidade tricrítica Xt = 0.024 ± 0.005 e a temperatura tricrítica Tt = 0.54 ± 0.04. Considerando somente campos de escala lineares, obtemos também o expoente de escala γ = 1.4 ± 0.2 e o expoente "crossover" Φ = 0.66 ± 0.08. Nossos resultados estão em boa concordância com estimativas prévias obtidas por outros métodos. Também estudamos um processo de polimerização ramificada através de simulações computacionais na rede quadrada baseadas em um modelo de crescimento cinético generalizado para se incorporar ramificações e impurezas. A configuração do polímero é identificada com uma árvore-ligação ("bond tree") a fim de se examinar os aspectos topológicos. As dimensões fractais dos aglomerados ("clusters") são obtidas na criticalidade. As simulações também permitem o estudo da evolução temporal dos aglomerados bem como a determinação das auto-correlações temporais e expoentes críticos dinâmicos. Com relação aos efeitos de tamanho finito, uma técnica de cumulantes de quarta ordem é empregada para se estimar a probabilidade de ramificação crítica bc e os expoentes críticos ν e β. Na ausência de impurezas, a rugosidade da superfície é descrita em termos dos expoentes de Hurst. Finalmente, simulamos este modelo de crescimento cinético na rede quadrada utilizando um método de Monte Carlo para estudar a polimerização ramificada com interações atrativas de curto alcance entre os monômeros. O diagrama de fases que separa os regimes de crescimento finito e infinito é obtido no plano (T,b) (T é a temperatura reduzida e b é a probabilidade de ramificação). No limite termodinâmico, extrapolamos a temperatura T* = 0.102 ± 0.005 abaixo da qual a fase é sempre infinita. Observamos também a ocorrência de uma transição de rugosidade na superfície do polímero. / The phase diagram and the tricritical point of a collapsing lattice animal are studied through an extended series expansion of the isothermal compressibility KT on a square lattice. As a function of the variables x (fugacity) and y = e^{1/T} (T is the reduced temperature), this series KT is investigated using the partial differential approximants technique. The characteristic flow pattern of partial differential approximant trajectories is determined for a typical stable fixed point. We obtain satisfactory estimates for the tricritical fugacity Xt = 0.024 ± 0.005 and temperature Tt = 0.54 ± 0.04. Taking into account only linear scaling fields, we are also able to get the scaling exponent γ = 1.4 ± 0.2 and the crossover exponent Φ = 0.66 ± 0.08. Our results are in good agreement with previous estimates from other methods. We also study ramified polymerization through computational simulations on the square lattice of a kinetic growth model generalized to incorporate branching and impurities. The polymer configuration is identified with a bond tree in order to examine its topology. The fractal dimensions of clusters are obtained at criticality.
Simulations also allow the study of the time evolution of clusters as well as the determination of time autocorrelations and dynamical critical exponents. In regard to finite-size effects, a fourth-order cumulant technique is employed to estimate the critical branching probability bc and the critical exponents ν and β. In the absence of impurities, the surface roughness is described in terms of the Hurst exponents. Finally, we simulate this kinetic growth model on the square lattice using a Monte Carlo approach in order to study ramified polymerization with short-range attractive interactions between monomers. The phase boundary separating finite from infinite growth regimes is obtained in the (T,b) space (T is the reduced temperature and b is the branching probability). In the thermodynamic limit, we extrapolate the temperature T* = 0.102 ± 0.005 below which the phase is found to be always infinite. We also observe the occurrence of a roughening transition at the polymer surface.
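The fourth-order cumulant technique mentioned above is, in generic form, a Binder cumulant analysis. The sketch below shows the quantity being computed, on synthetic samples of a hypothetical order parameter rather than on data from this thesis; in practice, curves of U4 versus branching probability for several lattice sizes cross near the critical point, which is how bc is located.

```python
import numpy as np

def binder_cumulant(m):
    """Fourth-order (Binder) cumulant U4 = 1 - <m^4> / (3 <m^2>^2) of an order-parameter sample."""
    m = np.asarray(m, dtype=float)
    return 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2) ** 2)

rng = np.random.default_rng(0)

# Gaussian fluctuations around zero (disordered phase): U4 -> 0.
print(binder_cumulant(rng.normal(0.0, 1.0, 100_000)))

# Sharply ordered phase (order parameter concentrated near +/- m0): U4 -> 2/3.
print(binder_cumulant(rng.choice([-1.0, 1.0], 100_000) + 0.01 * rng.normal(size=100_000)))

# In a finite-size analysis, U4(b, L) is computed from the growth simulation for
# several lattice sizes L; the crossing point of the curves estimates b_c.
```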
|
243 |
Radiolyse de l’eau dans des conditions extrêmes de température et de TEL. Capture de HO• par les ions Br- / Water radiolysis in extreme conditions of temperature and LET. Scavenging of HO• by Br- ions. Saffré, Dimitri, 14 November 2011 (has links)
L’objectif de cette thèse est de contribuer à la compréhension du mécanisme d’oxydation de Br- dans lequel le radical HO• intervient. Le rendement du radical HO• étant alors intimement lié au rendement d’oxydation de Br-, c’est sur lui que l'influence de différents paramètres physicochimiques a été étudiée : température, TEL, débit de dose, pH, nature du gaz saturant. Les solutions ont été irradiées avec 4 types de rayonnement : rayons X de 13 à 18 keV, électrons de 7 et 10 MeV, faisceaux d’ions C6+ de 975 MeV et He2+ de 70 MeV. Le développement d’un autoclave optique avec circulation de solution compatible avec le rayonnement de TEL élevé a permis de réaliser les premières expériences à TEL élevé constant et à température élevée. Cette cellule s’est avérée être aussi compatible avec les expériences pompe-sonde picoseconde réalisées avec l’accélérateur ELYSE. Le rendement de capture du radical hydroxyle a donc été estimé à TEL élevé mais aussi à haute température. Une meilleure compréhension du mécanisme d’oxydation de Br- en est issue, notamment en milieu acide et en comparant les résultats cinétiques avec les simulations Monte Carlo pour les temps inférieurs à la µs, et Chemsimul pour les produits stables (formation de Br2•- et de Br3-). / The purpose of this thesis is to contribute to the understanding of the oxidation mechanism of Br-, in which the HO• radical is involved. The HO• radiolytic yield is strongly connected with the oxidation yield of Br-, and therefore we have studied the influence of different physical and chemical parameters on this global yield: temperature, LET, dose rate, pH and saturation gas. The solutions were irradiated with four types of ionizing radiation: X-rays (13 to 18 keV), electrons (7 and 10 MeV), a 975 MeV C6+ ion beam and a 70 MeV He2+ ion beam. The development of an optical autoclave with solution flow, compatible with high-LET radiation, has allowed us to conduct the first experiments at constant high LET and high temperature. This cell has also turned out to be compatible with the picosecond pump-probe experiments performed with the ELYSE accelerator. The HO• scavenging yield has therefore been estimated at both high LET and high temperature. A better understanding of the Br- oxidation mechanism has been achieved, in particular in acidic medium, by comparing the kinetics results with Monte Carlo simulations for time scales shorter than a microsecond and with Chemsimul for the stable products (formation of Br2•- and Br3-).
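To give an order-of-magnitude feel for the scavenging experiments described above, the sketch below computes the pseudo-first-order scavenging capacity k[Br-] and the corresponding mean capture time of HO•. The rate constant is an assumed literature-order value, not a result from this thesis; raising the scavenging capacity is what gives access to sub-microsecond (spur) chemistry that can be compared with Monte Carlo simulations.

```python
import numpy as np

# Assumed second-order rate constant for HO* + Br- (literature values are of the
# order of 1e10 M^-1 s^-1 near room temperature); treated here as a placeholder.
k_oh_br = 1.1e10                                  # M^-1 s^-1
br_conc = np.array([1e-3, 1e-2, 1e-1, 1.0])       # mol/L

scavenging_capacity = k_oh_br * br_conc           # pseudo-first-order rate, s^-1
capture_time = 1.0 / scavenging_capacity          # mean time before HO* is scavenged

for c, s, t in zip(br_conc, scavenging_capacity, capture_time):
    print(f"[Br-] = {c:7.3f} M  ->  k[Br-] = {s:.1e} s^-1,  ~{t:.1e} s to capture")
```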
|
244 |
Dynamic factor model with non-linearities : application to the business cycle analysis / Modèles à facteurs dynamiques avec non linéarités : application à l'analyse du cycle économique. Petronevich, Anna, 26 October 2017 (has links)
Cette thèse est dédiée à une classe particulière de modèles à facteurs dynamiques non linéaires, les modèles à facteurs dynamiques à changement de régime markovien (MS-DFM). Par la combinaison des caractéristiques du modèle à facteur dynamique et de celui du modèle à changement de régimes markoviens (i.e. la capacité d’agréger des quantités massives d’information et de suivre des processus fluctuants), ce cadre s’est révélé très utile et convenable pour plusieurs applications, dont la plus importante est l’analyse des cycles économiques. La connaissance de l’état actuel des cycles économiques est cruciale afin de surveiller la santé économique et d’évaluer les résultats des politiques économiques. Néanmoins, ce n’est pas une tâche facile à réaliser car, d’une part, il n’y a pas d’ensemble de données et de méthodes communément reconnus pour identifier les points de retournement et, d’autre part, les institutions officielles annoncent un nouveau point de retournement, dans les pays où une telle pratique existe, avec un délai structurel de plusieurs mois. Le MS-DFM est en mesure de résoudre ces problèmes en fournissant des estimations de l’état actuel de l’économie de manière rapide, transparente et reproductible sur la base de la composante commune des indicateurs macroéconomiques caractérisant le secteur réel. Cette thèse contribue à la vaste littérature sur l’identification des points de retournement du cycle économique dans trois directions. Dans le Chapitre 3, on compare les deux techniques d’estimation de MS-DFM, les méthodes en une étape et en deux étapes, et on les applique aux données françaises pour obtenir la chronologie des points de retournement du cycle économique. Dans le Chapitre 4, sur la base des simulations de Monte Carlo, on étudie la convergence des estimateurs de la technique retenue (la méthode d’estimation en deux étapes) et on analyse leur comportement en échantillon fini. Dans le Chapitre 5, on propose une extension de MS-DFM, le MS-DFM à l’influence dynamique (DI-MS-DFM), qui permet d’évaluer la contribution du secteur financier à la dynamique du cycle économique et vice versa, tout en tenant compte du fait que l’interaction entre eux puisse être dynamique. / This thesis is dedicated to the study of a particular class of non-linear Dynamic Factor Models, the Dynamic Factor Models with Markov Switching (MS-DFM). Combining the features of the Dynamic Factor model and the Markov Switching model, i.e. the ability to aggregate massive amounts of information and to track recurring processes, this framework has proved to be a very useful and convenient instrument in many applications, the most important of them being the analysis of business cycles. In order to monitor the health of an economy and to evaluate policy results, the knowledge of the current state of the business cycle is essential. However, it is not easy to determine, since there is no commonly accepted dataset and method to identify turning points, and the official institutions announce a new turning point, in countries where such a practice exists, with a structural delay of several months. The MS-DFM is able to resolve these issues by providing estimates of the current state of the economy in a timely, transparent and replicable manner on the basis of the common component of macroeconomic indicators characterizing the real sector. The thesis contributes to the vast literature in this area in three directions.
In Chapter 3, I compare the two popular estimation techniques of the MS-DFM, the one-step and the two-step methods, and apply them to the French data to obtain the business cycle turning point chronology. In Chapter 4, on the basis of Monte Carlo simulations, I study the consistency of the estimators of the preferred technique (the two-step estimation method) and analyze their behavior in small samples. In Chapter 5, I extend the MS-DFM and suggest the Dynamical Influence MS-DFM (DI-MS-DFM), which makes it possible to evaluate the contribution of the financial sector to the dynamics of the business cycle and vice versa, taking into consideration that the interaction between them can be dynamic.
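For illustration only, the sketch below implements the regime-filtering step that underlies an MS-DFM once a common factor has been extracted: a two-state Hamilton filter applied to a synthetic factor. All parameter values are made up, and the thesis's one-step and two-step estimation procedures are considerably richer than this toy.

```python
import numpy as np

def norm_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def hamilton_filter(y, mu, sigma, P):
    """Filtered regime probabilities for y_t = mu[s_t] + e_t, e_t ~ N(0, sigma^2),
    where s_t follows a 2-state Markov chain with transition matrix P (rows sum to 1)."""
    xi = np.full(2, 0.5)                  # initial regime probabilities
    out = np.zeros((len(y), 2))
    for t, yt in enumerate(y):
        pred = P.T @ xi                   # one-step-ahead regime probabilities
        joint = pred * norm_pdf(yt, mu, sigma)
        xi = joint / joint.sum()          # Bayes update with the regime likelihoods
        out[t] = xi
    return out

# Synthetic "common factor": expansion mean 0.5, recession mean -1.0 (made-up values).
rng = np.random.default_rng(1)
states = np.r_[np.zeros(80, int), np.ones(20, int), np.zeros(50, int)]
y = np.where(states == 0, 0.5, -1.0) + 0.4 * rng.normal(size=states.size)

P = np.array([[0.95, 0.05],    # P[i, j] = Pr(s_t = j | s_{t-1} = i)
              [0.10, 0.90]])
probs = hamilton_filter(y, mu=np.array([0.5, -1.0]), sigma=0.4, P=P)
print("filtered P(recession) inside the simulated downturn:", probs[85:95, 1].round(2))
```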
|
245 |
Monte Carlo simulations and a theoretical study of the damage induced by ionizing particles at the macroscopic scale as well as the molecular scale / Simulations Monte Carlo et étude théorique des dommages induits par les particules ionisantes à l’échelle macroscopique ainsi qu’à l’échelle moléculaire. Mouawad, Lena, 16 December 2017 (has links)
Le travail présenté dans cette thèse se place dans le contexte de la simulation de dommages biologiques. D'abord une étude macroscopique met en question la pertinence des plans de traitement basés sur la dose absorbée et le passage à une étude de micro-dosimétrie permet l'utilisation de paramètres biologiques plus pertinents, tels que les cassures de brins d'ADN. La validité des sections efficaces d'interaction sur lesquelles se basent ces simulations est discutée plus en détail. En raison de la complexité du milieu biologique, les sections efficaces d'interaction avec l'eau sont souvent utilisées. Nous développons un algorithme qui permet de fournir les sections efficaces d'ionisation pour n'importe quelle cible moléculaire, en utilisant des outils qui permettent de surmonter les difficultés de calcul, ce qui rend notre programme particulièrement intéressant pour les molécules complexes. Nous fournissons des résultats pour l'eau, l'ammoniac, l'acide formique et le tétrahydrofurane. / The work presented in this thesis can be placed in the context of biological damage simulation. We begin with a macroscopic study where we question the relevance of absorbed-dose-based treatment planning. Then we move on to a micro-dosimetry study where we suggest the use of more biologically relevant probes for damage, such as DNA strand breaks. More focus is given to the fundamental considerations on which the simulations are based, particularly the interaction cross sections. Due to the complexity of the biological medium, the interaction cross sections with water are often used to simulate the behavior of particles. We develop a parallel, user-friendly algorithm that can provide the ionization cross sections for any molecular target, making use of particular tools that allow us to overcome the computational difficulties, which makes our program particularly interesting for complex molecules. We provide preliminary results for water, ammonia, formic acid and tetrahydrofuran.
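As a purely illustrative companion to the abstract above, the snippet below evaluates one widely used analytical model for electron-impact ionization cross sections per molecular orbital, the binary-encounter-Bethe (BEB) model. It is not the algorithm developed in the thesis, and the water-orbital parameters shown are indicative values only; it merely shows how per-orbital binding and kinetic energies enter such a calculation.

```python
import numpy as np

A0 = 5.29177e-9        # Bohr radius in cm
RYD = 13.6057          # Rydberg energy in eV

def beb_cross_section(T, B, U, N):
    """Binary-encounter-Bethe (BEB) electron-impact ionization cross section for one
    molecular orbital, in cm^2. T: incident electron energy (eV), B: orbital binding
    energy (eV), U: orbital kinetic energy (eV), N: orbital occupation number."""
    t, u = T / B, U / B
    if t <= 1.0:
        return 0.0
    S = 4.0 * np.pi * A0 ** 2 * N * (RYD / B) ** 2
    return (S / (t + u + 1.0)) * (
        0.5 * np.log(t) * (1.0 - 1.0 / t ** 2) + 1.0 - 1.0 / t - np.log(t) / (t + 1.0)
    )

# Indicative parameters for the outermost (1b1) orbital of water (B ~ 12.6 eV,
# U ~ 61 eV, N = 2); these numbers are illustrative, not taken from the thesis.
for T in (20.0, 50.0, 100.0, 500.0):
    print(f"{T:5.0f} eV -> {beb_cross_section(T, B=12.6, U=61.0, N=2):.2e} cm^2")
```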
|
246 |
Theory and molecular simulations of functional liquid crystalline dendrimers (LCDrs) / Θεωρία και υπολογιστικές προσομοιώσεις λειτουργικών δενδρόμορφων πολυμερών. Workineh, Zerihun, 07 May 2015 (has links)
Dendrimers are a class of monodisperse polymeric macromolecules with a well-defined and highly branched three-dimensional architecture. Their well-defined structure and structural precision make them outstanding candidates for the development of new types of multifunctional super-molecules and materials with applications in medicine and pharmacy, catalysis, electronics, optoelectronics, etc. Liquid Crystalline Dendrimers (LCDrs) are a relatively new class of super-molecules based on the functionalization of common dendrimers with mesogenic (liquid crystalline) units. The combination of the fascinating molecular properties of the common dendrimers with the directionality of the mesogenic units has produced a novel class of liquid-crystal-forming super-mesogens (LCDrs) with unique molecular properties that allow novel ways of supramolecular self-assembly and self-organisation.
This work is mainly concerned with the computational modelling of LCDrs. A coarse-grained strategy is adopted for the development of computationally tractable models which explicitly take into account the specific architecture, the extended flexibility and the shape anisotropy of the mesogenic units of LCDrs. The developed force field applies easily to a variety of dendritic architectures. Utilizing Monte Carlo computer simulations, we study the structural and conformational behavior of single LCDrs and of systems of LCDrs either in confined geometries or in the bulk. Special emphasis is given to the modelling of the response of LCDrs to externally applied alignment fields. External fields might be fictitious aligning potentials which mimic electric or magnetic fields, or fields induced by the confining substrates.
The surface alignment of liquid crystalline dendrimers (LCDrs) is a key factor for many of their potential applications. We present results from Monte Carlo simulations of LCDrs adsorbed on flat, impenetrable aligning substrates. A tractable coarse-grained force field for the inter-dendritic and the dendrimer-substrate interactions is introduced. The developed force field is based on modifications of well-known interaction potentials that can be used either with MC or with molecular dynamics simulations. We investigate the conformational and ordering properties of single, end-functionalized LCDrs under homeotropic, random (or degenerate) planar and unidirectional planar aligning substrates. Depending on the anchoring constraints on the mesogenic units of the LCDr and on temperature, a variety of stable ordered LCDr states, differing in their topology, are observed and analyzed. The influence of the dendritic generation and core functionality on the surface-induced ordering of the LCDrs is examined.
The study has been extended to systems of LCDrs confined in nano-pores of different shapes and sizes under several anchoring conditions. Two basic confining geometries (pores) are considered in this work: slit and cylindrical pores. In each confining geometry, different anchoring conditions are imposed. Isobaric-isothermal (NPT) Monte Carlo simulation is used to investigate the thermodynamic and structural properties of these nano-confined systems. The transmission of orientational and positional ordering from the surface to the middle region of the pore depends on the size of the pore as well as on temperature and on anchoring strength. In the case of the cylindrical pore, alignment propagation is short-ranged compared to that of the slit pore.
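To make the simulation idea concrete, here is a toy Metropolis Monte Carlo sketch for a set of mesogen-like unit vectors in a mean-field ordering potential with a homeotropic anchoring term. It is only a schematic stand-in for the coarse-grained LCDr force field and the NPT simulations described above; every parameter value is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def nematic_order(u):
    """Nematic order parameter S: largest eigenvalue of the Q-tensor of unit vectors u (N, 3)."""
    Q = 1.5 * (u.T @ u) / len(u) - 0.5 * np.eye(3)
    return np.linalg.eigvalsh(Q)[-1]

# Toy parameters (all arbitrary): mean-field ordering strength, homeotropic
# anchoring strength along the substrate normal z, reduced temperature.
N, n_moves = 200, 20_000
eps, w_anchor, T = 1.0, 0.5, 0.5

u = rng.normal(size=(N, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)

def energy(u):
    # Mean-field Maier-Saupe-like ordering term plus coupling of each unit to the z axis.
    return -eps * N * nematic_order(u) ** 2 - w_anchor * np.sum(u[:, 2] ** 2)

E = energy(u)
for _ in range(n_moves):
    i = rng.integers(N)
    trial = u.copy()
    v = u[i] + 0.3 * rng.normal(size=3)               # small random reorientation of one unit
    trial[i] = v / np.linalg.norm(v)
    dE = energy(trial) - E
    if dE <= 0.0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance rule
        u, E = trial, E + dE

print("nematic order parameter S after equilibration:", round(nematic_order(u), 2))
```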
As a benchmark of our coarse-grained modelling strategy, we have extended and tested our coarse-grained force field for the study of Janus-like dendrimers confined on planar substrates. The obtained results indicate the capability of our model to successfully capture the highly amphiphilic nature of this class of dendrimers and their self-organisation properties. / Τα δενδριμερή είναι μία κατηγορία μονοδιάσπαρτων πολυμερικών μακρομορίων με δενδρόμορφη τρισδιάστατη αρχιτεκτονική. Η μοριακή αρχιτεκτονική τους και η μονοδιασπορά τους καθιστούν τα δενδριμερή ιδανικά ως πολυλειτουργικά υπερ-μόρια με εφαρμογές στην ιατρική και τη φαρμακολογία, την κατάλυση, την ηλεκτρονική και οπτοηλεκτρονική κλπ. Τα Υγρό-Κρυσταλλικά Δενδριμερή (ΥΚΔ) είναι μια σχετικά νέα κατηγορία υπερ-μορίων που βασίζονται στη χημική τροποποίηση των κοινών δενδριμερών με μεσογόνες (υγρόκρυσταλλικές) μοριακές μονάδες. Ο συνδυασμός των ιδιαίτερων μοριακών ιδιοτήτων των κοινών δενδριμερών με την κατευθυντικότητα των μεσογόνων έχει οδηγήσει σε μια νέα κατηγορία υπερ-μεσογόνων (ΥΚΔ) με μοναδικές μοριακές ιδιότητες που επιτρέπουν νέους τρόπους (υπερ)μοριακής αυτο-συναρμολόγησης και αυτο-οργάνωσης.
Η εργασία αυτή ασχολείται με τη μοντελοποίηση και την υπολογιστική προσομοίωση ΥΚΔ. Εισάγονται οι αρχές για μια αδροποιημένη μοντελοποίηση ΥΚΔ που να λαμβάνει ρητά υπόψη την ειδική αρχιτεκτονική, την εκτεταμένη μοριακή ευκαμψία και την ανισοτροπία σχήματος των μεσογόνων του ΥΚΔ. Τα δυναμικά αλληλεπιδράσεων που εισάγονται επιτρέπουν τη μοντελοποίηση ποικίλων δενδριτικών αρχιτεκτονικών. Με την χρήση υπολογιστικών προσομοιώσεων Monte Carlo μελετώνται οι μοριακές ιδιότητες απλών ΥΚΔ διαφόρων γενεών και αρχιτεκτονικών καθώς και η θερμοδυναμική συμπεριφορά και οι μετατροπές φάσεων συστημάτων ΥΚΔ. Ιδιαίτερη έμφαση δίνεται στην μοντελοποίηση της απόκρισης των LCDRS σε εξωτερικά πεδία που μπορούν να προκαλέσουν ευθυγράμμισης των μεσογόνων ομάδων του ΥΚΔ. Τα εξωτερικά εφαρμοζόμενα πεδία μπορεί να είναι δυναμικά ευθυγράμμισης που μιμούνται τα ηλεκτρικά ή μαγνητικά πεδία ή πεδία που επάγονται από τους γεωμετρικούς περιορισμούς (συνοριακές συνθήκες) που επιβάλλονται στο υλικό όταν βρίσκεται κοντά σε επιφάνειες ή περιορισμένο εντός πόρων.
Η δυνατότητα επίτευξης κοινού μοριακού προσανατολισμού στις μεσοφάσεις από ΥΚΔ αποτελεί βασικό παράγοντα για πολλές από τις πιθανές εφαρμογές τους. Παρουσιάζονται αποτελέσματα προσομοιώσεων Monte Carlo Μόντε ΥΚΔ σε επαφή με επίπεδο, αδιαπέραστο υπόστρωμα που έχει τη δυνατότητα προσρόφησης (αγκύρωσης) των μεσογόνων μονάδων του ΥΚΔ υπό επιθυμητό προσανατολισμό. Τα αποτελέσματα βασίζονται σε κατάλληλα αδροποιημένο πεδίο δυνάμεων για την περιγραφή των αλληλεπιδράσεων μεταξύ των δενδριμερών καθώς και του δενδριμερούς με το υπόστρωμα. Ανάλογα με τον τύπο μοριακής αγκύρωσης στην επιφάνεια και τη θερμοκρασία, μια ποικιλία από διαφορετικούς τρόπους οργάνωσης του ΥΚΔ στην επιφάνεια παρατηρούνται και αναλύονται.
Η μελέτη έχει επεκταθεί επίσης σε συστήματα ΥΚΔ περιορισμένα σε νανο-πόρους διαφόρων σχήματα και μεγεθών κάτω από διάφορες συνθήκες μοριακής αγκύρωσης. Οι δύο βασικές γεωμετρίες περιορισμού (πόροι) που μελετούνται αναφέρονται σε παραλληλεπίπεδους και κυλινδρικούς πόρους. Σε κάθε γεωμετρία επιβάλλονται διαφορετικές συνθήκες αγκύρωσης. Οι προσομοιώσεις Monte Carlo έγιναν στην ισοβαρή συλλογή (ΝΡΤ) και διερευνήθηκε η θερμοδυναμική συμπεριφορά καθώς και η μοριακή οργάνωση των συστημάτων υπό νανο-εγκλεισμό. Τα συστήματα αυτά παρουσιάζουν πλούσια θερμοδυναμική συμπεριφορά. Η μοριακή οργάνωση καθώς και η μετάδοση του προσανατολισμού και της τάξης θέσεων από την επιφάνεια προς την μεσαία περιοχή των πόρων εξαρτάται το σχήμα και τι μέγεθος του πόρου, από τη θερμοκρασία καθώς και από τις συνθήκες μοριακής αγκύρωσης στις επιφάνειες του πόρου.
Για έλεγχο της αποτελεσματικότητας της στρατηγικής αδροποιημένης μοντελοποίησης που αναπτύχθηκε μελετήθηκαν επίσης αμφίφυλα δενδριμερή τύπου Janus περιορισμένα σε επίπεδη επιφάνεια. Τα αποτελέσματα των προσομοιώσεων έδειξαν την ικανότητα του μοντέλου μας να αποτυπώσει με επιτυχία το την αμφίφυλη φύση αυτών των δενδρομερών και να περιγράψει με επιτυχία διαφορετικούς τύπους αυτοργάνωσης που σχετίζονται με την αμφιφυλικότητα αυτών των μορίων και τον συνεπαγόμενο νανο-φασικό διαχωρισμό τους.
|
247 |
Caractérisation de la composante toxicocinétique du facteur d’ajustement pour la variabilité interindividuelle utilisé en analyse du risque toxicologique. Valcke, Mathieu, 11 1900 (has links)
Un facteur d’incertitude de 10 est utilisé par défaut lors de l’élaboration des valeurs toxicologiques de référence en santé environnementale, afin de tenir compte de la variabilité interindividuelle dans la population. La composante toxicocinétique de cette variabilité correspond à racine de 10, soit 3,16. Sa validité a auparavant été étudiée sur la base de données pharmaceutiques colligées auprès de diverses populations (adultes, enfants, aînés). Ainsi, il est possible de comparer la valeur de 3,16 au Facteur d’ajustement pour la cinétique humaine (FACH), qui constitue le rapport entre un centile élevé (ex. : 95e) de la distribution de la dose interne dans des sous-groupes présumés sensibles et sa médiane chez l’adulte, ou encore à l’intérieur d’une population générale. Toutefois, les données expérimentales humaines sur les polluants environnementaux sont rares. De plus, ces substances ont généralement des propriétés sensiblement différentes de celles des médicaments. Il est donc difficile de valider, pour les polluants, les estimations faites à partir des données sur les médicaments. Pour résoudre ce problème, la modélisation toxicocinétique à base physiologique (TCBP) a été utilisée pour simuler la variabilité interindividuelle des doses internes lors de l’exposition aux polluants. Cependant, les études réalisées à ce jour n’ont que peu permis d’évaluer l’impact des conditions d’exposition (c.-à-d. voie, durée, intensité), des propriétés physico/biochimiques des polluants, et des caractéristiques de la population exposée sur la valeur du FACH et donc la validité de la valeur par défaut de 3,16. Les travaux de la présente thèse visent à combler ces lacunes.
À l’aide de simulations de Monte-Carlo, un modèle TCBP a d’abord été utilisé pour simuler la variabilité interindividuelle des doses internes (c.-à-d. chez les adultes, ainés, enfants, femmes enceintes) de contaminants de l’eau lors d’une exposition par voie orale, respiratoire, ou cutanée. Dans un deuxième temps, un tel modèle a été utilisé pour simuler cette variabilité lors de l’inhalation de contaminants à intensité et durée variables. Ensuite, un algorithme toxicocinétique à l’équilibre probabiliste a été utilisé pour estimer la variabilité interindividuelle des doses internes lors d’expositions chroniques à des contaminants hypothétiques aux propriétés physico/biochimiques variables. Ainsi, les propriétés de volatilité, de fraction métabolisée, de voie métabolique empruntée ainsi que de biodisponibilité orale ont fait l’objet d’analyses spécifiques. Finalement, l’impact du référent considéré et des caractéristiques démographiques sur la valeur du FACH lors de l’inhalation chronique a été évalué, en ayant recours également à un algorithme toxicocinétique à l’équilibre. Les distributions de doses internes générées dans les divers scénarios élaborés ont permis de calculer dans chaque cas le FACH selon l’approche décrite plus haut. Cette étude a mis en lumière les divers déterminants de la sensibilité toxicocinétique selon le sous-groupe et la mesure de dose interne considérée. Elle a permis de caractériser les déterminants du FACH et donc les cas où ce dernier dépasse la valeur par défaut de 3,16 (jusqu’à 28,3), observés presqu’uniquement chez les nouveau-nés et en fonction de la substance mère. Cette thèse contribue à améliorer les connaissances dans le domaine de l’analyse du risque toxicologique en caractérisant le FACH selon diverses considérations. / A default uncertainty factor of 10 is used in toxicological risk assessment to account for human variability, and the toxicokinetic component of this factor corresponds to a value of square root of 10, or 3,16. The adequacy of this value has been studied in the literature on the basis of pharmaceutical data obtained in various subpopulations (e.g. adults, children, elderly). Indeed, it is possible to compare the default value of 3,16 to the Human Kinetic Adjustment Factor (HKAF), computed as the ratio of an upper percentile value (e.g. 95th) of the distribution of internal dose metrics in presumed sensitive subpopulation to its median in adults, or alternatively an entire population. However, human experimental data on environmental contaminants are sparse. Besides, these chemicals generally exhibit characteristics that are quite different as compared to drugs. As a result, it is difficult to extrapolate, for pollutants, estimates of HKAF that were made using data on drugs. To solve this problem, physiologically-based toxicokinetic (PBTK) modeling has been used to simulate interindividual variability in internal dose metrics following exposure to xenobiotics. However, studies realized to date have not systematically evaluated the impact of the exposure conditions (route, duration and intensity), the physico/biochemical properties of the chemicals, and the characteristics of the exposed population, on the HKAF, and thus the adequacy of the default value. This thesis aims at compensating this lack of knowledge.
First, a probabilistic PBTK model was used to simulate, by means of Monte Carlo simulations, the interindividual variability in internal dose metrics (i.e. in adults, children, the elderly and pregnant women) following oral, inhalation or dermal exposure to drinking water contaminants, each route taken separately. Second, a similar model was used to simulate this variability following inhalation exposures of various durations and intensities to air contaminants. Then, a probabilistic steady-state algorithm was used to estimate interindividual variability in internal dose metrics for chronic exposures to hypothetical contaminants exhibiting different physico/biochemical properties. These include volatility, the fraction metabolized, the metabolic pathway by which they are biotransformed and oral bioavailability. Finally, the impact of a population’s demographic characteristics and the referent considered on the HKAF for chronic inhalation exposure was studied, also using a probabilistic steady-state algorithm. The distributions of internal dose metrics that were generated for every scenario simulated were used to compute the HKAF as described above. This study has pointed out the determinants of toxicokinetic sensitivity for a given subpopulation and dose metric. It identified the determinants of the numerical value of the HKAF, and thus the cases for which it exceeded the default value of 3.16. This happened almost exclusively in neonates and on the basis of the parent compound. Overall, this study has contributed to the field of toxicological risk assessment by characterizing the HKAF as a function of various considerations.
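A schematic illustration of the HKAF computation described above is sketched below: the ratio of an upper percentile of a sensitive subgroup's internal dose distribution to the adult median. The lognormal distributions are placeholders standing in for PBTK model output, not results from this thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder internal-dose distributions (e.g. an area under the blood
# concentration-time curve). In the thesis these come from probabilistic PBTK
# simulations; simple lognormals are used here purely for illustration.
adults   = rng.lognormal(mean=np.log(1.0), sigma=0.3, size=100_000)
neonates = rng.lognormal(mean=np.log(1.8), sigma=0.5, size=100_000)

def hkaf(sensitive, reference, upper_pct=95):
    """Human kinetic adjustment factor: an upper percentile of the sensitive
    subgroup's internal dose divided by the median of the reference group."""
    return np.percentile(sensitive, upper_pct) / np.median(reference)

print("HKAF (neonates, 95th percentile vs adult median):", round(hkaf(neonates, adults), 2))
print("default toxicokinetic sub-factor:", 3.16)
```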
|
248 |
Exposants géométriques des modèles de boucles dilués et idempotents des TL-modules de la chaîne de spins XXZ. Provencher, Guillaume, 12 1900 (has links)
Cette thèse porte sur les phénomènes critiques survenant dans les modèles bidimensionnels sur réseau. Les résultats sont l'objet de deux articles : le premier porte sur la mesure d'exposants critiques décrivant des objets géométriques du réseau et, le second, sur la construction d'idempotents projetant sur des modules indécomposables de l'algèbre de Temperley-Lieb pour la chaîne de spins XXZ.
Le premier article présente des expériences numériques Monte Carlo effectuées pour une famille de modèles de boucles en phase diluée. Baptisés "dilute loop models (DLM)", ceux-ci sont inspirés du modèle O(n) introduit par Nienhuis (1990). La famille est étiquetée par les entiers relativement premiers p et p' ainsi que par un paramètre d'anisotropie. Dans la limite thermodynamique, il est pressenti que le modèle DLM(p,p') soit décrit par une théorie logarithmique des champs conformes de charge centrale c(\kappa)=13-6(\kappa+1/\kappa), où \kappa=p/p' est lié à la fugacité du gaz de boucles \beta=-2\cos\pi/\kappa, pour toute valeur du paramètre d'anisotropie. Les mesures portent sur les exposants critiques représentant la loi d'échelle des objets géométriques suivants : l'interface, le périmètre externe et les liens rouges. L'algorithme Metropolis-Hastings employé, pour lequel nous avons introduit de nombreuses améliorations spécifiques aux modèles dilués, est détaillé. Un traitement statistique rigoureux des données permet des extrapolations coïncidant avec les prédictions théoriques à trois ou quatre chiffres significatifs, malgré des courbes d'extrapolation aux pentes abruptes.
Le deuxième article porte sur la décomposition de l'espace de Hilbert \otimes^nC^2 sur lequel la chaîne XXZ de n spins 1/2 agit. La version étudiée ici (Pasquier et Saleur (1990)) est décrite par un hamiltonien H_{XXZ}(q) dépendant d'un paramètre q\in C^\times et s'exprimant comme une somme d'éléments de l'algèbre de Temperley-Lieb TL_n(q). Comme pour les modèles dilués, le spectre de la limite continue de H_{XXZ}(q) semble relié aux théories des champs conformes, le paramètre q déterminant la charge centrale. Les idempotents primitifs de End_{TL_n}\otimes^nC^2 sont obtenus, pour tout q, en termes d'éléments de l'algèbre quantique U_qsl_2 (ou d'une extension) par la dualité de Schur-Weyl quantique. Ces idempotents permettent de construire explicitement les TL_n-modules indécomposables de \otimes^nC^2. Ceux-ci sont tous irréductibles, sauf si q est une racine de l'unité. Cette exception est traitée séparément du cas où q est générique.
Les problèmes résolus par ces articles nécessitent une grande variété de résultats et d'outils. Pour cette raison, la thèse comporte plusieurs chapitres préparatoires. Sa structure est la suivante. Le premier chapitre introduit certains concepts communs aux deux articles, notamment une description des phénomènes critiques et de la théorie des champs conformes. Le deuxième chapitre aborde brièvement la question des champs logarithmiques, l'évolution de Schramm-Loewner ainsi que l'algorithme de Metropolis-Hastings. Ces sujets sont nécessaires à la lecture de l'article "Geometric Exponents of Dilute Loop Models" au chapitre 3. Le quatrième chapitre présente les outils algébriques utilisés dans le deuxième article, "The idempotents of the TL_n-module \otimes^nC^2 in terms of elements of U_qsl_2", constituant le chapitre 5. La thèse conclut par un résumé des résultats importants et la proposition d'avenues de recherche qui en découlent. / This thesis is concerned with the study of critical phenomena for two-dimensional models on the lattice. Its results are contained in two articles: A first one, devoted to measuring geometric exponents, and a second one to the construction of idempotents for the XXZ spin chain projecting on indecomposable modules of the Temperley-Lieb algebra.
Monte Carlo experiments, for a family of loop models in their dilute phase, are presented in the first article. Coined "dilute loop models (DLM)", this family is based upon an O(n) model introduced by Nienhuis (1990). It is defined by two coprime integers p,p' and an anisotropy parameter. In the continuum limit, DLM(p,p') is expected to yield a logarithmic conformal field theory of central charge c(\kappa)=13-6(\kappa+1/\kappa), where the ratio \kappa=p/p' is related to the loop gas fugacity \beta=-2\cos\pi/\kappa. Critical exponents pertaining to valuable geometrical objects, namely the hull, external perimeter and red bonds, were measured. The Metropolis-Hastings algorithm, as well as several methods improving its efficiency, are presented. Despite the extrapolation of curves presenting large slopes, values as close as three to four digits from the theoretical predictions were attained through rigorous statistical analysis.
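As a generic illustration of how geometric exponents of this kind are extracted from simulation output, the sketch below fits a power law to synthetic cluster-observable data on a log-log scale. The numbers are invented and do not correspond to the hull, external perimeter or red-bond exponents measured in the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for Monte Carlo estimates of a geometric observable
# (e.g. mean hull length) at several lattice sizes L, with s(L) ~ L**x.
x_true = 1.375                              # invented exponent, for illustration only
L = np.array([64, 128, 256, 512, 1024], dtype=float)
s = L ** x_true * np.exp(0.01 * rng.normal(size=L.size))   # small multiplicative noise

# Least-squares fit of log s = x log L + const yields the scaling exponent.
slope, intercept = np.polyfit(np.log(L), np.log(s), 1)
print("estimated geometric exponent:", round(slope, 3))
```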
The second article describes the decomposition of the XXZ spin chain Hilbert space \otimes^nC^2 using idempotents. The model of interest (Pasquier & Saleur (1990)) is described by a parameter-dependent Hamiltonian H_{XXZ}(q), q\in C^\times, expressible as a sum of elements of the Temperley-Lieb algebra TL_n(q). The spectrum of H_{XXZ}(q) in the continuum limit is also believed to be related to conformal field theories whose central charge is set by q. Using the quantum Schur-Weyl duality, an expression for the primitive idempotents of End_{TL_n}\otimes^nC^2, involving U_qsl_2 elements, is obtained. These idempotents allow for the explicit construction of the indecomposable TL_n-modules of \otimes^nC^2, all of which are irreducible except when q is a root of unity. This case, and the case where q is generic, are treated separately.
Since a wide variety of results and tools are required to tackle the problems stated above, this thesis contains many introductory chapters. Its layout is as follows. The first chapter introduces theoretical concepts common to both articles, in particular an overview of critical phenomena and conformal field theory. Before proceeding to the article entitled \emph{Geometric Exponents of Dilute Loop Models} constituting Chapter 3, the second chapter deals briefly with logarithmic conformal fields, Schramm-Loewner evolution and the Metropolis-Hastings algorithm. The fourth chapter defines some algebraic concepts used in the second article, "The idempotents of the TL_n-module \otimes^nC^2 in terms of elements of U_qsl_2" of Chapter 5. A summary of the main results, as well as paths to unexplored questions, are suggested in a final chapter.
|
249 |
Estimating the parameters of polynomial phase signals. Farquharson, Maree Louise, January 2006 (has links)
Nonstationary signals are common in many environments such as radar, sonar, bioengineering and power systems. The nonstationary nature of the signals found in these environments means that classical spectral analysis techniques are not appropriate for estimating the parameters of these signals. Therefore it is important to develop techniques that can accommodate nonstationary signals. This thesis seeks to achieve this by, firstly, modelling each component of the signal as having a polynomial phase and, secondly, by developing techniques for estimating the parameters of these components. Several approaches can be used for estimating the parameters of polynomial phase signals, each with varying degrees of success. Criteria to consider in potential estimation algorithms are (i) the signal-to-noise ratio (SNR) threshold of the algorithm, (ii) the amount of computation required for running the algorithm, and (iii) the closeness of the resulting estimates' mean-square errors to the minimum theoretical bound. These criteria will be used to compare the new techniques developed in this thesis with existing techniques. The literature on polynomial phase signal estimation highlights the recurring trade-off between the accuracy of the estimates and the amount of computation required. For example, the Maximum Likelihood (ML) method provides near-optimal estimates above threshold, but also incurs a heavy computational cost for higher order phase signals. On the other hand, multi-linear techniques such as the high-order ambiguity function (HAF) method require little computation, but have a significantly higher SNR threshold than the ML method. Of the existing techniques, the cubic phase (CP) function method is a promising technique because it provides an attractive trade-off between SNR threshold and computational complexity. For this reason, the analysis techniques developed in this thesis will be derived from the CP function. A limitation of the CP function is its inability to accurately process phase orders greater than three. Therefore, the first novel contribution of this thesis develops a broadened class of discrete-time higher order phase (HP) functions to address this limitation. This broadened class is achieved by providing a multi-linear extension of the CP function. Monte Carlo simulations are performed to demonstrate the statistical advantage of the HP functions compared to the HAFs. A first-order statistical analysis of the HP functions is presented. This analysis verifies the simulation results. The next novel contribution is a technique called the lower SNR cubic phase function (LCPF) method. It is an extension of the CP function, with the extension enabling performance at lower signal-to-noise ratios (SNRs). The improvement in SNR threshold performance is achieved by coherently integrating the CP function over a compact interval in the two-dimensional CP function space. The computation of the new algorithm is quite moderate, especially when compared to the ML method. Above threshold, the LCPF method's parameter estimates are asymptotically efficient. Monte Carlo simulation results are presented and a threshold analysis of the algorithm closely predicts the thresholds observed in these results. The next original contribution of this research involves extending the LCPF method so that it is able to process multicomponent cubic phase signals and higher order phase signals.
The LCPF method is extended to higher orders by applying a windowing technique, as opposed to adjusting the order of the kernel as implemented in the HP function method. To demonstrate the extension of the LCPF method for processing higher order phase signals and multicomponent cubic phase signals, some Monte Carlo simulations are presented. Finally, these estimation techniques are applied to real-world scenarios in the fields of power systems analysis, neuroethology and speech analysis.
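For readers unfamiliar with the cubic phase function, the following sketch evaluates the standard discrete-time CP function on a synthetic cubic phase signal and locates its peak, which estimates the instantaneous frequency rate at the chosen centre time. The signal parameters are arbitrary and the grid search deliberately naive; this is not the LCPF or HP-function algorithm developed in the thesis.

```python
import numpy as np

def cp_function(z, n0, omegas):
    """Discrete cubic phase function CP(n0, omega) = sum_m z[n0+m] z[n0-m] exp(-j omega m^2),
    evaluated at centre time n0 over a grid of frequency-rate candidates omega."""
    m_max = min(n0, len(z) - 1 - n0)
    m = np.arange(m_max + 1)
    prod = z[n0 + m] * z[n0 - m]
    return np.array([np.sum(prod * np.exp(-1j * w * m ** 2)) for w in omegas])

# Toy cubic phase signal exp(j(a1 n + a2 n^2 + a3 n^3)); coefficients are arbitrary.
N = 257
n = np.arange(N)
a1, a2, a3 = 0.3, 1.0e-3, 2.0e-6
z = np.exp(1j * (a1 * n + a2 * n ** 2 + a3 * n ** 3))

n0 = N // 2
omegas = np.linspace(0.0, 0.02, 2001)
peak = omegas[np.argmax(np.abs(cp_function(z, n0, omegas)))]
print("CP-function estimate of the frequency rate:", round(peak, 5))
print("true phi''(n0) = 2*a2 + 6*a3*n0:", 2 * a2 + 6 * a3 * n0)
```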
|
250 |
Modelagem computacional de tomografia com feixe de prótons / Computational modeling of proton tomography. Yevseyeva, Olga, 16 February 2009 (has links)
Fundação Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro / Nessa tese foi feito um estudo preliminar, destinado à elaboração do programa experimental inicial para a primeira instalação da tomografia com prótons (pCT) brasileira por meio de modelagem computacional. A terapia com feixe de prótons é uma forma bastante precisa de tratamento de câncer. Atualmente, o planejamento de tratamento é baseado na tomografia computadorizada com raios X, alternativamente, a tomografia com prótons pode ser usada. Algumas questões importantes, como efeito de escala e a Curva de Calibração (fonte de dados iniciais para planejamento de terapia com prótons), foram estudados neste trabalho. A passagem
de prótons com energias iniciais de 19,68MeV; 23MeV; 25MeV; 49,10MeV e 230MeV pelas camadas de materiais variados (água, alumínio, polietileno, ouro) foi simulada usando códigos
Monte Carlo populares como SRIM e GEANT4. Os resultados das simulações foram comparados com a previsão teórica (baseada na solução aproximada da equação de transporte de Boltzmann)
e com resultados das simulações feitas com outro popular código Monte Carlo MCNPX. Análise comparativa dos resultados das simulações com dados experimentais publicados na
literatura científica para alvos grossos e na faixa de energias de prótons usada em medidas em pCT foi feita. Foi observado que apesar de que todos os códigos mostram os resultados parecidos
alguns deslocamentos não sistemáticos podem ser observados. Foram feitas observações importantes sobre a precisão dos códigos e uma necessidade em medidas sistemáticas de
frenagem de prótons em alvos grossos foi declarada. / In the present work, a preliminary study based on computer simulations was carried out in order to elaborate an initial experimental program for the first Brazilian pCT setup. Proton therapy is a highly precise form of cancer treatment. Treatment planning is nowadays performed based on X-ray computed tomography (CT) data; alternatively, the same procedure could be performed using proton computed tomography (pCT). Some important questions, such as the scale effect and the so-called Calibration Curve (the source of primary data for pCT treatment planning), were studied in this work. The passage of protons with initial energies of 19.68 MeV, 23 MeV, 25 MeV, 49.10 MeV and 230 MeV through various absorbers (water, aluminum, polyethylene, gold) was simulated with the popular Monte Carlo packages SRIM and GEANT4. The simulation results were compared with a theoretical prediction (based on an approximate solution of the Boltzmann transport equation) and with the results of simulations performed with another popular Monte Carlo code, MCNPX. A comparative analysis of the simulation results against experimental data published in the scientific literature for thick absorbers, within the energy range used in pCT measurements, was carried out. It was noted that, although all codes show similar results, some nonsystematic shifts can be observed. Important observations about the precision of the codes were made, and the need for systematic measurements of proton stopping power in thick absorbers was stated.
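To give a feel for the energy scales quoted above, the sketch below uses the Bragg-Kleeman range-energy rule for protons in water with commonly quoted approximate coefficients. It is an illustrative back-of-the-envelope estimate, not a result of the SRIM/GEANT4/MCNPX simulations discussed in the thesis.

```python
import numpy as np

# Bragg-Kleeman range-energy rule for protons in water, R = alpha * E**p.
# alpha and p are commonly quoted approximate fit values for water; they are used
# here only to illustrate the energy scales listed in the abstract.
ALPHA, P_EXP = 0.0022, 1.77          # cm * MeV**(-p), dimensionless

def approx_range_water(E_MeV):
    """Approximate CSDA range (cm) of a proton with kinetic energy E_MeV in water."""
    return ALPHA * np.power(E_MeV, P_EXP)

for E in (19.68, 23.0, 25.0, 49.10, 230.0):
    print(f"{E:6.2f} MeV  ->  roughly {approx_range_water(E):5.2f} cm of water")
```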
|