111

Modelling and simulation framework incorporating redundancy and failure probabilities for evaluation of a modular automated main distribution frame

Botha, Marthinus Ignatius January 2013 (has links)
Maintaining and operating manual main distribution frames is labour-intensive. As a result, Automated Main Distribution Frames (AMDFs) have been developed to alleviate the task of maintaining subscriber loops. Commercial AMDFs are currently employed in telephone exchanges in some parts of the world. However, the most significant factors limiting their widespread adoption are cost-effective scalability and reliability. This provides a compelling incentive to create a simulation framework in order to explore typical implementations and scenarios. Such a framework will allow the evaluation and optimisation of a design in terms of both internal and external redundancies. One of the approaches to improve system performance, such as system reliability, is to allocate the optimal redundancy to all or some components in a system. Redundancy at the system or component level can be implemented in one of two schemes: parallel redundancy or standby redundancy. It is also possible to mix these schemes for various components. Moreover, the redundant elements may or may not be of the same type. If all the redundant elements are of different types, the redundancy optimisation model is implemented with component mixing. Conversely, if all the redundant components are identical, the model is implemented without component mixing. The developed framework can be used both to develop new AMDF architectures and to evaluate existing AMDF architectures in terms of expected lifetimes, reliability and service availability. Two simulation models are presented. The first simulation model is concerned with optimising central office equipment within a telephone exchange and entails an environment of clients utilising services. Currently, such a model does not exist. The second model is a mathematical model incorporating stochastic simulation and a hybrid intelligent evolutionary algorithm to solve redundancy allocation problems. For the first model, the optimal partitioning of the model is determined so that the simulation runs efficiently. For the second model, the hybrid intelligent algorithm is used to solve the redundancy allocation problem under various constraints. Finally, a candidate concept design of an AMDF is presented and evaluated with both simulation models. / Dissertation (MEng)--University of Pretoria, 2013. / gm2014 / Electrical, Electronic and Computer Engineering / unrestricted
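As a hedged illustration of the parallel-versus-standby distinction drawn in this abstract (and not of the dissertation's framework itself), the closed-form reliabilities of a single block of identical, exponentially distributed units can be compared directly; the failure rate and mission time below are assumed values.

```python
# Minimal sketch, not the dissertation's framework: reliability of one block
# with n identical units under hot-parallel vs. cold-standby redundancy,
# assuming exponential unit lifetimes and a perfect standby switch.
import math

def reliability_parallel(n, lam, t):
    # Hot parallel: the block survives if at least one unit survives.
    return 1.0 - (1.0 - math.exp(-lam * t)) ** n

def reliability_standby(n, lam, t):
    # Cold standby with perfect switching: block lifetime is Erlang(n, lam).
    return math.exp(-lam * t) * sum((lam * t) ** k / math.factorial(k) for k in range(n))

lam = 1.0 / 50_000.0   # assumed unit failure rate: one failure per 50,000 hours
t = 20_000.0           # assumed mission time in hours
for n in (1, 2, 3):
    print(f"n={n}: parallel={reliability_parallel(n, lam, t):.4f}, "
          f"standby={reliability_standby(n, lam, t):.4f}")
```

Under this idealisation standby redundancy outperforms parallel redundancy because unused units do not age, which is exactly the kind of trade-off (against switch reliability and cost) that a redundancy allocation model has to weigh.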
112

Simulation of Weakly Correlated Functions and its Application to Random Surfaces and Random Polynomials

Fellenberg, Benno, Scheidt, Jürgen vom, Richter, Matthias 30 October 1998 (has links)
The paper is dedicated to the modeling and simulation of random processes and fields. Using the concept and theory of weakly correlated functions, a consistent representation of sufficiently smooth random processes is derived. Special applications are given to the simulation of road surfaces in vehicle dynamics and to the confirmation of theoretical results on the zeros of random polynomials.
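A loose sketch of the weak-correlation idea (not the paper's construction): a road-profile-like signal can be generated by smoothing white noise with a kernel whose support sets the correlation length; every parameter below is an assumption.

```python
# Rough illustration only: a smooth, weakly correlated signal obtained by
# convolving white noise with a short triangular kernel. The correlation
# length eps, step dx, and amplitude sigma are assumed values.
import numpy as np

def weakly_correlated_signal(n_points=2000, dx=0.01, eps=0.2, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    kernel = np.bartlett(max(int(eps / dx), 2))   # support ~ eps sets the correlation length
    kernel /= np.sqrt(np.sum(kernel ** 2))        # keep the output variance near sigma**2
    noise = rng.normal(0.0, 1.0, n_points + kernel.size)
    return sigma * np.convolve(noise, kernel, mode="valid")[:n_points]

profile = weakly_correlated_signal()
print(profile.shape, round(float(profile.std()), 3))
```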
113

Demand Deposits : Valuation and Interest Rate Risk Management / Avistakonton : Värdering och Ränteriskhantering

Lu, Yang, Visvanathar, Kevin January 2015 (has links)
In the aftermath of the financial crisis of 2008, regulatory authorities have implemented stricter policies to ensure more prudent risk management practices among banks. Despite the growing importance of demand deposits for banks, no policies for how to adequately account for the inherent interest rate risk have been introduced. Demand deposits are associated with two sources of uncertainty which make it difficult to assess their risk using standardized models: they lack a predetermined maturity and the deposit rate may be changed at the bank's discretion. In light of this gap, this study aims to empirically investigate the modeling of the valuation and interest rate risk of demand deposits with two different frameworks: the Economic Value Model Framework (EVM) and the Replicating Portfolio Model Framework (RPM). To analyze the two frameworks, models for the demand deposit rate and demand deposit volume are developed using a comprehensive and novel dataset provided by one of the biggest commercial banks in Sweden. The findings indicate that including macroeconomic variables in the modeling of the deposit rate and deposit volume does not improve the modeling accuracy. This is in contrast to what has been suggested by previous studies. The findings also indicate that there are modeling differences between demand deposit categories. Finally, the EVM is found to produce interest rate risk estimates with less variability than the RPM. / Till följd av finanskrisen 2008 har regulatoriska myndigheter infört mer strikta regelverk för att främja en sund finansiell riskhantering hos banker. Trots avistakontons ökade betydelse för banker har inga regulatoriska riktlinjer introducerats för hur den associerade ränterisken ska hanteras ur ett riskperspektiv. Avistakonton är förknippade med två faktorer som försvårar utvärderingen av dess ränterisk med traditionella ränteriskmetoder: de saknar en förutbestämd löptid och avistaräntan kan ändras när så banken önskar. Med hänsyn till detta gap fokuserar denna studie på att empiriskt analysera två modelleringsramverk för att värdera och mäta ränterisken hos avistakonton: Economic Value Model Framework (EVM) och Replicating Portfolio Model Framework (RPM). Analysen genomförs genom att initialt ta fram modeller för hur avistaräntan och volymen på avistakonton utvecklas över tid med hjälp av ett modernt och unikt dataset från en av Sveriges största kommersiella banker. Studiens resultat indikerar att modellerna för avistaräntan och avistavolymen inte förbättras när makroekonomiska variabler är inkluderade. Detta är i kontrast till vad tidigare studier har föreslagit. Vidare visar studiens resultat att modellerna skiljer sig när avistakontona är segmenterade på en mer granulär nivå. Slutligen påvisar resultatet att EVM producerar ränteriskestimat som är mindre känsliga för antaganden än RPM.
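To make the economic-value idea concrete, the stylized sketch below values a deposit book as the discounted margin between a market rate and the deposit rate under an assumed geometric runoff of today's volume; it is not the thesis's EVM or the bank's model, and every number is an assumption.

```python
# Stylized sketch of an economic-value calculation for non-maturing deposits;
# not the thesis's EVM. Runoff profile, rates, and horizon are assumptions.
import numpy as np

volume0 = 100.0        # current deposit volume (assumed monetary units)
runoff = 0.90          # assumed fraction of volume retained each year
deposit_rate = 0.005   # assumed rate paid to depositors
market_rate = 0.02     # assumed flat market / discount rate
horizon = 30           # assumed cut-off in years

years = np.arange(1, horizon + 1)
volumes = volume0 * runoff ** years                # expected remaining volume per year
margins = (market_rate - deposit_rate) * volumes   # yearly margin cash flows
discount = (1.0 + market_rate) ** -years
print(f"economic value of the deposit book: {np.sum(margins * discount):.2f}")
```

Interest rate risk would then be read off by repricing the same assumed cash flows under shifted market rates.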
114

The Eukaryotic Chromatin Computer: Components, Mode of Action, Properties, Tasks, Computational Power, and Disease Relevance

Arnold, Christian 14 February 2014 (has links)
Eukaryotic genomes are typically organized as chromatin, the complex of DNA and proteins that forms chromosomes within the cell's nucleus. Chromatin has pivotal roles for a multitude of functions, most of which are carried out by a complex system of covalent chemical modifications of histone proteins. The propagation of patterns of these histone post-translational modifications across cell divisions is particularly important for maintenance of the cell state in general and the transcriptional program in particular. The discovery of epigenetic inheritance phenomena - mitotically and/or meiotically heritable changes in gene function resulting from changes in a chromosome without alterations in the DNA sequence - was remarkable because it disproved the assumption that information is passed to daughter cells exclusively through DNA. However, DNA replication constitutes a dramatic disruption of the chromatin state that effectively amounts to partial erasure of stored information. To preserve its epigenetic state, the cell reconstructs (at least part of) the histone post-translational modifications by means of processes that are still very poorly understood. A plausible hypothesis is that the different combinations of reader and writer domains in histone-modifying enzymes implement local rewriting rules that are capable of "recomputing" the desired parental patterns of histone post-translational modifications on the basis of the partial information contained in that half of the nucleosomes that predates replication. It is becoming increasingly clear that both information processing and computation are omnipresent and of fundamental importance in many fields of the natural sciences, and in the cell in particular. The latter is exemplified by the increasingly popular research areas that focus on computing with DNA and membranes. Recent work suggests that during evolution, chromatin has been converted into a powerful cellular memory device capable of storing and processing large amounts of information. Eukaryotic chromatin may therefore also act as a cellular computational device capable of performing actual computations in a biological context. A recent theoretical study indeed demonstrated that even relatively simple models of chromatin computation are computationally universal and hence conceptually more powerful than gene regulatory networks. In the first part of this thesis, I establish a deeper understanding of the computational capacities and limits of chromatin, which have remained largely unexplored. I analyze selected biological building blocks of the chromatin computer and compare them to the system components of general-purpose computers, particularly focusing on memory and the logical and arithmetical operations. I argue that it has a massively parallel architecture, a set of read-write rules that operate non-deterministically on chromatin, the capability of self-modification, and more generally striking analogies to amorphous computing. I therefore propose a cellular automata-like 1-D string as its computational paradigm, on which sets of local rewriting rules are applied asynchronously with time-dependent probabilities. Its mode of operation is therefore conceptually similar to well-known concepts from complex systems theory. Furthermore, the chromatin computer provides volatile memory with a massive information content that can be exploited by the cell. 
I estimate that its memory size lies in the realm of several hundred megabytes of writable information per cell, a value that I compare with DNA itself and with cis-regulatory modules. I furthermore show that it has the potential to perform computations not only in a biological context but also in a strict informatics sense. At least theoretically, it may therefore be used to compute any computable function or, more generally, any algorithm. Chromatin is therefore another representative of the growing number of non-standard computing examples. As an example of a biological challenge that may be solved by the "chromatin computer", I formulate epigenetic inheritance as a computational problem and develop a flexible stochastic simulation system for the study of recomputation-based epigenetic inheritance of individual histone post-translational modifications. The implementation uses Gillespie's stochastic simulation algorithm for exactly simulating the time evolution of the chemical master equation of the underlying stochastic process. Furthermore, it is efficient enough to use an evolutionary algorithm to find a system of enzymes that can stably maintain a particular chromatin state across multiple cell divisions. I find that it is easy to evolve such a system of enzymes even without explicit boundary elements separating differentially modified chromatin domains. However, the success of this task depends on several previously unanticipated factors such as the length of the initial state, the specific pattern that should be maintained, the time between replications, and various chemical parameters. All these factors also influence the accumulation of errors in the wake of cell divisions. Chromatin-regulatory processes and epigenetic (inheritance) mechanisms constitute an intricate and sensitive system, and any misregulation may contribute significantly to various diseases such as Alzheimer's disease. Intriguingly, the role of epigenetics and chromatin-based processes, as well as non-coding RNAs, in the etiology of Alzheimer's disease is increasingly being recognized. In the second part of this thesis, I explicitly and systematically address the two hypotheses that (i) a dysregulated chromatin computer plays important roles in Alzheimer's disease and (ii) Alzheimer's disease may be considered an evolutionarily young disease. In summary, I find support for both hypotheses, although for hypothesis (i) it is very difficult to establish causality due to the complexity of the disease. However, I identify numerous chromatin-associated, differentially expressed loci for histone proteins, chromatin-modifying enzymes or integral parts thereof, non-coding RNAs with guiding functions for chromatin-modifying complexes, and proteins that directly or indirectly influence epigenetic stability (e.g., by altering cell cycle regulation and therefore potentially also the stability of epigenetic states). For the identification of differentially expressed loci in Alzheimer's disease, I use a custom expression microarray that was constructed with a novel bioinformatics pipeline. Despite the emergence of more advanced high-throughput methods such as RNA-seq, microarrays still offer some advantages and will remain a useful and accurate tool for transcriptome profiling and expression studies. 
However, it is non-trivial to establish an appropriate probe design strategy for custom expression microarrays because alternative splicing and transcription from non-coding regions are much more pervasive than previously appreciated. To obtain an accurate and complete expression atlas of genomic loci of interest in the post-ENCODE era, this additional transcriptional complexity must be considered during microarray design and requires well-considered probe design strategies that are often neglected. This encompasses, for example, adequate preparation of a set of target sequences and accurate estimation of probe specificity. With the help of this pipeline, two custom-tailored microarrays have been constructed that include a comprehensive collection of non-coding RNAs. Additionally, a user-friendly web server has been set up that makes the developed pipeline publicly available for other researchers. / Eukaryotische Genome sind typischerweise in Form von Chromatin organisiert, dem Komplex aus DNA und Proteinen, aus dem die Chromosomen im Zellkern bestehen. Chromatin hat lebenswichtige Funktionen in einer Vielzahl von Prozessen, von denen die meisten durch ein komplexes System von kovalenten Modifikationen an Histon-Proteinen ablaufen. Muster dieser Modifikationen sind wichtige Informationsträger, deren Weitergabe über die Zellteilung hinaus an beide Tochterzellen besonders wichtig für die Aufrechterhaltung des Zellzustandes im Allgemeinen und des Transkriptionsprogrammes im Speziellen ist. Die Entdeckung von epigenetischen Vererbungsphänomenen - mitotisch und/oder meiotisch vererbbare Veränderungen von Genfunktionen, hervorgerufen durch Veränderungen an Chromosomen, die nicht auf Modifikationen der DNA-Sequenz zurückzuführen sind - war bemerkenswert, weil es die Hypothese widerlegt hat, dass Informationen an Tochterzellen ausschließlich durch DNA übertragen werden. Die Replikation der DNA erzeugt eine dramatische Störung des Chromatinzustandes, welche letztendlich ein partielles Löschen der gespeicherten Informationen zur Folge hat. Um den epigenetischen Zustand zu erhalten, muss die Zelle Teile der parentalen Muster der Histonmodifikationen durch Prozesse rekonstruieren, die noch immer sehr wenig verstanden sind. Eine plausible Hypothese postuliert, dass die verschiedenen Kombinationen der Lese- und Schreibdomänen innerhalb von Histon-modifizierenden Enzymen lokale Umschreibregeln implementieren, die letztendlich das parentale Modifikationsmuster der Histone neu errechnen. Dies geschieht auf Basis der partiellen Informationen, die in der Hälfte der vererbten Histone gespeichert sind. Es wird zunehmend klarer, dass sowohl Informationsverarbeitung als auch computerähnliche Berechnungen omnipräsent und in vielen Bereichen der Naturwissenschaften von fundamentaler Bedeutung sind, insbesondere in der Zelle. Dies wird exemplarisch durch die zunehmend populärer werdenden Forschungsbereiche belegt, die sich auf computerähnliche Berechnungen mithilfe von DNA und Membranen konzentrieren. Jüngste Forschungen suggerieren, dass sich Chromatin während der Evolution in eine mächtige zelluläre Speichereinheit entwickelt hat und in der Lage ist, eine große Menge an Informationen zu speichern und zu prozessieren. Eukaryotisches Chromatin könnte also als ein zellulärer Computer agieren, der in der Lage ist, computerähnliche Berechnungen in einem biologischen Kontext auszuführen. 
Eine theoretische Studie hat kürzlich demonstriert, dass bereits relativ simple Modelle eines Chromatincomputers berechnungsuniversell und damit mächtiger als reine genregulatorische Netzwerke sind. Im ersten Teil meiner Dissertation stelle ich ein tieferes Verständnis des Leistungsvermögens und der Beschränkungen des Chromatincomputers her, welche bisher größtenteils unerforscht waren. Ich analysiere ausgewählte Grundbestandteile des Chromatincomputers und vergleiche sie mit den Komponenten eines klassischen Computers, mit besonderem Fokus auf Speicher sowie logische und arithmetische Operationen. Ich argumentiere, dass Chromatin eine massiv parallele Architektur, eine Menge von Lese-Schreib-Regeln, die nicht-deterministisch auf Chromatin operieren, die Fähigkeit zur Selbstmodifikation, und allgemeine verblüffende Ähnlichkeiten mit amorphen Berechnungsmodellen besitzt. Ich schlage deswegen eine Zellularautomaten-ähnliche eindimensionale Kette als Berechnungsparadigma vor, auf der lokale Lese-Schreib-Regeln auf asynchrone Weise mit zeitabhängigen Wahrscheinlichkeiten ausgeführt werden. Seine Wirkungsweise ist demzufolge konzeptionell ähnlich zu den wohlbekannten Theorien von komplexen Systemen. Zudem hat der Chromatincomputer volatilen Speicher mit einem massiven Informationsgehalt, der von der Zelle benutzt werden kann. Ich schätze ab, dass die Speicherkapazität im Bereich von mehreren Hundert Megabytes von schreibbarer Information pro Zelle liegt, was ich zudem mit DNA und cis-regulatorischen Modulen vergleiche. Ich zeige weiterhin, dass ein Chromatincomputer nicht nur Berechnungen in einem biologischen Kontext ausführen kann, sondern auch in einem strikt informatischen Sinn. Zumindest theoretisch kann er deswegen für jede berechenbare Funktion benutzt werden. Chromatin ist demzufolge ein weiteres Beispiel für die steigende Anzahl von unkonventionellen Berechnungsmodellen. Als Beispiel für eine biologische Herausforderung, die vom Chromatincomputer gelöst werden kann, formuliere ich die epigenetische Vererbung als rechnergestütztes Problem. Ich entwickle ein flexibles Simulationssystem zur Untersuchung der epigenetischen Vererbung von individuellen Histonmodifikationen, welches auf der Neuberechnung der partiell verlorengegangenen Informationen der Histonmodifikationen beruht. Die Implementierung benutzt Gillespies stochastischen Simulationsalgorithmus, um die chemische Mastergleichung der zugrundeliegenden stochastischen Prozesse über die Zeit auf exakte Art und Weise zu modellieren. Der Algorithmus ist zudem effizient genug, um in einen evolutionären Algorithmus eingebettet zu werden. Diese Kombination erlaubt es, ein System von Enzymen zu finden, das einen bestimmten Chromatinstatus über mehrere Zellteilungen hinweg stabil vererben kann. Dabei habe ich festgestellt, dass es relativ einfach ist, ein solches System von Enzymen zu evolvieren, auch ohne explizite Einbindung von Randelementen zur Separierung differentiell modifizierter Chromatindomänen. Dennoch hängt der Erfolg dieser Aufgabe von mehreren bisher unbeachteten Faktoren ab, wie zum Beispiel der Länge der Domäne, dem bestimmten zu vererbenden Muster, der Zeit zwischen Replikationen sowie verschiedenen chemischen Parametern. Alle diese Faktoren beeinflussen die Anhäufung von Fehlern als Folge von Zellteilungen. 
Chromatin-regulatorische Prozesse und epigenetische Vererbungsmechanismen stellen ein komplexes und sensitives System dar und jede Fehlregulation kann bedeutend zu verschiedenen Krankheiten, wie zum Beispiel der Alzheimerschen Krankheit, beitragen. In der Ätiologie der Alzheimerschen Krankheit wird die Bedeutung von epigenetischen und Chromatin-basierten Prozessen sowie nicht-kodierenden RNAs zunehmend erkannt. Im zweiten Teil der Dissertation adressiere ich explizit und auf systematische Art und Weise die zwei Hypothesen, dass (i) ein fehlregulierter Chromatincomputer eine wichtige Rolle in der Alzheimerschen Krankheit spielt und (ii) die Alzheimersche Krankheit eine evolutionär junge Krankheit darstellt. Zusammenfassend finde ich Belege für beide Hypothesen, obwohl es für erstere schwierig ist, aufgrund der Komplexität der Krankheit Kausalitäten zu etablieren. Dennoch identifiziere ich zahlreiche differentiell exprimierte, Chromatin-assoziierte Bereiche, wie zum Beispiel Histone, Chromatin-modifizierende Enzyme oder deren integrale Bestandteile, nicht-kodierende RNAs mit Führungsfunktionen für Chromatin-modifizierende Komplexe oder Proteine, die direkt oder indirekt epigenetische Stabilität durch veränderte Zellzyklus-Regulation beeinflussen. Zur Identifikation von differentiell exprimierten Bereichen in der Alzheimerschen Krankheit benutze ich einen maßgeschneiderten Expressions-Microarray, der mit Hilfe einer neuartigen Bioinformatik-Pipeline erstellt wurde. Trotz des Aufkommens von weiter fortgeschrittenen Hochdurchsatzmethoden, wie zum Beispiel RNA-seq, haben Microarrays immer noch einige Vorteile und werden ein nützliches und akkurates Werkzeug für Expressionsstudien und Transkriptom-Profiling bleiben. Es ist jedoch nicht trivial eine geeignete Strategie für das Sondendesign von maßgeschneiderten Expressions-Microarrays zu finden, weil alternatives Spleißen und Transkription von nicht-kodierenden Bereichen viel verbreiteter sind als ursprünglich angenommen. Um ein akkurates und vollständiges Bild der Expression von genomischen Bereichen in der Zeit nach dem ENCODE-Projekt zu bekommen, muss diese zusätzliche transkriptionelle Komplexität schon während des Designs eines Microarrays berücksichtigt werden und erfordert daher wohlüberlegte und oft ignorierte Strategien für das Sondendesign. Dies umfasst zum Beispiel eine adäquate Vorbereitung der Zielsequenzen und eine genaue Abschätzung der Sondenspezifität. Mit Hilfe der Pipeline wurden zwei maßgeschneiderte Expressions-Microarrays produziert, die beide eine umfangreiche Sammlung von nicht-kodierenden RNAs beinhalten. Zusätzlich wurde ein nutzerfreundlicher Webserver programmiert, der die entwickelte Pipeline für jeden öffentlich zur Verfügung stellt.
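The abstract names Gillespie's stochastic simulation algorithm as the engine of the inheritance model; the toy birth-death example below illustrates only that exact-simulation idea and is not the thesis's chromatin model. The rates and the reaction network are assumptions.

```python
# Minimal Gillespie (SSA) sketch for a toy birth-death system; not the
# thesis's chromatin simulator. Exact event-by-event simulation of the
# chemical master equation for two assumed reactions: 0 -> X and X -> 0.
import math
import random

def gillespie_birth_death(k_birth=1.0, k_death=0.1, x0=0, t_end=50.0, seed=1):
    random.seed(seed)
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_end:
        a_birth, a_death = k_birth, k_death * x            # reaction propensities
        a_total = a_birth + a_death
        if a_total == 0.0:
            break
        t += -math.log(1.0 - random.random()) / a_total    # exponential waiting time
        x += 1 if random.random() * a_total < a_birth else -1
        trajectory.append((t, x))
    return trajectory

traj = gillespie_birth_death()
print(f"{len(traj)} events simulated, final copy number {traj[-1][1]}")
```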
115

[pt] MODELAGEM DA DATA DE ENTRADA EM PRODUÇÃO DE POÇOS DE PETRÓLEO UTILIZANDO INFERÊNCIA FUZZY / [en] MODELING OIL WELL PRODUCTION START DATE USING FUZZY INFERENCE

GABRIEL ALCANTARA BOMFIM 11 May 2017 (has links)
[pt] A previsão de produção é uma das etapas mais críticas do planejamento de curto prazo das empresas de exploração e produção de petróleo. O volume de petróleo que será produzido, denominado meta de produção, influencia diretamente todas as ações das empresas e tem crítico impacto em relação ao mercado. Percebe-se, portanto, a importância da aplicação de modelos que permitam considerar incertezas e avaliar o risco destas previsões. Esta modelagem estocástica tem sido realizada através de um modelo de simulação que considera quatro dimensões de variáveis: Potencial Produtivo Instalado, Entrada de Novos Poços, Parada Programada para Manutenção e Eficiência Operacional. Dentre as dimensões do modelo, a Entrada de Novos Poços é uma das mais sensíveis ao resultado final da previsão por apresentar grande incerteza. Desse modo, este trabalho tem por objetivo desenvolver um sistema de inferência fuzzy para prever a data de entrada em produção de poços de petróleo. O sistema é concebido integrado ao modelo de simulação visando aumentar a sua precisão. Os resultados mostram que o sistema de inferência fuzzy é aplicável à previsão da entrada de novos poços e que o seu uso eleva a acurácia das previsões de produção. / [en] Production forecasting is one of the most critical stages of short-term planning in upstream oil companies. The oil volume that will be produced, called the production target, directly influences all of the companies' actions and critically impacts their market image. This highlights the importance of using models that account for uncertainty and evaluate the risk of these forecasts. This stochastic approach has been implemented through a simulation model that considers four dimensions of variables: installed production potential, new well entry, scheduled maintenance, and operational efficiency. Among those dimensions, new well entry is one of those to which the simulation results are most sensitive, because of its high degree of uncertainty. Thus, this work aims to develop a fuzzy inference system to forecast the production start date of new wells. The system is designed to be integrated with the simulation model in order to increase its accuracy. The results show that the fuzzy inference system can be used to forecast well production start dates and that its use increases the accuracy of oil production forecasts.
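A toy Mamdani-style fuzzy inference sketch, not the dissertation's actual rule base or variables: two assumed inputs on a 0-10 scale ("complexity" of the well hook-up and rig "availability") are mapped to a start-date delay in days. The membership functions, rules, and ranges are illustrative assumptions.

```python
# Toy Mamdani fuzzy inference with centroid defuzzification; all linguistic
# variables, membership functions, and rules are invented for illustration.
import numpy as np

def tri(x, a, b, c):
    # Triangular membership with feet at a and c and peak at b.
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def low(x):
    return float(np.clip((5.0 - x) / 5.0, 0.0, 1.0))    # fully "low" at 0, not low at >= 5

def high(x):
    return float(np.clip((x - 5.0) / 5.0, 0.0, 1.0))    # not "high" at <= 5, fully high at 10

def infer_delay(complexity, availability):
    delay = np.linspace(0.0, 120.0, 241)                 # output universe: delay in days
    rules = [
        (min(low(complexity), high(availability)), tri(delay, 0, 10, 40)),    # -> short delay
        (max(high(complexity), low(availability)), tri(delay, 30, 60, 90)),   # -> medium delay
        (min(high(complexity), low(availability)), tri(delay, 70, 110, 120)), # -> long delay
    ]
    aggregated = np.maximum.reduce([np.minimum(w, s) for w, s in rules])      # clip and join
    if aggregated.sum() == 0.0:
        return 0.0
    return float(np.sum(delay * aggregated) / np.sum(aggregated))             # centroid

print(f"estimated delay: {infer_delay(complexity=8, availability=2):.1f} days")
```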
116

Probabilistic and Statistical Learning Models for Error Modeling and Uncertainty Quantification

Zavar Moosavi, Azam Sadat 13 March 2018 (has links)
Simulations and modeling of large-scale systems are vital to understanding real-world phenomena. However, even advanced numerical models can only approximate the true physics. The discrepancy between model results and nature can be attributed to different sources of uncertainty, including the parameters of the model, the input data, or missing physics that is not included in the model due to a lack of knowledge or high computational costs. Uncertainty reduction approaches seek to improve model accuracy by decreasing the overall uncertainties in models. Aiming to contribute to this area, this study explores uncertainty quantification and reduction approaches for complex physical problems. It proposes several novel probabilistic and statistical approaches for identifying the sources of uncertainty, modeling the errors, and reducing uncertainty to improve the model predictions for large-scale simulations. We explore different computational models. The first class of models studied herein is inherently stochastic, and its numerical approximations suffer from stability and accuracy issues. The second class consists of partial differential equations, which capture the laws of mathematical physics; however, they only approximate a more complex reality and have uncertainties due to missing dynamics that are not captured by the models. The third class consists of low-fidelity models, which are fast approximations of very expensive high-fidelity models; these reduced-order models have uncertainty due to the loss of information in the dimension-reduction process. We also consider uncertainty analysis in the data assimilation framework, specifically for ensemble-based methods, where the effect of sampling errors is alleviated by localization. Finally, we study the uncertainty in numerical weather prediction models coming from approximate descriptions of physical processes. / Ph. D.
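The abstract mentions localization as the remedy for sampling error in ensemble methods; the small sketch below (not the thesis's code) tapers a noisy sample covariance from a 10-member ensemble with a distance-based weight to damp spurious long-range correlations. The grid size, ensemble size, and localization radius are assumed.

```python
# Sketch of covariance localization (Schur product with a distance taper);
# illustrative only, with an assumed 1-D state and Gaussian taper.
import numpy as np

rng = np.random.default_rng(0)
n_state, n_ens, loc_radius = 40, 10, 5.0

truth_cov = np.array([[np.exp(-abs(i - j) / 3.0) for j in range(n_state)] for i in range(n_state)])
ensemble = rng.multivariate_normal(np.zeros(n_state), truth_cov, size=n_ens)

sample_cov = np.cov(ensemble, rowvar=False)                 # noisy with only 10 members
dist = np.abs(np.subtract.outer(np.arange(n_state), np.arange(n_state)))
taper = np.exp(-0.5 * (dist / loc_radius) ** 2)             # localization weights
localized_cov = sample_cov * taper                          # element-wise (Schur) product

print("error without localization:", np.linalg.norm(sample_cov - truth_cov))
print("error with localization:   ", np.linalg.norm(localized_cov - truth_cov))
```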
117

Développement d'une méthodologie de modélisation cinétique de procédés de raffinage traitant des charges lourdes / Development of a novel methodology for kinetic modelling of heavy oil refining processes

Pereira De Oliveira, Luís Carlos 21 May 2013 (has links)
Une nouvelle méthodologie de modélisation cinétique des procédés de raffinage traitant les charges lourdes a été développée. Elle modélise, au niveau moléculaire, la composition de la charge et les réactions mises en œuvre dans le procédé. La composition de la charge est modélisée à travers un mélange de molécules dont les propriétés sont proches de celles de la charge. Le mélange de molécules est généré par une méthode de reconstruction moléculaire en deux étapes. Dans la première étape, les molécules sont créées par assemblage de blocs structuraux de manière stochastique. Dans la deuxième étape, les fractions molaires sont ajustées en maximisant un critère d'entropie d'information. Le procédé de raffinage est ensuite simulé en appliquant, réaction par réaction, ses principales transformations sur le mélange de molécules, à l'aide d'un algorithme de Monte Carlo. Cette méthodologie est appliquée à deux cas particuliers : l'hydrotraitement de gazoles et l'hydroconversion de résidus sous vide (RSV). Pour le premier cas, les propriétés globales de l'effluent sont bien prédites, ainsi que certaines propriétés moléculaires qui ne sont pas accessibles dans les modèles traditionnels. Pour l'hydroconversion de RSV, dont la structure moléculaire est nettement plus complexe, la conversion des coupes lourdes est correctement reproduite. Par contre, la prédiction des rendements en coupes légères et de la performance en désulfuration est moins précise. Pour les améliorer, il faut d'une part inclure de nouvelles réactions d'ouverture de cycle et d'autre part mieux représenter la charge en tenant compte des informations moléculaires issues des analyses des coupes de l'effluent. / In the present PhD thesis, a novel methodology for the kinetic modelling of heavy oil refining processes is developed. The methodology models both the feedstock composition and the process reactions at a molecular level. The composition modelling consists of generating a set of molecules whose properties are close to those obtained from the process feedstock analyses. The set of molecules is generated by a two-step molecular reconstruction algorithm. In the first step, an equimolar set of molecules is built by assembling structural blocks in a stochastic manner. In the second step, the mole fractions of the molecules are adjusted by maximizing an information entropy criterion. The refining process is then simulated by applying, step by step, its main reactions to the set of molecules, by a Monte Carlo method. This methodology has been applied to two refining processes: the hydrotreating (HDT) of Light Cycle Oil (LCO) gas oils and the hydroconversion of vacuum residues (VR). For the HDT of LCO gas oils, the overall properties of the effluent are well predicted. The methodology is also able to predict molecular properties of the effluent that are not accessible from traditional kinetic models. For the hydroconversion of VR, which have more complex molecules than LCO gas oils, the conversion of heavy fractions is correctly predicted. However, the results for the composition of lighter fractions and the desulfurization yield are less accurate. To improve them, one must, on the one hand, include new ring-opening reactions and, on the other hand, refine the feedstock representation by using additional molecular information from the analyses of the process effluents.
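The second reconstruction step, adjusting mole fractions by maximizing an information-entropy criterion, can be sketched as a small constrained optimization; the candidate molecules and the single target property below are invented for illustration, and this is not the thesis's implementation.

```python
# Bare-bones sketch of entropy-maximization reconstruction: adjust mole
# fractions x_i of fixed candidate molecules to match one assumed mixture
# property (average molar mass) while maximizing Shannon entropy.
import numpy as np
from scipy.optimize import minimize

molar_mass = np.array([170.0, 226.0, 340.0, 478.0, 610.0])   # assumed candidates (g/mol)
target_mass = 400.0                                           # assumed measured property

def neg_entropy(x):
    x = np.clip(x, 1e-12, None)
    return float(np.sum(x * np.log(x)))                       # minimize -> maximize entropy

constraints = [
    {"type": "eq", "fun": lambda x: np.sum(x) - 1.0},                      # fractions sum to 1
    {"type": "eq", "fun": lambda x: np.dot(x, molar_mass) - target_mass},  # match the property
]
x0 = np.full(molar_mass.size, 1.0 / molar_mass.size)
res = minimize(neg_entropy, x0, method="SLSQP",
               bounds=[(0.0, 1.0)] * molar_mass.size, constraints=constraints)
print("adjusted mole fractions:", np.round(res.x, 3))
print("mixture molar mass:", round(float(np.dot(res.x, molar_mass)), 2))
```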
118

International breeding programs to improve health in pedigree dogs / Programmes d'élevage internationaux pour améliorer la santé des chiens de race

Wang, Shizhi 15 June 2018 (has links)
La santé du chien constitue une préoccupation croissante pour les éleveurs, les propriétaires et le grand public, plusieurs rapports ayant récemment souligné les potentiels impacts négatifs des pratiques d'élevage sur la santé des chiens de race (APGAW 2009, Nicholas 2011), au travers de la diffusion d'affections héréditaires par exemple. Ainsi, l'OFA (Orthopedic Foundation for Animals, http://www.offa.org) considère que la dysplasie de la hanche affecte au moins 163 races de chiens, avec des prévalences allant de 1,2 à 72,1%. La mise en œuvre de stratégies d'élevage afin de réduire l'incidence des maladies héréditaires et leur impact sur le bien-être constitue une priorité pour les éleveurs et les organisations d'élevage. L'efficacité de ces stratégies dépend toutefois fortement de facteurs tels que leur déterminisme génétique, la disponibilité de diagnostics cliniques ou génétiques efficaces, ainsi que les conditions spécifiques au contexte (la prévalence, la démographie, l'existence d'autres affections, la coopération des éleveurs ...). Par exemple, il a été montré que pour une maladie monogénique récessive, à fréquence égale, l'impact d'une stratégie sur la variabilité génétique sera extrêmement différent en fonction de la race (Leroy et Rognon 2012). Il est important de souligner également que le contexte et le cadre réglementaire de l'élevage varient beaucoup en fonction des pays. A titre d'exemple, en Suède, la proportion importante d'animaux de compagnie assurés (environ 50%) permet la mise en place d'enquêtes sur la santé des chiens à grande échelle (Bonnett et al., 2005), facilitant l'identification des affections impactant le bien-être. En fonction des pays, des mesures différentes de lutte contre les affections héréditaires ont pu être mises en place, pouvant aller de l'incitation à utiliser des reproducteurs sains à l'interdiction de reproduction pour des individus atteints d'affections problématiques. Dans le cas de la dysplasie de la hanche, un système d'évaluation génétique a été mis en œuvre dans certains pays (Allemagne, Suède, Royaume-Uni) pour quelques races, alors que dans certains autres pays, il est encore en cours de développement. Notons qu'un projet préliminaire à la thèse sera mis en place à l'échelle des kennel clubs nordiques (KNU) pour s'intéresser à la valeur ajoutée des échanges internationaux de données généalogiques et de santé. / Dog health constitutes a major concern for breeders, owners, as well as the general public, all the more since several studies and reports have recently underlined the potential impacts of breeding practices on dog health and fitness (APGAW 2009, Nicholas 2011). According to Online Mendelian Inheritance in Animals (OMIA, omia.angis.org.au), more than 586 disorders/traits have been reported in dogs, with various prevalences and consequences for canine health (Collins et al. 2011, Nicholas et al. 2011). As an example, the Orthopedic Foundation for Animals (OFA 2011, http://www.offa.org) considers that hip dysplasia, a polygenic trait affected by environmental factors with a variable impact on welfare, affects at least 163 dog breeds, with prevalence ranging from 1.2 to 72.1%. Implementation of breeding plans to reduce the incidence of inherited disorders and their impact on welfare should be a priority for breeders and breeding organizations. 
The efficiency of such strategies is, however, highly dependent on several factors such as the inheritance pattern, the availability of efficient clinical/genetic tests, and context-specific conditions (prevalence, demography, existence of other disorders, cooperation of breeders…). For instance, it has been shown that for a monogenic recessive disorder with the same frequency, the impact of a given strategy on genetic diversity will be completely different depending on the breed (Leroy and Rognon 2012). It is also important to underline that breeding contexts and breeding rules differ greatly across countries. As an example, in Sweden the large proportion of insured pets (about 50%) allows large-scale surveys on dog health (Bonnett et al. 2005), leading to the identification of disorders critical to breed welfare. Depending on the country, the control of inherited disorders is implemented through various measures, from breeding recommendations to mating bans. In the case of hip dysplasia, a genetic evaluation system has been implemented in some countries (Germany, Sweden, UK) for a few breeds, while in other countries it is still under development. The fact that for many breeds breeding animals are exchanged between several countries with different breeding policies also constitutes a critical point to be taken into account when designing a breeding strategy. Moreover, it has been shown that the efficiency of genetic evaluation for a polygenic trait such as hip dysplasia could be improved by joint evaluation between different countries (Fikse et al. 2012). For this purpose, a preliminary project, starting in 2013 in Sweden, will investigate the value of exchanging pedigree and health data within the framework of the Nordic Kennel Union. The aim of this project is to provide operational tools to improve breed health in an international context, concerning both genetic evaluation and the implementation of breeding policies.
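A textbook-level sketch, not the thesis's breeding simulations: under idealized random mating, excluding affected (homozygous recessive) dogs from breeding drives the disease-allele frequency down as q_{t+1} = q_t / (1 + q_t), which already shows why the same measure has very different short-term impact in breeds with different starting frequencies. The starting frequencies below are arbitrary assumptions.

```python
# Classical selection-against-recessive-homozygotes recursion; idealized
# illustration only (random mating, no drift, no carrier testing).
def allele_frequency_after_exclusion(q0, generations):
    # q_{t+1} = q_t / (1 + q_t): expected decline when affected dogs are not bred.
    q, history = q0, [q0]
    for _ in range(generations):
        q = q / (1.0 + q)
        history.append(q)
    return history

for q0 in (0.30, 0.10):
    traj = allele_frequency_after_exclusion(q0, generations=10)
    print(f"q0={q0:.2f} -> after 10 generations q={traj[-1]:.3f}, "
          f"affected fraction ~ {traj[-1] ** 2:.4f}")
```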
119

Estudo comparativo de métodos geoestatísticos de estimativas e simulações estocásticas condicionais / Comparative study of geostatistical estimation methods and conditional stochastic simulations

Furuie, Rafael de Aguiar 05 October 2009 (has links)
Diferentes métodos geoestatísticos são apresentados como a melhor solução para diferentes contextos de acordo com a natureza dos dados a serem analisados. Alguns dos métodos de estimativa mais populares incluem a krigagem ordinária e a krigagem ordinária lognormal, esta ultima requerendo a transformação dos dados originais para uma distribuição gaussiana. No entanto, esses métodos apresentam limitações, sendo uma das mais discutidas o efeito de suavização apresentado pelas estimativas obtidas. Alguns algoritmos recentes foram propostos como meios de se corrigir este efeito, e são avaliados neste trabalho para a sua eficiência, assim como alguns algoritmos para a transformada reversa dos valores convertidos na krigagem ordinária lognormal. Outra abordagem para o problema é por meio do grupo de métodos denominado de simulação estocástica, alguns dos mais populares sendo a simulação gaussiana seqüencial e a simulação por bandas rotativas, que apesar de não apresentar o efeito de suavização da krigagem, não possuem a precisão local característica dos métodos de estimativa. Este trabalho busca avaliar a eficiência dos diferentes métodos de estimativa (krigagem ordinária, krigagem ordinária lognormal, assim como suas estimativas corrigidas) e simulação (simulação seqüencial gaussiana e simulação por bandas rotativas) para diferentes cenários de dados. Vinte e sete conjuntos de dados exaustivos (em grid 50x50) foram amostrados em 90 pontos por meio da amostragem aleatória simples. Estes conjuntos de dados partiam de uma distribuição gaussiana (Log1) e tinham seus coeficientes de variação progressivamente aumentados até se chegar a uma distribuição altamente assimétrica (Log27). Semivariogramas amostrais foram computados e modelados para os processos geoestatísticos de estimativa e simulação. As estimativas ou realizações resultantes foram então comparadas com os dados exaustivos originais de maneira a se avaliar quão bem esses dados originais eram reproduzidos. Isto foi feito pela comparação de parâmetros estatísticos dos dados originais com os dos dados reconstruídos, assim como por meio de análise gráfica. Resultados demonstraram que o método que apresentou melhores resultados foi a krigagem ordinária lognormal, estes ainda melhores quando aplicada a transformação reversa de Yamamoto, com grande melhora principalmente nos resultados para os dados altamente assimétricos. A krigagem ordinária apresentou sérias limitações na reprodução da cauda inferior dos conjuntos de dados mais assimétricos, apresentando para estes resultados piores que as estimativas não corrigidas. Ambos os métodos de simulação utilizados apresentaram uma baixa correlação como os dados exaustivos, seus resultados também cada vez menos representativos de acordo com o aumento do coeficiente de variação, apesar de apresentar a vantagem de fornecer diferentes cenários para tomada de decisões. / Different geostatistical methods present themselves as the optimal solution to different realities according to the characteristics displayed by the data in analysis. Some of the most popular estimation methods include ordinary kriging and lognormal ordinary kriging, this last one involving the transformation of data from their original space to a Gaussian distribution. However, these methods present some limitations, one of the most prominent ones being the smoothing effect observed in the resulting estimates. 
Some recent algorithms have been proposed as a way to correct this effect and are tested in this work for their effectiveness, as are some methods for the back-transformation of the lognormal-converted values. Another approach to the problem is the group of methods known as stochastic simulation, some of the most popular being sequential Gaussian simulation and turning bands simulation, which, although they do not present the smoothing effect, lack the local accuracy characteristic of the estimation methods. This work seeks to assess the effectiveness of the different estimation methods (ordinary kriging, lognormal ordinary kriging, and their corrected estimates) and simulation methods (sequential Gaussian simulation and turning bands simulation) for different scenarios. Twenty-seven exhaustive data sets (on a 50x50 grid) have been sampled at 90 points based on simple random sampling. These data sets started from a Gaussian distribution (Log1) and had their coefficients of variation increased progressively, up to a highly asymmetrical distribution (Log27). Experimental semivariograms have been computed and modeled for the geostatistical estimation and simulation processes. The resulting estimates or realizations were then compared to the original exhaustive data in order to assess how well these reproduced the original data. This was done by comparing statistical parameters of the original data with those of the reconstructed data, as well as graphically. Results showed that the method that presented the best correlation with the exhaustive data was lognormal ordinary kriging, even better when Yamamoto's back-transformation technique is applied, which greatly improved the results for the more asymmetrical data sets. Ordinary kriging and its correction had severe limitations in reproducing the lower tail of the more asymmetrical data sets, with worse results than the uncorrected estimates. Both simulation methods presented a very small degree of correlation with the exhaustive data, and their results became progressively less representative as the coefficient of variation grew, even though simulation has the advantage of providing several scenarios for decision making.
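For reference, ordinary kriging at a single target location reduces to solving the classical kriging system built from a semivariogram model; the compact sketch below is only an illustration (not the software used in the dissertation), with assumed sample coordinates, values, and spherical-variogram parameters (nugget, sill, range).

```python
# Ordinary kriging at one location: solve the kriging system with a Lagrange
# multiplier and return the estimate and kriging variance. Data and variogram
# parameters are assumed for illustration.
import numpy as np

def spherical_gamma(h, nugget=0.1, sill=1.0, rng=20.0):
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    g = np.where(h >= rng, sill, g)
    return np.where(h == 0.0, 0.0, g)          # gamma(0) = 0 on the diagonal

def ordinary_kriging(coords, values, target):
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    a = np.ones((n + 1, n + 1))
    a[:n, :n] = spherical_gamma(d)
    a[n, n] = 0.0                               # unbiasedness (Lagrange) row/column
    b = np.ones(n + 1)
    b[:n] = spherical_gamma(np.linalg.norm(coords - target, axis=1))
    sol = np.linalg.solve(a, b)
    weights, lagrange = sol[:n], sol[n]
    estimate = float(weights @ values)
    variance = float(b[:n] @ weights + lagrange)   # ordinary-kriging variance
    return estimate, variance

coords = np.array([[10.0, 20.0], [15.0, 25.0], [30.0, 10.0], [25.0, 30.0]])
values = np.array([2.1, 2.6, 1.4, 3.0])
print(ordinary_kriging(coords, values, target=np.array([20.0, 20.0])))
```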
120

"Testes de hipótese e critério bayesiano de seleção de modelos para séries temporais com raiz unitária" / "Hypothesis testing and bayesian model selection for time series with a unit root"

Silva, Ricardo Gonçalves da 23 June 2004 (has links)
A literatura referente a testes de hipótese em modelos auto-regressivos que apresentam uma possível raiz unitária é bastante vasta e engloba pesquisas oriundas de diversas áreas. Nesta dissertação, inicialmente, buscou-se realizar uma revisão dos principais resultados existentes, oriundos tanto da visão clássica quanto da bayesiana de inferência. No que concerne ao ferramental clássico, o papel do movimento browniano foi apresentado de forma detalhada, buscando-se enfatizar a sua aplicabilidade na dedução de estatísticas assintóticas para a realização dos testes de hipótese relativos à presença de uma raiz unitária. Com relação à inferência bayesiana, foi inicialmente conduzido um exame detalhado do status corrente da literatura. A seguir, foi realizado um estudo comparativo em que se testa a hipótese de raiz unitária com base na probabilidade da densidade a posteriori do parâmetro do modelo, considerando as seguintes densidades a priori: Flat, Jeffreys, Normal e Beta. A inferência foi realizada com base no algoritmo Metropolis-Hastings, usando a técnica de simulação de Monte Carlo por Cadeias de Markov (MCMC). Poder, tamanho e confiança dos testes apresentados foram computados com o uso de séries simuladas. Finalmente, foi proposto um critério bayesiano de seleção de modelos, utilizando as mesmas distribuições a priori do teste de hipótese. Ambos os procedimentos foram ilustrados com aplicações empíricas a séries temporais macroeconômicas. / Testing the unit root hypothesis in non-stationary autoregressive models has been a research topic spread across many academic areas. As a first step in approaching this issue, this dissertation includes an extensive review highlighting the main results provided by Classical and Bayesian inference methods. Concerning the Classical approach, the role of Brownian motion is discussed in detail, emphasizing its application in deriving asymptotic statistics for testing the existence of a unit root in a time series. For the Bayesian approach, a detailed discussion is also provided in the main text. On the empirical side of this dissertation, we implemented a comparative study that tests for a unit root based on the posterior density of the model's parameter, considering the following prior densities: Flat, Jeffreys, Normal and Beta. The inference is based on the Metropolis-Hastings algorithm and on the Markov Chain Monte Carlo (MCMC) technique. Simulated time series are used to compute the size, power and confidence of the proposed unit root hypothesis tests. Finally, we propose a Bayesian criterion for model selection based on the same prior distributions used in the hypothesis tests. Both procedures are empirically illustrated through applications to macroeconomic time series.
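A small sketch of the Bayesian unit-root machinery discussed above, not the dissertation's exact setup: random-walk Metropolis-Hastings on the AR(1) coefficient under a flat prior, reporting the posterior mass on the unit-root region. The series length, true coefficient, noise scale, and MCMC settings are assumed.

```python
# Random-walk Metropolis-Hastings for the AR(1) coefficient rho; flat prior,
# conditional Gaussian likelihood. Illustrative assumptions throughout.
import numpy as np

rng = np.random.default_rng(42)

true_rho, n = 0.97, 200                               # assumed data-generating process
y = np.zeros(n)
for t in range(1, n):
    y[t] = true_rho * y[t - 1] + rng.normal(0.0, 1.0)

def log_likelihood(rho, y, sigma=1.0):
    resid = y[1:] - rho * y[:-1]                      # conditional on the first observation
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

def metropolis_hastings(y, n_iter=20_000, step=0.02, burn_in=5_000):
    rho, chain = 0.5, []
    ll = log_likelihood(rho, y)
    for _ in range(n_iter):
        prop = rho + rng.normal(0.0, step)            # symmetric random-walk proposal
        ll_prop = log_likelihood(prop, y)             # flat prior: ratio = likelihood ratio
        if np.log(rng.uniform()) < ll_prop - ll:
            rho, ll = prop, ll_prop
        chain.append(rho)
    return np.array(chain[burn_in:])

chain = metropolis_hastings(y)
print(f"posterior mean of rho: {chain.mean():.3f}")
print(f"posterior P(rho >= 1): {np.mean(chain >= 1.0):.3f}")
```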
