71

Adaptive Sampling Methods for Stochastic Optimization

Daniel Andres Vasquez Carvajal (10631270) 08 December 2022 (has links)
This dissertation investigates the use of sampling methods for solving stochastic optimization problems using iterative algorithms. Two sampling paradigms are considered: (i) adaptive sampling, where, before each iterate update, the sample size for estimating the objective function and the gradient is adaptively chosen; and (ii) retrospective approximation (RA), where iterate updates are performed using a chosen fixed sample size for as long as progress is deemed statistically significant, at which time the sample size is increased. We investigate adaptive sampling within the context of a trust-region framework for solving stochastic optimization problems in $\mathbb{R}^d$, and retrospective approximation within the broader context of solving stochastic optimization problems on a Hilbert space. In the first part of the dissertation, we propose Adaptive Sampling Trust-Region Optimization (ASTRO), a class of derivative-based stochastic trust-region (TR) algorithms developed to solve smooth stochastic unconstrained optimization problems in $\mathbb{R}^{d}$ where the objective function and its gradient are observable only through a noisy oracle or using a large dataset. Efficiency in ASTRO stems from two key aspects: (i) adaptive sampling, which ensures that the objective function and its gradient are sampled only to the extent needed, so that small sample sizes are chosen when the iterates are far from a critical point and large sample sizes are chosen when the iterates are near a critical point; and (ii) quasi-Newton Hessian updates using BFGS. We prove three main results for ASTRO and for general stochastic trust-region methods that estimate function and gradient values adaptively, using sample sizes that are stopping times with respect to the sigma algebra of the generated observations. The first asserts strong consistency when the adaptive sample sizes have a mild logarithmic lower bound, assuming that the oracle errors are light-tailed. The second and third results characterize the iteration and oracle complexities in terms of certain risk functions. Specifically, the second result asserts that the best achievable $\mathcal{O}(\epsilon^{-1})$ iteration complexity (in the squared gradient norm) is attained when the total relative risk associated with the adaptive sample size sequence is finite; and the third result characterizes the corresponding oracle complexity in terms of the total generalized risk associated with the adaptive sample size sequence. We report encouraging numerical results in certain settings. In the second part of this dissertation, we consider the use of RA as an alternative adaptive sampling paradigm to solve smooth stochastic constrained optimization problems in infinite-dimensional Hilbert spaces. RA generates a sequence of subsampled deterministic infinite-dimensional problems that are approximately solved within a dynamic error tolerance, and the bottleneck in RA becomes solving this sequence of problems efficiently. To this end, we propose a progressive subspace expansion (PSE) framework to solve smooth deterministic optimization problems in infinite-dimensional Hilbert spaces with a TR Sequential Quadratic Programming (SQP) solver. The infinite-dimensional optimization problem is discretized, and a sequence of finite-dimensional problems is solved in which the problem dimension is progressively increased. Additionally, (i) we solve this sequence of finite-dimensional problems only to the extent necessary, i.e., we spend just enough computational work to solve each problem within a dynamic error tolerance, and (ii) we use the solution of the current optimization problem as the initial guess for the subsequent problem. We prove two main results for PSE. The first establishes convergence of a subsequence of iterates generated by the PSE TR-SQP algorithm to a first-order critical point. The second characterizes the relationship between the error tolerance and the problem dimension, and provides an oracle complexity result for the total amount of computational work incurred by PSE. This amount of computational work is closely connected to three quantities: the convergence rate of the finite-dimensional spaces to the infinite-dimensional space, the rate of increase of the cost of making oracle calls in finite-dimensional spaces, and the convergence rate of the solution method used. We also report encouraging numerical results on an optimal control problem supporting our theoretical findings.
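As a rough illustration of the adaptive-sampling idea described in this abstract (not ASTRO itself), the sketch below grows the sample size used for a gradient estimate until the estimated standard error is small relative to the estimated gradient norm, so that little sampling is done far from a critical point and more near one. The oracle, the stopping rule, and all constants are assumptions.

    import numpy as np

    def adaptive_gradient_estimate(grad_oracle, x, n0=8, n_max=4096, kappa=1.0, rng=None):
        # Draw noisy gradient observations until an aggregate standard error of the
        # sample-mean gradient is at most kappa times its norm (illustrative rule).
        rng = np.random.default_rng() if rng is None else rng
        samples = [grad_oracle(x, rng) for _ in range(n0)]
        while len(samples) < n_max:
            g_bar = np.mean(samples, axis=0)
            se = np.linalg.norm(np.std(samples, axis=0, ddof=1)) / np.sqrt(len(samples))
            if se <= kappa * np.linalg.norm(g_bar):
                break
            samples.append(grad_oracle(x, rng))
        return np.mean(samples, axis=0), len(samples)

    # Toy noisy oracle for f(x) = 0.5*||x||^2, i.e. grad f(x) = x plus Gaussian noise.
    noisy_grad = lambda x, rng: x + rng.normal(scale=0.5, size=x.shape)

    x = np.ones(5)
    for k in range(50):
        g, n = adaptive_gradient_estimate(noisy_grad, x)
        x = x - 0.2 * g          # plain gradient step; ASTRO would take a trust-region step
    print("final |x|:", np.linalg.norm(x))

In ASTRO the adaptively sampled estimates would feed a trust-region subproblem and acceptance test rather than the plain gradient step used in this toy.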
72

Stochastic Simulation of Multiscale Reaction-Diffusion Models via First Exit Times

Meinecke, Lina January 2016 (has links)
Mathematical models are important tools in systems biology, since the regulatory networks in biological cells are too complicated to understand by biological experiments alone. Analytical solutions can be derived only for the simplest models, and numerical simulations are necessary in most cases to evaluate the models and their properties and to compare them with measured data. This thesis focuses on the mesoscopic simulation level, which captures both space-dependent behavior through diffusion and the inherent stochasticity of cellular systems. Space is partitioned into compartments by a mesh, and the number of molecules of each species in each compartment gives the state of the system. We first examine how to compute the jump coefficients for a discrete stochastic jump process on unstructured meshes from a first exit time approach, guaranteeing the correct speed of diffusion. Furthermore, we analyze different methods leading to non-negative coefficients by backward analysis and derive a new method that minimizes both the error in the diffusion coefficient and the error in the particle distribution. The second part of this thesis investigates macromolecular crowding effects. A high percentage of the cytosol and membranes of cells is occupied by molecules. This impedes diffusive motion and also affects reaction rates. Most algorithms for cell simulations are either derived for a dilute medium or become computationally very expensive when applied to a crowded environment. Therefore, we develop a multiscale approach that takes the microscopic positions of the molecules into account while still allowing for efficient stochastic simulations on the mesoscopic level. Finally, we compare on- and off-lattice models on the microscopic level when applied to a crowded environment.
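A toy illustration of the first-exit-time idea (a single 1-D compartment, not the unstructured-mesh construction developed in the thesis): estimate the mean time for a Brownian particle started at the compartment centre to leave, and take the reciprocal as the total jump rate of the mesoscopic process. The time step, compartment size, and diffusion constant are assumptions.

    import numpy as np

    def mean_first_exit_time(D=1.0, h=0.1, dt=1e-5, n_particles=2000, seed=0):
        # Simulate Brownian motion started at the compartment centre until it
        # leaves the interval (0, h); return the average exit time.
        rng = np.random.default_rng(seed)
        times = np.empty(n_particles)
        for i in range(n_particles):
            x, t = 0.5 * h, 0.0
            while 0.0 < x < h:
                x += np.sqrt(2.0 * D * dt) * rng.standard_normal()
                t += dt
            times[i] = t
        return times.mean()

    tau = mean_first_exit_time()
    total_jump_rate = 1.0 / tau      # rate of leaving the compartment in the jump process
    print(f"mean first exit time {tau:.3e}, total jump rate {total_jump_rate:.3e}")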
73

Simulation and analysis of wind turbine loads for neutrally stable inflow turbulence

Sim, Chungwook August 2009 (has links)
Efficient temporal resolution and spatial grids are important when simulating inflow turbulence for wind turbine load analyses. Few published studies have addressed the space-time resolution of generated inflow velocity fields needed to estimate accurate load statistics. This study investigates extreme and fatigue load statistics for a utility-scale 5 MW wind turbine with a hub height of 90 m and a rotor diameter of 126 m. Load statistics, spectra, and time-frequency analysis representations are compared for various alternative space and time resolutions employed in the simulation of the inflow turbulence field. Conclusions are drawn regarding adequate spatial resolution of the inflow turbulence simulated on the rotor plane prior to extracting turbine load statistics. Similarly, conclusions are drawn with regard to what constitutes adequate temporal filtering to preserve turbine load statistics. This first study employs conventional Fourier-based spectral methods for stochastic simulation of velocity fields for a neutral atmospheric boundary layer. In the second part of this study, large-eddy simulation (LES) is employed with similar resolutions in space and time as in the earlier Fourier-based simulations to again establish turbine load statistics. A comparison of extreme and fatigue load statistics is presented for the two approaches used for inflow field generation. Using LES-generated flows (with their deficient high-frequency energy enhanced by fractal interpolation) to establish turbine load statistics in this manner is computationally very expensive, but it is justified here in order to evaluate the ability of LES to serve as an alternative to more common approaches. LES with fractal interpolation is shown to lead to accurate load statistics when compared with stochastic simulation. A more compelling reason for using LES in turbine load studies is the following: for stable boundary layers, it is not possible to generate realistic inflow velocity fields using stochastic simulation. The present study demonstrates that, despite the computational costs involved, LES-generated inflows can be used for loads analyses for utility-scale turbines. The study sets the stage for future computations in the stable boundary layer, where low-level jets and large speed and direction shears across the rotor can cause large turbine loads; there, LES will likely be the inflow turbulence generator of choice.
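The Fourier-based stochastic simulation mentioned here can be illustrated, in a stripped-down single-point form, by summing cosines with random phases whose amplitudes follow a prescribed one-sided spectrum. The Kaimal-type spectral form, its parameters, and the sampling choices below are assumptions; a real loads analysis would generate a spatially coherent field over the whole rotor plane (for example with a tool such as NREL's TurbSim).

    import numpy as np

    def kaimal_like_spectrum(f, u_star=0.5, z=90.0, U=12.0):
        # One-sided spectrum S_u(f) in m^2/s^2 per Hz (Kaimal-type form, values assumed).
        n = f * z / U
        return u_star**2 * (105.0 * z / U) / (1.0 + 33.0 * n) ** (5.0 / 3.0)

    def synthesize_series(T=600.0, dt=0.05, seed=1):
        # Sum-of-cosines synthesis: amplitude sqrt(2*S(f)*df) and uniform random phases.
        rng = np.random.default_rng(seed)
        t = np.arange(0.0, T, dt)
        df = 1.0 / T
        freqs = np.arange(df, 0.5 / dt, df)
        amps = np.sqrt(2.0 * kaimal_like_spectrum(freqs) * df)
        phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
        u = np.zeros_like(t)
        for A, f0, ph in zip(amps, freqs, phases):
            u += A * np.cos(2.0 * np.pi * f0 * t + ph)
        return t, u

    t, u = synthesize_series()
    print("standard deviation of the simulated fluctuation:", round(u.std(), 3))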
74

Mathematical modelling of oncolytic virotherapy

Shabala, Alexander January 2013 (has links)
This thesis is concerned with mathematical modelling of oncolytic virotherapy: the use of genetically modified viruses to selectively spread, replicate and destroy cancerous cells in solid tumours. Traditional spatially-dependent modelling approaches have assumed that virus spread is due to viral diffusion in solid tumours, and have neglected the time delay introduced by the lytic cycle for viral replication within host cells. A deterministic, age-structured reaction-diffusion model is developed for the spatially-dependent interactions of uninfected cells, infected cells and virus particles, with the spread of virus particles facilitated by infected cell motility and delay. Evidence of travelling wave behaviour is shown, and an asymptotic approximation for the wave speed is derived as a function of key parameters. Next, the same physical assumptions as in the continuum model are used to develop an equivalent discrete, probabilistic model that is valid in the limit of low particle concentrations. This mesoscopic, compartment-based model is then validated against known test cases, and it is shown that the localised nature of infected cell bursts leads to inconsistencies between the discrete and continuum models. The qualitative behaviour of this stochastic model is then analysed for a range of key experimentally-controllable parameters. Two-dimensional simulations of in vivo and in vitro therapies are then analysed to determine the effects of virus burst size, length of lytic cycle, infected cell motility, and initial viral distribution on the wave speed, consistency of results and overall success of therapy. Finally, the experimental difficulty of measuring the effective motility of cells is addressed by considering effective medium approximations of diffusion through heterogeneous tumours. Considering an idealised tumour consisting of periodic obstacles in free space, a two-scale homogenisation technique is used to show the effects of obstacle shape on the effective diffusivity. A novel method for calculating the effective continuum behaviour of random walks on lattices is then developed for the limiting case where microscopic interactions are discrete.
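A minimal numerical sketch of travelling-wave behaviour in a virotherapy-style model (a toy, non-age-structured system with made-up parameters, not the model developed in the thesis): an explicit finite-difference integration of uninfected cells, infected cells and free virus in one dimension, with infected-cell motility driving the spread.

    import numpy as np

    # Toy 1-D model: uninfected cells U, infected cells I, free virus V.
    #   dU/dt = -beta*U*V
    #   dI/dt =  beta*U*V - delta*I + D_I * d2I/dx2   (infected-cell motility)
    #   dV/dt =  b*delta*I - beta*U*V - c*V           (burst size b at lysis)
    nx, L, dt = 500, 100.0, 0.001
    dx = L / nx
    beta, delta, b, c, D_I = 0.5, 1.0, 10.0, 0.1, 0.05

    U, I, V = np.ones(nx), np.zeros(nx), np.zeros(nx)
    V[:10] = 1.0                                   # initial bolus of virus at the left end

    def laplacian(f):
        # Second difference with zero-flux (Neumann) boundaries.
        out = np.empty_like(f)
        out[1:-1] = f[2:] - 2.0 * f[1:-1] + f[:-2]
        out[0] = 2.0 * (f[1] - f[0])
        out[-1] = 2.0 * (f[-2] - f[-1])
        return out / dx**2

    for _ in range(int(40.0 / dt)):                # integrate to t = 40
        infection = beta * U * V
        U, I, V = (U + dt * (-infection),
                   I + dt * (infection - delta * I + D_I * laplacian(I)),
                   V + dt * (b * delta * I - infection - c * V))

    depleted = np.flatnonzero(U < 0.5)
    print("infection front near x =", depleted.max() * dx if depleted.size else "no front yet")

Tracking the front position over time would give an estimate of the wave speed, which is the quantity approximated asymptotically in the thesis.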
75

Accelerating Finite State Projection through General Purpose Graphics Processing

Trimeloni, Thomas 07 April 2011 (has links)
The finite state projection algorithm provides modelers a new way of directly solving the chemical master equation. The algorithm utilizes the matrix exponential function, and so the algorithm’s performance suffers when it is applied to large problems. Other work has been done to reduce the size of the exponentiation through mathematical simplifications, but efficiently exponentiating a large matrix has not been explored. This work explores implementing the finite state projection algorithm on several different high-performance computing platforms as a means of efficiently calculating the matrix exponential function for large systems. This work finds that general purpose graphics processing can accelerate the finite state projection algorithm by several orders of magnitude. Specific biological models and modeling techniques are discussed as a demonstration of the algorithm implemented on a general purpose graphics processor. The results of this work show that general purpose graphics processing will be a key factor in modeling more complex biological systems.
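As a CPU-only toy of the finite state projection idea (not the GPU implementation explored in this work), the sketch below truncates the chemical master equation of an assumed birth-death model to a finite state space and propagates the probability vector with a sparse matrix exponential; the rate constants and truncation size are made up.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import expm_multiply

    # Birth-death model: production at rate k, degradation at rate gamma*n,
    # truncated (projected) onto the finite state space n = 0..N.
    k, gamma, N = 10.0, 1.0, 100
    n = np.arange(N + 1)

    birth = np.full(N, k)            # transitions n -> n+1
    death = gamma * n[1:]            # transitions n -> n-1
    # CME generator A for dp/dt = A p; the full outflow -(k + gamma*n) is kept on the
    # diagonal, so 1 - sum(p) bounds the truncation error of the projection.
    A = diags([birth, -(k + gamma * n), death], offsets=[-1, 0, 1], format="csc")

    p0 = np.zeros(N + 1)
    p0[0] = 1.0                      # start with zero molecules

    p_t = expm_multiply(A * 5.0, p0) # probability distribution at t = 5
    print("probability mass retained by the projection:", p_t.sum())
    print("mean copy number at t = 5:", (n * p_t).sum(), "(stationary mean:", k / gamma, ")")

For realistic multi-species models the truncated state space, and hence this matrix, grows combinatorially, which is why the exponentiation step dominates the cost and is the natural target for acceleration.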
76

The Eukaryotic Chromatin Computer

Arnold, Christian 01 November 2016 (has links) (PDF)
Eukaryotic genomes are typically organized as chromatin, the complex of DNA and proteins that forms chromosomes within the cell's nucleus. Chromatin plays pivotal roles in a multitude of functions, most of which are carried out by a complex system of covalent chemical modifications of histone proteins. The propagation of patterns of these histone post-translational modifications across cell divisions is particularly important for maintenance of the cell state in general and the transcriptional program in particular. The discovery of epigenetic inheritance phenomena - mitotically and/or meiotically heritable changes in gene function resulting from changes in a chromosome without alterations in the DNA sequence - was remarkable because it disproved the assumption that information is passed to daughter cells exclusively through DNA. However, DNA replication constitutes a dramatic disruption of the chromatin state that effectively amounts to partial erasure of stored information. To preserve its epigenetic state, the cell reconstructs (at least part of) the histone post-translational modifications by means of processes that are still very poorly understood. A plausible hypothesis is that the different combinations of reader and writer domains in histone-modifying enzymes implement local rewriting rules that are capable of "recomputing" the desired parental patterns of histone post-translational modifications on the basis of the partial information contained in the half of the nucleosomes that predates replication. It is becoming increasingly clear that both information processing and computation are omnipresent and of fundamental importance in many fields of the natural sciences, and in the cell in particular. The latter is exemplified by the increasingly popular research areas that focus on computing with DNA and membranes. Recent work suggests that during evolution, chromatin has been converted into a powerful cellular memory device capable of storing and processing large amounts of information. Eukaryotic chromatin may therefore also act as a cellular computational device capable of performing actual computations in a biological context. A recent theoretical study indeed demonstrated that even relatively simple models of chromatin computation are computationally universal and hence conceptually more powerful than gene regulatory networks. In the first part of this thesis, I establish a deeper understanding of the computational capacities and limits of chromatin, which have remained largely unexplored. I analyze selected biological building blocks of the chromatin computer and compare them to the system components of general-purpose computers, focusing particularly on memory and on logical and arithmetical operations. I argue that the chromatin computer has a massively parallel architecture, a set of read-write rules that operate non-deterministically on chromatin, the capability of self-modification, and more generally striking analogies to amorphous computing. I therefore propose a cellular automata-like 1-D string as its computational paradigm, on which sets of local rewriting rules are applied asynchronously with time-dependent probabilities. Its mode of operation is therefore conceptually similar to well-known concepts from complex systems theory. Furthermore, the chromatin computer provides volatile memory with a massive information content that can be exploited by the cell.
I estimate that its memory size lies in the realm of several hundred megabytes of writable information per cell, a value that I compare with DNA itself and with cis-regulatory modules. I furthermore show that it has the potential to perform computations not only in a biological context but also in a strict informatics sense. At least theoretically, it may therefore be used to calculate any computable function or, more generally, any algorithm. Chromatin is therefore another representative of the growing number of non-standard computing examples. As an example of a biological challenge that may be solved by the "chromatin computer", I formulate epigenetic inheritance as a computational problem and develop a flexible stochastic simulation system for the study of recomputation-based epigenetic inheritance of individual histone post-translational modifications. The implementation uses Gillespie's stochastic simulation algorithm to simulate exactly the time evolution of the chemical master equation of the underlying stochastic process. Furthermore, it is efficient enough to be used within an evolutionary algorithm to find a system of enzymes that can stably maintain a particular chromatin state across multiple cell divisions. I find that it is easy to evolve such a system of enzymes even without explicit boundary elements separating differentially modified chromatin domains. However, the success of this task depends on several previously unanticipated factors, such as the length of the initial state, the specific pattern that should be maintained, the time between replications, and various chemical parameters. All these factors also influence the accumulation of errors in the wake of cell divisions. Chromatin-regulatory processes and epigenetic (inheritance) mechanisms constitute an intricate and sensitive system, and any misregulation may contribute significantly to various diseases such as Alzheimer's disease. Intriguingly, the role of epigenetics and chromatin-based processes, as well as non-coding RNAs, in the etiology of Alzheimer's disease is increasingly being recognized. In the second part of this thesis, I explicitly and systematically address the two hypotheses that (i) a dysregulated chromatin computer plays important roles in Alzheimer's disease and (ii) Alzheimer's disease may be considered an evolutionarily young disease. In summary, I found support for both hypotheses, although for hypothesis (i) it is very difficult to establish causality due to the complexity of the disease. However, I identify numerous chromatin-associated, differentially expressed loci for histone proteins, chromatin-modifying enzymes or integral parts thereof, non-coding RNAs with guiding functions for chromatin-modifying complexes, and proteins that directly or indirectly influence epigenetic stability (e.g., by altering cell cycle regulation and therefore potentially also the stability of epigenetic states). For the identification of differentially expressed loci in Alzheimer's disease, I use a custom expression microarray that was constructed with a novel bioinformatics pipeline. Despite the emergence of more advanced high-throughput methods such as RNA-seq, microarrays still offer some advantages and will remain a useful and accurate tool for transcriptome profiling and expression studies.
However, it is non-trivial to establish an appropriate probe design strategy for custom expression microarrays, because alternative splicing and transcription from non-coding regions are much more pervasive than previously appreciated. To obtain an accurate and complete expression atlas of genomic loci of interest in the post-ENCODE era, this additional transcriptional complexity must be considered during microarray design, and it requires well-considered probe design strategies that are often neglected. This encompasses, for example, adequate preparation of the set of target sequences and accurate estimation of probe specificity. With the help of this pipeline, two custom-tailored microarrays have been constructed that include a comprehensive collection of non-coding RNAs. Additionally, a user-friendly web server has been set up that makes the developed pipeline publicly available to other researchers.
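To make the simulation machinery concrete, here is a toy Gillespie run (assumed rates and a much-simplified two-state nucleosome model, not the thesis implementation) in which a modification spreads along a ring of nucleosomes through a neighbour-recruitment rule and is lost at a constant rate.

    import numpy as np

    def gillespie_chromatin(n_sites=60, t_end=200.0, k_spread=1.0,
                            k_random=0.02, k_loss=0.1, seed=0):
        # state[i] = 1 if nucleosome i carries the modification, else 0 (ring topology).
        rng = np.random.default_rng(seed)
        state = np.zeros(n_sites, dtype=int)
        state[n_sites // 2] = 1                      # nucleate a single modified site
        t = 0.0
        while t < t_end:
            modified_neighbours = np.roll(state, 1) + np.roll(state, -1)
            gain = (state == 0) * (k_random + k_spread * modified_neighbours)
            loss = (state == 1) * k_loss
            rates = gain + loss
            total = rates.sum()
            t += rng.exponential(1.0 / total)        # exponential waiting time (SSA)
            site = rng.choice(n_sites, p=rates / total)
            state[site] ^= 1                         # apply the chosen modification event
        return state

    final_state = gillespie_chromatin()
    print("fraction of modified nucleosomes at t_end:", final_state.mean())

Replication could be mimicked by randomly erasing the modification on roughly half of the sites at regular intervals, which is the kind of perturbation that the recomputation studied in the thesis must recover from.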
77

Introduction de pièces déformables dans l’analyse de tolérances géométriques de mécanismes hyperstatiques / Introduction of flexible parts in tolerance analysis of over-constrained mechanisms

Gouyou, Doriane 04 December 2018 (has links)
Over-constrained mechanisms are often used in industry to ensure good mechanical strength and good robustness to the manufacturing deviations of parts. The tolerance analysis of such assemblies is, however, difficult to implement. Depending on the geometrical deviations of the parts, an over-constrained mechanism may either exhibit assembly interferences or be assembled with clearance. In this work, we used the polytope method to detect assembly interferences: for each assembly, the resulting polytope of the mechanism is computed. If it is non-empty, the assembly can be performed without interference; if it is empty, the assembly presents interferences. Depending on the outcome, two different analyses are carried out. For an assembly without interference, the resulting polytope makes it possible to check its compliance with the functional requirement directly. For an assembly with interferences, an analysis taking the stiffness of the parts into account is performed. This approach is based on model reduction with super-elements and quickly determines the equilibrium state of the system after assembly. An assembly load is then estimated from these results to conclude on the feasibility of the assembly. If the assembly is declared feasible, the propagation of deformation through the parts is characterized to check the compliance of the system with the functional requirement. The short computation time makes it possible to perform statistical tolerance analyses by Monte Carlo sampling in order to estimate the probabilities of assembly and of compliance with the functional requirement.
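The Monte Carlo tolerance analysis mentioned in the last sentence can be illustrated on a trivial one-dimensional stack-up: sample part deviations, evaluate the functional gap, and count the fraction of compliant assemblies. The dimensions, tolerances, and requirement below are assumptions and involve none of the polytope or stiffness computations of the thesis.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000                                   # number of simulated assemblies

    # Three part lengths (mm), each with an assumed normal manufacturing deviation.
    L1 = rng.normal(20.0, 0.02, n)
    L2 = rng.normal(35.0, 0.03, n)
    L3 = rng.normal(45.2, 0.03, n)

    # Functional requirement (assumed): gap L3 - (L1 + L2) must lie within [0.05, 0.35] mm.
    gap = L3 - (L1 + L2)
    compliant = (gap >= 0.05) & (gap <= 0.35)

    print(f"mean gap = {gap.mean():.3f} mm, std = {gap.std():.3f} mm")
    print(f"estimated rate of compliant assemblies = {compliant.mean():.4f}")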
78

Compositional and kinetic modeling of bio-oil from fast pyrolysis from lignocellulosic biomass / Modélisation compositionnelle et cinétique des bio-huiles de pyrolyse rapide issues de la biomasse lignocellulosique

Costa da Cruz, Ana Rita 25 January 2019 (has links)
Fast pyrolysis is one of the thermochemical conversion routes that enable the transformation of solid lignocellulosic biomass into liquid bio-oils. These complex mixtures are different from oil fractions and cannot be directly integrated into existing petroleum upgrading facilities. Indeed, because of their high levels of oxygen compounds, bio-oils require a dedicated pre-refining step, such as hydrotreating, to remove these components. The aim of the present work is to understand the structure, composition and reactivity of bio-oil compounds through modeling of experimental data. To understand the structure and composition, molecular reconstruction techniques based on analytical data were applied, generating a synthetic mixture whose properties are consistent with the mixture properties. To understand the reactivity, the hydrotreating of two model molecules was studied: guaiacol and furfural. A deterministic and a stochastic model were created for each compound. The deterministic approach was intended to retrieve a range of kinetic parameters, which were later refined by the stochastic simulation approach into a new model. This approach generates a reaction network by defining and using a limited number of reaction classes and reaction rules. To consolidate the work, the synthetic mixture was used in the stochastic simulation of the hydrotreating of bio-oils, supported by the kinetics of the model compounds. In sum, the present work was able to recreate the light fraction of bio-oil and to simulate the hydrotreating of bio-oils via the kinetic parameters of the model compounds, which reasonably predict the effluents of the hydrotreating of those compounds but are unsuitable for the bio-oil itself.
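As a small illustration of the deterministic kinetic-modeling side (a toy lumped scheme, not the network or parameters of the thesis), the sketch below integrates an assumed first-order reaction sequence for guaiacol hydrodeoxygenation, guaiacol -> catechol -> phenol -> benzene; all rate constants are made up for demonstration.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Assumed first-order rate constants (1/h) for a lumped pathway:
    # guaiacol -> catechol -> phenol -> benzene
    k1, k2, k3 = 1.2, 0.8, 0.3

    def rhs(t, c):
        g, cat, ph, bz = c
        return [-k1 * g,
                k1 * g - k2 * cat,
                k2 * cat - k3 * ph,
                k3 * ph]

    sol = solve_ivp(rhs, (0.0, 8.0), [1.0, 0.0, 0.0, 0.0], dense_output=True)
    for name, c in zip(["guaiacol", "catechol", "phenol", "benzene"], sol.y[:, -1]):
        print(f"{name:9s} mole fraction at t = 8 h: {c:.3f}")

Fitting such rate constants to experimental effluent compositions is the deterministic step the abstract refers to; the stochastic approach instead generates the reaction network from reaction classes and rules.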
79

Análise geoestatística multi-pontos / Analysis of multiple-point geostatistics

Cruz Rodriguez, Joan Neylo da 12 June 2013 (has links)
Estimation and simulation based on two-point statistics have been used in geostatistical analysis since the 1960s. These methods depend on the spatial correlation model derived from the well-known semivariogram function. However, the semivariogram function cannot describe the geological heterogeneity found in mineral deposits and oil reservoirs. Thus, instead of using two-point statistics, multiple-point geostatistics, based on probability distributions of multiple points, has been considered a reliable alternative for describing geological heterogeneity. In this thesis, the multiple-point algorithm is revisited and a new solution is proposed. This solution is much better than the former one because it avoids using marginal probabilities when a never-occurring event is found in a template. Moreover, for each realization the uncertainty zone is highlighted. A synthetic database was generated and used as the training image. From this exhaustive data set, a sample with 25 points was drawn. Results show that the proposed approach provides more reliable realizations with smaller uncertainty zones.
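A compact sketch of the multiple-point idea discussed above (not the algorithm proposed in the thesis): scan a binary training image with a small template, count how often each neighbourhood pattern surrounds a 0 or a 1, and return the conditional probability of the central value for a given data event, falling back to the marginal probability when the event never occurs in the training image. The training image and template are assumptions.

    import numpy as np
    from collections import defaultdict

    # Tiny binary training image (channel-like stripes) used only for illustration.
    ti = np.zeros((60, 60), dtype=int)
    ti[:, 10:14] = 1
    ti[:, 30:36] = 1
    ti[20:24, :] = 1

    template = [(-1, 0), (1, 0), (0, -1), (0, 1)]      # 4 nearest neighbours

    # Count, for every neighbourhood pattern, how often the centre is 0 or 1.
    counts = defaultdict(lambda: np.zeros(2))
    for i in range(1, ti.shape[0] - 1):
        for j in range(1, ti.shape[1] - 1):
            event = tuple(ti[i + di, j + dj] for di, dj in template)
            counts[event][ti[i, j]] += 1

    marginal = ti.mean()                               # global proportion of 1s

    def prob_center_is_one(event):
        c = counts.get(event)
        if c is None or c.sum() == 0:                  # never-occurring data event
            return marginal                            # fallback to the marginal
        return c[1] / c.sum()

    print("P(center=1 | all four neighbours are 1) =", prob_center_is_one((1, 1, 1, 1)))
    print("P(center=1 | all four neighbours are 0) =", prob_center_is_one((0, 0, 0, 0)))

The marginal fallback shown here is exactly the behaviour the thesis seeks to avoid for never-occurring events, which motivates its alternative solution.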
80

Análise de cheias anuais segundo distribuição generalizada / Analysis of annual floods by generalized distribution

Queiroz, Manoel Moisés Ferreira de 02 July 2002 (has links)
Flood frequency analysis using the generalized extreme value (GEV) probability distribution has become increasingly common in recent years. High flood quantiles are commonly estimated by extrapolating the fitted model, represented by one of the three forms of the GEV distribution, to return periods much greater than the period of observation. Hydrologic events occur in nature with finite values, so that their maxima follow the asymptotic form of the limited GEV distribution. This work studies the identifiability of the GEV distribution by LH-moments, using annual flood series of different characteristics and lengths obtained from daily flow series generated in various ways. First, stochastic sequences of daily flows were obtained from the limited distribution underlying the limited GEV distribution. The results of the LH-moment parameter estimation show that fitting the GEV distribution to annual flood samples of fewer than 100 values may indicate any form of extreme value distribution, and not just the limited form as one would expect. There was also great uncertainty in the parameters estimated for 50 series generated from the same distribution. Fitting the GEV distribution to annual flood series obtained from daily flow series generated by four stochastic models available in the literature, calibrated to data from the Paraná and dos Patos rivers, indicated the Gumbel form. A daily flow generation model is proposed that simulates high-flow pulses using the limited distribution. Fitted to the daily flows of the Paraná river, the new model reproduced the daily, monthly and annual statistics as well as the extreme values of the historical series. Furthermore, the annual flood series of long duration was adequately described by the limited form of the GEV distribution.
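As a quick illustration of GEV-based flood frequency analysis (maximum-likelihood fitting with scipy rather than the LH-moment estimation studied in the thesis), the sketch below fits a GEV distribution to a synthetic annual-maximum series and reads off a 100-year return level; the synthetic data and parameter values are assumptions.

    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(7)

    # Synthetic "annual maximum flow" sample (m^3/s); in practice this would be
    # the observed annual flood series.
    annual_max = genextreme.rvs(c=0.1, loc=1000.0, scale=300.0, size=60, random_state=rng)

    # Maximum-likelihood fit (scipy's shape c is the negative of the usual GEV shape xi).
    c_hat, loc_hat, scale_hat = genextreme.fit(annual_max)

    # Return level for return period T: the (1 - 1/T) quantile of the fitted GEV.
    T = 100.0
    q100 = genextreme.ppf(1.0 - 1.0 / T, c_hat, loc=loc_hat, scale=scale_hat)
    print(f"fitted shape c = {c_hat:.3f}, location = {loc_hat:.1f}, scale = {scale_hat:.1f}")
    print(f"estimated {int(T)}-year flood: {q100:.0f} m^3/s")

A positive fitted c in scipy's convention corresponds to an upper-bounded (limited) tail, c = 0 to the Gumbel form, and a negative c to a heavy upper tail.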
