81

Contribution à la quantification des programmes de maintenance complexes / Contribution to complex maintenance program quantification

Ruin, Thomas 09 December 2013 (has links)
Faced with the new legislative and environmental contexts in which they must operate, today's industrial systems are constrained not only by conventional requirements of productivity and cost, but also by emergent requirements concerning safety, security, and sustainability. To meet these requirements and better control the performance of its systems, the French energy supplier EDF wants to evolve its reliability-centered maintenance (RCM) approach into a new method. Among other things, this method requires a tool-supported approach for the a priori quantification of complex maintenance programs (Complex Maintenance Program Quantification, CMPQ) against expected Key Performance Indicators (KPIs). This tooling is the subject of this Ph.D. thesis, funded within the GIS 3SGS - DEPRADEM 2 project. After generalizing EDF's needs with regard to CMPQ, we identify the generic knowledge required to assess the KPIs. To arrive at a tool that automates CMPQ studies, this generic knowledge is then modeled with two complementary languages: the semi-formal language SysML, which captures static, interactional, and behavioral knowledge through its various diagrams; and the AltaRicaDF (ADF) language, which supports a dynamic, executable model for assessing the KPIs by stochastic simulation. The dynamic model is built from the SysML diagrams through a mapping between the concepts of interest of the two languages. The overall approach is validated on the CMPQ of a case study provided by EDF: the ARE system.
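As a much simpler illustration of the kind of KPI assessment the abstract describes (stochastic simulation of a maintenance program against a performance indicator), the sketch below estimates the mean availability of a single component under corrective repair and periodic preventive maintenance by Monte Carlo. It is not EDF's tool or the AltaRicaDF model; all names and parameter values are hypothetical.

```python
import random

def simulate_availability(mttf, repair_time, pm_interval, horizon, n_runs=10_000):
    """Monte Carlo estimate of mean availability for one component under
    corrective repair plus (instantaneous, perfect) periodic preventive
    maintenance -- a toy stand-in for a KPI assessed by stochastic simulation."""
    total_up = 0.0
    for _ in range(n_runs):
        t = up = 0.0
        while t < horizon:
            ttf = random.expovariate(1.0 / mttf)  # exponential time to failure
            if ttf < pm_interval:                 # failure before the next PM
                up += min(ttf, horizon - t)
                t += ttf + repair_time            # downtime = corrective repair
            else:                                 # PM renews the component first
                up += min(pm_interval, horizon - t)
                t += pm_interval
        total_up += up
    return total_up / (n_runs * horizon)

# Hypothetical parameters: hours, one-year horizon.
print(simulate_availability(mttf=500.0, repair_time=24.0,
                            pm_interval=200.0, horizon=8760.0))
```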
82

Stochastic simulation and analysis of biochemical networks

Pahle, Jürgen 27 June 2008 (has links)
Stochastic effects can significantly affect the functioning of biochemical networks. Signaling pathways, such as calcium signal transduction, are particularly prone to random fluctuations, which raises the important question of how information transfer in these pathways is affected. First, a comprehensive overview and systematic classification of stochastic simulation methods is given as the methodological basis of the thesis. The focus is on approximate and hybrid approaches, including the hybrid solver of the software system Copasi, whose implementation was part of this Ph.D. work. In most cases, the dynamic behavior of biochemical systems shows a transition from stochastic to deterministic behavior with increasing particle numbers. This transition is studied in calcium signaling as well as in other test systems. It turns out that the onset of stochastic effects depends strongly on the sensitivity of the specific system, quantified by its divergence: systems with high divergence show stochastic effects even at high particle numbers, and vice versa. Finally, the influence of noise on the performance of signaling pathways is investigated. Simulated and experimentally measured calcium time series are stochastically coupled to an intracellular target-enzyme activation process, and the information transfer under different cellular conditions is estimated with the information-theoretic quantity transfer entropy. The amount of information that can be transferred increases with rising particle numbers, but this increase depends strongly on the current dynamical mode of the system, such as spiking, bursting, or irregular oscillations. The methods developed in this thesis, such as the use of divergence as an indicator for the stochastic-to-deterministic transition, or the stochastic coupling and information-theoretic analysis using transfer entropy, are valuable tools for the analysis of biochemical systems.
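For reference, the exact method on which the approximate and hybrid schemes discussed above are built is Gillespie's direct method. A minimal sketch, applied to a toy reversible dimerization system with invented rate constants:

```python
import random

def gillespie_direct(x, stoich, propensity, t_end):
    """Gillespie's direct method (exact SSA).
    x: dict of species counts; stoich: list of count-change dicts;
    propensity: list of functions x -> rate. Returns the trajectory."""
    t, traj = 0.0, [(0.0, dict(x))]
    while t < t_end:
        a = [f(x) for f in propensity]
        a0 = sum(a)
        if a0 == 0.0:                  # no reaction can fire anymore
            break
        t += random.expovariate(a0)    # time to next reaction ~ Exp(a0)
        r, acc = random.uniform(0.0, a0), 0.0
        for j, aj in enumerate(a):     # pick reaction j with prob a_j / a0
            acc += aj
            if r < acc:
                for species, change in stoich[j].items():
                    x[species] += change
                break
        traj.append((t, dict(x)))
    return traj

# Toy reversible dimerization: 2A -> B (rate c1), B -> 2A (rate c2).
c1, c2 = 0.002, 0.5
traj = gillespie_direct(
    {"A": 1000, "B": 0},
    stoich=[{"A": -2, "B": +1}, {"A": +2, "B": -1}],
    propensity=[lambda x: c1 * x["A"] * (x["A"] - 1) / 2,
                lambda x: c2 * x["B"]],
    t_end=10.0,
)
print(traj[-1])
```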
83

Reverse-time modeling of channelized meandering systems from geological observations / Modélisation rétro-chronologique de systèmes chenalisés méandriformes à partir d’observations géologiques

Parquer, Marion 05 April 2018 (has links)
Meandering systems constitute the majority of aerial and submarine rivers, which shape landscapes by their temporal and spatial evolution. Witnesses of this evolution populate the plains these channels cross: lateral accretion (point) bars recording channel migration, meanders abandoned by natural shortening of the channel path, and entire channels abandoned when the main flow direction changes by avulsion. Once buried, channelized systems are good candidates for natural resource storage thanks to the diversity and volume of the resulting deposits, so understanding the internal architecture of facies is crucial for exploitation. Satellite and LIDAR imaging permit the study of present-day systems, while subsurface architecture can be imaged globally by seismic images or GPR, and locally by wells. These techniques give a good picture of the last state of the channelized system; earlier states can only be observed piecewise, where they were spared by erosion. Indeed, the reworking of the channel belt by lateral and downstream migration makes it difficult to recover the geometry and chronology of earlier deposits. This thesis proposes a simulation method for channelized systems that conditions to the available information. The seismic image often reveals the last state of the system and the abandoned meanders, whose muddy post-abandonment filling contrasts with the channel belt; lateral accretion bars may also be observed, recording meander paleo-migration directions; sometimes well data inform on the facies (e.g., muddy, sandy). The method presented here starts from the last channel path observed on the seismic image and goes back in time by reverse migration to reconstruct earlier channel paths. This stochastic migration model is inspired by the analysis of historical Mississippi maps. Following a simulated chronology based on spatial and statistical criteria (e.g., distance and orientation to the current channel, abandonment probability distribution), abandoned meanders are integrated into the main channel path at the relevant time step. Meander loops erased by erosion are compensated by simulating additional meanders inside the meander belt; this stochastic simulation honors the geometrical criteria observed on the abandoned meanders spared by erosion, as well as statistical criteria observed on sedimentary analogs such as the Mississippi River (e.g., erosion probability distribution). A key perspective is conditioning to well data through the simulation of abandoned meanders at well locations. The approach has been applied to several 2D satellite and seismic case studies.
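A toy sketch of the forward building block behind such models: a curvature-driven migration step for a 2D channel centerline, which can be run with a negative time step to mimic, very crudely, the reverse-time idea. This is not the thesis's calibrated model; the migration law, coefficients, and noise term are all invented for illustration.

```python
import numpy as np

def curvature(x, y):
    """Signed curvature of a 2D polyline (finite differences)."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5

def migrate(x, y, dt, k_mig=1.0, noise=0.0, rng=None):
    """One migration step: each centerline point moves along the local
    normal at a rate proportional to curvature (a toy bend-growth law).
    A negative dt crudely 'rewinds' the migration; the optional noise
    term makes the step stochastic."""
    rng = rng or np.random.default_rng()
    k = curvature(x, y)
    dx, dy = np.gradient(x), np.gradient(y)
    n = np.hypot(dx, dy)
    nx, ny = -dy / n, dx / n                      # unit normals
    rate = k_mig * k + noise * rng.standard_normal(len(x))
    return x + dt * rate * nx, y + dt * rate * ny

# Start from a gentle sine-shaped channel and rewind a few steps.
s = np.linspace(0.0, 10.0, 400)
x, y = s, 0.5 * np.sin(2.0 * np.pi * s / 5.0)
for _ in range(20):
    x, y = migrate(x, y, dt=-0.01, noise=0.05)
```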
84

Análise geoestatística multi-pontos / Analysis of multiple-point geostatistics

Joan Neylo da Cruz Rodriguez 12 June 2013 (has links)
Estimation and simulation based on two-point statistics have been used in geostatistical analysis since the 1960s. These methods depend on a spatial correlation model derived from the well-known semivariogram function. However, the semivariogram cannot describe the geological heterogeneity found in mineral deposits and oil reservoirs. Thus, instead of two-point statistics, multiple-point geostatistics, based on probability distributions of multiple points, has been considered a reliable alternative for describing geological heterogeneity. In this thesis, the multiple-point algorithm is revisited and a new solution is proposed. This solution improves on the original because it avoids falling back to marginal probabilities when a never-occurring event is found in a template. Moreover, for each realization the uncertainty zone is highlighted. A synthetic data set was generated and used as a training image; from this exhaustive data set, a sample of 25 points was drawn. Results show that the proposed approach provides more reliable realizations with smaller uncertainty zones.
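A naive sketch of the core multiple-point step the abstract refers to: scanning a categorical training image with a data event and counting matching occurrences to estimate the conditional facies distribution at an unsimulated node. The fallback to marginal probabilities when the event never occurs is shown explicitly, since that is precisely the behavior the thesis improves on; the training image and template here are toy examples.

```python
import numpy as np

def mps_conditional(ti, data_event, n_facies):
    """Empirical conditional facies distribution at an unsimulated node:
    scan a 2D categorical training image `ti` and count occurrences of the
    data event, a dict mapping template offsets (di, dj) -- relative to
    the scanned node -- to known facies codes."""
    counts = np.zeros(n_facies)
    nrows, ncols = ti.shape
    for i in range(nrows):
        for j in range(ncols):
            ok = True
            for (di, dj), code in data_event.items():
                ii, jj = i + di, j + dj
                if not (0 <= ii < nrows and 0 <= jj < ncols) or ti[ii, jj] != code:
                    ok = False
                    break
            if ok:
                counts[ti[i, j]] += 1
    if counts.sum() == 0:
        # Never-occurring data event: the naive fallback is the marginal
        # distribution -- exactly the weakness the thesis addresses.
        return np.bincount(ti.ravel(), minlength=n_facies) / ti.size
    return counts / counts.sum()

rng = np.random.default_rng(0)
ti = (rng.random((60, 60)) < 0.3).astype(int)   # toy binary training image
print(mps_conditional(ti, {(0, 1): 1, (1, 0): 0}, n_facies=2))
```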
85

Análise de cheias anuais segundo distribuição generalizada / Analysis of annual floods by generalized distribution

Manoel Moisés Ferreira de Queiroz 02 July 2002 (has links)
Flood frequency analysis using the generalized extreme value (GEV) probability distribution has grown in recent years. High flood quantiles are commonly estimated by extrapolating the fit, represented by one of the three inverse forms of the GEV distribution, to return periods far beyond the period of observation. Hydrologic events occur in nature with finite values, so their maxima follow the asymptotic form of the bounded GEV distribution. This work studies the identifiability of the GEV distribution by LH-moments, using annual flood series of different characteristics and lengths obtained from daily flow series generated in various ways. First, stochastic sequences of daily flows were generated from the bounded distribution underlying the bounded GEV distribution. The LH-moment parameter estimates show that fitting a GEV distribution to annual flood samples of fewer than 100 values may indicate any form of extreme value distribution, and not just the bounded form, as one would expect. There was also great uncertainty in the parameters estimated from 50 series generated from the same distribution. Fitting the GEV distribution to annual flood series, obtained from daily flow series generated by four stochastic models available in the literature and calibrated to data from the Paraná and dos Patos rivers, resulted in the Gumbel form. A daily flow generation model is proposed that simulates high-flow pulses using the bounded distribution. Fitted to the daily flows of the Paraná River, the new model reproduced the daily, monthly, and annual statistics as well as the extreme values of the historical record. Furthermore, the long-duration annual flood series was adequately described by the bounded form of the GEV distribution.
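As a worked illustration of the estimation machinery involved, the sketch below fits a GEV distribution by ordinary L-moments (the LH-moments special case with shift η = 0), using Hosking's standard rational approximation for the shape parameter, and then extrapolates a 100-year flood quantile. The synthetic record stands in for an annual flood series; this is not the thesis's data or its full LH-moment procedure.

```python
import numpy as np
from math import gamma, log

def sample_lmoments(data):
    """First three sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2                 # l3/l2 = L-skewness t3

def gev_fit_lmom(data):
    """GEV parameters (location xi, scale alpha, shape k; in Hosking's
    convention k > 0 means an upper-bounded distribution), estimated from
    L-moments with Hosking's rational approximation for k."""
    l1, l2, t3 = sample_lmoments(data)
    z = 2.0 / (3.0 + t3) - log(2) / log(3)
    k = 7.8590 * z + 2.9554 * z * z
    alpha = l2 * k / ((1 - 2.0 ** (-k)) * gamma(1 + k))
    xi = l1 - alpha * (1 - gamma(1 + k)) / k
    return xi, alpha, k

def gev_quantile(xi, alpha, k, T):
    """Flood quantile for return period T (years): F = 1 - 1/T."""
    F = 1.0 - 1.0 / T
    return xi + alpha * (1.0 - (-log(F)) ** k) / k

rng = np.random.default_rng(1)
floods = 1000 + 300 * rng.gumbel(size=80)   # synthetic 80-year flood record
xi, alpha, k = gev_fit_lmom(floods)
print(xi, alpha, k, gev_quantile(xi, alpha, k, T=100))
```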
86

Hybrid modeling and analysis of multiscale biochemical reaction networks

Wu, Jialiang 23 December 2011 (has links)
This dissertation addresses the development of integrative modeling strategies capable of combining deterministic and stochastic, discrete and continuous, as well as multiscale features. The first set of studies combines the purely deterministic modeling methodology of Biochemical Systems Theory (BST) with a hybrid approach using Functional Petri Nets, which permits accounting for discrete features or events, stochasticity, and different types of delays. The efficiency and significance of this combination is demonstrated with several examples, including generic biochemical networks with feedback controls, gene regulatory modules, and dopamine-based neuronal signal transduction. A study expanding the use of stochasticity toward systems with small numbers of molecules proposes a rather general strategy for converting a deterministic process model into a corresponding stochastic model. The strategy characterizes the mathematical connection between a stochastic framework and its deterministic analog. The deterministic framework is assumed to be a generalized mass-action system, and the stochastic analogue is in the format of the chemical master equation. The analysis identifies situations where internal noise affecting the system needs to be taken into account for a valid conversion from a deterministic to a stochastic model. The conversion procedure is illustrated with several representative examples, including elemental reactions, Michaelis-Menten enzyme kinetics, a genetic regulatory motif, and stochastic focusing. The last study establishes two novel particle-based methods to simulate biochemical diffusion-reaction systems within crowded environments. These simulation methods effectively simulate and quantify crowding effects, including reduced reaction volumes, reduced diffusion rates, and reduced accessibility between potentially reacting particles. The proposed methods account for fractal-like kinetics, where the reaction rate depends on the local concentrations of the molecules undergoing the reaction. Rooted in an agent-based modeling framework, this aspect of the methods offers the capacity to address sophisticated intracellular spatial effects, such as macromolecular crowding, active transport along cytoskeleton structures, and reactions on heterogeneous surfaces as well as in porous media. Taken together, the work in this dissertation successfully developed theories and simulation methods that extend the deterministic, continuous framework of Biochemical Systems Theory to account for delays, stochasticity, discrete features or events, and spatial effects in the modeling of biological systems, which are hybrid and multiscale by nature.
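The deterministic-to-stochastic conversion mentioned above can be illustrated, in its textbook special case, by converting mass-action rate constants (molar units) into propensity constants of the chemical master equation for a given reaction volume. The sketch below shows only these standard conversions, not the dissertation's general procedure for generalized mass-action systems:

```python
AVOGADRO = 6.02214076e23

def stochastic_rate(k_det, order, volume, identical=False):
    """Convert a deterministic mass-action rate constant (molar units)
    into a stochastic propensity constant c for reaction volume V (litres).
    order: total reaction order; identical: True for 2A -> ... reactions,
    whose propensity uses nA*(nA-1)/2 and so needs an extra factor of 2."""
    omega = AVOGADRO * volume            # particles per molar concentration unit
    c = k_det * omega ** (1 - order)
    return 2.0 * c if identical else c

# First order, A -> B: c = k (units 1/s, unchanged).
print(stochastic_rate(0.1, order=1, volume=1e-15))
# Second order, A + B -> C: c = k / (NA * V).
print(stochastic_rate(1e6, order=2, volume=1e-15))
# Dimerization, 2A -> B: c = 2k / (NA * V).
print(stochastic_rate(1e6, order=2, volume=1e-15, identical=True))
```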
87

固定給付制退休金之最佳控管:隨機模擬方法之應用 / Optimal control of defined-benefit pension plans: an application of stochastic simulation methods

張乃懿, Chang, Nai Yi Unknown Date (has links)
This study applies stochastic simulation to the optimal control of pension funds, incorporating downside risks into a quadratic optimization function as the optimization criterion, and takes the different contribution-rate models of the United Kingdom and of the United States and Canada as study subjects, observing the results under different scenarios. Haberman (1994) first proposed applying optimization methods to defined-benefit pension funds and concretely established a quadratic optimization criterion using the variances of contributions and assets as control factors. Chang (2003), using the concept of downside risk, pointed out that pension fund managers typically pay more attention to the risks of over-contribution and asset insufficiency, and that taking downside risk into account in fund management produces results different from those of the original formulation. Building on Chang (2003), this thesis takes the optimization function suggested there as the basis for accounting for downside risk, proposes improved contribution-rate models for the UK and the US/Canada, performs the optimization by simulation, and examines the impact on the different contribution-rate models. The results show that, with stochastic simulation as the optimal control method, the same results are obtained under different demographic assumptions and actuarial models, and that downside risk affects different contribution-rate models differently: the proposed British-style model effectively reduces risk, while the American-style contribution-rate model yields more desirable optimization results for the contribution rate and the funding (asset-liability) ratio. Most importantly, pension fund managers can use stochastic simulation to carry out optimal control and thereby support their decision making.
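A toy sketch of the idea: simulate a defined-benefit fund under random annual returns, score a candidate contribution rate with a quadratic criterion that, following the downside-risk notion, penalizes only under-funding and over-contribution, and pick the rate by grid search. The fund dynamics, targets, and return model are invented for illustration and are not the thesis's UK or US/Canada contribution-rate models.

```python
import numpy as np

def downside_cost(contrib_rate, target_fund=1.0, target_contrib=0.15,
                  years=40, n_paths=500, mu=0.05, sigma=0.12, payout=0.05,
                  seed=0):
    """Average quadratic downside penalty over simulated funding paths:
    only under-funding (F < target) and over-contribution (c > target)
    are penalized, per the downside-risk criterion."""
    rng = np.random.default_rng(seed)
    cost = 0.0
    for _ in range(n_paths):
        fund = target_fund
        for _ in range(years):
            ret = rng.normal(mu, sigma)           # random annual return
            fund = fund * (1.0 + ret) + contrib_rate - payout
            cost += max(0.0, target_fund - fund) ** 2 \
                  + max(0.0, contrib_rate - target_contrib) ** 2
    return cost / (n_paths * years)

rates = np.linspace(0.05, 0.30, 26)
best = min(rates, key=downside_cost)              # grid-search optimization
print(f"grid-search optimal contribution rate: {best:.2f}")
```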
88

Simulating the flow of some non-Newtonian fluids with neural-like networks and stochastic processes

Tran-Canh, Dung January 2004 (has links)
The thesis reports a contribution to the development of neural-like network-based element-free methods for the numerical simulation of some non-Newtonian fluid flow problems. The numerical approximation of functions and the solution of the governing partial differential equations are mainly based on radial basis function networks. The resulting micro-macroscopic approaches do not require any element-based discretisation and rely only on a set of unstructured collocation points, and hence are truly meshless or element-free. The development of the present methods begins with the use of multi-layer perceptron networks (MLPNs) and radial basis function networks (RBFNs) to effectively eliminate the volume integrals in the integral formulation of fluid flow problems. An adaptive velocity gradient domain decomposition (AVGDD) scheme is incorporated into the computational algorithm. As a result, an improved feed-forward neural network boundary-element-only method (FFNN-BEM) is created and verified. The present FFNN-BEM successfully simulates the flow of several generalised Newtonian fluids (GNFs), including the Carreau, power-law and Cross models. To the best of the author's knowledge, the present FFNN-BEM is the first to achieve convergence for difficult flow situations with very small power-law indices (as small as 0.2). Although some elements are still used to discretise the governing equations, they appear only on the boundary of the analysis domain, and the experience gained in developing element-free approximation in the domain provides valuable skills for progressing towards a fully element-free approach. A least-squares collocation RBFN-based mesh-free method is then developed for solving the governing PDEs. This method is coupled with the stochastic simulation technique (SST), forming a mesoscopic approach for analysing viscoelastic fluid flows. The velocity field is computed by the RBFN-based mesh-free method (macroscopic component) and the stress is determined by the SST (microscopic component). The SST thus removes a limitation of traditional macroscopic approaches, since closed-form constitutive equations are not necessary. In this mesh-free method, each of the unknowns in the conservation equations is represented by a linear combination of weighted radial basis functions, and hence the unknowns are converted from physical variables (e.g. velocity, stresses) into network weights through the application of the general linear least-squares principle and a point collocation procedure. Depending on the type of RBFs used, a number of parameters influence the performance of the method: the centres in the case of thin-plate spline RBFNs (TPS-RBFNs), and the centres and the widths in the case of multiquadric RBFNs (MQ-RBFNs). A further improvement is achieved when the Eulerian SST is formulated via Brownian configuration fields (BCF) in place of the Lagrangian SST. The SST is made more efficient with the inclusion of a control-variate variance reduction scheme, which allows a reduction of the number of dumbbells used to model the fluid. A highly parallelised algorithm, at both macro and micro levels, incorporating a domain decomposition technique, is implemented to handle larger problems. The approach is verified and used to simulate the flow of several model dilute polymeric fluids (the Hookean, FENE and FENE-P models) in simple as well as non-trivial geometries, including shear flows (transient Couette and Poiseuille flows), elongational flows (4:1 and 10:1 abrupt contraction flows) and lid-driven cavity flows.
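The core RBFN collocation machinery can be illustrated with a minimal Kansa-style sketch: the unknown field is written as a weighted sum of multiquadric RBFs, and the weights are found by collocating the PDE at interior points and the boundary conditions at the ends. A 1D Poisson problem stands in for the governing equations here; the width-parameter choice is a common heuristic, not the thesis's adaptive scheme.

```python
import numpy as np

def mq(r2, c2):          # multiquadric RBF: phi = sqrt(r^2 + c^2)
    return np.sqrt(r2 + c2)

def mq_dxx(x, xj, c2):   # second derivative of the 1D MQ: c^2 / phi^3
    phi = mq((x - xj) ** 2, c2)
    return c2 / phi ** 3

# Solve u'' = f on [0,1], u(0) = u(1) = 0, with exact u = sin(pi x).
n = 30
centres = np.linspace(0.0, 1.0, n)      # collocation points = RBF centres
c2 = (2.0 / n) ** 2                     # heuristic MQ width (squared)
f = lambda x: -np.pi ** 2 * np.sin(np.pi * x)

A = np.empty((n, n))
b = np.empty(n)
for i, xi in enumerate(centres):
    if i in (0, n - 1):                  # boundary rows: u(xi) = 0
        A[i] = mq((xi - centres) ** 2, c2)
        b[i] = 0.0
    else:                                # interior rows: u''(xi) = f(xi)
        A[i] = mq_dxx(xi, centres, c2)
        b[i] = f(xi)

w = np.linalg.solve(A, b)                # the "network weights"
u = lambda x: mq((x - centres) ** 2, c2) @ w
print(u(0.5), np.sin(np.pi * 0.5))       # approximation vs exact at x = 0.5
```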
89

Contributions to parallel stochastic simulation : application of good software engineering practices to the distribution of pseudorandom streams in hybrid Monte Carlo simulations / Contributions à la simulation stochastique parallèle : architectures logicielles pour la distribution de flux pseudo-aléatoires dans les simulations Monte Carlo sur CPU/GPU

Passerat-Palmbach, Jonathan 11 October 2013 (has links)
The race for computing power accelerates every day in the simulation community. A few years ago, scientists started to harness the computing power of Graphics Processing Units (GPUs) to parallelize their simulations. As with any parallel architecture, not only must the simulation model implementation be ported to the new platform, but all the supporting tools must be reimplemented as well. In the particular case of stochastic simulations, one of the major elements of the implementation is the source of pseudorandom numbers. Employing pseudorandom numbers in parallel applications is not a straightforward task: it has to be done with caution in order not to introduce bias into the simulation results. This problem, called pseudorandom stream distribution, has been studied ever since parallel architectures became available. While the literature is full of solutions for distributing pseudorandom streams on CPU-based parallel platforms, the young GPU programming community cannot yet display the same experience. In this thesis, we study how to correctly distribute pseudorandom streams on GPUs. From the existing solutions, we identified a need for good software engineering solutions coupled with sound theoretical choices in the implementation. We propose a set of guidelines to follow when a PRNG has to be ported to a GPU, and put this advice into practice in a software library called ShoveRand. This library is used in a stochastic polymer folding model that we implemented in C++/CUDA. Pseudorandom stream distribution on manycore architectures is also one of our concerns: it resulted in a contribution named TaskLocalRandom, which targets parallel Java applications using pseudorandom numbers and task frameworks. Finally, we share a reflection on how to choose the right parallel platform for a given application. We propose to automatically build prototypes of the parallel application running on a wide set of architectures. This approach relies on existing software engineering tools from the Java and Scala communities, most of them generating OpenCL source code from a high-level abstraction layer.
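ShoveRand and TaskLocalRandom are C++/CUDA and Java contributions, but the principle they implement (give every parallel task its own statistically independent pseudorandom stream instead of sharing one generator) can be sketched in Python with NumPy's SeedSequence spawning, a comparable parameterization-based distribution scheme:

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np
from numpy.random import SeedSequence, default_rng

def monte_carlo_pi(seed_seq, n=1_000_000):
    """One worker's partial estimate of pi, drawn from its own stream."""
    rng = default_rng(seed_seq)            # independent PRNG per task
    pts = rng.random((n, 2))
    return 4.0 * np.mean(pts[:, 0] ** 2 + pts[:, 1] ** 2 <= 1.0)

if __name__ == "__main__":
    root = SeedSequence(20131011)          # one reproducible root seed
    streams = root.spawn(8)                # 8 non-overlapping child streams
    with ProcessPoolExecutor() as pool:
        estimates = list(pool.map(monte_carlo_pi, streams))
    print(sum(estimates) / len(estimates))
```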
90

[en] UNCERTAINTY ANALYSIS OF 2D VECTOR FIELDS THROUGH THE HELMHOLTZ-HODGE DECOMPOSITION / [pt] ANALISE DE INCERTEZAS EM CAMPOS VETORIAIS 2D COM O USO DA DECOMPOSIÇÃO DE HELMHOLTZ-HODGE

PAULA CECCON RIBEIRO 20 March 2017 (has links)
Vector fields play an essential role in a wide range of scientific applications. They are commonly generated through computer simulations, which may be costly because they usually require long computation times. When researchers want to quantify the uncertainty in such applications, an ensemble of vector field realizations is usually generated, making the process even more expensive. The Helmholtz-Hodge Decomposition is a very useful instrument for vector field interpretation because it distinguishes conservative (rotation-free) components from mass-preserving (divergence-free) components. In this work, we explore the applicability of this technique to the uncertainty analysis of 2-dimensional vector fields. First, we present an approach that uses the Helmholtz-Hodge Decomposition as a basic tool for the analysis of a vector field ensemble. Given a vector field ensemble ε, we first obtain the corresponding rotation-free, divergence-free, and harmonic component ensembles by applying the Natural Helmholtz-Hodge Decomposition to each vector field in ε. With these ensembles in hand, our proposal not only quantifies, via statistical analysis, how much each component ensemble is point-wise correlated to the original vector field ensemble, but it also allows investigating the uncertainty of the rotation-free, divergence-free, and harmonic components separately. We then propose two techniques that, together with the Helmholtz-Hodge Decomposition, stochastically generate vector fields from a single realization. Finally, we propose a method to synthesize vector fields from an ensemble, using dimension reduction and inverse projection techniques. We test the proposed methods on synthetic as well as numerically simulated vector fields.
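On a periodic grid, the Helmholtz-Hodge Decomposition has a particularly compact spectral form: each Fourier mode is projected onto its wavevector to get the curl-free part, the remainder is divergence-free, and the harmonic part reduces to the mean flow. The sketch below is only this illustrative special case, not the Natural HHD on general domains used in the thesis.

```python
import numpy as np

def hhd_2d_periodic(u, v):
    """Helmholtz-Hodge decomposition of a periodic 2D field (u, v) by
    Fourier projection: curl-free part = projection of each mode onto its
    wavevector; divergence-free part = the remainder; on the torus the
    harmonic part is just the mean flow (the k = 0 mode)."""
    n, m = u.shape
    kx = 2 * np.pi * np.fft.fftfreq(m)
    ky = 2 * np.pi * np.fft.fftfreq(n)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX ** 2 + KY ** 2
    k2[0, 0] = 1.0                          # avoid 0/0 at the mean mode
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    proj = (KX * uh + KY * vh) / k2         # (k . v_hat) / |k|^2 per mode
    cf_u, cf_v = proj * KX, proj * KY       # curl-free (gradient) component
    cf_u[0, 0] = cf_v[0, 0] = 0.0           # mean flow goes to the harmonic part
    cu, cv = np.fft.ifft2(cf_u).real, np.fft.ifft2(cf_v).real
    hu = np.full_like(u, uh[0, 0].real / (n * m))
    hv = np.full_like(v, vh[0, 0].real / (n * m))
    return (cu, cv), (u - cu - hu, v - cv - hv), (hu, hv)

# Consistency check on a random field: the three parts sum back to (u, v).
rng = np.random.default_rng(3)
u, v = rng.standard_normal((64, 64)), rng.standard_normal((64, 64))
(cu, cv), (du, dv), (hu, hv) = hhd_2d_periodic(u, v)
print(np.allclose(u, cu + du + hu), np.allclose(v, cv + dv + hv))
```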
