191 |
Paralelização de um modelo global de previsão do tempo em malhas localmente refinadas / Parallelization of a numerical weather prediction global model with local refinement grids. Vidaurre Navarrete, Nelson Leonardo. 31 October 2014.
O objetivo principal deste trabalho é a paralelização de um modelo global de previsão do tempo em diferenças finitas com refinamento local. Este é baseado nas equações primitivas, e faz uso de uma discretização semi-Lagrangiana e semi-implícita em três níveis no tempo em uma malha de Lorenz na vertical e uma malha do tipo C de Arakawa na horizontal. A discretização horizontal é feita através de diferenças finitas de segunda ordem. A equação escalar elíptica tridimensional resultante é desacoplada em um sistema de equações bidimensionais do tipo Helmholtz, o qual é resolvido por meio de um método multigrid. O modelo de paralelização foi desenvolvido para máquinas com memória distribuída, fazendo uso de MPI para passagens de mensagens e baseado em técnicas de decomposição de domínio. O acoplamento apenas local dos operadores de diferenças finitas viabiliza a decomposição em duas direções horizontais. Evitamos a decomposição vertical, tendo em vista o forte acoplamento nesta direção das parametrizações de fenômenos físicos. A estratégia de paralelização foi elaborada visando o uso eficiente de centenas ou alguns milhares de processadores, dependendo da resolução do modelo. Para tal, a malha localmente refinada é separada em três regiões: uma grossa, uma de transição e uma fina, onde cada uma delas é dividida de forma independente entre um número de processadores proporcional ao número de pontos que cada uma armazena, garantindo assim um balanceamento de carga adequado. Não obstante, para resolver o sistema de equações bidimensionais do tipo Helmholtz foi necessário mudar a estratégia de paralelização, dividindo o domínio unicamente nas direções vertical e latitudinal. Ambas partes do modelo com paralelizações diferentes estão conectadas por meio da estratégia de transposição de dados. Testamos nosso modelo utilizando até 1024 processadores e os resultados ainda mostraram uma boa escalabilidade. 
/ The main goal of this work is the parallelization of a weather prediction model employing finite differences on locally refined meshes. The model is based on the primitive equations and uses a three-time-level semi-implicit semi-Lagrangian temporal discretization on a Lorenz-type vertical grid combined with a horizontal Arakawa C-grid. The horizontal discretization is performed by means of second-order finite differences. The resulting three-dimensional scalar elliptic equation is decoupled into a set of Helmholtz-type two-dimensional equations, which are solved by a multigrid method. The parallelization was written for distributed-memory machines, employs the MPI message-passing standard, and is based on domain decomposition techniques. The local coupling of the finite difference operators is exploited in a two-dimensional horizontal decomposition. We avoid a vertical decomposition due to the strong coupling of the physical parameterization routines in that direction. The parallelization strategy was designed to allow the efficient use of hundreds to a few thousand processors, depending on the model resolution. To achieve this, the locally refined mesh is split into three regions: a coarse, a transition, and a fine one, each decomposed independently. The number of processors allocated to each region is proportional to the number of grid points it contains, which guarantees a good load balance. However, to solve the set of Helmholtz-type two-dimensional equations it was necessary to change the parallelization strategy, splitting the domain only in the vertical and latitudinal directions. The two parts of the model with different parallelizations are connected by means of a data transposition strategy. We tested the model using up to 1024 processors, and the results still showed good scalability.
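The proportional processor allocation described above can be sketched as follows. The `allocate_processors` helper and the region point counts are illustrative assumptions, not the model's actual code; the idea is simply that each region's share of processors tracks its share of grid points.

```python
# Sketch: split processors among the coarse, transition and fine regions of a
# locally refined grid in proportion to the number of grid points each holds.
# Region sizes below are made up for illustration.

def allocate_processors(region_points, total_procs):
    """Divide total_procs among regions proportionally to their point counts."""
    total_points = sum(region_points.values())
    # Initial proportional shares (at least one processor per region)
    alloc = {name: max(1, round(total_procs * n / total_points))
             for name, n in region_points.items()}
    # Repair any rounding drift so the shares sum exactly to total_procs,
    # charging the difference to the largest region
    largest = max(region_points, key=region_points.get)
    alloc[largest] += total_procs - sum(alloc.values())
    return alloc

regions = {"coarse": 200_000, "transition": 100_000, "fine": 700_000}
print(allocate_processors(regions, 1024))
```

With 1024 processors and the made-up sizes above, the fine region receives roughly 70% of the processors, matching its share of the points.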
|
192 |
Techniques for formal modelling and verification of dynamic memory allocators / Techniques de modélisation et de vérification formelles des allocateurs de mémoire dynamiques. Fang, Bin. 10 September 2018.
Cette thèse est une contribution à la spécification et à la vérification formelles des allocateurs de mémoire dynamiques séquentiels (SDMA, en abrégé), qui sont des composants clés des systèmes d'exploitation ou de certaines bibliothèques logicielles. Les SDMA gèrent la partie tas de la mémoire des processus. Leurs implémentations utilisent à la fois des structures de données complexes et des opérations de bas niveau. Cette thèse se concentre sur les SDMA qui utilisent des structures de données de type liste pour gérer les blocs du tas disponibles pour l'allocation (SDMA à liste).

La première partie de la thèse montre comment obtenir des spécifications formelles de SDMA à liste en utilisant une approche basée sur le raffinement. La thèse définit une hiérarchie de modèles classés par la relation de raffinement qui capture une grande variété de techniques et de politiques employées par les implémentations réelles de SDMA. Cette hiérarchie forme une théorie algorithmique pour les SDMA à liste et pourrait être étendue avec d'autres politiques. Les spécifications formelles sont écrites en Event-B et les raffinements ont été prouvés en utilisant la plateforme Rodin. La thèse étudie diverses applications des spécifications formelles obtenues : le test basé sur des modèles, la génération de code et la vérification.

La deuxième partie de la thèse définit une technique de vérification basée sur l'interprétation abstraite. Cette technique peut inférer des invariants précis des implémentations existantes de SDMA. Pour cela, la thèse définit un domaine abstrait dont les valeurs représentent des ensembles d'états du SDMA. Le domaine abstrait est basé sur un fragment de la logique de séparation, appelé SLMA. Ce fragment capture les propriétés liées à la forme et au contenu des structures de données utilisées par le SDMA pour gérer le tas.
Le domaine abstrait est défini comme un produit spécifique d'un domaine abstrait pour graphes du tas avec un domaine abstrait pour des séquences finies d'adresses mémoire. Pour obtenir des valeurs abstraites compactes, la thèse propose une organisation hiérarchique des valeurs abstraites : un premier niveau abstrait la liste de tous les blocs mémoire, alors qu'un second niveau ne sélectionne que les blocs disponibles pour l'allocation. La thèse définit les transformateurs des valeurs abstraites qui capturent la sémantique des instructions utilisées dans les implémentations des SDMA. Un prototype d'implémentation de ce domaine abstrait a été utilisé pour analyser des implémentations simples de SDMA.

/ The first part of the thesis demonstrates how to obtain formal specifications of free-list SDMA using a refinement-based approach. The thesis defines a hierarchy of models, ranked by the refinement relation, that captures a large variety of techniques and policies employed by real-world SDMA. This hierarchy forms an algorithmic theory for free-list SDMA and could be extended with other policies. The formal specifications are written in Event-B and the refinements have been proved using the Rodin platform. The thesis investigates applications of the formal specifications obtained, such as model-based testing, code generation and verification.

The second part of the thesis defines a technique, based on abstract interpretation, for inferring precise invariants of existing implementations of SDMA. For this, the thesis defines an abstract domain whose values represent sets of states of the SDMA. The abstract domain is based on a fragment of separation logic, called SLMA. This fragment captures properties related to the shape and the content of the data structures used by the SDMA to manage the heap. The abstract domain is defined as a specific product of an abstract domain for heap shapes with an abstract domain for finite arrays of locations.

To obtain compact elements of this abstract domain, the thesis proposes a hierarchical organisation of the abstract values: a first level abstracts the list of all chunks, while a second level selects only the chunks available for allocation. The thesis defines transformers of the abstract values that soundly capture the semantics of statements used in SDMA implementations. A prototype implementation of this abstract domain has been used to analyse simple implementations of SDMA.
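As a minimal illustration of the kind of allocator being specified and verified, the sketch below models a free-list SDMA with a first-fit allocation policy. The `FreeListSDMA` class and its chunk representation are invented for illustration; they are not the thesis's Event-B models or any real allocator's code.

```python
# Toy free-list allocator: the heap is an ordered list of [start, size, free]
# chunks; malloc uses first-fit and splits oversized chunks; free marks a
# chunk available again. Illustrative sketch only.

class FreeListSDMA:
    def __init__(self, heap_size):
        self.chunks = [[0, heap_size, True]]  # one big free chunk initially

    def malloc(self, size):
        for chunk in self.chunks:
            start, csize, free = chunk
            if free and csize >= size:
                if csize > size:  # split: insert the remainder as a free chunk
                    idx = self.chunks.index(chunk)
                    self.chunks.insert(idx + 1, [start + size, csize - size, True])
                chunk[1] = size
                chunk[2] = False
                return start
        return None  # out of memory

    def free(self, addr):
        for chunk in self.chunks:
            if chunk[0] == addr and not chunk[2]:
                chunk[2] = True
                return
        raise ValueError("invalid free")
```

Even this toy version exhibits the invariants such analyses must infer (chunks tile the heap, starts are increasing, allocated chunks are disjoint), while omitting policies like coalescing that the refinement hierarchy would add.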
|
193 |
APPLICATIONS OF MOLECULAR DYNAMICS SIMULATIONS IN PROTEIN X-RAY CRYSTALLOGRAPHY. Oleg Mikhailovskii (8748906). 23 April 2020.
X-ray crystallography is a foundation of modern structural biology. Thus, refinement of crystallographic structures remains an important and actively pursued area of research. We have built a software solution for the refinement of crystallographic protein structures using X-ray diffraction data in conjunction with a state-of-the-art MD modeling setup. This solution was implemented on the platform of the Amber 16 biomolecular simulation package, making use of graphics processing unit (GPU) computing. The proposed refinement protocol consists of a short MD simulation, which represents an entire crystal unit cell containing multiple protein molecules and interstitial solvent. The simulation is guided by crystallographic restraints based on experimental structure factors, as well as conventional force-field terms. We assessed the performance of this new protocol against various refinement procedures based on the Phenix engine, which represents the current industry standard. The evaluation was conducted on a set of 84 protein structures with different realizations of initial models; the main criterion of success was the free R-factor, R_free. Initially, we performed the re-refinement of the models deposited in the PDB. We found that in 58% of all cases our protocol achieved a better R_free than Phenix. As a next step, we conducted the refinement on three different sets of lower-quality models that were manufactured specifically to test the competing algorithms (average C^α RMSD from the target structures 0.75, 0.89, and 1.02 Å). In these tests, our protocol outperformed the refinement procedures available in Phenix in up to 89% of all cases. Aside from R-factors, we also compared the geometric quality of the models as measured by MolProbity scores. It was found that our protocol led to consistently better geometries in all of the refinement comparisons.

Recently, a number of attempts have been made to fully utilize the information encoded in protein diffraction data, including diffuse scattering, which depends on molecular dynamics in the crystal. To understand the nature of this dependence, we chose three different crystalline forms of ubiquitin. By post-processing the MD data, we separated the effects of different types of motion on the diffuse scattering profiles. This analysis failed to identify any features of the diffuse scattering profiles that could be uniquely linked to specific motional modes (e.g. small-amplitude rocking motion of protein molecules in the crystal lattice). However, we were able to confirm previous experimental observations, made in a combined X-ray diffraction and NMR study, suggesting that the amount of motion in a specific crystal is reflected in the amplitude of diffuse scattering.
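The R-factor criterion used in the comparisons above is conventionally computed as R = Σ|F_obs − F_calc| / Σ F_obs over structure-factor amplitudes, with R_free evaluated on a reserved cross-validation set of reflections. The sketch below illustrates this on synthetic amplitudes; all numbers are made up.

```python
# Illustrative crystallographic R-factor: sum of absolute amplitude
# discrepancies, normalized by the sum of observed amplitudes.

def r_factor(f_obs, f_calc):
    """R = sum(|F_obs - F_calc|) / sum(F_obs) over amplitudes."""
    return sum(abs(fo - fc) for fo, fc in zip(f_obs, f_calc)) / sum(f_obs)

# Working reflections vs. a reserved cross-validation (free) set
work_obs, work_calc = [100.0, 80.0, 60.0], [98.0, 82.0, 58.0]
free_obs, free_calc = [90.0, 70.0], [85.0, 76.0]
print(f"R_work = {r_factor(work_obs, work_calc):.3f}")
print(f"R_free = {r_factor(free_obs, free_calc):.3f}")
```

Because the free reflections never enter the refinement target, R_free is the less biased measure of model quality, which is why it serves as the main success criterion above.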
|
194 |
A situation refinement model for complex event processing. Alakari, Alaa A. 07 January 2021.
Complex Event Processing (CEP) systems aim at processing large flows of events to discover situations of interest (SOI). Primarily, CEP uses predefined pattern templates to detect occurrences of complex events in an event stream. Extracting complex events is achieved by employing techniques such as filtering and aggregation to detect complex patterns of many simple events. In general, CEP systems rely on domain experts to define complex pattern rules to recognize SOI. However, the task of fine-tuning complex pattern rules in the event streaming environment faces two main challenges: the issue of increased pattern complexity, and the event streaming constraints under which such rules must be acquired and processed in near real-time.

Therefore, to fine-tune the CEP pattern to identify SOI, the following requirements must be met. First, a minimum number of rules must be used to refine the CEP pattern, to avoid increased pattern complexity. Second, domain knowledge must be incorporated in the refinement process to improve awareness about emerging situations. Furthermore, the event data must be processed upon arrival, to cope with the continuous arrival of events in the stream and to respond in near real-time.

In this dissertation, we present a Situation Refinement Model (SRM) that addresses these requirements. In particular, we develop a Single-Scan Frequent Item Mining algorithm that acquires the minimal number of CEP rules, with the ability to adjust the level of refinement to fit the applied scenario. In addition, a cost-gain evaluation measure to determine the best tradeoff to identify a particular SOI is presented. / Graduate
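Single-pass frequent-item mining over a stream can be illustrated with a classic one-scan summary. The sketch below uses the well-known Misra-Gries algorithm as a stand-in for the dissertation's own Single-Scan Frequent Item Mining algorithm, which it does not reproduce; the event names are invented.

```python
# Misra-Gries summary: one pass over the stream, at most k-1 counters.
# Any item occurring more than n/k times in a stream of length n is
# guaranteed to survive in the summary.

def misra_gries(stream, k):
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:  # decrement every counter; drop those that reach zero
            counters = {i: c - 1 for i, c in counters.items() if c > 1}
    return counters

events = ["login", "scan", "login", "fail", "login", "scan", "login"]
print(misra_gries(events, 3))
```

The single scan and bounded memory are what make this family of algorithms compatible with the near-real-time streaming constraint discussed above.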
|
195 |
A new approach to boundary integral simulations of axisymmetric droplet dynamics / 軸対称液滴運動の境界積分シミュレーションに対する新しいアプローチ. Koga, Kazuki. 24 November 2020.
京都大学 / 0048 / 新制・課程博士 / 博士(情報学) / 甲第22861号 / 情博第740号 / 新制||情||127(附属図書館) / 京都大学大学院情報学研究科先端数理科学専攻 / (主査)教授 青柳 富誌生, 教授 磯 祐介, 教授 田口 智清 / 学位規則第4条第1項該当 / Doctor of Informatics / Kyoto University / DFAM
|
196 |
Vliv modifikace a očkování na strukturu a mechanické vlastnosti slitin hliníku / Influence of modification and inoculation on structure and mechanical properties of aluminium alloys. Janošťák, Jan. January 2014.
This thesis examines the influence of inoculation and modification on the internal structure and properties of aluminium-silicon alloys. The effects of inoculation by titanium and boron and of modification by sodium and strontium are investigated in the experimental part of the thesis. All of these metallurgical interventions in the melt are tested on three types of hypoeutectic Al-Si alloys (AlSi10Mg(a), AlSi8Cu3, AlSi7Mg0,3). The experiment was carried out in cooperation with the non-ferrous metals foundry of Aluminium Group, a.s. in Sloup.
|
197 |
Informal Production Networks. van der Merwe, Jan Gabriel Jr. January 2017.
The relationship between industry and the city is a damaged one. However, with its existing mix of residents, industry and commerce (albeit segregated from one another), Pretoria West holds the potential for a unique relationship between industry and the citizens of Pretoria. Only by understanding the role that these industries play within the greater context of the city can the rich character and culture of a place be amplified and solidified in a development plan. Catalyzed by its heritage, development becomes a manifestation of the character of place that will attract further growth and simultaneously embrace the existing stakeholders.

The existing industrial built environment is often misshapen and illegible, and whilst it is difficult to organize (and navigate) the seemingly disorganized site, it is possible to resolve through understanding historic boundaries and development patterns that can be utilized as organizational grids. In this case, historic erf divisions and consolidations can be utilized as an organizational tool at a large scale and should serve as a guide to where future structures should be erected in order to maintain a legible built environment.

When designing future additions, understanding the historic expansion of these industrial buildings holds the key to a harmonious relationship between old and new. With minimal architectural intent, these buildings supply little for the architect to grapple onto, but with material spans and structural repetition forming the underlying ordering principle, it is possible to create a logical and ordered extension of the past. / Die verhouding tussen industrie en die stad is beskadig
en as gevolg word industrië stelselmatig verwyder van die
stad. Die mengsel tussen inwoners, industrie en handel in
Pretoria Wes (albeit geïsoleer van mekaar) gun egter die
potensiaal tot ‘n unieke verbandskap tussen industrie en
die inwoners van Pretoria. Slegs deur die rol te erken wat
die industrië speel ten opsigte van die stad se groter konteks,
kan die karakter en kultuur van so ‘n omgewing versterk en
vasgevang word in ‘n ontwikkelings plan. Erfenis dien as
katalisator vir ontwikkeling van die karakter van plek wat
in beurt verdere finansiële groei sal aanhits.
Die bestaande industriële bou-omgewing is misvorm en onvoorspelbaar.
Alhoewel so ‘n omgewing nie aan die individie
toeleen om weg te vind of organiseer nie, is dit moontlik
deur die ontginning van historiese grense en ontwikkelings
patrone wat kan dien as organiseerings mates. Historiese erf
indelings en konsolodasies kan gebruik word om te dien as
‘n gids vir toekomstige toevoegings, om sodoende die nuwe
argitektuur uit die bestaande te laat vloei. Die resultaat is
‘n leesbare en geordende bou-omgewing.
Die ontwerp van die nuwe verbeelding steun op die
morfologie van die bestaande omgewing om ‘n harmoniese
verhouding tussen oud en nuut te skep. Materiale se span
afstande neem die rol van die onderliggende orde stelsels
aan as gevolg van die gebrek aan aansienlike argitektoniese
bedoelings in die bestaande omgewing. Sodoende is
‘n leesbare en logiese uitbreiding van die verlede en na die
toekoms moontlik in ‘n omgewing wat ontstaan het sonder
ontwerp vir ervaring van mense. / Mini Dissertation MArch(Prof)--University of Pretoria, 2017. / Architecture / MArch(Prof) / Unrestricted
|
198 |
Link Discovery: Algorithms and Applications. Ngonga Ngomo, Axel-Cyrille. 03 December 2018.
The goal of this work is the development of efficient (semi-)automatic methods for linking knowledge bases. A variety of solution classes can be employed for this purpose. This work considers exclusively declarative approaches. Declarative approaches assume that directly computing mappings between sets of resources is in many cases hard or prohibitively expensive. They therefore aim to find a similarity function and a threshold that can be used to approximate a mapping. Two challenges accompany this formulation of the problem: (a) efficiency, and (b) precision and completeness. The work presents solutions to both challenges, as well as applications of these solutions based on real data.
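The declarative setting described above can be sketched in a few lines: approximate a mapping between two sets of resources by a similarity function plus a threshold. Trigram Jaccard similarity is used here as a stand-in for whatever similarity measure a real link discovery framework would configure or learn, and the city names are invented examples.

```python
# Sketch of threshold-based link discovery: link two resources whenever
# their (here: character-trigram Jaccard) similarity reaches a threshold.

def trigrams(s):
    s = f"  {s.lower()} "  # pad so boundary characters form trigrams too
    return {s[i:i + 3] for i in range(len(s) - 2)}

def similarity(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

def link(source, target, threshold):
    """Return all cross-set pairs whose similarity reaches the threshold."""
    return [(s, t) for s in source for t in target
            if similarity(s, t) >= threshold]

print(link(["Leipzig", "Berlin"], ["Leipzig City", "Paris"], 0.5))
```

The naive double loop above is quadratic; the efficiency challenge mentioned in the abstract is precisely about avoiding this exhaustive comparison while still approximating the same mapping.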
|
199 |
Convergence rates of adaptive algorithms for deterministic and stochastic differential equations. Moon, Kyoung-Sook. January 2001.
NR 20140805
|
200 |
A Framework for Mesh Refinement Suitable for Finite-Volume and Discontinuous-Galerkin Schemes with Application to Multiphase Flow Prediction. Dion-Dallaire, Andrée-Anne. 26 May 2021.
Modelling multiphase flow, more specifically particle-laden flow, poses multiple challenges. These difficulties are heightened when the particles are differentiated by a set of “internal” variables, such as size or temperature. Traditional treatments of such flows can be classified into two main categories, Lagrangian and Eulerian methods. The former approaches are highly accurate but can also lead to extremely expensive computations and to load-balancing challenges on parallel machines. In contrast, Eulerian models offer the promise of less expensive computations but often introduce modelling artifacts and can become more complicated and expensive when a large number of internal variables is treated. Recently, a new model was proposed to treat such situations. It extends the ten-moment Gaussian model for viscous gases to the treatment of a dilute particle phase with an arbitrary number of internal variables. In its initial application, the only internal variable chosen for the particle phase was the particle diameter. This new polydisperse Gaussian model (PGM) comprises 15 equations, has an eigensystem that can be expressed in closed form, and possesses a convex entropy. Previously, this model had been tested in one dimension. The PGM was developed with the detonation of radiological dispersal devices (RDDs) as an immediate application. The detonation of RDDs poses many numerical challenges, namely the wide range of spatial and temporal scales as well as the high computational cost of accurately resolving solutions. In order to address these issues, the goal of the current project is to develop a block-based adaptive mesh refinement (AMR) implementation that can be used in conjunction with a parallel computer. Another goal of this project is to obtain the first three-dimensional results for the PGM. In this thesis, the kinetic theory of gases underlying the development of the PGM is studied.
Different numerical schemes and adaptive mesh refinement methods are described. The new block-based adaptive mesh refinement algorithm is presented. Finally, results for different flow problems using the new AMR algorithm are shown, as well as the first three-dimensional results for the PGM.
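The block-based refinement decision can be sketched in one dimension: divide the domain into fixed-size blocks and split any block in which the solution gradient exceeds a tolerance. This is only an illustration of the flagging step, with invented data; the actual framework is multi-dimensional, recursive, and coupled to finite-volume/discontinuous-Galerkin solvers.

```python
# One-pass block-based AMR flagging on a 1-D grid: a block (a half-open
# index range into the solution array) is split into two children when the
# maximum jump between neighbouring cells inside it exceeds `tol`.

def refine_blocks(blocks, solution, tol):
    """blocks: list of (lo, hi) index ranges into solution, each of size >= 2."""
    out = []
    for lo, hi in blocks:
        grad = max(abs(solution[i + 1] - solution[i]) for i in range(lo, hi - 1))
        if grad > tol and hi - lo >= 4:  # flag: split block into two children
            mid = (lo + hi) // 2
            out.extend([(lo, mid), (mid, hi)])
        else:
            out.append((lo, hi))
    return out

u = [0.0, 0.0, 0.1, 0.9, 1.0, 1.0, 1.0, 1.0]  # a sharp front near i = 3
print(refine_blocks([(0, 4), (4, 8)], u, 0.5))
```

Only the block containing the sharp front is split; the smooth block is left alone, which is the behaviour that concentrates resolution (and cost) where the RDD-type problems above need it.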
|