231
Refinamento sequencial e paramétrico pelo método de Rietveld: aplicação na caracterização de fármacos e excipientes / Sequential and parametric refinement by the Rietveld method: application to the characterization of drugs and excipients. Tita, Diego Luiz. January 2018.
Orientador: Carlos de Oliveira Paiva Santos / Coorientadora: Selma Gutierrez Antonio / Banca: Marlus Chorilli / Banca: Vinícius Danilo Nonato Bezzon / Banca: Flavio Machado de Souza Carvalho / Banca: Alexandre Urbano / Resumo: O refinamento de estruturas cristalinas pelo método de Rietveld (MR) consiste em ajustar um modelo estrutural a uma medida de difração. Essa é uma ferramenta eficiente para identificação e quantificação de estruturas polimórficas presentes em fármacos e excipientes. Uma forma avançada do método é o refinamento sequencial por Rietveld (RSR) que visa, a partir de um conjunto de difratogramas de uma mesma amostra, estudar o comportamento do material em função de uma variável externa (e.g. temperatura, pressão, tempo ou ambiente químico). No presente trabalho, com o objetivo de estudar as transições polimórficas e as expansões/contrações dos parâmetros de cela unitária (PCU) dos insumos farmacêuticos: espironolactona (SPR), lactose monoidratada (LACMH) e lactose anidra (LACA), empregou-se o RSR em medidas obtidas em diferentes temperaturas. O RSR foi eficiente para que os PCU fossem refinados até temperaturas próximas ao ponto de fusão dos materiais. Após o RSR, a partir da análise matemática dos PCU obtidos, foram propostas funções que regem a tendência desses parâmetros quando submetidos à variação de temperatura. Com essas funções modelaram-se os PCU em uma outra modalidade de refinamento, o refinamento paramétrico por Rietveld (RPR), assim, os PCU seguem a modelagem imposta pelas equações obtidas via RSR. O RPR mostrou-se mais eficiente nas análises, o que evitou perda de fases ou problemas de ajustes, resultando assim em informações mais precisas do sistema. Embora o RSR e R... (Resumo completo, clicar acesso eletrônico abaixo) / Abstract: The refinement of crystal structures by the Rietveld method (MR) consists of fitting a structural model to a diffraction measurement. It is an efficient tool for identifying and quantifying the polymorphic structures present in drugs and excipients. An advanced use of the method is the Sequential Rietveld Refinement (RSR), which aims, from a set of measurements of the same sample, to study the behavior of the material as a function of an external variable (e.g. temperature, pressure, time or chemical environment). In the present work, with the objective of studying the polymorphic transitions and the expansions/contractions of the unit cell parameters (PCU) of the pharmaceutical ingredients spironolactone (SPR), lactose monohydrate (LACMH) and anhydrous lactose (LACA), the RSR was applied to measurements obtained at different temperatures. The RSR made it possible to refine the PCU up to temperatures close to the melting point of the materials. After the RSR, from the mathematical analysis of the obtained PCU, functions were proposed that govern the trend of these parameters under temperature variation. With these functions the PCU were modeled in another refinement modality, the Parametric Rietveld Refinement (RPR), in which the PCU follow the model imposed by the equations obtained via RSR. The RPR proved more efficient in the analyses, avoiding loss of phases and fitting problems, and thus yielding more accurate information about the system. Although RSR and RP... (Complete abstract: click electronic access below) / Doutor
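A short sketch can illustrate the parametric idea described in this record: unit cell parameters from a sequential refinement are fitted with a smooth function of temperature, and the function's coefficients, rather than the individual values, become the refined quantities. The temperatures and cell edges below are invented for illustration and are not data from the thesis.

```python
import numpy as np

# Hypothetical cell edge a(T) from a sequential Rietveld refinement
# (illustrative values, not measurements from the thesis).
T = np.array([300.0, 350.0, 400.0, 450.0, 500.0])   # temperature / K
a = np.array([5.401, 5.404, 5.408, 5.413, 5.419])   # cell edge / angstrom

# Sequential refinement treats each a(T_i) independently; the parametric
# step replaces them with a smooth trend, here a quadratic expansion law.
T0 = T[0]
coeffs = np.polyfit(T - T0, a, deg=2)    # least-squares fit of the trend
a_model = np.polyval(coeffs, T - T0)

print("fitted a(T):", np.round(a_model, 4))
# In a parametric Rietveld refinement the coefficients of this function,
# not the individual a(T_i), would be the variables of the refinement.
```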
232
A practical framework for harmonising welfare and quality of data output in the laboratory-housed dog. Hall, Laura E. January 2014.
In the UK, laboratory-housed dogs are primarily used as a non-rodent species in the safety testing of new medicines and other chemical entities. The use of animals in research is governed by the Animals (Scientific Procedures) Act (1986, amended 2012), and the legislation is underpinned by the principles of humane experimental technique: Replacement, Reduction and Refinement. A link between animal welfare and the quality of data produced has been shown in other species (e.g. rodents, nonhuman primates); however, no established, integrated methodology for identifying or monitoring welfare and quality of data output previously existed for the laboratory-housed dog. In order to investigate the effects of planned Refinements to various aspects of husbandry and regulated procedures, this project sought to integrate behavioural, physiological and other measures (e.g. cognitive bias, mechanical pressure threshold) and to provide a means for staff to monitor welfare whilst also establishing the relationship between welfare and quality of data output. Affective state was identified using an established method of cognitive bias testing, before welfare was measured at ‘baseline’ using measures of behaviour and physiology. Dogs then underwent ‘positive’ and ‘negative’ behavioural challenges to identify the measures most sensitive to changing welfare and most suitable for use in a framework. The resulting Welfare Assessment Framework, developed in three groups of dogs from contrasting backgrounds within the facility, revealed a consistent pattern of behaviour, cardiovascular function, affect and mechanical pressure threshold (MPT). Dogs with a negative affective state had higher blood pressure at baseline than those with positive affective states, and the magnitude of the effect of negative welfare suggests that welfare may act as a confound in the interpretation of cardiovascular data. The responses to restraint included increases in blood pressure and heart rate measures which approached ceiling levels, potentially reducing the sensitivity of measurement. If maintained over time, this response could have a negative health impact on other organ systems and affect the data obtained from them. Dogs with a negative welfare state also had a lower mechanical pressure threshold, meaning they potentially experienced greater stimulation from unpleasant physical stimuli. Taken together with the behaviours associated with a negative welfare state (predominantly vigilant or stereotypic behaviours), the data suggest that dogs with a negative welfare state have a greater behavioural and physiological response to stimuli in their environment; as such, data obtained from their use differ from those obtained from dogs with a positive welfare state. This was confirmed by examining the effect size (Cohen's d) resulting from the analysis of affective state on cardiovascular data. An increase in variance, particularly with the small dog numbers typical of safety assessment studies, means a reduction in the power of the study to detect the effect under observation; a decrease in variation has the potential to reduce the number of dogs used, in line with the principle of Reduction and good scientific practice. The development of the framework also identified areas of the laboratory environment suitable for Refinement (e.g. restriction to single-housing and restraint) and other easily-implemented Refinements (e.g. feeding toy and human interaction) which could be used to improve welfare.
As a result, a Welfare Monitoring Tool (WMT) in the form of a tick sheet was developed for technical and scientific staff to identify those dogs at risk of reduced welfare and of producing poor-quality data, as well as to monitor the effects of Refinements to protocols. Oral gavage, a common regulated procedure known to be potentially aversive, was identified as an area in need of Refinement. A program of desensitisation and positive reinforcement training was implemented in a study that also compared a sham-dose condition with a no-training control condition. A number of the measures used, including home pen behaviour, behaviour during dosing, MPT and the WMT, showed significant benefits to the dogs in the Refined condition. Conversely, dogs in the sham-dose condition showed more signs of distress and took longer to dose than dogs in the control condition. The welfare of control dogs was intermediate between that of sham-dose and Refined-protocol dogs. This project identified a positive relationship between welfare and quality of data output. It developed and validated a practical and feasible means of measuring welfare in the laboratory environment, the Welfare Assessment Framework; identified areas in need of Refinement; and developed practical ways to implement such Refinements to husbandry and regulated procedures. As such it should have wide implications for the pharmaceutical industry and other users of dogs in scientific research.
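The effect-size reasoning in this record (Cohen's d and its link to variance and power) can be made concrete with a small sketch; the blood-pressure values below are invented for illustration and are not data from the thesis.

```python
import numpy as np

# Invented baseline blood-pressure readings (mmHg) for two hypothetical
# groups of dogs; illustrative only, not measurements from the thesis.
negative_affect = np.array([142.0, 150.0, 147.0, 155.0, 149.0])
positive_affect = np.array([128.0, 133.0, 130.0, 136.0, 131.0])

def cohens_d(x, y):
    """Cohen's d using the pooled standard deviation (unbiased variances)."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

print(f"Cohen's d = {cohens_d(negative_affect, positive_affect):.2f}")
# Larger within-group variance shrinks d; with the small group sizes typical
# of safety studies this directly reduces the power to detect an effect.
```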
233
Vers la vérification de propriétés de sûreté pour des systèmes infinis communicants : décidabilité et raffinement des abstractions / Towards the verification of safety properties for communicating infinite-state systems: decidability and abstraction refinement. Heussner, Alexander. 27 June 2011.
La vérification de propriétés de sûreté des logiciels distribués basés sur des canaux fifo non bornés et fiables mène directement au model checking de systèmes infinis. Nous introduisons la famille des (q)ueueing (c)oncurrent (p)rocesses (QCP), composée de systèmes de transitions locaux, par exemple des automates finis/à pile, qui communiquent entre eux par des files fifo. Le problème d'atteignabilité des états de contrôle est indécidable pour des automates communicants et des automates à plusieurs piles, et par conséquent pour QCP. Nous présentons deux solutions pour contourner ce résultat négatif. Primo, une sur-approximation basée sur l'approche abstraire-tester-raffiner qui s'appuie sur notre nouveau concept de raffinement par chemin. Cette approche permet d'écrire un semi-algorithme du type CEGAR qui est implémenté avec des QDD et réalisé dans le framework McScM, dont le banc d'essai conclut notre présentation. Secundo, nous proposons des restrictions pour les QCP à piles locales pour démêler l'interaction causale entre les données locales (la pile) et la synchronisation globale. Nous montrons qu'en supposant qu'il existe une borne existentielle sur les exécutions et qu'en ajoutant une condition sur l'architecture, qui entrave la synchronisation de deux piles, on arrive à une réponse positive pour le problème de décidabilité de l'atteignabilité, qui est EXPTime-complet (et qui généralise des résultats déjà connus). La construction de base repose sur une simulation du système par un automate à une pile équivalent du point de vue de l'atteignabilité ; de manière sous-jacente, nos deux restrictions restreignent les exécutions à une forme hors-contexte. Nous montrons aussi que ces contraintes apparaissent souvent dans des situations « concrètes » et qu'elles sont moins restrictives que celles actuellement connues. Une autre possibilité pour arriver à une solution pratiquement utilisable consiste à borner le problème de décidabilité : nous montrons que l'atteignabilité par un nombre borné de phases est décidable par un algorithme constructif qui est 2EXPTime-complet. Finalement, nous montrons qu'élargir les résultats positifs ci-dessus à la vérification de la logique linéaire temporelle demande soit de sacrifier l'expressivité de la logique, soit d'ajouter des restrictions assez fortes aux QCP ; deux restrictions qui rendent cette approche inutilisable en pratique. En réutilisant notre argument de type « hors-contexte », nous représentons l'ordre partiel sous-jacent aux exécutions par des grammaires hypergraphes. Cela nous permet de bénéficier de résultats connus concernant le model checking des formules de la logique MSO sur les graphes (avec largeur arborescente bornée), et d'arriver aux premiers résultats concernant la vérification des propriétés sur l'ordre partiel des automates (à pile) communicants. / The safety verification of distributed programs that are based on reliable, unbounded fifo communication leads directly to model checking of infinite-state systems. We introduce the family of (q)ueueing (c)oncurrent (p)rocesses (QCP): local transition systems, e.g., (pushdown) automata, that communicate globally over fifo channels. QCP thus inherits the known negative answers to the control-state reachability question from its members, above all from communicating automata and multi-stack pushdown systems.
A feasible resolution of this question is, however, the cornerstone of safety verification. We present two solutions to this intricacy. First, an over-approximation in the form of an abstract-check-refine algorithm on top of our novel notion of path-invariant-based refinement. This leads to a CEGAR semi-algorithm that is implemented with the help of QDD and realized in a small software framework (McScM); the latter is benchmarked on a series of small example protocols. Second, we propose restrictions for QCP with local pushdowns that untangle the causal interaction of local data, i.e., the stack, and global synchronization. We prove that an existential boundedness condition on runs, together with an architectural restriction that impedes the synchronization of two pushdowns, is sufficient and leads to an EXPTime-complete decision procedure (thus subsuming and generalizing known results). The underlying construction relies on a control-state-reachability-equivalent simulation by a single pushdown automaton, i.e., on the context-freeness of runs under the previous restrictions. We demonstrate that our constraints arise "naturally" in certain classes of practical situations and are less restrictive than currently known ones. Another possibility to gain a practicable solution to safety verification involves limiting the decision question itself: we show that bounded-phase reachability is decidable by a constructive algorithm and is 2ExpTime-complete. Finally, directly extending the previous positive results to model checking of linear temporal logic is not possible without either sacrificing expressivity or adding strong restrictions that are not usable in practice. However, we can lift our context-freeness argument via hyperedge replacement grammars to a graph-like representation of the partial order underlying each run of a QCP. Thus, we can directly apply the well-known results on MSO model checking on graphs (of bounded treewidth) to our setting and derive first results on verifying partial-order properties of communicating (pushdown) automata.
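The abstract-check-refine approach named in both abstracts follows the general CEGAR scheme; a minimal sketch is given below. The helper functions are placeholders for the model-specific steps (for instance, the QDD-based channel abstractions used in McScM), not the thesis' implementation.

```python
def cegar(system, prop, abstract, check, is_real, refine, max_iter=100):
    """Skeleton of counterexample-guided abstraction refinement (CEGAR).

    `abstract`, `check`, `is_real` and `refine` are caller-supplied
    placeholders. As in the thesis, this is only a semi-algorithm: on an
    infinite-state system it may exhaust `max_iter` without an answer.
    """
    abstraction = abstract(system)
    for _ in range(max_iter):
        cex = check(abstraction, prop)          # model-check the abstraction
        if cex is None:
            return "property holds"             # safe: over-approximation
        if is_real(system, cex):
            return "property violated"          # genuine counterexample
        abstraction = refine(abstraction, cex)  # spurious: refine and retry
    return "unknown"
```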
234
Étude des mécanismes de déchloruration d'objets archéologiques ferreux d'origine sous-marine / Study of the dechlorination mechanisms of ferrous artefacts from marine archaeological sites. Kergourlay, Florian. 19 April 2012.
La mise au jour du mobilier archéologique ferreux s'accompagne de dégradations si ce dernier n'est pas rapidement stocké en atmosphère inerte ou traité. Dans le cas des objets provenant de fouilles sous-marines, la présence de produits de corrosion chlorurés instables au contact de l'air accélère les phénomènes de reprise de corrosion. Afin de limiter ces dégradations et de stabiliser l'objet, il est nécessaire d'extraire les ions chlorure piégés au sein du système de corrosion. Divers traitements de déchloruration ont été développés par les ateliers de conservation (immersion dans des solutions alcalines, polarisation cathodique, plasma d'hydrogène, fluide subcritique…). Ces traitements ont une efficacité certaine mais une meilleure compréhension des mécanismes de déchloruration (évolution des phases chlorurées durant le traitement, phénomènes de transport dans la couche, cinétiques d'extraction des chlorures…) permettrait de les optimiser (temps de traitement, fiabilité, reproductibilité…). Les objectifs de cette thèse sont multiples. Dans un premier temps, la caractérisation fine du système de corrosion développé en milieu marin puis lors de son abandon à l'air a été réalisée à l'aide de techniques multi-échelles sur un corpus expérimental composé de lingots en fer forgé provenant de frégates gallo-romaines découvertes au large des Saintes-Maries-de-la-Mer (Bouches-du-Rhône, France), immergées durant 2000 ans. Il a notamment été mis en évidence que ce faciès de corrosion est principalement composé d'hydroxychlorure de fer (β-Fe2(OH)3Cl), phase chlorurée contenant près de 20 % en masse de chlore. Dans un second temps, l'évolution du système de corrosion développé en milieu marin a été suivie lors des étapes constituant un traitement de déchloruration : l'étape de traitement à proprement parler, qui consiste en la circulation d'une solution de NaOH aérée ou désaérée, l'étape de lavage, puis l'étape de séchage. Ce second axe s'est déroulé en deux temps. Tout d'abord, le suivi in situ de l'évolution de la couche de corrosion lors de l'étape de traitement a été réalisé sous rayonnement synchrotron par diffraction des rayons X. Puis le système de corrosion a été caractérisé à l'issue des étapes de lavage et de séchage. Ainsi le comportement de la couche de corrosion a pu être appréhendé et une meilleure compréhension des mécanismes de déchloruration proposée. Les objectifs de cette étude sont d'une part de suivre in situ l'évolution du faciès de corrosion d'objets archéologiques lors d'un traitement en solution aérée d'hydroxyde de sodium (NaOH) et d'autre part de caractériser le faciès de corrosion après lavage et séchage de l'objet (…) / The excavation of archaeological iron artefacts is accompanied by degradation if the artefacts are not quickly stored in an inert atmosphere or treated. In the case of artefacts from underwater excavations, the presence of chlorinated corrosion products that are unstable in contact with air accelerates the resumption of corrosion. To limit this degradation and stabilize the object, it is necessary to remove the chloride ions trapped in the corrosion system. Various dechlorination treatments have been developed by conservation workshops (immersion in alkaline solutions, cathodic polarization, hydrogen plasma, subcritical fluid…). Despite genuine efficiency, these treatments need to be optimized (processing time, reliability, reproducibility…)
by a better understanding of the dechlorination mechanisms (structural evolution of the corrosion pattern during treatment, transport phenomena in the layer, kinetics of chloride extraction…). The objectives of this study are multiple. First, a detailed characterization of the corrosion pattern developed in the marine environment, and then during exposure to air, was performed using multiscale techniques on an experimental corpus consisting of wrought iron ingots from Gallo-Roman ships discovered off Saintes-Maries-de-la-Mer (Bouches-du-Rhône, France) and submerged for some 2000 years. In particular, it was shown that the corrosion pattern of a freshly excavated object is mainly composed of the ferrous hydroxychloride β-Fe2(OH)3Cl, a chlorinated phase containing about 20 wt.% chlorine. Second, the evolution of the corrosion pattern was followed during the steps that constitute a dechlorination treatment: the treatment step proper, which consists of circulating an aerated or deaerated NaOH solution, then the washing step, and finally the drying step. This second axis was conducted in two stages. First, in situ monitoring of the corrosion layer's evolution during the treatment step was carried out under synchrotron radiation by X-ray diffraction, coupled with the determination of chloride ions in solution extracts. Then the corrosion pattern was characterized ex situ, elementally and structurally, after the washing and drying steps. Together, these data allowed us to refine the structural evolution of the corrosion layer at each stage and to discuss the chloride-extraction models proposed in the literature.
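As a hedged illustration of the extraction-kinetics analysis mentioned above, the sketch below fits a simple first-order release law to invented chloride concentrations; both the model and the numbers are assumptions for the example, not the thesis' data or equations.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented cumulative chloride concentrations (mg/L) in the NaOH bath over
# treatment time (h); illustrative only, not the thesis measurements.
t = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])
cl = np.array([0.0, 210.0, 350.0, 520.0, 660.0, 720.0])

def first_order(t, c_max, k):
    # Simple first-order release law: C(t) = C_max * (1 - exp(-k * t)).
    return c_max * (1.0 - np.exp(-k * t))

(c_max, k), _ = curve_fit(first_order, t, cl, p0=(700.0, 0.05))
print(f"C_max = {c_max:.0f} mg/L, k = {k:.3f} 1/h")
```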
235
Syntéza povrchových dat pro digitální modely terénu / Synthesis of digital landscape surface data. Šebesta, Michal. January 2016.
Procedural generation of landscapes often requires real spatial data at a finer resolution than the data available at the moment. We introduce a method that refines coarse-resolution spatial data into a finer resolution by exploiting other data sources that are already available at the finer resolution. We construct weighted local linear statistical models from both the coarse and the utility data and use the dependencies these models learn between the data sources to predict the needed data at the finer resolution. To achieve higher computational speed and to mitigate imperfections in the utility data, we apply truncated singular value decomposition, which reduces the dimensionality of the data space we work with. The method is highly modifiable and its application yields plausible, realistic-looking results. Thanks to this, the method can be of practical use for simulation software development.
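A minimal sketch of the two ingredients named in this abstract, truncated SVD for dimensionality reduction and a linear model relating the data sources, is given below. It is a global, unweighted simplification of the thesis' weighted local models, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: `utility` holds fine-resolution auxiliary rasters
# flattened to feature vectors, `coarse` the values to be refined.
n_cells, n_features, k = 500, 40, 5
utility = rng.normal(size=(n_cells, n_features))
coarse = utility[:, :3].sum(axis=1) + 0.1 * rng.normal(size=n_cells)

# Truncated SVD: keep only the k strongest directions of the utility data,
# which both denoises it and shrinks the space the linear model works in.
_, _, Vt = np.linalg.svd(utility, full_matrices=False)
Z = utility @ Vt[:k].T                      # project onto k components

# Least squares in the reduced space learns the dependency between the
# utility data and the coarse values; the model then predicts finer values.
beta, *_ = np.linalg.lstsq(Z, coarse, rcond=None)
prediction = Z @ beta
print("RMS error:", np.sqrt(np.mean((prediction - coarse) ** 2)))
```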
236
High-resolution simulation and rendering of gaseous phenomena from low-resolution data. Eilertsen, Gabriel. January 2010.
Numerical simulations are often used in computer graphics to capture the effects of natural phenomena such as fire, water and smoke. However, simulating large-scale events in this way, with the details needed for feature film, poses serious problems. Grid-based simulations at resolutions sufficient to incorporate small-scale details would be costly and use large amounts of memory, and likewise for particle-based techniques. To overcome these problems, a new framework for simulation and rendering of gaseous phenomena is presented in this thesis. It combines different existing concepts for such phenomena to resolve many of the issues in using them separately, and the result is a potent method for highly detailed simulation and rendering at low cost. The developed method uses a slice refinement technique, where a coarse particle input is transformed into a set of two-dimensional view-aligned slices, which are simulated at high resolution. These slices are subsequently used in a rendering framework that accounts for light-scattering behaviour in participating media to achieve a final, highly detailed volume rendering. However, the transformations from three to two dimensions and back easily introduce visible artifacts, so a number of techniques have been considered to overcome these problems; for example, a turbulence function is applied to the final volume density function to break up possible interpolation artifacts.
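The last step described above, perturbing the volume density with a turbulence function to break up interpolation artifacts, can be sketched as follows; the value-noise generator here is a simple stand-in, not the turbulence function actually used in the thesis.

```python
import numpy as np

def value_noise_1d(x, seed):
    """Cheap 1D value noise: random lattice values, smoothstep-interpolated."""
    rng = np.random.default_rng(seed)
    lattice = rng.uniform(-1.0, 1.0, 256)
    i = np.floor(x).astype(int) % 256
    f = x - np.floor(x)
    f = f * f * (3.0 - 2.0 * f)                 # smoothstep weights
    return lattice[i] * (1.0 - f) + lattice[(i + 1) % 256] * f

def turbulence(x, octaves=4):
    # Sum of octaves with decaying amplitude, as in classic turbulence.
    return sum(0.5 ** o * value_noise_1d(x * 2.0 ** o, seed=o) for o in range(octaves))

# Smooth slice density plus high-frequency turbulence: the noise breaks up
# the banding that plain interpolation between slices would leave visible.
x = np.linspace(0.0, 16.0, 512)
density = np.clip(np.sin(0.5 * x) + 0.25 * turbulence(x), 0.0, None)
print(density[:5].round(3))
```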
237
Efeito do refinamento da microestrutura e da adição de nióbio na resistência ao desgaste abrasivo de ferros fundidos de alto cromo. / Effect of microstructure refinement and niobium addition on abrasion resistance of high chromium cast irons. Penagos, Jose Jimmy. 06 May 2016.
Os Ferros Fundidos de Alto Cromo (FFAC), por apresentarem excelentes propriedades tribológicas, têm sido amplamente utilizados em aplicações específicas envolvendo elevadas perdas de material por abrasão, especialmente no setor da mineração. Entretanto, a demanda por materiais com maior resistência ao desgaste aumenta continuamente, sendo necessárias novas pesquisas nesta área. Portanto, o presente trabalho objetiva avaliar a utilização do nióbio para aumentar, ainda mais, a resistência à abrasão dos FFACs. Por outro lado, quando o FFAC é utilizado na fabricação de peças com geometrias irregulares (por exemplo, rotores de bombas), o componente pode apresentar diferentes níveis de refinamento da microestrutura, entre as regiões finas e espessas, devido às variações na taxa de resfriamento. No presente trabalho foi avaliado o efeito do grau de refinamento da microestrutura, e a interação do refinamento com a adição de nióbio, na resistência ao desgaste abrasivo dos FFACs. Para tanto, foram desenvolvidos quatro estudos principais: no primeiro estudo foram fabricados blocos de FFAC variando o grau de refinamento da microestrutura e foi mostrado que grandes incrementos no grau de refinamento resultam em maiores perdas de massa por abrasão. Nas microestruturas menos refinadas, os carbonetos de cromo M7C3, de maior tamanho, são menos susceptíveis ao microtrincamento e podem, ocasionalmente, atuar como barreiras ante os eventos abrasivos. Em uma segunda série de experimentos, foi avaliada a interação do efeito do grau de refinamento da microestrutura com a adição de nióbio em teores baixos (1 %), mostrando que, para microestruturas com alto grau de refinamento, adições de nióbio reduzem as perdas de massa por abrasão em até 50 %. Em uma terceira série de experimentos foi avaliada a interação dos efeitos da adição de nióbio e de molibdênio. Quando comparado com a liga isenta de molibdênio, adições simultâneas de nióbio e molibdênio resultaram em microestruturas mais refinadas, com maior microdureza da matriz e com carbonetos de nióbio (NbC) de maior dureza. Para condições de desgaste abrasivo por baixos esforços, onde o desgaste foi mais acentuado na matriz, adições simultâneas de nióbio e molibdênio resultaram em aumentos da resistência à abrasão dos FFAC estudados. Na última etapa do trabalho foi adicionado 3 % de nióbio em uma liga de FFAC com composição química inicial hipereutética (25%Cr/3%C), a qual apresentaria carbonetos primários de cromo M7C3 de grande tamanho que induziriam comportamento frágil do material quando exposto ao desgaste. Porém, a adição de nióbio resultou em um FFAC com microestrutura mais refinada (eutética), contendo NbCs compactos e, por conseguinte, mais resistente ao desgaste abrasivo. / High Chromium Cast Irons (HCCIs), because of their excellent tribological properties, have been widely used for specific applications involving high wear rates by abrasion, especially in the mining sector. However, the demand for materials with higher wear resistance is continuously growing, and further research is needed in this area. For that reason, the current work aims to assess the use of niobium to further increase the wear resistance of HCCIs. On the other hand, when HCCI is used for manufacturing components with irregular geometries (e.g. pump impellers), the component's thin and thick regions can contain different levels of structure refinement due to variation in their cooling rates.
In this work, the effect of structure refinement, and the interaction between structure refinement and niobium addition, on the abrasion resistance of HCCIs was evaluated. For that purpose, four main studies were carried out. In the first study, blocks of HCCI were manufactured with varying degrees of structure refinement, and it was shown that large increases in the degree of structure refinement result in higher mass losses by abrasion. In less refined microstructures, the larger M7C3 chromium carbides are less susceptible to microcracking and can occasionally act as barriers to abrasive particles. In the second series of experiments, the interaction between structure refinement and niobium addition at low concentrations (1 %) was evaluated, showing that for more refined microstructures, niobium additions reduce the mass losses by abrasion by up to 50 %. In the third series of experiments, the interaction between niobium and molybdenum additions was evaluated. Compared to the molybdenum-free alloy, simultaneous additions of niobium and molybdenum resulted in a more refined microstructure, a harder matrix and harder niobium carbides (NbC). For the Low-Stress Sliding Abrasion (LSSA) wear configuration, where wear was more pronounced in the matrix, the simultaneous addition of niobium and molybdenum increased the abrasion resistance of the studied HCCI. In the last stage of this work, 3 % niobium was added to an HCCI alloy with a hypereutectic initial chemical composition (25%Cr/3%C), which would otherwise present large primary chromium carbides that induce brittle behaviour under wear. The niobium addition, however, resulted in an HCCI with a more refined (eutectic) microstructure containing compact NbC carbides, and consequently with greater resistance to abrasive wear.
238
Simulação numérica de uma função indicadora de fluidos tridimensional empregando refinamento adaptativo de malhas / Numerical simulation of a 3D fluid indicator function using adaptive mesh refinement. Azeredo, Daniel Mendes. 10 December 2007.
No presente trabalho, utilizou-se o Método da Fronteira Imersa, o qual utiliza dois tipos de malhas computacionais: euleriana (utilizada para o fluido) e lagrangiana (utilizada para representar a interface de separação de dois fluidos). O software livre GMSH foi utilizado para representar um sólido por meio da sua superfície externa e também para gerar uma malha triangular, bidimensional e não estruturada para discretizar essa superfície. Essa superfície foi utilizada como condição inicial para a malha lagrangiana (fronteira imersa). Os dados da malha lagrangiana são armazenados em uma estrutura de dados chamada Halfedge, a qual é largamente utilizada em Computação Gráfica para armazenar superfícies fechadas e orientáveis. Uma vez que a malha lagrangiana esteja armazenada nesta estrutura de dados, passa-se a estudar uma hipotética interação dinâmica entre a fronteira imersa e o escoamento do fluido. Esta interação é estudada apenas em um sentido: considera-se apenas a condição de não deslizamento, isto é, a fronteira imersa acompanhará passivamente um campo de velocidades pré-estabelecido (imposto), sem exercer qualquer força ou influência sobre ele. Foi utilizado um campo de distância local com sinal (função indicadora de fluidos) para identificar o interior e o exterior da superfície que representa a interface entre os fluidos. Este campo de distância é atualizado a cada passo no tempo utilizando idéias de Geometria Computacional, o que tornou otimal o custo computacional para calcular esse campo, independentemente da complexidade geométrica da interface. Esta metodologia mostrou-se robusta e produz uma definição nítida das distintas fases dos fluidos em todos os passos no tempo. Para acompanhar e visualizar de forma mais precisa o comportamento dos fluidos na vizinhança da superfície que representa a interface de separação dos fluidos, foi utilizado um algoritmo chamado de Refinamento Adaptativo de Malhas para fazer um refinamento dinâmico da malha euleriana na vizinhança da malha lagrangiana. / The scientific motivation of the present work is the mathematical modeling and the computational simulation of multiphase flows. Specifically, the equations of a two-phase flow are written by combining the Immersed Boundary Method with a suitable fluid indicator function. It is assumed that the fluid equations are discretized on an Eulerian mesh covering the flow domain completely and that the interface between the fluid phases is discretized by a non-structured Lagrangian mesh formed by triangles. In this context, employing tools commonly found in Computational Geometry, the computation of the fluid indicator function is efficiently performed on a block-structured Eulerian mesh bearing dynamically refined patches. Formed by a set of triangles, the Lagrangian mesh, which is initially generated employing the free software GMSH, is stored in a Halfedge data structure, a data structure widely used in Computer Graphics to represent bounded, orientable closed surfaces. Once the Lagrangian mesh has been generated, one next deals with the hypothetical situation of a one-way dynamical interaction between the immersed boundary and the fluid flow; that is, considering the no-slip condition, only the action of the flow on the interface is studied. No forces arising on the interface affect the flow, the interface being passively advected with the flow under a prescribed, imposed velocity field. In particular, the Navier-Stokes equations are not solved.
The fluid indicator function is given by a signed distance function in a vicinity of the immersed boundary. It is employed to identify interior/exterior points with respect to the bounded, closed region assumed to contain one of the fluid phases in its interior. The signed distance is updated every time step employing Computational Geometry methods at optimal cost. Several examples in three dimensions are given, showing the efficiency and efficacy of the fluid indicator computation and employing the dynamic adaptive properties of the Eulerian mesh for a moving interface.
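A minimal sketch of the signed-distance fluid indicator on a uniform Eulerian grid is shown below; an analytic sphere stands in for the triangulated Lagrangian surface, and the narrow-band width is an assumed parameter, not a value from the thesis.

```python
import numpy as np

# Signed-distance fluid indicator on a uniform grid; a sphere replaces the
# triangulated GMSH interface so the distance has a closed form.
n, radius = 64, 0.3
h = 1.0 / (n - 1)                               # grid spacing
axis = np.linspace(0.0, 1.0, n)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")

phi = np.sqrt((x - 0.5) ** 2 + (y - 0.5) ** 2 + (z - 0.5) ** 2) - radius

# The sign separates the two fluid phases; the distance only needs to be
# accurate in a narrow band around the interface (assumed 4 cells wide),
# which is what keeps the geometric update cheap and refinement-friendly.
inside = phi < 0.0
band = np.abs(phi) < 4 * h
print("cells inside:", int(inside.sum()), "| narrow-band cells:", int(band.sum()))
```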
239
Développement d'algorithmes répartis corrects par construction / Developing correct-by-construction distributed algorithms. Andriamiarina, Manamiary Bruno. 20 October 2015.
Nous présentons dans cette thèse intitulée "Développement d'algorithmes répartis corrects par construction" nos travaux sur le développement et la vérification formels d'algorithmes répartis. Nous nous intéressons à ces algorithmes à cause de la difficulté de leur vérification et validation. Pour analyser ces algorithmes, nous avons choisi d'utiliser Event B pour le raffinement de modèles et la vérification de propriétés de sûreté, et TLA pour la vérification des propriétés temporelles (vivacité et équité). Nous nous sommes focalisés sur le paradigme de correction-par-construction, basé sur la modélisation par raffinement, la preuve de propriétés, ainsi que la réutilisation de modèles/preuves/propriétés (~ patrons de conception) pour guider le développement formel des algorithmes étudiés. Nous avons mis en place un paradigme de développement lors duquel un algorithme réparti est dans un premier temps caractérisé par les services qu'il fournit, lesquels sont ensuite exprimés par des propriétés de vivacité, guidant la construction des modèles Event B de cet algorithme. Les règles d'inférence de TLA nous permettent ensuite de détailler les propriétés de vivacité et de guider le développement formel par raffinement de l'algorithme. Ce paradigme, appelé "service-as-event", est caractérisé par des diagrammes d'assertions permettant de représenter les propriétés de vivacité (en prenant en compte l'équité) des algorithmes répartis étudiés et de comprendre leurs mécanismes. Ce paradigme nous a permis d'analyser des algorithmes de routage (Anycast RP de Cisco Systems et XY pour les réseaux-sur-puce (NoC)), des algorithmes de snapshot et des algorithmes d'auto-stabilisation. / The subject of this thesis is the formal development and verification of distributed algorithms. We are interested in this topic because proving that a distributed algorithm satisfies a given specification and properties is a difficult task. We choose to use the Event B method (refinement, safety properties) and the temporal logic TLA (fairness, liveness properties) for modelling the distributed algorithms. There are several existing approaches to formalising distributed algorithms, and we choose to focus on the "correct-by-construction" paradigm, which is characterised by the use of model refinement, proof of properties (safety, liveness) and reuse of formal models/proofs/properties and developments (~ design patterns) for modelling distributed algorithms. Our work introduces a paradigm which allows us to describe an algorithm by a set of services/functionalities, which are then expressed using liveness properties. These properties guide us in developing the formal Event B models of the studied algorithms. Inference rules from TLA allow us to decompose the liveness properties, thereby detailing the services and guiding the refinement process. This paradigm, called "service-as-event", is also characterized by assertion diagrams, which graphically represent liveness properties (with respect to fairness hypotheses) and detail the mechanisms and functioning of the studied distributed algorithms. The "service-as-event" paradigm allowed us to develop and verify the following algorithms: routing algorithms, such as Anycast RP (Cisco Systems) and XY for Networks-on-Chip (NoC), snapshot algorithms and self-* algorithms.
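The decomposition of a service-level liveness property by TLA inference rules can be sketched schematically; the predicates below are invented placeholders, not formulas from the thesis.

```latex
% A service stated as a leads-to property, then split with TLA's
% transitivity rule; request/queued/response are invented predicates.
\[
  P \leadsto Q \;\equiv\; \Box\,(P \Rightarrow \Diamond Q)
\]
\[
  \frac{\mathit{request} \leadsto \mathit{queued}
        \qquad \mathit{queued} \leadsto \mathit{response}}
       {\mathit{request} \leadsto \mathit{response}}
\]
```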
240
Entwurf und Verifikation von Petrinetzmodellen verteilter Algorithmen durch Verfeinerung unverteilter Algorithmen / Design and verification of Petri net models of distributed algorithms through refinement of undistributed algorithms. Wu, Bixia. 12 July 2007.
Um Entwurf und Verifikation komplizierter verteilter Algorithmen leichter und verständlicher zu machen, wird oft eine Verfeinerungsmethode verwendet. Dabei wird ein einfacher Algorithmus, der gewünschte Eigenschaften erfüllt, schrittweise zu einem komplizierten Algorithmus verfeinert. In jedem Schritt sollen die gewünschten Eigenschaften erhalten bleiben. Für nachrichtenbasierte verteilte Algorithmen haben wir eine neue Verfeinerungsmethode entwickelt. Wir beginnen mit einem Anfangsalgorithmus, der Aktionen enthält, die gemeinsame Aufgaben mehrerer Agenten beschreiben. In jedem Schritt verfeinern wir eine dieser Aktionen zu einem Netz, das nur solche Aktionen enthält, die die Aufgaben einzelner Agenten beschreiben. Jeder Schritt ist also eine Verteilung einer unverteilten Aktion. Die Analyse solcher Verfeinerungsschritte wird mit Hilfe eines neuen Verfeinerungsbegriffs - der verteilenden Verfeinerung - durchgeführt. Entscheidend dabei ist das Erhaltenbleiben der Halbordnungen des zu verfeinernden Algorithmus. Dies ist durch Kausalitäten der Aktionen der Agenten im lokalen Verfeinerungsnetz zu erreichen. Die Kausalitäten im lokalen Verfeinerungsnetz lassen sich einerseits beim Entwurf direkt durch Nachrichtenaustausch realisieren. Andererseits kann man bei der Verifikation die Gültigkeit einer Kausalität im lokalen Verfeinerungsnetz direkt vom Netz ablesen. Daher ist diese Methode leicht zu verwenden. Die Anwendung der Methode wird in der Arbeit an verschiedenen nicht trivialen Beispielen demonstriert. / In order to make the design and verification of complicated distributed algorithms easier and more understandable, a refinement method is often used: a simple algorithm, which fulfills the desired properties, is refined stepwise into a complicated algorithm, and in each step the desired properties are preserved. For message-based distributed algorithms we have developed a new refinement method. We begin with an initial algorithm, which contains actions that describe common tasks of several agents. In each step we refine one of these actions into a net that contains only actions describing the tasks of individual agents. Each step is thus a distribution of an undistributed action. The analysis of such refinement steps is accomplished with the help of a new refinement notion - the distributing refinement. Preservation of the partial order of the refined algorithm is decisive here. This can be achieved through causalities between the actions of the agents in the local refinement net. On the one hand, causalities in the local refinement net can be realized at design time directly by message passing. On the other hand, during verification the validity of a causality in the local refinement net can be read directly from the net. Therefore, the method is easy to use. Its application is demonstrated on several nontrivial examples in this thesis.