91

Development and validation of the Euler-Lagrange formulation on a parallel and unstructured solver for large-eddy simulation / Développement et validation du formalisme Euler-Lagrange dans un solveur parallèle et non-structuré pour la simulation aux grandes échelles

García Martinez, Marta 19 January 2009 (has links)
Particle-laden flows occur in industrial applications ranging from droplets in aeronautical gas turbines to fluidized-bed reactors in the chemical industry. Predicting the properties of the dispersed phase, such as its concentration and dynamics, is crucial for designing more efficient devices that meet the new European regulations on pollutant emissions. The objective of this thesis is to develop the Euler-Lagrange formulation in a parallel and unstructured solver for large-eddy simulation of such flows. This work is motivated by the rapid increase in the computing power of massively parallel machines, which opens the way to simulations that were prohibitive a decade ago. Special attention is paid to the data structures in order to keep the code simple and portable across different architectures. The developments are validated in two configurations: an academic case of decaying homogeneous isotropic turbulence and a polydisperse computation of a confined, particle-laden recirculating turbulent jet. Load balancing of the particles is highlighted as a promising solution for Lagrangian two-phase flow simulations to improve performance when the imbalance of the dispersed phase becomes too large.
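As an illustration of the Euler-Lagrange approach summarized above, the following is a minimal, hypothetical sketch of point-particle tracking with Stokes drag; the variable names, the drag law and the explicit time integration are assumptions made for illustration, not taken from the thesis or from the solver it extends.

```python
import numpy as np

def advance_particles(x_p, u_p, u_gas_at, rho_p, d_p, mu, dt):
    """Advance point particles one step with Stokes drag (illustrative sketch).

    x_p, u_p    : (N, 3) particle positions and velocities
    u_gas_at(x) : callable returning the interpolated gas velocity at positions x
    rho_p, d_p  : particle density and diameter
    mu          : gas dynamic viscosity
    dt          : time step
    """
    tau_p = rho_p * d_p**2 / (18.0 * mu)          # Stokes relaxation time
    u_g = u_gas_at(x_p)                           # gas velocity seen by the particles
    u_p_new = u_p + dt * (u_g - u_p) / tau_p      # explicit Euler drag update
    x_p_new = x_p + dt * u_p_new                  # advect particles
    return x_p_new, u_p_new

# Example with a uniform gas velocity field (placeholder values)
x = np.zeros((4, 3)); u = np.zeros((4, 3))
x_new, u_new = advance_particles(
    x, u, lambda xs: np.tile([1.0, 0.0, 0.0], (len(xs), 1)),
    rho_p=1000.0, d_p=50e-6, mu=1.8e-5, dt=1e-4)
```

In a parallel unstructured solver, each rank applies such an update to the particles located in its own mesh partition, which is why the particle load balancing mentioned in the abstract matters when particles cluster in only a few partitions.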
92

Preuves d’algorithmes distribués par raffinement / Proofs of distributed algorithms by refinement

Tounsi, Mohamed 04 July 2012 (has links)
In this thesis, we have studied and developed a proof environment for distributed algorithms. We combine the "correct-by-construction" approach based on the Event-B method with local computation models, which define abstract computing processes for solving problems by distributed algorithms and serve here as a tool for encoding and proving such algorithms. On this basis, we propose a pattern and an incremental approach that characterize a general method for proving several classes of distributed algorithms. The proposed solutions are validated and implemented in a proof tool called B2Visidia.
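Local computation models are usually presented as graph relabelling rules; the hedged sketch below illustrates one classical example of this encoding style (spanning-tree construction by repeatedly applying a local rule). It is purely illustrative and is not output of the B2Visidia tool or of the Event-B development described in the thesis.

```python
import random

def spanning_tree_by_local_rule(edges, root, seed=0):
    """Build a spanning tree by a local relabelling rule (illustrative sketch).

    Rule: if an edge joins a node already in the tree (label 'A') to a node not
    yet in the tree (label 'N'), relabel the second node to 'A' and keep the edge.
    Applying this rule to randomly chosen applicable edges mimics an
    asynchronous distributed execution.
    """
    rng = random.Random(seed)
    label = {}
    for u, v in edges:
        label.setdefault(u, "N")
        label.setdefault(v, "N")
    label[root] = "A"
    tree = []
    while True:
        candidates = [(u, v) for (u, v) in edges
                      if {label[u], label[v]} == {"A", "N"}]
        if not candidates:
            break
        u, v = rng.choice(candidates)        # one nondeterministic local step
        child = v if label[v] == "N" else u
        label[child] = "A"
        tree.append((u, v))
    return tree

print(spanning_tree_by_local_rule([(1, 2), (2, 3), (1, 3), (3, 4)], root=1))
```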
93

Caractérisation du comportement des assemblages par goujons collés dans les structures bois / Characterization of the behavior of glued-in rod connections in timber structures

Lartigau, Julie 12 July 2013 (has links)
Glued-in rods make it possible to move beyond traditional bolted connections, preserve a large part of the original timber and offer aesthetic benefits, since the repair is hidden in the cross sections of the members. Despite previous research programmes in many countries, several characterization and design rules predicting the axial strength of such connections are available, but a common criterion is still lacking. Moreover, the behaviour of these connections under fire is not well known: during fire exposure the connections are not directly in contact with the flames, since they are insulated by the surrounding wood, but further assessment of the mechanical behaviour of the polymer adhesive at the temperatures reached inside the joint is needed. The present study combines experiments with finite element computations in order to understand the mechanisms governing the failure of these connections. A large experimental campaign quantifies the influence of various parameters, such as the anchorage length, the rod-to-grain angle, the wood species and the temperature of exposure, on the local mechanical properties of the connections; the intrinsic properties of the rod and of the adhesives used are also characterized. This experimental database is essential for fitting the linear elastic finite element model. The finite element modelling reproduces the experimental configuration and reveals significant normal (tensile) stresses at the top of the bond line, ahead of the shear stresses. Within the framework of equivalent linear elastic fracture mechanics, R-curves in mode I and mode II are estimated for each specimen. Finally, a mixed-mode (mode I and mode II) fracture criterion is used to describe the complete fracture process of glued-in rods. An analytical formulation is then proposed to estimate the peak load of each specimen; this approach makes it possible to produce design tables for glued-in rod connections that can be used by design offices.
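The mixed-mode criterion itself is not given in the abstract; a common generic form found in the fracture mechanics literature is a power law combining the mode I and mode II energy release rates, sketched below purely for illustration. The exponents and toughness values are placeholders, not values from the thesis.

```python
def mixed_mode_failure(G_I, G_II, G_Ic, G_IIc, m=1.0, n=1.0):
    """Generic power-law mixed-mode criterion (illustrative, not the thesis formulation).

    G_I, G_II   : current energy release rates in mode I and mode II
    G_Ic, G_IIc : critical energy release rates (fracture toughnesses)
    m, n        : empirical exponents of the interaction law
    Failure is predicted when the interaction sum reaches 1.
    """
    return (G_I / G_Ic) ** m + (G_II / G_IIc) ** n >= 1.0

# Example: a joint loaded mostly in shear, with placeholder toughness values
print(mixed_mode_failure(G_I=0.05, G_II=0.60, G_Ic=0.30, G_IIc=0.90))
```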
94

Implementation trade-offs for FPGA accelerators / Compromis pour l'implémentation d'accélérateurs sur FPGA

Deest, Gaël 14 December 2017 (has links)
Hardware acceleration is the use of custom hardware architectures to perform some computations faster or more efficiently than on general-purpose hardware. Accelerators have traditionally been used mostly in resource-constrained environments, such as embedded systems, where resource efficiency was paramount. Over the last fifteen years, with the end of the empirical scaling laws that had governed hardware design for decades, they have also made their way into datacenters and high-performance computing environments.
FPGAs constitute a convenient implementation platform for such accelerators, allowing subtle, application-specific trade-offs between all performance metrics (throughput/latency, area, energy, accuracy, etc.). However, identifying good trade-offs is a challenging task, as the design space is usually extremely large. This thesis proposes design methodologies to address this problem. First, we focus on performance-accuracy trade-offs in the context of floating-point to fixed-point conversion. Using fixed-point arithmetic instead of floating-point is an effective way to reduce hardware resource usage, but it comes at a price in numerical accuracy. The validity of a fixed-point implementation can be assessed using either numerical simulations or analytical models derived from the algorithm. Compared to simulation-based methods, analytical approaches enable more exhaustive design space exploration and can thus increase the quality of the final architecture; however, they are currently applicable only to limited sets of algorithms. In the first part of this thesis, we extend such techniques to multi-dimensional linear filters, such as image processing kernels. Our technique is implemented as a source-level analysis using techniques from the polyhedral compilation toolset, and validated against simulations with real-world input. In the second part of this thesis, we focus on iterative stencil computations, a naturally arising pattern found in many scientific and embedded applications. Because of this diversity, there is no single best architecture for stencils: each algorithm has unique computational features (update formula, dependences) and each application has different performance constraints and requirements. To address this problem, we propose a family of hardware accelerators for stencils, featuring carefully chosen design knobs, along with simple performance models to drive the exploration. Our architecture is implemented as an HLS-optimized code generation flow, and performance is measured with actual execution on the board. We show that these models can be used to identify the most interesting design points for each use case.
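To make the analytical accuracy models mentioned above concrete, the sketch below shows one classical simplified bound (an assumption for illustration, not the model developed in the thesis): for a linear filter, the worst-case output error caused by quantizing the input to a fixed-point step q is bounded by the L1 norm of the impulse response times q/2.

```python
import numpy as np

def fixed_point_quantize(x, frac_bits):
    """Round x onto a fixed-point grid with `frac_bits` fractional bits."""
    q = 2.0 ** (-frac_bits)
    return np.round(x / q) * q

def worst_case_output_error(h, frac_bits):
    """Worst-case output error of a linear filter h when only its input is quantized.

    Each input sample carries at most q/2 of rounding error, and errors propagate
    linearly, so the output error is bounded by (q/2) * sum(|h|).
    This is a simplified textbook bound, used here purely for illustration.
    """
    q = 2.0 ** (-frac_bits)
    return 0.5 * q * np.sum(np.abs(h))

h = np.array([0.25, 0.5, 0.25])                      # small smoothing kernel (placeholder)
print(fixed_point_quantize(np.array([0.1234]), 8))   # input sample on the 8-bit grid
print(worst_case_output_error(h, frac_bits=8))       # corresponding output error bound
```

Bounds of this kind are what allow an analytical exploration to discard word-length choices without running a single simulation, which is the advantage over simulation-based validation discussed in the abstract.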
95

Laminar Conjugate Natural Convection And Surface Radiation In Horizontal Annuli

Shaija, A 10 1900 (has links)
Numerical studies of two-dimensional laminar conjugate natural convection flow and heat transfer in horizontal annuli, formed between an inner heat-generating solid cylinder and an outer isothermal circular boundary, are performed with and without the effect of surface radiation. The two configurations of the concentrically placed inner cylinder are a circular cylinder (CC configuration) and a square cylinder (SOS, i.e., Square-On-Side, configuration). The mathematical formulation consists of the continuity equation, the momentum equations with the Boussinesq approximation, and the solid and fluid energy equations. Numerical solutions are obtained by discretising the governing equations on a collocated (non-staggered) mesh, and the pressure-velocity coupling is handled via the SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) algorithm. A cylindrical polar coordinate system is employed for the CC configuration and a Cartesian coordinate system for the SOS configuration. The convective terms are discretised with a donor-cell differencing scheme and the diffusion terms with a central differencing scheme. The algebraic equations resulting from the discretisation are solved using the line-by-line TDMA (Tri-Diagonal Matrix Algorithm). A global iteration scheme over each time step is used for better coupling of temperature and the flow variables, and steady-state solutions are obtained by time-marching. Steady-state results of conjugate pure natural convection are obtained for a Grashof number, based on the volumetric heat generation and the outer radius, ranging from 10^4 to 10^10, for solid-to-fluid thermal conductivity ratios of 1, 5, 10, 50 and 100, and for aspect ratios of 0.2 and 0.4, with air as the working medium (Pr = 0.708) for the CC and SOS configurations. The flow and temperature distributions are presented in terms of isotherm and streamline maps. Results are presented for several quantities of interest, such as local and average Nusselt numbers on the inner and outer boundaries, dimensionless local temperatures on the inner boundary, and dimensionless maximum and average solid cylinder temperatures. The results show that the flow in the annulus is characterized by double or quadruple vortex patterns. Of the dimensionless maximum solid temperature, average solid temperature and average inner boundary temperature, the first two are much more sensitive to the solid-to-fluid thermal conductivity ratio. Surface radiation effects are studied numerically in conjunction with natural convection; the coupling with surface radiation arises through the solid-fluid interface thermal condition. To account for the radiation effects, configuration factors among the subsurfaces of the inner and outer boundaries formed by the computational mesh are determined. Results are obtained for the CC and SOS configurations for emissivities ranging from 0.2 to 0.8, with the other parameters as in the pure natural convection case. It is found that, even at low surface emissivity, radiation plays a significant role in bringing down the convective component and enhancing the total Nusselt numbers across the annulus. The presence of radiation is found to reduce the dimensionless temperatures inside the solid and to homogenise the temperature distribution in the fluid. The radiative Nusselt number is about 50-70% of the total Nusselt number, depending on the radiative parameters chosen.
This emphasizes the need to take the coupling of radiation and natural convection into account for accurate prediction of the flow and heat transfer characteristics in the annulus. The solution of the conjugate problem facilitates the determination of the solid temperature distribution, which is important in connection with the safety aspects of various thermal energy systems. Correlations as functions of Grashof number and thermal conductivity ratio are constructed for estimating the various quantities of interest for the two configurations and aspect ratios, for pure natural convection and for combined natural convection and radiation. The results are expected to be useful in the design of thermal systems such as spent nuclear fuel casks during transportation and storage, underground transmission cables, and the cooling of electrical and electronic components.
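The line-by-line solution procedure mentioned above relies on the Thomas algorithm (TDMA) for tridiagonal systems; a minimal generic implementation is sketched below for illustration only, not taken from the code used in the thesis.

```python
import numpy as np

def tdma(a, b, c, d):
    """Solve a tridiagonal system A x = d with the Thomas algorithm (TDMA).

    a : sub-diagonal   (a[0] is unused)
    b : main diagonal
    c : super-diagonal (c[-1] is unused)
    d : right-hand side
    """
    n = len(d)
    cp = np.empty(n); dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: a small diagonally dominant system such as arises from 1D diffusion
n = 5
a = np.full(n, -1.0); b = np.full(n, 2.5); c = np.full(n, -1.0)
print(tdma(a, b, c, np.ones(n)))
```

In the line-by-line approach, each grid line of the discretised domain yields one such tridiagonal system, which is swept repeatedly until the coupled field converges.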
96

Programmtransformationen für Vielteilchensimulationen auf Multicore-Rechnern / Program transformations for many-particle simulations on multicore computers

Schwind, Michael 15 December 2010 (has links) (PDF)
This dissertation considers program transformations for the class of regular-irregular loop complexes that typically occur in complex simulation codes for many-particle systems, and investigates the efficiency of the resulting programs on modern multicore systems. Regular loop complexes are characterized by fixed loop bounds and a regular structure of the dependences between computations; in irregular computations, the dependences are only known at run time and depend strongly on the input data. The regular-irregular computations considered here couple both kinds of computation tightly. The challenge of implementing regular-irregular loop complexes efficiently on modern multicore systems lies in combining transformation techniques that allow a high degree of parallelism while also accounting for the locality of the computations. Modern multicore systems consist of a complex memory hierarchy of private and shared caches together with a shared memory interface; these architectural features make it necessary to revisit program transformations and to re-evaluate the efficiency of the computations. A number of transformations are considered that change both the order of the computations and the order in which the data are stored in memory, in order to achieve increased spatial and temporal locality. Parallelization and locality are closely linked and jointly determine the efficiency of parallel programs, and several parallelization strategies for regular-irregular computations on modern multicore systems are examined. A further part of the work considers purely irregular computations, as they are typical of a large number of many-particle simulation codes. These simulation codes were likewise examined on multicore systems and analysed with respect to how well they scale on modern multicore CPUs. The novel architecture of multicore systems, in particular the highly shared memory bandwidth, again makes a fresh look at such purely irregular computations necessary. Techniques are considered that reduce the amount of data to be loaded and thus reduce the pressure on the shared memory bandwidth.
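As a hedged illustration of the kind of data-reordering transformation discussed above (not a transformation taken from the dissertation), the sketch below sorts particles by the spatial cell they occupy, so that particles that interact with each other also lie close together in memory, improving cache locality.

```python
import numpy as np

def reorder_particles_by_cell(positions, box_size, cell_size):
    """Sort particle data by spatial cell index to improve cache locality.

    positions : (N, 3) particle coordinates in a cubic box of edge `box_size`
    cell_size : edge length of the cells used for the spatial binning
    Returns the permutation and the reordered positions.
    """
    n_cells = int(np.floor(box_size / cell_size))
    cells = np.floor(positions / cell_size).astype(int) % n_cells
    # linearize the 3D cell index so that neighbouring cells tend to be close in memory
    linear = (cells[:, 0] * n_cells + cells[:, 1]) * n_cells + cells[:, 2]
    order = np.argsort(linear, kind="stable")
    return order, positions[order]

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(1000, 3))
order, pos_sorted = reorder_particles_by_cell(pos, box_size=10.0, cell_size=2.5)
print(order[:5])
```

Storing any per-particle arrays (velocities, forces) in the same permuted order keeps the subsequent irregular interaction loops streaming through memory rather than jumping across it, which is exactly the locality concern the abstract raises for shared-bandwidth multicore systems.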
97

Investigation of the electrodynamic retard devices using parallel computer systems / Elektrodinaminių lėtinimo įtaisų tyrimas taikant lygiagrečiąsias kompiuterines sistemas

Pomarnacki, Raimondas 06 January 2012 (has links)
An analysis based on numerical methods can calculate the electrical and structural parameters of microwave devices quite accurately; however, numerical methods require substantial computation resources and time. The rapid improvement of computer technologies, and of software implementing these numerical methods, has created the conditions for fast, computer-aided design of microwave devices. The dissertation addresses the analysis and synthesis of microwave devices using parallel computer systems. The main objects of the research are multiconductor microstrip lines and meander microstrip delay lines; these structures transmit, synchronize and delay the transmitted signals and are an integral part of many microwave devices, so their fast and accurate analysis and synthesis speeds up device development. The main goal of the dissertation is to develop parallel methodologies and algorithms for fast and accurate analysis and synthesis of these lines. The developed algorithms and methodologies are intended for microwave device modelling and computer-aided design software.
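A hedged sketch of how such an analysis can be distributed over a parallel system is shown below; the swept parameter (strip width), the worker function and its toy cost model are purely illustrative placeholders, not the methods or solvers developed in the dissertation.

```python
from concurrent.futures import ProcessPoolExecutor

def analyse_line(strip_width_mm):
    """Placeholder for one numerical analysis of a microstrip delay line.

    In a real tool this would run the field solver for the given geometry and
    return characteristics such as the characteristic impedance and the delay.
    """
    impedance = 50.0 + 10.0 / strip_width_mm     # toy model standing in for the solver
    delay_ns = 1.0 + 0.05 * strip_width_mm
    return strip_width_mm, impedance, delay_ns

if __name__ == "__main__":
    widths = [0.2, 0.4, 0.6, 0.8, 1.0]
    with ProcessPoolExecutor() as pool:          # one analysis per worker process
        results = list(pool.map(analyse_line, widths))
    for w, z, t in results:
        print(f"width={w:.1f} mm  Z={z:.1f} ohm  delay={t:.2f} ns")
```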
98

Hydrodynamics of gas-liquid Taylor flow in microchannels / Hydrodynamique des écoulements de Taylor gaz-liquide en microcanaux

Abadie, Thomas 14 November 2013 (has links)
This thesis focuses on the hydrodynamics of gas-liquid Taylor flow (or slug flow) in microchannels. These flows, which are generally dominated by surface tension forces, have been investigated in rectangular channels of various cross-sectional aspect ratios by means of both experimental visualizations and numerical simulations. The first, experimental part characterizes the bubble generation process (bubble length and frequency of break-up) as a function of the operating conditions, the fluid properties (in particular through the capillary number), and the junction where the two fluids merge. Numerical simulations of fully developed Taylor flow have then been carried out with the JADIM code. The computation of such surface-tension-dominated flows requires an accurate calculation of the surface tension force, because spurious numerical currents are generated near the interface. Some limitations of the original Volume of Fluid method have been highlighted and a Level Set method has been implemented in order to improve the calculation of capillary effects; both methods have been compared in detail in terms of spurious currents. 3D numerical simulations have been performed to study the influence of the capillary number and of the channel geometry on the dynamics of the Taylor bubbles (velocity, pressure and bubble shapes). Inertial effects, which are often neglected, have been taken into account and their influence, notably on the pressure jumps across the interface, has been shown to be non-negligible. Mixing in the liquid slug has also been studied.
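To give a concrete feel for the interface-capturing ingredients mentioned above, the sketch below shows the smoothed Heaviside and delta functions commonly used with Level Set methods to spread interface quantities (such as the surface tension force) over a few cells. This generic form is an illustrative assumption, not the exact formulation implemented in JADIM.

```python
import numpy as np

def smoothed_heaviside(phi, eps):
    """Smoothed Heaviside of the signed-distance function phi, spread over a width 2*eps."""
    return np.where(phi < -eps, 0.0,
           np.where(phi > eps, 1.0,
                    0.5 * (1.0 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)))

def smoothed_delta(phi, eps):
    """Smoothed Dirac delta, the derivative of the smoothed Heaviside above."""
    return np.where(np.abs(phi) > eps, 0.0,
                    0.5 / eps * (1.0 + np.cos(np.pi * phi / eps)))

phi = np.linspace(-3.0, 3.0, 7)   # signed distance to the interface (in cell widths)
print(smoothed_heaviside(phi, eps=1.5))
print(smoothed_delta(phi, eps=1.5))
```

How sharply such functions (and the interface curvature) are evaluated is what controls the magnitude of the spurious currents that the thesis uses to compare the Volume of Fluid and Level Set implementations.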
99

On Three Dimensional High Lift Flow Computations

Gopalakrishna, N January 2014 (has links) (PDF)
Computing 3D high-lift flows has been a challenge to the CFD community for three important reasons: complex physics, complex geometries and large computational requirements. In recent years, considerable progress has been made in understanding the suitability of various CFD solvers for computing 3D high-lift flows through the systematic studies carried out under the High Lift Prediction workshops. The primary focus of these workshops is to assess the ability of CFD solvers to predict CLmax (the maximum lift coefficient) and αmax (the corresponding angle of attack), in addition to the predictability of lift and drag in the linear region. There is now a reasonable consensus in the community about the ability of CFD solvers to predict these quantities, and fresh efforts to further understand their ability to predict the more complex physics associated with these flows have already begun. The goal of this thesis is to assess the capability of computational methods in predicting such complex flow phenomena associated with 3D high-lift systems. For the evaluation, the NASA three-element Trapezoidal Wing configuration, which poses a challenging numerical modelling task, was selected. The unstructured-data-based 3D RANS solver HiFUN (High Resolution Flow Solver for UNstructured Meshes) is used to investigate the high-lift flow. The computations were run fully turbulent, using the one-equation Spalart-Allmaras turbulence model. A summary of the results obtained using the flow solver HiFUN for the 3D high-lift NASA Trapezoidal Wing is presented. Hybrid unstructured grids have been used for the computations. Grid-converged solutions obtained for the clean wing and for the wing with support brackets are compared with experimental data. The ability of the solver to predict critical design parameters associated with the high-lift flow, such as αmax and CLmax, is demonstrated, and the utility of CFD tools in predicting the change in aerodynamic parameters in response to perturbational changes in the configuration is brought out. The solutions obtained for the high-lift configuration with two variants of the Spalart-Allmaras turbulence model are compared. To check for unsteadiness in the flow, particularly near stall, unsteady simulations were performed on a static grid. Lastly, hysteresis on the lower leg of the lift curve is discussed, and the results obtained from quasi-steady and dynamic unsteady simulations are presented. Inferences from the study on useful design practices pertaining to 3D high-lift flow simulations are summarized.
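As a simple illustration of the quantities discussed above (with made-up numbers, not data from the thesis or the workshops), the sketch below extracts CLmax and αmax from a computed lift-curve sweep and estimates the slope of its linear region.

```python
import numpy as np

# Hypothetical lift-curve sweep: angle of attack (deg) vs computed lift coefficient
alpha = np.array([0.0, 4.0, 8.0, 12.0, 16.0, 20.0, 24.0, 28.0, 30.0, 32.0])
cl    = np.array([0.60, 0.95, 1.30, 1.62, 1.90, 2.15, 2.35, 2.48, 2.45, 2.30])

i_max  = int(np.argmax(cl))
cl_max = cl[i_max]          # CLmax: maximum lift coefficient of the sweep
a_max  = alpha[i_max]       # alpha_max: angle of attack at which CLmax occurs
print(f"CLmax = {cl_max:.2f} at alpha = {a_max:.1f} deg")

# Slope of the linear region (per degree), estimated from the first few points
slope = np.polyfit(alpha[:4], cl[:4], 1)[0]
print(f"lift-curve slope ~ {slope:.3f} per degree")
```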
100

Contribution to the Numerical Modeling of the VKI Longshot Hypersonic Wind Tunnel

Bensassi, Khalil 29 January 2014 (has links)
The numerical modelling of the VKI Longshot facility remains a challenging task, as it requires multi-physics numerical methods in order to simulate all of its components. In the present dissertation, numerical tools were developed to study each component of the facility separately, and a detailed investigation of each stage of the shot was performed. This helped to better understand the different processes involved in the flow development inside this hypersonic wind tunnel; however, computing the different regions of the facility as if they were independent of each other remains an approximation at best. The accuracy of the rebuilding code for determining the free-stream conditions and the total enthalpy in the VKI Longshot facility was investigated using a series of unsteady numerical computations of axisymmetric hypersonic flow over a heat flux probe. Good agreement was obtained between the numerical results and the measured data for both the stagnation pressure and the heat flux during the useful test time. The driver-driven part of the Longshot facility was modelled using the quasi one-dimensional Lagrangian solver L1d2, for the three main conditions used in the experiments (low, medium and high Reynolds number). The chambrage effect due to the junction between the driver and driven tubes of the VKI Longshot facility was investigated; the computations showed a great benefit of the chambrage in increasing the speed of the piston and thus the final compression ratio of the test gas. Two-dimensional simulations of the flow in the driver and driven tubes were performed using the Arbitrary Lagrangian Eulerian (ALE) solver in COOLFluiD, and a parallel multi-domain strategy was developed in order to integrate the moving piston within the computational domain. The computed pressure in the reservoir was compared to that provided by the experiment, and good agreement was obtained for both conditions. Finally, an attempt was made to compute the starting process of the flow in the contoured nozzle. The transient computation showed how the primary shock initiates the flow in the nozzle before reaching the exit plane 1.5 ms after the diaphragm rupture. The complex interactions of the reflected shocks in the throat raise the temperature above 9500 K, which was not expected. Chemical dissociation of nitrogen was not taken into account in this transient investigation, although it may play a key role considering the range of temperatures reached near the throat. / Doctorat en Sciences de l'ingénieur / info:eu-repo/semantics/nonPublished
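As a back-of-the-envelope illustration of the piston compression process described above (using the ideal isentropic relation as an assumption, not the L1d2 model or actual Longshot operating conditions), the sketch below estimates the pressure and temperature rise of the test gas for a given volumetric compression ratio.

```python
def isentropic_compression(p1, T1, compression_ratio, gamma=1.4):
    """Ideal isentropic compression of the test gas by a piston.

    p1, T1            : initial pressure [Pa] and temperature [K]
    compression_ratio : V1 / V2, the volumetric compression achieved by the piston
    gamma             : ratio of specific heats (1.4 used as a placeholder for nitrogen)
    Returns the final pressure and temperature; real-gas and loss effects are ignored.
    """
    p2 = p1 * compression_ratio ** gamma
    T2 = T1 * compression_ratio ** (gamma - 1.0)
    return p2, T2

# Placeholder numbers chosen only to illustrate the orders of magnitude involved
p2, T2 = isentropic_compression(p1=1.0e5, T1=300.0, compression_ratio=100.0)
print(f"p2 = {p2 / 1e6:.1f} MPa, T2 = {T2:.0f} K")
```

A higher piston speed, such as that produced by the chambrage effect, increases the achievable compression ratio and hence the reservoir pressure and temperature that feed the nozzle.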
