About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
711

A covariant 4D formalism to establish constitutive models : from thermodynamics to numerical applications / Modèles covariants de comportement issus d'un formalisme 4D : de la thermodynamique aux applications numériques

Wang, Mingchuan 21 September 2016 (has links)
The objective of this work is to establish mechanical constitutive models for materials undergoing large deformations. Instead of the classical 3D approaches, in which the notion of objectivity is ambiguous and different objective transports may be used arbitrarily, the four-dimensional formalism derived from the theories of Relativity is applied. Within a 4D formalism, the two aspects of the notion of objectivity, frame-indifference (or covariance) and invariance to the superposition of rigid-body motions, can now be distinguished. Moreover, the use of this 4D formalism ensures the covariance of the models. For rate-form models, the Lie derivative is chosen as a total time derivative, which is both covariant and invariant to the superposition of rigid-body motions. Within the 4D formalism, we also propose a 4D thermodynamic framework to develop 4D constitutive models such as hyperelasticity, anisotropic elasticity, hypoelasticity and elastoplasticity. 3D models are then derived from the 4D models and studied by applying them in finite element simulations using the Zset software.
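As background for the transport named above: for a contravariant second-order tensor field T and the 4-velocity field u, the Lie derivative has the standard component form (a textbook definition given for orientation, not a reconstruction of the thesis' specific constitutive equations):

\[
\mathcal{L}_{u} T^{ab} = u^{c}\,\partial_{c} T^{ab} - T^{cb}\,\partial_{c} u^{a} - T^{ac}\,\partial_{c} u^{b}
\]

Each term transforms tensorially under arbitrary coordinate changes, so a rate equation written with this derivative is covariant by construction, which is precisely the property the abstract emphasises.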
712

Modèles et méthodes pour la gestion logistique optimisée dans le domaine des services et de la santé / Models and optimization approaches for logistic problems in health care systems and services sector

Ait Haddadene, Syrine Roufaida 30 September 2016 (has links)
This work addresses the vehicle routing problem (VRP) with timing constraints: time windows (TW), synchronization (S) and precedence (P), applied to the home health care sector, giving the VRPTW-SP. The problem consists in establishing a daily visit plan for caregivers at the homes of patients requiring one or several services. We first consider the single-objective case; a bi-objective version of the problem is then introduced. For the single-objective problem, a Mixed Integer Linear Program (MILP), two constructive heuristics, two local search procedures and three local-search-based metaheuristics are proposed: a Greedy Randomized Adaptive Search Procedure (GRASP), an Iterated Local Search (ILS) and a hybrid approach (GRASP × ILS), sketched below. Regarding the bi-objective VRPTW-SP, different versions of multi-objective evolutionary algorithms embedding various local search strategies are proposed: the Non-dominated Sorting Genetic Algorithm version 2 (NSGAII), a generalized version of the latter with multiple restarts (MS-NSGAII) and an Iterated Local Search combined with the non-dominated sorting concept (NSILS). All these algorithms have been tested and validated on instances adapted from the literature. Finally, we extended the VRPTW-SP to a multi-period planning horizon, giving the multi-period VRPTW-SP, and proposed a MILP and a matheuristic approach for this extension.
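As a rough illustration of how a GRASP construction and an ILS improvement loop fit together, here is a minimal, runnable sketch on a toy single-vehicle instance. It omits the thesis' time-window, synchronization and precedence constraints entirely, and every parameter (RCL fraction, iteration counts, random points) is an arbitrary illustrative choice, not the thesis' algorithm or data.

```python
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_cost(tour, pts):
    # Cost of the closed tour (returns to the depot, index 0)
    return sum(dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def grasp_build(pts, alpha, rng):
    # GRASP construction: repeatedly pick a random customer from the
    # restricted candidate list (RCL) of the nearest unvisited ones.
    unvisited = set(range(1, len(pts)))
    tour = [0]
    while unvisited:
        ranked = sorted(unvisited, key=lambda j: dist(pts[tour[-1]], pts[j]))
        rcl = ranked[:max(1, int(alpha * len(ranked)))]
        nxt = rng.choice(rcl)
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour, pts):
    # Local search: first-improvement 2-opt until no improving move remains
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 2, len(tour) + 1):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_cost(cand, pts) < tour_cost(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour

def perturb(tour, rng):
    # ILS kick: classical double-bridge move
    i, j, k = sorted(rng.sample(range(1, len(tour)), 3))
    return tour[:i] + tour[j:k] + tour[i:j] + tour[k:]

def grasp_ils(pts, starts=5, ils_iters=30, alpha=0.3, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(starts):                        # GRASP restarts
        sol = two_opt(grasp_build(pts, alpha, rng), pts)
        for _ in range(ils_iters):                 # ILS around each start
            cand = two_opt(perturb(sol, rng), pts)
            if tour_cost(cand, pts) < tour_cost(sol, pts):
                sol = cand
        if best is None or tour_cost(sol, pts) < tour_cost(best, pts):
            best = sol
    return best

rng = random.Random(42)
pts = [(rng.random(), rng.random()) for _ in range(20)]
best = grasp_ils(pts)
print("best tour cost:", round(tour_cost(best, pts), 3))
```

The hybridisation is the point: GRASP supplies diversified starting solutions, while ILS intensifies around each of them with perturbation plus local search.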
713

Condition-based maintenance policies for multi-component systems considering stochastic dependences / Politiques de maintenance conditionnelle pour des systèmes multi-composant avec dépendances stochastiques

Li, Heping 04 October 2016 (has links)
Nowadays, industrial systems contain numerous components, so they become more and more complex, regarding both their logical structure and the various dependences (economic, stochastic and structural) between components. These dependences affect maintenance optimization as well as reliability analysis. Condition-based maintenance, which manages maintenance activities based on information collected through monitoring, has gained a lot of attention over recent years, but stochastic dependences are rarely used in the decision-making process. This thesis is therefore devoted to proposing condition-based maintenance policies that take advantage of both economic and stochastic dependences for multi-component systems. In terms of economic dependence, the proposed maintenance policies are designed to be maximally effective in providing opportunities for maintenance grouping; a decision rule is established that permits maintenance grouping with different inspection periods. Stochastic dependence due to a common degradation part is modelled by Lévy and nested Lévy copulas, and condition-based maintenance policies with a non-periodic inspection scheme are proposed to exploit this dependence. Our studies show the necessity of taking both economic and stochastic dependences into account in maintenance decisions, and numerical experiments confirm the advantages of our maintenance policies compared with other existing policies in the literature.
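The flavour of such a policy can be conveyed by a small simulation. The sketch below uses a shared gamma-distributed degradation increment as a crude stand-in for the Lévy-copula dependence of the thesis, with a periodic (not the thesis' non-periodic) inspection scheme; thresholds, costs and gamma parameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
DT, HORIZON = 1.0, 200                     # inspection period and horizon
L_PREV, L_FAIL = 6.0, 10.0                 # preventive / failure thresholds
C_PREV, C_FAIL, C_SETUP = 1.0, 5.0, 0.5    # maintenance and set-up costs

def increments(n):
    # Each component's increment = own gamma part + a common gamma part;
    # the shared part induces positive dependence between degradations.
    common = rng.gamma(shape=0.05 * DT, scale=1.0, size=n)
    own = rng.gamma(shape=0.05 * DT, scale=1.0, size=(2, n))
    return own + common

x = np.zeros(2)          # degradation levels of the two components
cost = 0.0
for inc in increments(HORIZON).T:
    x += inc
    group = x >= L_PREV  # components selected for maintenance at inspection
    if group.any():
        cost += C_SETUP  # one shared set-up for the whole group
        for i in range(2):
            if x[i] >= L_FAIL:
                cost += C_FAIL   # corrective replacement
                x[i] = 0.0
            elif group[i]:
                cost += C_PREV   # preventive replacement
                x[i] = 0.0
print("average cost per unit time:", round(cost / (HORIZON * DT), 4))
```

Because the shared set-up cost is paid once per intervention, grouping maintenance actions exploits the economic dependence, while the common degradation part makes joint interventions more likely, which is the interaction the thesis studies.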
714

Socially responsible investment and portfolio selection

Drut, Bastien 05 October 2011 (has links)
This thesis aims at determining the theoretical and empirical consequences of introducing socially responsible indicators into traditional portfolio selection. The first chapter studies the significance of the mean-variance efficiency loss of a sovereign bond portfolio when a constraint on the average socially responsible rating of the issuing governments is introduced. Using a sample of developed-country sovereign bonds over the period 1995-2008, we show that it is possible to increase the average socially responsible rating appreciably without a significant loss in terms of diversification. The second chapter proposes a theoretical analysis of the impact on the efficient frontier of a constraint on the socially responsible rating of the portfolio. We highlight that different cases may arise depending on the correlation between the expected returns and the socially responsible ratings, and on the investor's risk aversion. Lastly, as the efficiency of socially responsible portfolios is a central question in the financial literature, the last chapter proposes a new mean-variance efficiency test for the realistic case in which no risk-free asset is available. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
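The rating-constrained problem of the first chapter has the following generic shape: minimise portfolio variance subject to a return target and a floor on the portfolio's average rating. The sketch below illustrates it with made-up numbers (three assets, arbitrary returns, covariance and ratings); it is not the thesis' data, sample or estimator.

```python
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.03, 0.04, 0.05])            # expected returns (illustrative)
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.05, 0.02],
                [0.00, 0.02, 0.06]])         # covariance matrix (illustrative)
sr = np.array([0.9, 0.5, 0.3])               # socially responsible ratings
target_ret, min_rating = 0.04, 0.6

res = minimize(
    lambda w: w @ cov @ w,                   # portfolio variance to minimise
    x0=np.ones(3) / 3,
    bounds=[(0, 1)] * 3,                     # long-only weights
    constraints=[
        {"type": "eq",   "fun": lambda w: w.sum() - 1},          # budget
        {"type": "eq",   "fun": lambda w: w @ mu - target_ret},  # return target
        {"type": "ineq", "fun": lambda w: w @ sr - min_rating},  # rating floor
    ],
    method="SLSQP",
)
print("weights:", res.x.round(3), "avg rating:", round(res.x @ sr, 3))
```

Sweeping `min_rating` upward and recording the minimal variance at each level traces out exactly the kind of constrained frontier whose distance from the unconstrained one the chapter measures.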
715

Essays in dynamic macroeconometrics

Bańbura, Marta 26 June 2009 (has links)
The thesis contains four essays covering topics in the field of macroeconomic forecasting.

The first two chapters consider factor models in the context of real-time forecasting with many indicators. Using a large number of predictors offers an opportunity to exploit a rich information set and is also considered a more robust approach in the presence of instabilities. On the other hand, it poses the challenge of extracting the relevant information in a parsimonious way. Recent research shows that factor models provide an answer to this problem. The fundamental assumption underlying these models is that most of the co-movement of the variables in a given dataset can be summarized by only a few latent variables, the factors. This assumption seems to be warranted in the case of macroeconomic and financial data. Important theoretical foundations for large factor models were laid by Forni, Hallin, Lippi and Reichlin (2000) and Stock and Watson (2002). Since then, different versions of factor models have been applied to forecasting, structural analysis and the construction of economic activity indicators. Recently, Giannone, Reichlin and Small (2008) used a factor model to produce projections of U.S. GDP in the presence of a real-time data flow. They propose a framework that can cope with large datasets characterised by staggered and non-synchronous data releases (sometimes referred to as a "ragged edge"). This is relevant because, in practice, important indicators like GDP are released with a substantial delay and, in the meantime, more timely variables can be used to assess the current state of the economy.

The first chapter, entitled "A look into the factor model black box: publication lags and the role of hard and soft data in forecasting GDP", is based on joint work with Gerhard Rünstler and applies the framework of Giannone, Reichlin and Small (2008) to the euro area. In particular, we are interested in the role of "soft" and "hard" data in the GDP forecast and how it relates to their timeliness. The soft data include surveys and financial indicators and reflect market expectations; they are usually promptly available. In contrast, the hard indicators of real activity measure certain components of GDP directly (e.g. industrial production) and are published with a significant delay. We propose several measures to assess the role of individual series or groups of series in the forecast while taking their respective publication lags into account. We find that, once their timeliness is properly accounted for, surveys and financial data contain important information for the GDP forecasts beyond the monthly real activity measures.

The second chapter, entitled "Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data", is based on joint work with Michele Modugno. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone, Reichlin and Small (2008), we can handle datasets that are not only characterised by a "ragged edge" but can also include, for example, mixed-frequency or short-history indicators. The latter is particularly relevant for the euro area or other young economies, for which many series have been compiled only recently. We adopt the maximum likelihood approach which, apart from its flexibility with regard to the pattern of missing data, is also more efficient and allows restrictions to be imposed on the parameters. Applied to small factor models by e.g. Geweke (1977), Sargent and Sims (1977) and Watson and Engle (1983), it has been shown by Doz, Giannone and Reichlin (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of a direct likelihood maximisation for large cross-sections, Doz, Giannone and Reichlin (2006) propose the iterative Expectation-Maximisation (EM) algorithm (used for the small model by Watson and Engle, 1983). Our contribution is to modify the EM steps for the case of missing data and to show how to augment the model in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision, and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases. We use this methodology for short-term forecasting and backdating of euro area GDP on the basis of a large panel of monthly and quarterly data. In particular, we are able to examine the effect on the forecast of quarterly variables and short-history monthly series like the Purchasing Managers' surveys.

The third chapter, entitled "Large Bayesian VARs", is based on joint work with Domenico Giannone and Lucrezia Reichlin. It proposes an alternative approach to factor models for dealing with the curse of dimensionality, namely Bayesian shrinkage. We study Vector Autoregressions (VARs), which have the advantage over factor models that they allow structural analysis in a natural way. We consider systems including more than 100 variables; this is the first application in the literature to estimate a VAR of this size. Apart from the forecasting considerations argued above, the size of the information set can also be relevant for structural analysis, see e.g. Bernanke, Boivin and Eliasz (2005), Giannone and Reichlin (2006) or Christiano, Eichenbaum and Evans (1999) for a discussion. In addition, many problems may require the study of the dynamics of many variables: many countries, sectors or regions. While we use standard priors as proposed by Litterman (1986), an important novelty of the work is that we set the overall tightness of the prior in relation to the model size. In this we follow the recommendation of De Mol, Giannone and Reichlin (2008), who study the case of Bayesian regressions. They show that with increasing model size one should shrink more to avoid overfitting, but that when data are collinear one is still able to extract the relevant sample information. We apply this principle to VARs. We compare the large model with smaller systems in terms of forecasting performance and of the structural analysis of the effect of a monetary policy shock. The results show that a standard Bayesian VAR model is an appropriate tool for large panels of data once the degree of shrinkage is set in relation to the model size.

The fourth chapter, entitled "Forecasting euro area inflation with wavelets: extracting information from real activity and money at different scales", proposes a framework for exploiting relationships between variables at different frequency bands in the context of forecasting. This work is motivated by the ongoing debate on whether money provides a reliable signal for future price developments. The empirical evidence on the leading role of money for inflation in an out-of-sample forecast framework is not very strong, see e.g. Lenza (2006) or Fisher, Lenza, Pill and Reichlin (2008). At the same time, e.g. Gerlach (2003) and Assenmacher-Wesche and Gerlach (2007, 2008) argue that money and output could affect prices at different frequencies; however, their analysis is performed in-sample. In this chapter, it is investigated empirically which frequency bands, and for which variables, are the most relevant for the out-of-sample forecast of inflation when information from prices, money and real activity is considered. To extract the different frequency components of a series, a wavelet transform is applied. It provides a simple and intuitive framework for band-pass filtering and allows a decomposition of series into different frequency bands. Its application to multivariate out-of-sample forecasting is novel in the literature. The results indicate that, indeed, different scales of money, prices and GDP can be relevant for the inflation forecast. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
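The core idea behind the second chapter's handling of missing data can be sketched with a simple EM-style iteration: fill the missing cells from the current factor fit, re-estimate the factors, and repeat. The toy below uses principal components rather than the chapter's full maximum likelihood estimator, and the panel dimensions, factor count and missing rate are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, r = 120, 30, 2
F = rng.normal(size=(T, r))                    # latent factors
L = rng.normal(size=(N, r))                    # loadings
X = F @ L.T + 0.3 * rng.normal(size=(T, N))    # synthetic mean-zero panel
mask = rng.random(X.shape) < 0.15              # 15% of cells missing at random
Xobs = np.where(mask, np.nan, X)

Z = np.where(mask, 0.0, Xobs)                  # init missing cells at the mean (0)
for _ in range(50):
    # "M-step": principal components of the completed panel via SVD
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    fit = (U[:, :r] * s[:r]) @ Vt[:r]
    # "E-step": replace only the missing cells by their fitted values
    Z = np.where(mask, fit, Xobs)

print("RMSE on missing cells:", round(float(np.sqrt(np.mean((Z - X)[mask] ** 2))), 3))
```

The same alternation handles a "ragged edge" automatically: cells missing at the end of the sample because of publication lags are simply filled from the factor fit, which is the mechanism that turns the model into a nowcasting tool.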
716

Multi-criteria decision aiding model for the evaluation of agricultural countermeasures after an accidental release of radionuclides to the environment

Turcanu, Catrinel 31 October 2007 (has links)
Multi-criteria decision aid has emerged from the operational research field as an answer to several important questions encountered in complex decision problems. First, as decision-aiding tools, such methods do not replace the decision maker with a mathematical model, but support him in constructing his solution by describing and evaluating his options. Second, instead of using a unique criterion capturing all aspects of the problem, multi-criteria decision aid methods seek to build multiple criteria representing several points of view.

This work explores the application of multi-criteria decision aid methods to optimising food-chain countermeasure strategies after a radioactive release to the environment.

The core of the thesis is dedicated to formulating general lines for the development of a multi-criteria decision aid model. This includes the definition of potential actions, the construction of evaluation criteria and preference modelling, and is essentially based on the results of a stakeholder process. The work is centred on the management of contaminated milk, both to provide a concrete focus and because of its importance as an ingestion pathway in the short term after an accident.

Among other issues, the public acceptance of milk countermeasures as a key evaluation criterion is analysed in detail. A comparison of acceptance based on stochastic dominance is proposed and, from it, an acceptance ranking of the countermeasures is deduced (a sketch of such a dominance test follows below).

In order to assess "global preferences" taking all the evaluation criteria into account, an ordinal method is chosen. This method expresses the relative importance of criteria qualitatively instead of using, for instance, numerical weights. Some algorithms that can be used for robustness analysis are also proposed. This type of analysis is an alternative to sensitivity analysis with respect to data uncertainty and imprecision; it seeks to determine how, and whether, a model result or conclusion obtained for a specific instance of the model's parameters holds over the entire domain of acceptable values for these parameters.

The integrated multi-criteria decision aid approach proposed makes use of outranking and interactive methodologies and is implemented and tested through a number of case studies and prototype tools. / Doctorat en Sciences de l'ingénieur / info:eu-repo/semantics/nonPublished
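A first-order stochastic dominance comparison of two acceptance distributions can be implemented in a few lines: option A dominates option B if A's empirical CDF lies at or below B's everywhere, with strict inequality somewhere. The two score samples below are invented survey-style data, not the thesis' stakeholder results, and the thesis' exact dominance criterion may differ.

```python
import numpy as np

def fsd(a, b):
    """True if sample a first-order stochastically dominates sample b."""
    grid = np.union1d(a, b)                     # all observed score levels
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return bool(np.all(cdf_a <= cdf_b) and np.any(cdf_a < cdf_b))

acceptance_A = np.array([3, 4, 4, 5, 5, 5])     # scores for countermeasure A
acceptance_B = np.array([2, 3, 3, 4, 4, 5])     # scores for countermeasure B
print(fsd(acceptance_A, acceptance_B))          # -> True: A ranks above B
```

Applying such pairwise tests to every pair of countermeasures yields a (partial) acceptance ranking of the kind the abstract describes, without ever assigning numerical weights to the acceptance criterion.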
717

First principles and black box modelling of biological systems

Grosfils, Aline 13 September 2007 (has links)
Living cells and their components play a key role in the biotechnology industry. Cell cultures and their products of interest are used in vaccine design as well as in the agro-food field. To ensure the optimal operation of such bioprocesses, understanding the complex mechanisms which govern them is fundamental. Mathematical models can help capture the biological phenomena at work in a bioprocess. Moreover, they allow prediction of system behaviour and are frequently used within engineering tools to ensure, for instance, product quality and reproducibility.

Mathematical models of cell cultures may come in various shapes and be phrased with varying degrees of mathematical formalism. Typically, three main model classes are available to describe the nonlinear dynamic behaviour of such biological systems. All of them are macroscopic models, describing only the main phenomena in a culture, since high model complexity may lead to numerical computation times incompatible with engineering tools like software sensors or controllers. The first class is composed of first-principles or white-box models. They consist of the mass balances for the main species (biomass, substrates and products of interest) involved in a reaction scheme, i.e. a set of irreversible reactions representing the main biological phenomena occurring in the considered culture. Whereas the transport phenomena inside and outside the cell culture are often well known, the reaction scheme and associated kinetics are usually a priori unknown and require special care in their modelling and identification. The second kind of commonly used model is the black box. Black boxes consider the system to be modelled in terms of its input and output characteristics; they consist of combinations of mathematical functions which do not allow any physical interpretation, and are usually used when no a priori information about the system is available. Finally, hybrid or grey-box modelling combines the principles of white- and black-box models. Typically, a hybrid model uses the available prior knowledge while the reaction scheme and/or the kinetics are replaced by a black box, for instance an artificial neural network.

Among these numerous models, which one should be used to obtain the best possible representation of a bioprocess? We attempt to answer this question in the first part of this work. On the basis of two simulated bioprocesses and a real experimental one, two kinds of model are analysed: first-principles models, whose reaction scheme and kinetics can be determined by systematic procedures, are compared with hybrid model structures in which neural networks describe the kinetics or the whole reaction term (i.e. kinetics and reaction scheme). The most common artificial neural networks, the MultiLayer Perceptron and the Radial Basis Function network, are tested. Pure black-box modelling is not considered in this work: numerous papers already compare different neural networks with hybrid models, and the results of these previous studies converge to the same conclusion, namely that hybrid models, which combine the available prior knowledge with the nonlinear mapping capabilities of neural networks, provide better results.

From this model comparison, and from the fact that a physical kinetic model structure may be viewed as a combination of basis functions, much like a neural network, kinetic model structures allowing biological interpretation should be preferred. This is why the second part of this work is dedicated to improving the general kinetic model structure used in the previous study. In spite of its good performance (largely due to the associated systematic identification procedure), this kinetic model, which represents activation and/or inhibition effects by every culture component, suffers from a limitation: it does not explicitly address saturation by a culture component, modelling this kind of behaviour instead as an inhibition compensating a strong activation. Note that generalizing this kinetic model is a challenging task, as physical interpretation has to be improved while a systematic identification procedure has to be maintained.

The last part of this work is devoted to another kind of biological system: proteins. Such macromolecules, which are essential parts of all living organisms and consist of combinations of only 20 different basic molecules called amino acids, are widely used in industry. In order to allow their functioning in non-physiological conditions, industry is willing to modify the protein amino acid sequence. However, substituting one amino acid for another involves thermodynamic stability changes which may lead to the loss of the protein's biological functionality. Among several theoretical methods predicting the stability changes caused by mutations, the PoPMuSiC (Prediction Of Proteins Mutations Stability Changes) program has been developed within the Genomic and Structural Bioinformatics Group of the Université Libre de Bruxelles. This software predicts, in silico, changes in the thermodynamic stability of a given protein under all possible single-site mutations, either in the whole sequence or in a region specified by the user. However, PoPMuSiC has its limitations and should be improved using recently developed techniques of protein stability evaluation, such as the statistical mean-force potentials of Dehouck et al. (2006). Our work proposes to enhance the performance of PoPMuSiC by combining the new energy functions of Dehouck et al. (2006) with the well-known artificial neural networks, MultiLayer Perceptron or Radial Basis Function network. This time, we attempt to obtain physically interpretable models through an appropriate use of the neural networks. / Doctorat en sciences appliquées / info:eu-repo/semantics/nonPublished
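To make the grey-box idea concrete, here is a minimal sketch of such a hybrid model: the mass balances are kept as differential equations (the white-box part) while the specific growth rate is produced by a small radial-basis-function expansion standing in for a trained network (the black-box part). Centres, weights, the yield coefficient and initial conditions are all hypothetical illustration values, not identified from any culture.

```python
import numpy as np
from scipy.integrate import solve_ivp

centers = np.array([1.0, 5.0, 15.0])     # RBF centres on the substrate axis (hypothetical)
weights = np.array([0.10, 0.35, 0.25])   # "trained" output weights (hypothetical)

def mu_rbf(S):
    # Black-box kinetics: specific growth rate mu(S) as an RBF network output
    return float(weights @ np.exp(-((S - centers) ** 2) / (2 * 4.0**2)))

def mass_balances(t, y, Y_xs=0.5):
    # White-box part: biomass growth and substrate consumption balances
    X, S = y
    mu = mu_rbf(S) if S > 0 else 0.0
    return [mu * X, -mu * X / Y_xs]

sol = solve_ivp(mass_balances, (0.0, 20.0), [0.1, 20.0])
print("final biomass, substrate:", sol.y[:, -1].round(3))
```

In an identification setting, the RBF weights (and possibly the centres) would be fitted to measured concentration profiles while the mass-balance structure is kept fixed, which is exactly how the prior knowledge constrains the black box.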
718

Bioprocess software sensors development facing modelling and model uncertainties / Développement de capteurs logiciels pour les bioprocédés face aux incertitudes de modélisation et de modèle

Hulhoven, Xavier 07 December 2006 (has links)
The exponential development of biotechnology has led to a virtually unlimited number of potential products, ranging from biopolymers to vaccines. Cell culture has therefore evolved from simple cell growth outside the natural environment to its use for producing molecules that the cells do not naturally produce. This rapid development could not continue without new control and supervision tools as well as a good understanding of the process. This requires, however, a large diversity and a better accessibility of process measurements. In this framework, software sensors show numerous potentialities. The objective of a software sensor is indeed to provide an estimate of the system state variables, particularly those which cannot be obtained through in situ hardware sensors or which require laborious and expensive analyses. In this context, this work attempts to reconcile the increasing complexity and diversity of bioprocesses with the time scale of process development, and favours a systematic modelling methodology, its flexibility and speed of development. In the field of state observation, an important modelling constraint is the one induced by the selection of the state to estimate and of the available measurements; another important constraint is model quality. The central axis of this work is to provide solutions that reduce the weight of these constraints on software sensor development. To this end, we propose four solutions to four main questions that may arise; the first two concern modelling uncertainties.

1. "How to develop a software sensor using measurements easily available on a pilot-scale bioreactor?" The proposed solution is a static software sensor using an artificial neural network. Following this modelling methodology, we developed static software sensors for the biomass and ethanol concentrations in a pilot-scale S. cerevisiae cell culture using measurements of the titrating base quantity, the agitation rate and CO2 / Doctorat en sciences agronomiques et ingénierie biologique / info:eu-repo/semantics/nonPublished
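A static ANN software sensor of the kind described in point 1 is schematically just a learned map from easy measurements to a hard-to-measure concentration. The sketch below illustrates this with a synthetic "process" invented for the purpose; the linear ground-truth relation, noise levels and network size are all placeholders, not the thesis' pilot-plant data or model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n = 400
base = rng.uniform(0, 50, n)          # cumulative titrating base [mL]
agit = rng.uniform(200, 800, n)       # agitation rate [rpm]
co2 = rng.uniform(0, 5, n)            # CO2 in the off-gas [%]
# Hypothetical ground-truth relation plus measurement noise:
biomass = 0.08 * base + 0.002 * agit + 1.5 * co2 + rng.normal(0, 0.3, n)

X = np.column_stack([base, agit, co2])
sensor = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
sensor.fit(X[:300], biomass[:300])    # "calibration" runs
print("held-out R^2:", round(sensor.score(X[300:], biomass[300:]), 3))
```

The appeal of the static approach is that, once calibrated, the sensor needs no process model at run time: each new measurement vector is mapped directly to a biomass estimate.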
719

Etude numérique de la balistique intérieure des armes de petit calibre / Numerical study of the interior ballistics of small-calibre weapons

Papy, Alexandre 30 September 2005 (has links)
Motivation

This document summarises four years of work on the dynamic phenomena occurring in a small-calibre weapon. Until now, efforts have been made to simulate large-calibre guns, with mixed success. Directly transferring these methods to small calibres is, most of the time, disappointing because of poor accuracy. Moreover, the relatively low cost of small-calibre firing tests, compared with tests on larger calibres, has contributed to the lack of interest in studies in this field. Even today, internationally renowned weapon manufacturers have no models for small calibres, which has been, and still remains, the poor relation of numerical simulation in interior ballistics.

At present, much research is devoted to electric or electromagnetic guns. These weapons, which may represent the future of ballistics, are still very far from effective operational use. The situation is thus rather paradoxical: small-calibre weapons are the most widely used (for military, sport or testing purposes), yet, strictly speaking, few mathematical models allow their precise and rigorous simulation. In this context, this work demonstrates that interior-ballistics models can be used successfully to simulate firings in small-calibre weapons.

One original aspect of this work is the use of CFD (Computational Fluid Dynamics) software as the skeleton of an interior-ballistics simulator, and its application to small-calibre weapons. This approach separates the fluid-mechanics and flow-treatment aspects from the purely ballistic ones. We therefore set out to assess the ability of a CFD code to operate in the particular environment of "gun shot" simulation.

Outline

This thesis can be divided into four parts. The first, rather general, part places the problem in context. It begins with a short introduction to ballistics and dwells on the aims of interior ballistics, highlighting the particularities of small-calibre weapons where relevant.

Computer simulation is an important aspect of the problem and must necessarily be confronted with real results. The measurement chain classically used in ballistics, together with the experimental set-ups employed to obtain validation results, is therefore briefly presented in the second part.

The third part focuses on models. We present the main types of model found in interior ballistics. Global-parameter (lumped) and local-parameter models are developed, and some general remarks are made on the state of the art in this field, before discussing the choice of a CFD package suited to the intended use.

We then present the chosen software and detail the models it uses to account for the particularities of interior ballistics. The motion of the projectile in the weapon, combustion, and the treatment of the two-phase problem are reviewed and developed.

Mobidic (an acronym for MOdélisation Balistique Intérieure DIphasique Canon) is a French code that we obtained towards the end of this study. It is recognised for its ability to model firings in medium- and large-calibre weapons precisely. Its operation and the models it uses are described and compared with our implementation.

The fourth and final part is certainly not the least important. It presents the results of the firings we carried out and the successive validation steps, from basic tests to full validation in two small-calibre weapons.

Finally, conclusions, remarks and future directions close this work. / Doctorat en sciences appliquées / info:eu-repo/semantics/nonPublished
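As a hint of what the "global parameter" (lumped) models discussed in the third part look like, here is a heavily simplified sketch coupling Vieille's burn-rate law, a Noble-Abel energy balance and Newton's law for the projectile. Every number is a rough placeholder of small-calibre magnitude, not data for any real weapon, and ignition, heat loss, bore friction and the secondary-work factor are all ignored.

```python
from scipy.integrate import solve_ivp

A    = 2.4e-5   # bore area [m^2]
m_p  = 4.0e-3   # projectile mass [kg]
w    = 1.6e-3   # propellant charge [kg]
V0   = 1.8e-6   # initial free chamber volume [m^3]
f    = 1.0e6    # propellant force [J/kg]
gam  = 1.25     # ratio of specific heats of the combustion gas
eta  = 1.0e-3   # covolume [m^3/kg]
rho  = 1.6e3    # solid propellant density [kg/m^3]
beta, n_exp, web = 4.0e-9, 0.9, 4.0e-4   # Vieille law r = beta*p^n; web size

def pressure(z, x, v):
    # Noble-Abel energy balance: released energy minus projectile kinetic
    # energy, divided by the gas-accessible volume behind the projectile.
    energy = f * w * z - (gam - 1.0) * 0.5 * m_p * v**2
    volume = V0 + A * x - (w / rho) * (1.0 - z) - eta * w * z
    return max(energy / volume, 1.0e5)

def rhs(t, y):
    z, x, v = y                       # burnt fraction, travel, velocity
    p = pressure(z, x, v)
    dz = beta * p**n_exp / web if z < 1.0 else 0.0   # grain burnout
    return [dz, v, A * p / m_p]

def muzzle(t, y):                     # stop at 0.4 m of projectile travel
    return y[1] - 0.4
muzzle.terminal = True

sol = solve_ivp(rhs, (0.0, 0.01), [0.01, 0.0, 0.0],
                events=muzzle, max_step=1e-6)
print("muzzle velocity [m/s]:", round(float(sol.y[2, -1]), 1))
```

Local-parameter (CFD) models like the one developed in this thesis replace this single pressure value by a full spatial pressure and two-phase flow field, which is what the chapter contrasts.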
720

Development of numerical code for the study of Marangoni convection

Melnikov, Denis 14 May 2004 (has links)
A numerical code for solving the time-dependent incompressible 3D Navier-Stokes equations with finite volumes on overlapping staggered grids in cylindrical and rectangular geometries is developed. In the code, written in FORTRAN, the momentum equation is solved by a projection method, and the Poisson equation for the pressure is solved by an implicit ADI method in two directions combined with a discrete fast Fourier transform in the third direction. A special technique for overcoming the singularity on the cylinder's axis is developed. This code, which takes into account the temperature dependence of the viscosity, density and surface tension of the liquid, is used to study the fluid motion in a cylinder with a free cylindrical surface (under normal and zero-gravity conditions), and in a closed rectangular cell with a source of thermocapillary convection (a bubble attached to one of the cell's faces). These are significant problems in crystal growth and in general fluid dynamics experiments, respectively. Nevertheless, the main study is dedicated to the liquid bridge problem.

The development of thermocapillary convection inside a cylindrical liquid bridge is investigated by direct numerical simulation of the 3D, time-dependent problem for a wide range of Prandtl numbers, Pr = 0.01 - 108. For Pr > 0.08 (e.g. silicone oils), above the critical value of the temperature difference between the supporting disks, two counter-propagating hydrothermal waves bifurcate from the 2D steady state. The existence of standing and traveling waves is discussed. The dependence of viscosity upon temperature is taken into account. For Pr = 4 under 0-g conditions, and for Pr = 18.8 under 1-g conditions with unit aspect ratio, the onset of chaos was investigated numerically.

For a Pr = 108 liquid bridge under terrestrial conditions, the appearance and development of thermoconvective oscillatory flows were investigated for different ambient conditions around the free surface.

The transition from a 2D thermoconvective steady flow to a 3D flow is considered for low-Prandtl fluids (Pr = 0.01) in a liquid bridge with a non-cylindrical free surface. For Pr < 0.08 (e.g. liquid metals), a 3D but non-oscillatory convective flow is observed in the supercritical region of parameters. The computer program developed for this simulation transforms the original non-rectangular physical domain into a rectangular computational domain.

The influence of a bubble present in an experimental rectangular cell on the convective flow during microgravity experiments is also studied. As a model, a real experiment called TRAMP is numerically simulated. The results obtained were very different from what was expected: first, because of the residual gravity present on board any spacecraft; second, due to the presence of a bubble that had appeared on the experimental cell's wall. Real data from experimental observations were used for the calculations. / Doctorat en sciences appliquées / info:eu-repo/semantics/nonPublished
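The pressure-projection idea at the heart of such a solver can be shown in isolation: make a tentative velocity field divergence-free by solving the pressure Poisson equation. The toy below does this spectrally on a 2D square periodic box with plain FFTs; the thesis' solver instead combines ADI with an FFT on staggered cylindrical/rectangular grids, which this sketch does not reproduce.

```python
import numpy as np

N, L = 64, 2 * np.pi
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                        # avoid division by zero for the mean mode

def project(u, v):
    """Return the divergence-free (Helmholtz) part of (u, v)."""
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div_h = 1j * KX * uh + 1j * KY * vh      # divergence in Fourier space
    phi_h = div_h / (-K2)                    # solve  lap(phi) = div(u*)
    phi_h[0, 0] = 0.0
    # Subtract the gradient of phi:  u = u* - grad(phi)
    return (np.real(np.fft.ifft2(uh - 1j * KX * phi_h)),
            np.real(np.fft.ifft2(vh - 1j * KY * phi_h)))

x = np.linspace(0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = np.cos(X) * np.sin(Y), np.zeros_like(X)   # a field with divergence
u2, v2 = project(u, v)
div = np.fft.ifft2(1j * KX * np.fft.fft2(u2) + 1j * KY * np.fft.fft2(v2))
print("max |div| after projection:", float(np.abs(div).max()))
```

In a full time step, the tentative velocity comes from advancing the momentum equation without the pressure term; the projection then enforces incompressibility, exactly the role the pressure Poisson solve plays in the code described above.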
