
Task-based multifrontal QR solver for heterogeneous architectures

Lopez, Florent 11 December 2015
To face the advent of multicore processors and the ever-increasing complexity of hardware architectures, programming models based on DAG parallelism have regained popularity in the high-performance scientific computing community. Modern runtime systems offer a programming interface that complies with this paradigm and powerful engines for scheduling the tasks into which an application is decomposed. These tools have already proved their effectiveness on a number of dense linear algebra applications. In this study we investigate the design of task-based sparse direct solvers, which constitute extremely irregular workloads with tasks of different granularities and characteristics and variable memory consumption, on top of runtime systems. In the context of the qr_mumps solver, we prove the usability and effectiveness of our approach with the implementation of a sparse multifrontal QR factorization based on the Sequential Task Flow parallel programming model.
Using this programming model, we developed features such as the integration of dense 2D communication-avoiding factorization kernels in the multifrontal method, allowing for better scalability than the original approach used in qr_mumps. In addition, we introduced a memory-aware algorithm to control the memory behaviour of our solver and show, on multicore architectures, a significant reduction of the memory footprint of the multifrontal QR factorization with a small impact on performance. Following this approach, we moved to heterogeneous architectures, where task granularity and scheduling strategies are critical to achieving performance. We present, for the multifrontal method, a hierarchical data-partitioning strategy and a scheduling algorithm capable of handling the heterogeneity of resources. Finally, we present a study of the reproducibility of executions and of the use of alternative programming models for the implementation of the multifrontal method. All the experimental results presented in this study are backed by a detailed performance analysis measuring the impact of several identified effects on performance and scalability. Thanks to this original analysis, presented in the first part of this study, we are able to fully understand the results obtained with our solver.
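As a toy illustration of the Sequential Task Flow model described above, a runtime can infer the whole task DAG from data accesses declared at submission time. This is a hedged sketch: all class and task names here are hypothetical, not the qr_mumps or StarPU API.

```python
# Sequential Task Flow sketch: tasks are submitted in sequential order with
# declared data accesses, and the runtime infers the dependency DAG
# (read-after-write, write-after-read, write-after-write).

class STFRuntime:
    def __init__(self):
        self.last_writer = {}   # data -> id of the task that last wrote it
        self.readers = {}       # data -> ids of tasks reading since that write
        self.tasks = []         # (callable, dependency ids) in submission order

    def submit(self, func, reads=(), writes=()):
        tid = len(self.tasks)
        deps = set()
        for d in reads:
            if d in self.last_writer:
                deps.add(self.last_writer[d])      # read-after-write
        for d in writes:
            if d in self.last_writer:
                deps.add(self.last_writer[d])      # write-after-write
            deps.update(self.readers.get(d, ()))   # write-after-read
        for d in reads:
            self.readers.setdefault(d, set()).add(tid)
        for d in writes:
            self.last_writer[d] = tid
            self.readers[d] = set()
        self.tasks.append((func, deps))
        return tid

    def run(self):
        # Submission order is already a valid topological order of the DAG;
        # a real runtime would instead dispatch ready tasks to idle workers.
        for func, _ in self.tasks:
            func()

# First two tasks of a tiled QR factorization: panel, then trailing update.
log = []
rt = STFRuntime()
t0 = rt.submit(lambda: log.append("geqrt A00"), writes=("A00",))
t1 = rt.submit(lambda: log.append("gemqrt A01"), reads=("A00",), writes=("A01",))
rt.run()
assert log == ["geqrt A00", "gemqrt A01"]
assert rt.tasks[t1][1] == {t0}   # the update depends on the panel task
```

The point of the model is exactly this: the application code stays sequential, and parallelism falls out of the inferred DAG.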

Modelling and numerical methods for the study of particle transport in a hot plasma

Guisset, Sébastien 23 September 2016
Angular moments models are intermediate descriptions situated between kinetic and fluid models. In this work, angular moments models based on an entropy minimisation principle are considered for plasma physics applications. This manuscript is organised in three parts. The first is a contribution to plasma physics modelling within the formalism of angular moments models. The validity domain of angular moments models in collisionless regimes is studied. It is also shown that the collision operators proposed for the M1 angular moments model make it possible to recover accurate plasma transport coefficients.
The second part of this document deals with the derivation of numerical methods for particle transport over long timescales. Appropriate asymptotic-preserving numerical schemes are designed for the M1 angular moments model and validated numerically. The third part represents a first important step toward multi-species modelling. The M1 angular moments model in a moving frame is introduced and applied to rarefied gas dynamics. The model properties are highlighted, a numerical scheme is proposed and a numerical validation is carried out.
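The entropy-minimisation M1 closure mentioned above admits a well-known closed form for its Eddington factor. The sketch below evaluates that standard formula and checks its two physical limits; it is independent of the specific schemes developed in the thesis.

```python
import math

def m1_eddington_factor(f):
    # Closed-form Eddington factor chi(f) of the entropy-based M1 closure,
    # where f = |F|/E in [0, 1] is the reduced flux (anisotropy parameter).
    return (3.0 + 4.0 * f * f) / (5.0 + 2.0 * math.sqrt(4.0 - 3.0 * f * f))

# chi interpolates between the isotropic (diffusion) limit and free streaming.
assert abs(m1_eddington_factor(0.0) - 1.0 / 3.0) < 1e-12  # isotropic: chi = 1/3
assert abs(m1_eddington_factor(1.0) - 1.0) < 1e-12        # free streaming: chi = 1
```

Recovering the correct diffusion limit chi = 1/3 for small f is precisely what asymptotic-preserving schemes must not destroy at the discrete level.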

Numerical simulation of incompressible flows interacting with forced deformable bodies: application to fish swimming

Ghaffari Dehkharghani, Seyed Amin 15 December 2014
We present an efficient algorithm for the simulation of deformable bodies interacting with two-dimensional incompressible flows. The temporal and spatial discretizations of the Navier-Stokes equations in vorticity stream-function formulation are based on a classical fourth-order Runge-Kutta scheme and compact finite differences, respectively. Using a uniform Cartesian grid, we take advantage of a new fourth-order direct solver for the Poisson equation to ensure the incompressibility constraint down to machine zero over an optimal grid. A deformable body is introduced into the flow by means of the volume penalization method.
A Lagrangian structured grid with prescribed motion covers the deformable body, which interacts with the surrounding fluid through the hydrodynamic forces and torque calculated on the Eulerian reference grid. An efficient law for controlling the curvature of an anguilliform fish swimming toward a prescribed goal is proposed, based on the geometrically exact theory of nonlinear beams and quaternions. Validation of the developed method shows the efficiency and expected accuracy of the algorithm for fish-like swimming and also for a variety of fluid-solid interaction problems.
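The volume penalization idea can be sketched in a few lines: inside the body mask the fluid velocity is relaxed toward the prescribed body velocity on a fast timescale. The implicit update and parameter names below are illustrative assumptions, not the thesis's exact scheme.

```python
import numpy as np

def penalize(u, chi, u_body, dt, eta=1e-8):
    # Implicit volume-penalization update: inside the body mask (chi = 1) the
    # velocity is driven to the prescribed body velocity u_body on the fast
    # timescale eta; outside (chi = 0) it is untouched. The implicit form
    # avoids the stiffness of treating the 1/eta term explicitly.
    return (u + (dt / eta) * chi * u_body) / (1.0 + (dt / eta) * chi)

u = np.array([1.0, 1.0, 1.0])
chi = np.array([0.0, 1.0, 1.0])      # last two cells lie inside the body
u_body = np.array([0.0, 0.5, 0.5])   # prescribed deformation velocity
u_new = penalize(u, chi, u_body, dt=1e-3)
assert u_new[0] == 1.0                          # fluid cell unchanged
assert np.allclose(u_new[1:], 0.5, atol=1e-3)   # solid cells pinned to body
```

In a full solver this step is combined with the Navier-Stokes update, and the hydrodynamic force on the body can be recovered from the penalization term itself.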

Development Of A General Purpose Flow Solver For Euler Equations

Shende, Nikhil Vijay 07 1900
No description available.

TOPOLOGY OPTIMIZATION USING POLYHEDRAL MESHES

22 February 2019
Topology optimization has had an impact in various fields and has the potential to revolutionize several areas of engineering. The method can be implemented on top of the finite element method through several approaches of choice. When using an element-based approach, every finite element is a potential void or actual material, and every element in the domain is assigned a constant design variable, namely a density. In an Eulerian setting, the obtained topology consists of a subset of the initial elements. This approach, however, is subject to numerical instabilities such as one-node connections and rapid oscillations of solid and void material (the so-called checkerboard pattern). Undesirable designs may be obtained when standard low-order elements are used and no further regularization and/or restriction methods are employed. Unstructured polyhedral meshes naturally address these issues and offer flexibility in discretizing non-Cartesian domains.
In this work we investigate topology optimization on polyhedral meshes through a mesh-staggering approach. First, polyhedral meshes are generated based on the concept of centroidal Voronoi diagrams and further optimized for finite element computations. We show that the condition number of the associated system of equations can be improved by minimizing an energy function related to the elements' geometry. Given the mesh quality and problem size, different types of solvers provide different performance, and thus both direct and iterative solvers are addressed. Second, polyhedra are decomposed into tetrahedra by a tailored embedding algorithm. The polyhedral discretization carries the design variables, and a tetrahedral subdiscretization is nested within the polyhedra for finite element analysis. The modular framework decouples analysis and optimization routines and variables, which is promising both for software enhancement and for achieving high-fidelity solutions. Fields such as displacements and design variables are linked through a mapping. The proposed mapping-based framework provides a general approach to solve three-dimensional topology optimization problems using polyhedra, with the potential to be explored in applications beyond the scope of the present work. Finally, the capabilities of the framework are evaluated through several examples, which demonstrate the features and potential of the proposed approach.
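For the element-based density approach described above, a common design-variable update is the optimality-criteria scheme. The sketch below is the generic textbook version (not the implementation used in the work): it bisects a Lagrange multiplier so the volume constraint is met under a move limit.

```python
import numpy as np

def oc_update(x, dc, dv, vol_frac, move=0.2):
    # One optimality-criteria step for densities x in [0, 1]: bisect the
    # Lagrange multiplier of the volume constraint, with a move limit.
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-6:
        lmid = 0.5 * (l1 + l2)
        # Classic OC fixed point, box-limited to [x - move, x + move] and [0, 1].
        x_new = np.clip(x * np.sqrt(np.maximum(-dc, 0.0) / (lmid * dv)),
                        np.maximum(x - move, 0.0), np.minimum(x + move, 1.0))
        if x_new.mean() > vol_frac:
            l1 = lmid   # too much material: increase the multiplier
        else:
            l2 = lmid
    return x_new

x = np.full(100, 0.5)
dc = -np.linspace(1.0, 2.0, 100)   # compliance sensitivities (always <= 0)
dv = np.ones(100)                  # uniform element volumes
x_new = oc_update(x, dc, dv, vol_frac=0.5)
assert abs(x_new.mean() - 0.5) < 1e-3   # volume constraint met (to tolerance)
```

In a staggered setting such as the one described above, the sensitivities dc would come from finite element analyses on the nested tetrahedral mesh, mapped back to the polyhedral design variables.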

HIGHER ORDER OPTIMIZATION TECHNIQUES FOR MACHINE LEARNING

Sudhir B. Kylasa (5929916) 09 December 2019
First-order methods such as Stochastic Gradient Descent are the methods of choice for solving non-convex optimization problems in machine learning. These methods primarily rely on the gradient of the loss function to estimate a descent direction. However, they have a number of drawbacks, including convergence to saddle points (as opposed to minima), slow convergence, and sensitivity to parameter tuning. In contrast, second-order methods, which use curvature information in addition to the gradient, have theoretically been shown to achieve faster convergence rates. When used in machine learning applications, they offer faster (quadratic) convergence, stability with respect to parameter tuning, and robustness to problem conditioning. In spite of these advantages, first-order methods are commonly used because of their simplicity of implementation and low per-iteration cost. The need to generate and use curvature information in the form of a dense Hessian matrix makes each iteration of a second-order method more expensive.

In this work, we address three key problems associated with second-order methods: (i) what is the best way to incorporate curvature information into the optimization procedure; (ii) how do we reduce the operation count of each iteration of a second-order method while maintaining its superior convergence properties; and (iii) how do we leverage high-performance computing platforms to significantly accelerate second-order methods. To answer the first question, we propose and validate the use of Fisher information matrices in second-order methods to significantly accelerate convergence. The second question is answered through the use of statistical sampling techniques that suitably sample matrices to reduce per-iteration cost without impacting convergence. The third question is addressed through the use of graphics processing units (GPUs) in distributed platforms to deliver state-of-the-art solvers.

Through our work, we show that our solvers achieve significant improvements over state-of-the-art optimization techniques for training machine learning models. We demonstrate improvements in terms of training time (over an order of magnitude in wall-clock time), generalization properties of the learned models, and robustness to problem conditioning.
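A minimal sketch of a subsampled Newton-type step of the kind discussed above: the gradient stays exact, the Hessian is applied only through matrix-vector products on a random subsample (never formed densely), conjugate gradients computes the direction, and backtracking guards the step. All function names and hyperparameters here are illustrative assumptions, not the dissertation's solvers.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, X, y):
    # Numerically stable logistic loss, labels y in {0, 1}.
    z = X @ w
    return np.mean(np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0) - y * z)

def grad(w, X, y):
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def hvp(w, v, X):
    # Hessian-vector product on a (sub)sample, without forming the Hessian.
    p = sigmoid(X @ w)
    return X.T @ ((p * (1.0 - p)) * (X @ v)) / X.shape[0]

def subsampled_newton_step(w, X, y, sample_frac=0.2, cg_iters=20, damping=1e-4):
    idx = rng.choice(len(y), size=int(sample_frac * len(y)), replace=False)
    Xs = X[idx]                       # curvature is estimated on this subsample
    g = grad(w, X, y)                 # ... but the gradient stays exact
    # Conjugate gradients on the damped, PSD system (H_s + damping*I) p = g.
    p = np.zeros_like(w)
    r = g.copy(); d = r.copy(); rs = r @ r
    for _ in range(cg_iters):
        Hd = hvp(w, d, Xs) + damping * d
        alpha = rs / (d @ Hd)
        p += alpha * d
        r -= alpha * Hd
        rs_new = r @ r
        if rs_new < 1e-14:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    # Backtracking: p is a descent direction, so some step length works.
    step, j0 = 1.0, loss(w, X, y)
    while loss(w - step * p, X, y) >= j0 and step > 1e-8:
        step *= 0.5
    return w - step * p

# Noisy (non-separable) synthetic classification data.
X = rng.normal(size=(500, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = (rng.uniform(size=500) < sigmoid(X @ w_true)).astype(float)

w0 = np.zeros(5)
w1 = subsampled_newton_step(w0, X, y)
assert loss(w1, X, y) < loss(w0, X, y)   # one step already reduces the loss
```

For generalized linear models the Gauss-Newton matrix used here coincides with the Fisher information matrix, which is the connection the first research question above exploits.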

Eulerian modeling and simulations of separated and disperse two-phase flows: development of a unified modeling approach and associated numerical methods for highly parallel computations

Drui, Florence 07 July 2017
In an industrial context, reduced-order two-phase models are used in predictive simulations of liquid fuel injection in combustion chambers and help design more efficient and less polluting devices. The combustion quality strongly depends on the atomization process, from the separated-phase flow at the exit of the nozzle down to the cloud of fuel droplets characterized by a disperse-phase flow. Today, simulating all the physical scales involved in this process requires a major breakthrough in terms of modeling, numerical methods and high-performance computing (HPC). These three aspects are addressed in this thesis. First, we are interested in mixture models derived through Hamilton's variational principle and the second principle of thermodynamics. We enrich these models so that they can describe sub-scale pulsation mechanisms. Comparisons with experimental data in a context of bubbly flows allow us to assess the models and the methodology. Based on a geometrical study of the interface evolution, new directions are then proposed for further enriching the mixture models using the same methodology. Second, we propose a numerical strategy based on finite volume methods, composed of an operator splitting strategy, approximate Riemann solvers for the resolution of the convective part, and specific ODE solvers for the source terms. These methods have been adapted to handle several difficulties related to two-phase flows, such as the large acoustic impedance ratio, the stiffness of the source terms and low-Mach issues.

Moreover, a cell-based Adaptive Mesh Refinement (AMR) strategy is considered. This involves developing refinement criteria, setting the solution values on the new grids, and adapting the standard methods for regular structured grids to non-conforming grids. Finally, the scalability of this AMR tool relies on the p4est AMR library, which shows excellent scalability on several thousand cores. A code named CanoP has been developed that solves fluid dynamics equations on AMR grids. We show that CanoP can be used for future simulations of liquid atomization.

Fast algorithms for compressing electrically large volume integral equations and applications to thermal and quantum science and engineering

Yifan Wang (13175469) 29 July 2022
Among computational electromagnetic methods, Integral Equation (IE) solvers have a great capability for solving open-region problems such as scattering and radiation, since no truncation boundary condition is required. Volume Integral Equation (VIE) solvers are especially capable of handling arbitrarily shaped geometries and inhomogeneous materials. However, the numerical system resulting from a VIE analysis is dense, having $N^2$ nonzero elements for a problem of $N$ unknowns. This dense system, in conjunction with the large number of unknowns resulting from a volume discretization, prevents a practical use of the VIE for solving large-scale problems.

In this work, two fast algorithms of $O(N \log N)$ complexity to generate a rank-minimized $H^2$-representation for electrically large VIEs are developed. The algorithms systematically compress the off-diagonal admissible blocks of the full VIE matrix into low-rank forms with total storage $O(N)$. Both algorithms are based on nested cross approximation and are purely algebraic. The first is a two-stage algorithm. The second is optimized to use a single stage and achieves a significant speedup. Numerical experiments on electrically large examples with over 33 million unknowns demonstrate the efficiency and accuracy of the proposed algorithms.

Important applications of VIEs in thermal and quantum engineering have also been explored in this work. Creating a spin- (circularly) polarized infrared thermal radiation source without an external magnetic field is important in science and engineering. Here we study two materials, magnetic Weyl semimetals and manganese bismuth (MnBi), which both have permittivity tensors of large gyrotropy and can emit circularly polarized thermal radiation without an external magnetic field. We also design symmetry-broken metasurfaces, which show strong circularly polarized radiation in simulations and experiments. In spin-qubit quantum computing systems, metallic gates and antennas are necessary for quantum gate operations, but their presence greatly enhances evanescent-wave Johnson noise (EWJN), which induces the decay of spin qubits and limits quantum gate fidelity. Here we first use the VIE to accurately simulate realistic quantum gate designs and quantify the influence of this noise on gate fidelity.
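The cross-approximation idea behind the compression above can be sketched with a plain (non-nested) adaptive cross approximation on a stored kernel block. The production algorithms are more sophisticated: they evaluate matrix entries on demand and nest the bases across the cluster tree, which is what yields the $O(N)$ storage.

```python
import numpy as np

def aca(A, tol=1e-8, max_rank=30):
    # Adaptive cross approximation with full pivoting: build a low-rank
    # factorization U @ V of a block from sampled rows and columns
    # ("crosses"), never computing an SVD. Admissible off-diagonal blocks
    # are compressible precisely because they are numerically low-rank.
    R = np.array(A, dtype=float)          # explicit residual, for illustration
    us, vs = [], []
    for _ in range(max_rank):
        i, j = np.unravel_index(np.abs(R).argmax(), R.shape)
        if abs(R[i, j]) < tol:
            break
        u = R[:, j] / R[i, j]             # pivot column, scaled
        v = R[i, :].copy()                # pivot row
        us.append(u); vs.append(v)
        R = R - np.outer(u, v)            # peel off one rank-1 cross
    return np.array(us).T, np.array(vs)

# A smooth kernel between two well-separated 1D clusters is numerically low-rank.
x = np.linspace(0.0, 1.0, 60)
t = np.linspace(5.0, 6.0, 60)
A = 1.0 / np.abs(x[:, None] - t[None, :])
U, V = aca(A)
assert U.shape[1] < 20                                   # rank far below 60
assert np.linalg.norm(A - U @ V) < 1e-6 * np.linalg.norm(A)
```

The stored-residual version shown here costs as much as the dense block; the point of cross methods is that partial-pivoting variants touch only a few rows and columns of the original operator.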

A Computational Framework for Assessing and Optimizing the Performance of Observational Networks in 4D-Var Data Assimilation

Cioaca, Alexandru 04 September 2013
A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure for fusing information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning of redundant data, and identification of the most beneficial additional observations. These questions originate in operational data assimilation practice and have started to attract considerable interest in the recent past. This dissertation advances the state of knowledge in four-dimensional variational (4D-Var) data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying the observation impact include matrix-free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem: best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided.
The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as to reducing the operating costs of measuring networks, while preserving their ability to capture the essential features of the system under consideration. / Ph. D.
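A minimal sketch of the adjoint machinery that 4D-Var sensitivity analysis builds on, for a scalar linear model with direct observations. The error variances B and R are illustrative values; because the cost is quadratic in the initial condition, the adjoint gradient matches a central finite difference up to roundoff.

```python
import numpy as np

def fourdvar_cost_and_grad(x0, xb, ys, M=0.95, B=1.0, R=0.04):
    # 4D-Var cost and gradient for the scalar linear model x_{k+1} = M x_k
    # with direct observations y_k. The gradient comes from the adjoint
    # (backward) recursion, the same machinery that second-order observation
    # sensitivities extend. B, R: background/observation error variances.
    n = len(ys)
    xs = np.empty(n)
    xs[0] = x0
    for k in range(1, n):                     # forward model sweep
        xs[k] = M * xs[k - 1]
    J = 0.5 * (x0 - xb) ** 2 / B + 0.5 * np.sum((xs - ys) ** 2) / R
    lam = 0.0
    for k in reversed(range(n)):              # adjoint sweep: lam accumulates
        lam = M * lam + (xs[k] - ys[k]) / R   # sum_k M^k * innovation_k / R
    return J, (x0 - xb) / B + lam

rng = np.random.default_rng(1)
ys = rng.normal(size=10)
J, g = fourdvar_cost_and_grad(0.3, 0.0, ys)

# Finite-difference check of the adjoint gradient.
eps = 1e-6
Jp, _ = fourdvar_cost_and_grad(0.3 + eps, 0.0, ys)
Jm, _ = fourdvar_cost_and_grad(0.3 - eps, 0.0, ys)
assert abs((Jp - Jm) / (2 * eps) - g) < 1e-5
```

Observation-impact estimates differentiate the analysis (the minimizer of J) with respect to the observations ys, which is where the second-order adjoint enters.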

Strengthening the heart of an SMT solver: design and implementation of efficient decision procedures

Iguernelala, Mohamed 10 June 2013
This thesis tackles the problem of automatically proving the validity of mathematical formulas generated by program verification tools. In particular, it focuses on Satisfiability Modulo Theories (SMT): a young research topic that has seen great advances during the last decade. The solvers of this family have various applications in hardware design, program verification, model checking, etc. SMT solvers offer a good compromise between expressiveness and efficiency. They rely on a tight cooperation between a SAT solver and a combination of decision procedures for specific theories, such as the free theory of equality with uninterpreted symbols, linear arithmetic over integers and rationals, or the theory of arrays.

This thesis aims at improving the efficiency and the expressiveness of the Alt-Ergo SMT solver. To this end, we designed a new decision procedure for the theory of linear integer arithmetic. This procedure is inspired by Fourier-Motzkin's method, but it uses a rational simplex to perform computations in practice. We also designed a new combination framework, capable of reasoning in the union of the free theory of equality, the AC theory of associative and commutative symbols, and an arbitrary signature-disjoint Shostak theory. This framework is a modular and non-intrusive extension of the ground AC completion procedure with the given Shostak theory. In addition, we extended Alt-Ergo with existing decision procedures to integrate additional interesting theories, such as the theory of enumerated data types and the theory of arrays. Finally, we explored preprocessing techniques for formula simplification as well as enhancements of Alt-Ergo's SAT solver.
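The Fourier-Motzkin procedure that inspired the arithmetic decision procedure can be sketched directly. This is the exponential textbook elimination over the rationals, not Alt-Ergo's simplex-based implementation.

```python
from fractions import Fraction

def fourier_motzkin(constraints, var):
    # Eliminate `var` from constraints sum_i a_i*x_i <= b, each written as
    # (coeffs: dict, b). After eliminating every variable, the system is
    # infeasible over the rationals iff some residual constraint reads
    # 0 <= b with b < 0.
    lower, upper, out = [], [], []
    for coeffs, b in constraints:
        a = coeffs.get(var, Fraction(0))
        if a > 0:
            upper.append((coeffs, b, a))    # gives an upper bound on var
        elif a < 0:
            lower.append((coeffs, b, a))    # gives a lower bound on var
        else:
            out.append((coeffs, b))
    for lc, lb, la in lower:
        for uc, ub, ua in upper:
            # Combine one lower and one upper bound on var (la < 0 < ua).
            coeffs = {}
            for v in set(lc) | set(uc):
                if v == var:
                    continue
                c = lc.get(v, Fraction(0)) * ua - uc.get(v, Fraction(0)) * la
                if c != 0:
                    coeffs[v] = c
            out.append((coeffs, lb * ua - ub * la))
    return out

F = Fraction
sys0 = [({'x': F(1), 'y': F(1)}, F(4)),   # x + y <= 4
        ({'x': F(-1)}, F(-1)),            # x >= 1
        ({'y': F(-1)}, F(-2))]            # y >= 2
sys2 = fourier_motzkin(fourier_motzkin(sys0, 'x'), 'y')
assert all(not c and b >= 0 for c, b in sys2)   # feasible (e.g. x = 1, y = 2)
```

Each elimination can square the number of constraints, which is why a practical procedure, like the one described in the thesis, delegates the actual computations to a rational simplex while keeping Fourier-Motzkin's spirit for integer reasoning.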
