About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Análise da distribuição do número de operações de resolvedores SAT / Distribution's analysis of operations's number of SAT solvers

Reis, Poliana Magalhães 28 February 2012 (has links)
In the study of the complexity of computational problems, two classes stand out, known as P and NP. The question of whether P = NP is one of the greatest unsolved problems in theoretical computer science and contemporary mathematics. The SAT problem was the first problem recognized as NP-complete; it consists in checking whether a given formula of classical propositional logic is satisfiable. Implementations of algorithms that solve SAT problems are known as SAT solvers. Many applications in computer science can be handled with SAT solvers, as can other NP-complete problems that reduce to SAT, such as graph coloring, scheduling, and planning problems. Among the most efficient SAT-solving algorithms are Sato, Grasp, Chaff, MiniSat, and Berkmin. The Chaff algorithm is based on the DPLL algorithm, which has existed for more than 40 years and remains the most widely used strategy in SAT solvers. This dissertation presents an in-depth study of the behavior of zChaff (a very efficient implementation of Chaff) to determine what to expect from its executions in general.
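
As an illustration of the DPLL procedure mentioned in this abstract, below is a minimal satisfiability check in Python. It is a sketch only, not the zChaff implementation the dissertation studies; the DIMACS-style clause encoding and the example formula are chosen for illustration.

```python
# Minimal DPLL satisfiability check (illustrative sketch, not zChaff).
# A formula is a list of clauses; each clause is a list of nonzero ints,
# where n means variable n is true and -n means it is false (DIMACS style).

def dpll(clauses, assignment=None):
    if assignment is None:
        assignment = {}

    # Unit propagation: repeatedly assign literals forced by unit clauses.
    changed = True
    while changed:
        changed = False
        simplified = []
        for clause in clauses:
            lits, satisfied = [], False
            for lit in clause:
                var, val = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == val:
                        satisfied = True
                        break
                else:
                    lits.append(lit)
            if satisfied:
                continue
            if not lits:
                return None              # empty clause -> conflict
            if len(lits) == 1:           # unit clause forces an assignment
                assignment[abs(lits[0])] = lits[0] > 0
                changed = True
            simplified.append(lits)
        clauses = simplified

    if not clauses:
        return assignment                # every clause is satisfied

    # Branch on the first unassigned variable, trying true then false.
    var = abs(clauses[0][0])
    for val in (True, False):
        trial = dict(assignment)
        trial[var] = val
        result = dpll(clauses, trial)
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))
```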
13

Fast High-order Integral Equation Solvers for Acoustic and Electromagnetic Scattering Problems

Alharthi, Noha 18 November 2019 (has links)
Acoustic and electromagnetic scattering from arbitrarily shaped structures can be numerically characterized by solving various surface integral equations (SIEs). One of the most effective techniques for solving SIEs is the Nyström method. Compared to other existing methods, the Nyström method is easier to implement, especially when the geometrical discretization is non-conforming and higher-order representations of the geometry and unknowns are desired. However, singularities of the Green's function are more difficult to "manage" since they are not "smoothened" through the use of a testing function. This dissertation describes purely numerical schemes to account for the different orders of singularities that appear in acoustic and electromagnetic SIEs when they are solved by a high-order Nyström method on a mesh of curved discretization elements. These schemes make use of two sets of basis functions to smoothen singular integrals: the grid-robust high-order Lagrange and the high-order Silvester-Lagrange interpolation basis functions. Numerical results comparing the convergence of the two schemes are presented. Moreover, an extremely scalable implementation of the fast multipole method (FMM) is developed to efficiently (and iteratively) solve the linear systems resulting from the discretization of the acoustic SIEs by the Nyström method. The implementation achieves O(N log N) complexity for high-frequency scattering problems. This FMM-accelerated solver can handle N = 2 billion unknowns on a 200,000-core Cray XC40 with 85% strong-scaling efficiency. Iterative solvers are often ineffective for ill-conditioned problems, so a fast direct (LU) solver, which makes use of low-rank matrix approximations, is also developed. This solver relies on the tile low-rank (TLR) data compression format, as implemented in the Hierarchical Computations on Manycore Architectures (HiCMA) library. This requires taskifying the underlying SIE kernels to expose fine-grained computations. The resulting asynchronous execution weakens the artifactual synchronization points while mitigating the overhead of data motion. We compare the performance of our TLR LU factorization against state-of-the-art dense factorizations on shared-memory systems, achieving up to a fourfold speedup on a 3D acoustic problem with up to 150K unknowns in double-complex arithmetic.
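
To illustrate the Nyström method named above, here is a minimal sketch for a 1D Fredholm integral equation of the second kind with a smooth, made-up kernel; the dissertation's contribution concerns the harder case of singular Green's-function kernels on curved 3D elements.

```python
# Nyström discretization of phi(x) - \int_0^1 K(x,t) phi(t) dt = f(x):
# replace the integral by a quadrature rule and collocate at the nodes,
# yielding the dense linear system (I - K diag(w)) phi = f.
import numpy as np

n = 32
# Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1].
x, w = np.polynomial.legendre.leggauss(n)
x = 0.5 * (x + 1.0)
w = 0.5 * w

K = np.exp(-np.abs(x[:, None] - x[None, :]))   # smooth model kernel K(x,t)

# Manufactured solution phi(t) = sin(pi t): build f so the error is checkable.
phi_exact = np.sin(np.pi * x)
f = phi_exact - K @ (w * phi_exact)

A = np.eye(n) - K * w[None, :]                 # collocation system
phi = np.linalg.solve(A, f)

print("max error:", np.max(np.abs(phi - phi_exact)))
```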
14

Algorithmes sur GPU de visualisation et de calcul pour des maillages non-structurés / Algorithms on the GPU for visualization and computations on unstructured grids

Buatois, Luc 16 May 2008 (has links)
Many domains, including recent work in geometry processing and computational fluid dynamics (CFD), now use new types of grids made of arbitrary polyhedra, in other words strongly unstructured grids. This thesis proposes new tools for visualization and computation on such grids. Visualizing grids whose geometry and topology vary poses the problems of how to store the data and how algorithms can adapt to such variability; computing on them poses the problem of solving large, sparse, unstructured linear systems. The ever-growing parallel power of GPUs makes them increasingly valuable for these tasks, but using GPUs requires new algorithms adapted to their specific parallel programming models. In CFD simulations, these grids may carry scalar or vector fields representing physical properties (for example density, porosity, or permeability). Our contributions are: (1) an efficient generic visualization method that exploits the GPU's computing power to accelerate isosurface extraction from large, strongly unstructured grids; (2) an adaptive cell-classification method that accelerates isosurface extraction by pre-selecting only the intersected cells; (3) an efficient algorithm for temporal interpolation of isosurfaces, which allows the evolution of isosurfaces to be visualized continuously through time; (4) a massively parallel algorithm for solving large, sparse, unstructured linear systems on the GPU. The originality of the latter lies in its adaptation to matrices with arbitrary sparsity patterns, which makes it applicable to any sparse system, including those arising from strongly unstructured meshes.
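
A small CPU/NumPy stand-in for contribution (2), the cell pre-selection step: only cells whose vertex-value range straddles the isovalue can intersect the isosurface. The field and grid here are synthetic placeholders for the GPU version described in the thesis.

```python
# Classify cells before isosurface extraction: keep only cells whose
# min/max vertex values bracket the isovalue, so the (expensive)
# extraction kernel runs on a small fraction of the grid.
import numpy as np

rng = np.random.default_rng(0)
n_cells, max_verts = 100_000, 8
# Per-cell vertex samples of the scalar field, padded to max_verts entries
# (real strongly unstructured cells have varying vertex counts).
base = rng.random(n_cells)                            # smooth per-cell value
values = base[:, None] + 0.05 * rng.random((n_cells, max_verts))

isovalue = 0.5
cell_min, cell_max = values.min(axis=1), values.max(axis=1)
active = np.flatnonzero((cell_min <= isovalue) & (isovalue <= cell_max))

print(f"{active.size} of {n_cells} cells can intersect the isosurface")
```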
15

Modélisation et simulation numérique des écoulements diphasiques métastables / Modelling and numerical simulation of metastable two-phase flows

De lorenzo, Marco 28 May 2018 (has links)
This Ph.D. thesis deals with the metastable two-phase flows typical of accidental transients that could occur in nuclear power plants. These phenomena are difficult to treat because of the topological complexity of the flow, the interphase transfers, and the strong coupling between thermodynamic features and mathematical aspects. The methods in industrial use today do not fully describe the complexity of these flows because they rely on overly simple models; in particular, they do not account for the thermo-chemical disequilibrium between liquid water and its vapor. On the other hand, the hyperbolic methods recently proposed in the literature for simulating metastable flows cannot be used in industry because they rely on simple equations of state that are not adequate for industrial calculations. The purpose of this thesis is to develop a new approach that couples modern hyperbolic methods with accurate equations of state. The final product of this work is a new model for the industrial analysis of metastable two-phase flows that incorporates novel techniques for computing interfacial transfers and steam-water properties, and that remains computationally affordable for industrial configurations. The methods developed in this thesis have been systematically verified against exact solutions and validated using experimental data from the literature.
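
To sketch the hyperbolic finite-volume building block this abstract refers to, below is a minimal first-order Rusanov scheme for the 1D Euler equations with an ideal-gas equation of state as a stand-in; the thesis couples such methods to accurate steam-water equations of state and interphase-transfer terms, both omitted here.

```python
# First-order finite-volume scheme with the Rusanov (local Lax-Friedrichs)
# flux for the 1D Euler equations, run on the Sod shock tube.
import numpy as np

gamma = 1.4  # ideal-gas stand-in for the thesis's steam-water EOS

def primitive_to_conserved(rho, u, p):
    E = p / (gamma - 1.0) + 0.5 * rho * u**2
    return np.array([rho, rho * u, E])

def flux(U):
    rho, m, E = U
    u = m / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([m, m * u + p, (E + p) * u])

def max_speed(U):
    rho, m, E = U
    u = m / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return np.abs(u) + np.sqrt(gamma * p / rho)   # |u| + sound speed

# Sod shock tube initial data on [0, 1].
n = 200
x = (np.arange(n) + 0.5) / n
U = np.where(x < 0.5,
             primitive_to_conserved(1.0, 0.0, 1.0)[:, None],
             primitive_to_conserved(0.125, 0.0, 0.1)[:, None])

dx, t, t_end = 1.0 / n, 0.0, 0.2
while t < t_end:
    dt = min(0.9 * dx / max_speed(U).max(), t_end - t)   # CFL time step
    UL, UR = U[:, :-1], U[:, 1:]
    s = np.maximum(max_speed(UL), max_speed(UR))
    F = 0.5 * (flux(UL) + flux(UR)) - 0.5 * s * (UR - UL)
    U[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])       # update interior cells
    t += dt

print("density range:", U[0].min(), U[0].max())
```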
16

Computation offloading for algorithms in absence of the Cloud

Sthapit, Saurav January 2018 (has links)
Mobile cloud computing is a way of delegating complex algorithms from a mobile device to the cloud in order to complete tasks quickly and save energy on the device. However, the cloud may not always be available or suitable for helping; in a battlefield scenario, for example, the cloud may be unreachable. This work considers neighbouring devices as alternatives to the cloud for offloading computation and presents three key contributions: a comprehensive investigation of the trade-off between computation and communication, a multi-objective-optimisation approach to offloading, and queuing-theory-based algorithms that demonstrate the benefits of offloading to neighbours. Initially, the states of neighbouring devices are assumed known, and the computation-offloading decision is posed as a multi-objective optimisation problem, for which novel Pareto-optimal solutions are proposed. The results on a simulated dataset show up to a 30% performance improvement even when cloud computing is unavailable. Information about the environment, however, is seldom known completely. Chapter 5 therefore considers a realistic environment with delayed node-state information and partially connected sensors. The network of sensors is modelled as a network of queues (an open Jackson network), and the offloading problem is posed as a minimum-cost problem solved with linear solvers. In addition to the simulated dataset, the proposed solution is tested on a real computer-vision dataset. Experiments on the random-waypoint dataset show up to a 33% performance boost, whereas on the real dataset, by exploiting the temporal and spatial distribution of the targets, a significantly higher performance increase is achieved.
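
A toy sketch of the offloading decision described above: a weighted-sum scalarization comparing local execution against shipping a task to a neighbour. The device parameters are hypothetical, and the thesis itself derives Pareto-optimal solutions and an open Jackson network model rather than this simplification.

```python
# Compare finishing a task locally against offloading it to a neighbouring
# device, trading completion time against local energy use.

def local_cost(cycles, cpu_hz, power_w, w_time=0.5, w_energy=0.5):
    t = cycles / cpu_hz                  # seconds to compute locally
    e = power_w * t                      # joules burned locally
    return w_time * t + w_energy * e

def offload_cost(data_bits, link_bps, tx_power_w,
                 cycles, peer_hz, w_time=0.5, w_energy=0.5):
    t_tx = data_bits / link_bps          # time to transmit the input data
    t = t_tx + cycles / peer_hz          # transmit, then the peer computes
    e = tx_power_w * t_tx                # we only pay energy for the radio
    return w_time * t + w_energy * e

# Hypothetical task: 2e9 CPU cycles, 8 Mbit of input data.
c_local = local_cost(2e9, cpu_hz=1e9, power_w=2.0)
c_peer = offload_cost(8e6, link_bps=10e6, tx_power_w=0.8,
                      cycles=2e9, peer_hz=4e9)
print("offload" if c_peer < c_local else "run locally",
      f"(local={c_local:.2f}, offload={c_peer:.2f})")
```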
17

Test generation and animation based on object-oriented specifications.

Krieger, Matthias 09 December 2011 (has links) (PDF)
The goal of this thesis is the development of support for test generation and animation based on object-oriented specifications. We aim in particular to take advantage of state-of-the-art satisfiability-solving techniques by using an appropriate representation of object-oriented data. While automated test generation seeks a large set of data on which to execute an implementation, animation performs computations that comply with a specification based on user-provided input data; animation is a valuable technique for validating specifications. As a foundation for this work, we present clarifications and a partial formalization of the Object Constraint Language (OCL), as well as some extensions that allow for test generation and animation based on OCL specifications. For test generation, we have implemented several enhancements to HOL-TestGen, a tool built on top of the Isabelle theorem-proving system that generates tests from specifications in Higher-Order Logic (HOL). We show how SMT solvers can be used to solve various types of constraints in HOL and present a modular approach to case splitting for deriving test cases; the latter facilitates the introduction of splitting rules tailored to object-oriented specifications. For animation, we implemented the tool OCLexec. From operation contracts, OCLexec generates corresponding Java implementations that call an SMT-based constraint solver at runtime.
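
To illustrate SMT-based constraint solving for test generation, here is a sketch using the z3-solver Python bindings (an assumption; the thesis builds on HOL-TestGen and OCLexec, not this API). The withdraw contract is a made-up OCL-style example.

```python
# Generate test inputs satisfying an OCL-style operation contract by
# handing the constraints to an SMT solver and enumerating models.
from z3 import Ints, Or, Solver, sat

balance, amount, result = Ints("balance amount result")
s = Solver()
# Contract for withdraw(amount):
#   pre:  amount > 0 and amount <= balance
#   post: result = balance - amount
s.add(amount > 0, amount <= balance, result == balance - amount)

tests = []
while len(tests) < 3 and s.check() == sat:
    m = s.model()
    tests.append((m[balance].as_long(), m[amount].as_long()))
    # Block this input pair so the next check yields a fresh test case.
    s.add(Or(balance != m[balance], amount != m[amount]))

print(tests)
```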
18

An Immersed Interface Method for the Incompressible Navier-Stokes Equations in Irregular Domains

Le, Duc-Vinh, Khoo, Boo Cheong, Peraire, Jaime 01 1900 (has links)
We present an immersed interface method for the incompressible Navier-Stokes equations capable of handling rigid immersed boundaries. The immersed boundary is represented by a set of Lagrangian control points. To guarantee that the no-slip condition on the boundary is satisfied, singular forces are applied to the fluid at the immersed boundary. The forces are related to the jumps in pressure and the jumps in the derivatives of both pressure and velocity, and are interpolated using cubic splines. The strengths of the singular forces are determined by solving a small system of equations at each time step. The Navier-Stokes equations are discretized on a staggered Cartesian grid by a second-order accurate projection method for pressure and velocity. / Singapore-MIT Alliance (SMA)
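
A small sketch of one ingredient named in the abstract: interpolating force strengths given at Lagrangian control points along a closed boundary with a periodic cubic spline. It assumes SciPy, and the circular boundary and force values are made up; the actual method also computes the jump conditions and the projection step.

```python
# Periodic cubic-spline interpolation of force strengths sampled at
# Lagrangian control points on a closed immersed boundary.
import numpy as np
from scipy.interpolate import CubicSpline

# Control points parameterized by angle on a circular boundary.
theta = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
force = np.cos(2.0 * theta) + 0.3 * np.sin(theta)   # made-up force strengths

# Periodic spline: append the first sample at theta = 2*pi to close the curve.
spline = CubicSpline(np.append(theta, 2.0 * np.pi),
                     np.append(force, force[0]),
                     bc_type="periodic")

# The interpolated force density can now be evaluated anywhere on the boundary.
fine = np.linspace(0.0, 2.0 * np.pi, 200)
print("max interpolated force:", spline(fine).max())
```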
19

Efficient Reasoning Techniques for Large Scale Feature Models

Mendonca, Marcilio January 2009 (has links)
In Software Product Lines (SPLs), a feature model can be used to represent the similarities and differences within a family of software systems, allowing each system derived from the product line to be described as a unique combination of the features in the model. What makes feature models particularly appealing is that the constraints in the model prevent incompatible features from being part of the same product. Despite these benefits, constructing and maintaining feature models can be a laborious task, especially in product lines with a large number of features and constraints. As a result, the study of automated techniques to reason on feature models has become an important research topic in the SPL community in recent years. Two techniques in particular hold significant appeal for researchers: SAT solvers and Binary Decision Diagrams (BDDs). Each has been applied successfully for over four decades to tackle practical combinatorial problems in various domains, and several approaches have proposed compiling feature models to specific logic representations to enable the use of SAT solvers and BDDs. In this thesis, we argue that several critical issues related to the use of SAT solvers and BDDs have been consistently neglected. For instance, satisfiability is a well-known NP-complete problem, which means that, in theory, a SAT solver might be unable to check the satisfiability of a feature model in a feasible amount of time. Similarly, it is widely known that the size of BDDs can become intractable for large models. At the same time, we do not currently know whether these are real issues when feature models, especially large ones, are compiled to SAT and BDD representations. Our research therefore takes a significant step forward by deeply examining relevant properties of the feature-modeling domain, the mechanics of SAT solvers and BDDs, and the sensitive issues that arise when these techniques are applied in that domain. Specifically, we provide more accurate explanations for the space and/or time (in)tractability of these techniques in the feature-modeling domain and enhance their algorithmic performance for reasoning on feature models. The contributions of this work include novel heuristics to reduce the size of BDDs compiled from feature models, several insights on the construction of efficient domain-specific reasoning algorithms for feature models, and empirical studies evaluating the efficiency of SAT solvers in handling very large feature models.
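
As a baseline illustration of reasoning on feature models, the sketch below brute-forces a tiny made-up model by enumerating all feature combinations; SAT solvers and BDDs replace exactly this enumeration at the scale the thesis targets.

```python
# A toy feature model checked by brute force: enumerate every feature
# combination and keep those satisfying the model's constraints.
from itertools import product

features = ["car", "engine", "gas", "electric", "autopilot"]

def valid(c):
    return (c["car"]                                        # root is mandatory
            and c["engine"] == c["car"]                     # car requires engine
            and c["engine"] == (c["gas"] or c["electric"])  # or-group under engine
            and not (c["gas"] and c["electric"])            # alternative (xor)
            and (not c["autopilot"] or c["electric"]))      # autopilot -> electric

configs = [dict(zip(features, bits))
           for bits in product([False, True], repeat=len(features))]
products = [c for c in configs if valid(c)]
print(len(products), "valid products out of", len(configs))  # 3 out of 32
```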
