31 |
Contribution à l'évaluation des risques liés au TMD (transport de matières dangereuses) en prenant en compte les incertitudes / Contribution to the risk assessment related to DGT (dangerous goods transportation) by taking into account uncertainties
Safadi, El Abed El (09 July 2015)
When an accidental event occurs, the process of technological risk assessment, in particular for Dangerous Goods Transportation (DGT), consists in assessing the potential risk level of the impacted areas so that prevention and protection actions (containment, evacuation, ...) can be sized and taken quickly, with the aim of reducing and controlling the effects on people and the environment. The first issue of this work is therefore to evaluate the risk level of areas exposed to dangerous goods transportation. The quantification of the intensity of the occurring phenomena needed for this evaluation relies on effect models (analytical expressions or computer code). For the dispersion of toxic products, these models mainly take inputs drawn from different databases, such as exposure data and meteorological data. The second issue lies in the uncertainties affecting some of these inputs. To correctly map the danger zone, where the risk level is judged too high, these input uncertainties must be identified and propagated through the effect model so as to obtain a reliable evaluation of the risk level. A first phase of this work evaluates and propagates the uncertainty on the concentration induced by the uncertain inputs of the dispersion models. Two approaches are used to model and propagate the uncertainties: a set-membership approach based on interval calculus, applicable to analytical models, and the more classical probabilistic (Monte Carlo) approach, usable whether the dispersion model is an analytic expression or a computer code. The objective is to compare the two approaches and identify their advantages and drawbacks in terms of precision and computation time. To produce the risk maps, two dispersion models (Gaussian and SLAB) are used to evaluate the risk intensity in the contaminated area. The mapping itself is addressed with a probabilistic method (Monte Carlo), which inverts the effect model, and with a generic set-membership method, which formulates the problem as a constraint satisfaction problem (CSP) and solves it by set inversion. The second phase establishes a general methodology for building the risk maps and improving performance in terms of computation time and precision. This methodology rests on three steps: first, a preliminary analysis of the effect models used; second, a new uncertainty-propagation approach mixing the probabilistic and set-membership approaches, which combines the advantages of both and applies to any spatialized, static effect model; and finally, the construction of the maps by inverting the effect models. The sensitivity analysis in the first step classically applies to probabilistic models; we discuss the validity of Sobol-type indices for interval models and propose a new, purely interval-based sensitivity index.
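To make the two propagation schemes concrete, the sketch below contrasts them on a toy ground-level Gaussian plume formula. The formula, the input ranges and the uniform sampling are illustrative assumptions, not the models used in the thesis; the interval bounds are exact here only because this particular formula is monotone in each uncertain input, which lets the set-membership bounds be read off the corners of the input box.

```python
import numpy as np

def plume(Q, u, y, sigma_y=20.0, sigma_z=10.0):
    """Toy ground-level Gaussian plume: concentration at crosswind offset y
    for release rate Q [kg/s] and wind speed u [m/s]."""
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * np.exp(-y**2 / (2 * sigma_y**2))

# Uncertain inputs given as intervals (illustrative values).
Q_lo, Q_hi = 0.8, 1.2   # release rate bounds
u_lo, u_hi = 2.0, 5.0   # wind speed bounds
y = 30.0                # receptor offset [m]

# Probabilistic propagation (Monte Carlo): sample the input box uniformly.
rng = np.random.default_rng(0)
n = 100_000
c = plume(rng.uniform(Q_lo, Q_hi, n), rng.uniform(u_lo, u_hi, n), y)
print("Monte Carlo 1%-99% range:", np.percentile(c, [1, 99]))

# Set-membership propagation: this formula is increasing in Q and
# decreasing in u, so the exact bounds sit at corners of the input box.
print("guaranteed interval:", plume(Q_lo, u_hi, y), plume(Q_hi, u_lo, y))
```

The contrast mirrors the trade-off discussed above: Monte Carlo yields a distribution but no guarantee beyond its samples, while the interval result is guaranteed but can be pessimistic for non-monotone models.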
|
32 |
CSP, grafy a algebry / Constraint satisfaction, graphs and algebras
Bulín, Jakub (January 2014)
This thesis consists of three papers in the area of the algebraic approach to the constraint satisfaction problem. In the first paper, joint work with Delić, Jackson and Niven, we study the reduction of fixed-template CSPs to digraphs. We construct, for every relational structure A, a digraph D(A) such that CSP(A) and CSP(D(A)) are logspace equivalent and most relevant properties carry over from A to D(A). As a consequence, the algebraic conjectures characterizing CSPs solvable in P, NL and L are equivalent to their restrictions to digraphs. In the second paper we prove that, given a core relational structure A of bounded width and B ⊆ A, it is decidable whether B is an absorbing subuniverse of the algebra of polymorphisms of A. As a by-product, we show that Jónsson absorption coincides with the usual absorption in this case. In the third paper we establish, using modern algebraic tools (e.g. absorption theory and pointing operations), the CSP dichotomy conjecture for so-called special oriented trees and in particular prove that all tractable core special trees have bounded width. Keywords: constraint satisfaction problem, algebra of polymorphisms, absorbing subuniverse, bounded width, oriented tree.
|
33 |
Constraint-based design : two-dimensional insulating panels configuration / Conception sous contraintes : configuration de panneaux isolants à deux dimensions
Barco Santa, Andrés Felipe (20 September 2016)
The research presented in this thesis falls within the problem of supporting the design of thermal insulating envelopes for the renovation of collective residential buildings. These insulating envelopes are composed of rectangular multi-functional panels, configurable and prefabricated in the factory. Their design rests on the following five characteristics. First, the number of panels needed to build an envelope, as well as their sizes, are not known at the start of the renovation (although the sizes are bounded). Second, because of manufacturing constraints, every window and every door on the facade to be renovated must be inserted into one and only one panel. Third, panels are attached to specific areas of the facade that are strong enough to support their weight, called supporting areas. Fourth, neither holes (uncovered areas) nor overlaps between panels are allowed. Fifth, to ensure efficient thermal insulation while minimizing cost, envelopes should use a minimum number of panels. In view of the complexity of this problem, we restrict our research to rectangular facades with rectangular joinery and supporting areas. Given the five stated characteristics and the rectangularity assumption on all elements (panels, facades, joinery, supporting areas), envelope design is both a two-dimensional cutting-and-packing problem and a configuration problem. It is formalized and treated as a constraint satisfaction problem whose purpose is to support the design of such insulating envelopes. As such, the thesis makes two major contributions. First, given the original features of the building-renovation problem, its description and formalization as a constraint satisfaction problem. Second, constraint-based algorithmic solutions: in particular, the thesis presents two manual and three automatic solutions for the insulating-envelope design problem.
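As a concrete illustration of the constraint model, here is a deliberately one-dimensional sketch: panels become segments over a facade of integer width, and the no-hole, no-overlap, window-in-one-panel and minimum-count requirements reduce to choosing joint positions. All dimensions are made-up values, and the thesis treats the full two-dimensional problem with supporting areas, so this is a sketch of the constraints, not of the thesis's algorithms.

```python
def min_panels(width, windows, max_w):
    """Fewest panels covering the facade segment [0, width]: joints may not
    fall strictly inside a window, so each window ends up in one panel;
    contiguous joints give no holes and no overlaps by construction."""
    best = [None]
    def dfs(x, joints):
        if best[0] is not None and len(joints) >= len(best[0]):
            return                      # cannot improve on the incumbent
        if x == width:
            best[0] = joints            # complete, hole-free layout
            return
        for nxt in range(min(x + max_w, width), x, -1):   # widest first
            if all(not (a < nxt < b) for a, b in windows):
                dfs(nxt, joints + [nxt])
    dfs(0, [])
    return best[0]

# Facade of width 10 with two windows; panels at most 4 units wide.
print(min_panels(10, windows=[(2, 4), (6, 9)], max_w=4))  # -> [4, 6, 10]
```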
|
34 |
Harnessing tractability in constraint satisfaction problems / Algorithmes paramétrés pour des problèmes de satisfaction de contraintes presque traitables
Carbonnel, Clément (07 December 2016)
The Constraint Satisfaction Problem (CSP) is a fundamental NP-complete problem with many applications in artificial intelligence. It has enjoyed considerable scientific attention in the past decades thanks to its practical usefulness and the deep theoretical questions it raises. However, a wide gap has grown between practitioners, who develop solving techniques that are efficient on industrial instances but exponential in the worst case, and theorists, who design sophisticated polynomial-time algorithms for restrictions of CSP defined by algebraic properties but of unproven practical interest. This thesis attempts to bridge the gap by providing polynomial-time algorithms that automatically test membership of a CSP instance in a selection of major tractable classes. Anticipating that real instances may only rarely fall into these classes, we also systematically investigate the possibility of efficiently decomposing an instance into tractable subproblems, through the lens of parameterized complexity. Finally, we propose a general framework that adapts kernelization, a concept central to parameterized complexity but hitherto rarely used in practice, to the context of constraint reasoning. Preliminary experiments on this last contribution show promising results.
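Kernelization is easier to see on a classic parameterized problem than on CSP itself, so the sketch below uses Vertex Cover as a stand-in (an assumption of this illustration, not the thesis's construction). Buss's rule forces any vertex whose degree exceeds the remaining budget into the cover; a YES-instance that survives the rule keeps at most k² edges.

```python
def buss_kernel(edges, k):
    """Buss's kernelization for k-Vertex-Cover: a vertex whose degree
    exceeds the remaining budget must be in any cover of size <= k.
    Returns (remaining edges, forced vertices) or None for a sure NO."""
    edges = {frozenset(e) for e in edges}
    forced = set()
    while True:
        budget = k - len(forced)
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        high = next((v for v, d in deg.items() if d > budget), None)
        if high is None:
            break
        forced.add(high)                      # high-degree vertex is forced
        edges = {e for e in edges if high not in e}
    if len(forced) > k or len(edges) > (k - len(forced)) ** 2:
        return None  # kernel too large: provably no cover of size k
    return edges, forced

# Star with centre 0 plus one extra edge; k = 2 forces vertex 0.
print(buss_kernel([(0, i) for i in range(1, 6)] + [(1, 2)], k=2))
```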
|
35 |
Acceleration of Cognitive Domain Ontologies
Atahary, Tanvir (17 May 2016)
No description available.
|
36 |
The smallest hard trees
Bodirsky, Manuel, Bulín, Jakub, Starke, Florian, Wernthaler, Michael (08 November 2024)
We find an orientation of a tree with 20 vertices such that the corresponding fixed-template constraint satisfaction problem (CSP) is NP-complete, and prove that for every orientation of a tree with fewer vertices the corresponding CSP can be solved in polynomial time. We also compute the smallest tree that is NL-hard (assuming L≠NL), the smallest tree that cannot be solved by arc consistency, and the smallest tree that cannot be solved by Datalog. Our experimental results also support a conjecture of Bulín concerning a question of Hell, Nešetřil and Zhu, namely that ‘easy trees lack the ability to count’. Most proofs are computer-based and make use of the most recent universal-algebraic theory on the complexity of finite-domain CSPs. However, further ideas are required because of the huge number of orientations of trees. In particular, we use the well-known fact that it suffices to study orientations of trees that are cores, and we show how to decide efficiently whether a given orientation of a tree is a core using the arc-consistency procedure. Moreover, we present a method for generating orientations of trees that are cores which works well in practice. In this way we found interesting examples for the open research problem of classifying which finite-domain CSPs lie in NL.
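The arc-consistency procedure mentioned above is short enough to sketch in full. The version below is a simplified AC-3 for fixed-template digraph CSPs: variables are the vertices of the input digraph G, domains range over the template H, and every arc of G requires its image to be an arc of H. The re-queueing policy is cruder than a production implementation.

```python
from collections import deque

def arc_consistency(G_arcs, H_vertices, H_arcs):
    """Prune domains for the CSP 'is there a homomorphism G -> H?'.
    Returns the pruned domains, or None if some domain empties
    (a certain NO); non-empty domains alone do not certify YES."""
    H_arcs = set(H_arcs)
    dom = {v: set(H_vertices) for arc in G_arcs for v in arc}
    queue = deque(G_arcs)
    while queue:
        u, v = queue.popleft()
        # keep only values that still have a support across this arc
        new_u = {a for a in dom[u] if any((a, b) in H_arcs for b in dom[v])}
        new_v = {b for b in dom[v] if any((a, b) in H_arcs for a in dom[u])}
        if new_u != dom[u] or new_v != dom[v]:
            if not new_u or not new_v:
                return None
            dom[u], dom[v] = new_u, new_v
            # revisit every arc that can see the shrunken domains
            queue.extend(a for a in G_arcs if u in a or v in a)
    return dom

# The path 0->1->2 has no homomorphism into the single arc a->b
# (the middle vertex needs both an in- and an out-neighbour): prints None.
print(arc_consistency([(0, 1), (1, 2)], ["a", "b"], [("a", "b")]))
```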
|
37 |
AN EMPIRICAL STUDY OF DIFFERENT BRANCHING STRATEGIES FOR CONSTRAINT SATISFACTION PROBLEMS
Park, Vincent Se-jin (January 2004)
Many real-life problems can be formulated as constraint satisfaction problems (CSPs). Backtracking search algorithms are usually employed to solve CSPs, and in backtracking search the choice of branching strategy can be critical, since it specifies how the algorithm instantiates a variable and how a problem is reduced into subproblems; that is, it defines the search tree. In spite of the apparent importance of the branching strategy, there have been only a few empirical studies of different branching strategies, and they were all tested exclusively on numerical constraints. In this thesis, we employ the three most commonly used branching strategies for solving finite-domain CSPs. First, a branching strategy with strong commitment assigns its variables early in the search, as in k-way branching. Second, 2-way branching guides the search by branching on one side with a variable assignment and on the other with the elimination of the assigned value. Third, the domain-splitting strategy, based on the least-commitment principle, branches by dividing a variable's domain rather than by assigning a single value. In our experiments, we compared the efficiency of the different branching strategies in terms of execution time and number of choice points when solving finite-domain CSPs. Interestingly, our experiments provide evidence that the choice of branching strategy for finite-domain problems does not matter much in most cases, provided an effective variable-ordering heuristic is used, as domain splitting and 2-way branching end up simulating k-way branching. However, for an optimization problem with large domain sizes, the branching strategy with the least-commitment principle can be more efficient than the others. This empirical study will hopefully encourage other practitioners to take different branching schemes into consideration when designing heuristics.
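The three strategies are easy to contrast in code. The toy solver below takes the strategy as a parameter and uses a smallest-domain-first variable ordering; the data structures and the support-based pruning are illustrative simplifications, not the experimental setup of the thesis.

```python
def solve(domains, constraints, branching="2way"):
    """Toy backtracking solver; `constraints` maps a variable pair (u, v)
    to a predicate on their values. Returns one solution or None."""
    if any(len(d) == 0 for d in domains.values()):
        return None
    if not all(any(c(a, b) for a in domains[u] for b in domains[v])
               for (u, v), c in constraints.items()):
        return None                      # a constraint lost all support
    if all(len(d) == 1 for d in domains.values()):
        return {v: d[0] for v, d in domains.items()}
    var = min((v for v in domains if len(domains[v]) > 1),
              key=lambda v: len(domains[v]))    # smallest-domain-first
    if branching == "kway":              # one child per value
        children = [{**domains, var: [val]} for val in domains[var]]
    elif branching == "2way":            # var = val  vs  var != val
        val = domains[var][0]
        children = [{**domains, var: [val]},
                    {**domains, var: domains[var][1:]}]
    else:                                # domain splitting: halve the domain
        mid = len(domains[var]) // 2
        children = [{**domains, var: domains[var][:mid]},
                    {**domains, var: domains[var][mid:]}]
    for child in children:
        sol = solve(child, constraints, branching)
        if sol is not None:
            return sol
    return None

# 3-colour a triangle with each strategy; all reach a valid colouring.
ne = lambda a, b: a != b
cons = {("x", "y"): ne, ("y", "z"): ne, ("x", "z"): ne}
for strategy in ("kway", "2way", "split"):
    print(strategy, solve({v: [0, 1, 2] for v in "xyz"}, cons, strategy))
```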
|
38 |
3D Spatial Modeling of Stacked Containers based on Wireless Sensor Network : application to the physical internet / Modélisation 3D de l'arrangement d'un container basée sur des informations extraites d'un réseau de capteurs sans fil : application à l'internet physique
Tran-Dang, Hoa (26 June 2017)
The Physical Internet paradigm was introduced a few years ago to transform globally how physical objects are handled, stored and transported as part of a sustainable logistics. One of its important characteristics is the encapsulation of goods in standardized modular containers. Although the Physical Internet rationalizes transport, it generates more frequent handling, particularly within PI-hubs, where routing, unloading and (re)loading of containers require efficient organization and management. The multiplicity and diversity of the operations (automated or not) to be carried out simultaneously can only be handled efficiently if the physical system and the information system are perfectly synchronized. The proposals of this thesis address this problem and thereby contribute to the Physical Internet concept. They aim to obtain, in real time, the spatial distribution (or layout) of the PI-containers stacked inside a higher-level container, the so-called composite container. To do this, we exploit the intelligence and activeness of each PI-container, which is equipped with a wireless sensor node; the composition of a composite PI-container thus constitutes an ad-hoc network of sensor nodes. From the neighborhood relationships between these nodes, we show in this thesis that it is possible to construct the 3D spatial layout of the PI-containers and to check, at any time and any place, the effective compliance between the real composition and the data stored in the information system.
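A toy version of the layout-reconstruction idea: assuming each neighbour link between container nodes has already been classified with a direction (an assumption of this sketch; the thesis starts from raw wireless-sensor neighborhood data), a breadth-first traversal assigns consistent grid coordinates and flags contradictory links, which is exactly the kind of mismatch between physical reality and the information system that the approach is meant to detect.

```python
from collections import deque

def reconstruct_layout(links, origin):
    """Assign integer (x, y, z) slots to containers from direction-tagged
    neighbour links, e.g. ("A", "B", "right") meaning B is right of A.
    Unit-grid stacking and direction tags are assumptions of this sketch."""
    step = {"right": (1, 0, 0), "left": (-1, 0, 0),
            "front": (0, 1, 0), "back": (0, -1, 0),
            "above": (0, 0, 1), "below": (0, 0, -1)}
    adj = {}
    for a, b, d in links:
        dx, dy, dz = step[d]
        adj.setdefault(a, []).append((b, (dx, dy, dz)))
        adj.setdefault(b, []).append((a, (-dx, -dy, -dz)))
    pos, queue = {origin: (0, 0, 0)}, deque([origin])
    while queue:
        a = queue.popleft()
        x, y, z = pos[a]
        for b, (dx, dy, dz) in adj.get(a, []):
            p = (x + dx, y + dy, z + dz)
            if b not in pos:
                pos[b] = p
                queue.append(b)
            elif pos[b] != p:
                raise ValueError(f"inconsistent links around {b}")
    return pos

print(reconstruct_layout([("A", "B", "right"), ("A", "C", "above"),
                          ("B", "D", "above")], origin="A"))
```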
|
39 |
Integrality Gaps for Strong Linear Programming and Semidefinite Programming Relaxations
Georgiou, Konstantinos (17 February 2011)
The inapproximability of NP-hard combinatorial optimization problems lies at the heart of theoretical computer science. A negative result can be either conditional, where the starting point is a complexity assumption, or unconditional, where the inapproximability holds for a restricted model of computation.
Algorithms based on Linear Programming (LP) and Semidefinite Programming (SDP) relaxations are among the most prominent models of computation. The related and common measure of efficiency is the integrality gap, which sets the limitations of the associated algorithmic schemes. A number of systematic procedures, known as lift-and-project systems, have been proposed to improve the integrality gap of standard relaxations. These systems build strong hierarchies of either LP relaxations, such as the Lovász-Schrijver (LS) and the Sherali-Adams (SA) systems, or SDP relaxations, such as the Lovász-Schrijver SDP (LS+), the Sherali-Adams SDP (SA+) and the Lasserre (La) systems.
In this thesis we prove integrality-gap lower bounds for the aforementioned lift-and-project systems on a number of combinatorial optimization problems whose inapproximability is as yet unresolved. Given that lift-and-project systems produce the relaxations behind the best algorithms known for a series of combinatorial problems, the lower bounds can be read as strong evidence of the inapproximability of the corresponding optimization problems. From the results in the thesis we highlight the following:
For every ε > 0, the level-Ω(√(log n / log log n)) LS+ relaxation of the Vertex Cover polytope has integrality gap 2 - ε.
The integrality gap of the standard SDP for Vertex Cover remains 2 - o(1) even if all hypermetric inequalities are added to the relaxation. The resulting relaxations are incomparable to the SDP relaxations derived by the LS+ system. Finally, the addition of all ℓ1 inequalities eliminates all solutions not in the integral hull.
For every ε > 0, the level-Ω(√(log n / log log n)) SA relaxation of Vertex Cover has integrality gap 2 - ε. The integrality gap remains tight even for superconstant-level SA+ relaxations.
We prove a tight lower bound on the number of tightenings that the SA system needs in order to prove the Pigeonhole Principle. We also prove sublinear and linear rank bounds for the La and SA systems, respectively, for the Tseitin tautology.
Linear levels of the SA+ system treat highly unsatisfiable instances of fixed predicate-P constraint satisfaction problems over q-ary alphabets as fully satisfiable, when the satisfying assignments of the predicate P can be equipped with a balanced and pairwise independent distribution.
We study the performance of the Lasserre system on the cut polytope. When the input is the complete graph on 2d+1 vertices, we show that the integrality gap is at least 1 + 1/(4d(d+1)) for the level-d SDP relaxation.
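The kind of gap these hierarchies are designed to tighten can already be computed at the bottom of the hierarchy. The demo below (a sketch using scipy, not material from the thesis) solves the standard LP relaxation of Vertex Cover on a complete graph: the LP optimum is n/2, attained by the all-1/2 point, while the integral optimum is n - 1, so the ratio tends to 2, the value that the LS+ and SA lower bounds above show persists even at high levels of the hierarchies.

```python
import numpy as np
from scipy.optimize import linprog

def vertex_cover_lp(n):
    """Standard LP relaxation of Vertex Cover on the complete graph K_n:
    minimise sum(x) subject to x_u + x_v >= 1 per edge and 0 <= x <= 1."""
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
    A = np.zeros((len(edges), n))
    for i, (u, v) in enumerate(edges):
        A[i, u] = A[i, v] = -1.0            # encodes x_u + x_v >= 1
    res = linprog(np.ones(n), A_ub=A, b_ub=-np.ones(len(edges)),
                  bounds=[(0, 1)] * n, method="highs")
    return res.fun

n = 11
print("LP optimum:", vertex_cover_lp(n))    # n/2: the all-1/2 point
print("integral optimum:", n - 1)           # ratio tends to 2 as n grows
```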
|