181 |
Developing a Bottom Up Cost Calculation Model and Methodology for Thermal Storage Applications. Li, Thomas, January 2019.
Increasing energy storage capacity is one of the most important challenges to overcome in order to enable higher penetration of renewable energy in existing energy systems. Thermal storage is one category of energy storage that has been successfully demonstrated in a number of engineering projects and shows promising potential for the future. However, a technology cannot become widespread if it is not economically feasible and sustainable in the long run. Bottom-up cost analysis can be used to assess the economic viability of a technology; for newer technologies, a top-down cost calculation is not always possible because of the limited amount of available data. The aim of this thesis is to investigate best practices in performing bottom-up cost calculations and to propose a methodology that can be applied to bottom-up economic evaluations of thermal energy storage. To achieve this, two proven applications, molten salt storage for concentrated solar power and ice thermal storage for building cooling, were examined as the basis for state-of-the-art bottom-up calculation models. In the ice storage case, it was found that published models often follow a hybrid bottom-up/top-down approach, which limits a fully detailed cost analysis; because only a few cost elements are needed in their calculation, these are instead referred to as case studies. The constructed specialized models and case studies are then compared against external sources to validate the proposed economic analysis procedure; the numerical results show some discrepancies relative to those sources. Finally, a general bottom-up cost model with a detailed step-by-step procedure for performing a bottom-up calculation for thermal storage is proposed.
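To make the idea of a bottom-up estimate concrete, the short sketch below sums component costs and applies indirect-cost factors. All component names, quantities, unit costs and factors are illustrative assumptions, not figures from the thesis.

```python
# Minimal sketch of a bottom-up cost estimate for a thermal storage system.
# All components, quantities, unit costs and factors are hypothetical placeholders.

components = {
    # name: (quantity, unit, unit_cost_in_EUR)
    "storage_tank":     (2,     "unit",  150_000.0),
    "storage_medium":   (1_500, "tonne", 700.0),
    "heat_exchanger":   (1,     "unit",  80_000.0),
    "pumps_and_piping": (1,     "lot",   40_000.0),
}

installation_factor = 0.15   # assumed fraction of direct cost for installation
engineering_factor  = 0.10   # assumed fraction for engineering and management
contingency_factor  = 0.07   # assumed contingency

direct_cost = sum(qty * unit_cost for qty, _, unit_cost in components.values())
indirect_cost = direct_cost * (installation_factor + engineering_factor + contingency_factor)
total_capex = direct_cost + indirect_cost

storage_capacity_mwh = 500.0  # assumed nominal capacity
print(f"Direct cost:   {direct_cost:,.0f} EUR")
print(f"Indirect cost: {indirect_cost:,.0f} EUR")
print(f"Total CAPEX:   {total_capex:,.0f} EUR "
      f"({total_capex / storage_capacity_mwh:,.0f} EUR/MWh)")
```

A top-down estimate, by contrast, would start from the total cost of comparable installed systems and scale it, which is exactly what limited data makes difficult for newer technologies.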
|
182 |
A feasibility study on utility-scale solar integration in the Kingdom of Saudi Arabia. Krishnamoorthy, Barthram, 26 October 2010.
Owing to its vast fossil fuel wealth, Saudi Arabia is experiencing dramatic growth in both population and GDP, and with it a growing demand for water and energy. All of the electricity generated in the country is sourced from crude oil and natural gas; all natural gas production is used domestically, with no net imports or exports. Because of the many constraints on the natural gas supply, the generation mix is slowly shifting towards crude oil based power generation. This study assessed the viability of utility-scale solar integration into the Saudi Arabian electricity mix, both to relieve some of the demand pressure on natural gas consumption and to reduce greenhouse gas emissions. Parabolic trough concentrated solar power was chosen as the primary technology for utility-scale integration. Five scenarios were evaluated: a base case and 5%, 10%, 15%, and 20% solar integration in terms of installed capacity. Two sets of net present values were calculated for each scenario, the second assuming a projected increase in electricity prices. The natural gas and crude oil offset by the four solar integration scenarios was calculated using the base-case forecast of natural gas and crude oil consumption for power generation; as expected, consumption decreased as solar integration increased. The expected carbon dioxide offsets were also calculated for each scenario, and emissions decreased as solar integration increased. Finally, these analyses were used as criteria in a decision analysis based on the analytic hierarchy process. Depending on the weight the decision maker places on the chosen criteria, solar integration in the Kingdom of Saudi Arabia is achievable.
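A minimal sketch of the kind of net present value comparison described above is shown below; the discount rate, horizon and cash flows are illustrative assumptions, not figures from the study.

```python
# Sketch of a net present value (NPV) comparison between scenarios.
# Discount rate, horizon and cash flows are hypothetical placeholders.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Discount a series of yearly cash flows (year 0 first) to present value."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

rate = 0.08          # assumed discount rate
years = 25           # assumed project horizon

# Assumed yearly net cash flows (revenue minus cost) per scenario, in million USD.
scenarios = {
    "base_case":   [0.0] + [120.0] * years,
    "solar_5pct":  [-300.0] + [135.0] * years,   # upfront CAPEX, then fuel savings
    "solar_10pct": [-600.0] + [150.0] * years,
}

for name, flows in scenarios.items():
    print(f"{name:>11}: NPV = {npv(rate, flows):8.1f} MUSD")
```

Running the same comparison with escalating electricity prices simply means replacing the flat yearly cash flows with a growing series before discounting.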
|
183 |
Hardness of Constraint Satisfaction and Hypergraph Coloring: Constructions of Probabilistically Checkable Proofs with Perfect Completeness. Huang, Sangxia, January 2015.
A Probabilistically Checkable Proof (PCP) of a mathematical statement is a proof written in a special manner that allows for efficient probabilistic verification. The celebrated PCP Theorem states that for every family of statements in NP, there is a probabilistic verification procedure that checks the validity of a PCP proof by reading only 3 bits from it. This landmark theorem, and the works leading up to it, laid the foundation for many subsequent works in computational complexity theory, the most prominent among them being the study of inapproximability of combinatorial optimization problems. This thesis focuses on a broad class of combinatorial optimization problems called Constraint Satisfaction Problems (CSPs). In an instance of a CSP of arity k, we are given a set of variables taking values from some finite domain, and a set of constraints each involving a subset of at most k variables. The goal is to find an assignment that simultaneously satisfies as many constraints as possible. An alternative formulation that is commonly used is Gap-CSP, where the goal is to decide whether a CSP instance is satisfiable or far from satisfiable; the exact meaning of being far from satisfiable varies depending on the problem. We first study Boolean CSPs, where the domain of the variables is {0,1}. The main question we study is the hardness of distinguishing satisfiable Boolean CSP instances from those for which no assignment satisfies more than some fraction epsilon of the constraints. Intuitively, as the arity increases, the CSP gets more complex and thus the hardness parameter epsilon should decrease. We show that for Boolean CSPs of arity k, it is NP-hard to distinguish satisfiable instances from those that are at most 2^{Õ(k^{1/3})}/2^k-satisfiable. We also study coloring of graphs and hypergraphs. Given a graph or a hypergraph, a coloring is an assignment of colors to vertices such that every edge or hyperedge is non-monochromatic. The gap problem is to distinguish instances that are colorable with a small number of colors from those that require a large number of colors. For graphs, we prove that there exists a constant K_0 > 0 such that for any K >= K_0, it is NP-hard to distinguish K-colorable graphs from those that require 2^{Omega(K^{1/3})} colors. For hypergraphs, we prove that it is quasi-NP-hard to distinguish 2-colorable 8-uniform hypergraphs of size N from those that require 2^{(log N)^{1/4-o(1)}} colors. In terms of techniques, all these results are based on constructions of PCPs with perfect completeness, that is, PCPs where the probabilistic proof verification procedure always accepts a correct proof. Not only is this a very natural property for proofs, but it can also be an essential requirement in many applications. Constructing PCPs with perfect completeness for NP statements has always been particularly challenging due to limitations in techniques. Our improved hardness results build on and extend many of the current approaches. The Boolean CSP result and the graph coloring result were proved by adapting the Direct Sum of PCPs idea of Siu On Chan to the perfect completeness setting. The proof of hypergraph coloring hardness improves and simplifies the recent work by Khot and Saket, in which they proposed the notion of superposition complexity of CSPs.
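For context, a standard observation (not a claim from the thesis itself): any arity-k Boolean constraint with at least one accepting assignment is satisfied by a uniformly random assignment with probability at least 2^{-k}, so some assignment always satisfies a 2^{-k} fraction of such constraints. The hardness threshold above therefore sits only a 2^{Õ(k^{1/3})} factor above this trivial baseline. In LaTeX:

```latex
% Standard baseline, stated for context: a uniformly random assignment satisfies
% each arity-k constraint (with at least one accepting assignment) with
% probability at least 2^{-k}, so every such instance is 2^{-k}-satisfiable.
\[
  \mathbb{E}\!\left[\frac{\#\{\text{satisfied constraints}\}}{m}\right]
  \;=\; \frac{1}{m}\sum_{i=1}^{m}\Pr[C_i \text{ is satisfied}]
  \;\ge\; 2^{-k},
  \qquad
  \text{vs.\ the hardness threshold } \frac{2^{\tilde{O}(k^{1/3})}}{2^{k}}.
\]
```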
|
184 |
Techniques and tools for the verification of concurrent systems. Palikareva, Hristina, January 2012.
Model checking is an automatic formal verification technique for establishing the correctness of systems. It has been widely used in industry for analysing and verifying complex safety-critical systems in application domains such as avionics, medicine and computer security, where manual testing is infeasible and even minor errors could have dire consequences. In our increasingly parallelised world, concurrency has become pivotal and is seamlessly woven into programming paradigms; it remains, however, extremely challenging to model and to establish the correctness of intended behaviour. Tools for model checking concurrent systems face severe limitations due to scalability problems arising from the need to examine all possible interleavings (schedules) of the executions of parallel components. Moreover, concurrency poses additional challenges to model checking, giving rise to phenomena such as nondeterminism, deadlock and livelock. In this thesis we focus on adapting and developing novel model-checking techniques for concurrent systems in the setting of the process algebra CSP and its primary model checker FDR. CSP allows for compact modelling and precise analysis of event-based concurrency, grounded in synchronous message passing as the fundamental mechanism of inter-component communication. In particular, we investigate techniques based on symbolic model checking, static analysis and abstraction, all of them exploiting the compositionality inherent in CSP and aiming to increase the scale of systems that can be tractably analysed. Firstly, we investigate symbolic model-checking techniques based on Boolean satisfiability (SAT), which we adapt to the traces model of CSP. We tailor bounded model checking (BMC), which can be used for bug detection, and temporal k-induction, which aims at establishing the inductiveness of properties and is capable both of finding bugs and of establishing the correctness of systems. Secondly, we propose a static analysis framework for establishing livelock freedom of CSP processes, with lessons for other concurrent formalisms. As opposed to traditional exhaustive state-space exploration, our framework employs a system of rules on the syntax of a process to calculate a sound approximation of its fair/co-fair sets of events. The rules either safely classify a process as livelock-free or report inconclusiveness, thereby trading accuracy for speed. Finally, we develop a series of abstraction/refinement schemes for the traces, stable-failures and failures-divergences models of CSP and embed them into a fully automated and compositional CEGAR framework. For each of these techniques we present an implementation and an experimental evaluation on a set of CSP benchmarks.
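To illustrate the interleaving blow-up mentioned above, the sketch below enumerates the reachable states of a parallel composition of a few independent toy processes. The process definitions are invented for illustration and are unrelated to the thesis's benchmarks or to FDR's input language.

```python
# Sketch: explicit enumeration of the state space of n independent toy processes
# composed in parallel (interleaving semantics). Each process is a tiny cyclic
# state machine; the composite state is the tuple of local states. The reachable
# state count grows as the product of the local state counts, which is the
# scalability problem interleaving-based model checkers have to fight.
from itertools import product  # imported for completeness; not strictly needed below

def reachable_states(num_processes: int, local_states: int) -> set[tuple[int, ...]]:
    start = (0,) * num_processes
    frontier, seen = [start], {start}
    while frontier:
        state = frontier.pop()
        # In each composite state, any one process may take its next local step.
        for i in range(num_processes):
            nxt = list(state)
            nxt[i] = (nxt[i] + 1) % local_states
            nxt = tuple(nxt)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

for n in range(1, 6):
    print(f"{n} processes with 4 local states each: "
          f"{len(reachable_states(n, 4))} reachable composite states")
# Prints 4, 16, 64, 256, 1024: exponential in the number of components.
```

Symbolic techniques such as the SAT-based BMC and k-induction investigated in the thesis avoid building this set explicitly and instead reason about it through propositional formulas.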
|
185 |
The exchange: reprogramming vacant built landscapes to increase social equity and create identity. Pumphrey, Jared T., January 1900.
Master of Landscape Architecture / Department of Landscape Architecture/Regional and Community Planning / Blake Belanger / This master’s project and report examines the correlation between social inequity and vacancy to develop a phased revitalization strategy for Raytown, Missouri. The perception of vacant built landscapes causes people to interpret places as having no productive use (Corbin 2003). Vacant spaces appear void of opportunities and are fueled by a capitalist society where markets move toward the urban fringe in order to remain competitive (Fainstein 2010). Vacancy creates a cultural response that “erodes the local social fabric, [signifying] the ills of neglect, [and] communicating to people the futility of inner-city living” (Jakle and Wilson 1992, 175). As a result, people passing through a community dismiss these vacant spaces because what they see is a place of little value. The perception of vacancy can lead to severe social inequity as society’s affluent members move away from inner-city cores, and economic viability and overall quality of life begin to decrease.
Building on the Creating Sustainable Places Initiative for the Kansas City region and on planning efforts for redeveloping the currently unused Rock Island Rail Corridor, this project explores how vacant built landscapes within Raytown’s Central Business District can be reprogrammed to establish place identity. Through critical mapping, key equity dilemmas at the metropolitan level are brought forth to identify issues that can be addressed through corridor redevelopment in Raytown. Mapping vacancies in the Raytown CBD identifies current vacant parcels. Together with parcel size, the identification of vacant parcels indicates primary redevelopment sites that can readily support higher-density development in anticipation of a potential rail transit system.
Using a phased approach, temporary design solutions regain public interest in the community while working toward mixed-use neighborhoods, pedestrian-oriented streetscapes, and improved open space amenities at future build-out. Strategies at each phase provide opportunities for community gathering and living choices that accommodate a variety of people. Studying social inequity and vacancy gives landscape architecture professionals the opportunity to better understand this phenomenon and to promote community revitalization through the creation of welcoming places for all people.
|
186 |
Configuration automatique d’un solveur générique intégrant des techniques de décomposition arborescente pour la résolution de problèmes de satisfaction de contraintes / Automatic configuration of generic solver embedding tree-decomposition techniques for solving constraint satisfaction problems. Blet, Loïc, 30 September 2015.
Constraint programming integrates generic solving algorithms within declarative, constraint-based modelling languages: these languages allow us to describe combinatorial problems as a set of variables that have to take their values in given domains while satisfying constraints. Numerous real-life problems can be modelled in this way, for instance planning and scheduling problems; in the general case of finite domains, these problems are NP-complete.
We introduce in this work a generic solving algorithm parameterized by: (i) a strategy for exploring the search space, chosen among chronological backtracking, conflict-directed backjumping, conflict-directed backjumping with reordering, dynamic backtracking, decision repair, and backtracking with tree decomposition; (ii) a variable ordering heuristic, chosen between min-domain/ddeg and min-domain/wdeg; and (iii) a constraint propagation technique, chosen between forward checking and maintaining arc consistency. This algorithm thus yields 24 different configurations, some corresponding to already known algorithms and others being new. These 24 configurations were compared experimentally on a benchmark of more than a thousand instances, each configuration being executed several times to take into account the non-determinism of the executions, and a statistical test was used to compare performances. This experimental evaluation allowed us to better understand the complementarity of the different solving mechanisms, with a focus on the ability to exploit the structure of the instances to speed up solving. We identify 13 complementary configurations such that every instance of our benchmark is well solved by at least one of them. A second contribution of this work is a selector able to choose automatically the best configuration of our generic solver for each new instance to be solved: we describe each instance by a set of features and use machine learning techniques to build a model that chooses a configuration based on these features. Since learning is generally harder when there are many configurations to choose from, we state the problem of choosing the subset of configurations that may be picked as a set covering problem and compare two criteria: the first aims to maximize the number of instances solved by at least one configuration, and the second aims to maximize the number of instances for which a good configuration is available. We show experimentally that the second strategy generally obtains better results, and that the selector outperforms each of the 24 initial configurations.
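A minimal sketch of the set-covering view of configuration selection is given below; the configuration names, the per-instance results, and the greedy heuristic are illustrative assumptions rather than the thesis's actual benchmark or algorithm.

```python
# Sketch: choosing a small subset of solver configurations as a set covering
# problem, solved here with the classic greedy heuristic. The "solved_by" data
# is made up for illustration.

solved_by = {
    # configuration name -> set of benchmark instances it solves well
    "CBJ+wdeg+MAC": {"i1", "i2", "i3", "i7"},
    "DBT+ddeg+FC":  {"i3", "i4"},
    "BTD+wdeg+MAC": {"i5", "i6", "i7", "i8"},
    "DR+wdeg+FC":   {"i2", "i4", "i9"},
}
universe = set().union(*solved_by.values())

def greedy_cover(universe: set[str], families: dict[str, set[str]]) -> list[str]:
    """Greedily pick configurations until every instance is covered."""
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(families, key=lambda c: len(families[c] & uncovered))
        if not families[best] & uncovered:
            break  # remaining instances cannot be covered by any configuration
        chosen.append(best)
        uncovered -= families[best]
    return chosen

print("Selected configurations:", greedy_cover(universe, solved_by))
```

The second selection criterion described in the abstract would replace "solved by at least one chosen configuration" with "has a good (near-best) configuration among those chosen", changing only the sets used as input to the cover.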
|
187 |
Une approche déclarative pour la génération de modèles / A Declarative Approach for Model Generation. Ferdjoukh, Adel, 20 October 2016.
Having data available is of primary importance in many different fields: data can be used to test and to validate approaches, algorithms and concepts. Unfortunately, such data is rarely available, is costly to obtain, or does not meet certain quality requirements, which makes it unusable in many situations. An automated data generator is a good way to obtain valid, relevant and diverse data of different sizes quickly and easily. In this thesis, we propose a novel and complete model-driven approach, based on constraint programming, for automated data generation.
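The sketch below illustrates the general idea of generating valid data by enumerating only assignments that satisfy constraints; the tiny metamodel (a library with books and members) and its constraints are invented for illustration and do not come from the thesis, and a real generator would delegate the search to a CSP solver rather than brute-force enumeration.

```python
# Sketch: generating small, valid "model" instances by constraint filtering.
# The metamodel (libraries, books, members) and the constraints are hypothetical.
from itertools import product

def generate_instances(max_books: int, max_members: int):
    for n_books, n_members in product(range(max_books + 1), range(max_members + 1)):
        # Each book is borrowed by member index b, or -1 if it is on the shelf.
        for borrowing in product(range(-1, n_members), repeat=n_books):
            # Constraint 1: a library must own at least one book.
            if n_books == 0:
                continue
            # Constraint 2: a member may borrow at most two books at a time.
            if any(borrowing.count(m) > 2 for m in range(n_members)):
                continue
            yield {"books": n_books, "members": n_members, "borrowing": borrowing}

# Print a few generated instances of increasing size.
for _, inst in zip(range(5), generate_instances(max_books=2, max_members=2)):
    print(inst)
```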
|
188 |
Function and Evolution of Putative Odorant Carriers in the Honey Bee (Apis mellifera). Foret, Sylvain (sylvain.foret@anu.edu.au), January 2007.
The remarkable olfactory power of insect species is thought to be generated by a combinatorial action of G-protein-coupled olfactory receptors (ORs) and olfactory carriers. Two such carrier gene families are found in insects: the odorant binding proteins (OBPs) and the chemosensory proteins (CSPs). In olfactory sensilla, OBPs and CSPs are believed to deliver hydrophobic air-borne molecules to ORs, but their expression in non-olfactory tissues suggests that they also may function as general carriers in other developmental and physiological processes.
Bioinformatics and experimental approaches were used to characterise the OBP and CSP gene families in a highly social insect, the western honey bee (Apis mellifera). Comparison with other insects reveals that the honey bee has the smallest set of these genes, consisting of only 21 OBPs and 6 CSPs. These numbers stand in stark contrast to the 66 OBPs and 7 CSPs in the mosquito Anopheles gambiae and the 46 OBPs and 20 CSPs in the beetle Tribolium castaneum. The genes belonging to both families are often organised in clusters and evolve by lineage-specific expansions. Positive selection has been found to play a role in generating greater sequence diversification in the OBP family, in contrast to the CSP gene family, which is more conserved, especially in the binding pocket. Expression profiling under a wide range of conditions shows that, in the honey bee, only a minority of these genes are antenna-specific. The remaining genes are either expressed ubiquitously or tightly regulated in specialized tissues or during development. These findings support the view that OBPs and CSPs are not restricted to olfaction and are likely to be involved in broader physiological functions.
Finally, the detailed expression study and the functional characterization of a member of the CSP family, uth (unable-to-hatch), are reported. This gene is expressed in a maternal-zygotic fashion and is restricted to the egg and embryo. Blocking the zygotic expression of uth with double-stranded RNA causes abnormalities in all body parts where this gene is highly expressed.
The treated embryos are 'unable to hatch' and cannot progress to the larval stages. Our findings reveal a novel, essential role for this gene family and suggest that uth is an ectodermal gene involved in embryonic cuticle formation.
|
189 |
Modularité et symétrie pour les systèmes répartis; application au langage CSP. Bougé, Luc, 30 March 1987.
The evaluation of distributed systems is usually based on numerical criteria concerning the amount of information exchanged during computations. We show that these criteria are not sufficient to assess the degree of distribution of the usual distributed algorithms; qualitative criteria, specific to distribution, are needed.

Modularity expresses that the processors of the system initially have no global knowledge of the network in which they are embedded. Symmetry expresses that processors occupying topologically equivalent positions in the network also play equivalent roles in the computations.

We define these properties within the framework of Hoare's language CSP of communicating sequential processes. We propose a syntactic definition of modularity. We show that a syntactic definition of symmetry is not sufficient and propose a semantic one; this definition implicitly refers to a partially ordered semantics of CSP.

We study the existence of modular and symmetric broadcast and election algorithms in networks of communicating processes, and obtain numerous positive and negative results. In particular, this leads to a precise assessment of the expressive power of CSP. We show, for example, that there is no implementation of output guards by means of input guards only, if symmetry must be preserved.

Finally, these results are used to propose a modular, symmetric and bounded solution to the distributed termination detection problem posed by Francez.
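To give a flavour of the symmetry argument, the sketch below simulates identical deterministic processes on an anonymous ring in synchronous rounds: because they start in the same state and run the same code, their states stay identical forever, so none can ever single itself out as a leader. The toy transition function is an assumption made for illustration and is unrelated to the thesis's CSP formalization.

```python
# Sketch: why deterministic, fully symmetric processes on an anonymous ring
# cannot elect a leader. All processes start in the same state and run the same
# deterministic step function; by induction on rounds, their states remain
# identical, so no process can ever become the unique "leader".

def step(state: int, received: int) -> int:
    # Any deterministic function of (own state, message from left neighbour).
    # This particular formula is an arbitrary illustrative choice.
    return (3 * state + received + 1) % 97

def simulate(n_processes: int, rounds: int) -> list[int]:
    states = [0] * n_processes          # identical initial states (anonymity)
    for _ in range(rounds):
        # Synchronous round: everyone sends its state to its right neighbour.
        received = [states[(i - 1) % n_processes] for i in range(n_processes)]
        states = [step(s, r) for s, r in zip(states, received)]
    return states

final = simulate(n_processes=6, rounds=10)
print(final)                 # six identical values
print(len(set(final)) == 1)  # True: the symmetry is never broken
```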
|
190 |
Contraintes globales et heuristiques de recherche pour les CSPs continus. Batnini, Heikel, 01 December 2005.
Systems of Euclidean distance constraints arise in many application domains, such as robotics, molecular biochemistry and CAD. Techniques from constraint programming solve these problems by combining a bisection technique with domain reduction methods (local or partial consistencies). These consistencies, however, are systematic methods that do not take into account the specific properties of the constraints.

In this thesis we present two approaches for designing a global constraint for solving systems of distance constraints. The first approach is based on the inference of redundant constraints derived directly from geometric properties of the system. The second approach is based on a global filtering algorithm dedicated to systems of distance equations. This work led to the design of a domain decomposition technique that exploits the particular structure of distance constraints. Finally, we present a generalization of this search heuristic to arbitrary numerical constraints.
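As a rough illustration of the bisection-plus-pruning scheme mentioned above, the sketch below locates the points satisfying two distance constraints by recursively splitting interval boxes and discarding boxes whose interval evaluation rules the constraints out. The anchors, distances and tolerance are invented for illustration, and the pruning is plain interval evaluation rather than the thesis's global filtering or decomposition techniques.

```python
# Sketch: branch-and-prune over interval boxes for a tiny system of two
# Euclidean distance constraints on a point (x, y): |P - A| = 5 and |P - B| = 5,
# with A = (0, 0), B = (6, 0). All numbers are illustrative. Pruning evaluates
# (x - ax)^2 + (y - ay)^2 - d^2 over the box: if 0 is not in the resulting
# interval, the box cannot contain a solution.

def sq_interval(lo: float, hi: float) -> tuple[float, float]:
    """Interval extension of t -> t^2."""
    if lo <= 0.0 <= hi:
        return 0.0, max(lo * lo, hi * hi)
    return min(lo * lo, hi * hi), max(lo * lo, hi * hi)

def residual(box, anchor, d):
    (xl, xh), (yl, yh) = box
    ax, ay = anchor
    sx = sq_interval(xl - ax, xh - ax)
    sy = sq_interval(yl - ay, yh - ay)
    return sx[0] + sy[0] - d * d, sx[1] + sy[1] - d * d

def solve(box, constraints, eps=1e-3, out=None):
    out = [] if out is None else out
    for anchor, d in constraints:
        lo, hi = residual(box, anchor, d)
        if lo > 0.0 or hi < 0.0:
            return out                      # prune: no solution in this box
    (xl, xh), (yl, yh) = box
    if xh - xl < eps and yh - yl < eps:
        out.append(box)                     # small enough: report the box
        return out
    if xh - xl >= yh - yl:                  # bisect the widest dimension
        xm = 0.5 * (xl + xh)
        solve(((xl, xm), (yl, yh)), constraints, eps, out)
        solve(((xm, xh), (yl, yh)), constraints, eps, out)
    else:
        ym = 0.5 * (yl + yh)
        solve(((xl, xh), (yl, ym)), constraints, eps, out)
        solve(((xl, xh), (ym, yh)), constraints, eps, out)
    return out

constraints = [((0.0, 0.0), 5.0), ((6.0, 0.0), 5.0)]   # solutions: (3, 4) and (3, -4)
boxes = solve(((-10.0, 10.0), (-10.0, 10.0)), constraints)
print(len(boxes), "small boxes found, e.g.", boxes[0])
```

A global filtering algorithm, as proposed in the thesis, would contract each box using all distance equations at once instead of testing them one at a time, which typically prunes far more aggressively than this naive check.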
|