131

Time-Cost Optimization of Large-Scale Construction Projects Using Constraint Programming

Golzarpoor, Behrooz January 2012
Optimization of time and cost in construction projects has been the subject of extensive research since the development of the Critical Path Method (CPM). Many researchers have investigated various versions of the well-known Time-Cost Trade-off (TCT) problem, including the linear, convex, concave, and discrete (DTCT) versions. Traditional methods in the literature for optimizing the time and cost of construction projects range from mathematical methods to evolutionary ones, such as genetic algorithms, particle swarm optimization, ant colony optimization, and leapfrog optimization. However, none of the existing studies has dealt with the optimization of large-scale projects, in which even a small saving would be significant: traditional approaches have all been applied to projects of fewer than 100 activities, far fewer than exist in real-world construction projects. The objective of this study is to utilize recent developments in computation technology and novel optimization techniques such as Constraint Programming (CP) to overcome the current limitations in solving large-scale DTCT problems. In the first part of this research, an Excel-based TCT model was developed to investigate the performance of traditional optimization methods, such as mathematical programming and genetic algorithms, in solving large TCT problems. The results of several experiments confirm the inefficiency of traditional methods for optimizing large TCT problems. Subsequently, a TCT model was developed in the Optimization Programming Language (OPL) to implement the Constraint Programming technique. The CP Optimizer of IBM ILOG Optimization Studio was used to solve the model and successfully optimize several projects, ranging from a small project of 18 activities to very large projects consisting of more than 10,000 activities. Constraint programming proved to be very efficient in solving large-scale TCT problems, generating substantially better results in terms of both solution quality and processing speed. While traditional optimization methods have been used to optimize projects consisting of fewer than one hundred activities, constraint programming demonstrated its capability of solving TCT problems comprising thousands of activities. As such, the developed model represents a significant improvement in the optimization of time and cost of large-scale construction projects and can greatly enhance the level of planning and control in such projects.
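The thesis's OPL/CP Optimizer model is not reproduced here, but the discrete time-cost trade-off structure it describes can be sketched with a generic CP solver. The sketch below uses Google OR-Tools CP-SAT (an assumed solver, not the author's toolchain) on hypothetical activity data: each activity selects one execution mode (a duration/cost pair), precedence constraints link activities, and total cost is minimized subject to a project deadline.

```python
from ortools.sat.python import cp_model

# Hypothetical DTCT instance: each activity has alternative (duration, cost) modes.
modes = {
    'A': [(4, 100), (3, 160), (2, 250)],
    'B': [(6, 200), (4, 320)],
    'C': [(3, 80), (2, 140)],
}
precedences = [('A', 'B'), ('A', 'C')]  # B and C start after A finishes
deadline = 8

model = cp_model.CpModel()
start, end, cost = {}, {}, {}
for act, opts in modes.items():
    start[act] = model.NewIntVar(0, deadline, f'start_{act}')
    end[act] = model.NewIntVar(0, deadline, f'end_{act}')  # bounded by the deadline
    dur = model.NewIntVar(min(d for d, _ in opts), max(d for d, _ in opts), f'dur_{act}')
    cost[act] = model.NewIntVar(0, max(c for _, c in opts), f'cost_{act}')
    model.Add(end[act] == start[act] + dur)
    # Exactly one mode is selected; it fixes both duration and cost.
    lits = []
    for k, (d, c) in enumerate(opts):
        b = model.NewBoolVar(f'{act}_mode{k}')
        model.Add(dur == d).OnlyEnforceIf(b)
        model.Add(cost[act] == c).OnlyEnforceIf(b)
        lits.append(b)
    model.Add(sum(lits) == 1)

for before, after in precedences:
    model.Add(start[after] >= end[before])

model.Minimize(sum(cost.values()))
solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print(solver.ObjectiveValue())  # minimal total cost meeting the deadline
```

Scaling this structure to thousands of activities is exactly where, per the abstract, CP-based models outperformed the traditional formulations.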
132

Planejamento integrado da cadeia de suprimentos da indústria do petróleo baseado em agentes holônicos. / Holonic agents-based integrated planning of the oil industry supply chain.

Fernando José de Moura Marcellino 24 May 2013
The oil industry is one of the areas that can benefit most from improved efficiency in supply chain management. However, the dynamic behavior of such chains is too complex to be modeled analytically. On the other hand, these chains share several intrinsic characteristics with multiagent systems, which offer the flexibility needed to model the complexities and dynamics of real supply chains without overly simplistic assumptions. As the supply chain management problem has a recursive structure, it becomes even more convenient to use a model based on holonic agents, which exhibit a fractal-like structure. Furthermore, the type of relationship between the entities in the chain and the need for global optimization suggest modeling their interactions in the form of constraints. For this reason, this thesis proposes a distributed optimization model through the definition of a new problem called the Holonic Constraint Optimization Problem (HCOP), which is based on the concepts of the Distributed Constraint Optimization Problem (DCOP) and holonic agents. In addition, a meta-algorithm based on the DTREE algorithm was developed for solving this type of problem, into which several available centralized optimization algorithms can be embedded and integrated so as to obtain the most suitable configuration for each case. A typical supply chain of the petroleum industry was then modeled as an HCOP, and a prototype implementing the proposed meta-algorithm was developed in an environment that integrates production and logistics optimization systems representative of real situations. Finally, experiments were performed on a case study from the company PETROBRAS, which allowed the feasibility of the model to be verified and its advantages over conventional approaches to be demonstrated.
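The HCOP formalism and the DTREE-based meta-algorithm are beyond a short sketch, but the core idea, holons solving local subproblems coordinated through shared constraint variables, can be illustrated naively. In the pure-Python toy below (all data hypothetical), two holons, a refinery and a distribution network, each evaluate their local cost for every value of a shared interface variable (the transfer volume), and the value with the best combined cost is selected; the real meta-algorithm distributes this coordination instead of enumerating it centrally.

```python
# Toy coordination between two "holons" over one shared interface variable.
# Hypothetical cost functions; a real HCOP would nest holons recursively and
# embed dedicated optimization algorithms in each subproblem.

def refinery_cost(transfer):
    # Local subproblem: production cost grows with the volume promised.
    return 5 * transfer + (transfer - 3) ** 2

def distribution_cost(transfer):
    # Local subproblem: unmet demand of 6 units is penalized.
    return 4 * max(0, 6 - transfer) + 2 * transfer

domain = range(0, 9)  # feasible transfer volumes (the shared variable's domain)
best = min(domain, key=lambda t: refinery_cost(t) + distribution_cost(t))
print(best, refinery_cost(best) + distribution_cost(best))
```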
133

Constrained clustering by constraint programming / Classification non supervisée sous contrainte utilisateurs par la programmation par contraintes

Duong, Khanh-Chuong 10 December 2014
Cluster analysis is an important task in Data Mining, with hundreds of different approaches in the literature. Over the last decade, cluster analysis has been extended to constrained clustering, also called semi-supervised clustering, so as to integrate prior knowledge about the data into clustering algorithms. Different types of user constraints can be considered, bearing either on the clusters or on the instances. In this dissertation, we explore Constraint Programming (CP) for solving constrained clustering tasks. The main principles of CP are: (1) users declaratively specify the problem as a Constraint Satisfaction Problem; (2) solvers find solutions by constraint propagation and search. Relying on CP has two main advantages: declarativity, which makes it easy to add new constraints, and the ability to find an optimal solution satisfying all the constraints (when one exists). We propose two CP-based models to address constrained clustering tasks. The models are general and flexible: they support must-link and cannot-link instance-level constraints as well as different cluster-level constraints, and they allow users to choose among different optimization criteria. Various aspects have been studied to improve their efficiency. Experiments on a variety of classical datasets show that our models are competitive with existing exact approaches. We show that our models can easily be embedded in a more general process, and we illustrate this on the problem of finding the Pareto front of a bi-criterion constrained clustering problem.
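As a minimal sketch of the modeling idea (not the dissertation's actual models), the OR-Tools CP-SAT program below, on made-up points, uses cluster-assignment variables, must-link and cannot-link constraints, and minimizes the maximum cluster diameter as one possible optimization criterion.

```python
from ortools.sat.python import cp_model

# Hypothetical instance: pairwise distances, 2 clusters, user constraints.
dist = [
    [0, 2, 9, 8],
    [2, 0, 7, 9],
    [9, 7, 0, 1],
    [8, 9, 1, 0],
]
n, k = 4, 2
must_link = [(0, 1)]
cannot_link = [(0, 2)]

model = cp_model.CpModel()
c = [model.NewIntVar(0, k - 1, f'c{i}') for i in range(n)]
model.Add(c[0] == 0)  # symmetry breaking: fix the first point's cluster

for i, j in must_link:
    model.Add(c[i] == c[j])
for i, j in cannot_link:
    model.Add(c[i] != c[j])

# Minimize the maximum diameter: whenever two points share a cluster,
# the diameter must be at least their distance.
diameter = model.NewIntVar(0, max(map(max, dist)), 'diameter')
for i in range(n):
    for j in range(i + 1, n):
        same = model.NewBoolVar(f'same{i}_{j}')
        model.Add(c[i] == c[j]).OnlyEnforceIf(same)
        model.Add(c[i] != c[j]).OnlyEnforceIf(same.Not())
        model.Add(diameter >= dist[i][j]).OnlyEnforceIf(same)
model.Minimize(diameter)

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print([solver.Value(v) for v in c], solver.Value(diameter))
```

The declarativity claimed in the abstract shows up directly: adding a new user constraint is one more `model.Add(...)` line, with no change to the search procedure.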
134

Optimisation des plans de test des charges utiles des satellites de télécommunication / Optimisation of telecommunication satellite payload test plans

Maillet, Caroline 25 April 2012
The validation of telecommunication satellite payloads requires operations that are expensive in terms of time and manpower, and this cost is constantly increasing as payloads become more and more complex. It is therefore crucial for Astrium to optimize the testing operations. The objective of this CIFRE thesis, conducted in collaboration between Astrium and Onera, is to develop a software suite to help generate test plans for the payloads. The test plan generation problem was modeled as a directed graph with states, and its NP-completeness was proven. Mathematical models were built using integer linear programming and constraint programming with a view to solving the problem with generic solvers. However, these generic solvers ran out of memory because of the very large size of the instances to be handled. This led us to develop a specialized solver based on tree search, with dedicated mechanisms for variable and value selection, constraint propagation, bound computation, backjumping, learning, and restarting. A specialized solver based on local search was developed in parallel. The results obtained by these different solvers under different settings were compared.
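The Astrium/Onera solvers themselves are proprietary, but the idea of casting test-plan generation as ordering in a directed graph with state-change costs can be loosely illustrated. The hypothetical CP-SAT model below sequences test operations to minimize total reconfiguration cost, using a circuit constraint with a dummy depot node to close the tour.

```python
from ortools.sat.python import cp_model

# Hypothetical reconfiguration costs between payload test operations;
# node 0 is a dummy depot that turns an open sequence into a circuit.
cost = [
    [0, 0, 0, 0],  # depot transitions are free
    [0, 0, 5, 9],
    [0, 5, 0, 2],
    [0, 9, 2, 0],
]
n = len(cost)

model = cp_model.CpModel()
arcs, total = [], []
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        lit = model.NewBoolVar(f'arc_{i}_{j}')
        arcs.append((i, j, lit))
        total.append(cost[i][j] * lit)
model.AddCircuit(arcs)  # one tour visiting every operation exactly once
model.Minimize(sum(total))

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print(solver.ObjectiveValue())  # minimal total reconfiguration cost
```

On instances the size of a real payload, generic models like this one hit the memory wall described in the abstract, which is what motivated the specialized tree-search and local-search solvers.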
135

Une approche basée sur le modèle de couverture d'usages pour l'évaluation de la conception d'une famille de produits / A usage coverage based approach for assessing product family design

Wang, Jiliang 30 January 2012
Adopting a utilitarian consumer viewpoint on certain service-oriented goods, we first contribute a model of the usage contexts that a product should cover as fully as possible. The model leads to a tighter integration of marketing and design engineering analyses, resulting in a more market-oriented optimization of a parameterized product or a better sampling of a product family. We propose a series of indices that reveal how well a dimensioned product or a given product family matches a target usage space that must be covered in whole or in part, but in a sufficiently dominant way compared to competing products. First, the Usage Coverage Index (UCI) for a single product is introduced by mapping the product against a set of representative users defined by their expected usages. On that basis, the UCI for a product family is constructed to evaluate the composition of the family and the redundancy of its products. The advantages over traditional demand estimation in marketing research are a reduced complexity of surveys and data analysis, and the ability to assess the competitiveness of an innovative offer without requiring any feedback from the market. We test our proposals on the redesign of a family of jigsaws. The proposed approach makes it possible to evaluate the adaptability of a scale-based product family to diverse usage context scenarios in a target market. Designers can rely on the results to filter out redundant products within a family, and scale-based product configurations can be rapidly simulated and compared so as to arrive at a minimal, well-sampled product family.
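As a hedged, simplified reading of the UCI idea (all data invented, not the thesis's actual formulation): a product covers a usage scenario if its capabilities meet every requirement; a single-product UCI is the covered fraction of the target usage space; and a family's redundancy counts scenarios covered by more than one product.

```python
# Simplified Usage Coverage Index (UCI) computation on hypothetical data.
# A usage scenario is covered if the product meets all of its requirements.

products = {
    'jigsaw_S': {'cut_depth_mm': 55, 'power_w': 400},
    'jigsaw_L': {'cut_depth_mm': 85, 'power_w': 700},
}
usage_space = [  # representative expected usages
    {'cut_depth_mm': 40, 'power_w': 350},
    {'cut_depth_mm': 60, 'power_w': 500},
    {'cut_depth_mm': 80, 'power_w': 650},
]

def covers(product, usage):
    return all(product[req] >= val for req, val in usage.items())

def uci(product):
    return sum(covers(product, u) for u in usage_space) / len(usage_space)

family_uci = sum(any(covers(p, u) for p in products.values())
                 for u in usage_space) / len(usage_space)
redundancy = sum(sum(covers(p, u) for p in products.values()) > 1
                 for u in usage_space) / len(usage_space)
print({name: uci(p) for name, p in products.items()}, family_uci, redundancy)
```

A product whose removal leaves the family UCI unchanged is redundant in exactly the sense the abstract describes.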
136

Programação por restrições aplicada a problemas de rearranjo de genomas / Constraint programming applied to genome rearrangement problems

Iizuka, Victor de Abreu, 1987- 21 August 2018
Advisor: Zanoni Dias / Dissertation (Master's) - Universidade Estadual de Campinas, Instituto de Computação
Darwin's theory of natural selection states that present-day living beings descend from ancestors and that, over the course of evolution, genetic mutations gave rise to different species. Many mutations are point mutations that modify the DNA sequence, which may prevent the information from being expressed or may express it in a different way. Sequence comparison is the most common method of identifying point mutations and is one of the most studied problems in Computational Biology. Genome rearrangement aims instead to find the minimum number of operations that transform one genome into another. These operations can be, for example, reversals, transpositions, fissions, and fusions. A notion of distance can be defined for these events: the reversal distance is the minimum number of reversals that transform one genome into another [9], and the transposition distance is the minimum number of transpositions that transform one genome into another [10]. We deal with the cases in which reversal and transposition events occur separately, as well as the cases in which both events occur simultaneously, aiming to find the exact value of the distance. We created Constraint Programming models for sorting by reversals and for sorting by reversals and transpositions, following the research line used by Dias and Dias [16]. We present Constraint Programming models for sorting by reversals, sorting by transpositions, and sorting by reversals and transpositions, based on the theory of Constraint Satisfaction Problems and of Constraint Optimization Problems. We compare them with the Constraint Programming model for sorting by transpositions described by Dias and Dias [16] and with the Integer Linear Programming formulations for sorting by reversals, sorting by transpositions, and sorting by reversals and transpositions described by Dias and Souza [17].
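The dissertation's CP and ILP models are not reproduced here; as a reference point for the objective they optimize, the brute-force breadth-first search below (plain Python, usable only for very small permutations) computes the exact reversal distance that the sorting-by-reversals models minimize.

```python
from collections import deque

def reversal_distance(perm):
    """Exact minimum number of reversals sorting perm, found by BFS.

    The state space is exponential, so this is only a reference for the
    exact distances that CP/ILP models compute on small instances.
    """
    target = tuple(sorted(perm))
    start = tuple(perm)
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        state, d = queue.popleft()
        if state == target:
            return d
        n = len(state)
        for i in range(n):
            for j in range(i + 1, n):
                # Apply the reversal of the segment [i, j].
                nxt = state[:i] + state[i:j + 1][::-1] + state[j + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))

print(reversal_distance([3, 1, 2, 4]))  # 2 reversals suffice here
```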
137

Combiner la programmation par contraintes et l’apprentissage machine pour construire un modèle éco-énergétique pour petits et moyens data centers / Combining constraint programming and machine learning to come up with an energy aware model for small/medium size data centers

Madi wamba, Gilles 27 October 2017
Over the last decade, cloud computing technologies have grown considerably, translating into a surge in data center power consumption. The magnitude of the problem has motivated numerous research studies on static or dynamic solutions for reducing the overall energy consumption of a data center. The aim of this thesis is to integrate renewable energy sources into dynamic energy optimization models for a data center. To do so, we use constraint programming as well as machine learning techniques. First, we propose a global constraint on task intersections that takes into account a resource with variable cost. Second, we propose two learning models, one for predicting the workload of a data center and one for generating such workload curves. Finally, we formalize the green energy aware scheduling problem (GEASP) and propose a global constraint programming model, together with a search heuristic to solve it efficiently. The proposed model integrates the various aspects inherent to the dynamic planning problem in a data center: heterogeneous physical machines, various application types (i.e., interactive applications and batch applications), the operations and energy costs of switching physical machines on and off, interruption and resumption of batch applications, CPU and RAM resource consumption of applications, task migration and the energy costs related to migrations, prediction of green energy availability, and the variable energy consumption of physical machines.
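A rough sketch, not the thesis's global constraint: using OR-Tools CP-SAT (assumed solver, hypothetical data), the tasks below are interval variables sharing a power capacity, and each occupied time slot is billed at that slot's energy price, approximating scheduling against a variable-cost (e.g. partly green) energy supply.

```python
from ortools.sat.python import cp_model

# Hypothetical per-slot energy price, e.g. cheaper when solar production
# is forecast to be high.
price = [3, 1, 1, 4, 2, 5, 1, 2]
horizon = len(price)
dur = [2, 3, 1]    # task durations (in slots)
power = [2, 1, 3]  # power drawn per slot while a task runs

model = cp_model.CpModel()
starts = [model.NewIntVar(0, horizon - d, f'start{i}') for i, d in enumerate(dur)]
intervals = [model.NewIntervalVar(starts[i], dur[i], starts[i] + dur[i], f'iv{i}')
             for i in range(len(dur))]
model.AddCumulative(intervals, power, 4)  # shared power capacity of 4

cost_terms = []
for i in range(len(dur)):
    for k in range(dur[i]):
        slot = model.NewIntVar(0, horizon - 1, f'slot{i}_{k}')
        model.Add(slot == starts[i] + k)
        c = model.NewIntVar(0, max(price), f'c{i}_{k}')
        model.AddElement(slot, price, c)  # price paid in that occupied slot
        cost_terms.append(power[i] * c)
model.Minimize(sum(cost_terms))

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([solver.Value(s) for s in starts], solver.ObjectiveValue())
```

The optimum shifts power-hungry tasks into cheap (green) slots, which is the behavior the GEASP model formalizes at data-center scale.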
138

Parallélisations de méthodes de programmation par contraintes / Parallelizations of constraint programming methods

Menouer, Tarek 26 June 2015
In the context of the PAJERO project, we propose in this thesis an external parallelization of a Constraint Programming (CP) solver, using both search parallelization and portfolio parallelization, in order to solve constraint satisfaction and optimization problems and thereby improve solving performance. Our search parallelization is adapted for deterministic and non-deterministic executions, according to the needs of the user. The principle is to partition, on demand, the unique search tree generated by a single search strategy into a set of sub-trees, and then assign each sub-tree to one computing core. A search strategy here means an algorithm that decides, at each node of the search tree, which variable is selected to be assigned, and that also decides the scheduling of the search. In CP, several search strategies exist, and each may be better than the others for a specific problem; the difficulty lies in choosing the right one. To benefit from the variety of strategies and the availability of computational resources, another parallelization is used, called portfolio parallelization. Its principle is to execute N search strategies in parallel; the first strategy to find a solution stops the others. The novelty of our work in the portfolio context is to adapt the scheduling of the N strategies so as to favour the most promising strategy, the one most likely to find a solution first, by giving it more cores than the others. The promising strategy is selected using one of two methods: either an estimation function that selects the strategy with the smallest search tree, or a learning algorithm that automatically determines the number of cores to allocate to each strategy based on previous runs on already-solved instances. In order to schedule several CP applications, we also propose a new resource allocation system based on a scheduling strategy combined with an economic model; the applications are solved with parallel solvers on a cloud computing infrastructure. The originality of this allocation system is that it automatically determines the number of resources to assign to each application according to the economic class of the user. The performance obtained with our parallelization methods is illustrated by solving constraint problems with the Google OR-Tools solver on top of our parallel Bobpp framework.
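This is not the Bobpp framework, but a minimal portfolio sketch in Python: several workers solve the same toy CP-SAT model with different random seeds (standing in for different search strategies), and the first completed result is kept.

```python
import concurrent.futures as cf
from ortools.sat.python import cp_model

def solve(seed):
    # Each portfolio member searches the same toy model differently.
    model = cp_model.CpModel()
    x = [model.NewIntVar(0, 9, f'x{i}') for i in range(8)]
    model.AddAllDifferent(x)
    model.Maximize(sum((i + 1) * x[i] for i in range(8)))
    solver = cp_model.CpSolver()
    solver.parameters.random_seed = seed
    solver.parameters.num_search_workers = 1  # one core per strategy
    solver.Solve(model)
    return seed, solver.ObjectiveValue()

if __name__ == '__main__':
    with cf.ProcessPoolExecutor(max_workers=4) as ex:
        futures = [ex.submit(solve, seed) for seed in range(4)]
        # The first finished member "wins"; a real portfolio would cancel
        # the others, and the thesis's adaptive variant would instead give
        # extra cores to the member predicted to finish first.
        seed, value = next(cf.as_completed(futures)).result()
        print(f'strategy {seed} finished first with objective {value}')
```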
139

La substituabilité et la cohérence de tuples pour les réseaux de contraintes pondérées / The substitutability and the tuples consistency for weighted constraint networks

Dehani, Djamel-Eddine 13 February 2014
This thesis is in the field of constraint programming (CP). More precisely, we focus on the Weighted Constraint Satisfaction Problem (WCSP), an optimization problem for which many forms of soft local (arc) consistency have been proposed in recent years, such as existential directional arc consistency (EDAC) and virtual arc consistency (VAC). In this context, we adopt a different perspective by revisiting the well-known property of (soft) substitutability. First, we provide a clear picture of the relationships between soft neighborhood substitutability (SNS) and a tractable property called pcost, which allows the costs of two values to be compared (through the use of so-called cost pairs). We prove that under certain assumptions pcost is equivalent to SNS, but that it is weaker than SNS in the general case, since we show that SNS is coNP-hard. We also show that SNS preserves the property VAC but not the property EDAC. We then introduce an optimized algorithm and show, on various series of WCSP instances, the practical interest of maintaining pcost together with AC*, FDAC*, or EDAC* during search. We also present a new type of property for WCSPs called tuple consistency (TC), which is enforced on a weighted constraint network through a new operation called TupleProject. Moreover, we propose an optimal version of this property, OTC, which can be seen as a generalization of OSAC (Optimal Soft Arc Consistency). Finally, we extend the concept of soft substitutability to tuples.
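A hedged illustration rather than the thesis's pcost algorithm: for a binary WCSP with hypothetical cost tables and plain additive costs (no upper-bound capping, the subtlety that cost pairs refine), value b is neighborhood-substitutable for a at variable x if every complete assignment costs no more with b, and the classic per-constraint lower bound below is a sufficient test for that.

```python
# Sufficient test for soft neighborhood substitutability in a binary WCSP.
# b may replace a at variable x if, for every assignment of x's neighbors,
# swapping a -> b never increases the total cost. A sufficient condition:
#   unary[a] - unary[b] + sum over neighbors y of
#       min over v of (cost_xy[a][v] - cost_xy[b][v])  >=  0
# (With a capping upper bound this simple test is no longer exact.)

def substitutable(a, b, unary, binary_costs):
    """unary: dict value -> unary cost at x; binary_costs: one table per
    neighbor y, each a dict value_of_x -> {value_of_y: cost}."""
    delta = unary[a] - unary[b]
    for table in binary_costs:
        delta += min(table[a][v] - table[b][v] for v in table[a])
    return delta >= 0

# Hypothetical instance: x has values 0 and 1, one neighbor y with values 0..2.
unary_x = {0: 3, 1: 1}
c_xy = [{0: {0: 2, 1: 4, 2: 1},   # costs when x = 0
         1: {0: 2, 1: 3, 2: 0}}]  # costs when x = 1
print(substitutable(0, 1, unary_x, c_xy))  # True: 1 may replace 0 at x
```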
140

Programação por restrições e escalonamento baseado em restrições: Um estudo de caso na programação de recursos para o desenvolvimento de poços de petróleo / Constraint programming and constraint-based scheduling: A case study in the scheduling of resources for developing offshore oil wells

Thiago Serra Azevedo Silva 23 May 2012
The aim of this dissertation is to present a problem of optimizing the use of critical resources in the development of offshore oil wells, together with the technique employed in the proposed approach. The Constraint Programming technique is reviewed by analyzing relevant aspects of modeling, propagation, search, and programming paradigms. Its specialization to scheduling problems, known as Constraint-Based Scheduling, is described with emphasis on descriptive paradigms and constraint propagation mechanisms. To support the use of the technique on other problems, the commercial modeling language OPL is presented in the appendix. The goal of the approach is to obtain a scheduler that maximizes short-term oil production. The proposed scheduler is based on a declarative model using interval variables. An algorithm and an Integer Linear Programming model addressing relaxations of the problem are presented in order to obtain an upper bound on the optimal production value. For the real scenario on which the experimental analysis was performed, solutions within 16% of the optimum were found after one hour of execution, and tests on instances of varied sizes demonstrated the robustness of the scheduler. Directions for future work are presented in light of the results obtained.
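As a hedged sketch of the modeling style (interval variables, not the dissertation's actual OPL model; all data invented), the CP-SAT program below schedules well-development tasks on a single critical resource (a rig) and maximizes short-term production, counting each well's output from its completion until the planning horizon.

```python
from ortools.sat.python import cp_model

# Hypothetical wells: development duration (days) and production rate
# (units/day) once development finishes.
duration = [20, 35, 15]
rate = [5, 9, 4]
horizon = 90  # short-term planning window

model = cp_model.CpModel()
starts, ends, intervals = [], [], []
for i, d in enumerate(duration):
    s = model.NewIntVar(0, horizon - d, f's{i}')
    e = model.NewIntVar(0, horizon, f'e{i}')
    intervals.append(model.NewIntervalVar(s, d, e, f'well{i}'))
    starts.append(s)
    ends.append(e)

# One critical resource (the rig) develops one well at a time.
model.AddNoOverlap(intervals)

# Production of well i within the window: rate_i * (horizon - end_i),
# so the objective favors finishing high-rate wells early.
model.Maximize(sum(rate[i] * (horizon - ends[i]) for i in range(len(rate))))

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print([solver.Value(s) for s in starts], solver.ObjectiveValue())
```

The objective makes the greedy intuition precise: with a single rig, sequencing wells by production rate per day of development tends to dominate, and the solver proves it.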
