81

Monophonic convexity in classes of graphs / Convexidade Monofônica em Classes de Grafos

Eurinardo Rodrigues Costa 06 February 2015 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / In this work, we study some parameters of monophonic convexity in some classes of graphs and present our results on this subject. We prove that deciding whether the $m$-interval number is at most 2 and deciding whether the $m$-percolation time is at most 1 are NP-complete problems, even on bipartite graphs. We also prove that the $m$-convexity number is as hard to approximate as the maximum clique problem, which is $O(n^{1-\varepsilon})$-inapproximable in polynomial time, unless P=NP, for every $\varepsilon>0$. Finally, we obtain polynomial-time algorithms to compute the $m$-convexity number on hereditary graph classes for which the clique number can be computed in polynomial time (e.g. perfect graphs and planar graphs).
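The central object behind these parameters, the monophonic interval (the vertices lying on chordless paths between members of a set), can be sketched with a brute-force Python snippet. Function names are illustrative and the path enumeration is exponential, so this is only for tiny graphs, not a substitute for the thesis's polynomial-time algorithms:

```python
from itertools import combinations

def all_simple_paths(adj, u, v, path=None):
    # enumerate every simple path from u to v (exponential; tiny graphs only)
    path = [u] if path is None else path
    if u == v:
        yield list(path)
        return
    for w in sorted(adj[u]):
        if w not in path:
            yield from all_simple_paths(adj, w, v, path + [w])

def is_induced(adj, path):
    # monophonic = chordless: non-consecutive path vertices are never adjacent
    return all((path[j] in adj[path[i]]) == (j == i + 1)
               for i, j in combinations(range(len(path)), 2))

def m_interval(adj, S):
    # J[S]: all vertices on some induced (monophonic) path between vertices of S
    closed = set(S)
    for u, v in combinations(sorted(S), 2):
        for p in all_simple_paths(adj, u, v):
            if is_induced(adj, p):
                closed.update(p)
    return closed
```

On the 5-cycle, for instance, the interval of two non-adjacent vertices is the whole cycle, while a single edge is m-convex (its interval is itself).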
82

De grafos a emparelhamentos : uma possibilidade viável de encantar-se com a matemática

Ferreira, Verônica Craveiro de Santana 10 April 2014 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / This thesis aims to show that graph theory, especially matchings, can be studied gradually in high school, and that implementing this theory in the classroom can foster students' interest in mathematics. The work thus seeks to demystify the idea that mathematics ends with the high-school syllabus, bringing students closer to theories recently developed in academia. Graph theory is considered an efficient tool for solving problems in various areas. There are numerous situations that can be modeled by graphs and that make it possible to develop a range of skills, which is why the subject is so appealing to anyone who comes into contact with it. We begin our study with basic concepts of graph theory useful for understanding this work, then present some problems that can be worked on in high school, and finish with a specific topic of this theory, matchings, which has many applications that can be contextualized and modeled as practical problems of everyday life.
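The matchings discussed above can be computed with the classical augmenting-path algorithm, which is simple enough to trace by hand in a classroom. A minimal sketch, with illustrative names and adjacency lists as input:

```python
def max_bipartite_matching(left, adj):
    # Kuhn's algorithm: for each unmatched left vertex, search for an
    # augmenting path; match[v] stores the left partner of right vertex v
    match = {}

    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    size = sum(augment(u, set()) for u in left)
    return size, match
```

A tiny instance (students to tasks) already shows the key step: when 'carlos' wants an occupied task, the algorithm reroutes an earlier assignment instead of giving up.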
83

Inspection automatisée d’assemblages mécaniques aéronautiques par vision artificielle : une approche exploitant le modèle CAO / Automated inspection of mechanical parts by computer vision : an approach based on CAD model

Viana do Espírito Santo, Ilísio 12 December 2016 (has links)
The work presented in this manuscript deals with automated inspection of aeronautical mechanical assemblies using computer vision. The goal is to decide whether a mechanical assembly has been assembled correctly, i.e. whether it is compliant with the specifications. This work was conducted within two industrial projects: on the one hand the CAAMVis project, in which the inspection sensor consists of a dual stereoscopic head (stereovision) carried by a robot, and on the other hand the Lynx© project, in which the inspection sensor is a single Pan/Tilt/Zoom camera (monocular vision). These two projects share the common objective of exploiting the CAD model of the assembly (which provides the desired reference state) as much as possible in the inspection task, which is based on the analysis of the 2D images provided by the sensor. The proposed method consists in comparing a 2D image acquired by the sensor (referred to as the "real image") with a synthetic 2D image generated from the CAD model. The real and synthetic images are segmented and then decomposed into sets of 2D primitives. These primitives are then matched by exploiting concepts from graph theory, namely a bipartite graph that guarantees the uniqueness constraint required in such a matching process. The matching result makes it possible to decide whether the assembly has been assembled correctly. The proposed approach was validated on both simulation data and real data acquired within the above-mentioned projects.
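The uniqueness constraint enforced by the bipartite matching can be illustrated with a small backtracking check: conformity requires every expected (CAD-derived) primitive to be paired with a distinct compatible observed primitive. This is a hedged sketch with hypothetical names, not the projects' actual pipeline:

```python
def all_expected_matched(expected, observed, compatible):
    # backtracking: can every expected (CAD) primitive be paired with a
    # *distinct* compatible observed primitive? If not, flag non-conformity.
    def solve(i, used):
        if i == len(expected):
            return True
        return any(solve(i + 1, used | {o})
                   for o in observed
                   if o not in used and compatible(expected[i], o))
    return solve(0, frozenset())
```

If two expected bolts are both compatible only with the same observed primitive, no injective pairing exists and the assembly is reported non-compliant.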
84

Generalizations Of The Popular Matching Problem

Nasre, Meghana 08 1900 (has links) (PDF)
Matching problems arise in several real-world scenarios, like assigning posts to applicants, houses to trainees and room-mates to one another. In this thesis we consider the bipartite matching problem where one side of the bipartition specifies preferences over the other side. That is, we are given a bipartite graph G = (A ∪ P, E) where A denotes the set of applicants, P denotes the set of posts, and the preferences of applicants are specified by ranks on the edges. Several notions of optimality, like pareto-optimality, rank-maximality and popularity, have been studied in the literature; we focus on the notion of popularity. A matching M is more popular than another matching M′ if the number of applicants that prefer M to M′ exceeds the number of applicants that prefer M′ to M. A matching M is said to be popular if there exists no matching that is more popular than M. Popular matchings have the desirable property that no applicant majority can force a migration to another matching. However, popular matchings do not provide a complete answer, since there exist simple instances that do not admit any popular matching. Abraham et al. (SICOMP 2007) characterized instances that admit a popular matching and also gave efficient algorithms to find one when it exists. We present several generalizations of the popular matching problem in this thesis. The majority of our work deals with instances that do not admit any popular matching. We propose three different solution concepts for such instances. A reasonable solution when an instance does not admit a popular matching is to output a matching that is least unpopular among the set of unpopular matchings. McCutchen (LATIN 2008) introduced and studied measures of unpopularity, namely the unpopularity factor and the unpopularity margin. He proved that computing either a least unpopularity factor matching or a least unpopularity margin matching is NP-hard.
We build upon this work and design an O(km√n) time algorithm which produces matchings with bounded unpopularity provided a certain subgraph of G admits an A-complete matching (a matching that matches all the applicants). Here n and m denote the number of vertices and the number of edges in G respectively, and k, which is bounded by |A|, is the number of iterations taken by our algorithm to terminate. We also show that if a certain subgraph of G admits an A-complete matching, then we have computed a matching with the least unpopularity factor. Another feasible solution for instances without any popular matching is to output a mixed matching that is popular. A mixed matching is simply a probability distribution over the set of matchings. A mixed matching Q is popular if no mixed matching is more popular than Q. We seek to answer the existence and computation of popular mixed matchings in a given instance G. We begin with a linear programming formulation to compute a mixed matching with the least unpopularity margin. We show that although the linear program has exponentially many constraints, we have a polynomial time separation oracle and hence a least unpopularity margin mixed matching can be computed in polynomial time. By casting the popular mixed matchings problem as a two player zero-sum game, it is possible to prove that every instance of the popular matchings problem admits a popular mixed matching. Therefore, the matching returned by our linear program is indeed a popular mixed matching. Finally, we propose augmentation of the input graph for instances that do not admit any popular matching. Assume that we are dealing with a set of items B (say, DVDs/books) instead of posts and it is possible to make duplicates of these items. Our goal is to make duplicates of appropriate items such that the augmented graph admits a popular matching. 
However, since allowing arbitrarily many copies of items is not feasible in practice, we impose restrictions in two forms: (i) associating costs with items, and (ii) bounding the number of copies. In the first case, we assume that we pay a price of cost(b) for every extra copy of b that we make; the first copy is assumed to be given to us for free. The total cost of the augmented instance is the sum of the costs of all the extra copies that we make. Our goal is to compute a minimum-cost augmented instance which admits a popular matching. In the second case, along with the input graph G = (A ∪ B, E), we are given a vector ⟨c1, c2, ..., c|B|⟩ denoting upper bounds on the number of copies of every item. We seek to answer whether there exists a vector ⟨x1, x2, ..., x|B|⟩ such that having xi copies of item bi, where 1 ≤ xi ≤ ci, enables the augmented graph to admit a popular matching. We prove that several problems under both these models turn out to be NP-hard; in fact they remain NP-hard even under severe restrictions on the preference lists. Our final results deal with instances that admit popular matchings. When the input instance admits a popular matching, there may be several popular matchings; in fact there may be several maximum-cardinality popular matchings. Hence one may not be content with returning any maximum-cardinality popular matching and may instead ask for an optimal popular matching. Assuming that the notion of optimality is specified as part of the problem, we present an O(m + n1^2) time algorithm for computing an optimal popular matching in G. Here m denotes the number of edges in G and n1 denotes the number of applicants. We also consider the problem of computing a minimum-cost popular matching where with every post p, a price cost(p) and a capacity cap(p) are associated. A post with capacity cap(p) can be matched with up to cap(p) applicants. We present an O(mn1) time algorithm to compute a minimum-cost popular matching in such instances.
We believe that the work provides interesting insights into the popular matchings problem and its variants.
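The popularity comparison that drives all of these results reduces to counting votes between two matchings; a small sketch, assuming a dictionary representation with illustrative names:

```python
def popularity_margin(M1, M2, prefs):
    # applicants preferring M1 minus applicants preferring M2;
    # being unmatched counts as worse than receiving any ranked post
    def rank(M, a):
        post = M.get(a)
        return prefs[a].index(post) if post in prefs[a] else len(prefs[a])
    votes = [(rank(M1, a) < rank(M2, a)) - (rank(M1, a) > rank(M2, a))
             for a in prefs]
    return sum(votes)
```

On the classic 3-applicant instance where everyone ranks the posts identically, every matching is beaten by another (the margins cycle), which is exactly why the thesis turns to unpopularity measures, mixed matchings and augmentation.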
85

Learning with Complex Performance Measures : Theory, Algorithms and Applications

Narasimhan, Harikrishna January 2016 (has links) (PDF)
We consider supervised learning problems, where one is given objects with labels, and the goal is to learn a model that can make accurate predictions on new objects. These problems abound in applications, ranging from medical diagnosis to information retrieval to computer vision. Examples include binary or multiclass classification, where the goal is to learn a model that can classify objects into two or more categories (e.g. categorizing emails as spam or non-spam); bipartite ranking, where the goal is to learn a model that can rank relevant objects above irrelevant ones (e.g. ranking documents by relevance to a query); and class probability estimation (CPE), where the goal is to predict the probability of an object belonging to different categories (e.g. the probability of an internet ad being clicked by a user). In each case, the accuracy of a model is evaluated in terms of a specified 'performance measure'. While there has been much work on designing and analyzing algorithms for different supervised learning tasks, we have a complete understanding only for settings where the performance measure of interest is the standard 0-1 or a loss-based classification measure. These performance measures have a simple additive structure and can be expressed as an expectation of errors on individual examples. However, in many real-world applications, the performance measure used to evaluate a model is more complex and does not decompose into a sum or expectation of point-wise errors. These include the binary or multiclass G-mean used in class-imbalanced classification problems; the F1-measure and its multiclass variants, popular in text retrieval; and the (partial) area under the ROC curve (AUC) and precision@k employed in ranking applications. How does one design efficient learning algorithms for such complex performance measures, and can these algorithms be shown to be statistically consistent, i.e.
shown to converge, in the limit of infinite data, to the optimal model for the given measure? How does one develop efficient learning algorithms for complex measures in online/streaming settings, where the training examples need to be processed one at a time? These are the questions that we seek to address in this thesis. Firstly, we consider the bipartite ranking problem with the AUC and partial AUC performance measures. We start by understanding how bipartite ranking with AUC is related to the standard 0-1 binary classification and CPE tasks. It is known that a good binary CPE model can be used to obtain both a good binary classification model and a good bipartite ranking model (formally, in terms of regret transfer bounds), and that a binary classification model does not necessarily yield a CPE model. However, not much is known about the other directions. We show that in a weaker sense (where the mapping needed to transform a model from one problem to another depends on the underlying probability distribution), a good bipartite ranking model for AUC can indeed be used to construct a good binary classification model, and also a good binary CPE model. Next, motivated by the increasing number of applications (e.g. biometrics, medical diagnosis, etc.) where performance is measured not in terms of the full AUC but in terms of the partial AUC between two false positive rates (FPRs), we design batch algorithms for optimizing the partial AUC in any given FPR range. Our algorithms optimize structural support vector machine based surrogates, which, unlike for the full AUC, do not admit a straightforward decomposition into simpler terms. We develop polynomial-time cutting plane solvers for the optimization, and provide experiments to demonstrate the efficacy of our methods. We also present an application of our approach to predicting chemotherapy outcomes for cancer patients, with the aim of improving treatment decisions.
Secondly, we develop algorithms for optimizing (surrogates for) complex performance measures in the presence of streaming data. A well-known method for solving this problem for standard point-wise surrogates, such as the hinge surrogate, is the stochastic gradient descent (SGD) method, which performs point-wise updates using unbiased gradient estimates. However, this method cannot be applied to complex objectives, as here one can no longer obtain unbiased gradient estimates from a single point. We develop a general stochastic method for optimizing complex measures that avoids point-wise updates and instead performs gradient updates on mini-batches of incoming points. The method is shown to provably converge for any performance measure that satisfies a uniform convergence requirement, such as the partial AUC, precision@k and the F1-measure, and in experiments is often several orders of magnitude faster than the state-of-the-art batch methods, while achieving similar or better accuracies. Moreover, for specific complex binary classification measures, which are concave functions of the true positive rate (TPR) and true negative rate (TNR), we are able to develop stochastic (primal-dual) methods that can indeed be implemented with point-wise updates, using an adaptive linearization scheme. These methods admit convergence rates that match the rate of the SGD method, and are again several times faster than the state-of-the-art methods. Finally, we look at the design of consistent algorithms for complex binary and multiclass measures. For binary measures, we consider the practically popular plug-in algorithm, which constructs a classifier by applying an empirical threshold to a suitable class probability estimate, and provide a general methodology for proving the consistency of these methods. We apply this technique to show consistency for the F1-measure and, under a continuity assumption on the distribution, for any performance measure that is monotonic in the TPR and TNR.
For the case of multiclass measures, a simple plug-in method is no longer tractable, as in place of a single threshold parameter, one needs to tune at least as many parameters as the number of classes. Using an optimization viewpoint, we provide a framework for designing learning algorithms for multiclass measures that are general functions of the confusion matrix, and as an instantiation, provide an efficient and provably consistent algorithm based on the bisection method for multiclass measures that are ratio-of-linear functions of the confusion matrix (e.g. micro F1). The algorithm outperforms the state-of-the-art SVMPerf method in terms of both accuracy and running time. Overall, in this thesis, we have looked at various aspects of complex performance measures used in supervised learning problems, leading to several new algorithms that are often significantly better than the state-of-the-art, to improved theoretical understanding of the performance measures studied, and to novel real-life applications of the algorithms developed.
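The plug-in idea for binary measures can be sketched as follows: threshold a class-probability estimate and tune the threshold on the empirical F1, which, being a function of global confusion counts, cannot be optimized example by example. A toy sketch under these assumptions, not the thesis's full procedure:

```python
def f1_at_threshold(scores, labels, th):
    # F1 is a global function of the confusion counts; it does not
    # decompose into a sum of per-example losses
    tp = sum(1 for s, y in zip(scores, labels) if s >= th and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= th and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < th and y == 1)
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def best_threshold(scores, labels):
    # plug-in tuning: evaluate F1 at every observed score value
    return max(sorted(set(scores)),
               key=lambda th: f1_at_threshold(scores, labels, th))
```

For multiclass measures this single sweep is exactly what stops working, since one parameter per class would have to be tuned jointly.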
86

Rotulação de símbolos matemáticos manuscritos via casamento de expressões / Labeling of Handwritten Mathematical Symbols via Expression Matching

Willian Yukio Honda 23 January 2013 (has links)
The problem of recognizing handwritten mathematical expressions includes three important subproblems: symbol segmentation, symbol recognition, and structural analysis of expressions. To evaluate recognition methods and techniques, they should be tested on sample sets representative of the application domain. One concern raised repeatedly in recent years is the near non-existence of public representative datasets of mathematical expressions, which makes the development and comparison of distinct approaches difficult. In general, recognition results reported in the literature are restricted to small datasets, not publicly available, and often consisting of data aimed at evaluating only some specific aspects of recognition. In the case of online expressions, to train and test symbol recognizers, samples are generally obtained by asking users to write a series of symbols individually and repeatedly. Such a task is monotonous and tiring. An alternative approach for obtaining symbol samples would be to ask users to transcribe previously defined model expressions. Writing would then be more natural and less monotonous, and several symbol samples could be obtained from a single transcription. To avoid the task of manually labeling the symbols of the transcribed expressions, this work proposes a method for handwritten expression matching, in which symbols of a transcribed expression are assigned to the corresponding (previously identified) symbols of the model expression. The proposed method is based on a formulation that reduces the matching problem to a linear assignment problem, where costs are defined in terms of symbol features and expression structure. Experimental results using the proposed method show mean correct assignment rates above 99%.
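The reduction to a linear assignment problem can be illustrated with a brute-force solver; with the handful of symbols in a short expression this is workable, though a real implementation would use the Hungarian algorithm. The cost values below are hypothetical:

```python
from itertools import permutations

def min_cost_assignment(cost):
    # cost[i][j]: cost of assigning transcribed symbol i to model symbol j;
    # brute force over all bijections (n! candidates, so tiny n only)
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best)
```

Each transcribed symbol is forced to claim a distinct model symbol, which is precisely the one-to-one labeling the method needs.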
87

Metody pro komparativní analýzu metagenomických dat / Methods for Comparative Analysis of Metagenomic Data

Sedlář, Karel January 2018 (has links)
Modern research in environmental microbiology uses genomic data, above all DNA sequencing, to describe microbial communities. The field that studies all the genetic material present in an environmental sample is called metagenomics. This doctoral thesis approaches metagenomics from the perspective of bioinformatics, which is indispensable for the computational processing of the data. The theoretical part of the thesis describes the two basic approaches of metagenomics, including their principles and weaknesses. The first approach, based on targeted sequencing, is a well-established field with a wide range of bioinformatic techniques; nevertheless, methods for comparing samples from several environments can be substantially improved. The approach introduced in this thesis uses a unique transformation of the data into a bipartite graph, where one partition is formed by taxa and the other by samples or environments. Such a graph fully reflects the qualitative and quantitative composition of the analysed microbial network. It allows a massive reduction of the data for simple visualization without negative effects on automated community detection, which can reveal clusters of similar samples and their typical microbes. The second approach uses whole-metagenome sequencing. This strategy is newer and the corresponding bioinformatic tools are less mature; the main challenge remains the rapid classification of sequences, referred to in metagenomics as binning. The method introduced in this thesis builds on genomic signal processing. This unique methodology was designed on the basis of a detailed analysis of the redundancy of the genetic information stored in genomic signals. It transforms character sequences into several variants of phase signals and, moreover, allows direct processing of nanopore sequencing data in the form of native current signals.
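The taxa-samples bipartite transformation can be sketched as turning an abundance table into a weighted edge list, one partition for samples and one for taxa. The data layout and names here are assumptions for illustration:

```python
def abundance_to_bipartite(table):
    # table: {sample: {taxon: read count}} ->
    # edge list (sample, taxon, relative abundance), one partition per role
    edges = []
    for sample, taxa in table.items():
        total = sum(taxa.values())
        for taxon, count in sorted(taxa.items()):
            if count:
                edges.append((sample, taxon, count / total))
    return edges
```

The weighted edges carry both the qualitative (which taxa occur where) and quantitative (how abundantly) composition, which is what community detection then operates on.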
88

Modely matematického programování pro směšovací úlohy / Mathematical Programs for Blending Problems

Kalenský, Vít January 2018 (has links)
This diploma thesis deals with optimization models for the design of a new waste management infrastructure in the Czech Republic, so that combustible waste which is not used for material recovery can be used for energy recovery. The task is handled by optimization models, including traffic and mixing problems. First, the concepts of graph theory and optimization are presented. Subsequently, some GAMS functions are discussed, followed by the VBA programming language, used to handle larger data quickly. In the main part, three gradually expanding models are developed. Finally, data from the waste management information system are implemented in them.
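A mixing (blending) constraint of the kind such models contain can be shown on a toy two-stream instance, solved by grid search rather than GAMS; all numbers and names are illustrative:

```python
def cheapest_blend(cost_a, cost_b, heat_a, heat_b, min_heat, step=0.01):
    # choose fraction x of stream A (1 - x of stream B) minimizing cost,
    # subject to a minimum calorific value of the blend;
    # coarse grid search stands in for a proper LP solver
    best = None
    steps = int(round(1 / step))
    for k in range(steps + 1):
        x = k * step
        if x * heat_a + (1 - x) * heat_b >= min_heat:
            c = x * cost_a + (1 - x) * cost_b
            if best is None or c < best[1]:
                best = (round(x, 10), c)
    return best
```

With a cheap low-calorific stream and an expensive high-calorific one, the optimum sits exactly where the quality constraint becomes tight, the same structure an LP solver would exploit.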
89

Neki prilozi teoriji turnira / Some contributions to the theory of tournaments

Petrović Vojislav 04 December 1987 (has links)
Tournaments are the most investigated class of oriented graphs. Two types of results are presented in the thesis. The first concerns so-called unavoidable subgraphs: Hamiltonian bypasses, subgraphs C(n, i) and antidirected Hamiltonian cycles. The second deals with problems of score frequencies in ordinary, bipartite and 3-partite tournaments.
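Score sequences of ordinary tournaments are characterized by Landau's classical condition, which is straightforward to check; a sketch (the thesis's score-frequency results go well beyond this basic criterion):

```python
def is_tournament_score_sequence(scores):
    # Landau (1953): a nondecreasing sequence s_1..s_n is the score sequence
    # of some tournament iff the sum of the first k terms is >= k(k-1)/2
    # for every k, with equality for k = n
    s = sorted(scores)
    n = len(s)
    partial = 0
    for k, value in enumerate(s, start=1):
        partial += value
        if partial < k * (k - 1) // 2:
            return False
    return partial == n * (n - 1) // 2
```

The directed 3-cycle (scores 1, 1, 1) and the transitive tournament (0, 1, 2) both pass, while (0, 0, 3) fails: two players cannot both lose every match.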
90

d-extensibles, d-bloqueurs et d-transversaux de problèmes d'optimisation combinatoire / d-extensible sets, d-blockers and d-transversals of combinatorial optimization problems

Cotté, Grégoire 09 June 2016 (has links)
In this thesis, we study three types of problems: d-extensible sets, d-blockers and d-transversals. In a graph G, a d-extensible set of maximum stable sets is a subset of vertices of G such that every stable set of cardinality d in the subgraph induced by the d-extensible set can be extended to a maximum stable set of G using only vertices that do not belong to the d-extensible set. We study d-extensible sets of maximum cardinality of stable sets in bipartite graphs. We prove some structural properties and determine a lower bound on the maximum cardinality of a d-extensible set. We consider some classes of graphs in which an optimal d-extensible set can be found in polynomial time. We then study d-extensible sets of stable sets in trees: we prove several structural properties, determine another lower bound on the maximum cardinality of a d-extensible set, and study some classes of trees in which a d-extensible set of maximum cardinality can be found in polynomial time. In a graph G, a d-blocker is a subset of vertices whose removal decreases the maximum cardinality of a stable set by at least d. We study d-blockers of minimal cost of stable sets in trees. After proving a characterization of d-blockers of stable sets in trees, we show that a minimum-cost d-blocker of stable sets can be computed in polynomial time in a particular class of trees. Let Pi be an optimization problem on a finite set of elements. A d-transversal of Pi is a subset of elements such that the intersection between the d-transversal and every optimal solution of Pi contains at least d elements. We propose a constraint generation approach to compute maximum-cardinality d-transversals of problems modeled as mathematical programs with binary variables. We study two variants of this approach and test them on randomly generated graphs by computing d-transversals of maximum stable sets and d-transversals of maximum matchings.
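The d-blocker definition lends itself to a brute-force check on small graphs (undirected adjacency assumed symmetric; exponential enumeration, illustration only):

```python
from itertools import combinations

def max_stable_set_size(adj, vertices):
    # largest independent set among `vertices` (brute force over subsets)
    for size in range(len(vertices), 0, -1):
        for S in combinations(vertices, size):
            if all(v not in adj[u] for u, v in combinations(S, 2)):
                return size
    return 0

def is_d_blocker(adj, vertices, B, d):
    # removing B must drop the maximum stable set size by at least d
    rest = [v for v in vertices if v not in B]
    return max_stable_set_size(adj, rest) <= max_stable_set_size(adj, vertices) - d
```

On the path 0-1-2-3, removing {0, 3} leaves only the edge 1-2, dropping the maximum stable set size from 2 to 1, so {0, 3} is a 1-blocker, while {0} alone is not.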
