71.
Approximation algorithms for covering problems in dense graphs
Levy, Eythan (06 March 2009)
We present a set of approximation results for several covering problems in dense graphs. These results show that for several problems, classical algorithms with constant approximation ratios can be analyzed more finely, yielding better constant approximation ratios under certain density constraints. In particular, we show that the maximal matching heuristic approximates VERTEX COVER (VC) and MINIMUM MAXIMAL MATCHING (MMM) with a constant ratio strictly smaller than 2 when the proportion of edges present in the graph (weak density) is at least 3/4, or when the normalized minimum degree (strong density) is at least 1/2. We also show that this result can be improved by a greedy algorithm, which provides a constant ratio smaller than 2 when the weak density is at least 1/2. We provide tight families of graphs for all these approximation ratios. We then examine several algorithms from the literature for VC and SET COVER (SC), presenting a unified and critical approach to the Karpinski-Zelikovsky, Imamura-Iwama and Bar-Yehuda-Kehat algorithms and identifying the general scheme underlying them.

Finally, we look at the CONNECTED VERTEX COVER (CVC) problem, for which we propose new approximation results in dense graphs. We first analyze Carla Savage's algorithm, then a new variant of the Karpinski-Zelikovsky algorithm. Our results show that these algorithms provide the same approximation ratios for CVC as the maximal matching heuristic and the Karpinski-Zelikovsky algorithm do for VC. We provide tight examples for the ratios guaranteed by both algorithms. We also introduce a new invariant, the "price of connectivity of VC", defined as the ratio between the optimal solutions of CVC and VC, and show a nearly tight upper bound on its value as a function of the weak density. Our last chapter discusses software aspects and presents the use of the GRAPHEDRON software in the framework of approximation algorithms, as well as our contributions to the development of this system.
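For reference, the maximal matching heuristic analyzed in the thesis is the textbook 2-approximation for VERTEX COVER. A minimal Python sketch (our own illustration, not code from the thesis):

    def maximal_matching_vertex_cover(edges):
        """Classical 2-approximation for VERTEX COVER: greedily build a
        maximal matching and return the set of matched endpoints. Every
        edge touches a matched vertex (otherwise the matching would not
        be maximal), so the result is a cover; any optimal cover must
        contain at least one endpoint of each matched edge, giving the
        factor 2."""
        matched = set()
        for u, v in edges:
            if u not in matched and v not in matched:
                matched.update((u, v))
        return matched

    # On the 4-cycle the heuristic returns all 4 vertices while an
    # optimal cover has size 2, so the classical factor 2 is tight.
    print(maximal_matching_vertex_cover([(0, 1), (1, 2), (2, 3), (3, 0)]))

The thesis's contribution is a finer analysis of this very heuristic: under the weak or strong density thresholds stated above, the guaranteed factor drops strictly below 2.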
72.
From Worst-Case to Average-Case Efficiency – Approximating Combinatorial Optimization Problems
Plociennik, Kai (27 January 2011)
In theoretical computer science, various notions of efficiency are used for algorithms. The most commonly used notion is worst-case efficiency, which is defined by requiring polynomial worst-case running time. Another commonly used notion is average-case efficiency for random inputs, which is roughly defined as having polynomial expected running time with respect to the random inputs. Depending on the actual notion of efficiency one uses, the approximability of a combinatorial optimization problem can be very different.
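In symbols (our notation, not the dissertation's), writing T_A(x) for the running time of algorithm A on input x and (D_n) for a family of distributions over inputs of size n, the two notions contrast as

\[
\text{worst-case efficient: } \max_{|x| = n} T_A(x) \le \operatorname{poly}(n),
\qquad
\text{average-case efficient: } \mathbb{E}_{x \sim \mathcal{D}_n}\bigl[T_A(x)\bigr] \le \operatorname{poly}(n).
\]

An algorithm can meet the second condition while taking superpolynomial time on rare inputs, which is the slack the dissertation exploits.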
In this dissertation, the approximability of three classical combinatorial optimization problems, namely Independent Set, Coloring, and Shortest Common Superstring, is investigated for different notions of efficiency. For the three problems, approximation algorithms are given which guarantee approximation ratios that are unachievable by worst-case efficient algorithms under reasonable complexity-theoretic assumptions. The algorithms achieve polynomial expected running time for different models of random inputs. On the one hand, classical average-case analyses are performed, using totally random input models as the source of random inputs. On the other hand, probabilistic analyses are performed, using semi-random input models inspired by the so-called smoothed analysis of algorithms.
Finally, the expected performance of well-known greedy algorithms for random inputs from the considered models is investigated. The expected behavior of some properties of the random inputs themselves is also considered.
73.
Optimizing the imbalances in a graph
Glorieux, Antoine (19 June 2017)
The imbalance of a vertex in a directed graph is the absolute value of the difference between its outdegree and indegree. In this thesis we study the problem of orienting the edges of a graph so as to maximize the image, under an objective function f, of the vector whose components are the imbalances of the vertices. The first case considered is the problem of maximizing the minimum imbalance of all the vertices over all possible orientations of the input graph. We first characterize the graphs for which the optimal objective value is zero. Next we give several results concerning the computational complexity of the problem. Finally, we deal with several mixed integer programming formulations for this problem and present some numerical experiments. Next, we show that the case f = (1/2)‖·‖₁ leads to the famous unweighted maximum cut problem. We introduce some new formulations along with a new bound shown to be tighter than that of Michel Goemans and David Williamson. Theoretical and computational results regarding bound quality and performance are also reported. Finally, in order to strengthen some formulations of the studied problems, we study a specific class of polytopes: the convex hull of the 0/1 assignment matrices (where each column contains exactly one coefficient equal to 1) appended with the index of their lowest row that is not identically zero. We give a full description of this polytope and of some of its variants which naturally appear in the context of several combinatorial optimization problems. We also show that linear optimization over these polytopes can be done in polynomial time.
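A minimal sketch of the quantities involved (our own illustration, not code from the thesis):

    from collections import Counter

    def imbalances(n, arcs):
        """Imbalance of vertex v under an orientation:
        |outdeg(v) - indeg(v)|. `arcs` lists each edge of the graph
        once as an ordered pair (tail, head)."""
        out_deg, in_deg = Counter(), Counter()
        for u, v in arcs:
            out_deg[u] += 1
            in_deg[v] += 1
        return [abs(out_deg[v] - in_deg[v]) for v in range(n)]

    # A path on three vertices oriented 0 -> 1 -> 2: the middle vertex
    # is balanced, so the minimum imbalance is 0.
    print(min(imbalances(3, [(0, 1), (1, 2)])))   # -> 0
    # Reorienting the second edge, 0 -> 1 <- 2, gives imbalances (1, 2, 1).
    print(min(imbalances(3, [(0, 1), (2, 1)])))   # -> 1

With f = (1/2)‖·‖₁ the objective is half the sum of these values, which is where the link to the unweighted maximum cut problem mentioned above comes from.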
74.
A Novel Methodology for Timely Brain Formations of 3D Spatial Information with Application to Visually Impaired Navigation
Manganas, Spyridon (06 September 2019)
No description available.
75.
Online Learning and Simulation Based Algorithms for Stochastic Optimization
Lakshmanan, K (January 2012)
In many optimization problems, the relationship between the objective and the parameters is not known explicitly. The objective function itself may be stochastic, such as a long-run average over some random cost samples. In such cases, computing the gradient of the objective is not possible, and it is in this setting that stochastic approximation algorithms are used. These algorithms rely on estimates of the gradient and are stochastic in nature. Amongst gradient estimation techniques, Simultaneous Perturbation Stochastic Approximation (SPSA) and the Smoothed Functional (SF) scheme are widely used. In this thesis we propose a novel multi-timescale quasi-Newton based smoothed functional (QN-SF) algorithm for unconstrained as well as constrained optimization. The algorithm uses the smoothed functional scheme for estimating the gradient and the quasi-Newton method to solve the optimization problem. The algorithm is shown to converge with probability one.
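For concreteness, a minimal sketch of the Gaussian smoothed functional gradient estimate that QN-SF builds on (our own illustration; the quasi-Newton update and the multi-timescale machinery of the thesis are omitted):

    import numpy as np

    def sf_gradient_estimate(f, x, beta=0.1, samples=64,
                             rng=np.random.default_rng(0)):
        """One-sided Gaussian SF estimate: since E[eta] = 0 for standard
        Gaussian eta, grad f_beta(x) = (1/beta) E[eta (f(x + beta*eta) - f(x))],
        where f_beta is the Gaussian-smoothed version of f. Only (possibly
        noisy) evaluations of f are needed, no analytical gradient."""
        fx = f(x)
        g = np.zeros_like(x)
        for _ in range(samples):
            eta = rng.standard_normal(x.shape)
            g += eta * (f(x + beta * eta) - fx)
        return g / (beta * samples)

    # Sanity check on f(x, y) = x^2 + y^2 at (1, -2): true gradient (2, -4).
    print(sf_gradient_estimate(lambda z: float(np.sum(z ** 2)),
                               np.array([1.0, -2.0])))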
We also provide experimental results on the problem of optimal routing in a multi-stage network of queues. Policies like Join the Shortest Queue or Least Work Left assume knowledge of the queue-length values, which can change rapidly or be hard to estimate. If the only information available is the expected end-to-end delay, as in our case, such policies cannot be used. The QN-SF based probabilistic routing algorithm uses only the total end-to-end delay for tuning the routing probabilities. We observe from the experiments that the QN-SF algorithm performs better than the gradient and Jacobi versions of Newton-based smoothed functional algorithms. Next we consider constrained routing in a similar queueing network and extend the QN-SF algorithm to this case. We study the convergence behavior of the algorithm and observe that the constraints are satisfied at the point of convergence. We provide experimental results for the constrained routing setup as well.
Next we study reinforcement learning algorithms, which are useful for solving a Markov Decision Process (MDP) when precise information on the transition probabilities is not known. When the state and action sets are very large, it is not possible to store all the state-action tuples, and function approximators like neural networks are used instead. The popular Q-learning algorithm is known to diverge when used with linear function approximation due to the 'off-policy' problem. Hence, developing learning algorithms that are stable when used with function approximation is an important problem.
We present in this thesis a variant of Q-learning with linear function approximation that is based on two-timescale stochastic approximation. The Q-value parameters for a given policy are updated on the slower timescale, while the policy parameters themselves are updated on the faster scale. We perform a gradient search in the space of policy parameters. Since the objective function, and hence the gradient, is not analytically known, we employ efficient one-simulation simultaneous perturbation stochastic approximation (SPSA) gradient estimates that use Hadamard matrix based deterministic perturbations. Our algorithm has the advantage that, unlike Q-learning, it does not suffer from high oscillations due to the off-policy problem when using function approximators. Whereas it is difficult to prove convergence of regular Q-learning with linear function approximation because of the off-policy problem, we prove that our algorithm, which is on-policy, is convergent. Numerical results on a multi-stage stochastic shortest path problem show that our algorithm exhibits significantly better performance and is more robust than Q-learning. Future work would be to compare it with other policy-based reinforcement learning algorithms.
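A minimal sketch of a one-simulation SPSA estimate with Hadamard-based deterministic perturbations of the kind referred to above (our own illustration; the two-timescale Q-learning updates are omitted):

    import numpy as np

    def sylvester_hadamard(k):
        """2^k x 2^k Hadamard matrix via the Sylvester construction."""
        H = np.array([[1.0]])
        for _ in range(k):
            H = np.block([[H, H], [H, -H]])
        return H

    def spsa_one_sim_gradient(f, theta, H, t, delta=0.05):
        """One-simulation SPSA estimate with deterministic perturbations:
        at step t the perturbation is a row of H restricted to columns
        1..d (the all-ones column is skipped so the bias terms cancel
        over a full cycle of rows)."""
        d = len(theta)
        delta_vec = H[t % H.shape[0], 1:d + 1]   # +/-1 perturbation
        return f(theta + delta * delta_vec) / (delta * delta_vec)

    # Averaged over one cycle of rows on f(x, y) = x^2 + y^2 at (1, -2),
    # the estimate recovers the true gradient (2, -4).
    H = sylvester_hadamard(2)                    # 4 x 4 suffices for d = 2
    theta = np.array([1.0, -2.0])
    g = np.mean([spsa_one_sim_gradient(lambda z: float(np.sum(z ** 2)),
                                       theta, H, t)
                 for t in range(H.shape[0])], axis=0)
    print(g)                                     # -> [ 2. -4.]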
Finally, we develop an online actor-critic reinforcement learning algorithm with function approximation for a problem of control under inequality constraints. We consider the long-run average cost Markov decision process (MDP) framework, in which both the objective and the constraint functions are suitable policy-dependent long-run averages of certain sample-path functions. The Lagrange multiplier method is used to handle the inequality constraints. We prove the asymptotic almost-sure convergence of our algorithm to a locally optimal solution. We also provide the results of numerical experiments on a problem of routing in a multistage queueing network with constraints on long-run average queue lengths. We observe that our algorithm exhibits good performance in this setting and converges to a feasible point.
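The Lagrangian saddle-point formulation behind this construction, in our notation rather than the thesis's: with long-run average cost J(θ), constraint G(θ) ≤ α and multiplier λ ≥ 0,

\[
L(\theta, \lambda) = J(\theta) + \lambda \bigl( G(\theta) - \alpha \bigr),
\]

where the actor updates θ to decrease L on a faster timescale while the multiplier is adjusted on a slower one, for instance λ ← max{0, λ + b(t)(G(θ) − α)}, so constraint violations push λ up and feasible iterates let it relax toward zero.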