91

Algoritmos para problemas de empacotamento e roteamento / Algorithms for packing and routing problems

Silveira, Jefferson Luiz Moisés da, 1986- 10 February 2013 (has links)
Advisor: Eduardo Candido Xavier / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação / Abstract: In this work we are interested in packing and routing problems. Assuming P ≠ NP, there are no efficient algorithms that solve such problems exactly. Besides exact algorithms, two approaches to such problems are approximation algorithms and heuristics. In this thesis we present algorithms based on these three approaches for both packing and routing problems. The first two problems addressed are generalizations of classical packing problems: the two-dimensional knapsack problem and the strip packing problem. They are generalized by adding constraints on how items may be loaded into and unloaded from the bin (constraints that arise in the context of routing problems). The third problem is a combination of packing and routing: a generalization of the classical Pickup and Delivery Problem. We propose the first approximation results for some versions of the packing problems above. In addition, we present practical heuristic approaches for the third problem; the heuristics were evaluated through computational experiments comparing their results with exact algorithms. / Doctorate / Computer Science / Doctor of Computer Science
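To fix ideas about the base problem generalized in this thesis, the sketch below shows the classical Next-Fit Decreasing Height shelf heuristic for strip packing. It is a textbook baseline written only for illustration; it is not one of the algorithms proposed in the thesis, the function name and the toy instance are made up, and items are assumed to be no wider than the strip.

```python
def nfdh_strip_packing(items, strip_width):
    """Next-Fit Decreasing Height: pack (width, height) rectangles into a
    strip of fixed width, minimising the used height. Returns the packed
    height and the placements as (x, y, w, h) tuples."""
    items = sorted(items, key=lambda wh: wh[1], reverse=True)  # tallest first
    shelf_y, shelf_h, x = 0.0, 0.0, 0.0
    placements = []
    for w, h in items:
        if x + w > strip_width:          # item does not fit: open a new shelf
            shelf_y += shelf_h
            shelf_h, x = 0.0, 0.0
        placements.append((x, shelf_y, w, h))
        x += w
        shelf_h = max(shelf_h, h)        # shelf height = its tallest item
    return shelf_y + shelf_h, placements

# Toy instance: four rectangles in a strip of width 4.
height, layout = nfdh_strip_packing([(2, 3), (2, 2), (1, 2), (3, 1)], strip_width=4)
print(height)  # 5.0 for this instance
```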
92

Scheduling coal handling processes using metaheuristics

Conradie, David Gideon 21 April 2008 (has links)
Operational scheduling at coal handling facilities is of the utmost importance to ensure that the coal-consuming processes are supplied with a constant feed of good-quality coal. Although the Sasol Coal Handling Facility (CHF) was not designed to perform coal blending during the coal handling process, CHF has to blend the different sources to ensure that the quality of the supplied feed is stable. As a result, the operation of the plant has become an extremely complex process, and manual scheduling is no longer sufficient; a scheduling model is therefore required to ensure optimal plant operation and optimal downstream process performance. After various attempts to solve the scheduling model optimally, i.e. with exact solution methods, it was found that the complexities of CHF cannot be modelled accurately in a way that currently available exact solvers can handle within an acceptable operational time. Various alternative solution approaches are compared, in terms of solution quality and execution speed, using a simplified version of the CHF scheduling problem. This investigation indicates that the Simulated Annealing (SA) metaheuristic is the most efficient method for providing approximate solutions. The metaheuristic approach allows one to model the typical sequential reasoning of a control room operator and sequential operating procedures; so far, such sequential rules could not be modelled in the simultaneous-equation environment required by exact solution methods. An SA metaheuristic is developed to solve the practical scheduling model. A novel SA approach is applied in which, instead of the actual solution being used to represent a neighbourhood solution, neighbours are represented indirectly by the rules used to generate them. It is also found that the initial temperature should not be a fixed value, but a multiple of the objective function value of the initial solution. An inverse arctan-based cooling schedule outperforms traditional cooling schedules, as it provides the required diversification and intensification behaviour of the SA. The scheduling model solves within 45 seconds and provides good, practically executable results. The metaheuristic approach to scheduling is therefore successful: the plant complexities and intricate operational philosophies can be modelled accurately using the sequential nature of programming languages, and good approximate solutions are obtained in a short solution time. Tests done with live CHF data indicate that the metaheuristic solution outperforms the scheduling methodologies currently applied in the business. Implementation of the scheduler will lead to a more stable factory feed, which will increase production yields and therefore company profits. By reducing the amount of coal re-handling (in terms of throw-outs and load-backs at mine bunkers), the scheduler will reduce the coal handling facility's annual operating cost by approximately R4.6 million (ZAR). Furthermore, the approaches discussed in this document can be applied to any continuous product scheduling environment. Additional information is available on a CD stored at Level 3 of the Merensky Library. / Dissertation (MEng (Industrial Engineering))--University of Pretoria, 2011. / Industrial and Systems Engineering / unrestricted
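The two temperature ideas above (an initial temperature proportional to the initial objective value, and an arctan-shaped decay) can be sketched in a few lines of Python. This is a generic illustration under those assumptions, not the dissertation's scheduler: the multiplier k, the decay constant, and the neighbour/cost callables are hypothetical placeholders, and a positive minimisation objective is assumed.

```python
import math
import random

def simulated_annealing(initial, neighbour, cost, steps=10_000, k=0.3):
    """Minimise cost() starting from `initial`. `neighbour` returns a
    perturbed copy of a solution. The initial temperature is a multiple k
    of the initial objective value, and the temperature decays along an
    arctan-shaped curve (one plausible reading of an 'inverse arctan-based'
    schedule): fast early cooling, slow late cooling."""
    current = best = initial
    f_cur = f_best = cost(initial)
    t0 = k * f_cur                      # temperature tied to the problem scale
    for i in range(steps):
        frac = i / steps
        temp = t0 * (1.0 - (2.0 / math.pi) * math.atan(10.0 * frac))
        cand = neighbour(current)
        f_cand = cost(cand)
        accept = f_cand < f_cur or \
            random.random() < math.exp(-(f_cand - f_cur) / max(temp, 1e-9))
        if accept:
            current, f_cur = cand, f_cand
            if f_cur < f_best:
                best, f_best = current, f_cur
    return best, f_best
```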
93

Optimisation de déploiement et localisation de cible dans les réseaux de capteurs / Deployment optimization and target tracking in sensor networks

Le Berre, Matthieu 05 June 2014 (has links)
In this thesis, we address deployment optimization and target localization in sensor networks. We first propose a model for optimizing two conflicting objectives: the number of deployed sensors and the accuracy of the localization. Four classical multi-objective algorithms were implemented, and hybrid versions were also proposed. A variant of the previous problem, dedicated to indoor localization applications, is also studied. The algorithms proposed for the first problem showed only limited effectiveness in the initial experiments, so a dedicated heuristic was developed; it performed very well on small instances and clearly outperformed the other implemented algorithms on large instances. Finally, coverage and connectivity are addressed and integrated into linear deployment models. A Branch and Bound algorithm was developed for this problem, using a geometric lower bound and two properties to reduce the number of fathomed nodes, and was compared experimentally with current linear programming solvers.
94

Approximation Algorithms for Network Connectivity Problems

Cameron, Amy January 2012 (has links)
In this dissertation, we examine specific network connectivity problems and obtain improved approximation algorithms and integrality gap results for them. We introduce a new network connectivity problem with many practical applications - the Vital Core Connectivity Problem (VCC) - which has not been studied previously. We present the first constant-factor approximation algorithm for VCC and provide an upper bound on the integrality gap of its linear programming relaxation. We also introduce a useful extension of the minimum spanning tree problem, called the Extended Minimum Spanning Tree Problem (EMST), based on a special case of VCC, and provide both a polynomial-time algorithm and a complete linear description for it. Furthermore, we show how to generalize this new problem to handle numerous disjoint vital cores, providing the first complete linear description of, and polynomial-time algorithm for, the generalized problem. We examine the Survivable Network Design Problem (SNDP) with multiple copies of edges allowed in the solution (multi-SNDP), and present a new approximation algorithm whose approximation guarantee is better than the current best known for certain cases of multi-SNDP. With our method, we also obtain improved bounds on the integrality gap of the linear programming relaxation of the problem, and we show how these results apply to variations of SNDP. We investigate cases where the optimal values of multi-SNDP and SNDP are equal, and we present an improvement on the previously best known integrality gap bound and approximation guarantee for the special case of SNDP with metric costs and low vertex connectivity requirements, as well as for the similar special case of the Vertex Connected Survivable Network Design Problem (VC-SNDP). The quality of the results one can obtain for a given network design problem often depends on its integer linear programming formulation and, in particular, on its linear programming relaxation. In this connection, we investigate formulations for the Steiner Tree Problem (ST): we propose two new formulations and investigate their strength in terms of their associated integrality gaps.
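For context, the LP relaxation whose integrality gap is discussed above is usually the standard cut-based formulation of SNDP. The version below, with edge costs c_e and pairwise connectivity requirements r_uv, is the textbook relaxation stated here for reference under that assumption; it is not a formulation taken from the dissertation.

```latex
\[
\begin{aligned}
\min\;& \sum_{e \in E} c_e\, x_e \\
\text{s.t.}\;& \sum_{e \in \delta(S)} x_e \;\ge\; \max_{u \in S,\ v \in V \setminus S} r_{uv}
  && \text{for all } \emptyset \neq S \subsetneq V,\\
& x_e \ge 0 && \text{for all } e \in E.
\end{aligned}
\]
```

The integrality gap of such a relaxation is the worst-case ratio, over all instances, of the optimal integral value to the optimal LP value; an algorithm whose output cost is bounded relative to the LP optimum therefore bounds both the approximation ratio and the gap.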
95

Theoretical and Experimental Studies on the Minimum Size 2-edge-connected Spanning Subgraph Problem

Sun, Yu January 2013 (has links)
A graph is said to be 2-edge-connected if it remains connected after the deletion of any single edge. Given an unweighted bridgeless graph G with n vertices, the minimum size 2-edge-connected spanning subgraph problem (2EC) is that of finding a 2-edge-connected spanning subgraph of G with the minimum number of edges. This problem has important applications in the design of survivable networks. However, because the problem is NP-hard, it is unlikely that efficient exact methods exist for solving it, so efficient methods that find solutions provably close to optimal are sought. In this thesis, an approximation algorithm is presented for 2EC on bridgeless cubic graphs with an approximation guarantee of 5/4, improving on the previous best proven guarantee of 5/4 + ε for this problem. We also focus on the linear programming (LP) relaxation of 2EC, which provides important lower bounds for 2EC in useful solution techniques such as branch and bound. The “goodness” of this lower bound is measured by the integrality gap of the LP relaxation for 2EC, denoted by α2EC. Through a computational study, we find the exact value of α2EC for graphs with small n. Moreover, a significant improvement is found for the lower bound on the value of α2EC for bridgeless subcubic graphs, raising the best known lower bound on α2EC from 9/8 to 8/7.
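The defining property above (connected after deleting any single edge) can be checked directly on small graphs. The brute-force sketch below is only meant to make the definition concrete; it is unrelated to the thesis's algorithms, assumes vertices labelled 0..n-1 with n ≥ 1, and a real implementation would use a linear-time bridge-finding routine instead.

```python
from collections import deque

def is_connected(n, edges):
    """BFS connectivity test on vertices 0..n-1 given an edge list."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == n

def is_two_edge_connected(n, edges):
    """True iff the graph stays connected after deleting any single edge."""
    if not is_connected(n, edges):
        return False
    return all(is_connected(n, edges[:i] + edges[i + 1:]) for i in range(len(edges)))

# A 4-cycle is 2-edge-connected; the path obtained by dropping one edge is not.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert is_two_edge_connected(4, cycle)
assert not is_two_edge_connected(4, cycle[:-1])
```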
96

Escalonamento de tarefas com localidade de dados em grids / Task scheduling with data locality in grids

Póvoa, Marcelo Galvão, 1990- 02 April 2015 (has links)
Advisor: Eduardo Candido Xavier / Master's thesis - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Computational systems known as Data Grids provide a distributed computing infrastructure for data processing and storage, with many applications in large-scale computing. Because great amounts of data are involved, not only efficient task scheduling but also careful placement of data replicas is crucial for achieving the best performance. Both problems have already been studied independently in the literature, but we are interested in a combined formulation as a static problem, in order to optimize a single objective function. First, we show that this problem does not admit an approximation algorithm. However, for a restricted version of the problem, we provide a constant-factor approximation algorithm, and we also survey approximation algorithms for related problems available in the literature. On the practical side, we introduce two novel heuristics for the problem: the first groups neighboring machines into clusters, while the second tries to identify groups of files that are frequently accessed together. We compare these algorithms with two approaches adapted from the literature through computational simulations on an extensive set of instances based on real grids. We show that our first heuristic usually obtains the best solutions with good time efficiency, while the second is even faster and still provides competitive solutions. / Master's degree / Computer Science / Master of Computer Science
97

Caminhos mais longos em grafos / Longest paths in graphs

Susanna Figueiredo De Rezende 30 May 2014 (has links)
The central theme of this thesis is the study of problems related to longest paths in graphs, from both a structural and an algorithmic point of view. The first part focuses on problems motivated by the following question raised by T. Gallai in 1966: is it true that every connected graph has a vertex common to all of its longest paths? Today, many connected graphs are known in which all longest paths have empty intersection. However, there are classes of graphs for which Gallai's question has a positive answer. In this direction, we present some results from the literature, as well as two new classes we obtained: outerplanar graphs and 2-trees. Motivated by this problem, T. Zamfirescu proposed the following question in the 1980s, which remains open: is it true that every connected graph has a vertex common to any three of its longest paths? We present, in addition to some known results, a proof that the answer to this question is positive for graphs in which all non-trivial blocks are Hamiltonian. We note that this result, and the one mentioned above for outerplanar graphs, generalize a theorem of M. Axenovich (2009) stating that any three longest paths in an outerplanar graph have a common vertex. Finally, we mention some other related results from the literature. In the second part, we investigate the problem of finding a longest path in a graph, which is NP-hard for arbitrary graphs. This motivates investigations in two directions. We can look for special classes of graphs for which the problem is polynomially solvable, or we can relinquish the search for a longest path and design an efficient algorithm that finds a path whose length is close to that of a longest path. In this thesis we study both approaches and present some results from the literature.
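On small graphs, Gallai's question can be checked mechanically: enumerate every longest path by brute force and intersect their vertex sets. The sketch below is a toy illustration of that check (exponential time, consistent with the NP-hardness mentioned above); it is not one of the algorithms discussed in the thesis, and the example graph is made up.

```python
def longest_paths(n, edges):
    """Return all maximum-length simple paths (as vertex tuples) by exhaustive DFS."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best, best_len = [], -1

    def extend(path):
        nonlocal best, best_len
        grew = False
        for w in adj[path[-1]] - set(path):
            grew = True
            extend(path + (w,))
        if not grew:                      # maximal path; longest paths are maximal
            if len(path) > best_len:
                best, best_len = [path], len(path)
            elif len(path) == best_len:
                best.append(path)

    for v in range(n):
        extend((v,))
    return best

def common_vertex_of_longest_paths(n, edges):
    """Intersection of the vertex sets of all longest paths (Gallai's question)."""
    common = set(range(n))
    for p in longest_paths(n, edges):
        common &= set(p)
    return common

# In the star K_{1,3} every longest path passes through the centre vertex 0.
print(common_vertex_of_longest_paths(4, [(0, 1), (0, 2), (0, 3)]))  # {0}
```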
98

On the Study of Fitness Landscapes and the Max-Cut Problem

Rodriguez Fernandez, Angel Eduardo 14 December 2021 (has links)
The goal of this thesis is to study the complexity of NP-hard problems, using the Max-Cut and Max-k-Cut problems as case studies, and to study fitness landscapes. The Max-Cut and Max-k-Cut problems are well-studied NP-hard problems, especially since the approximation algorithm of Goemans and Williamson (1995), which introduced the use of semidefinite programming (SDP) to solve relaxed problems. To make a performance guarantee provable, the rounding step from the SDP solution to a Max-Cut solution is kept simple and randomized. For the Max-k-Cut problem, several approximation algorithms exist, but many of them have been proved to be equivalent. As in Max-Cut, these approximation algorithms use a simple randomized rounding in order to obtain a performance guarantee. Setting the performance guarantee aside for a moment, one can ask whether there is a rounding process that takes into account the structure of the relaxed solution, since that solution is itself the result of an optimization problem. In this thesis we answer this question positively by using clustering as the rounding method. To compare the performance of both algorithms, a series of experiments was performed using the so-called G-set benchmark for the Max-Cut problem and the random graph benchmark of Goemans and Williamson (1995) for the Max-k-Cut problem. With this new rounding, larger cut values are found for both the Max-Cut and the Max-k-Cut problems, always above the value of the performance guarantee of the approximation algorithm. This suggests that taking the structure of the problem into account when designing algorithms can lead to better results, possibly at the cost of a worse performance guarantee. An example for the vertex k-center problem can be seen in Garcia-Diaz et al. (2017), where a 3-approximation algorithm performs better in practice than a 2-approximation algorithm despite having a worse performance guarantee. Landscapes over discrete configuration spaces are an important model in evolutionary and structural biology, as well as in many other areas of science, from the physics of disordered systems to operations research. A landscape is a real-valued function defined on a very large discrete set V that carries an additional metric, or at least topological, structure. We consider landscapes defined on the vertex set of undirected graphs: let G = G(V,E) be an undirected graph and f an arbitrary real-valued function on V. We refer to the triple (V,E,f) as a landscape over G. Two configurations x, y in V are neutral if f(x) = f(y), and we colloquially call a landscape 'neutral' if a substantial fraction of adjacent pairs of configurations are neutral. A flat landscape is one where f is constant. The opposite of flatness is ruggedness, defined by the number of local optima or by means of pair correlation functions. These two key features of a landscape, ruggedness and neutrality, appear to be two sides of the same coin. Ruggedness can be measured either by correlation properties, which are sensitive to monotonic transformations of the landscape, or by combinatorial properties such as the lengths of downhill paths and the number of local optima, which are invariant under monotonic transformations. The connection between the two views has remained largely unexplored and poorly understood. This thesis presents a survey on fitness landscapes, together with first steps toward establishing this connection, including a relation between the covariance matrix of a random landscape model and its ruggedness.
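The two statistics named above, the number of local optima as a measure of ruggedness and the fraction of neutral adjacent pairs as a measure of neutrality, follow directly from the definition of a landscape (V, E, f). The sketch below is a minimal illustration of those definitions on a made-up four-vertex landscape; it is not code from the thesis.

```python
def local_optima(adj, f, maximise=True):
    """Count vertices whose fitness is no worse than that of every neighbour."""
    better = (lambda a, b: a >= b) if maximise else (lambda a, b: a <= b)
    return sum(1 for v in adj if all(better(f[v], f[w]) for w in adj[v]))

def neutrality(adj, f):
    """Fraction of adjacent configuration pairs (x, y) with f(x) == f(y)."""
    pairs = [(v, w) for v in adj for w in adj[v] if v < w]
    return sum(f[v] == f[w] for v, w in pairs) / len(pairs)

# A tiny landscape over a 4-cycle: two fitness plateaus, hence some neutrality.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
f = {0: 1.0, 1: 1.0, 2: 2.0, 3: 2.0}
print(local_optima(adj, f), neutrality(adj, f))  # 2 local maxima, 0.5 neutral pairs
```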
99

Optimal resource allocation strategies for electric vehicles in smart grids / Stratégies optimales d'allocation des ressources pour véhicules électriques dans les réseaux intelligents

Alinia, Bahram 10 July 2018 (has links)
With increasing environmental concerns related to carbon emissions and the rapid drop in battery prices (e.g., a 35% drop in 2017), the market share of Electric Vehicles (EVs) is growing rapidly. The growing number of EVs, along with unprecedented advances in battery capacity and technology, results in a drastic increase in the total energy demand of EVs. This large charging demand makes the EV charging scheduling problem challenging. The critical challenge is the need for online solution design, since in practice the scheduler has no information about future EV arrivals in an underlying time-coupled problem. This thesis studies the online EV scheduling problem and provides three main contributions.
First, we demonstrate that the classical problem of online scheduling of deadline-sensitive jobs with partial values is similar to the EV scheduling problem, and we study the extension to EV charging scheduling by taking the processing rate limit of jobs into account as an additional constraint. The problem lies in the category of time-coupled online scheduling problems without availability of future information. Using the competitive ratio as a well-established performance metric, two online algorithms are proposed, both shown to be (2 − 1/U)-competitive, where U is the maximum scarcity level, a parameter indicating the demand-to-supply ratio; the first algorithm is deterministic, while the second is randomized and has lower computational complexity. Second, we formulate a social welfare maximization problem for EV charging scheduling with a charging capacity constraint. We devise charging scheduling algorithms that not only work in the online scenario but also address two key challenges: (i) providing on-arrival commitment, since respecting the capacity constraint may prevent the charging requirements of deadline-constrained EVs from being fulfilled entirely, so a guaranteed charging amount must be committed upon each EV's arrival; and (ii) guaranteeing (group) strategy-proofness, so that EVs are encouraged to reveal their true type and not to collude with other EVs. Extensive simulations using real traces demonstrate the effectiveness of our online scheduling algorithms compared to the optimal non-committed offline solution. Third, we tackle online scheduling of EVs in an adaptive charging network (ACN) with local and global peak constraints. Two alternatives in resource-limited scenarios are to maximize social welfare by partially charging the EVs (fractional model) or by selecting a subset of EVs and fully charging them (integral model). For the fractional model, both offline and online algorithms are devised; we prove that the offline algorithm is optimal and that the online algorithm achieves a competitive ratio of 2. The integral model is more challenging, since the underlying problem is NP-hard due to the 0/1 selection of EVs, so efficient solution design is challenging even in the offline setting. We devise a low-complexity primal-dual scheduling algorithm that achieves a bounded approximation ratio. Building on the offline approximation algorithm, we propose an online algorithm and analyze its competitive ratio in special cases. Detailed trace-based experimental results show that the performance of the proposed online algorithms is close to the offline optimum and outperforms existing solutions.
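For reference, the competitive-ratio guarantee quoted above can be written out explicitly. The statement below is just the standard definition for an online welfare-maximization problem, instantiated with the bound from the abstract; it is not a derivation from the thesis, and for the randomized algorithm ALG(σ) would be read as an expectation.

```latex
\[
\mathrm{ALG}\ \text{is } c\text{-competitive} \iff
\mathrm{OPT}(\sigma) \;\le\; c \cdot \mathrm{ALG}(\sigma)
\quad \text{for every request sequence } \sigma,
\qquad \text{here } c = 2 - \tfrac{1}{U},
\]
```

where U is the maximum scarcity (demand-to-supply) level, exactly the parameter named in the abstract.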
100

FFRU: A Time- and Space-Efficient Caching Algorithm

Garrett, Benjamin, 0000-0003-1587-6585 January 2021 (has links)
Cache replacement policies have applications that are nearly ubiquitous in technology. Among these is an interesting subset that arises when referentially transparent functions are memoized, e.g. in compilers, in dynamic programming, and in other software caches. In many applications the least recently used (LRU) policy likely preserves the items most needed by memoized function calls. However, despite its popularity, LRU is expensive to implement, which has prompted a spate of research aimed at approximating its cache miss performance in exchange for faster and more memory-efficient implementations. We present a novel caching algorithm, Far From Recently Used (FFRU), which offers a simple but highly configurable mechanism for providing lower bounds on the usage recency of items evicted from the cache. The algorithm preserves the constant amortized time cost of insertions and updates and minimizes the memory overhead needed to administer the eviction guarantees. We study the cache miss performance of several memoized optimization problems that vary in the number of subproblems generated and in the access patterns exhibited by their recursive calls, first under LRU cache replacement and then under FFRU in the same problem scenarios. We show that for equivalent minimum eviction age guarantees, FFRU incurs fewer cache misses than LRU, and does so using less memory. We also present variations of the algorithms studied (Fibonacci, KMP, LCS, and Euclidean TSP) that exploit the characteristics of the cache replacement algorithms being employed, further improving cache miss performance. Finally, we present a novel implementation of a well-known approximation algorithm for the Euclidean Traveling Salesman Problem due to Sanjeev Arora. Our implementation outperforms the currently known implementations of the same algorithm. It has long remained an open question whether algorithms relying on geometric divisions of space can be turned into practical tools, and our implementation of Arora's algorithm establishes a new benchmark in that arena. / Computer and Information Science
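The memoized-function setting described above is easy to reproduce in a few lines: Python's functools.lru_cache provides a bounded least-recently-used software cache, and the hit/miss counts it reports are exactly the kind of statistic by which a replacement policy is judged. The sketch below is a generic LRU-backed memoization example for illustration; it is not an implementation of FFRU, and the cache size is an arbitrary choice.

```python
from functools import lru_cache

@lru_cache(maxsize=256)          # bounded LRU software cache
def fib(n: int) -> int:
    """Memoized Fibonacci: a referentially transparent function whose
    recursive calls hit the cache on repeated subproblems."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(300)
info = fib.cache_info()          # named tuple: hits, misses, maxsize, currsize
print(info.hits, info.misses)    # the miss count is what a replacement policy tries to minimise
```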
