About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Ant Colony Optimization and Local Search for the Probabilistic Traveling Salesman Problem: A Case Study in Stochastic Combinatorial Optimization

Bianchi, Leonora 29 June 2006
In this thesis we focus on Stochastic Combinatorial Optimization Problems (SCOPs), a wide class of combinatorial optimization problems under uncertainty, where part of the information about the problem data is unknown at the planning stage, but some knowledge about its probability distribution is assumed. Optimization problems under uncertainty are complex and difficult, and classical algorithmic approaches based on mathematical and dynamic programming are often able to solve only very small problem instances. For this reason, in recent years metaheuristic algorithms such as Ant Colony Optimization, Evolutionary Computation, Simulated Annealing and Tabu Search have emerged as successful alternatives to classical approaches.

In this thesis, the metaheuristics that have been applied so far to SCOPs are introduced and the related literature is thoroughly reviewed. In particular, two properties of metaheuristics emerge from the survey: they are a valid alternative to exact classical methods for addressing real-sized SCOPs, and they are flexible, since they can be adapted quite easily to different SCOP formulations, both static and dynamic. On the basis of the current literature, we identify the following key open issues in solving SCOPs via metaheuristics: (1) the design and integration of ad hoc, fast and effective objective function approximations inside the optimization algorithm; (2) the estimation of the objective function by sampling when no closed-form expression for it is available, and the study of methods to reduce the time complexity and noise inherent in this type of estimation; (3) the characterization of the efficiency of metaheuristic variants with respect to different levels of stochasticity in the problem instances.

We investigate the above issues by focusing in particular on a SCOP belonging to the class of vehicle routing problems: the Probabilistic Traveling Salesman Problem (PTSP). For the PTSP, we consider the Ant Colony Optimization metaheuristic and design efficient local search algorithms that can enhance its performance. We obtain state-of-the-art algorithms, but we show that they are effective only for instances above a certain level of stochasticity; otherwise it is more convenient to solve the problem as if it were deterministic. The algorithmic variants based on an estimation of the objective function by sampling obtain worse results, but behave qualitatively in the same way as the algorithms based on the exact objective function with respect to the level of stochasticity. Moreover, we show that the performance of algorithmic variants based on ad hoc approximations is strongly correlated with the absolute error of the approximation, and that ad hoc approximations can severely degrade the effectiveness of local search. Finally, we briefly address another SCOP belonging to the class of vehicle routing problems: the Vehicle Routing Problem with Stochastic Demands (VRPSD). For this problem, we have implemented and tested several metaheuristics, and we have studied the impact of integrating different ad hoc approximations into them.
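To make open issue (2) concrete, the following is a minimal illustrative sketch (not the algorithm developed in the thesis) of how the PTSP objective can be estimated by sampling: the expected length of an a priori tour is approximated by averaging the lengths of the a posteriori tours obtained when each customer is independently present with its given probability. The function and parameter names are assumptions introduced here for illustration.

```python
import random

def sampled_ptsp_cost(tour, dist, prob, n_samples=1000, rng=None):
    """Monte Carlo estimate of the expected length of an a priori PTSP tour.

    tour : list of node indices giving the a priori tour
    dist : dist[i][j] is the distance between nodes i and j
    prob : prob[i] is the probability that node i requires a visit
    """
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n_samples):
        # Sample which customers are actually present in this realization.
        present = [v for v in tour if rng.random() < prob[v]]
        # The a posteriori tour visits the present customers in a priori order,
        # skipping the absent ones (closing the cycle back to the start).
        if len(present) >= 2:
            total += sum(dist[a][b]
                         for a, b in zip(present, present[1:] + present[:1]))
    return total / n_samples
```

For the PTSP a closed-form expression of this expectation also exists; the variance and cost of estimators like the one above are what motivate the noise- and complexity-reduction methods mentioned in point (2).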
2

Rozvozný problém s delenou dodávkou / Split delivery vehicle routing problem

Marcinko, Tomáš January 2008
This thesis focuses on the split delivery vehicle routing problem (SDVRP), in which, contrary to the classical vehicle routing problem, the restriction that each customer has to be visited exactly once is dropped and split deliveries are allowed. Since the split delivery vehicle routing problem is NP-hard, a number of heuristic algorithms proposed in the literature are presented. Computational experiments are reported; the results show that the largest benefits of split deliveries are obtained for instances with fairly specific characteristics, and several drawbacks of the implemented Tabu Search algorithm (SPLITABU) are pointed out.
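As a minimal illustration of why split deliveries can pay off on instances with specific characteristics (for example, when customer demands are just above half the vehicle capacity), the sketch below compares the number of vehicles needed with and without splitting under a simple greedy loading rule. It is a toy example, not the SPLITABU algorithm.

```python
def vehicles_needed(demands, capacity, allow_split):
    """Greedy count of vehicles required to serve all (integer) demands.

    With splitting allowed, a customer's demand may be spread over several
    vehicles, so no capacity is wasted; without splitting, each customer
    must fit entirely in one vehicle (first-fit decreasing)."""
    if allow_split:
        total = sum(demands)
        return -(-total // capacity)          # ceiling division
    loads = []                                # current load of each open vehicle
    for d in sorted(demands, reverse=True):
        for i, load in enumerate(loads):
            if load + d <= capacity:
                loads[i] += d
                break
        else:
            loads.append(d)
    return len(loads)

# Example: three customers, each demanding 2/3 of a vehicle's capacity.
print(vehicles_needed([6, 6, 6], 9, allow_split=False))  # -> 3 vehicles
print(vehicles_needed([6, 6, 6], 9, allow_split=True))   # -> 2 vehicles
```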
3

Condução de Experimentos Computacionais com Métodos Heurísticos / Conducting Computational Experiments with Heuristic Methods

COSTA, Carine Rodrigues da 30 March 2011
The need to solve optimization problems within a reasonable computational time limit has made the development of heuristics a large research area. Usually, heuristics developed for optimization problems are evaluated empirically by applying them to a set of specific instances and comparing solution quality and computational effort. Moreover, when a new heuristic is presented, its contributions should be evaluated scientifically and reported in an objective way. When a computational experiment is described and its results are reported, the difficulty of reproducing the experiment or of comparing its results with those of other experiments can become evident. Part of the origin of these issues lies in the fact that there is no standard for reporting experiments in Computer Science. Therefore, the focus of this work is to investigate methods for conducting experimental research with heuristics, in order to examine which methods are more favorable and consistent for evaluating them. The investigation resulted in a compilation of the contributions of several authors, consisting of a set of recommendations and a checklist that summarizes all the items covered in this study. The results of this review served as the basis for defining the research and conducting an example study, which consisted of analyzing articles that deal with the Quadratic Assignment Problem (QAP) and checking for the items necessary to understand, reproduce and compare the reported experiments.
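As a sketch of the kind of machine-readable record such a checklist could motivate, the snippet below captures a few of the items commonly recommended when reporting heuristic experiments (instances, seeds, parameters, hardware and software versions). The field names and example values are illustrative assumptions, not the dissertation's actual checklist.

```python
from dataclasses import dataclass, field, asdict
import json
import platform

@dataclass
class ExperimentRecord:
    """Illustrative subset of the items a reporting checklist might require."""
    problem: str              # e.g. "QAP"
    instance_set: str         # e.g. "QAPLIB"
    algorithm: str
    parameters: dict
    random_seed: int
    runs_per_instance: int
    time_limit_s: float
    hardware: str = field(default_factory=platform.processor)
    os_version: str = field(default_factory=platform.platform)
    software_versions: dict = field(default_factory=dict)

record = ExperimentRecord(
    problem="QAP", instance_set="QAPLIB", algorithm="tabu search",
    parameters={"tabu_tenure": 8}, random_seed=42,
    runs_per_instance=30, time_limit_s=60.0,
    software_versions={"python": platform.python_version()},
)
print(json.dumps(asdict(record), indent=2))
```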
4

Optimization Algorithms for Clique Problems / Algorithmes d’Optimisation pour quelques Problèmes de Clique.

Zhou, Yi 29 June 2017
This thesis considers four clique problems: the maximum vertex weight clique problem (MVWCP), the maximum s-plex problem (MsPlex), the maximum balanced biclique problem (MBBP) and the clique partitioning problem (CPP). The first three are generalizations or relaxations of the classic maximum clique problem (MCP), while the last is a clique grouping problem. These combinatorial problems have numerous practical applications. Since they all belong to the NP-hard family, they are computationally difficult to solve in the general case. For this reason, this thesis is devoted to developing effective algorithms to tackle these challenging problems. Specifically, we propose two restart tabu search algorithms based on a generalized PUSH operator for MVWCP, a frequency-driven local search algorithm for MsPlex, a graph-reduction-based tabu search as well as effective exact branch-and-bound algorithms for MBBP, and lastly a three-phase local search algorithm for CPP. In addition to designing efficient move operators for the local search algorithms, we also integrate components such as graph reduction and upper bound propagation in order to deal with very large real-life networks. Experimental tests on a wide range of instances show that our algorithms compete favorably with the main state-of-the-art algorithms.
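For readers unfamiliar with tabu search on clique problems, here is a compact, generic sketch for the maximum vertex weight clique problem using plain add/drop moves and a short tabu tenure. It is an illustration under assumed data structures (adjacency sets, positive weights), not the PUSH-based restart algorithms developed in the thesis.

```python
import random

def tabu_mvwcp(adj, weight, iters=10000, tenure=7, seed=0):
    """Simple tabu search for the maximum vertex weight clique problem.

    adj[v]    : set of the neighbours of vertex v
    weight[v] : positive weight of vertex v
    Moves: add a vertex adjacent to every current clique member, or drop one.
    A dropped vertex is tabu (cannot be re-added) for `tenure` iterations,
    unless re-adding it would improve on the best clique found (aspiration)."""
    rng = random.Random(seed)
    clique, best = set(), set()
    tabu_until = {v: 0 for v in adj}
    total = lambda s: sum(weight[v] for v in s)

    for it in range(iters):
        # Vertices that keep the current set a clique if added.
        candidates = [v for v in adj if v not in clique and clique <= adj[v]]
        allowed = [v for v in candidates
                   if tabu_until[v] <= it
                   or total(clique) + weight[v] > total(best)]
        if allowed:
            clique.add(max(allowed, key=weight.get))  # greedy: heaviest extension
        elif clique:
            v = rng.choice(sorted(clique))            # stuck: drop a random member
            clique.remove(v)
            tabu_until[v] = it + tenure
        if total(clique) > total(best):
            best = set(clique)
    return best

# Tiny example: a triangle {0, 1, 2} plus an edge {2, 3} with a heavy vertex 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
w = {0: 2, 1: 3, 2: 4, 3: 10}
print(tabu_mvwcp(adj, w, iters=200))                  # -> {2, 3}
```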
5

Plataformes avançades en el Núvol per a la reproductibilitat d'experiments computacionals / Advanced Cloud Platforms for the Reproducibility of Computational Experiments

Giménez Alventosa, Vicent 07 July 2022
Thesis by compendium of publications.

This thesis belongs to the field of computational science and, within it, focuses on the development of tools for running scientific computational experiments, whose impact keeps growing in all areas of science and engineering. Given the growing complexity of scientific calculations, large and complex computational infrastructures are required to carry out such experimentation, and using them efficiently demands deep knowledge of the available tools and techniques. Moreover, the popularity of Cloud computing environments offers a wide variety of possible configurations for these infrastructures, which further complicates their configuration. This increase in complexity has also exacerbated the well-known problem of reproducibility in science: missing documentation, such as the software versions used or the data required by an experiment, makes a non-negligible fraction of published computational results impossible for other researchers to reproduce, wasting resources invested in research. In response, several tools have been developed to facilitate the deployment, configuration and use of complex infrastructures, provide access to data, and so on, with the objective of letting researchers concentrate on the problem at hand. The works presented in this document share this objective: to develop tools that make the use of Cloud computing environments for scientific experimentation easy, efficient and reproducible.

The first work begins with an exhaustive study of the suitability of the AWS serverless environment for scientific computing. Its performance is measured both in isolation, in terms of CPU and network speed, and in combination, through a MapReduce framework built entirely on serverless services. The second work addresses a different problem, the reproducibility of computational experiments. To improve reproducibility, it presents an environment, based on Jupyter, that encapsulates the deployment and configuration of the computational infrastructure, access to the required data, and the documentation of the experiment. All of this information is recorded in the Jupyter notebook in which the experiment is executed, so other researchers can reproduce the results simply by sharing the corresponding notebook. In the third work, motivated by the measured and well-studied performance fluctuations of shared environments such as the Cloud, a load-balancing library designed specifically for such environments is developed; as shown, it can accurately handle and correct unpredictable fluctuations in computing performance. Finally, building on the previous work, a completely serverless platform is designed to split and balance jobs executed across several shared, heterogeneous and independent computing infrastructures. The motivation for this last work is the high computational cost of certain experiments, which forces researchers to use multiple infrastructures that, in general, belong to different organisations. The platform is shown to balance the workload accurately, to meet execution time constraints specified by the user, and to help the computational infrastructures scale as a function of the incoming workload, avoiding over- or under-provisioning and thus making efficient use of the available resources.

This study was supported by the program "Ayudas para la contratación de personal investigador en formación de carácter predoctoral, programa VALi+d" under grant number ACIF/2018/148 from the Conselleria d'Educació of the Generalitat Valenciana. The authors would also like to thank the Spanish "Ministerio de Economía, Industria y Competitividad" for the project "BigCLOE" with reference number TIN2016-79951-R.

Giménez Alventosa, V. (2022). Plataformes avançades en el Núvol per a la reproductibilitat d'experiments computacionals [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/184010
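To illustrate the load-balancing idea behind the third and fourth works, the sketch below splits a batch of tasks among several infrastructures in proportion to their recently measured throughput, with exponential smoothing to absorb performance fluctuations. The function names, the smoothing factor and the example figures are assumptions introduced for illustration, not the platform's actual implementation.

```python
def split_workload(n_tasks, throughput):
    """Split n_tasks among back-ends proportionally to measured throughput
    (tasks per second). Returns a dict back-end -> number of tasks."""
    total = sum(throughput.values())
    shares = {k: int(n_tasks * v / total) for k, v in throughput.items()}
    # Hand the tasks lost to rounding to the fastest back-ends.
    remainder = n_tasks - sum(shares.values())
    for k in sorted(throughput, key=throughput.get, reverse=True)[:remainder]:
        shares[k] += 1
    return shares

def update_throughput(old, measured, alpha=0.5):
    """Exponential smoothing of throughput measurements, so the split reacts
    to performance fluctuations without over-reacting to noise."""
    return {k: alpha * measured[k] + (1 - alpha) * old[k] for k in old}

# Example: three infrastructures; the second one slows down between rounds.
speed = {"site_a": 10.0, "site_b": 10.0, "site_c": 5.0}
print(split_workload(100, speed))   # {'site_a': 40, 'site_b': 40, 'site_c': 20}
speed = update_throughput(speed, {"site_a": 10.0, "site_b": 2.0, "site_c": 5.0})
print(split_workload(100, speed))   # site_b now receives fewer tasks
```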
