11 |
On algorithm selection, with an application to combinatorial search problems. Kotthoff, Lars. January 2012 (has links)
The Algorithm Selection Problem is to select the most appropriate way of solving a problem from a choice of different ways. Some of the most prominent and successful applications come from Artificial Intelligence, and in particular from combinatorial search problems. Machine Learning has established itself as the de facto way of tackling the Algorithm Selection Problem. Yet even after a decade of intensive research, there are no established guidelines as to what kind of Machine Learning to use and how. This dissertation presents an overview of the field of Algorithm Selection and associated research and highlights the fundamental questions left open and the problems facing practitioners. In a series of case studies, it underlines the difficulty of doing Algorithm Selection in practice and tackles issues related to this difficulty. The case studies apply Algorithm Selection techniques to new problem domains and show how to achieve significant performance improvements. Lazy learning in constraint solving and the implementation of the alldifferent constraint are the areas in which we improve on the performance of current state-of-the-art systems. The case studies furthermore provide empirical evidence for the effectiveness of using the misclassification penalty as an input to Machine Learning. Having established the difficulty, we present an effective technique for reducing it. Machine Learning ensembles reduce the background knowledge and experimentation required from the researcher while increasing the robustness of the system. Ensembles not only decrease the difficulty but can also increase the performance of Algorithm Selection systems; they are used to much the same ends in Machine Learning itself. We finally tackle one of the great remaining challenges of Algorithm Selection -- which Machine Learning technique to use in practice. Through a large-scale empirical evaluation on diverse data taken from Algorithm Selection applications in the literature, we establish recommendations for Machine Learning algorithms that are likely to perform well in Algorithm Selection for combinatorial search problems. The recommendations are based on strong empirical evidence and additional statistical simulations. The research presented in this dissertation significantly reduces the knowledge threshold for researchers who want to perform Algorithm Selection in practice. It makes major contributions to the field of Algorithm Selection by investigating fundamental issues that have so far been largely ignored by the research community.
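A minimal sketch of the Algorithm Selection setup the abstract describes, assuming a scikit-learn-style workflow: a model learns to map instance features to the best solver in a portfolio, and the misclassification penalty (the runtime lost by choosing badly) is passed as a per-instance weight. All data, feature counts and solver counts below are illustrative, not taken from the dissertation.

```python
# Hedged sketch of per-instance algorithm selection with the misclassification
# penalty used as a sample weight. All data and dimensions are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic portfolio data: runtimes of 3 solvers on 200 instances,
# plus 4 instance features (size, structural statistics, ...).
runtimes = rng.exponential(scale=10.0, size=(200, 3))
features = rng.normal(size=(200, 4))

best = runtimes.argmin(axis=1)                 # label: fastest solver per instance
# Misclassification penalty: runtime lost if the worst choice were made instead.
penalty = runtimes.max(axis=1) - runtimes.min(axis=1)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(features, best, sample_weight=penalty)   # weight costly mistakes more

# Evaluated on the training set only to keep the sketch short.
chosen = model.predict(features)
oracle = runtimes.min(axis=1).sum()            # virtual best solver
selected = runtimes[np.arange(len(chosen)), chosen].sum()
single_best = runtimes.sum(axis=0).min()       # best single solver overall
print(f"oracle {oracle:.1f}  selector {selected:.1f}  single best {single_best:.1f}")
```

A selector is useful whenever its total runtime falls between the single-best-solver and oracle baselines printed above.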
12 |
Uso de meta-aprendizado na recomendação de meta-heurísticas para o problema do caixeiro viajante / Using meta-learning on the recommendation of meta-heuristics for the traveling salesman problem. Kanda, Jorge Yoshio. 07 December 2012 (has links)
The traveling salesman problem (TSP) is a classical optimization problem that has several variations, applications and instances. Finding the optimal solution for many instances of this problem is usually very hard due to the high computational cost. Various optimization methods, known as meta-heuristics (MHs), are capable of generating good solutions for the TSP. Many algorithms based on different MHs have been proposed and investigated for different variations of the TSP. Different MHs can provide the best solution for different TSP instances, since there is no universal algorithm able to find the best solution for all instances. Thus, selecting a priori the MH that produces the best solution for a given instance is a hard task. The research developed in this thesis investigates the use of meta-learning approaches to select the most promising MHs for new TSP instances. These approaches induce predictive meta-models by training machine learning techniques on a set of meta-data. In our meta-data, each meta-example represents a TSP instance described by problem characteristics (meta-features) and by the performance of the MHs (target meta-feature) on that instance. The induced meta-models are used to predict the values of the target meta-feature for new TSP instances. Several experiments were performed during this research and important results were obtained.
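As an illustration of the meta-learning pipeline the abstract outlines (not code from the thesis), each meta-example could pair a few cheap TSP meta-features with a label naming the best-performing MH, and a meta-model trained on these pairs then recommends an MH for unseen instances. The meta-features, MH names and labels below are assumptions.

```python
# Hedged sketch: building a meta-dataset from TSP instances and training a
# meta-model that recommends a meta-heuristic (MH). Everything is illustrative;
# the thesis' actual meta-features, MHs and labelling procedure differ.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
MHS = ["tabu_search", "simulated_annealing", "genetic_algorithm"]

def meta_features(coords):
    """A few cheap descriptors of a Euclidean TSP instance (assumed choices)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    off_diag = d[~np.eye(len(coords), dtype=bool)]
    return [len(coords), off_diag.mean(), off_diag.std(), off_diag.max()]

# Synthetic meta-dataset: descriptors plus the (pretend) best MH per instance,
# which would normally come from running every MH on every training instance.
X, y = [], []
for _ in range(150):
    coords = rng.uniform(0, 100, size=(rng.integers(20, 200), 2))
    X.append(meta_features(coords))
    y.append(rng.choice(MHS))      # placeholder for the measured best MH

meta_model = KNeighborsClassifier(n_neighbors=5).fit(X, y)

new_instance = rng.uniform(0, 100, size=(80, 2))
print("recommended MH:", meta_model.predict([meta_features(new_instance)])[0])
```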
14 |
Knowledge-based 3D point clouds processing. Truong, Quoc Hung. 15 November 2013 (links) (PDF)
The modeling of real-world scenes through capturing 3D digital data has proven to be both useful and applicable in a variety of industrial and surveying applications. Entire scenes are generally captured by laser scanners and represented by large unorganized point clouds, possibly along with additional photogrammetric data. A typical challenge in processing such point clouds and data lies in detecting and classifying the objects that are present in the scene. In addition to the presence of noise, occlusions and missing data, such tasks are often hindered by the irregularity of the capturing conditions, both within the same dataset and from one dataset to another. Given the complexity of the underlying problems, recent processing approaches attempt to exploit semantic knowledge for identifying and classifying objects. In the present thesis, we propose a novel approach that makes use of intelligent knowledge management strategies for the processing of 3D point clouds as well as the identification and classification of objects in digitized scenes. Our approach extends the use of semantic knowledge to all stages of the processing, including the guidance of the individual data-driven processing algorithms. The complete solution consists of a multi-stage iterative concept based on three factors: the modeled knowledge, the package of algorithms, and a classification engine. The goal of the present work is to select and guide algorithms following an adaptive and intelligent strategy for detecting objects in point clouds. Experiments with two case studies demonstrate the applicability of our approach. The studies were carried out on scans of the waiting area of an airport and along the tracks of a railway. In both cases the goal was to detect and identify objects within a defined area. Results show that our approach succeeded in identifying the objects of interest while using various data types.
15 |
Knowledge-based 3D point clouds processing / Traitement 3D de nuages de points basé sur la connaissance. Truong, Quoc Hung. 15 November 2013 (has links)
The modeling of real-world scenes through capturing 3D digital data has proven to be both useful and applicable in a variety of industrial and surveying applications. Entire scenes are generally captured by laser scanners and represented by large unorganized point clouds, possibly along with additional photogrammetric data. A typical challenge in processing such point clouds and data lies in detecting and classifying the objects that are present in the scene. In addition to the presence of noise, occlusions and missing data, such tasks are often hindered by the irregularity of the capturing conditions, both within the same dataset and from one dataset to another. Given the complexity of the underlying problems, recent processing approaches attempt to exploit semantic knowledge for identifying and classifying objects. In the present thesis, we propose a novel approach that makes use of intelligent knowledge management strategies for the processing of 3D point clouds as well as the identification and classification of objects in digitized scenes. Our approach extends the use of semantic knowledge to all stages of the processing, including the guidance of the individual data-driven processing algorithms. The complete solution consists of a multi-stage iterative concept based on three factors: the modeled knowledge, the package of algorithms, and a classification engine. The goal of the present work is to select and guide algorithms following an adaptive and intelligent strategy for detecting objects in point clouds. Experiments with two case studies demonstrate the applicability of our approach. The studies were carried out on scans of the waiting area of an airport and along the tracks of a railway. In both cases the goal was to detect and identify objects within a defined area. Results show that our approach succeeded in identifying the objects of interest while using various data types.
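The multi-stage iterative concept described above (modeled knowledge, a package of algorithms, a classification engine) can be caricatured as a loop in which the knowledge base proposes a detection algorithm, the algorithm produces candidate regions, and the classification step either labels them or sends the remaining points back for another pass. The sketch below is a schematic reading of that idea, not the thesis' implementation; all names, rules and thresholds are assumptions.

```python
# Hedged, schematic sketch of a knowledge-guided processing loop for point
# clouds: modeled knowledge selects detection algorithms, a classification
# step labels the results, and unexplained regions trigger another iteration.
from dataclasses import dataclass

@dataclass
class Region:
    points: int
    height: float        # metres above floor level (assumed attribute)
    planar: bool

# "Package of algorithms": each returns candidate regions from the scene.
def detect_planes(scene):   return [r for r in scene if r.planar]
def detect_blobs(scene):    return [r for r in scene if not r.planar]

# "Modeled knowledge": which algorithm and rule are promising for which class.
KNOWLEDGE = {
    "floor":  {"algorithm": detect_planes, "rule": lambda r: r.height < 0.2},
    "seat":   {"algorithm": detect_blobs,  "rule": lambda r: 0.3 < r.height < 0.6},
    "person": {"algorithm": detect_blobs,  "rule": lambda r: r.height > 1.2},
}

def classify(scene, max_iterations=3):
    labels, remaining = {}, list(scene)
    for _ in range(max_iterations):
        if not remaining:
            break
        for name, entry in KNOWLEDGE.items():
            for region in entry["algorithm"](remaining):
                if entry["rule"](region) and region not in labels.values():
                    labels[name] = region
        remaining = [r for r in remaining if r not in labels.values()]
    return labels, remaining

scene = [Region(50000, 0.0, True), Region(1200, 0.45, False), Region(900, 1.7, False)]
labels, unexplained = classify(scene)
print({k: v.height for k, v in labels.items()}, "unexplained:", len(unexplained))
```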
16 |
From a Comprehensive Experimental Survey to a Cost-based Selection Strategy for Lightweight Integer Compression Algorithms. Damme, Patrick; Ungethüm, Annett; Hildebrandt, Juliana; Habich, Dirk; Lehner, Wolfgang. 11 January 2023 (has links)
Lightweight integer compression algorithms are frequently applied in in-memory database systems to tackle the growing gap between processor speed and main memory bandwidth. In recent years, the vectorization of basic techniques such as delta coding and null suppression has considerably enlarged the corpus of available algorithms. As a result, there is today a large number of algorithms to choose from, with different algorithms tailored to different data characteristics. However, a comparative evaluation of these algorithms under different data and hardware characteristics has never been sufficiently conducted in the literature. To close this gap, we conducted an exhaustive experimental survey by evaluating several state-of-the-art lightweight integer compression algorithms as well as cascades of basic techniques. We systematically investigated the influence of data as well as hardware properties on the performance and the compression rates. The evaluated algorithms are based on publicly available implementations as well as our own vectorized reimplementations. We summarize our experimental findings, leading to several new insights and to the conclusion that there is no single best algorithm. Moreover, in this article, we also introduce and evaluate a novel cost model for the selection of a suitable lightweight integer compression algorithm for a given dataset.
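A toy illustration of why a cost-based choice is needed (assumptions only, not the article's cost model): for sorted, slowly growing integers, delta coding shrinks the values that a subsequent null-suppression step has to store, so an estimated byte cost per value can be read off simple data characteristics before compressing anything.

```python
# Hedged sketch of a cost-based choice between two lightweight schemes:
# plain null suppression vs. delta coding followed by null suppression.
# The byte-cost estimate is a simplification, not the article's model.
import numpy as np

def bytes_per_value(values):
    """Null-suppression cost: smallest whole number of bytes per value."""
    width = np.maximum(np.abs(values.astype(np.int64)), 1)
    return np.ceil(np.log2(width + 1) / 8).mean()

def estimate(data):
    plain = bytes_per_value(data)
    delta = bytes_per_value(np.diff(data, prepend=data[:1]))
    return {"null_suppression": plain, "delta+null_suppression": delta}

# Sorted identifiers with small gaps, a common in-memory column pattern.
sorted_ids = np.cumsum(np.random.default_rng(0).integers(1, 5, size=100_000))
costs = estimate(sorted_ids)
print(costs, "->", min(costs, key=costs.get))
```

On unsorted or highly skewed columns the same estimate can favour the plain scheme, which is exactly the situation a selection strategy has to detect.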
17 |
Validation croisée et pénalisation pour l'estimation de densité / Cross-validation and penalization for density estimation. Magalhães, Nelo. 26 May 2015 (has links)
This thesis takes place in the density estimation setting, considered from a nonparametric and nonasymptotic point of view. It concerns the statistical algorithm selection problem, which generalizes, among others, the problems of model selection and bandwidth selection. We study classical procedures, based on penalization or on resampling (in particular V-fold cross-validation), which evaluate an algorithm by estimating its risk. Thanks to concentration inequalities, we propose an optimally calibrated penalty for selecting a linear estimator and prove oracle inequalities and adaptivity properties for these procedures. Moreover, a new resampling procedure, based on the comparison of estimators by means of robust tests, is introduced as an alternative to procedures relying on the unbiased risk estimation principle. A second goal of this work is to compare all these procedures from a theoretical point of view and to analyse the role of the parameter V in V-fold penalties. We validate the theoretical results with simulation studies.
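As a concrete instance of the generic procedure studied (a sketch using the standard least-squares risk estimate, not the thesis' calibrated penalty), V-fold cross-validation can select a kernel bandwidth by estimating the quadratic risk of each candidate estimator on held-out folds.

```python
# Hedged sketch: V-fold cross-validation for choosing a Gaussian kernel
# bandwidth in density estimation, using the usual least-squares criterion
#   integral(fhat^2) - 2 * mean(fhat at held-out points).
import numpy as np

def kde(train, grid, h):
    """Gaussian kernel density estimate evaluated at the points in `grid`."""
    z = (grid[:, None] - train[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(train) * h * np.sqrt(2 * np.pi))

def vfold_risk(x, h, V=5):
    folds = np.array_split(np.random.default_rng(0).permutation(x), V)
    grid = np.linspace(x.min() - 3.0, x.max() + 3.0, 512)
    step = grid[1] - grid[0]
    risks = []
    for v in range(V):
        test = folds[v]
        train = np.concatenate([folds[u] for u in range(V) if u != v])
        f_hat = kde(train, grid, h)
        quad = (f_hat**2).sum() * step            # ~ integral of fhat squared
        risks.append(quad - 2.0 * kde(train, test, h).mean())
    return float(np.mean(risks))

x = np.random.default_rng(1).normal(size=500)
bandwidths = np.linspace(0.05, 1.0, 20)
best_h = min(bandwidths, key=lambda h: vfold_risk(x, h))
print(f"selected bandwidth: {best_h:.2f}")
```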
18 |
General-purpose optimization through information maximization. Lockett, Alan Justin. 05 July 2012 (has links)
The primary goal of artificial intelligence research is to develop a machine capable of learning to solve disparate real-world tasks autonomously, without relying on specialized problem-specific inputs. This dissertation suggests that such machines are realistic: if the No Free Lunch theorems applied to all real-world problems, then the world would be utterly unpredictable; since it is not, the theorems cannot rule out such machines. In response, the dissertation proposes the information-maximization principle, which claims that the optimal optimization methods make the best use of the information available to them. This principle results in a new algorithm, evolutionary annealing, which is shown to perform well especially on challenging problems with irregular structure. / text
19 |
An empirically derived system for high-speed rendering. Rautenbach, Helperus Ritzema. 25 September 2012 (links)
This thesis focuses on 3D computer graphics and the continuous maximisation of rendering quality and performance. Its main focus is the critical analysis of numerous real-time rendering algorithms and the construction of an empirically derived system for the high-speed rendering of shader-based special effects, lighting effects, shadows, reflection and refraction, post-processing effects and the processing of physics. This critical analysis allows us to assess the relationship between rendering quality and performance. It also allows for the isolation of key algorithmic weaknesses and possible bottleneck areas. Using this performance data, gathered during the analysis of various rendering algorithms, we are able to define a selection engine to control the real-time cycling of rendering algorithms and special effects groupings based on environmental conditions. Furthermore, as a proof of concept, to balance Central Processing Unit (CPU) and Graphics Processing Unit (GPU) load for an increased speed of execution, our selection system unifies the GPU and CPU as a single computational unit for physics processing and environmental mapping. This parallel computing system enables the CPU to process cube mapping computations while the GPU can be tasked with calculations traditionally handled solely by the CPU. All analysed and benchmarked algorithms were implemented as part of a modular rendering engine. This engine offers conventional first-person perspective input control, mesh loading and support for shader model 4.0 shaders (via Microsoft's High Level Shader Language) for effects such as high dynamic range rendering (HDR), dynamic ambient lighting, volumetric fog, specular reflections, reflective and refractive water, realistic physics, particle effects, etc. The test engine also supports the dynamic placement, movement and elimination of light sources, meshes and spatial geometry. Critical analysis was performed via scripted camera movement and object and light source additions – done not only to ensure consistent testing, but also to ease future validation and replication of results. This provided us with a scalable interactive testing environment as well as a complete solution for the rendering of computationally intensive 3D environments. As a full-fledged game engine, our rendering engine is amenable to first- and third-person shooter games, role-playing games and 3D immersive environments. The evaluation criteria (identified to assess the relationship between rendering quality and performance), as mentioned, allow us to effectively cycle algorithms based on empirical results and to distribute specific processing (cube mapping and physics processing) between the CPU and GPU, a unification that ensures the following: nearby effects are always of high quality (where computational resources are available), distant effects are, under certain conditions, rendered at a lower quality, and the frames-per-second rendering performance is always maximised. The implication of our work is clear: unifying the CPU and GPU and dynamically cycling through the most appropriate algorithms based on ever-changing environmental conditions allow for maximised rendering quality and performance and show that it is possible to render high-quality visual effects with realism, without overburdening scarce computational resources.

Immersive rendering approaches used in conjunction with AI subsystems, game networking and logic, physics processing and other special effects (such as post-processing shader effects) are immensely processor intensive and can only be successfully implemented on high-end hardware. Only by cycling and distributing algorithms based on environmental conditions and through the exploitation of algorithmic strengths can high-quality real-time special effects and highly accurate calculations become as common as texture mapping. Furthermore, in a gaming context, players often spend an inordinate amount of time fine-tuning their graphics settings to achieve the perfect balance between rendering quality and frames-per-second performance. Using this system, however, ensures that the performance vs. quality trade-off is always optimised, not only for the game as a whole but also for the current scene being rendered – some scenes might, for example, require more computational power than others, resulting in noticeable slowdowns, slowdowns not experienced thanks to our system's dynamic cycling of rendering algorithms and its proof-of-concept unification of the CPU and GPU. / Thesis (PhD)--University of Pretoria, 2012. / Computer Science / unrestricted
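A much-simplified sketch of the kind of selection logic the abstract describes, written here in Python purely for illustration: each frame, the engine compares the measured frame time against its budget and the distance of each effect from the camera, then cycles effects between quality tiers so that nearby effects stay at high quality while the frame-rate target is protected. The tier names, costs and thresholds are invented, not taken from the thesis.

```python
# Hedged sketch of a per-frame quality-cycling policy: keep near effects at
# high quality and drop distant ones to cheaper variants whenever the measured
# frame time exceeds the budget. Costs and thresholds are illustrative only.
FRAME_BUDGET_MS = 16.7                      # roughly 60 frames per second
TIERS = ["high", "medium", "low"]           # assumed per-effect quality levels
COST_MS = {"high": 3.0, "medium": 1.5, "low": 0.6}

def cycle_quality(effects, last_frame_ms):
    """effects: list of dicts with 'name', 'distance' and current 'tier'."""
    over_budget = last_frame_ms > FRAME_BUDGET_MS
    # Consider the farthest effects first when load must be shed.
    for fx in sorted(effects, key=lambda e: e["distance"], reverse=True):
        idx = TIERS.index(fx["tier"])
        if over_budget and fx["distance"] > 30.0 and idx < len(TIERS) - 1:
            fx["tier"] = TIERS[idx + 1]     # degrade a distant effect one tier
        elif not over_budget and fx["distance"] <= 30.0 and idx > 0:
            fx["tier"] = TIERS[idx - 1]     # nearby effects win quality back
    return sum(COST_MS[fx["tier"]] for fx in effects)

effects = [
    {"name": "water_refraction", "distance": 12.0, "tier": "high"},
    {"name": "volumetric_fog",   "distance": 55.0, "tier": "high"},
    {"name": "hdr_bloom",        "distance": 80.0, "tier": "high"},
]
print("estimated effect cost:", cycle_quality(effects, last_frame_ms=21.0), "ms")
print({fx["name"]: fx["tier"] for fx in effects})
```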
20 |
Algoritmus automatického výběru vhodného typu zařízení z databáze výměníků tepla / Algorithm for automatic selection of suitable equipment type from heat exchanger database. Havlů, Michal. January 2009 (links)
The thesis is devoted to the development of a database algorithm for the selection (or narrowing of the selection) of a suitable type of heat exchanger for a given industrial application. The database forms part of a multipurpose calculation system containing three individual modules: (i) a module for the selection (or narrowing of the selection) of the heat exchanger type for a given application, (ii) a module for the thermal-hydraulic design or rating of a heat exchanger, and (iii) a module for the calculation of investment and operating costs. The thesis describes the details of the method for selecting a suitable heat exchanger type for a given application and presents and discusses the individual criteria of the selection process, which influence the values in the priority tables for the given equipment. These tables are an unavoidable part of the selection algorithm. Details of the software implementation of the selection algorithm are also presented in the thesis. A description of the behaviour of the individual heat exchanger types forms an important part of the thesis. The practical application of the developed selection algorithm is demonstrated on several industrial examples.
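The selection module described above appears to rank exchanger types by priority values assigned per criterion; the sketch below shows one plausible reading of such a table-driven narrowing step. All exchanger types, criteria and priority values are invented for illustration and are not taken from the thesis.

```python
# Hedged sketch of priority-table-based narrowing of heat exchanger types.
# Priorities (0 = unsuitable .. 3 = well suited) per criterion are invented.
PRIORITY = {
    "shell_and_tube": {"high_pressure": 3, "fouling": 2, "compactness": 1},
    "plate":          {"high_pressure": 1, "fouling": 1, "compactness": 3},
    "air_cooled":     {"high_pressure": 2, "fouling": 3, "compactness": 1},
}

def narrow_selection(requirements):
    """Score each exchanger type against the required criteria; drop unsuitable ones."""
    scored = []
    for ex_type, table in PRIORITY.items():
        prios = [table[c] for c in requirements]
        if 0 in prios:                 # a criterion rated unsuitable excludes the type
            continue
        scored.append((sum(prios), ex_type))
    return [t for _, t in sorted(scored, reverse=True)]

# Example: an application where operating pressure and fouling tendency matter.
print(narrow_selection(["high_pressure", "fouling"]))
# -> ['shell_and_tube', 'air_cooled', 'plate'] with the invented priorities above
```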