1 |
Metareasoning about propagators for constraint satisfaction
Thompson, Craig Daniel Stewart, 11 July 2011 (has links)
Given the breadth of constraint satisfaction problems (CSPs) and the wide variety of CSP solvers, it is often very difficult to determine a priori which solving method is best suited to a problem. This work explores the use of machine learning to predict which solving method will be most effective for a given problem. We use four different problem sets to determine the CSP attributes that can be used to decide which solving method should be applied. After choosing an appropriate set of attributes, we determine how well J48 decision trees can predict which solving method to apply. Furthermore, we take a cost-sensitive approach that emphasizes problem instances where the difference in runtime between algorithms is large. We also attempt to use information gained on one class of problems to inform decisions about a second class of problems. Finally, we show that the additional cost of deciding which method to apply is outweighed by the time savings compared to applying the same solving method to all problem instances.
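A minimal sketch of the cost-sensitive selection idea described above, under assumptions that are not from the thesis: Weka's j48 (C4.5) is replaced by scikit-learn's CART decision tree, and the instance attributes and solver runtimes are synthetic. The per-instance weight is the absolute runtime gap, so instances where the wrong choice is most costly carry the most weight during training.

```python
# Illustrative only: the thesis uses Weka's j48 (C4.5); scikit-learn's CART
# tree is substituted here, and the CSP attributes and runtimes are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical CSP instance attributes (e.g., density, tightness).
X = rng.random((200, 4))

# Synthetic runtimes (seconds) of two solving methods on each instance.
runtime_a = rng.exponential(5.0, size=200)
runtime_b = rng.exponential(5.0, size=200)

# Label = index of the faster method; weight = runtime gap, so instances where
# the wrong choice is most costly dominate the tree's split criterion.
y = (runtime_b < runtime_a).astype(int)
gap = np.abs(runtime_a - runtime_b)

selector = DecisionTreeClassifier(max_depth=4, random_state=0)
selector.fit(X, y, sample_weight=gap)

new_instance = rng.random((1, 4))
print("choose method", "B" if selector.predict(new_instance)[0] == 1 else "A")
```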
|
2 |
Dynamic algorithm selection for machine learning on time series
Dahlberg, Love, January 2019 (has links)
We present software that can dynamically determine which machine learning algorithm is best suited to a given situation, based on predefined traits. The software uses ideal conditions to exemplify how such a solution could function. It is designed to train a selection algorithm that predicts the behavior of the specified test algorithms and derives which of them is best. The software is then used to summarize and evaluate a collection of selection-algorithm predictions in order to determine which test algorithm was best over the entire period. The goal of this project is to provide a prediction-evaluation software solution that can lead toward a realistic implementation.
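As a rough, self-contained illustration of per-window selection between competing time-series algorithms (the forecasters, window length, and error measure below are invented for the example, not taken from the thesis):

```python
# All names and settings here are invented for the example.
import numpy as np

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 20, 600)) + 0.1 * rng.standard_normal(600)

def naive_forecast(window):
    # Predict each point as the previous observation.
    return window[:-1]

def moving_average_forecast(window, k=5):
    # Predict each point as the mean of up to k preceding observations.
    return np.array([window[max(0, i - k):i].mean() for i in range(1, len(window))])

candidates = {"naive": naive_forecast, "moving_average": moving_average_forecast}

def best_algorithm(window):
    # The "best" algorithm for a window is the one with the lowest MAE on it.
    target = window[1:]
    errors = {name: np.mean(np.abs(f(window) - target)) for name, f in candidates.items()}
    return min(errors, key=errors.get)

# Summarize, over consecutive windows, how often each candidate was best --
# the kind of aggregate evaluation the described software performs at scale.
wins = {}
for start in range(0, len(series) - 50, 50):
    winner = best_algorithm(series[start:start + 50])
    wins[winner] = wins.get(winner, 0) + 1
print(wins)
```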
|
3 |
Meta-aprendizado aplicado a fluxos contínuos de dados / Metalearning for algorithm selection in data streams
Rossi, Andre Luís Debiaso, 19 December 2013 (has links)
Machine learning algorithms are widely employed to induce models for knowledge discovery in databases. Since most of these algorithms assume that the underlying distribution of the data is stationary, a model is induced only once and applied indefinitely to predict the label of new data. However, many real applications, such as transportation management systems and monitoring of sensor networks, generate data streams that can change over time. Consequently, the effectiveness of the algorithm chosen for these problems may deteriorate, or other algorithms may become more suitable for the new data characteristics. This thesis proposes a metalearning-based method for managing the learning process in dynamic data stream environments, aiming to improve the overall predictive performance of the learning system. This method, named MetaStream, regularly selects the most promising algorithm for the arriving data according to its characteristics and past experience. The proposed method employs machine learning techniques to generate metaknowledge, which relates the characteristics extracted from the data at different points in time to the predictive performance of the algorithms. The measures applied to extract relevant information include those commonly used in conventional metalearning over different data sets, adapted to the particularities of the stream setting, and measures from related areas that take into account, for example, the arrival order of the data. We evaluate MetaStream on three real data stream problems and six different learning algorithms. The results show the applicability of MetaStream and its ability to improve the overall predictive performance of the learning system compared to a baseline method for the majority of the problems investigated. It must be noted that an ensemble of models proved superior to MetaStream for two of the data sets. We therefore analyzed the main factors that may have influenced these results and indicate possible improvements to the proposed method.
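A rough sketch of the MetaStream idea under simplifying assumptions (the window sizes, meta-features, and base learners below are illustrative, not those used in the thesis): characterize each historical window, record which base regressor performed better on it, train a meta-classifier on those pairs, and use it to pick the regressor for a newly arriving window before running any of them.

```python
# Window sizes, meta-features and base learners below are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)

def meta_features(X, y):
    # Simple window characterization: target dispersion plus the absolute
    # linear correlation of each feature with the target.
    corr = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return np.array([y.std()] + corr)

base_learners = [LinearRegression(), KNeighborsRegressor(n_neighbors=5)]

meta_X, meta_y = [], []
for i in range(60):                          # 60 historical windows
    X = rng.random((100, 3))
    if i % 2 == 0:                           # alternate regimes so the best learner changes
        y = X @ np.array([1.0, -2.0, 0.5]) + 0.3 * rng.standard_normal(100)
    else:
        y = np.sin(6 * X[:, 0]) + 0.1 * rng.standard_normal(100)
    Xtr, ytr, Xte, yte = X[:70], y[:70], X[70:], y[70:]
    errors = [mean_absolute_error(yte, m.fit(Xtr, ytr).predict(Xte)) for m in base_learners]
    meta_X.append(meta_features(Xtr, ytr))
    meta_y.append(int(np.argmin(errors)))    # label = index of the better base learner

meta_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(meta_X, meta_y)

# For a newly arriving window, predict the most promising learner before running any of them.
X_new = rng.random((70, 3))
y_new = np.sin(6 * X_new[:, 0]) + 0.1 * rng.standard_normal(70)
chosen = meta_model.predict([meta_features(X_new, y_new)])[0]
print("selected base learner:", type(base_learners[chosen]).__name__)
```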
|
4 |
A probabilistic architecture for algorithm portfolios
Silverthorn, Bryan Connor, 05 April 2013 (has links)
Heuristic algorithms for logical reasoning are increasingly successful on computationally difficult problems such as satisfiability, and these solvers enable applications from circuit verification to software synthesis. Whether a problem instance can be solved, however, often depends in practice on whether the correct solver was selected and its parameters appropriately set. Algorithm portfolios leverage past performance data to automatically select solvers likely to perform well on a given instance. Existing portfolio methods typically select only a single solver for each instance. This dissertation develops and evaluates a more general portfolio method, one that computes complete solver execution schedules, including repeated runs of nondeterministic algorithms, by explicitly incorporating probabilistic reasoning into its operation. This modular architecture for probabilistic portfolios (MAPP) includes novel solutions to three issues central to portfolio operation: first, it estimates solver performance distributions from limited data by constructing a generative model; second, it integrates domain-specific information by predicting instances on which solvers exhibit similar performance; and, third, it computes execution schedules using an efficient and effective dynamic programming approximation. In a series of empirical comparisons designed to replicate past solver competitions, MAPP outperforms the most prominent alternative portfolio methods. Its success validates a principled approach to portfolio operation, offers a tool for tackling difficult problems, and opens a path forward in algorithm portfolio design.
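A much-simplified sketch of schedule construction in the spirit of an algorithm portfolio (the success probabilities below are made up rather than estimated by MAPP's generative model, and independence of repeated runs is assumed): a small dynamic program selects a sequence of (solver, time-slice) runs that maximizes the probability of solving an instance within a total budget.

```python
# Simplified sketch: the solver names and per-slice success probabilities are
# invented; MAPP instead learns runtime distributions from past performance.
success = {
    "sat_solver_A": {1: 0.20, 2: 0.35, 4: 0.50},
    "sat_solver_B": {1: 0.05, 2: 0.40, 4: 0.45},
    "local_search": {1: 0.30, 2: 0.32, 4: 0.33},
}

def best_schedule(budget):
    # value[b] = best achievable success probability with b time units left.
    value = [0.0] * (budget + 1)
    choice = [None] * (budget + 1)
    for b in range(1, budget + 1):
        for solver, table in success.items():
            for t, p in table.items():
                if t > b:
                    continue
                # Run `solver` for t units first; if it fails (prob 1 - p),
                # continue with the best schedule for the remaining budget.
                v = p + (1.0 - p) * value[b - t]
                if v > value[b]:
                    value[b], choice[b] = v, (solver, t)
    schedule, b = [], budget
    while b > 0 and choice[b] is not None:
        solver, t = choice[b]
        schedule.append((solver, t))
        b -= t
    return schedule, value[budget]

print(best_schedule(8))
```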
|
5 |
Relationships Among Learning Algorithms and Tasks
Lee, Jun won, 27 January 2011 (has links) (PDF)
Metalearning aims to obtain knowledge of the relationship between the mechanism of learning and the concrete contexts in which that mechanism is applicable. As new mechanisms of learning are continually added to the pool of learning algorithms, the chance of encountering behavioral similarity among algorithms increases. Understanding the relationships among algorithms, and the interactions between algorithms and tasks, helps to narrow down the space of algorithms to search for a given learning task. In addition, this process helps to disclose factors contributing to the similar behavior of different algorithms. We first study general characteristics of learning tasks and their correlation with the performance of algorithms, isolating two metafeatures whose values are fairly distinguishable between easy and hard tasks. We then devise a new metafeature that measures the difficulty of a learning task independently of the performance of learning algorithms on it. Building on these preliminary results, we then investigate more formally how we might measure the behavior of algorithms at a finer-grained level than a simple dichotomy between easy and hard tasks. We prove that, among many possible candidates, the Classifier Output Difference (COD) measure is the only one possessing the properties of a metric necessary for further use in our proposed behavior-based clustering of learning algorithms. Finally, we cluster 21 algorithms based on COD and show the value of the clustering in 1) highlighting interesting behavioral similarities among algorithms, which lead us to a thorough comparison of Naive Bayes and Radial Basis Function Network learning, and 2) designing more accurate algorithm selection models by predicting clusters rather than individual algorithms.
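An illustrative computation of the Classifier Output Difference, taking COD to be the fraction of test instances on which two classifiers disagree; the handful of scikit-learn classifiers and the single dataset below are stand-ins for the 21 algorithms and many tasks studied in the dissertation.

```python
# Small-scale illustration: a few scikit-learn classifiers on one dataset,
# standing in for the 21 algorithms and many tasks studied in the dissertation.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = {
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(),
    "tree": DecisionTreeClassifier(random_state=0),
    "logreg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}
preds = {name: m.fit(Xtr, ytr).predict(Xte) for name, m in models.items()}

names = list(models)
cod = np.zeros((len(names), len(names)))
for i, a in enumerate(names):
    for j, b in enumerate(names):
        # COD = fraction of test instances on which the two classifiers disagree.
        cod[i, j] = np.mean(preds[a] != preds[b])

# Cluster algorithms by behavior: small COD means similar predictions.
labels = fcluster(linkage(squareform(cod, checks=False), method="average"),
                  t=2, criterion="maxclust")
print(dict(zip(names, labels)))
```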
|
6 |
A Framework for Applying Reinforcement Learning to Deadlock Handling in Intralogistics
Müller, Marcel, Reyes-Rubiano, Lorena Silvana, Reggelin, Tobias, Zadek, Hartmut, 14 June 2023
Intralogistics systems, while complex, are crucial for a range of industries. One of their challenges is deadlocks, which can disrupt operations and decrease efficiency. This paper presents a four-stage framework for applying reinforcement learning algorithms to manage deadlocks in such systems. The four stages are Problem Formulation, Model Selection, Algorithm Selection, and System Deployment: we identify the problem, select an appropriate model to represent the system, choose a suitable reinforcement learning algorithm, and finally deploy the solution. Our approach provides a structured method for tackling deadlocks, improving system resilience and responsiveness. This comprehensive guide can serve researchers and practitioners alike, offering a new avenue for enhancing intralogistics performance. Future research can explore the framework's effectiveness and applicability across different systems.
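A toy sketch of what the Algorithm Selection stage might produce, with an environment that is invented rather than taken from the paper: tabular Q-learning on a heavily simplified setting where the state is a congestion level and the agent chooses between letting a vehicle proceed or rerouting it to avoid a deadlock.

```python
# The environment, rewards and parameters are invented for illustration.
import random

random.seed(0)
N_STATES = 5            # congestion levels 0 (free) .. 4 (deadlock-prone)
ACTIONS = [0, 1]        # 0 = let the vehicle proceed, 1 = reroute it

def step(state, action):
    # Proceeding under high congestion risks a deadlock (large penalty);
    # rerouting always costs a small detour penalty.
    if action == 1:
        return max(0, state - 1), -1.0
    if state >= 3 and random.random() < 0.5:
        return N_STATES - 1, -20.0          # deadlock occurred
    return min(N_STATES - 1, max(0, state + random.choice([-1, 0, 1]))), -0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(2000):
    state = random.randrange(N_STATES)
    for _ in range(50):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        # Standard Q-learning update.
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print({s: ("reroute" if Q[s][1] > Q[s][0] else "proceed") for s in range(N_STATES)})
```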
|
7 |
Runtime Algorithm Selection For Grid Environments: A Component Based Framework
Bora, Prachi, 22 July 2003 (has links)
Grid environments are inherently heterogeneous. If the computational power provided by collaborations on the Grid is to be fully harnessed, applications must be able to adapt automatically to changes in the execution environment. The application writer should not be burdened with choosing the right algorithm and implementation every time the resources on which the application runs change.
Considerable research has been done on adapting applications to changing conditions, but existing systems do not provide a unified interface that permits algorithm selection at runtime. The goal of this research is to design and develop a unified interface to applications in order to permit seamless access to different algorithms providing similar functionality. Long-running, computationally intensive scientific applications can produce huge amounts of performance data. Often, this data is discarded once the application's execution is complete, yet it can be used to extract information about algorithms and their performance, and that information can in turn be used to choose algorithms intelligently.
The research described in this thesis aims at designing and developing a component-based unified interface for runtime algorithm selection in grid environments. This unified interface is necessary so that the application code does not change when a new algorithm is used to solve the problem. The overhead associated with making the algorithm choice transparent to the application is evaluated. We use a data mining approach to algorithm selection and evaluate its potential effectiveness for scientific applications.
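A minimal sketch of the unified-interface idea, with invented class and method names rather than the component framework from the thesis: the application always calls solve() on the selector, and the concrete algorithm is chosen at runtime from previously recorded performance data for the current resource profile.

```python
# Class names, the performance-database layout and the resource profiles are
# hypothetical; they only illustrate keeping application code unchanged.
from abc import ABC, abstractmethod

class LinearSolver(ABC):
    @abstractmethod
    def solve(self, problem):
        ...

class JacobiSolver(LinearSolver):
    def solve(self, problem):
        return f"Jacobi solution of {problem}"

class ConjugateGradientSolver(LinearSolver):
    def solve(self, problem):
        return f"CG solution of {problem}"

class RuntimeSelector(LinearSolver):
    """Presents the same interface as any solver, so application code never changes."""
    def __init__(self, solvers, performance_db):
        self.solvers = solvers
        self.performance_db = performance_db   # resource_profile -> {solver_name: avg_runtime}

    def solve(self, problem, resource_profile="cluster_A"):
        history = self.performance_db.get(resource_profile, {})
        # Pick the historically fastest solver for this environment; fall back
        # to the first registered solver when no data exists.
        name = min(history, key=history.get) if history else next(iter(self.solvers))
        return self.solvers[name].solve(problem)

solvers = {"jacobi": JacobiSolver(), "cg": ConjugateGradientSolver()}
perf = {"cluster_A": {"jacobi": 12.4, "cg": 3.1}}
print(RuntimeSelector(solvers, perf).solve("Ax=b"))
```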
|
8 |
Seleção de algoritmos para a tarefa de agrupamento de dados: uma abordagem via meta-aprendizagem / Algorithm selection for the data clustering task: a metalearning approach
Ferrari, Daniel Gomes, 27 March 2014
Data clustering is an important data mining task that aims to segment a database into groups of objects based on their similarity or dissimilarity. Due to the unsupervised nature of clustering, the search for a good-quality solution can become a complex process. There is currently a wide range of clustering algorithms, and selecting the most suitable one for a given problem can be a slow and costly process. In 1976, Rice formulated the algorithm selection problem, postulating that a well-performing algorithm can be chosen according to the structural characteristics of the problem to which it will be applied. Meta-learning brings the concept of learning about learning: the meta-knowledge obtained from the learning process of algorithms can be used to improve that process's performance. Meta-learning has a major intersection with data mining in classification problems, where it is used to develop algorithm selection systems. This thesis proposes an approach to the algorithm selection problem using meta-learning techniques for data clustering. The characterization of 84 problems is performed both by the classical approach, based on the problems themselves, and by a new proposal based on the similarity among the objects. Ten internal indices are used to provide different performance assessments of seven algorithms, and the combination of these indices determines the ranking of the algorithms. Several analyses are performed to assess how well the obtained meta-knowledge supports the mapping between problem characteristics and algorithm performance. The results show that the new characterization approach and the method for combining the indices provide a good-quality algorithm selection mechanism for data clustering problems.
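A small illustration of how performance labels for such a meta-learning setup can be produced, using common scikit-learn algorithms and internal indices rather than the seven algorithms and ten indices of the thesis: each clustering algorithm is scored by several internal indices, the per-index ranks are averaged, and the aggregate ranking indicates which algorithm "won" on this problem.

```python
# Algorithms, indices and data are illustrative substitutes, not the thesis setup.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, AgglomerativeClustering, Birch
from sklearn.metrics import silhouette_score, calinski_harabasz_score, davies_bouldin_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

algorithms = {
    "kmeans": KMeans(n_clusters=4, n_init=10, random_state=0),
    "agglomerative": AgglomerativeClustering(n_clusters=4),
    "birch": Birch(n_clusters=4),
}
# (internal index, True if higher values are better)
indices = [(silhouette_score, True), (calinski_harabasz_score, True), (davies_bouldin_score, False)]

labels = {name: algo.fit_predict(X) for name, algo in algorithms.items()}
scores = {name: [idx(X, lab) for idx, _ in indices] for name, lab in labels.items()}

# Rank the algorithms per index (rank 1 = best), then average the ranks.
names = list(algorithms)
avg_rank = {name: 0.0 for name in names}
for i, (_, higher_better) in enumerate(indices):
    ordered = sorted(names, key=lambda n: scores[n][i], reverse=higher_better)
    for rank, name in enumerate(ordered, start=1):
        avg_rank[name] += rank / len(indices)

print(sorted(avg_rank.items(), key=lambda kv: kv[1]))   # best (lowest average rank) first
```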
|