121

Integrated Software Pipelining

Eriksson, Mattias January 2009 (has links)
In this thesis we address the problem of integrated software pipelining for clustered VLIW architectures. The phases that are integrated and solved as one combined problem are: cluster assignment, instruction selection, scheduling, register allocation and spilling.

As a first step we describe two methods for integrated code generation of basic blocks. The first method is optimal and based on integer linear programming. The second method is a heuristic based on genetic algorithms.

We then extend the integer linear programming model to modulo scheduling. To the best of our knowledge this is the first time anybody has optimally solved the modulo scheduling problem for clustered architectures with instruction selection and cluster assignment integrated.

We also show that optimal spilling is closely related to optimal register allocation when the register files are clustered. In fact, optimal spilling is as simple as adding an additional virtual register file representing the memory, with transfer instructions to and from this register file corresponding to stores and loads.

Our algorithm for modulo scheduling iteratively considers schedules with an increasing number of schedule slots. A problem with such an iterative method is that if the initiation interval is not equal to the lower bound, there is no way to determine whether the found solution is optimal or not. We have proven that for a class of architectures that we call transfer free, we can set an upper bound on the schedule length; i.e., we can prove when a found modulo schedule with an initiation interval larger than the lower bound is optimal.

Experiments have been conducted to show the usefulness and limitations of our optimal methods. For the basic block case we compare the optimal method to the heuristic based on genetic algorithms.

This work has been supported by the Swedish national graduate school in computer science (CUGS) and Vetenskapsrådet (VR).
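The memory-as-a-register-file view of spilling lends itself to a very small data model. The following is a minimal Python sketch of the idea, under assumed register file names, sizes and transfer latencies that are illustrative only, not the thesis's machine model: the clusters' register files are joined by a virtual MEM file, and spill code is nothing more than transfer instructions to and from MEM.

```python
from dataclasses import dataclass

# Register files of a toy clustered machine; "MEM" is the extra virtual
# register file that stands for memory (illustrative sizes and latencies).
REGISTER_FILES = {"A": 16, "B": 16, "MEM": 10**9}  # MEM is effectively unbounded

@dataclass(frozen=True)
class Transfer:
    src: str
    dst: str
    latency: int

# Transfers to/from MEM correspond to stores and loads, so spilling needs no
# special machinery: it is just another transfer between register files.
TRANSFERS = [
    Transfer("A", "B", 1), Transfer("B", "A", 1),      # cross-cluster copies
    Transfer("A", "MEM", 3), Transfer("MEM", "A", 3),  # store / load (spill code)
    Transfer("B", "MEM", 3), Transfer("MEM", "B", 3),
]

def transfer_cost(path):
    """Total latency of moving a value along a chain of register files."""
    cost = 0
    for src, dst in zip(path, path[1:]):
        t = next(t for t in TRANSFERS if t.src == src and t.dst == dst)
        cost += t.latency
    return cost

print(transfer_cost(["A", "MEM", "B"]))  # spill from A, reload into B -> 6
```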
122

Influences Of Interplanetary Magnetic Field On The Variability Of Aerospace Media

Yapici, Tolga 01 September 2007 (has links) (PDF)
The Interplanetary Magnetic Field (IMF) has a controlling effect on the magnetosphere and ionosphere. The objective of this work is to investigate the probable effects of the IMF on the ionospheric and geomagnetic response. To fulfill this objective, the concept of an event was defined based on the polarity reversals and rate of change of the interplanetary magnetic field components Bz and By. The Superposed Epoch Method (SPE) was employed with three event definitions, based on IMF Bz southward turnings ranging from 6 to 11 nT, in order to quantify the effects of IMF By and Bz. For the first event definition only IMF Bz turnings were taken into account, while for the remaining two, positive and negative polarities of IMF By were added. Results showed that an increase in the magnitude of the IMF Bz turning increased the drop of the F-layer critical frequency, f0F2; the drop was almost linear in the magnitude of the polarity reversal. Reversals with a positive IMF By resulted in the continuation of geomagnetic activity for more than 4 days; that is to say, the energy that penetrated as a consequence of a reversal with positive By polarity was stored in the outer magnetosphere, whereas with a negative IMF By the energy was consumed on a short time scale. At the second step of the work, although conclusions about geomagnetic activity could be drawn, characterization of f0F2 under constant IMF By polarity could not be accomplished, as a consequence of data gaps in f0F2 in addition to the low number of events. Thus, a modeling attempt to characterize the response to polarity reversals of the IMF components was carried out with Genetic Programming. Four models were constructed for the different polarity reversal cases and used as components of one general model. The model is designed in such a way that, given 3 consecutive values of f0F2, IMF By and IMF Bz, it can forecast the value of f0F2 one hour ahead. The overall model, GETY-IYON, was successful, with a normalized error of 7.3%.
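For readers unfamiliar with the superposed epoch technique, the stacking-and-averaging step can be sketched in a few lines of Python. Everything below is an illustrative assumption (hourly sampling, window lengths, a -6 nT southward-turning criterion, synthetic data), so the resulting profile is flat; real f0F2 data would show the drop described above.

```python
import numpy as np

def superposed_epoch(series, event_indices, before=24, after=96):
    """Stack fixed windows around each event onset and average across events."""
    windows = [series[i - before : i + after]
               for i in event_indices
               if i - before >= 0 and i + after < len(series)]
    return np.nanmean(np.vstack(windows), axis=0)  # mean response vs. epoch time

# Toy usage: "events" are sign reversals of Bz dropping below -6 nT.
rng = np.random.default_rng(0)
bz = rng.normal(0, 4, 5000)             # synthetic IMF Bz, hourly
f0f2 = 7 + 0.1 * rng.normal(size=5000)  # synthetic critical frequency
events = [i for i in range(1, 5000) if bz[i - 1] >= 0 and bz[i] <= -6]
profile = superposed_epoch(f0f2, events)
```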
123

Genetic Programming Based Multicategory Pattern Classification

Kishore, Krishna J 03 1900 (has links)
Nature has created complex biological structures that exhibit intelligent behaviour through an evolutionary process; thus, intelligence and evolution are intimately connected. This has inspired evolutionary computation (EC), which simulates the evolutionary process to develop powerful techniques such as genetic algorithms (GAs), genetic programming (GP), evolution strategies (ES) and evolutionary programming (EP) to solve real-world problems in learning, control, optimization and classification. GP discovers the relationship among data and expresses it as a LISP S-expression, i.e., a computer program. Thus the goal of program discovery as a solution for a problem is addressed by GP in the framework of evolutionary computation. In this thesis, we address for the first time the problem of applying GP to multicategory pattern classification. In supervised pattern classification, an input vector of m dimensions is mapped onto one of the n classes. It has a number of application areas such as remote sensing, medical diagnosis etc. A supervised classifier is developed by using a training set that contains representative samples of the various classes present in the application. Supervised classification has been done earlier with the maximum likelihood classifier (MLC), neural networks and fuzzy logic. The major considerations in applying GP to pattern classification are listed below: (i) GP-based techniques are data-distribution-free, i.e., no a priori knowledge is needed about the statistical distribution of the data, nor does an assumption such as a normal distribution need to be made as in MLC. (ii) GP can operate directly on the data in its original form. (iii) GP can detect the underlying but unknown relationship that exists among data and express it as a mathematical LISP S-expression; the generated LISP S-expressions can be directly used in the application environment. (iv) GP can either discover the most important discriminating features of a class during evolution, or it requires only minor post-processing of the LISP S-expression to discover the discriminant features. In a neural network, the knowledge learned about the data distributions is embedded in the interconnection weights, and a considerable amount of post-processing of the weights is required to understand the decision of the neural network. In 2-category pattern classification, a single GP expression is evolved as a discriminant function; the output of the GP expression can be +1 for samples of one class and -1 for samples of the other class. When the GP paradigm is applied to an n-class problem, the following questions arise: Q1. As a typical GP expression returns a value (+1 or -1) for a 2-class problem, how does one apply GP to the n-class pattern classification problem? Q2. What should the fitness function be during evolution of the GP expressions? Q3. How does the choice of a function set affect the performance of GP-based classification? Q4. How should training sets be created for evaluating fitness during the evolution of GP classifier expressions? Q5. How does one improve learning of the underlying data distributions in a GP framework? Q6. How should conflict resolution be handled before assigning a class to the input feature vector? Q7. How does GP compare with other classifiers for an n-class pattern classification problem? The research described here seeks to answer these questions. We show that GP can be applied to an n-category pattern classification problem by considering it as n 2-class problems.
The suitability of this approach is demonstrated by considering a real-world problem based on remotely sensed satellite images and Fisher's Iris data set. In a 2-class problem, simple thresholding is sufficient for a discriminant function to divide the feature space into two regions. This means that one genetic programming classifier expression (GPCE) is sufficient to say whether or not a given input feature vector belongs to that class; i.e., the GP expression returns a value (+1 or -1). As the n-class problem is formulated as n 2-class problems, n GPCEs are evolved, and hence n GPCE-specific training sets are needed. For the sake of illustration, consider a 5-class pattern classification problem. Let nj be the number of samples that belong to class j, and Nj be the number of samples that do not belong to class j (j = 1, ..., 5). Thus:

N1 = n2 + n3 + n4 + n5
N2 = n1 + n3 + n4 + n5
N3 = n1 + n2 + n4 + n5
N4 = n1 + n2 + n3 + n5
N5 = n1 + n2 + n3 + n4

When the five-class problem is formulated as five 2-class problems, we need five GPCEs as discriminant functions to resolve between n1 and N1, n2 and N2, n3 and N3, n4 and N4, and lastly n5 and N5. Each of these is handled as a separate 2-class problem with simple thresholding; thus GPCE#1 resolves between samples of class#1 and the remaining n-1 classes. A training set is needed to evaluate the fitness of a GPCE during its evolution. Creating this training set directly leads to skewness (as n1 < N1). To overcome the skewness, an interleaved data format is proposed for the training set of a GPCE: in the training set of GPCE#1, for example, samples of class#1 are placed alternately between samples of the remaining n-1 classes. The interleaved data format is thus an artifact to create a balanced training set. Conventionally, all the samples of a training set are fed to evaluate the fitness of every member of the population in each generation; we call this "global" learning, as GP tries to learn the entire training set at every stage of the evolution. We have introduced incremental learning to simplify the task of learning for the GP paradigm: a subset of the training set is fed first, and the size of the subset is gradually increased over time to cover the entire training data. The basic motivation for incremental learning is that it is easier to learn a smaller task and then progress from the smaller task to a bigger one. Experimental results are presented to show that the interleaved data format and incremental learning improve the performance of the GP classifier. We also show that GPCEs evolved with an arithmetic function set are able to track variation in the input better than GPCEs evolved with function sets containing logical and nonlinear elements. Hence, we have used the arithmetic function set, incremental learning, and the interleaved data format to evolve GPCEs in our simulations. As each GPCE is trained to recognize samples belonging to its own class and reject samples belonging to other classes, a strength of association measure is attached to each GPCE to indicate the degree to which it can recognize samples of its own class; these measures are used for assigning a class to an input feature vector. To reduce misclassification of samples, we also show how heuristic rules can be generated in the GP framework, unlike in either the MLC or the neural network classifier.
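A balanced, GPCE-specific training set in the interleaved data format might be assembled as in the Python sketch below; the exact repetition and ordering scheme is an assumption for illustration, not necessarily the one used in the thesis.

```python
from itertools import cycle

def interleave_training_set(samples_by_class, target_class):
    """Interleaved data format: target-class samples (desired output +1)
    alternate with samples of all other classes (-1); the minority target
    samples are repeated (cycled) to balance the set."""
    pos = samples_by_class[target_class]
    neg = [x for c, xs in samples_by_class.items() if c != target_class for x in xs]
    pos_cycle = cycle(pos)
    training = []
    for x_neg in neg:
        training.append((next(pos_cycle), +1))
        training.append((x_neg, -1))
    return training

data = {1: ["a1", "a2"], 2: ["b1", "b2", "b3"], 3: ["c1"]}
print(interleave_training_set(data, target_class=1))
# [('a1', 1), ('b1', -1), ('a2', 1), ('b2', -1), ('a1', 1), ('b3', -1), ('a2', 1), ('c1', -1)]
```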
We have also studied the scalability and generalizing ability of the GP classifier by varying the number of classes. We also analyse the performance of the GP classifier on the well-known Iris data set, comparing the classification rules generated by the GP classifier with those generated by a neural network classifier, the C4.5 method and a fuzzy classifier. We show that the performance of GP is comparable to the other classifiers for the Iris data set, and we notice that the classification rules can be generated with very little post-processing and are very similar to the rules generated by the neural network and C4.5. Incremental learning influences the number of generations available for GP to learn the data distribution of classes whose desired output d is -1 in the interleaved data format. This is because the samples belonging to the true class (desired output d = +1) are placed alternately between samples belonging to other classes, i.e., they are repeated to balance the training set. For example, in the evolution of the GPCE for class#1, the fitness function can be fed initially with samples of class#2 and subsequently with the samples of class#3, class#4 and class#5. So in the evaluation of the fitness function, the samples of class#5 will not be present when the samples of class#2 are present in the initial stages; only in the later stages of evolution, when samples of class#5 are fed, will the fitness function utilize the samples of both class#2 and class#5. As learning in evolutionary computation is guided by the evaluation of the fitness function, GPCE#1 gets fewer generations to learn how to reject data of class#5 than data of class#2, because the termination criterion (the maximum number of generations) is defined a priori. Clearly, there are (n-1)! ways of ordering the samples of classes whose d is -1 in the interleaved data format. Hence a heuristic is presented to determine a suitable order in which to feed data of the different classes to GPCEs evolved with incremental learning and the interleaved data format. The heuristic computes an overlap index for each class based on its spatial spread and the distribution of data in the region of overlap with respect to other classes in each feature, and determines the order in which classes whose desired output is -1 should be placed in each GPCE-specific training set. This ensures that GP gets more generations to learn the data distribution of a class with a higher overlap index than that of a class with a lower overlap index.

The ability of the GP classifier to learn the data distributions depends upon the number of classes and the spatial spread of data. As the number of classes increases, the GP classifier finds it difficult to resolve between classes, so there is a need to partition the feature space and identify subspaces with a reduced number of classes. The basic objective is to divide the feature space into subspaces, and hence the data set containing representative samples of n classes into sub-data sets corresponding to those subspaces, so that some of the sub-data sets/spaces contain data belonging to only p classes (p < n). The GP classifier is then evolved independently for each subspace. This results in localized learning, as the GP classifier has to learn the data distribution in only a subspace of the feature space rather than in the entire feature space. By integrating the GP classifier with feature space partitioning (FSP), we improve classification accuracy due to localized learning.

Although serial computers have increased steadily in their performance, parallel implementation of any computationally intensive task remains of interest, since it leads to faster execution than a serial implementation. As fitness evaluation, the selection strategy and population structures are used to evolve a solution in GP, there is scope for a parallel implementation of the GP classifier. We have studied distributed GP and massively parallel GP for our approach to GP-based multicategory pattern classification. We present experimental results for distributed GP with the Message Passing Interface on an IBM SP2 to highlight the speedup that can be achieved over the serial implementation, and we show how data parallelism can be used to further speed up fitness evaluation and hence the execution of the GP paradigm for multicategory pattern classification. We conclude that GP can be applied to n-category pattern classification and that its potential lies in its simplicity and scope for parallel implementation. The GP classifier developed in this thesis can be looked upon as an addition to the earlier statistical, neural and fuzzy approaches to multicategory pattern classification.
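The scope for parallelism is easy to see in code: fitness evaluation is independent across population members (and, with data parallelism, across training samples). The Python sketch below distributes candidate evaluation over a local worker pool; it is a stand-in illustration with made-up candidates, not the MPI-on-IBM-SP2 implementation reported in the thesis.

```python
from multiprocessing import Pool

def gpce_output(expr, x):
    """Threshold the GPCE's numeric output into a class decision (+1 / -1)."""
    return 1 if expr(x) > 0 else -1

def fitness(job):
    """Fraction of training samples the candidate GPCE classifies correctly."""
    expr, training = job
    hits = sum(1 for x, d in training if gpce_output(expr, x) == d)
    return hits / len(training)

def cand_a(x): return x[0] - x[1]       # stand-ins for evolved S-expressions
def cand_b(x): return 0.5 * x[0] - 1.0

if __name__ == "__main__":
    training = [((1.0, 0.2), +1), ((0.1, 0.9), -1), ((0.8, 0.1), +1)]
    population = [cand_a, cand_b]
    with Pool(2) as pool:   # one fitness job per population member
        print(pool.map(fitness, [(e, training) for e in population]))
```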
124

Generative fixation: a unified explanation for the adaptive capacity of simple recombinative genetic algorithms

Burjorjee, Keki M. January 2009 (has links)
Thesis (Ph.D.)--Brandeis University, 2009. / "UMI:3369218." Microfilm copy also available in the university archives. Includes bibliographical references.
125

Predictions Within and Across Aquatic Systems using Statistical Methods and Models / Prediktioner inom och mellan akvatiska system med statistiska metoder och modeller

Dimberg, Peter H. January 2015 (has links)
Aquatic ecosystems are an essential source for life and, in many regions, are exploited to a degree which deteriorates their ecological status. Today, more than 50% of European lakes have an unsatisfactory ecological status. Many of these lakes require abatement actions to improve their status, and mathematical models have great potential to predict and evaluate different abatement actions and their outcomes. Several statistical methods and models exist which can be used for these purposes; however, many of the models are not constructed using a sufficient amount or quality of data, are too complex to be used by most managers, or are too site-specific. Therefore, the main aim of this thesis was to present different statistical methods and models which are easy for managers to use, are general, and provide insights for the development of similar methods and models. To reach this aim, several statistical and modelling procedures were investigated and applied: genetic programming (GP), multiple regression, Markov chains, and, finally, well-used criteria for the r2 and p-value in the development of a method to determine temporal trends. The statistical methods and models were mainly based on the variables chlorophyll-a (chl-a) and total phosphorus (TP) concentrations, but some of them can be directly transferred to other variables. The main findings of this thesis were that multiple regression outperformed GP in predicting summer chl-a concentrations, and that multiple regression can be used to describe chl-a seasonality in general terms with TP summer concentration and latitude as independent variables. Also, it is possible to calculate probabilities, using Markov chains, of exceeding certain chl-a concentrations in future months. Results showed that deep-water concentrations were in general closely related to surface-water concentrations along with morphometric parameters; these independent variables can therefore be used in mass-balance models to estimate the mass in deep waters. A new statistical method was derived and applied to confirm whether variables have changed over time for cases where other, traditional methods have failed. Finally, it is concluded that the statistical methods and models developed in this thesis will increase the understanding of predictions within and across aquatic systems.
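The Markov chain idea for chl-a can be illustrated with a toy transition matrix; the concentration classes and probabilities below are hypothetical, not values from the thesis. The probability of exceeding a given chl-a class in a future month follows from repeated multiplication of the state distribution with the monthly transition matrix.

```python
import numpy as np

# Hypothetical monthly chl-a classes: 0 = low, 1 = medium, 2 = high.
P = np.array([[0.7, 0.2, 0.1],    # transition probabilities, estimated
              [0.3, 0.5, 0.2],    # from monthly monitoring data in practice
              [0.1, 0.3, 0.6]])

def exceedance_probability(p0, months, high_state=2):
    """Probability of being in the 'high' chl-a class after `months` steps."""
    p = np.asarray(p0, dtype=float)
    for _ in range(months):
        p = p @ P
    return p[high_state]

print(exceedance_probability([1.0, 0.0, 0.0], months=3))  # start in 'low'
```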
126

Time series modelling using genetic programming techniques / Μοντελοποίηση χρονοσειρών με χρήση τεχνικών γενετικού προγραμματισμού

Θεοφιλάτος, Κωνσταντίνος 03 July 2009 (has links)
Genetic Programming (GP) is the automated methodology for finding computer programs (code) based on the principles of biological evolution. In other words, it is a machine learning technique which uses an evolutionary algorithm to optimize a population of computer programs according to a fitness function determined by each program's ability to perform a given computational task. In this thesis, various genetic programming techniques are applied to time series modelling. The systems developed are used for the following purposes: modelling the system that "generates" the time series, and extracting features and rules that can support time series prediction. The time series used to test the implemented systems are: a time series of Greek stock exchange indices, and time series of medical data, such as a magnetoencephalogram signal. Classical genetic programming techniques use tree structures to represent the program-individuals of the populations. Much previous work has applied genetic programming to time series modelling, with satisfactory results; the main problem encountered was the long execution time that classical GP techniques require. The topic therefore remains open to study, and there is a need for new genetic programming techniques that give both better and faster results. In this thesis, the technique of linear genetic programming is used, in which the program-individuals of the population are represented as a sequence of instructions encoded in binary form. The two techniques are compared, and conclusions are drawn about which is more useful in the field of time series modelling. Furthermore, algorithms are implemented which detect and remove code that does not contribute to the output of the program-individuals of the population. These algorithms are expected to speed up the evolution of the population considerably, since such blocks of code that do not affect the program output form frequently in genetic programming.
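The removal of non-effective code ("introns") in linear GP can be done with a backward liveness sweep over the instruction list. The sketch below is a generic Python illustration assuming a simple three-address register encoding, not the binary instruction format of the thesis.

```python
def effective_instructions(program, output_reg):
    """Backward sweep marking instructions that can influence the output
    register; everything else is an intron and is dropped."""
    live = {output_reg}
    keep = [False] * len(program)
    for i in range(len(program) - 1, -1, -1):
        dst, op, src1, src2 = program[i]          # e.g. ("r0", "-", "r1", "r2")
        if dst in live:
            keep[i] = True
            live.discard(dst)                      # dst is redefined here
            live.update(s for s in (src1, src2) if s.startswith("r"))
    return [ins for ins, k in zip(program, keep) if k]

prog = [("r1", "+", "r2", "r3"),   # feeds r0 below -> effective
        ("r4", "*", "r1", "r1"),   # r4 never reaches r0 -> intron
        ("r0", "-", "r1", "r2")]
print(effective_instructions(prog, "r0"))  # the intron is removed
```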
127

Application of evolutionary algorithm strategies to entity relationship diagrams

Heinze, Glenn. January 2004 (has links) (PDF)
Thesis (M.Sc)--Athabasca University, 2004. / Includes bibliographical references (leaves 31-32). Also available online.
128

Criação de filtros de imagens através da utilização de programação genética / Creation of image filters through the use of genetic programming

Simões, Lucas Pauli 03 June 2013 (has links)
Universidade Federal de São Carlos / Computer vision techniques have been used more and more by industry to aid the automation of its processes; however, implementing computer vision techniques involves several difficulties, depending on the application. Such techniques are limited by their computational cost, which grows with their complexity. Identifying elements in industrial scenes is an example of an application that can add considerable complexity to process automation, and research using several techniques has been carried out to address these difficulties. This work approaches the field of computer vision through the use of artificial intelligence techniques. Methods are evaluated for the creation and replication of binary image filters through the use of genetic programming, with the objective of identifying elements in an industrial scene. Two methods with different approaches are evaluated: one uses operations between pixels, the other mathematical morphology, for object detection. The results are presented both qualitatively and quantitatively, through comparative images of the evaluated methods and statistical measures, respectively. Using the GPLab toolbox together with Matlab, the automatic creation of filters for object identification was possible, so that the detection and deletion of elements in images can be used by other support systems for automatic operation in an industrial environment. The results established an efficient way of creating filters with the use of genetic programming.
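The two primitive families compared above (pixel-wise operations and mathematical morphology) can be pictured with a small evaluator for tree-encoded filters. The Python sketch below is only an analogue, with assumed primitive names and representation; the thesis itself used the GPLab toolbox with Matlab.

```python
import numpy as np
from scipy import ndimage

# Illustrative primitive set a GP individual could combine.
PRIMITIVES = {
    "erode":  ndimage.binary_erosion,
    "dilate": ndimage.binary_dilation,
    "open":   ndimage.binary_opening,
    "and":    np.logical_and,
    "or":     np.logical_or,
}

def apply_filter(tree, img):
    """Evaluate a filter encoded as a nested tuple, e.g. ("and", "x", ("dilate", "x"))."""
    if tree == "x":                                # terminal: the input image
        return img
    op, *args = tree
    return PRIMITIVES[op](*(apply_filter(a, img) for a in args))

img = np.zeros((8, 8), dtype=bool)
img[2:6, 2:6] = True                               # an object ...
img[0, 7] = True                                   # ... plus one noise pixel
cleaned = apply_filter(("open", "x"), img)         # opening removes the noise
print(bool(cleaned[0, 7]))                         # False
```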
129

Uma hiper-heurística híbrida para a otimização de algoritmos / A hybrid hyper-heuristic for algorithm optimization

MIRANDA, Pericles Barbosa Cunha de 22 August 2016 (has links)
Designing an algorithm or heuristic to solve a given problem is a challenging task due to the variety of possible design choices and the lack of clear guidelines on how to choose and/or combine them. For instance, the performance of an optimization algorithm depends on the design of its search operators as well as an adequate setting of specific hyper-parameters, each of them with many possible options to choose from. Because of that, there is growing research interest in automating the design of algorithms, mainly by exploring optimization and machine learning approaches, aiming to make the algorithm design process more independent from human interaction. Different approaches have dealt with the task of optimizing algorithms as another (meta-)optimization problem. These approaches are commonly called hyper-heuristics, where each solution of the search space is a possible algorithm. Initially, hyper-heuristics were applied to the selection of parameters in a predefined and limited search space. More recently, generation hyper-heuristics have been developed to generate algorithms from a set of specified components and functions. Generation hyper-heuristics are considered more flexible than selection ones due to their capacity to create new, customized algorithms for a given problem. Hyper-heuristics have been widely used for the optimization of meta-heuristics; however, the search process becomes expensive because the evaluation of each solution requires the execution of an algorithm on a problem. In this work, a novel hyper-heuristic was developed to optimize algorithms for a given problem. The proposed approach aims to provide optimized algorithms for the input problem and to reduce the computational cost of the optimization process significantly when compared to other hyper-heuristics. The proposed hyper-heuristic combines an automated algorithm selection method with a generation hyper-heuristic. The generation hyper-heuristic is responsible for creating a knowledge base, which contains algorithms previously built for a set of problems. Once the knowledge base is available, it is used as a source of algorithms to be recommended by the automated algorithm selection method. The idea is to reuse the algorithms already built by the generation hyper-heuristic on similar problems. It is worth mentioning that the creation of hyper-heuristics aiming to reduce the cost of algorithm generation without harming the quality of the generated algorithms has not been studied before; moreover, hybrid hyper-heuristics which combine an algorithm selection approach with a generation hyper-heuristic for algorithm optimization, as proposed in this thesis, are a novelty.

To evaluate the proposed approach, the optimization of the Particle Swarm Optimization algorithm (PSO) was considered as a case study. In our experiments, we considered 32 optimization problems. The proposed system was evaluated regarding its capacity to recommend adequate algorithms for an input problem, the quality of the recommended algorithms, and, finally, its accuracy in recommending algorithms. The results showed that the proposed system efficiently recommends useful algorithms for the input problem; the recommended algorithms achieved competitive results when compared to state-of-the-art algorithms, and the system presented a high percentage of accuracy in the recommendation.
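The reuse step amounts to a nearest-neighbour lookup over the knowledge base built offline by the generation hyper-heuristic. The sketch below is a hypothetical Python illustration; the meta-features, the distance measure and the stored PSO designs are assumptions, not the system's actual components.

```python
import numpy as np

# Knowledge base: (problem meta-features, generated algorithm design) pairs,
# built offline by the generation hyper-heuristic (entries are hypothetical).
KNOWLEDGE_BASE = [
    (np.array([0.9, 0.1]), {"topology": "ring",   "inertia": 0.7}),
    (np.array([0.2, 0.8]), {"topology": "global", "inertia": 0.4}),
]

def recommend(meta_features):
    """Selection step: reuse the design generated for the most similar
    problem instead of re-running the expensive generation process."""
    f = np.asarray(meta_features, dtype=float)
    dists = [np.linalg.norm(f - kb_f) for kb_f, _ in KNOWLEDGE_BASE]
    return KNOWLEDGE_BASE[int(np.argmin(dists))][1]

print(recommend([0.85, 0.2]))  # -> the ring-topology PSO design
```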
130

Recuperação de imagens com realimentação de relevância baseada em programação genética / Image retrieval with relevance feedback based on genetic programming

Ferreira, Cristiano Dalmaschio 31 July 2007 (has links)
Advisor: Ricardo da Silva Torres / Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação / Relevance feedback has been used to incorporate the subjectivity of user visual perception into content-based image retrieval tasks. The relevance feedback process consists of the following steps: (i) showing a small set of images; (ii) indication by the user of which images are relevant or irrelevant; (iii) and finally, learning the user's needs from this feedback and selecting a new set of images to be shown. This procedure is repeated until the user is satisfied. This dissertation presents two content-based image retrieval frameworks with relevance feedback. These frameworks employ genetic programming to discover a combination of descriptors that characterizes the user's perception of image similarity. The use of genetic programming is motivated by its capability of exploring the search space, which suits the major goal of the proposed frameworks: to find, among all combination functions of descriptors, the one that best represents the user's needs. Several experiments were conducted to validate the proposed frameworks. These experiments employed three different image databases, and color, shape and texture descriptors to represent the content of the database images. The proposed frameworks were compared with three other content-based image retrieval methods with relevance feedback, regarding their efficiency and effectiveness in the retrieval process. Experimental results demonstrate the superiority of the proposed methods. The contributions of this work are: (i) a study of different relevance feedback techniques; (ii) the proposal of two content-based image retrieval frameworks with relevance feedback based on genetic programming; (iii) the implementation of the proposed methods and their validation through several experiments and comparisons with other methods. / Master's in Computer Science (Databases)
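The core ranking step, scoring each database image with a GP-evolved combination of descriptor similarities, can be sketched as follows; the descriptor scores and the example combination function below are illustrative assumptions.

```python
import numpy as np

# Per-descriptor similarities between the query and each database image
# (rows: images; columns: color, shape, texture -- hypothetical values).
sims = np.array([[0.9, 0.2, 0.6],
                 [0.4, 0.8, 0.5],
                 [0.7, 0.7, 0.1]])

def rank_images(s, combine):
    """Apply an evolved combination function to the descriptor similarity
    columns and sort images by the resulting score, best first."""
    scores = combine(s[:, 0], s[:, 1], s[:, 2])
    return np.argsort(-scores)

# Stand-in for an evolved expression tree: sqrt(color * texture) + shape.
evolved = lambda color, shape, texture: np.sqrt(color * texture) + shape
print(rank_images(sims, evolved))  # image indices in learned-similarity order
```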
