111

A genetic algorithm for impedance matching network design

Du Plessis, W.P. (Warren Paul) 10 August 2007 (has links)
Please read the abstract (Summary) in the section 00front of this document / Dissertation (MEng (Electronic Engineering))--University of Pretoria, 2007. / Electrical, Electronic and Computer Engineering / MEng / Unrestricted
112

AUTOMATIC TRACING OF ENVELOPES IN PLANAR STRUCTURES USING AN EVOLUTIONARY ALGORITHM

GISELE CRISTINA DA CUNHA HOLTZ 21 December 2005 (has links)
The objective of this work is to develop a tool within the FTOOL software for obtaining envelopes of internal forces due to load-trains. Usually, envelopes are obtained through interpolation of limiting values at pre-selected sections along the structure. These values are obtained based on the positioning of the load-train in relation to the influence lines of internal forces. The determination of the limiting values of an effect at a section represents an optimization problem whose objective is to minimize or maximize the values of that effect in relation to the position of a load-train that passes along the structure. However, there is no analytical expression that defines a limiting value of an effect at a section for a specific load-train, so most classical optimization methods, which generally require at least the first derivative of the objective function with respect to the design variables, cannot be used; the solution requires a derivative-free method. For this reason, this work adopts Evolution Strategy (ES) algorithms to obtain the limiting values due to load-trains. Two distinct ES algorithms, known as (1+λ)-ES and (μ+λ)-ES, were implemented. In addition to the ES algorithms for tracing the envelopes, another solution process, called Brute Force, was developed: it moves the load-train along the structure in pre-determined steps and computes the minimum and maximum values. In general, the ES method converges to the correct solution. However, there are cases, depending on the complexity of the load-train, in which the algorithms do not find the exact limiting value at some sections of the structure, although they find a value very close to it.
It was observed that the ES results can be complemented and improved with results from an inexpensive strategy in which concentrated loads are positioned on the peak values of the influence lines.
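The (1+λ)-ES used above can be sketched for a toy version of the problem: treat the lead-axle position of a load-train as the single design variable and maximize the internal force it produces through a hypothetical influence line. The influence-line shape, axle offsets, and axle loads below are illustrative assumptions, not values from the dissertation:

```python
import random

def influence_line(x):
    """Hypothetical influence line for an internal force at one section."""
    return x * (10.0 - x) / 25.0 - 0.3 * max(0.0, x - 6.0)

def effect(position, offsets=(0.0, 1.5, 3.0), loads=(100.0, 150.0, 150.0)):
    """Internal force due to a load-train whose lead axle sits at `position`."""
    return sum(p * influence_line(position + d)
               for d, p in zip(offsets, loads)
               if 0.0 <= position + d <= 10.0)

def one_plus_lambda_es(lam=10, sigma=0.5, generations=200, span=10.0):
    """(1+lambda)-ES: mutate the parent position, keep the best of parent and offspring."""
    parent = random.uniform(0.0, span)
    best = effect(parent)
    for _ in range(generations):
        for _ in range(lam):
            child = min(max(parent + random.gauss(0.0, sigma), 0.0), span)
            value = effect(child)
            if value > best:
                parent, best = child, value
    return parent, best

position, max_effect = one_plus_lambda_es()
```

The Brute Force check from the abstract corresponds to evaluating `effect` on a fine grid of positions and taking the extreme value, which is what makes the ES result easy to validate on small instances.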
113

No Free Lunch, Bayesian Inference, and Utility: A Decision-Theoretic Approach to Optimization

Monson, Christopher Kenneth 27 April 2006 (has links) (PDF)
Existing approaches to continuous optimization are essentially mechanisms for deciding which locations should be sampled in order to obtain information about a target function's global optimum. These methods, while often effective in particular domains, generally base their decisions on heuristics developed in consideration of ill-defined desiderata rather than on explicitly defined goals or models of the available information that may be used to achieve them. The problem of numerical optimization is essentially one of deciding what information to gather, then using that information to infer the location of the global optimum. That being the case, it makes sense to model the problem using the language of decision theory and Bayesian inference. The contribution of this work is precisely such a model of the optimization problem, a model that explicitly describes information relationships, admits clear expression of the target function class as dictated by No Free Lunch, and makes rational and mathematically principled use of utility and cost. The result is an algorithm that displays surprisingly sophisticated behavior when supplied with simple and straightforward declarations of the function class and the utilities and costs of sampling. In short, this work intimates that continuous optimization is equivalent to statistical inference and decision theory, and the result of viewing the problem in this way has concrete theoretical and practical benefits.
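A minimal sketch of this decision-theoretic view of optimization: model the target function with a Gaussian-process prior (a simple grid-based implementation here, standing in for the dissertation's model of the function class), and at each step sample the location with the highest expected improvement, a utility on the information a sample provides. The kernel, length scale, and target function are illustrative assumptions:

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, length=0.3):
    """Squared-exponential kernel on 1-D inputs."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(xs, ys, grid, noise=1e-6):
    """Posterior mean and std dev of a zero-mean GP at the grid points."""
    K = rbf(xs, xs) + noise * np.eye(len(xs))
    Ks = rbf(grid, xs)
    K_inv = np.linalg.inv(K)
    mu = Ks @ K_inv @ ys
    var = 1.0 - np.einsum('ij,jk,ik->i', Ks, K_inv, Ks)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    """Utility of sampling each point: expected gain over the incumbent."""
    z = (mu - best) / sigma
    cdf = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)
    return (mu - best) * cdf + sigma * pdf

def f(x):
    """Hypothetical target function, unknown to the optimizer."""
    return np.sin(3.0 * x) * (1.0 - x) + x

grid = np.linspace(0.0, 1.0, 200)
xs = np.array([0.1, 0.9])
ys = f(xs)
for _ in range(15):
    mu, sd = gp_posterior(xs, ys, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, ys.max()))]
    xs = np.append(xs, x_next)
    ys = np.append(ys, f(x_next))
```

The point of the sketch is the division of labor the abstract describes: the GP encodes the declared function class, and the acquisition rule makes the where-to-sample decision an explicit utility calculation rather than a heuristic.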
114

Scheduling and Resource Efficiency Balancing. Discrete Species Conserving Cuckoo Search for Scheduling in an Uncertain Execution Environment

Bibiks, Kirils January 2017 (has links)
The main goal of a scheduling process is to decide when and how to execute each of the project’s activities. Despite the large variety of scheduling problems studied, the majority of them can be described as generalisations of the resource-constrained project scheduling problem (RCPSP). Because of its wide applicability and challenging difficulty, the RCPSP has attracted a vast amount of attention in the research community, and a great variety of heuristics have been adapted for solving it. Even though these heuristics are structurally different and operate according to diverse principles, they are designed to obtain only one solution at a time. Recent research on RCPSPs has shown that these kinds of problems have complex multimodal fitness landscapes, characterised by wide solution search spaces and the presence of multiple local and global optima. The main goal of this thesis is twofold. Firstly, it presents a variation of the RCPSP that considers the optimisation of projects in an uncertain environment, where resources are modelled to adapt to their environment and, as a result, improve their efficiency. Secondly, a modification of a novel evolutionary computation method, Cuckoo Search (CS), is proposed, adapted for solving combinatorial optimisation problems and modified to obtain multiple solutions. To test the proposed methodology, two sets of experiments are carried out. Firstly, the developed algorithm is applied to a real-life software development project. Secondly, the performance of the algorithm is tested on universal benchmark instances for scheduling problems, modified to take into account the specifics of the proposed optimisation model. The results of both experiments demonstrate that the proposed methodology achieves a competitive level of performance and is capable of finding multiple global solutions, and they prove its applicability to real-life projects.
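A discrete Cuckoo Search along the lines described (not the thesis's exact algorithm) can be sketched as follows: solutions are activity permutations, the Lévy flight becomes a heavy-tailed number of random swaps, and a fraction of the worst nests is abandoned each generation. The weighted-completion-time cost and the instance data are assumptions for illustration:

```python
import random

def cost(perm, durations):
    """Toy objective: activity in slot i contributes (i + 1) * its duration,
    so the optimum schedules long activities first."""
    return sum((i + 1) * durations[a] for i, a in enumerate(perm))

def levy_swaps(n):
    """Heavy-tailed swap count: a discrete stand-in for a Levy-flight step."""
    u, v = random.gauss(0, 1), random.gauss(0, 1)
    return min(n, max(1, int(abs(u / (abs(v) ** 0.5 + 1e-9)))))

def mutate(perm):
    p = perm[:]
    for _ in range(levy_swaps(len(p))):
        i, j = random.sample(range(len(p)), 2)
        p[i], p[j] = p[j], p[i]
    return p

def cuckoo_search(durations, nests=15, pa=0.25, iters=2000):
    n = len(durations)
    pop = [random.sample(range(n), n) for _ in range(nests)]
    for _ in range(iters):
        # lay a cuckoo egg derived from a random nest; replace a random nest if better
        cuckoo = mutate(random.choice(pop))
        j = random.randrange(nests)
        if cost(cuckoo, durations) < cost(pop[j], durations):
            pop[j] = cuckoo
        # abandon the worst fraction pa of nests, rebuilding them at random
        pop.sort(key=lambda p: cost(p, durations))
        for k in range(int(nests * (1 - pa)), nests):
            pop[k] = random.sample(range(n), n)
    return pop[0], cost(pop[0], durations)

best, best_cost = cuckoo_search([4, 2, 7, 1, 5, 3, 6])
```

The thesis's species-conserving extension would additionally partition the nests into niches so that several distinct global optima survive side by side; the sketch above keeps only a single best nest.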
115

The Proteomics Approach To Evolutionary Computation: An Analysis Of Pr

Garibay, Ivan 01 January 2004 (has links)
As the complexity of our society and computational resources increases, so does the complexity of the problems that we approach using evolutionary search techniques. There are recent approaches to deal with the problem of scaling evolutionary methods to cope with highly complex, difficult problems. Many of these approaches are biologically inspired and share an underlying principle: a problem representation based on basic representational building blocks that interact and self-organize into complex functions or designs. The observation from the central dogma of molecular biology that proteins are the basic building blocks of life, together with the recent advances in proteomics on the analysis of structure, function and interaction of entire protein complements, leads us to propose a unifying framework of thought for these approaches: the proteomics approach. This thesis proposes to investigate whether the self-organization of protein-analogous structures at the representation level can increase the degree of complexity and "novelty" of solutions obtainable using evolutionary search techniques. In order to do so, we identify two fundamental aspects of this transition: (1) proteins interact in a three-dimensional medium analogous to a multiset; and (2) proteins are functional structures. The first aspect is foundational for understanding the second. This thesis analyzes the first aspect. It investigates the effects of using a genome-to-proteome mapping on evolutionary computation. This analysis is based on a genetic algorithm (GA) with a string-to-multiset mapping that we call the proportional genetic algorithm (PGA), and it focuses on the feasibility and effectiveness of this mapping. This mapping leads to a fundamental departure from typical EC methods: using a multiset of proteins as an intermediate mapping results in a completely location-independent problem representation, where the location of the genes in a genome has no effect on the fitness of the solutions.
Completely location-independent representations, by definition, do not suffer from traditional EC hurdles associated with the location of the genes, or positional effect, in a genome. Such representations have the ability to self-organize into a genomic structure that appears to favor positive correlations between form and quality of represented solutions. They also introduce new problems of their own, such as the need for large alphabets of symbols and the theoretical need for larger representation spaces than traditional approaches. Overall, these representations perform as well as or better than traditional representations, and they appear to be particularly good for the class of problems involving proportions or multisets. This thesis concludes that the use of protein-analogous structures as an intermediate representation in evolutionary computation is not only feasible but in some cases advantageous. In addition, it lays the groundwork for further research on proteins as functional self-organizing structures capable of building increasingly complex functionality, and as basic units of problem representation for evolutionary computation.
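The genome-to-multiset mapping can be illustrated in a few lines. The toy fitness below, which scores how closely the symbol proportions of the "proteome" match a target profile, is an illustrative assumption; the key property is that, because the mapping discards gene positions, every permutation of a genome has identical fitness:

```python
import random
from collections import Counter

def genome_to_proteome(genome):
    """Map a genome string to a multiset of symbols; gene order is discarded."""
    return Counter(genome)

def fitness(genome, target={'A': 0.5, 'B': 0.3, 'C': 0.2}):
    """Toy objective: negative distance between symbol proportions and a target."""
    counts = genome_to_proteome(genome)
    n = sum(counts.values())
    return -sum(abs(counts.get(s, 0) / n - p) for s, p in target.items())

genome = "AABABCACBA"
shuffled = ''.join(random.sample(genome, len(genome)))
assert fitness(genome) == fitness(shuffled)  # complete location independence
```

Here the genome `"AABABCACBA"` already realizes the target proportions exactly, so its fitness is 0, and no reordering of its genes can change that.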
116

Learning From Geometry In Learning For Tactical And Strategic Decision Domains

Gauci, Jason 01 January 2010 (has links)
Artificial neural networks (ANNs) are an abstraction of the low-level architecture of biological brains that are often applied in general problem solving and function approximation. Neuroevolution (NE), i.e. the evolution of ANNs, has proven effective at solving problems in a variety of domains. Information from the domain is input to the ANN, which outputs its desired actions. This dissertation presents a new NE algorithm called Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT), based on a novel indirect encoding of ANNs. The key insight in HyperNEAT is to make the algorithm aware of the geometry in which the ANNs are embedded and thereby exploit such domain geometry to evolve ANNs more effectively. The dissertation focuses on applying HyperNEAT to tactical and strategic decision domains. These domains involve simultaneously considering short-term tactics while also balancing long-term strategies. Board games such as checkers and Go are canonical examples of such domains; however, they also include real-time strategy games and military scenarios. The dissertation details three proposed extensions to HyperNEAT designed to work in tactical and strategic decision domains. The first is an action selector ANN architecture that allows the ANN to indicate its judgements on every possible action all at once. The second technique is called substrate extrapolation. It allows learning basic concepts at a low resolution, and then increasing the resolution to learn more advanced concepts. The final extension is geometric game-tree pruning, whereby HyperNEAT can endow the ANN with the ability to focus on specific areas of a domain (such as a checkers board) that deserve more inspection. The culminating contribution is to demonstrate the ability of HyperNEAT with these extensions to play Go, one of the most challenging games for artificial intelligence, by combining HyperNEAT with UCT.
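The indirect encoding can be sketched as follows: a function of the coordinates of two substrate nodes returns the connection weight between them, and weak connections are pruned. In HyperNEAT that function is a CPPN evolved by NEAT; the fixed function and threshold below are hypothetical stand-ins for illustration:

```python
import math

def cppn(x1, y1, x2, y2):
    """Stand-in CPPN: a fixed composition of geometric features of the two
    endpoints. In HyperNEAT this function is itself an evolved network."""
    d = math.hypot(x2 - x1, y2 - y1)
    return math.sin(3.0 * d) * math.exp(-abs(x1 * y2 - x2 * y1))

def substrate_weights(inputs, outputs, threshold=0.2):
    """Query the CPPN for every input->output node pair; prune weak links."""
    weights = {}
    for (ix, iy) in inputs:
        for (ox, oy) in outputs:
            w = cppn(ix, iy, ox, oy)
            if abs(w) > threshold:
                weights[((ix, iy), (ox, oy))] = w
    return weights

# a 3-node input layer and 3-node output layer laid out on [-1, 1]
inputs = [(-1.0, -1.0), (0.0, -1.0), (1.0, -1.0)]
outputs = [(-1.0, 1.0), (0.0, 1.0), (1.0, 1.0)]
net = substrate_weights(inputs, outputs)
```

Because the weights are a function of node geometry rather than stored per connection, geometric regularities such as left-right symmetry fall out automatically: mirrored connections receive identical weights without being separately evolved.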
117

Multiagent Learning Through Indirect Encoding

D'Ambrosio, David B 01 January 2011 (has links)
Designing a system of multiple, heterogeneous agents that cooperate to achieve a common goal is a difficult task, but it is also a common real-world problem. Multiagent learning addresses this problem by training the team to cooperate through a learning algorithm. However, most traditional approaches treat multiagent learning as a combination of multiple single-agent learning problems. This perspective leads to many inefficiencies in learning, such as the problem of reinvention, whereby fundamental skills and policies that all agents should possess must be rediscovered independently for each team member. For example, in soccer, all the players know how to pass and kick the ball, but a traditional algorithm has no way to share such vital information because it has no way to relate the policies of agents to each other. In this dissertation a new approach to multiagent learning that seeks to address these issues is presented. This approach, called multiagent HyperNEAT, represents teams as a pattern of policies rather than as individual agents. The main idea is that an agent’s location within a canonical team layout (such as a soccer team at the start of a game) tends to dictate its role within that team; this relationship is called the policy geometry. For example, as soccer positions move from goal to center they become more offensive and less defensive, a concept that is compactly represented as a pattern. The first major contribution of this dissertation is a new method for evolving neural network controllers called HyperNEAT, which forms the foundation of the second contribution and primary focus of this work, multiagent HyperNEAT. Multiagent learning in this dissertation is investigated in predator-prey, room-clearing, and patrol domains, providing a real-world context for the approach.
Interestingly, because the teams in multiagent HyperNEAT are represented as patterns they can scale up to an infinite number of multiagent policies that can be sampled from the policy geometry as needed. Thus the third contribution is a method for teams trained with multiagent HyperNEAT to dynamically scale their size without further learning. Fourth, the capabilities to both learn and scale in multiagent HyperNEAT are compared to the traditional multiagent SARSA(λ) approach in a comprehensive study. The fifth contribution is a method for efficiently learning and encoding multiple policies for each agent on a team to facilitate learning in multi-task domains. Finally, because there is significant interest in practical applications of multiagent learning, multiagent HyperNEAT is tested in a real-world military patrolling application with actual Khepera III robots. The ultimate goal is to provide a new perspective on multiagent learning and to demonstrate the practical benefits of training heterogeneous, scalable multiagent teams through generative encoding.
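The policy-geometry idea can be sketched as a single pattern queried at each agent's normalized position in the team layout; scaling the team only means sampling the pattern at more positions, with no further learning. The pattern function below is a hypothetical stand-in for an evolved one:

```python
import math

def policy_pattern(pos, observation):
    """Hypothetical evolved pattern: maps an agent's normalized team position
    (0 = most defensive slot, 1 = most offensive slot) and an observation to
    an action preference."""
    aggressiveness = pos  # roles shift smoothly across the layout
    return math.tanh(aggressiveness * observation - (1.0 - aggressiveness) * 0.5)

def team_policies(n_agents):
    """Sample n policies from the same pattern; scaling needs no retraining."""
    return [lambda obs, p=i / max(1, n_agents - 1): policy_pattern(p, obs)
            for i in range(n_agents)]

small_team = team_policies(3)
large_team = team_policies(11)  # scaled-up team sampled from the same pattern
```

Because both teams are drawn from one pattern, the defensive end of the 3-agent team behaves identically to the defensive end of the 11-agent team; the larger team simply fills in intermediate roles.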
118

Analyzing The Effects Of Modularity On Search Spaces

Garibay, Ozlem 01 January 2008 (has links)
We are continuously challenged by ever-increasing problem complexity and the need to develop algorithms that can solve complex problems within a reasonable amount of time. Modularity is thought to reduce problem complexity by decomposing large problems into smaller and less complex subproblems. In practice, introducing modularity into evolutionary algorithm representations appears to improve search performance; however, how and why modularity improves performance is not well understood. In this thesis, we seek to better understand the effects of modularity on search. In particular, what are the effects of module creation on the search space structure, and how do these structural changes affect performance? We define a theoretical and empirical framework to study modularity in evolutionary algorithms. Using this framework, we provide evidence of the following. First, not all types of modularity have an effect on search. We can have highly modular spaces that in essence are equivalent to simpler non-modular spaces. This is the case because these spaces achieve a higher degree of modularity without changing the fundamental structure of the search space. Second, in the cases where modularity actually has an effect on the fundamental structure of the search space, if left without guidance it only crowds and complicates the space structure, resulting in a harder space for most search algorithms. Finally, we have the case where modularity not only has an effect on the search space structure but, most importantly, module creation can be guided by problem domain knowledge. When this knowledge can be used to estimate the value of a module in terms of its contribution toward building the solution, modularity is extremely effective. It is in this last case that creating high-value or low-value modules has a direct and decisive impact on performance.
The results presented in this thesis help to better understand, in a principled way, the effects of modularity on search. Better understanding the effects of modularity on search is a step forward in the larger issue of evolutionary search applied to increasingly complex problems.
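The guided-versus-unguided distinction can be made concrete with a back-of-the-envelope count of search-space sizes. The gene lengths, alphabet sizes, and the number of "useful" modules below are illustrative assumptions, not figures from the thesis:

```python
def space_size(alphabet_size, length):
    """Number of distinct genomes of a given length over a given alphabet."""
    return alphabet_size ** length

# primitive encoding: 12 genes over a 4-symbol alphabet
primitive = space_size(4, 12)

# guided modularity: the same solutions expressed as 4 module slots, where
# domain knowledge admits only 8 high-value modules of the 4**3 = 64 possible
guided = space_size(8, 4)

# unguided modularity: all 64 possible modules added alongside the 4
# primitives merely crowd the alphabet and enlarge the space
unguided = space_size(4 + 64, 12)

assert guided < primitive < unguided
```

The asserted ordering is the thesis's point in miniature: module creation guided by an estimate of module value shrinks the effective space, while indiscriminate module creation expands it.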
119

Pattern Recognition via Machine Learning with Genetic Decision-Programming

Hoff, Carl C. January 2005 (has links)
No description available.
120

Development of evolutionary computation techniques to support data and text mining

Cecchini, Rocío L. 13 April 2010 (has links)
Data mining comprises the extraction of information from data, a complex task that involves many stages. The same applies to text mining, which can be considered a special case of data mining where the data include text. As data and text sets grow, both mining processes become even more complicated. Large, complex and information-rich data sets arise in many common research fields such as medicine, commerce, engineering and computer science. Simultaneously, technological advances have led to the accumulation of substantial amounts of documents, articles and text; the clearest example of this kind of material is the Web, which is estimated to have reached more than 8.05 billion pages. This thesis proposes the use of mono- and multi-objective evolutionary tools as support for some of the stages of the data and text mining processes, in particular those stages that involve optimization and search in wide search spaces where other methods would be unfeasible.
In this research work, several mono- and multi-objective evolutionary algorithms were developed, evaluated and compared for both the data mining and text mining research areas. As a particular case in data mining, the problem of finding the most relevant relationships among variables in the data was considered. These relations are not obvious to experts faced with the original raw database, which can include thousands of variables and thousands of samples. To solve this problem, a two-phase methodology was proposed; the algorithms developed in this context were integrated into the first phase and successfully used as massive search mechanisms. As a particular case in text mining, the problem of retrieving novel material related to a search context was considered. To address this problem, an architecture was proposed, implemented and evaluated that, starting from a description of the topic of interest, evolves several sets of terms towards sets that obtain better documents with respect to both the topic of interest and the proposed objectives (e.g., similarity, precision, recall). Among the proposed multi-objective evolutionary techniques, adaptations of the most promising Pareto-based evolutionary algorithms reported in the literature were designed, and new multi-objective aggregative schemes were proposed. Both approaches, the Pareto-based strategies and the aggregative techniques, proved to be clearly competent for both research areas: data and text mining.
