121

A discourse concerning certain stochastic optimization algorithms and their application to the imaging of cataclysmic variable stars

Wood, Derren W 27 July 2005 (has links)
This thesis is primarily concerned with a description of four types of stochastic algorithms, namely the genetic algorithm, the continuous parameter genetic algorithm, the particle swarm algorithm and the differential evolution algorithm. Each of these techniques is presented in sufficient detail to allow the layman to develop her own program upon examining the text. All four algorithms are applied to the optimization of a certain set of unconstrained problems known as the extended Dixon-Szegö test set. An algorithm's performance at optimizing a set of problems such as these is often used as a benchmark for judging its efficacy. Although the same thing is done here, an argument is presented that shows that no such general benchmarking is possible. Indeed, it is asserted that drawing general comparisons between stochastic algorithms on the basis of any performance criterion is a meaningless pursuit unless the scope of such comparative statements is limited to specific sets of optimization problems. The idea is a result of the no free lunch theorems proposed by Wolpert and Macready. Two methods of presenting the results of an optimization run are discussed. They are used to show that judging an optimizer's performance is largely a subjective undertaking, despite the apparently objective performance measures which are commonly used when results are published. An important theme of this thesis is the observation that a simple paradigm shift can result in a different decision regarding which algorithm is best suited to a certain task. Hence, an effort is made to present the proper interpretation of the results of such tests (from the author's point of view). Additionally, the four abovementioned algorithms are used in a modelling environment designed to determine the structure of a Magnetic Cataclysmic Variable. 
This 'real-world' modelling problem contrasts starkly with the well-defined test set and highlights some of the issues that designers must face in the optimization of physical systems. The particle swarm optimizer will be shown to be the algorithm capable of achieving the best results for this modelling problem if an unbiased χ² performance measure is used. However, the solution it generates is clearly not physically acceptable. Even though this drawback is not directly attributable to the optimizer, it is at least indicative of the fact that there are practical considerations which complicate the issue of algorithm selection. / Dissertation (MEng (Mechanical Engineering))--University of Pretoria, 2006. / Mechanical and Aeronautical Engineering / unrestricted
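The global-best particle swarm optimizer singled out in this abstract can be sketched in a few lines. This is an illustrative, minimal implementation under assumed parameter values (inertia 0.7, acceleration coefficients 1.5), not the author's code, and the sphere function stands in for the extended Dixon-Szegö problems:

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal global-best PSO; returns the best position and value found."""
    random.seed(1)
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]               # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])   # cognitive pull
                           + c2 * r2 * (G[d] - X[i][d]))     # social pull
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest

# Sphere function as a stand-in for a Dixon-Szego-style test problem
best_x, best_f = pso(lambda x: sum(v * v for v in x), dim=3)
```

The same loop structure underlies the continuous-parameter genetic algorithm and differential evolution comparisons; only the position-update rule changes.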
122

A Computational Intelligence Approach to Clustering of Temporal Data

Georgieva, Kristina Slavomirova January 2015 (has links)
Temporal data is common in real-world datasets. Analysis of such data, for example by means of clustering algorithms, can be difficult due to its dynamic behaviour. There are various types of changes that may occur to clusters in a dataset. Firstly, data patterns can migrate between clusters, shrinking or expanding the clusters. Additionally, entire clusters may move around the search space. Lastly, clusters can split and merge. Data clustering, which is the process of grouping similar objects, is one approach to determine relationships among data patterns, but data clustering approaches can face limitations when applied to temporal data, such as difficulty tracking the moving clusters. This research aims to analyse the ability of particle swarm optimisation (PSO) and differential evolution (DE) algorithms to cluster temporal data. These algorithms experience two weaknesses when applied to temporal data. The first weakness is the loss of diversity, which refers to the fact that the population of the algorithm converges, becoming less diverse and, therefore, limiting the algorithm’s exploration capabilities. The second weakness, outdated memory, is only experienced by the PSO and refers to the previous personal best solutions found by the particles becoming obsolete as the environment changes. A data clustering algorithm that addresses these two weaknesses is necessary to cluster temporal data. This research describes various adaptations of PSO and DE algorithms for the purpose of clustering temporal data. The algorithms proposed aim to address the loss of diversity and outdated memory problems experienced by PSO and DE algorithms. These problems are addressed by combining approaches previously used for the purpose of dealing with temporal or dynamic data, such as repulsion and anti-convergence, with PSO and DE approaches used to cluster data. 
Six PSO algorithms are introduced in this research, namely the data clustering particle swarm optimisation (DCPSO), reinitialising data clustering particle swarm optimisation (RDCPSO), cooperative data clustering particle swarm optimisation (CDCPSO), multi-swarm data clustering particle swarm optimisation (MDCPSO), cooperative multi-swarm data clustering particle swarm optimisation (CMDCPSO), and elitist cooperative multi-swarm data clustering particle swarm optimisation (eCMDCPSO). Additionally, four DE algorithms are introduced, namely the data clustering differential evolution (DCDE), re-initialising data clustering differential evolution (RDCDE), dynamic data clustering differential evolution (DCDynDE), and cooperative dynamic data clustering differential evolution (CDCDynDE). The PSO and DE algorithms introduced require prior knowledge of the total number of clusters in the dataset. The total number of clusters in a real-world dataset, however, is not always known. For this reason, the best performing PSO and the best performing DE are compared. The CDCDynDE is selected as the winning algorithm and is then adapted to determine the optimal number of clusters dynamically. The resulting algorithm is the k-independent cooperative data clustering differential evolution (KCDCDynDE) algorithm, which was compared against the local network neighbourhood artificial immune system (LNNAIS), an artificial immune system (AIS) designed to cluster temporal data and determine the total number of clusters dynamically. It was determined that the KCDCDynDE performed the clustering task well for problems with frequently changing data, high dimensionality, and both pattern and cluster migration. / Dissertation (MSc)--University of Pretoria, 2015. / Computer Science / Unrestricted
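The DE-based clustering algorithms named in this abstract all build on the classic DE/rand/1/bin loop, with each candidate solution encoding a set of cluster centroids. A minimal sketch, assuming a quantisation-error fitness (mean distance of each pattern to its nearest centroid) on a toy static dataset — the thesis variants add re-initialisation, dynamism handling, and cooperation on top of this loop:

```python
import random

random.seed(4)
# Toy 2-D dataset: two well-separated clusters around (0, 0) and (5, 5)
data = [(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(20)] + \
       [(random.gauss(5, 0.3), random.gauss(5, 0.3)) for _ in range(20)]

def quant_err(ind, k=2):
    """Decode an individual as k centroids; mean distance to the nearest centroid."""
    cents = [(ind[2 * i], ind[2 * i + 1]) for i in range(k)]
    return sum(min(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for cx, cy in cents)
               for x, y in data) / len(data)

def de_cluster(np_=20, iters=200, F=0.5, CR=0.9, dim=4, lo=-1.0, hi=6.0):
    """Classic DE/rand/1/bin over flattened centroid vectors."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    fit = [quant_err(x) for x in pop]
    for _ in range(iters):
        for i in range(np_):
            a, b, c = random.sample([j for j in range(np_) if j != i], 3)
            jr = random.randrange(dim)
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (random.random() < CR or d == jr) else pop[i][d]
                     for d in range(dim)]
            ft = quant_err(trial)
            if ft <= fit[i]:                 # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]

centroids_flat, err = de_cluster()
```

With the two clusters this far apart, the surviving individuals place one centroid in each cluster; the dynamic variants re-run or perturb this loop as the data changes.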
123

Proposta de uma rede neural modular que seleciona um conjunto diferente de características por módulo / A modular neural network that selects a different set of features per module

SEVERO, Diogo da Silva 15 August 2013 (has links)
Artificial neural networks were inspired by biological neural networks, and the main similarities shared by both are the ability to process information in a parallel and distributed way, the presence of simple processing units, and the ability to learn from examples. However, artificial neural networks do not present a characteristic inherent to biological neural networks: modularization. In contrast to artificial neural networks, our brain has distinct specialized areas responsible for specific tasks such as vision, hearing, and speech. With the aim of bringing artificial neural networks even closer to biological neural networks, modular neural networks were proposed. Such networks take advantage of modularization to outperform simple neural networks when dealing with complex problems. A crucial concept related to the use of modular neural networks is task decomposition, which divides the original problem into several subproblems, each smaller and simpler to solve. Each subproblem is handled by a specific expert (a simple neural network). In solving its subproblem, each module makes use of the whole original set of features to train its expert. Nevertheless, it is to be expected that different modules require different features to perform their tasks. Thus, it is important to choose the features that best preserve the discriminative information between classes needed for each module's classification task. This work proposes a modular neural network architecture that selects a specific set of features per module, a topic little explored in the literature, since most research involving modular neural networks does not perform feature selection for each particular module. The feature selection procedure is a global optimization method based on binary particle swarm optimization (PSO). Another contribution of this work is a hybrid feature selection and weighting method, also based on binary PSO. Experiments were carried out on public datasets, and the results show that the proposed architecture achieved equal or better classification rates while using fewer features than modular neural networks that do not select features per module. Experiments with the hybrid feature selection and weighting method showed classification rates superior to those obtained by the comparison methods used in this work.
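The binary PSO underlying such per-module feature selection can be sketched as follows. Velocities are updated exactly as in continuous PSO, but each bit is resampled through a sigmoid of its velocity. The toy fitness below (class-mean separation minus a per-feature penalty, on synthetic data where only features 0 and 2 are informative) is an illustrative assumption, not the dissertation's objective:

```python
import math, random

random.seed(7)
# Toy data: features 0 and 2 separate the two classes; 1, 3, 4 are noise
A = [[random.gauss(m, 1) for m in (3, 0, 3, 0, 0)] for _ in range(30)]
B = [[random.gauss(m, 1) for m in (0, 0, 0, 0, 0)] for _ in range(30)]

def score(mask):
    """Class-mean separation over selected features, penalized per feature used."""
    if not any(mask):
        return -1e9
    sep = 0.0
    for d in range(5):
        if mask[d]:
            ma = sum(r[d] for r in A) / len(A)
            mb = sum(r[d] for r in B) / len(B)
            sep += (ma - mb) ** 2
    return sep - 0.5 * sum(mask)

def binary_pso(n=15, iters=60, w=0.7, c1=1.5, c2=1.5, dim=5):
    X = [[random.random() < 0.5 for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]
    pf = [score(x) for x in X]
    g = max(range(n), key=lambda i: pf[i])
    G, gf = P[g][:], pf[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d] + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (G[d] - X[i][d]))
                # bit d is 1 with probability sigmoid(velocity)
                X[i][d] = random.random() < 1 / (1 + math.exp(-V[i][d]))
            s = score(X[i])
            if s > pf[i]:
                pf[i], P[i] = s, X[i][:]
                if s > gf:
                    gf, G = s, X[i][:]
    return G, gf

mask, best = binary_pso()
```

On this toy problem the swarm should settle on the two informative features, since each noise feature costs more in penalty than it adds in separation.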
124

Computational Evacuation Models for Populations with Heterogeneous Mobility Requirements

Hata, John Myerly 09 September 2021 (has links)
No description available.
125

Spacecraft Trajectory Optimization Suite (STOpS): Optimization of Low-Thrust Interplanetary Spacecraft Trajectories Using Modern Optimization Techniques

Sheehan, Shane P 01 September 2017 (has links)
The work presented here is a continuation of Spacecraft Trajectory Optimization Suite (STOpS), a master’s thesis written by Timothy Fitzgerald at California Polytechnic State University, San Luis Obispo. Low-thrust spacecraft engines are becoming much more common due to their high efficiency, especially for interplanetary trajectories. The version of STOpS presented here optimizes low-thrust trajectories using the Island Model Paradigm with three stochastic evolutionary algorithms: the genetic algorithm, differential evolution, and particle swarm optimization. While the algorithms used here were designed for the original STOpS, they were modified for this work. The low-thrust STOpS was successfully validated with two trajectory problems and their known near-optimal solutions. The first verification case was a constant-thrust, variable-time Earth orbit to Mars orbit transfer where the thrust was 3.787 Newtons and the time was approximately 195 days. The second verification case was a variable-thrust, constant-time Earth orbit to Mercury orbit transfer with the thrust coming from a solar electric propulsion model equation and the time being 355 days. Low-thrust STOpS found similar near-optimal solutions in each case. The final result of this work is a versatile MATLAB tool for optimizing low-thrust interplanetary trajectories.
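The Island Model Paradigm can be sketched as a ring of sub-populations that exchange their best members after each epoch. For brevity this illustrative sketch runs DE/rand/1/bin on every island, whereas STOpS mixes the genetic algorithm, differential evolution, and particle swarm optimization; the ring migration scheme and all parameters are assumptions:

```python
import random

def island_model(f, dim, n_islands=3, island_size=10, epochs=10, gens=20,
                 lo=-5.0, hi=5.0, F=0.8, CR=0.9):
    """Ring-topology island model: each island evolves independently with
    DE/rand/1/bin, then its best individual migrates to the next island."""
    random.seed(3)
    islands = [[[random.uniform(lo, hi) for _ in range(dim)]
                for _ in range(island_size)] for _ in range(n_islands)]

    def evolve(pop):
        for _ in range(gens):
            for i in range(len(pop)):
                a, b, c = random.sample([j for j in range(len(pop)) if j != i], 3)
                jr = random.randrange(dim)
                trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                         if (random.random() < CR or d == jr) else pop[i][d]
                         for d in range(dim)]
                if f(trial) <= f(pop[i]):
                    pop[i] = trial
        return pop

    for _ in range(epochs):
        islands = [evolve(p) for p in islands]
        # migration: best of island k-1 replaces the worst of island k (ring)
        bests = [min(p, key=f) for p in islands]
        for k, p in enumerate(islands):
            worst = max(range(len(p)), key=lambda i: f(p[i]))
            p[worst] = bests[(k - 1) % n_islands][:]
    return min((min(p, key=f) for p in islands), key=f)

best = island_model(lambda x: sum(v * v for v in x), dim=3)
```

The appeal of the paradigm is that each island can run a different optimizer and communicate only through sparse migration, which preserves diversity across islands.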
126

Vícepásmová magnetická anténa / Multiband magnetic antenna

Ryšánek, Martin January 2010 (has links)
The thesis deals with a parametric analysis of a multiband magnetic antenna and explains the principle of its operation. An optimization of the antenna by particle swarm optimization is then performed in order to meet impedance-matching requirements in the prescribed frequency bands.
127

Evoluční algoritmy / Evolutionary Algorithms

Szöllösi, Tomáš January 2012 (has links)
This thesis compares selected evolutionary algorithms in terms of their success rate and computational cost. It discusses the basic principles and concepts of evolutionary algorithms used for optimization problems. The author implemented the selected evolutionary algorithms and subsequently tested them on various test functions under exactly specified input conditions. Finally, the algorithms were compared and the results obtained for different settings were evaluated.
128

Optimalizace investičního portfolia pomocí metaheuristiky / Portfolio Optimization Using Metaheuristics

Haviar, Martin January 2015 (has links)
This thesis deals with the design and implementation of an investment model that applies methods of Post-modern portfolio theory. The particle swarm optimization (PSO) metaheuristic was used for portfolio optimization, and its parameters were analyzed in several experiments. Johnson's SU distribution was used to estimate future returns, as it proved to be the best of the analyzed distributions. The result is a software application written in Python, which is tested for stability and for model performance in extreme situations.
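A PSO-over-portfolio-weights setup of this kind can be sketched as below. The synthetic returns, the long-only normalization of weights, and the Sortino-style downside-risk objective are illustrative assumptions; the actual model estimates future returns with Johnson's SU distribution rather than the Gaussian draws used here:

```python
import random

random.seed(0)
# Synthetic daily returns for three assets; asset 0 has the best risk/return profile
rets = [[random.gauss(mu, sd) for _ in range(250)]
        for mu, sd in ((0.0008, 0.010), (0.0003, 0.020), (0.0001, 0.015))]

def neg_sortino(w):
    """Negative Sortino-style ratio: mean return over downside deviation."""
    s = sum(abs(x) for x in w) or 1e-12
    w = [abs(x) / s for x in w]                      # long-only, weights sum to 1
    port = [sum(wi * r[t] for wi, r in zip(w, rets)) for t in range(250)]
    mean = sum(port) / len(port)
    downside = (sum(min(0.0, p) ** 2 for p in port) / len(port)) ** 0.5
    return -mean / (downside + 1e-12)

def pso(f, dim, n=20, iters=100, w_in=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO minimizing f over unconstrained weight vectors."""
    X = [[random.random() for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]
    pf = [f(x) for x in X]
    g = min(range(n), key=lambda i: pf[i])
    G, gf = P[g][:], pf[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w_in * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            s = f(X[i])
            if s < pf[i]:
                pf[i], P[i] = s, X[i][:]
                if s < gf:
                    gf, G = s, X[i][:]
    return G, gf

raw, best = pso(neg_sortino, dim=3)
norm = sum(abs(x) for x in raw)
weights = [abs(x) / norm for x in raw]
```

Handling the sum-to-one constraint by normalizing inside the objective keeps the swarm unconstrained, which is a common trick when applying PSO to portfolio weights.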
129

Akcelerace částicových rojů PSO pomocí GPU / Acceleration of Particle Swarm Optimization Using GPUs

Krézek, Vladimír January 2012 (has links)
This work deals with Particle Swarm Optimization (PSO), a technique capable of solving complex problems. It can be applied to hard combinatorial problems (the traveling salesman problem, knapsack problems), the design of integrated circuits and antennas, and fields such as biomedicine, robotics, artificial intelligence, and finance. Although the PSO algorithm is very efficient, the time required to find suitable solutions to real problems often makes such tasks intractable. The goal of this work is to accelerate the algorithm using graphics processors (GPUs), which offer high computing potential at a favorable price and size. The Boolean satisfiability problem (SAT) was chosen to verify and benchmark the implementation. As SAT belongs to the class of NP-complete problems, any reduction of the solution time may broaden the class of tractable problems and yield new and interesting knowledge.
130

Akcelerace částicových rojů PSO pomocí GPU / Particle Swarm Optimization on GPUs

Záň, Drahoslav January 2013 (has links)
This thesis deals with the population-based stochastic optimization technique PSO (Particle Swarm Optimization) and its acceleration. This simple but very effective technique is designed for solving difficult multidimensional problems in a wide range of applications. The aim of this work is to develop a parallel implementation of the algorithm with an emphasis on accelerating the search for a solution. For this purpose, a graphics card (GPU) providing massive performance was chosen. To evaluate the benefits of the proposed implementation, CPU and GPU implementations were created for a problem derived from the well-known NP-hard knapsack problem. The GPU application achieves an average speedup of 5x, and a maximum of almost 10x, over the optimized CPU application on which it is based.
