1 
Software Architecture Design for Supporting Optimization Algorithm Designs
Zhong, Dajun 05 September 2008 (has links)
In this research, we designed and implemented a software architecture to facilitate the implementation of optimization search software. We provided the design of a module interaction graph, including modules, ports, and channels. Designers can map the solving algorithms of subproblems onto behavioral designs in the corresponding modules, and finally integrate the modules with channels. Since optimization search algorithms may evolve one to several solutions at the same time, we designed a solution-set organization to support the designer-planned search strategy. During the optimization process, solutions or subsolutions must be evaluated and analyzed. Because excessive execution time is commonly spent on replicated evaluation, we applied dynamic programming to reuse evaluation results and reduce replicated evaluation time. Lastly, when evolving new solutions, usually only a small number of decisions are changed. We designed a hierarchical decision representation and maintenance operations to reduce replication of common parts among solutions and further enhance execution speed.
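The dynamic-programming reuse of evaluation results described above amounts to memoizing the evaluation of subsolutions. A minimal sketch of the idea (the objective function and values are illustrative, not from the thesis):

```python
# Sketch: caching subsolution evaluations so that configurations seen
# before are not re-evaluated. `evaluate` stands in for an expensive
# objective; decision sets are hashable tuples.
from functools import lru_cache

calls = 0  # counts how many real evaluations were performed

@lru_cache(maxsize=None)
def evaluate(decisions):
    """Expensive evaluation of an immutable tuple of decision values."""
    global calls
    calls += 1
    return sum(d * d for d in decisions)

a = evaluate((1, 2, 3))   # first evaluation: computed
b = evaluate((1, 2, 3))   # shared subsolution: served from the cache
```

When two solutions share a common subsolution, the shared part is evaluated only once; only the changed decisions trigger new work.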

2 
Developing agile motor skills on virtual and real humanoids
Ha, Sehoon 07 January 2016 (has links)
Demonstrating strength and agility on virtual and real humanoids has been an important goal in computer graphics and robotics. However, developing physics-based controllers for various agile motor skills requires a tremendous amount of prior knowledge and manual labor due to the complex mechanisms of the motor skills. The focus of this dissertation is to develop a set of computational tools to expedite the design process of physics-based controllers that can execute a variety of agile motor skills on virtual and real humanoids. Instead of designing controllers directly on real humanoids, this dissertation takes an approach that develops appropriate theories and models in virtual simulation and systematically transfers the solutions to hardware systems.
The algorithms and frameworks in this dissertation span various topics, from specific physics-based controllers to general learning frameworks. We first present an online algorithm for controlling falling and landing motions of virtual characters. The proposed algorithm is effective and efficient enough to generate falling motions for a wide range of arbitrary initial conditions in real time. Next, we present a robust falling strategy for real humanoids that can manage a wide range of perturbations by planning the optimal contact sequences. We then introduce an iterative learning framework, inspired by human learning techniques, to easily design various agile motions. The proposed framework is complemented by novel algorithms to efficiently optimize control parameters for the target tasks, especially when they have many constraints or parameterized goals. Finally, we introduce an iterative approach for exporting simulation-optimized control policies to robot hardware to reduce the number of hardware experiments, which incur high costs and labor.

3 
Evolving Cuckoo Search: From single-objective to multi-objective
Lidberg, Simon January 2011 (has links)
This thesis aims to produce a novel multi-objective algorithm based on Cuckoo Search by Dr. Xin-She Yang. Cuckoo Search is a promising nature-inspired metaheuristic optimization algorithm, which is currently only able to solve single-objective optimization problems. After an introduction, a number of theoretical points are presented as a basis for the decision of which algorithms to hybridize Cuckoo Search with. These are then reviewed in detail and verified against current benchmark algorithms to evaluate their efficiency. To test the proposed algorithm in a new setting, a real-world combinatorial problem is used. The proposed algorithm is then used as an optimization engine for a simulation-based system and compared against a current implementation.
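For readers unfamiliar with the base algorithm, a minimal single-objective Cuckoo Search can be sketched as follows. A simplified heavy-tailed step stands in for a full Lévy-flight implementation, and the parameter values are illustrative defaults, not settings from the thesis:

```python
# Minimal single-objective Cuckoo Search sketch: each cuckoo lays a new
# solution via a heavy-tailed random walk, and a fraction pa of the
# worst nests is abandoned each generation.
import random

def cuckoo_search(f, dim, bounds, n_nests=15, pa=0.25, iters=200, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        for i in range(n_nests):
            # heavy-tailed step (crude stand-in for a Levy flight)
            step = [0.01 * (hi - lo) * rng.gauss(0, 1)
                    / abs(rng.gauss(0, 1)) ** 0.5 for _ in range(dim)]
            cand = [min(hi, max(lo, x + s)) for x, s in zip(nests[i], step)]
            fc = f(cand)
            j = rng.randrange(n_nests)
            if fc < fit[j]:            # replace a random nest if better
                nests[j], fit[j] = cand, fc
        # abandon the worst fraction pa of nests
        order = sorted(range(n_nests), key=lambda k: fit[k])
        for k in order[int(n_nests * (1 - pa)):]:
            nests[k] = [rng.uniform(lo, hi) for _ in range(dim)]
            fit[k] = f(nests[k])
    best = min(range(n_nests), key=lambda k: fit[k])
    return nests[best], fit[best]

best_x, best_f = cuckoo_search(lambda x: sum(v * v for v in x),
                               dim=2, bounds=(-5, 5))
```

Extending this single-objective loop to multiple objectives is exactly the gap the thesis addresses via hybridization.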

4 
Particle Swarm Optimization Algorithm for Multiuser Detection in DS-CDMA System
Fang, Pinghau 31 July 2010 (has links)
In direct-sequence code division multiple access (DS-CDMA) systems, heuristic optimization algorithms for multiuser detection include genetic algorithms (GA) and the simulated annealing (SA) algorithm. In this thesis, we use particle swarm optimization (PSO) algorithms to solve the optimization problem of multiuser detection (MUD). The PSO algorithm has several advantages, such as fast convergence, low computational complexity, and good performance in searching for the optimum solution. In order to enhance the performance and reduce the number of parameters, we propose two modified PSO algorithms: inertia-weighting-controlled PSO (WPSO) and reduced-parameter PSO (RPSO). Simulation results show that the performance of our proposed algorithms can approach that of the optimal solution. Furthermore, our proposed algorithms have faster convergence and lower complexity when compared with other conventional algorithms.
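The inertia-weighting idea behind variants such as WPSO can be illustrated with a generic PSO loop whose inertia weight decays linearly over the run. The parameter values below are common textbook defaults, not the thesis settings:

```python
# Generic PSO with a linearly decreasing inertia weight w: large w early
# (exploration), small w late (exploitation).
import random

def pso(f, dim, bounds, n=20, iters=100, w0=0.9, w1=0.4,
        c1=2.0, c2=2.0, seed=3):
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]            # per-particle best positions
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]     # global best
    for t in range(iters):
        w = w0 - (w0 - w1) * t / iters     # inertia decays over time
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

x, fx = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-10, 10))
```

In an MUD setting the continuous positions would be mapped to candidate bit vectors and `f` would be the detector's likelihood metric; that mapping is omitted here.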

5 
A Hybrid Algorithm for the Longest Common Subsequence of Multiple Sequences
Weng, Hsiangyi 19 August 2009 (has links)
The k-LCS problem is to find the longest common subsequence (LCS) of k input sequences. It becomes difficult when the number of input sequences is large.
In the past, researchers focused on finding the LCS of two sequences (2-LCS). However, no good algorithm for finding the optimal solution of k-LCS has been found so far. To solve the k-LCS problem, in this thesis, we first propose a mixed algorithm that combines a heuristic algorithm, the genetic algorithm (GA), and the ant colony optimization (ACO) algorithm.
Then, we propose an enhanced ACO (EACO) algorithm, composed of the heuristic algorithm and the matching pair algorithm (MPA). In our experiments, we compare our algorithms with the expansion algorithm, the best-next-for-maximal-available-symbol algorithm, the GA, and the ACO algorithm. The experimental results on several sets of DNA and protein sequences show that our EACO algorithm outperforms the other algorithms in the lengths of the solutions.
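The 2-LCS base case mentioned above has a classic dynamic-programming solution, O(mn) for sequences of lengths m and n, which a k-sequence heuristic can build on:

```python
# Classic DP for the LCS of two sequences (the 2-LCS base case):
# dp[i][j] holds the LCS length of a[:i] and b[:j].
def lcs(a, b):
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # backtrack to recover one LCS
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

s = lcs("GATTACA", "GCATGCU")
```

The table grows exponentially with the number of sequences (O(n^k) cells for k sequences), which is why heuristic methods such as GA and ACO become attractive for k-LCS.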

6 
A Dynamic Taxi Ride Sharing System Using Particle Swarm Optimization
Silwal, Shrawani 30 April 2020 (has links)
No description available.

7 
Algoritmus s pravděpodobnostním směrovým vektorem / Optimization Algorithm with Probability Direction Vector
Pohl, Jan January 2015 (has links)
This dissertation presents an optimization algorithm with a probability direction vector. In its basic form, the algorithm belongs to the category of stochastic optimization algorithms: it uses a statistically biased perturbation of an individual through the state space. The work also presents a modification of the basic idea into the form of a swarm optimization algorithm. This approach contains a form of stochastic cooperation, which is one of the new ideas of the algorithm: the population of individuals cooperates only through modification of the probability direction vector, not directly. Statistical tests are used to compare the results of the designed algorithms with the commonly used algorithms Simulated Annealing and SOMA. This part of the dissertation also presents experimental data from other optimization problems. The dissertation ends with a chapter that seeks the optimal set of control variables for each designed algorithm.
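As a rough illustration of the general idea only (not the dissertation's exact algorithm), a direction vector can bias a stochastic perturbation and be reinforced whenever a move improves the objective:

```python
# Illustrative sketch: a direction vector p biases per-coordinate moves;
# successful moves reinforce the direction, failures decay it toward
# neutral. Names and update rules here are assumptions for illustration.
import random

def pdv_search(f, x0, step=0.5, lr=0.3, iters=300, seed=7):
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    p = [0.0] * len(x0)          # direction vector, each entry in [-1, 1]
    for _ in range(iters):
        move = [step * (p[d] + rng.uniform(-1, 1)) for d in range(len(x))]
        cand = [xi + mi for xi, mi in zip(x, move)]
        fc = f(cand)
        if fc < fx:
            # reinforce the direction components that worked
            p = [max(-1.0, min(1.0, pi + lr * (1 if mi > 0 else -1)))
                 for pi, mi in zip(p, move)]
            x, fx = cand, fc
        else:
            p = [pi * (1 - lr) for pi in p]   # decay toward neutral
    return x, fx

x, fx = pdv_search(lambda v: sum(t * t for t in v), [4.0, -3.0])
```

The swarm variant described in the dissertation would share a single such vector across a population instead of perturbing one individual.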

8 
Bio-inspired optimization algorithms for smart antennas
Zuniga, Virgilio January 2011 (has links)
This thesis studies the effectiveness of bio-inspired optimization algorithms in controlling adaptive antenna arrays. Smart antennas are able to automatically extract the desired signal from interferer signals and external noise. The angular pattern depends on the number of antenna elements, their geometrical arrangement, and their relative amplitudes and phases. In the present work, different antenna geometries are tested and compared when their array weights are optimized by different techniques. First, the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) algorithms are used to find the best set of phases between antenna elements to obtain a desired antenna pattern. This pattern must meet several constraints, for example maximizing the power of the main lobe in a desired direction while keeping nulls towards interferers. A series of experiments shows that PSO achieves better and more consistent radiation patterns than the GA in terms of the total area of the antenna pattern. A second set of experiments uses the Signal-to-Interference-plus-Noise Ratio as the fitness function of the optimization algorithms to find the array weights that configure a rectangular array. The results suggest a performance advantage from reducing the number of iterations taken by the PSO, thus lowering the computational cost. During the development of this thesis, it was found that the initial states and particular parameters of the optimization algorithms affected their overall outcome. The third part of this work deals with the meta-optimization of these parameters to achieve the best results independently of the particular initial parameters. Four algorithms were studied: Genetic Algorithm, Particle Swarm Optimization, Simulated Annealing, and Hill Climbing. It was found that the meta-optimization algorithms Local Unimodal Sampling and Pattern Search performed best at setting the initial parameters and obtaining the best performance from the bio-inspired methods studied.
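The quantity the phase search shapes is the array factor. A small sketch for a uniform linear array with half-wavelength element spacing shows how per-element phases steer the main lobe; the steering example and function names are illustrative, not from the thesis:

```python
# Array factor of an N-element uniform linear array: the magnitude of
# the coherent sum of element contributions at angle theta, given the
# per-element phase shifts the optimizer would tune.
import cmath, math

def array_factor(phases, theta_deg, d_over_lambda=0.5):
    """|AF| at angle theta_deg for per-element phase shifts (radians)."""
    theta = math.radians(theta_deg)
    k_d = 2 * math.pi * d_over_lambda          # electrical spacing
    af = sum(cmath.exp(1j * (n * k_d * math.cos(theta) + ph))
             for n, ph in enumerate(phases))
    return abs(af)

# Progressive phases chosen to steer the main beam toward 60 degrees:
n_el = 8
steer = math.radians(60)
phases = [-n * 2 * math.pi * 0.5 * math.cos(steer) for n in range(n_el)]
main = array_factor(phases, 60)    # coherent sum: |AF| = n_el
off = array_factor(phases, 120)    # mirror angle: contributions cancel
```

A GA or PSO individual would encode the `phases` list (and possibly amplitudes), with a fitness built from samples of `array_factor` at the desired and null directions.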

9 
Solution of Large-scale Structured Optimization Problems with Schur-complement and Augmented Lagrangian Decomposition Methods
Jose S Rodriguez (6760907) 02 August 2019 (has links)
<pre>In this dissertation we develop numerical algorithms and software tools to facilitate parallel solution of nonlinear programming (NLP) problems. In particular, we address large-scale, block-structured problems with an intrinsically decomposable configuration. These problems arise in a great number of engineering applications, including parameter estimation, optimal control, network optimization, and stochastic programming. The structure of these problems can be leveraged by optimization solvers to accelerate solutions and overcome memory limitations, and we propose variants of two classes of optimization algorithms: augmented Lagrangian (AL) schemes and Schur-complement interior-point methods.</pre>
<pre><br></pre>
<pre>The convergence properties of augmented Lagrangian decomposition schemes like the alternating direction method of multipliers (ADMM) and progressive hedging (PH) are well established for convex optimization but convergence guarantees in nonconvex settings are still poorly understood. In practice, however, ADMM and PH often perform satisfactorily in complex nonconvex NLPs. In this work, we study connections between the method of multipliers (MM), ADMM, and PH to derive benchmarking metrics that explain why PH and ADMM work in practice. We illustrate the concepts using challenging dynamic optimization problems. Our exposition seeks to establish more formalism in benchmarking ADMM, PH, and AL schemes and to motivate algorithmic improvements.</pre>
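The flavor of the AL schemes discussed here can be seen in a tiny scaled-form ADMM on a two-block consensus problem with a known closed-form solution. This is illustrative only, not the dissertation's solver:

```python
# Tiny scaled-form ADMM on the two-block consensus problem
#   min (x - a)^2 + (z - b)^2   s.t.  x = z
# whose solution is x = z = (a + b) / 2.
def admm_consensus(a, b, rho=1.0, iters=100):
    x = z = u = 0.0
    for _ in range(iters):
        # x-update: argmin_x (x - a)^2 + (rho/2)(x - z + u)^2
        x = (2 * a + rho * (z - u)) / (2 + rho)
        # z-update: argmin_z (z - b)^2 + (rho/2)(x - z + u)^2
        z = (2 * b + rho * (x + u)) / (2 + rho)
        u = u + x - z                  # scaled dual (multiplier) update
    return x, z

x, z = admm_consensus(1.0, 3.0)        # converges to x = z = 2
```

Each update has the separable structure that makes ADMM and PH attractive for block-structured NLPs: the x- and z-subproblems can be solved independently, coordinated only through the dual variable `u`.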
<pre><br></pre>
<pre>The effectiveness of nonlinear interior-point solvers for large-scale problems relies heavily on the solution of the underlying linear algebra systems. The Schur-complement decomposition is very effective for parallelizing the solution of linear systems with modest coupling. However, for systems with a large number of coupling variables, the Schur-complement method does not scale favorably. We implement an approach that uses a Krylov solver (GMRES) preconditioned with ADMM to solve the block-structured linear systems that arise in the interior-point method. We show that this ADMM-GMRES approach overcomes the well-known scalability issues of the Schur decomposition.</pre>
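The Schur-complement decomposition referred to above can be sketched for a block-arrow system: each diagonal block is solved independently (the parallelizable part), and the coupling variables are recovered from a small dense Schur complement. A NumPy sketch under illustrative random data:

```python
# Schur-complement solve for a block-arrow system
#   [A1  0  B1] [x1]   [r1]
#   [ 0 A2  B2] [x2] = [r2]
#   [B1' B2' C] [ y]   [ry]
# Per-block solves are independent; coupling variables y come from
# S = C - sum_i Bi' Ai^{-1} Bi.
import numpy as np

def schur_solve(As, Bs, C, rs, ry):
    S = C.astype(float).copy()
    g = ry.astype(float).copy()
    for A, B, r in zip(As, Bs, rs):
        AinvB = np.linalg.solve(A, B)      # block solves, parallelizable
        Ainvr = np.linalg.solve(A, r)
        S -= B.T @ AinvB                   # accumulate Schur complement
        g -= B.T @ Ainvr
    y = np.linalg.solve(S, g)              # small coupled system
    xs = [np.linalg.solve(A, r - B @ y) for A, B, r in zip(As, Bs, rs)]
    return xs, y

rng = np.random.default_rng(0)
As = [np.eye(3) * 4 + rng.random((3, 3)) * 0.1 for _ in range(2)]
Bs = [rng.random((3, 2)) for _ in range(2)]
C = np.eye(2) * 5
rs = [rng.random(3) for _ in range(2)]
ry = rng.random(2)
xs, y = schur_solve(As, Bs, C, rs, ry)
```

The scaling issue noted above is visible here: `S` is dense with dimension equal to the number of coupling variables, so forming and factorizing it dominates when the coupling is large, which motivates the ADMM-preconditioned GMRES alternative.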
<pre><br></pre>
<pre>One important drawback of decomposition approaches like ADMM and PH is their convergence rate. Unlike Schur-complement interior-point algorithms, which have superlinear convergence, augmented Lagrangian approaches typically exhibit linear and sublinear rates. We exploit connections between ADMM and the Schur-complement decomposition to derive an accelerated version of ADMM. Specifically, we study the effectiveness of applying a Newton-Raphson algorithm to compute multiplier estimates for augmented Lagrangian methods. Using two-stage stochastic programming problems, we demonstrate that our multiplier update achieves convergence in fewer iterations for MM on general nonlinear problems. In the case of ADMM, the Newton update significantly reduces the number of subproblem solves for convex quadratic programs (QPs). Moreover, we show that using Newton multiplier updates makes the method robust to the selection of the penalty parameter.</pre>
<pre><br></pre>
<pre>Traditionally, state-of-the-art optimization solvers are implemented in low-level programming languages. In our experience, the development of decomposition algorithms in these frameworks is challenging: they present a steep learning curve and can slow the development and testing of new numerical algorithms. To mitigate these challenges, we developed PyNumero, a new open-source framework implemented in Python and C++. The package seeks to facilitate the development of algorithms for large-scale optimization within a high-level programming environment while minimizing the computational burden of using Python. The efficiency of PyNumero is illustrated by implementing algorithms for problems arising in stochastic programming and optimal control. Timing results are presented for both serial and parallel implementations. Our computational studies demonstrate that, with the appropriate balance between compiled code and Python, efficient implementations of optimization algorithms are achievable in these high-level languages.</pre>

10 
Algoritmo de otimização bayesiano com detecção de comunidades / Bayesian optimization algorithm with community detection
Crocomo, Márcio Kassouf 02 October 2012 (has links)
Estimation of Distribution Algorithms (EDAs) represent a research front in Evolutionary Computation that has shown promising results in dealing with complex, large-scale problems. In this context, the Bayesian Optimization Algorithm (BOA) stands out, using a multivariate probabilistic model (represented by a Bayesian network) to generate new solutions at each iteration. Based on BOA and on the study of community-detection algorithms (used to improve the constructed multivariate models), two new algorithms are proposed, named CD-BOA and StrOp. Both are shown to have significant advantages over BOA. CD-BOA proves more flexible than BOA, being more robust to variations of the input parameter values, which makes it easier to deal with a greater diversity of real-world problems. Unlike CD-BOA and BOA, StrOp shows that detecting communities in a Bayesian network can more adequately model decomposable problems, restructuring them into simpler subproblems that can be solved by a greedy search, yielding a solution to the original problem that may be optimal in the case of perfectly decomposable problems, or an approximation otherwise. A new resampling technique for EDAs (named REDA) is also proposed. This technique produces more representative probabilistic models, significantly improving the performance of CD-BOA and StrOp. In general, it is shown that, for the scenarios tested, CD-BOA and StrOp require less running time than BOA. This is verified both experimentally and by analysis of the computational complexity of the algorithms. The main features of these algorithms are evaluated for solving different problems, thus mapping their contributions to the field of Evolutionary Computation.
