  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Probe Design Using Multi-objective Genetic Algorithm

Lin, Fang-lien 22 August 2005 (has links)
DNA microarrays are widely used techniques in molecular biology and DNA computing. Before a microarray experiment can be performed, a set of DNA subsequences called probes, complementary to the target genes of interest, must be found, and the experiment's reliability depends heavily on the quality of the probe sequences. Probes must therefore be chosen carefully from the target sequences. A new probe design strategy using a multi-objective genetic algorithm is proposed. The proposed algorithm finds a set of suitable probes more efficiently and uses a suffix-tree model to speed up the specificity-constraint checking. In silico experiments show that the proposed algorithm finds probes for DNA microarrays that not only obey the design properties but are also specific.
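The multi-objective selection at the heart of such a probe designer can be sketched with Pareto dominance over two toy objectives, GC-content deviation and longest homopolymer run. These objectives and the helper names are illustrative only; the thesis's actual criteria and its suffix-tree specificity check are not reproduced here.

```python
def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def max_run(seq):
    # length of the longest run of identical bases
    best = run = 1
    for a, b in zip(seq, seq[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

def objectives(probe):
    # Two toy objectives, both minimized: deviation of GC content from 50%,
    # and the longest homopolymer run (long runs hurt hybridization quality).
    return (abs(gc_content(probe) - 0.5), max_run(probe))

def dominates(a, b):
    # a Pareto-dominates b: no worse in every objective, better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(probes):
    scored = [(p, objectives(p)) for p in probes]
    return [p for p, s in scored
            if not any(dominates(t, s) for _, t in scored if t != s)]
```

A multi-objective GA keeps such a non-dominated set as its elite rather than collapsing the criteria into one weighted score.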
2

Bi-objective multi-assignment capacitated location-allocation problem

Maach, Fouad 01 June 2007 (has links)
Location-assignment optimization problems correspond to a wide range of real situations, such as factory network design. Most previous work, however, seeks only to minimize a cost function. Traffic incidents routinely impact the performance and safety of the supply; they cannot be avoided entirely and must be taken into account. One way to account for them is to design a network on which multiple assignments are performed. Specifically, the problem we focus on deals with power supply, which has become an increasingly complex and crucial question. Many international companies have customers located all around the world, usually one customer per country. At the other end of the scale, power extraction or production takes place at several sites spread across several continents and seas. A strong desire to become less energy-dependent has led many governments to increase the diversity of their supply locations. For each kind of energy, many countries ideally expect to deal with two or three supply sites. Since a decrease in power supply can have serious consequences for the economic performance of a whole country, and since the reliability of all sites is considered very similar, companies prefer to balance production equally among the sites; sharing the demand equally between the two or three sites assigned to a given area is the most common arrangement. Although the cost of the network matters, it is also crucial to balance the load between the sites so that no site becomes more important than the others for a given area. If an accident occurs at a site, or technical problems prevent it from satisfying its assigned demand, the overall power supply can still be ensured by the one or two remaining sites.
It is common to assign a cost per open power plant, plus a cost that depends on the distance between the plant or extraction point and the customer. On the whole, companies concerned with the quality of their power supply must find a good trade-off between this factor and their overall operating cost. The same situation arises for companies that supply power at the national scale. The number of areas, as well as the number of potential sites, can reach 100; however, the targeted size of problem to be solved is 50. This thesis focuses on devising an efficient methodology to provide all the solutions of this bi-objective problem. The proposal investigates closely related problems in order to identify the most relevant approaches to this atypical problem. This work allows us to present one exact method and an evolutionary algorithm that may provide a good answer to the problem. / Master of Science
3

Využití umělých neuronových sítí k urychlení evolučních algoritmů / Utilizing artificial neural networks to accelerate evolutionary algorithms

Wimberský, Antonín January 2011 (has links)
In the present work, we study the possibility of using artificial neural networks to accelerate evolutionary algorithms. The improvement consists in reducing the number of calls to the fitness function, whose evaluation is, for some kinds of optimization problems, very time-consuming and expensive. We use a neural network as a regression model that estimates fitness during a run of the evolutionary algorithm. Alongside the regression model, we also use the real fitness function, with which we re-evaluate individuals selected according to a strategy chosen in advance. The individuals re-evaluated with the real fitness function are then used to improve the regression model. Because a significant number of individuals are evaluated only with the regression model, the number of calls to the real fitness function needed to find a good solution of the optimization problem is substantially reduced.
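A minimal sketch of the surrogate-assisted loop this abstract describes, with a nearest-neighbour regressor standing in for the thesis's neural network. All names, parameters, and the sphere test function are illustrative assumptions, not the author's implementation.

```python
import random

def real_fitness(x):
    # expensive objective (sphere function as a stand-in), minimized
    return sum(v * v for v in x)

class NearestNeighbourSurrogate:
    """Cheap regression model standing in for the thesis's neural network."""
    def __init__(self):
        self.archive = []                      # (individual, true fitness)
    def fit(self, x, y):
        self.archive.append((x, y))
    def predict(self, x):
        # estimate = true fitness of the closest archived individual
        _, y = min(self.archive,
                   key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
        return y

def surrogate_ea(dim=3, pop=12, gens=25, reeval=3, seed=1):
    rng = random.Random(seed)
    model = NearestNeighbourSurrogate()
    evaluated = []                             # individuals with true fitness
    for _ in range(pop):
        ind = [rng.uniform(-5, 5) for _ in range(dim)]
        evaluated.append((ind, real_fitness(ind)))
        model.fit(*evaluated[-1])
    calls = pop
    for _ in range(gens):
        parents = [min(rng.sample(evaluated, 2), key=lambda p: p[1])[0]
                   for _ in range(pop)]
        children = [[v + rng.gauss(0, 0.3) for v in p] for p in parents]
        children.sort(key=model.predict)       # cheap surrogate ranking
        for ind in children[:reeval]:          # re-evaluate only the best few
            y = real_fitness(ind)              # ... with the real fitness
            calls += 1
            model.fit(ind, y)                  # and refine the model with them
            evaluated.append((ind, y))
        evaluated = sorted(evaluated, key=lambda p: p[1])[:pop]
    return min(evaluated, key=lambda p: p[1])[1], calls
```

With these settings the real fitness is called 12 + 25 x 3 = 87 times, versus 12 + 25 x 12 = 312 if every offspring were evaluated exactly, which is the saving the abstract refers to.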
4

Statistical data compression by optimal segmentation. Theory, algorithms and experimental results.

Steiner, Gottfried 09 1900 (has links) (PDF)
The work deals with statistical data compression, or data reduction, by a general class of classification methods. The compression represents the data set by a partition or by some typical points (called prototypes). The optimization problems are related to minimum-variance partitions and principal point problems. A fixpoint method and an adaptive approach are applied to solve these problems. The work presents the theoretical background of the optimization problems and lists pseudo-code for the numerical solution of the data compression. The main part concentrates on practical questions of carrying out a data compression: determining a suitable number of representative points, choosing an objective function, establishing an adjacency structure, and improving the fixpoint algorithm. The performance of the proposed methods and algorithms is compared and evaluated experimentally, and numerous examples deepen the understanding of the applied methods. (author's abstract)
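A fixpoint method for minimum-variance partitions alternates assignment and prototype update until nothing changes. The one-dimensional sketch below is an illustration of that general scheme, not the author's pseudo-code.

```python
import random

def fixpoint_prototypes(data, k=2, iters=50, seed=0):
    """Lloyd-style fixpoint iteration: assign points to their nearest
    prototype, move each prototype to its group mean, repeat until stable."""
    rng = random.Random(seed)
    protos = rng.sample(data, k)
    for _ in range(iters):
        # assignment step: each point goes to its nearest prototype
        groups = [[] for _ in range(k)]
        for x in data:
            j = min(range(k), key=lambda i: (x - protos[i]) ** 2)
            groups[j].append(x)
        # update step: prototypes become the group means (fixpoint condition)
        new = [sum(g) / len(g) if g else protos[i] for i, g in enumerate(groups)]
        if new == protos:
            break
        protos = new
    return sorted(protos)
```

At the fixpoint, each prototype is the mean of its own cell, which is exactly the stationarity condition of the minimum-variance (principal point) problem.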
5

Exon Primers Design Using Multiobjective Genetic Algorithm

Huang, Erh-chien 29 August 2005 (has links)
Exons are expressed DNA sequences. A DNA sequence containing a gene has both exons and introns; during transcription and translation the introns are removed, and the exons remain to be translated into protein. Many researchers need exon primers for PCR experiments, but it is difficult to find exon primers that satisfy all primer design constraints at the same time. Here, we propose an efficient exon primer design algorithm. The algorithm applies a multiobjective genetic algorithm (MGA) instead of a single-objective algorithm, which can easily lead to unsuitable solutions, and a hash-index algorithm is applied to perform specificity checking in a reasonable time. The algorithm has been tested on a variety of mRNA sequences. These in silico experiments show that our proposed algorithm can find primers that satisfy all exon primer design constraints.
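The hash-index idea for specificity checking can be illustrated with a k-mer index over the background sequence. This is a simplified stand-in with illustrative names and parameters, not the thesis's algorithm: here a primer counts as "specific" when every seed k-mer it contains occurs at most once in the background.

```python
def build_kmer_index(background, k=4):
    # hash index: k-mer -> list of positions in the background sequence
    index = {}
    for i in range(len(background) - k + 1):
        index.setdefault(background[i:i + k], []).append(i)
    return index

def is_specific(primer, index, k=4):
    # every seed k-mer of the primer must be unique in the background;
    # one dictionary lookup per seed instead of a full sequence scan
    return all(len(index.get(primer[i:i + k], [])) <= 1
               for i in range(len(primer) - k + 1))
```

The index is built once per background sequence, so each candidate primer is checked in time proportional to its own length.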
6

Embedded System Optimization of Radar Post-processing in an ARM CPU Core

Ogbonnia, Chibundu 04 May 2022 (has links)
Algorithms executed on the radar processor system contribute to a significant performance bottleneck of the overall radar system. One key concern is the latency of target detection in hard-deadline systems. Research has shown software optimization to be a major contributor to radar system performance improvements. This thesis applies both manual and automatic software optimization and analyzes the results to inform future decisions when working with an ARM processor system. To assess the optimized implementation, one question put forward was whether the algorithms on the ARM processor could handle a 6-antenna configuration without a decline in performance; the answer also helps project how many additional algorithms can be added before performance declines. The manual optimization was based on a quantitative analysis of software execution time: the initial Constant False Alarm Rate (CFAR) detection algorithm was reimplemented with a vectorization strategy using the NEON vector registers of the ARM CPU, and redundant loops over the range gates and Doppler filters were eliminated. To determine the best compiler for automatic optimization of the radar algorithms on the ARM processor, both the initial algorithms and the optimized implementation of the radar post-processing stage were compiled with GCC and Clang. Analysis of the optimization results showed that the radar post-processing algorithms can run on the ARM processor in the 6-antenna configuration without stressing the system load, with an excellent headroom margin under the defined scenario. The analysis further revealed that the effect of dynamic memory allocation cannot be underrated where performance is a significant concern.
The results additionally demonstrated that the GCC and Clang compilers each have their strengths and weaknesses. One limiting factor of the NEON-based optimization is the effect of the sample size on the implementation: although it fits the test samples of the defined scenario, varying window cell sizes may yield results that do not necessarily improve the timing constraints.
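The thesis works in C with NEON intrinsics; as a language-neutral illustration of the detection step being optimized, here is a scalar sketch of cell-averaging CFAR. Window sizes and the threshold factor are illustrative assumptions, not the thesis's parameters.

```python
def ca_cfar(power, guard=1, train=3, scale=4.0):
    """Cell-averaging CFAR: a cell is a detection when it exceeds `scale`
    times the mean power of the training cells around it, with guard cells
    next to the cell under test excluded from the noise estimate."""
    hits = []
    half = guard + train
    for i in range(half, len(power) - half):
        noise = (sum(power[i - half:i - guard]) +
                 sum(power[i + guard + 1:i + half + 1])) / (2 * train)
        if power[i] > scale * noise:
            hits.append(i)
    return hits
```

The inner sums over sliding training windows are exactly the kind of loop that lends itself to NEON vectorization, and the window size dependence is where the sample-size caveat above comes from.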
7

A Hierarchical Particle Swarm Optimizer and Its Adaptive Variant

Janson, Stefan, Middendorf, Martin 05 February 2019 (has links)
A hierarchical version of the particle swarm optimization (PSO) metaheuristic is introduced in this paper. In the new method, called H-PSO, the particles are arranged in a dynamic hierarchy that is used to define a neighborhood structure. Depending on the quality of their so-far best-found solution, the particles move up or down the hierarchy; this gives good particles that move up in the hierarchy a larger influence on the swarm. We introduce a variant of H-PSO in which the shape of the hierarchy is dynamically adapted during the execution of the algorithm. Another variant assigns different behavior to the individual particles according to their level in the hierarchy. H-PSO and its variants are tested on a commonly used set of optimization functions and are compared to PSO using different standard neighborhood schemes.
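The hierarchy update described above, swapping a particle with its parent when the child's personal best is better, can be sketched as follows. A heap-style array layout and minimization are assumed for illustration; the paper parameterizes the hierarchy's branching degree and height.

```python
def update_hierarchy(tree, best):
    """One top-down sweep of the dynamic hierarchy (minimization).

    `tree` maps node index -> particle id, with the children of node i at
    positions 2*i+1 and 2*i+2; `best` maps particle id -> personal-best value.
    A child whose personal best beats its parent's swaps places with it,
    so good particles bubble upward and gain influence on the swarm."""
    for i in range(len(tree)):
        for c in (2 * i + 1, 2 * i + 2):
            if c < len(tree) and best[tree[c]] < best[tree[i]]:
                tree[i], tree[c] = tree[c], tree[i]
    return tree
```

In H-PSO each particle is then attracted by the personal best of its current parent rather than by a fixed neighborhood, which is how the hierarchy defines the neighborhood structure.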
8

On the optimization of offshore wind farm layouts

Pillai, Ajit Chitharanjan January 2017 (has links)
Layout optimization of offshore wind farms seeks to automate the design of the wind farm and the placement of its turbines so that the proposed farm maximizes its potential. It therefore seeks to minimize the costs of the wind farm while maximizing energy extraction, considering the effects of wakes on the resource; the electrical infrastructure required to collect the generated energy; the cost variation across the site; and all technical and consenting constraints the wind farm developer must adhere to. As wakes, electrical losses, and costs are non-linear, this produces a complex optimization problem. This thesis describes the design, development, validation, and initial application of a new framework for the optimization of offshore wind farm layouts using either a genetic algorithm or a particle swarm optimizer. The methodology and analysis tool are built so that individual components can either be used to analyze a particular wind farm layout or be used in conjunction with the optimization algorithms to design and optimize layouts. To accomplish this, separate modules have been developed and validated for the design and optimization of the necessary electrical infrastructure, the assessment of energy production including losses, and the estimation of project costs. By including site-dependent parameters and project-specific constraints, the framework can explore the influence the wind farm layout has on the levelized cost of energy (LCOE) of the project. Deploying the integrated framework with two common engineering metaheuristics on hypothetical, existing, and future wind farms highlights the advantages of this holistic layout optimization framework over the standard industry approaches to offshore wind farm design, leading to a reduction in LCOE.
Application of the tool to a UK Round 3 site recently under development has also highlighted how it can aid the development of future regulations, by considering various constraints on the placement of turbines within the site and exploring how these impact the levelized cost of energy.
9

Dynamic Electronic Asset Allocation Comparing Genetic Algorithm with Particle Swarm Optimization

Md Saiful Islam (5931074) 17 January 2019 (has links)
<div>The contribution of this research work is twofold: 1) implementing the Electronic Warfare Asset Allocation Problem (EWAAP) with a Genetic Algorithm (GA); 2) comparing the performance of the Genetic Algorithm to the Particle Swarm Optimization (PSO) algorithm. The Genetic Algorithm was implemented in C++, with QT Data Visualization used to display the three-dimensional space, pheromone, and terrain. The implementation maintained and preserved the coding style, data structures, and visualization of the PSO implementation. Although the Genetic Algorithm finds higher fitness values and better global solutions for 3 or more receivers, it increases the running time: it is around 15-30% more accurate for asset counts from 3 to 6 but requires 26-82% more computational time. When the problem's complexity increases by adding 3D space, pheromones, and complex terrain, GA is 3.71% more accurate but 121% slower than PSO. In summary, the Genetic Algorithm gives a better global solution in some cases, but its computational time is higher than that of Particle Swarm Optimization.</div>
10

Optimization Of Non-uniform Planar Array Geometry For Direction Of Arrival Estimation

Birinci, Toygar 01 July 2006 (has links) (PDF)
In this work, a novel method is proposed to optimize array geometry for DOA estimation. The method is based on minimizing fine error variances under the constraint that the gross error probability stays below a certain threshold. For this purpose, a metric function that reflects the gross and fine error characteristics of the array is proposed. Theoretical analysis shows that minimizing this metric function leads to small DOA estimation error variance and small gross error probability. The analysis assumes a planar array geometry, isotropic array elements, and AWGN. A genetic algorithm is used as the optimization tool, and performance is evaluated by comparing the DOA estimation errors of the optimized array with those of a uniform circular array (UCA). Computer simulations support the theoretical analysis and show that the proposed method leads to a significant improvement in array geometry in terms of DOA estimation performance.
