441 |
Approximation Algorithms for Rectangle Piercing Problems. Mahmood, Abdullah-Al. January 2005
Piercing problems arise often in facility location, a well-studied area of computational geometry. The general form of the piercing problem discussed in this dissertation asks for the minimum number of facilities for a set of given rectangular demand regions such that each region has at least one facility located within it. It has been shown that the problem is NP-hard even if all regions are uniformly sized squares, so we concentrate on approximation algorithms. As the known approximation ratio for arbitrarily sized rectangles is poor, we restrict our effort to designing approximation algorithms for unit-height rectangles. Our ε-approximation scheme requires n^O(1/ε²) time. We also consider the problem with restrictions such as bounding the depth of a point and the width of the rectangles; the approximation schemes for these two cases take n^O(1/ε) time. We also show how to maintain a factor-2 approximation of the piercing set in O(log n) amortized time in an insertion-only scenario.
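The unit-height restriction matters because, within a single horizontal slab, piercing unit-height rectangles reduces to stabbing intervals on a line, where a classic greedy is exactly optimal. A minimal sketch of that 1-D building block (not the thesis's scheme; the function name is illustrative):

```python
def min_piercing_points(intervals):
    """Greedy minimum point set stabbing a collection of 1-D intervals (l, r).

    Sorting by right endpoint and piercing at the right end of the first
    unstabbed interval is optimal in the 1-D case: the chosen point stabs
    every interval that overlaps the current one.
    """
    points = []
    for l, r in sorted(intervals, key=lambda iv: iv[1]):
        if not points or points[-1] < l:   # current interval not yet pierced
            points.append(r)
    return points
```

For example, the intervals (1, 3), (2, 5), (6, 8) are pierced by the two points 3 and 8.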
442 |
Bio-inspired optimization algorithms for smart antennas. Zuniga, Virgilio. January 2011
This thesis studies the effectiveness of bio-inspired optimization algorithms in controlling adaptive antenna arrays. Smart antennas are able to automatically extract the desired signal from interfering signals and external noise. The angular pattern depends on the number of antenna elements, their geometrical arrangement, and their relative amplitudes and phases. In the present work, different antenna geometries are tested and compared when their array weights are optimized by different techniques. First, the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) are used to find the best set of phases between antenna elements to obtain a desired antenna pattern. This pattern must meet several constraints, for example maximizing the power of the main lobe in a desired direction while keeping nulls towards interferers. A series of experiments shows that the PSO achieves better and more consistent radiation patterns than the GA in terms of the total area of the antenna pattern. A second set of experiments uses the Signal-to-Interference-plus-Noise Ratio (SINR) as the fitness function of the optimization algorithms to find the array weights that configure a rectangular array. The results suggest a performance advantage for the PSO, which takes fewer iterations and thus has a lower computational cost. During the development of this thesis, it was found that the initial states and particular parameters of the optimization algorithms affected their overall outcome. The third part of this work therefore deals with the meta-optimization of these parameters, to achieve the best results independently of the particular initial parameters. Four algorithms were studied: Genetic Algorithm, Particle Swarm Optimization, Simulated Annealing, and Hill Climbing. It was found that the meta-optimization algorithms Local Unimodal Sampling and Pattern Search performed best at setting the initial parameters and obtaining the best performance from the bio-inspired methods studied.
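As a concrete illustration of the second optimizer studied, a minimal PSO for continuous minimization might look as follows. This is a generic sketch with conventional parameter values, not the thesis's antenna-specific code; in the antenna experiments the fitness would be the pattern area or the SINR rather than the toy function used here.

```python
import random

def pso(fitness, dim, n_particles=20, iters=200, seed=1):
    """Minimal particle swarm optimiser (minimisation)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Calling `pso(lambda x: sum(v * v for v in x), dim=4)` minimizes the sphere function; in an array-weight setting the position vector would hold the element phases.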
443 |
CPU Performance Evaluation for 2D Voronoi Tessellation. Olsson, Victor; Eklund, Viktor. January 2019
Voronoi tessellation is used within a number of different fields, including healthcare, construction, and urban planning. Since Voronoi tessellations are used in so many fields, it is worthwhile to know the strengths and weaknesses of the algorithms used to generate them in terms of efficiency. The objective of this thesis is to compare two CPU implementations for Voronoi tessellation with regard to execution time and to determine which of the two is more efficient. The algorithms compared are the Bowyer-Watson algorithm and Fortune's algorithm. The Fortune implementation used in the research is based on a pre-existing implementation, while the Bowyer-Watson implementation was written specifically for this research. Their differences in efficiency were determined by measuring and comparing their execution times. This was done iteratively, with the amount of input data increased at each iteration. The results show that Fortune's algorithm is more efficient on the CPU when no acceleration techniques are applied to either algorithm: the Bowyer-Watson method took 70 milliseconds to process 3000 input points, while Fortune's method took 12 milliseconds under the same conditions. In conclusion, Fortune's algorithm was more efficient because the Bowyer-Watson implementation performs unnecessary calculations, namely checking every triangle for each new point added. A suggestion for improving the speed of this algorithm would be to use a nearest-neighbour search when searching through the triangles.
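The "checking all the triangles" bottleneck is visible in a naive Bowyer-Watson implementation, sketched below. This is a simplified illustration, not the thesis's code: for each inserted point it scans every triangle's circumcircle, which is exactly the per-point full scan identified above as unnecessary work.

```python
from collections import Counter

def circumcircle(a, b, c):
    """Centre and squared radius of the circle through points a, b, c."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy), (ax - ux) ** 2 + (ay - uy) ** 2

def bowyer_watson(points):
    """Naive Bowyer-Watson Delaunay triangulation (the dual of the Voronoi diagram)."""
    super_tri = ((-1e5, -1e5), (1e5, -1e5), (0.0, 1e5))  # encloses all input
    triangles = [super_tri]
    for p in points:
        bad = []
        for tri in triangles:          # the full scan criticised in the thesis
            centre, r2 = circumcircle(*tri)
            if (p[0] - centre[0]) ** 2 + (p[1] - centre[1]) ** 2 < r2:
                bad.append(tri)
        # Boundary of the cavity: edges belonging to exactly one bad triangle.
        counts = Counter(frozenset(e) for t in bad
                         for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])))
        triangles = [t for t in triangles if t not in bad]
        for edge, n in counts.items():
            if n == 1:
                u, v = tuple(edge)
                triangles.append((u, v, p))
    return [t for t in triangles if not any(v in super_tri for v in t)]
```

Replacing the inner scan with a point-location walk or a nearest-neighbour structure is the kind of improvement the thesis suggests.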
444 |
Sobre um método assemelhado ao de Francis para a determinação de autovalores de matrizes. Oliveira, Danilo Elias de. January 2006
Orientador: Eliana Xavier Linhares de Andrade / Banca: Roberto Andreani / Banca: Cleonice Fátima Bracciali / Abstract: The main purpose of this work is to present, to discuss the qualities and performance of, and to prove the convergence of an iterative method for the numerical solution of the matrix eigenvalue problem, which we have called the Método Assemelhado ao de Francis (MAF, "method similar to Francis's"). This method differs from the QR method of Francis in providing a simpler and faster technique for obtaining the orthogonal matrices Q_k, k = 1, 2, ... We also present a comparative analysis between the MAF and the QR algorithm of Francis and the LR algorithm of Rutishauser. / Mestre
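For context, the Francis-style QR iteration that the MAF is compared against can be sketched as follows. This is a textbook unshifted QR iteration via Gram-Schmidt, not the MAF itself; practical implementations use shifts and Householder reflections.

```python
import math

def qr_decompose(A):
    """QR factorisation of a square matrix by classical Gram-Schmidt."""
    n = len(A)
    Q = [[0.0] * n for _ in range(n)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(n)]
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(n))
            for i in range(n):
                v[i] -= R[k][j] * Q[i][k]
        R[j][j] = math.sqrt(sum(x * x for x in v))
        for i in range(n):
            Q[i][j] = v[i] / R[j][j]
    return Q, R

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def qr_eigenvalues(A, iters=100):
    """Unshifted QR iteration: A_{k+1} = R_k Q_k is similar to A_k, so the
    eigenvalues are preserved, and for well-behaved A the iterates converge
    to a triangular form whose diagonal holds them."""
    for _ in range(iters):
        Q, R = qr_decompose(A)
        A = matmul(R, Q)
    return [A[i][i] for i in range(len(A))]
```

On the symmetric matrix [[2, 1], [1, 2]] the iteration recovers the eigenvalues 3 and 1.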
445 |
Parallel computing techniques for computed tomographyDeng, Junjun 01 May 2011 (has links)
X-ray computed tomography (CT) is a widely adopted medical imaging method that uses projections to recover the internal image of a subject. Since the invention of X-ray computed tomography in the 1970s, several generations of CT scanners have been developed. As 3D image reconstruction increases in popularity, the long processing times associated with these machines must be significantly reduced before they can be practically employed in everyday applications. Parallel computing is a technique that uses multiple computing resources to process a computational task simultaneously; each resource computes only a part of the whole task, thereby greatly reducing computation time. In this thesis, we use parallel computing to speed up reconstruction while preserving image quality.
Three representative reconstruction algorithms--namely, Katsevich, EM, and Feldkamp algorithms--are investigated in this work. With the Katsevich algorithm, a distributed-memory PC cluster is used to conduct the experiment. This parallel algorithm partitions and distributes the projection data to different computer nodes to perform the computation. Upon completion of each sub-task, the results are collected by the master computer to produce the final image. This parallel algorithm uses the same reconstruction formula as the sequential counterpart, which gives an identical image result.
The parallel implementation of the iterative (EM) CT algorithm uses the same PC cluster as the first. However, because it is based on a local CT reconstruction algorithm, which differs from the sequential EM algorithm, its image results differ from those of the sequential counterpart. Moreover, a special strategy using inhomogeneous resolution was employed to further speed up the computation. The results showed that image quality was largely preserved while computation time was greatly reduced.
Unlike the two previous approaches, the third parallel implementation uses a shared-memory computer. Three major acceleration methods--SIMD (single instruction, multiple data), multi-threading, and OS (ordered subsets)--were employed to speed up the computation. Initial investigations showed that the image quality was comparable to that of the conventional approach while the computation speed was significantly increased.
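The partition-distribute-collect pattern used in the cluster implementations can be sketched in miniature. This is a toy illustration, not the thesis's code: threads stand in for cluster nodes, and a trivial additive "backprojection" stands in for Katsevich filtering and backprojection. Because backprojection is linear, summing the partial images reproduces the sequential result exactly, mirroring the "identical image result" property noted above.

```python
from concurrent.futures import ThreadPoolExecutor

def backproject(projections):
    """Stand-in for one node's reconstruction work: accumulate each
    projection into a (here, 1-D) image."""
    image = [0.0] * 8
    for proj in projections:
        for i, v in enumerate(proj):
            image[i] += v
    return image

def parallel_reconstruct(projections, n_workers=4):
    # Partition the projection data evenly across workers, as the cluster
    # implementation partitions it across PC nodes.
    chunks = [projections[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(backproject, chunks))
    # The master sums the partial images into the final result.
    return [sum(col) for col in zip(*partials)]
```

In the cluster setting the `chunks` would be sent to remote nodes and the final summation done on the master computer.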
446 |
Genetic Algorithm Applied to Generalized Cell Formation Problems / Les algorithmes génétiques appliqués aux problèmes de formation de cellules de production avec routages et processus alternatifs. Vin, Emmanuelle. 19 March 2010
The objective of cellular manufacturing is to simplify the management of manufacturing industries. By regrouping the production of different parts into clusters, management of the shop floor is reduced to managing several small entities. One of the most important problems in cellular manufacturing is the design of these entities, called cells. A cell is a cluster of machines that can be dedicated to the production of one or several parts. Ideally, the cells are totally independent from one another, i.e. each part is dedicated to only one cell (if it can be produced completely inside that cell). The reality is a little more complex: once the cells are created, some traffic still exists between them. This traffic corresponds to the transfer of a part between two machines belonging to different cells. The final objective is to reduce this traffic between the cells (called inter-cellular traffic).

Different methods exist to form these cells and dedicate them to parts. To create independent cells, a choice can be made between different ways to produce each part. Two interdependent problems must be solved:

• the allocation of each operation to a machine: each part is defined by one or several sequences of operations, and each operation can be performed by a set of machines. A final sequence of machines must be chosen to produce each part;

• the grouping of the machines into cells, which induces traffic inside and outside the cells.

Depending on the solution to the first problem, different clusters will be created to minimise the inter-cellular traffic.

In this thesis, an original method based on the grouping genetic algorithm (Gga) is proposed to solve these two interdependent problems simultaneously. The efficiency of the method is highlighted in comparison with methods based on two integrated algorithms or heuristics. Indeed, to form cells of machines together with the allocation of operations to machines, the methods used to solve large-scale problems are generally composed of two nested algorithms, where the main one calls the secondary one to complete the first part of the solution. The application domain goes beyond the manufacturing industry and can, for example, be applied to the design of electronic systems, as explained in the future research.
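The idea of evolving machine-to-cell assignments to minimise inter-cellular traffic can be illustrated in miniature. This is a plain GA on a toy four-machine instance, not the grouping genetic algorithm of the thesis; the traffic matrix, penalty, and parameters are invented for the example.

```python
import random

# Toy instance: TRAFFIC[i][j] = part transfers between machines i and j.
TRAFFIC = [
    [0, 9, 1, 0],
    [9, 0, 0, 1],
    [1, 0, 0, 8],
    [0, 1, 8, 0],
]
N_MACHINES, N_CELLS = 4, 2

def fitness(assign):
    """Inter-cellular traffic, plus a penalty for leaving a cell empty."""
    traffic = sum(TRAFFIC[i][j]
                  for i in range(N_MACHINES) for j in range(i + 1, N_MACHINES)
                  if assign[i] != assign[j])
    return traffic + 1000 * (N_CELLS - len(set(assign)))

def evolve(pop_size=20, gens=40, seed=3):
    rng = random.Random(seed)
    pop = [[rng.randrange(N_CELLS) for _ in range(N_MACHINES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        pop = pop[:pop_size // 2]               # truncation selection
        while len(pop) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)
            cut = rng.randrange(1, N_MACHINES)  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:              # mutation: reassign a machine
                child[rng.randrange(N_MACHINES)] = rng.randrange(N_CELLS)
            pop.append(child)
    return min(pop, key=fitness)
```

On this instance the best grouping places machines 0, 1 in one cell and 2, 3 in the other, leaving only the two light transfers as inter-cellular traffic. The real Gga additionally encodes the choice of operation routings, which this sketch omits.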
447 |
Dynamic Approach To Wind Sensitive Optimum Cruise Phase Flight Planning. Yildiz, Guray. 01 October 2012
A Flight Management System (FMS) performs 4-dimensional flight planning: lateral planning (calculation of the latitudes and longitudes of waypoints), vertical planning (calculation of the altitudes of waypoints), and temporal planning (calculation of the estimated time of arrival). In brief, the FMS correctly and accurately calculates the 4D flight path and then guides the pilot/airplane to track the route within specified accuracy limits laterally (Required Navigation Performance, RNP), vertically (Reduced Vertical Separation Minima, RVSM), and in time (Required Time of Arrival, RTA).

Any deviation of actual input values from planned input values, especially in emergency cases (e.g. the loss of one of the engines), causes the aircraft to deviate from the plan and requires replanning that takes the current situation into consideration. In emergency situations, especially on "Oceaning Flights" (flights whose cruise phase lasts more than 5 hours), optimum cruise phase flight route planning plays a vital role. In the avionics domain, "optimum" does not mean "shortest path", mainly because weather data, i.e. wind speed and direction, directly affect the ground speed.

In the scope of the current thesis, an algorithm employing dynamic programming paradigms is designed and implemented to find the optimum flight route plan. A top-down approach making use of an aircraft route planning ontology is implemented to fill the gap between flight-plan-specific domain knowledge and the optimization techniques employed. Whereas the algorithm is generic, encapsulating the aircraft's performance characteristics, it is evaluated on the C-130 aircraft.
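The wind-aware dynamic programming idea can be sketched on a toy waypoint lattice. This is an illustration, not the thesis's algorithm: the coordinates, airspeed, and the scalar along-track wind model are invented for the example, and a real planner would resolve a gridded wind field onto each leg.

```python
import math

# Toy waypoint lattice: stage 0 = departure, last stage = destination.
COORDS = {'S': (0, 0), 'A': (5, 0), 'B': (5, 3), 'E': (10, 0)}
STAGES = [['S'], ['A', 'B'], ['E']]
# Along-track wind per leg (positive = tailwind), a simplification of a
# real wind field. The direct route S-A-E flies into a strong headwind.
WIND = {('S', 'A'): -8, ('A', 'E'): -8, ('S', 'B'): 0, ('B', 'E'): 0}
AIRSPEED = 10.0

def leg_time(u, v):
    (x1, y1), (x2, y2) = COORDS[u], COORDS[v]
    dist = math.hypot(x2 - x1, y2 - y1)
    ground_speed = AIRSPEED + WIND[(u, v)]   # tailwind raises ground speed
    return dist / ground_speed

def optimal_route():
    """Stage-wise DP: minimal time (and path) to reach each waypoint."""
    best = {'S': (0.0, ['S'])}
    for prev_stage, stage in zip(STAGES, STAGES[1:]):
        nxt = {}
        for v in stage:
            nxt[v] = min(((best[u][0] + leg_time(u, v), best[u][1] + [v])
                          for u in prev_stage if (u, v) in WIND),
                         key=lambda x: x[0])
        best = nxt
    return best['E']
```

Here the geometrically shorter route S-A-E takes 5.0 time units against the headwind, while the longer detour S-B-E takes about 1.17, so the DP correctly rejects the "shortest path".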
448 |
QoS Routing With Multiple Constraints. Jishnu, A. 03 1900
No description available.
449 |
Connected Dominating Set Construction and Application in Wireless Sensor Networks. Wu, Yiwei. 01 December 2009
Wireless sensor networks (WSNs) are now widely used in many applications. Connected Dominating Set (CDS) based routing, one kind of hierarchical method, has received much attention as a way to reduce routing overhead. The concept of k-connected m-dominating sets (kmCDS) is used to provide fault tolerance and routing flexibility. In this thesis, we first consider how to construct a CDS in WSNs. After that, centralized and distributed algorithms are proposed to construct a kmCDS. Moreover, we introduce some basic ideas on how to use a CDS in other potential applications, such as partial coverage and data dissemination in WSNs.
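A simple greedy CDS construction, in the spirit of the first phase of the classic Guha-Khuller algorithm, can be sketched as follows. This is a simplified centralized illustration, not one of the thesis's algorithms: nodes are coloured white (undominated), gray (dominated), or black (in the CDS), and the gray node dominating the most white nodes is added next.

```python
def greedy_cds(adj):
    """Greedy connected dominating set for a connected graph given as an
    adjacency dict. Grows a connected black set from a high-degree seed."""
    white = set(adj)
    black, gray = set(), set()

    def colour_black(v):
        black.add(v)
        white.discard(v)
        gray.discard(v)
        for u in adj[v]:            # newly dominated neighbours turn gray
            if u in white:
                white.discard(u)
                gray.add(u)

    colour_black(max(adj, key=lambda v: len(adj[v])))   # highest-degree seed
    while white:
        # Gray candidates keep the black set connected by construction.
        v = max(gray, key=lambda g: sum(1 for u in adj[g] if u in white))
        colour_black(v)
    return black
```

On the 5-node path 0-1-2-3-4 this yields the CDS {1, 2, 3}, the internal nodes of the path.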
450 |
A Framework for Consistency Based Feature Selection. Lin, Pengpeng. 01 May 2009
Feature selection is an effective technique for reducing the dimensionality of features in applications where datasets involve hundreds or thousands of features. The objective of feature selection is to find an optimal subset of relevant features such that the feature size is reduced and the understandability of the learning process is improved, without significantly decreasing overall accuracy and applicability. This thesis focuses on the consistency measure, under which a feature subset is inconsistent if two or more instances have the same values on the selected features but different class labels. The thesis introduces a new consistency-based algorithm, Automatic Hybrid Search (AHS), and reviews several existing feature selection algorithms (ES, PS and HS) that are based on the consistency rate. We conclude this work with an empirical study providing a comparative analysis of the different search algorithms.
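The consistency rate at the heart of these algorithms can be computed directly. The sketch below implements the standard measure (instances that agree on the selected features but carry a minority class label, as a fraction of all instances); the tiny dataset is invented for the example.

```python
from collections import Counter

def inconsistency_rate(rows, feature_idx):
    """Inconsistency rate of a feature subset.

    rows: list of (feature_tuple, class_label) pairs.
    feature_idx: indices of the selected features.
    Instances are grouped by their values on the selected features; within
    each group, every instance not in the majority class is inconsistent.
    """
    groups = {}
    for features, label in rows:
        key = tuple(features[i] for i in feature_idx)
        groups.setdefault(key, Counter())[label] += 1
    inconsistent = sum(sum(c.values()) - max(c.values())
                       for c in groups.values())
    return inconsistent / len(rows)
```

A search algorithm such as AHS would accept a subset when this rate falls below a chosen threshold.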