  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
441

Bio-inspired optimization algorithms for smart antennas

Zuniga, Virgilio January 2011 (has links)
This thesis studies the effectiveness of bio-inspired optimization algorithms in controlling adaptive antenna arrays. Smart antennas are able to automatically extract the desired signal from interfering signals and external noise. The angular pattern depends on the number of antenna elements, their geometrical arrangement, and their relative amplitudes and phases. In the present work different antenna geometries are tested and compared when their array weights are optimized by different techniques. First, the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) algorithms are used to find the best set of phases between antenna elements to obtain a desired antenna pattern. This pattern must meet several constraints, for example maximizing the power of the main lobe in a desired direction while keeping nulls towards interferers. A series of experiments shows that the PSO achieves better and more consistent radiation patterns than the GA in terms of the total area of the antenna pattern. A second set of experiments uses the Signal-to-Interference-plus-Noise Ratio as the fitness function of the optimization algorithms to find the array weights that configure a rectangular array. The results suggest a performance advantage for the PSO, which takes fewer iterations and thus has a lower computational cost. During the development of this thesis it was found that the initial states and particular parameters of the optimization algorithms affected their overall outcome. The third part of this work therefore deals with the meta-optimization of these parameters to achieve the best results independently of the particular initial parameters. Four algorithms were studied: Genetic Algorithm, Particle Swarm Optimization, Simulated Annealing and Hill Climbing. The meta-optimization algorithms Local Unimodal Sampling and Pattern Search performed best at setting the initial parameters and obtaining the best performance from the bio-inspired methods studied.
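The phase-only optimization described above can be sketched with a minimal particle swarm: each particle is a vector of element phases, and the fitness rewards gain toward a desired direction while penalizing an interferer. The array model, PSO coefficients, and steering angles below are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np

def array_factor(phases, theta, d=0.5):
    # Magnitude of the array factor of a uniform linear array
    # with element spacing d (in wavelengths).
    n = np.arange(len(phases))
    return abs(np.sum(np.exp(1j * (2 * np.pi * d * n * np.sin(theta) + phases))))

def fitness(phases, theta_main=np.pi / 8, theta_null=-np.pi / 6):
    # Illustrative objective: main-lobe gain minus a penalty toward an interferer.
    return array_factor(phases, theta_main) - 2.0 * array_factor(phases, theta_null)

def pso(n_elements=8, n_particles=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-np.pi, np.pi, (n_particles, n_elements))  # positions = phase vectors
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + cognitive (pbest) + social (gbest) velocity update
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        f = np.array([fitness(p) for p in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest, pbest_f.max()
```

The same loop structure applies to the SINR experiments: only the fitness function changes.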
442

CPU Performance Evaluation for 2D Voronoi Tessellation

Olsson, Victor, Eklund, Viktor January 2019 (has links)
Voronoi tessellation is used in a number of different fields, including healthcare, construction and urban planning. Since Voronoi tessellations are used in multiple fields, it is worthwhile to know the strengths and weaknesses of the algorithms used to generate them in terms of efficiency. The objective of this thesis is to compare two CPU implementations of algorithms for Voronoi tessellation with regard to execution time and to see which of the two is more efficient. The algorithms compared are the Bowyer-Watson algorithm and Fortune's algorithm. The Fortune's implementation used in the research is based on a pre-existing implementation, while the Bowyer-Watson implementation was written specifically for this research. Their differences in efficiency were determined by measuring and comparing their execution times. This was done iteratively, with the amount of data to be computed increased at each iteration. The results show that Fortune's algorithm is more efficient on the CPU without using any acceleration techniques for either algorithm. It took 70 milliseconds for the Bowyer-Watson method to process 3000 input points, while Fortune's method took 12 milliseconds under the same conditions. In conclusion, Fortune's algorithm was more efficient because the Bowyer-Watson algorithm performs unnecessary calculations, namely checking all triangles for every new point added. A suggestion for improving the speed of that algorithm would be to use a nearest-neighbour search technique when searching through triangles.
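The iterative increase-and-measure methodology described above can be sketched as a small timing harness. The thesis's Bowyer-Watson and Fortune implementations are not reproduced here, so `tessellate` is a placeholder for either one; sizes and trial counts are illustrative.

```python
import random
import time

def timed(fn, arg):
    # Wall-clock time of one call, in seconds.
    t0 = time.perf_counter()
    fn(arg)
    return time.perf_counter() - t0

def benchmark(tessellate, sizes, trials=3, seed=42):
    # For each input size, generate random points in the unit square and
    # keep the best of a few trials, mirroring the thesis's scheme of
    # continuously increasing the amount of data between iterations.
    rng = random.Random(seed)
    results = {}
    for n in sizes:
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        results[n] = min(timed(tessellate, pts) for _ in range(trials))
    return results
```

Running `benchmark(bowyer_watson, [500, 1000, 2000, 3000])` against `benchmark(fortunes, ...)` on the same sizes would reproduce the comparison's shape, if those implementations were available.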
443

Sobre um método assemelhado ao de Francis para a determinação de autovalores de matrizes (On a Francis-like method for determining the eigenvalues of matrices)

Oliveira, Danilo Elias de. January 2006 (has links)
Advisor: Eliana Xavier Linhares de Andrade / Committee: Roberto Andreani / Committee: Cleonice Fátima Bracciali / Abstract (translated from the Portuguese): The main objective of this work is to present, to discuss the qualities and performance of, and to prove the convergence of an iterative method for the numerical solution of the matrix eigenvalue problem, which we call the Método Assemelhado ao de Francis (MAF, "Francis-like method"). The method differs from Francis's QR in its simpler and faster way of obtaining the orthogonal matrices Qk, k = 1, 2. We also present a comparison between the MAF and the QR algorithm of Francis and the LR algorithm of Rutishauser. / Abstract: The main purpose of this work is to present, to discuss the qualities and performance of, and to prove the convergence of an iterative method for the numerical solution of the eigenvalue problem, which we have called the Método Assemelhado ao de Francis (MAF). This method differs from the QR method of Francis by providing a simpler and faster technique for obtaining the unitary matrices Qk, k = 1, 2. We also present a comparative analysis between the MAF and the QR of Francis and the LR of Rutishauser algorithms. / Master's
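For context, the Francis-style QR iteration that the MAF is compared against can be sketched in a few lines. This is the textbook unshifted variant (not the MAF itself, nor the thesis's implementation), using classical Gram-Schmidt and assuming a full-rank symmetric input so the iterates converge to a diagonal matrix of eigenvalues.

```python
def qr_decompose(A):
    # Classical Gram-Schmidt QR for a small full-rank matrix (list of rows).
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    Q = []                              # orthonormal columns, stored as lists
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i in range(len(Q)):
            R[i][j] = sum(Q[i][k] * cols[j][k] for k in range(n))
            v = [v[k] - R[i][j] * Q[i][k] for k in range(n)]
        norm = sum(x * x for x in v) ** 0.5
        R[j][j] = norm
        Q.append([x / norm for x in v])
    Qm = [[Q[j][i] for j in range(n)] for i in range(n)]  # columns -> matrix
    return Qm, R

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def qr_eigenvalues(A, iters=200):
    # Unshifted QR iteration: A_{k+1} = R_k Q_k. For symmetric A with
    # distinct eigenvalues this converges to a diagonal matrix whose
    # entries are the eigenvalues.
    for _ in range(iters):
        Q, R = qr_decompose(A)
        A = matmul(R, Q)
    return sorted(A[i][i] for i in range(len(A)))
```

The MAF's claimed advantage is a cheaper construction of the Qk factors than the decomposition step above; the iteration skeleton is the shared part.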
444

Parallel computing techniques for computed tomography

Deng, Junjun 01 May 2011 (has links)
X-ray computed tomography (CT) is a widely adopted medical imaging method that uses projections to recover the internal image of a subject. Since the invention of X-ray computed tomography in the 1970s, several generations of CT scanners have been developed. As 3D image reconstruction increases in popularity, the long processing time associated with these machines has to be significantly reduced before they can be practically employed in everyday applications. Parallel computing is a computing technique that utilizes multiple computer resources to process a computational task simultaneously; each resource computes only a part of the whole task, greatly reducing computation time. In this thesis, we use parallel computing technology to speed up reconstruction while preserving image quality. Three representative reconstruction algorithms, namely the Katsevich, EM, and Feldkamp algorithms, are investigated in this work. With the Katsevich algorithm, a distributed-memory PC cluster is used to conduct the experiment. This parallel algorithm partitions and distributes the projection data to different computer nodes to perform the computation. Upon completion of each sub-task, the results are collected by the master computer to produce the final image. This parallel algorithm uses the same reconstruction formula as its sequential counterpart, which gives an identical image result. The parallel iterative (EM) CT algorithm uses the same PC cluster as the first one. However, because it is based on a local CT reconstruction algorithm, which differs from the sequential EM algorithm, its image results differ from those of the sequential counterpart. Moreover, a special strategy using inhomogeneous resolution was used to further speed up the computation. The results showed that image quality was largely preserved while computational time was greatly reduced.
Unlike the two previous approaches, the third parallel implementation uses a shared-memory computer. Three major accelerating methods, SIMD (single instruction, multiple data), multi-threading, and OS (ordered subsets), were employed to speed up the computation. Initial investigations showed that image quality was comparable to that of the conventional approach while computation speed was significantly increased.
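The partition-and-collect scheme used for the Katsevich reconstruction can be sketched with threads standing in for cluster nodes: each worker reconstructs from its own subset of projection angles, and the "master" sums the partial images. The backprojection kernel below is a toy placeholder, not an actual CT formula; the point is that the collected result matches the sequential one.

```python
from concurrent.futures import ThreadPoolExecutor
import math

def backproject(image_size, angles):
    # Toy stand-in for a backprojection kernel: accumulate one
    # contribution per projection angle into the image.
    img = [[0.0] * image_size for _ in range(image_size)]
    for a in angles:
        c, s = math.cos(a), math.sin(a)
        for i in range(image_size):
            for j in range(image_size):
                img[i][j] += c * c * i + s * s * j  # placeholder weighting
    return img

def add_images(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def parallel_reconstruct(image_size, angles, workers=4):
    # Partition the projection angles among workers, reconstruct partial
    # images in parallel, then sum them on the master -- the same
    # distribute-and-collect scheme the thesis uses on a PC cluster.
    chunks = [angles[k::workers] for k in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        partials = list(ex.map(lambda ch: backproject(image_size, ch), chunks))
    total = partials[0]
    for p in partials[1:]:
        total = add_images(total, p)
    return total
```

Because backprojection is a sum over projections, partitioning by angle changes only the order of additions, which is why the parallel Katsevich result can be identical (up to rounding) to the sequential one.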
445

Genetic Algorithm Applied to Generalized Cell Formation Problems / Les algorithmes génétiques appliqués aux problèmes de formation de cellules de production avec routages et processes alternatifs

Vin, Emmanuelle 19 March 2010 (has links)
The objective of cellular manufacturing is to simplify the management of manufacturing industries. By grouping the production of different parts into clusters, managing the manufacturing reduces to managing several small entities. One of the most important problems in cellular manufacturing is the design of these entities, called cells. A cell is a cluster of machines that can be dedicated to the production of one or several parts. Ideally, the cells are totally independent from one another, i.e. each part is dedicated to only one cell (it can be produced completely inside that cell). The reality is a little more complex: once the cells are created, some traffic remains between them. This traffic corresponds to the transfer of a part between two machines belonging to different cells, and the final objective is to reduce this inter-cellular traffic. Different methods exist to form the cells and dedicate them to parts. To create independent cells, a choice can be made between different ways of producing each part. Two interdependent problems must be solved: • the allocation of each operation to a machine: each part is defined by one or several sequences of operations, and each operation can be performed by a set of machines; a final sequence of machines must be chosen to produce each part; • the grouping of the machines into cells, which induces traffic inside and outside the cells. Depending on the solution to the first problem, different clusters are created to minimise the inter-cellular traffic. In this thesis, an original method based on the grouping genetic algorithm (GGA) is proposed to solve these two interdependent problems simultaneously. The efficiency of the method is highlighted in comparison with methods based on two integrated algorithms or heuristics.
Indeed, to form cells of machines while also allocating operations to machines, the methods capable of solving large-scale problems are generally composed of two nested algorithms, the main one calling the secondary one to complete the first part of the solution. The application domain goes beyond the manufacturing industry and can, for example, be applied to the design of electronic systems, as explained in the future research.
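The inter-cellular traffic objective, and the coupling between routing choice and cell design, can be sketched as follows; the machine names and cell assignments are illustrative, not from the thesis.

```python
def intercellular_traffic(routings, cell_of):
    # Count transfers between consecutive machines assigned to different
    # cells -- the quantity the grouping GA tries to minimize.
    traffic = 0
    for route in routings:
        for m1, m2 in zip(route, route[1:]):
            if cell_of[m1] != cell_of[m2]:
                traffic += 1
    return traffic

def best_routing(alternatives, cell_of):
    # First subproblem: among the alternative machine sequences that can
    # produce a part, pick the one inducing the least inter-cell traffic.
    return min(alternatives, key=lambda r: intercellular_traffic([r], cell_of))
```

Note the interdependence: `best_routing` depends on `cell_of`, while the best grouping depends on the chosen routings. The GGA searches both at once rather than nesting one algorithm inside the other.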
446

Dynamic Approach To Wind Sensitive Optimum Cruise Phase Flight Planning

Yildiz, Guray 01 October 2012 (has links) (PDF)
A Flight Management System (FMS) performs 4-dimensional flight planning: lateral planning (calculation of the latitudes and longitudes of waypoints), vertical planning (calculation of the altitudes of waypoints) and temporal planning (calculation of the estimated time of arrival). In brief, the FMS correctly and accurately calculates the 4D flight path and then guides the pilot/airplane to track the route within specified accuracy limits laterally (Required Navigation Performance, RNP), vertically (Reduced Vertical Separation Minima, RVSM) and in time (Required Time of Arrival, RTA). Any deviation of actual input values from the planned ones, especially in emergency cases (e.g. losing one engine), causes the aircraft to deviate from the plan and requires replanning that takes the current situation into account. In emergencies, optimum cruise-phase flight route planning plays a vital role, especially on oceanic flights (flights whose cruise phase lasts more than 5 hours). In the avionics domain, "optimum" does not mean "shortest path", mainly because weather, i.e. wind speed and direction, directly affects the groundspeed. In the scope of this thesis, an algorithm employing dynamic programming paradigms is designed and implemented to find the optimum cruise-phase flight route. A top-down approach using an aircraft route planning ontology is implemented to fill the gap between flight-plan-specific domain knowledge and the optimization techniques employed. While the algorithm is generic, encapsulating the aircraft's performance characteristics, it is evaluated on the C-130 aircraft.
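Why "optimum" differs from "shortest" can be sketched with a dynamic-programming (Dijkstra) search over a waypoint graph where edge cost is travel time, and groundspeed is airspeed plus the wind component along the track. The flat 2D geometry and constant wind field below are simplifying assumptions; a real FMS works with great-circle routes and gridded forecast winds.

```python
import heapq
import math

def travel_time(p, q, airspeed, wind):
    # Groundspeed = airspeed + wind component along the track
    # (assumes airspeed exceeds any opposing wind, so groundspeed > 0).
    dx, dy = q[0] - p[0], q[1] - p[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return 0.0
    along = (wind[0] * dx + wind[1] * dy) / dist
    return dist / (airspeed + along)

def optimal_route(points, edges, start, goal, airspeed, wind):
    # Dijkstra over the waypoint graph, minimizing estimated time of
    # arrival rather than distance.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    best, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == goal:
            break
        if t > best.get(u, float("inf")):
            continue  # stale queue entry
        for v in adj.get(u, []):
            nt = t + travel_time(points[u], points[v], airspeed, wind)
            if nt < best.get(v, float("inf")):
                best[v], prev[v] = nt, u
                heapq.heappush(pq, (nt, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], best[goal]
```

With a strong headwind on the direct leg, a geometrically longer dog-leg can yield an earlier arrival, which is exactly the wind sensitivity the thesis exploits.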
447

QoS Routing With Multiple Constraints

Jishnu, A 03 1900 (has links) (PDF)
No description available.
448

Connected Dominating Set Construction and Application in Wireless Sensor Networks

Wu, Yiwei 01 December 2009 (has links)
Wireless sensor networks (WSNs) are now widely used in many applications. Connected Dominating Set (CDS) based routing, a kind of hierarchical method, has received increasing attention as a way to reduce routing overhead. The concept of k-connected m-dominating sets (kmCDS) is used to provide fault tolerance and routing flexibility. In this thesis, we first consider how to construct a CDS in WSNs. After that, centralized and distributed algorithms are proposed to construct a kmCDS. Moreover, we introduce some basic ideas on how to use a CDS in other potential applications, such as partial coverage and data dissemination in WSNs.
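As a rough illustration of CDS construction, here is a simple greedy sketch (not the thesis's centralized or distributed algorithms): grow the set from a high-degree node, always adding a neighbour of the current set so connectivity is preserved, until every node is dominated.

```python
def greedy_cds(adj):
    # adj: dict mapping each node to its neighbour list; assumes the
    # graph is connected. Ties are broken by smallest node label.
    nodes = set(adj)
    start = max(sorted(nodes), key=lambda v: len(adj[v]))
    cds = {start}
    dominated = {start} | set(adj[start])
    while dominated != nodes:
        # Candidates adjacent to the current CDS keep the result connected.
        frontier = sorted({v for u in cds for v in adj[u]} - cds)
        best = max(frontier, key=lambda v: len((set(adj[v]) | {v}) - dominated))
        cds.add(best)
        dominated |= set(adj[best]) | {best}
    return cds
```

In a WSN, the nodes of the resulting CDS form a routing backbone: every sensor is either in the backbone or one hop from it.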
449

A Framework for Consistency Based Feature Selection

Lin, Pengpeng 01 May 2009 (has links)
Feature selection is an effective technique for reducing the dimensionality of features in applications where datasets involve hundreds or thousands of features. The objective of feature selection is to find an optimal subset of relevant features such that the feature size is reduced and the understandability of a learning process is improved, without significantly decreasing the overall accuracy and applicability. This thesis focuses on the consistency measure, under which a feature subset is consistent if every group of two or more instances sharing the same values on the selected features also shares the same class label. The thesis introduces a new consistency-based algorithm, Automatic Hybrid Search (AHS), and reviews several existing feature selection algorithms (ES, PS and HS) that are based on the consistency rate. We conclude with an empirical study comparing the different search algorithms.
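The consistency rate can be sketched as the usual inconsistency-rate computation, a standard formulation that may differ in detail from the thesis's: group instances by their values on the selected features, and count instances that fall outside their group's majority class.

```python
from collections import Counter, defaultdict

def inconsistency_rate(data, labels, subset):
    # Instances that agree on the selected features but carry a minority
    # class label within their group are counted as inconsistent.
    groups = defaultdict(list)
    for row, y in zip(data, labels):
        groups[tuple(row[i] for i in subset)].append(y)
    inconsistent = sum(len(g) - max(Counter(g).values()) for g in groups.values())
    return inconsistent / len(data)
```

A search algorithm (exhaustive, probabilistic, heuristic, or a hybrid like AHS) then looks for the smallest subset whose rate stays at or below a threshold.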
450

A New Optimality Measure for Distance Dominating Sets

Simjour, Narges January 2006 (has links)
We study the problem of finding the smallest power of an input graph that has <em>k</em> disjoint dominating sets, where the <em>i</em>th power of an input graph <em>G</em> is constructed by adding edges between pairs of vertices in <em>G</em> at distance <em>i</em> or less, and a subset of vertices in a graph <em>G</em> is a dominating set if and only if every vertex in <em>G</em> is adjacent to a vertex in this subset.   The problem is a different view of the <em>d</em>-domatic number problem in which the goal is to find the maximum number of disjoint dominating sets in the <em>d</em>th power of the input graph.   This problem is motivated by applications in multi-facility location and distributed networks. In the facility location framework, for instance, there are <em>k</em> types of services that all clients in different regions of a city should receive. A graph representing the map of regions in the city is given where the nodes of the graph represent regions and neighboring regions are connected by edges. The problem is how to establish facility servers in the city (each region can host at most one server) such that every client in the city can access a facility server in its region or in a region in the neighborhood. Since it may not be possible to find a facility location satisfying this condition, "a region in the neighborhood" required in the question is modified to "a region at the minimum possible distance <em>d</em>".   In this thesis, we study the connection of the above-mentioned problem with similar problems including the domatic number problem and the <em>d</em>-domatic number problem. We show that the problem is NP-complete for any fixed <em>k</em> greater than two even when the input graph is restricted to split graphs, <em>2</em>-connected graphs, or planar bipartite graphs of degree four. 
In addition, the problem is in P for bounded tree-width graphs, when considering <em>k</em> as a constant, and for strongly chordal graphs, for any <em>k</em>. Then, we provide a slightly simpler proof for a known upper bound for the problem. We also develop an exact (exponential) algorithm for the problem, running in time <em>O</em>(2.73<sup><em>n</em></sup>). Moreover, we prove that the problem cannot be approximated within ratio smaller than <em>2</em> even for split graphs, <em>2</em>-connected graphs, and planar bipartite graphs of degree four. We propose a greedy <em>3</em>-approximation algorithm for the problem in the general case, and other approximation ratios for permutation graphs, distance-hereditary graphs, cocomparability graphs, dually chordal graphs, and chordal graphs. Finally, we list some directions for future work.
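The two core notions above, the i-th power of a graph and domination, can be sketched with a plain BFS construction (illustrative only, not the thesis's machinery):

```python
from collections import deque

def graph_power(adj, i):
    # i-th power: connect u and v whenever their distance in adj is <= i.
    power = {u: set() for u in adj}
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            if dist[u] == i:
                continue  # do not expand beyond radius i
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    power[src].add(v)  # every discovered v is within distance i
                    q.append(v)
    return power

def is_dominating(adj, subset):
    # Every vertex is in the subset or adjacent to a member of it.
    subset = set(subset)
    return all(u in subset or subset & set(adj[u]) for u in adj)
```

On a path, for instance, the centre vertex fails to dominate the graph but does dominate its second power, which is the kind of trade-off the smallest-power question formalizes.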
