771

A Comparative study of Simulated Annealing Algorithms and Genetic Algorithms on Parameters Calibration for Tidal Model

Hung, Yi-ting 13 July 2009 (has links)
Manual trial and error has been widely used in the past, but such an approach is inefficient. In recent years, many heuristic algorithms have been developed and applied in a wide range of fields. These algorithms are more efficient than traditional ones because they can locate the best solution, and every algorithm has its own niche among different problems. In this study, the boundary parameters of a hydrodynamics-based tidal model are calibrated using simulated annealing (SA). The objective is to minimize the deviation between the estimated results obtained from the simulation model and the real tidal data along the Taiwan coast. Based on the real physical distribution of the boundary parameters, we aimed to minimize the sum of each station's root mean square error (RMSE). Genetic algorithms (GAs) and simulated annealing are compared on parameter calibration for the tidal model under the same conditions. Both algorithms improved the results, but GAs were superior in solving the problems mentioned above. In this study, the solving efficiency of SA could be improved by using the solution derived from the GAs as its initial solution.
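As an illustration of the calibration loop described above, the following is a minimal sketch of simulated annealing minimizing the sum of per-station RMSE. The names `run_tidal_model` and `observed` are hypothetical stand-ins for the hydrodynamic simulation and the measured tide data, and the perturbation size and cooling schedule are assumptions, not the settings used in the thesis.

```python
import numpy as np

def simulated_annealing(objective, x0, bounds, n_iter=5000, t0=1.0, cooling=0.995, seed=None):
    """Minimize `objective` over box-constrained parameters with simulated annealing."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, float)
    x = np.asarray(x0, float).copy()
    fx = objective(x)
    best_x, best_f = x.copy(), fx
    t = t0
    for _ in range(n_iter):
        # Propose a small random perturbation of the boundary parameters.
        step = 0.05 * (bounds[:, 1] - bounds[:, 0]) * rng.standard_normal(x.size)
        cand = np.clip(x + step, bounds[:, 0], bounds[:, 1])
        f_cand = objective(cand)
        # Always accept improvements; accept worse moves with temperature-dependent probability.
        if f_cand < fx or rng.random() < np.exp((fx - f_cand) / t):
            x, fx = cand, f_cand
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        t *= cooling  # geometric cooling schedule
    return best_x, best_f

def total_rmse(params, simulate, observed):
    """Sum of per-station RMSE between simulated and observed tidal levels."""
    simulated = simulate(params)               # expected shape: (n_stations, n_times)
    err = simulated - observed
    return np.sqrt((err ** 2).mean(axis=1)).sum()

# Usage (hypothetical): objective = lambda p: total_rmse(p, run_tidal_model, observed)
# best_params, best_rmse = simulated_annealing(objective, x0, bounds)
```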
772

Application of Adaptive Algorithm on Analysis of Spatial Energy of Ocean Ambient Noise

Cheng, Ni-hung 23 July 2009 (has links)
Ocean ambient noise is one of the factors that affect the performance of sonar and underwater communication systems: it can degrade a sonar system's performance in listening or active detection, and it can also affect the quality of underwater communication. Variations of temperature and density in the ocean give ambient noise a directionality, and beamforming can be used to analyze the directionality of the noise energy. Conventional beamforming is based on the assumption of a plane-wave sound field, so the energy from each angle is obtained by a linear accumulation over every element. However, the plane-wave assumption may not be satisfied because of boundary interactions of sound propagation and energy attenuation in the water column, so conventional beamforming may have poor beam resolution and SNR in practice. This research studies the influence of the spatial coherence of ambient noise on beam resolution and improves the beam resolution by using an adaptive algorithm from communication system theory. First, simulations were performed to study the spatial coherence of plane-wave and non-plane-wave ambient noise, and the results were compared with the achieved beam resolution. This research also analyzes, with ASIAEX data, how different conditions of noise spatial coherence influence beamforming. The results showed that ambient noise has lower spatial coherence at high frequency and that this lower spatial coherence leads to poor beam resolution. Adaptive beamforming was therefore applied to improve the beam resolution and compared with conventional beamforming. In the simulations, the largest improvement in beam resolution was 42.9 %, with an increase in SNR of 6 dB; for the ASIAEX data, the largest improvement in beam resolution was 40.0 %, with an increase in SNR of 8 dB. The noise notch of the ambient noise became more pronounced with the increased beam resolution, which also improved the accuracy of the noise directionality analysis.
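To make the contrast between conventional and adaptive beamforming concrete, here is a small sketch for a uniform line array. It uses an MVDR-style adaptive beamformer with diagonal loading as a generic example; this is an assumption on my part, not the specific adaptive algorithm applied in the thesis.

```python
import numpy as np

def steering_vector(theta, n_elems, d, wavelength):
    """Plane-wave steering vector for a uniform line array (element spacing d)."""
    k = 2.0 * np.pi / wavelength
    n = np.arange(n_elems)
    return np.exp(1j * k * d * n * np.sin(theta))

def beam_power(x, angles, d, wavelength, adaptive=False, loading=1e-3):
    """Beam power versus look angle from array snapshots x (shape: elements x snapshots),
    using either conventional (delay-and-sum) or adaptive (MVDR) weights."""
    n_elems = x.shape[0]
    R = (x @ x.conj().T) / x.shape[1]                              # sample covariance matrix
    R += loading * np.trace(R).real / n_elems * np.eye(n_elems)    # diagonal loading
    R_inv = np.linalg.inv(R)
    power = np.empty(len(angles))
    for i, theta in enumerate(angles):
        a = steering_vector(theta, n_elems, d, wavelength)[:, None]
        if adaptive:
            power[i] = 1.0 / np.real(a.conj().T @ R_inv @ a).item()       # MVDR spectrum
        else:
            power[i] = np.real(a.conj().T @ R @ a).item() / n_elems ** 2  # conventional
    return power
```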
773

Design of an Efficient Clipping Engine for OpenGL-ES 2.0 Vertex Shaders in 3D Graphics Systems

Lin, Keng-Hsien 01 September 2009 (has links)
In computer graphics, the 3D graphics pipeline has two processing modules: a geometry module and a rendering module. The geometry module supports vertex coordinate transformation, vertex lighting computation, back-face culling, pre-clipping, and clipping. The clipping module removes the parts of objects that lie outside the view-volume boundaries, and adding it to the geometry module makes the 3D graphics pipeline more efficient. The sequential parsing nature of clipping makes the clipping function challenging to implement in hardware. This work implements a dual-path clipping engine placed after the vertex shader in the geometry module, supporting the OpenGL-ES 2.0 specification. The clipping engine reduces unnecessary operations in the 3D graphics pipeline and improves performance. A pipelined and shared hardware design is proposed to improve the area cost and the throughput of the interpolation operation in the clipping engine. A two-vertices in/out clipping method is also proposed, giving users more choices of clipping algorithms for hardware implementation with respect to performance and hardware limitations.
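The interpolation work that dominates a clipping engine can be illustrated with a software sketch of Sutherland-Hodgman-style clipping against the six clip-space planes (-w <= x, y, z <= w). This is a generic reference formulation for illustration only, not the dual-path or two-vertices in/out hardware method proposed in the thesis.

```python
import numpy as np

# Each clip-space plane is expressed as a signed distance d(v) that must be >= 0
# for a homogeneous vertex v = (x, y, z, w) to be inside the view volume.
PLANES = [
    lambda v: v[3] + v[0],  # x >= -w
    lambda v: v[3] - v[0],  # x <=  w
    lambda v: v[3] + v[1],  # y >= -w
    lambda v: v[3] - v[1],  # y <=  w
    lambda v: v[3] + v[2],  # z >= -w
    lambda v: v[3] - v[2],  # z <=  w
]

def clip_polygon(vertices):
    """Clip a convex polygon (list of homogeneous 4-vectors) against the view volume."""
    poly = [np.asarray(v, float) for v in vertices]
    for dist in PLANES:
        if not poly:
            break
        out = []
        for i, cur in enumerate(poly):
            prev = poly[i - 1]
            d_prev, d_cur = dist(prev), dist(cur)
            if d_prev >= 0:                       # keep vertices inside this plane
                out.append(prev)
            if (d_prev >= 0) != (d_cur >= 0):     # edge crosses the plane:
                t = d_prev / (d_prev - d_cur)     # interpolate a new vertex
                out.append(prev + t * (cur - prev))
        poly = out
    return poly

# Example: a triangle with one vertex outside the left plane (x < -w).
tri = [(-2.0, 0.0, 0.0, 1.0), (0.5, 0.5, 0.0, 1.0), (0.5, -0.5, 0.0, 1.0)]
print(clip_polygon(tri))
```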
774

Real-time rendering of large terrains using algorithms for continuous level of detail

Andersson, Michael January 2002 (has links)
Three-dimensional computer graphics enjoys a wide range of applications, of which games and movies are only a few examples. By incorporating three-dimensional computer graphics into a simulator, the simulator can provide the operator with visual feedback during a simulation. Simulators come in many different flavors, and flight and radar simulators are two types in which three-dimensional rendering of large terrains constitutes a central component.

Ericsson Microwave Systems (EMW) in Skövde is searching for an algorithm that (a) can handle terrain data that is larger than physical memory and (b) has an adjustable error metric that can be used to reduce the terrain detail level if an increase in load on other critical parts of the system is observed. The aim of this paper is to identify and evaluate existing algorithms for terrain rendering in order to find those that meet EMW's requirements. The objectives are to (i) perform a literature survey of existing algorithms, (ii) implement these algorithms and (iii) develop a test environment in which these algorithms can be evaluated from a performance perspective.

The literature survey revealed that the algorithm developed by Lindstrom and Pascucci (2001) is the only algorithm of those examined that succeeded in fulfilling the requirements without modifications or extra software. This algorithm uses memory-mapped files to handle terrain data larger than physical memory and focuses on how terrain data should be laid out on disk in order to minimize the number of page faults. Testing of this algorithm on the specified test architecture shows that the error metric used can be adjusted to effectively control the terrain's level of detail, leading to a substantial increase in performance. The results also reveal that both view-frustum culling and a level-of-detail algorithm are needed to achieve fast display rates for large terrains. Furthermore, the results show the importance of how terrain data is laid out on disk, especially when physical memory is limited.
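The two requirements, out-of-core terrain data and an adjustable error metric, can be sketched as follows. The file name, resolution, field of view and pixel tolerance are hypothetical, and the sketch omits the on-disk data layout that Lindstrom and Pascucci use to reduce page faults.

```python
import numpy as np

def open_height_field(path, shape=(16384, 16384)):
    """Memory-map a raw float32 height field so terrain larger than physical memory
    can be paged in on demand (file name and layout are hypothetical)."""
    return np.memmap(path, dtype=np.float32, mode="r", shape=shape)

def screen_space_error(object_error, distance, viewport_width=1280, fov=np.radians(60)):
    """Project an object-space geometric error (in metres) to an error in pixels."""
    k = viewport_width / (2.0 * np.tan(fov / 2.0))
    return object_error * k / max(distance, 1e-6)

def vertex_active(object_error, distance, tolerance_px=2.0):
    """Refine (keep) a vertex only while its projected error exceeds the pixel tolerance."""
    return screen_space_error(object_error, distance) > tolerance_px

# Example: a 1 m geometric error seen from 1 km away projects to roughly 1.1 px,
# below a 2 px tolerance, so that vertex can be coarsened away.
print(vertex_active(object_error=1.0, distance=1000.0))   # -> False
```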
775

Entwicklung eines iterativen 3D Rekonstruktionverfahrens für die Kontrolle der Tumorbehandlung mit Schwerionen mittels der Positronen-Emissions-Tomographie / Development of an iterative 3D reconstruction method for monitoring heavy-ion tumour treatment by means of positron emission tomography

Lauckner, Kathrin 31 March 2010 (has links) (PDF)
At the Gesellschaft für Schwerionenforschung in Darmstadt, a therapy unit for heavy-ion cancer treatment has been established in collaboration with the Deutsches Krebsforschungszentrum Heidelberg, the Radiologische Universitätsklinik Heidelberg and the Forschungszentrum Rossendorf. For quality assurance, the dual-head positron camera BASTEI (Beta Activity meaSurements at the Therapy with Energetic Ions) has been integrated into this facility. It measures β+-activity distributions generated via nuclear fragmentation reactions within the target volume. BASTEI has about 4 million coincidence channels. The emission data are acquired in a 3D regime and stored in a list-mode data format. Typically, the counting statistics are two to three orders of magnitude lower than those of typical PET scans in nuclear medicine. Two iterative 3D reconstruction algorithms, based on ISRA (Image Space Reconstruction Algorithm) and MLEM (Maximum Likelihood Expectation Maximization), respectively, have been adapted to this imaging geometry. The major advantage of the developed approaches is the use of run-time Monte-Carlo simulations to calculate the transition matrix. The influences of detector sensitivity variations, randoms, activity from outside the field of view and attenuation are corrected for the individual coincidence channels. Performance studies show that the implementation based on MLEM is the algorithm of choice. Since 1997 it has been applied successfully to patient data. The localization of distal and lateral gradients of the β+-activity distribution is guaranteed in the longitudinal sections. Outside the longitudinal sections, the lateral gradients of the β+-activity distribution should be interpreted using a priori knowledge.
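For reference, the multiplicative MLEM update referred to above has the following shape. This sketch assumes a precomputed dense transition matrix for simplicity, whereas the thesis computes its elements at run time with Monte-Carlo simulations and applies channel-wise corrections.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Maximum-likelihood expectation-maximization reconstruction for emission tomography.

    A : (n_channels, n_voxels) system/transition matrix (assumed precomputed here).
    y : (n_channels,) measured coincidence counts per channel.
    """
    x = np.ones(A.shape[1])                 # start from a uniform activity image
    sensitivity = A.sum(axis=0) + eps       # sum_i a_ij, the back-projection of ones
    for _ in range(n_iter):
        forward = A @ x + eps               # expected counts for the current image
        ratio = y / forward                 # measured / expected in each channel
        x *= (A.T @ ratio) / sensitivity    # multiplicative MLEM update
    return x
```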
776

On Generating Complex Numbers for FFT and NCO Using the CORDIC Algorithm / Att generera komplexa tal för FFT och NCO med CORDIC-algoritmen

Andersson, Anton January 2008 (has links)
This report documents the thesis work carried out by Anton Andersson for Coresonic AB. The task was to develop an accelerator that can generate complex numbers suitable for fast Fourier transforms (FFT) and for tuning the phase of complex signals (NCO). Of the many ways to achieve this, the CORDIC algorithm was chosen. It is very well suited, since the basic implementation allows rotation of 2D vectors using only shift and add operations. Error bounds and a proof of convergence are derived carefully. The accelerator was implemented in VHDL in such a way that all critical parameters are easy to change. Performance measures were extracted by simulating realistic test cases and comparing the output with reference data precomputed with high precision. Hardware costs were estimated by synthesizing a set of different configurations; plotting performance against cost makes it possible to choose an optimal configuration. The maximum errors extracted from the simulations seemed rather large for some configurations. Plotting the maximum-error distribution in histograms revealed that the typical error is often much smaller than the largest one. Even after troubleshooting, the errors still seem to be somewhat larger than what other implementations of CORDIC achieve. However, the precision was concluded to be sufficient for the targeted applications.
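A floating-point sketch of rotation-mode CORDIC, the core of such an accelerator, is shown below. The iteration count and the twiddle-factor example are illustrative assumptions; a hardware implementation would use fixed-point arithmetic and true bit shifts.

```python
import math

N_ITERS = 16
# Micro-rotation angles atan(2^-i) and the accumulated CORDIC gain for N_ITERS iterations.
ANGLES = [math.atan(2.0 ** -i) for i in range(N_ITERS)]
GAIN = 1.0
for i in range(N_ITERS):
    GAIN *= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_cos_sin(theta):
    """Rotation-mode CORDIC: return (cos(theta), sin(theta)) for angles within the basic
    convergence range (|theta| <~ 1.74 rad), using only add and shift-like operations."""
    x, y, z = 1.0 / GAIN, 0.0, theta        # pre-scale by 1/GAIN so the output has unit magnitude
    for i in range(N_ITERS):
        d = 1.0 if z >= 0 else -1.0          # rotate towards the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return x, y

# Example: a unit-magnitude FFT twiddle factor W_64^3 = exp(-j*2*pi*3/64).
c, s = cordic_cos_sin(-2.0 * math.pi * 3 / 64)
twiddle = complex(c, s)
print(twiddle)
```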
777

Representation of Quantum Algorithms with Symbolic Language and Simulation on Classical Computer

Nyman, Peter January 2008 (has links)
Quantum computing is an extremely promising project combining theoretical and experimental quantum physics, mathematics, quantum information theory and computer science. At the first stage of development of quantum computing, the main attention was paid to creating a few algorithms which might have applications in the future, clarifying fundamental questions and developing experimental technologies for toy quantum computers operating with a few quantum bits. At that time, expectations of quick progress in the quantum computing project dominated the quantum community. However, it seems that such high expectations were not totally justified. Numerous fundamental and technological problems, such as the decoherence of quantum bits and the instability of quantum structures even with a small number of registers, led to doubts about a quick development of really working quantum computers. Although it cannot be denied that great progress has been made in quantum technologies, it is clear that there is still a huge gap between the creation of toy quantum computers with 10-15 quantum registers and, e.g., satisfying the technical conditions of the project of 100 quantum registers announced a few years ago in the USA. It is also evident that the difficulties increase nonlinearly with an increasing number of registers. Therefore the simulation of quantum computations on classical computers has become an important part of the quantum computing project. Of course, it cannot be expected that quantum algorithms would help to solve NP problems in polynomial time on classical computers; however, this is not at all the aim of classical simulation. Classical simulation of quantum computations will cover part of the gap between the theoretical mathematical formulation of quantum mechanics and the realization of quantum computers. One of the most important problems in quantum computer science is the development of new symbolic languages for quantum computing and the adaptation of existing symbolic languages for classical computing to quantum algorithms. The present thesis is devoted to the adaptation of the Mathematica symbolic language to known quantum algorithms and the corresponding simulation on a classical computer. Concretely, we represent in the Mathematica symbolic language Simon's algorithm, the Deutsch-Jozsa algorithm, Grover's algorithm, Shor's algorithm and quantum error-correcting codes. The same framework is used for all these algorithms: it contains the characteristic properties of the symbolic-language representation of quantum computing, and it is a straightforward matter to include this framework in future algorithms.
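As a flavour of the kind of classical simulation discussed here, the following is a small state-vector sketch of the Deutsch-Jozsa algorithm. It is written in Python with a phase-kickback oracle purely for illustration, rather than in the Mathematica framework the thesis actually develops.

```python
import numpy as np

def hadamard_all(n):
    """Dense n-qubit Hadamard transform (fine for a handful of qubits)."""
    H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    H = np.array([[1.0]])
    for _ in range(n):
        H = np.kron(H, H1)
    return H

def deutsch_jozsa(f, n):
    """State-vector simulation of the Deutsch-Jozsa algorithm with a phase oracle.
    f maps an n-bit integer to 0 or 1 and is promised to be constant or balanced;
    returns True for constant, False for balanced."""
    dim = 2 ** n
    state = np.zeros(dim)
    state[0] = 1.0                                  # |0...0>
    H = hadamard_all(n)
    state = H @ state                               # uniform superposition
    phases = np.array([(-1.0) ** f(x) for x in range(dim)])
    state = phases * state                          # oracle U_f in phase-kickback form
    state = H @ state
    # The amplitude of |0...0> is 1 for a constant f and 0 for a balanced f.
    return abs(state[0]) ** 2 > 0.5

# Example: parity of the lowest bit is a balanced function on 3 qubits.
print(deutsch_jozsa(lambda x: x & 1, 3))            # -> False
```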
778

Description and Application of Genetic Algorithm

WANG, MIN January 2012 (has links)
A genetic algorithm (GA), a class of evolutionary algorithm (EA), is a search algorithm based on the mechanics of natural selection and natural genetics. This dissertation presents the description, solution procedures and application of GAs. The definitions of the selection, crossover and mutation operators are given in detail, and an application of a GA to the Time Table Problem (TTP) is presented in a new way. Due to its strong capability for global search, a GA is particularly appropriate for solving timetabling and scheduling problems. The TTP, which is NP-hard, is a special problem concerning resource management. In this dissertation, a new chromosome coding is designed in order to solve the TTP more effectively, and the results, computed in MATLAB, converge to a steady state.
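A generic GA skeleton with the three operators mentioned above might look as follows. The timeslot encoding, tournament selection, parameter values and toy fitness are illustrative assumptions, not the chromosome coding designed in the dissertation; the fitness function is expected to return larger values for better timetables (e.g. the negated number of conflicts).

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_algorithm(fitness, chrom_len, n_slots, pop_size=60, n_gens=200,
                      p_cross=0.8, p_mut=0.02):
    """Generic GA with tournament selection, one-point crossover and per-gene mutation.
    A chromosome assigns one of n_slots timeslots to each of chrom_len events."""
    pop = rng.integers(0, n_slots, size=(pop_size, chrom_len))
    for _ in range(n_gens):
        scores = np.array([fitness(ind) for ind in pop])
        new_pop = [pop[scores.argmax()].copy()]          # elitism: keep the best individual
        while len(new_pop) < pop_size:
            a = rng.integers(0, pop_size, 2)             # tournament for parent 1
            b = rng.integers(0, pop_size, 2)             # tournament for parent 2
            p1 = pop[a[0]] if scores[a[0]] >= scores[a[1]] else pop[a[1]]
            p2 = pop[b[0]] if scores[b[0]] >= scores[b[1]] else pop[b[1]]
            c1, c2 = p1.copy(), p2.copy()
            if rng.random() < p_cross:                   # one-point crossover
                cut = int(rng.integers(1, chrom_len))
                c1[cut:], c2[cut:] = p2[cut:], p1[cut:]
            for c in (c1, c2):                           # mutation: re-draw a few genes
                mask = rng.random(chrom_len) < p_mut
                c[mask] = rng.integers(0, n_slots, int(mask.sum()))
                new_pop.append(c)
        pop = np.array(new_pop[:pop_size])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

# Toy example: 30 events, 10 timeslots, reward spreading events over distinct timeslots.
best, score = genetic_algorithm(lambda ind: len(np.unique(ind)), chrom_len=30, n_slots=10)
```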
779

Genetic Algorithm for Selecting Optimal Secondary Users to Collaborate in Spectrum sensing / Genetisk algoritm för val av Optimal Sekundära användare att samarbeta i Spectrum avkänning

Farooq, Muhammad, Raja, Abdullah Aslam January 2010 (has links)
Cognitive radio is an innovative technology that allows secondary unlicensed users to share the spectrum with licensed primary users. For maximum utilization of the spectrum, spectrum sensing is an important issue in a cognitive radio network. A cognitive user under extreme shadowing and channel fading cannot sense the primary licensed user's signal correctly; thus, to improve the performance of spectrum sensing, collaboration between secondary unlicensed users is required. In collaborative spectrum sensing, the observation of each secondary user is received by a base station acting as a central entity, where a final conclusion about the presence or absence of the primary user's signal is made using a particular decision and fusion rule. Due to spatially correlated shadowing, the performance of collaborative spectrum sensing decreases, so optimum secondary users must be selected, not only to improve spectrum sensing performance but also to lessen the processing overhead of the central entity. The project depicts a particular situation in which, according to some performance parameters, the optimum secondary users that have enough spatial separation and a high average received SNR are first selected using a genetic algorithm, and the collaboration among these optimum secondary users is then evaluated. The collaboration of the selected optimal secondary users, which provides a high probability of detection and a low probability of false alarm, is compared with the collaboration of all the secondary users available in that radio environment. It is concluded that the collaboration of the selected optimum secondary users provides better performance than the collaboration of all the available secondary users.
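To make the fusion step concrete, the following sketch combines per-user decisions with an OR rule and scores a candidate selection of secondary users. The independence assumption and the fitness form are simplifications of my own, since spatially correlated shadowing, which motivates the selection in the first place, violates independence.

```python
import numpy as np

def or_fusion(pd, pf):
    """OR-rule fusion of independent secondary-user decisions at the fusion centre:
    pd and pf hold each user's probability of detection and of false alarm."""
    pd, pf = np.asarray(pd, float), np.asarray(pf, float)
    qd = 1.0 - np.prod(1.0 - pd)      # the network detects if any selected user detects
    qf = 1.0 - np.prod(1.0 - pf)
    return qd, qf

def selection_fitness(selection, pd, pf, alpha=1.0):
    """Hypothetical GA fitness for a binary selection vector over secondary users:
    reward global detection, penalise global false alarm (assumes independent users)."""
    idx = np.flatnonzero(selection)
    if idx.size == 0:
        return -np.inf
    qd, qf = or_fusion(pd[idx], pf[idx])
    return qd - alpha * qf

# Example with four candidate users and an illustrative selection of two of them.
pd = np.array([0.90, 0.60, 0.80, 0.50])
pf = np.array([0.10, 0.20, 0.10, 0.30])
print(selection_fitness(np.array([1, 0, 1, 0]), pd, pf))
```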
780

An Application Developed for Simulation of Electrical Excitation and Conduction in a 3D Human Heart

Yu, Di 01 January 2013 (has links)
This thesis first reviews the history of general-purpose computing on graphics processing units (GPGPU) and then introduces the fundamental classes of problems that are suitable for GPGPU algorithms. The architecture of a GPGPU is compared against modern CPU architecture, and the fundamental differences are outlined. The programming challenges faced by GPGPU and the techniques used to overcome them are evaluated and discussed. The second part of the thesis presents an application developed with GPGPU technology to simulate electrical excitation and conduction in a 3D human heart model based on a cellular automata model. The algorithm and implementation are discussed in detail, and the performance of the GPU is compared against that of the CPU.
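A CPU-side sketch of a cellular-automata update for an excitable medium is given below, as an illustration of the kind of per-cell rule that maps naturally onto one GPU thread per cell. The Greenberg-Hastings-style states, thresholds and periodic boundaries are assumptions of this sketch, not the heart model used in the thesis.

```python
import numpy as np

# States of a Greenberg-Hastings-style excitable medium: 0 = resting, 1 = excited,
# 2..(1 + REFRACTORY_STEPS) = refractory.
REFRACTORY_STEPS = 4
EXCITATION_THRESHOLD = 1   # excited neighbours needed to excite a resting cell

def step(grid):
    """One synchronous update of a 3D grid; on a GPU each cell would map to one thread.
    np.roll gives periodic boundaries, a simplification of real tissue geometry."""
    excited = (grid == 1).astype(np.int32)
    # Count excited cells in the 6-connected (von Neumann) neighbourhood.
    neighbours = np.zeros_like(excited)
    for axis in range(3):
        neighbours += np.roll(excited, +1, axis=axis) + np.roll(excited, -1, axis=axis)
    new = grid.copy()
    new[(grid == 0) & (neighbours >= EXCITATION_THRESHOLD)] = 1      # resting -> excited
    new[grid == 1] = 2                                               # excited -> refractory
    refr = (grid >= 2) & (grid < 1 + REFRACTORY_STEPS)
    new[refr] = grid[refr] + 1                                       # advance refractory counter
    new[grid == 1 + REFRACTORY_STEPS] = 0                            # back to resting
    return new

# Small test grid with a stimulated voxel at the centre (hypothetical geometry).
grid = np.zeros((32, 32, 32), dtype=np.int32)
grid[16, 16, 16] = 1
for _ in range(10):
    grid = step(grid)
```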
