461 |
Transform-Domain Adaptive Constrained Filtering Algorithms for Time Delay Estimation
Hou, Jui-Hsiang 27 June 2002 (has links)
The convergence speed of the conventional approaches, viz., the time-domain adaptive constrained and unconstrained LMS algorithms, becomes slow when dealing with correlated source signals. In consequence, the performance of time delay estimation (TDE) degrades dramatically. To alleviate this problem, the so-called transform-domain adaptive constrained filtering scheme, i.e., the adaptive constrained discrete-cosine-transform (DCT) LMS algorithm, has been proposed in [15]. However, no single orthogonal transform completely diagonalizes the input-signal auto-correlation matrix for all types of input signals. In fact, significant non-diagonal entries in the transform-domain auto-correlation matrix deteriorate the convergence performance of the algorithm.
To further overcome this problem, a modified approach, referred to as the adaptive constrained modified DCT-LMS (CMDCT-LMS) algorithm, is devised in this thesis for TDE under a wide class of input processes. In addition, based on the orthogonal discrete wavelet transform (DWT), an adaptive constrained modified DWT-LMS (CMDWT-LMS) algorithm is also devised and applied to the problem of TDE. We show that the two proposed modified constrained approaches for TDE perform better than the unmodified approaches under different source signal models. Moreover, as observed from the simulation results, the adaptive CMDCT-LMS filtering algorithm performs slightly better than the adaptive CMDWT-LMS filtering algorithm.
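As an illustration of the general idea (not the thesis's CMDCT-LMS algorithm itself, which additionally imposes a constraint), the following sketch runs a power-normalized LMS update in the DCT domain and reads the delay estimate off the peak of the time-domain weight vector. The signal model, filter length, and step sizes are all assumptions chosen for the toy example.

```python
import numpy as np

def dct_matrix(N):
    # Orthonormal DCT-II matrix (C @ C.T == identity).
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * k * (2 * n + 1) / (2 * N))
    C[0, :] /= np.sqrt(2.0)
    return C

def dct_lms_delay(x, d, N=8, mu=0.05, beta=0.99, eps=1e-6):
    # Power-normalized LMS adaptation carried out in the DCT domain.
    C = dct_matrix(N)
    w = np.zeros(N)          # weights, transform domain
    p = np.ones(N)           # running per-bin power estimates
    for i in range(N - 1, len(x)):
        u = x[i - N + 1:i + 1][::-1]   # tap-input vector, newest sample first
        v = C @ u                      # rotate into the transform domain
        p = beta * p + (1.0 - beta) * v * v
        e = d[i] - w @ v
        w += mu * e * v / (p + eps)    # per-bin normalized update
    # The delay shows up as the dominant tap of the time-domain weights.
    return int(np.argmax(np.abs(C.T @ w)))

rng = np.random.default_rng(0)
s = rng.standard_normal(6000)                 # white source signal
D = 3                                         # true delay (samples)
d = np.concatenate([np.zeros(D), s[:-D]])     # delayed observation
est = dct_lms_delay(s, d)
print(est)
```

The per-bin power normalization is what gives the transform-domain scheme its convergence advantage for correlated inputs; with the white input used here, plain LMS would do equally well.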
|
462 |
Wavelet-Based Multiuser MC-CDMA Receiver with Linearly Constrained Constant Modulus Inverse QRD-RLS Algorithm
Liu, Hsiao-Chen 07 July 2002 (has links)
In this thesis, the problem of multiple access interference (MAI) suppression for the multi-carrier (MC) code division multiple access (CDMA) system, based on wavelet-based (WB) multi-carrier modulation associated with the combining process, is investigated for the Rayleigh fading channel. The main concern of this thesis is to derive a new scheme, based on the linearly constrained constant modulus (LCCM) criterion with the robust inverse QR decomposition (IQRD) recursive least squares (RLS) algorithm, to improve the performance of the conventional MC-CDMA system with combining process. To verify the merits of the new algorithm, the effects of imperfect channel parameter estimation and frequency offset are investigated.
We show that the proposed robust LCCM IQRD-RLS algorithm outperforms the conventional LCCM-gradient algorithm [6] in terms of output SINR, improvement percentage index (IPI), and bit error rate (BER) for MAI suppression in a channel-mismatch environment. Also, the performance of the WB MC-CDMA system is superior to that of the conventional MC-CDMA system, and it is more robust to channel mismatch and frequency offset. Moreover, the WB MC-CDMA system with the robust LCCM IQRD-RLS algorithm performs better than other conventional approaches, such as the LCCM-gradient algorithm, maximum ratio combining (MRC), the blind adaptation algorithm, and the partitioned linear interference canceller (PLIC) approach with the LMS algorithm, in terms of MAI suppression capability and BER.
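Among the baselines mentioned, maximum ratio combining is the simplest to illustrate. The sketch below is a toy single-symbol diversity example, not the thesis's receiver: the channel gains, noise level, and BPSK signaling are all assumptions. MRC weights each branch by the conjugate of its channel gain before summing, which maximizes the combined SNR.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 16                                  # diversity branches (e.g., subcarriers)
# Assumed unit-power Rayleigh channel gains, one per branch.
h = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
symbol = 1.0                            # transmitted BPSK symbol
noise = 0.3 * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
y = h * symbol + noise                  # per-branch received samples

# MRC: conjugate-gain weighting aligns the branch phases before summing.
z = np.sum(np.conj(h) * y)
decision = 1.0 if z.real > 0 else -1.0
print(decision)
```

The combined decision statistic has signal power proportional to the sum of the branch gains squared, which is why MRC is the standard combining baseline that the proposed LCCM IQRD-RLS receiver is compared against.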
|
463 |
Maximum Weight Approach for Code Synchronization in DS/SS Systems Using Adaptive Constrained Filtering Technique with Direct-Delay-Estimation Formula
Chen, Guo-Hua 04 July 2003 (has links)
The technique of direct sequence spread spectrum (DS/SS) has been widely used in commercial mobile communication systems. The efficiency of a DS/SS system depends heavily on accurate and fast synchronization between the incoming and locally generated PN (pseudo-noise) codes. Code synchronization is carried out in two steps, acquisition (coarse alignment) and tracking (fine alignment), to reduce the delay offset between the two codes. Conventionally, most code-synchronization techniques were based on the correlation property of PN codes. Recently, a different approach, using an adaptive LMS filtering scheme, has been proposed to reduce the hardware complexity and to improve the performance of code synchronization, especially for long PN codes.
In this thesis, a new coherent adaptive code synchronization scheme is proposed, in which the adaptive constrained LMS (CLMS) algorithm with the maximum tap-weight (MTW) test method is devised for code acquisition. The statistics of the weight vector of the proposed CLMS scheme are derived to evaluate the performance in terms of mean acquisition time (MAT). Analytical and simulation results verify that the proposed scheme for code acquisition outperforms the conventional LMS filtering schemes, under both integer and non-integer time delays. Moreover, a threshold setting for code acquisition is derived that is independent of the signal-to-noise ratio (SNR) and the time delay.
Next, the CLMS scheme is combined with a direct delay estimation (DDE) formula for code tracking. This approach achieves good delay-tracking performance, as verified via computer simulation. At the same time, the hardware complexity is further reduced because a code-tracking loop implemented by the interpolation method is no longer required.
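The maximum tap-weight idea behind the acquisition stage can be sketched as follows: an adaptive filter driven by the local code is adapted to match the received code, and the index of the dominant tap estimates the code-phase offset. The code length, noise level, and step size below are illustrative assumptions, and this plain-LMS sketch omits the constraint that the thesis's CLMS algorithm imposes.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 31                                   # PN-code period (chips)
code = rng.choice([-1.0, 1.0], size=L)   # stand-in for a PN (±1) sequence
D = 7                                    # unknown code-phase offset (chips)

n_chips = 8000
local = np.array([code[n % L] for n in range(n_chips)])
received = np.array([code[(n - D) % L] for n in range(n_chips)])
received += 0.2 * rng.standard_normal(n_chips)   # channel noise

# Adaptive filter spanning one code period; desired signal = received samples.
w = np.zeros(L)
mu = 0.02
for n in range(L, n_chips):
    u = local[n - L + 1:n + 1][::-1]     # local-code tap vector, newest first
    e = received[n] - w @ u
    w += mu * e * u                      # LMS update

# Maximum tap-weight test: the dominant tap indexes the delay estimate.
d_hat = int(np.argmax(np.abs(w)))
print(d_hat)
```

Compared with a bank of correlators over all candidate phases, a single adaptive filter of this kind is the hardware-complexity saving the abstract alludes to for long codes.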
|
464 |
Modified Generalized Sidelobe Canceller with Inverse QRD-RLS Algorithm
Chang, Chun-Lin 11 July 2003 (links)
The conventional temporal filtering approach cannot separate a signal from interference that occupies the same temporal frequency band. Spatial filtering at the receiver, however, can separate signals from interference originating at different spatial locations. Many adaptive array beamforming algorithms based on linear constraints have been proposed for suppressing undesired interference and have been applied to wireless communication systems for multiuser detection. An adaptive array system can automatically adjust its directional pattern to null interferers or jammers and thus enhance the reception of the desired signal. The inverse QR decomposition recursive least squares (IQRD-RLS) algorithm has many advantages: the LS weight vector can be computed without back substitution, the algorithm is numerically stable, and it offers a better convergence rate, steady-state mean-square error, and parameter tracking capability than adaptive least mean square (LMS) based algorithms. In this thesis, a new scheme, GSC-IQRD-RLS, combining the generalized sidelobe canceller (GSC) with the IQRD-RLS algorithm, is developed. It preserves the advantages of the GSC, namely its simple structure and low computational cost, and converts a linearly constrained optimization problem into a standard optimum filtering problem, while its performance is equivalent to that of the LC-IQRD-RLS algorithm.
|
465 |
Approximation and Optimal Algorithms for Scheduling Jobs subject to Release Dates
Yu, Su-Jane 30 July 2003 (links)
In this dissertation, we study the single machine scheduling problem with the objective of minimizing the total completion time subject to release dates. The problem, denoted 1|r_j|ΣC_j, is known to be strongly NP-hard and is important both theoretically and practically. The focus of the research in this dissertation is to develop efficient algorithms for solving the 1|r_j|ΣC_j problem.
This dissertation consists of two parts.
In the first part, the theme concerns the approximation approach. We derive a necessary and sufficient condition for local optimality, which can be implemented as a priority rule and used to construct three heuristic algorithms with running times of O(n log n). By "local optimality", we mean the optimality of all candidates whenever a job is selected in a schedule, without considering the jobs preceding or following it. This is the most broadly applicable concept of a locally optimal rule. We also identify a dominant subset that is strictly contained in every known dominant subset, where a dominant subset is a set of solutions containing all optimal schedules.
In the second part, we develop optimal algorithms for the 1|r_j|ΣC_j problem. First, we present a lemma for estimating the sum of the delay times of the remaining jobs when the starting time in a schedule is delayed by a period of time. Using this lemma, we then develop a new partition property and three dominance theorems, which are used to improve the branch-and-bound algorithms in our optimization approach. By exploiting the insights gained from our heuristics as a branching scheme, and by using the heuristics as an upper-bounding procedure, we propose three branch-and-bound algorithms. Our algorithms can optimally solve instances with up to 120 jobs, the best result known to date.
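A priority rule of the kind discussed in the first part can be sketched as follows. The "SPT-available" rule below (start the shortest job among those already released) is a common heuristic for 1|r_j|ΣC_j, used here for illustration rather than as the dissertation's exact rule, and it is checked against brute-force enumeration on a tiny instance.

```python
from itertools import permutations

def spt_available(jobs):
    """Greedy rule: at each decision point, start the shortest released job."""
    t, total = 0, 0
    remaining = list(jobs)               # (release, processing) pairs
    while remaining:
        released = [j for j in remaining if j[0] <= t]
        if not released:
            t = min(j[0] for j in remaining)   # idle until the next release
            continue
        job = min(released, key=lambda j: j[1])
        remaining.remove(job)
        t += job[1]
        total += t                       # completion time of this job
    return total

def brute_force(jobs):
    """Optimal total completion time by trying every sequence (tiny n only)."""
    best = float('inf')
    for order in permutations(jobs):
        t, total = 0, 0
        for r, p in order:
            t = max(t, r) + p
            total += t
        best = min(best, total)
    return best

jobs = [(0, 2), (0, 1), (3, 1)]          # (release date, processing time)
print(spt_available(jobs), brute_force(jobs))   # 8 8: heuristic is optimal here
```

On larger instances with awkward release dates the greedy rule can be suboptimal, which is why the dissertation pairs such heuristics with branch-and-bound for exact solutions.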
|
466 |
On the design of architecture-aware algorithms for emerging applications
Kang, Seunghwa 30 January 2011 (links)
This dissertation maps various kernels and applications to a spectrum of programming models and architectures and also presents architecture-aware algorithms for different systems. The kernels and applications discussed in this dissertation have widely varying computational characteristics. For example, we consider both dense numerical computations and sparse graph algorithms. This dissertation also covers emerging applications from image processing, complex network analysis, and computational biology.
We map these problems to diverse multicore processors and manycore accelerators. We also use new programming models (such as Transactional Memory, MapReduce, and Intel TBB) to address the performance and productivity challenges in these problems. Our experiences highlight the importance of mapping applications to appropriate programming models and architectures. We also identify several limitations of current system software and architectures and suggest directions for improving them. The discussion focuses on system software and architectural support for nested irregular parallelism, Transactional Memory, and hybrid data transfer mechanisms. We believe that the complexity of parallel programming can be significantly reduced through collaborative efforts among researchers and practitioners from different domains. This dissertation contributes to these efforts by providing benchmarks and suggestions for improving system software and architectures.
|
467 |
Numerical Methods for Structured Matrix Factorizations
Kressner, Daniel 13 June 2001 (links) (PDF)
This thesis describes improvements of the periodic QZ algorithm and several variants of the Schur algorithm for block Toeplitz matrices.
Documentation of the available software is included.
|
468 |
Parallele Genetische Algorithmen / Parallel Genetic Algorithms
Riedel, Marion 08 May 2002 (links) (PDF)
The paper "Parallel Genetic Algorithms" discusses the theoretical basics of Evolutionary Algorithms, concentrating on Genetic Algorithms. Possibilities for parallelizing these algorithms are examined and explained on the basis of concepts of parallel programming. A concrete suggestion for a practical realization of a parallel Genetic Algorithm at different levels of complexity is presented.
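The island model is one standard way to parallelize a Genetic Algorithm: several subpopulations evolve independently and periodically exchange their best individuals. The sketch below is a minimal sequential simulation of that idea (toy OneMax fitness, two islands, best-individual migration; all parameters are illustrative assumptions, and a real parallel GA would run the islands in separate processes or on separate nodes).

```python
import random

def fitness(bits):
    return sum(bits)                     # OneMax: count the ones

def evolve(pop, generations, rng):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:len(pop) // 2]    # truncation selection (elitist)
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(a))
            child = a[:cut] + b[cut:]    # one-point crossover
            i = rng.randrange(len(child))
            child = child[:i] + [1 - child[i]] + child[i + 1:]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return pop

rng = random.Random(42)
n_bits, island_size = 20, 12
islands = [[[rng.randint(0, 1) for _ in range(n_bits)]
            for _ in range(island_size)] for _ in range(2)]

for epoch in range(10):
    islands = [evolve(isl, generations=5, rng=rng) for isl in islands]
    # Migration: each island sends a copy of its best individual to the other.
    best0 = max(islands[0], key=fitness)
    best1 = max(islands[1], key=fitness)
    islands[0][-1] = best1[:]
    islands[1][-1] = best0[:]

best = max(fitness(ind) for isl in islands for ind in isl)
print(best)
```

Since the islands only synchronize at migration points, the scheme parallelizes with little communication, which is the main attraction discussed in the paper.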
|
469 |
An Experimental Study of Distance Sensitivity Oracles
Williams, Vincent Troy 26 October 2010 (links)
The paper "A Nearly Optimal Oracle for Avoiding Failed Vertices and Edges" by Aaron Bernstein and David Karger lays out a nearly optimal algorithm for finding the shortest distances and paths between vertices under any given single failure in constant time, without reconstructing the oracle. Using their paper as a guideline, we have implemented their algorithm in C++ and recorded each step in this thesis. Each step has its own pseudo-code and its own analysis proving that the entire oracle construction stays within the running time and total space bounds stated by the authors. The efficiency of the algorithm is compared against the brute-force method's total running time and total space. Using multiple test cases with an increasing number of vertices and edges, we have experimentally validated that their algorithm holds true to the authors' statements of space, running time, and query time.
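The brute-force baseline referred to above amounts to rerunning a shortest-path computation with the failed vertex deleted, once per query. A minimal sketch (the thesis's implementation is in C++; the graph and weights here are made up for illustration):

```python
import heapq

def dijkstra(graph, src, banned=None):
    """Shortest distances from src, optionally avoiding one failed vertex."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                     # stale heap entry
        for v, w in graph.get(u, []):
            if v == banned:
                continue                 # skip the failed vertex
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {
    'A': [('B', 1), ('D', 2)],
    'B': [('A', 1), ('C', 1)],
    'C': [('B', 1), ('D', 2)],
    'D': [('A', 2), ('C', 2)],
}
print(dijkstra(graph, 'A')['C'])             # 2, via B
print(dijkstra(graph, 'A', banned='B')['C']) # 4, detour via D
```

A distance sensitivity oracle precomputes enough information that such queries take constant time instead of a full Dijkstra run, at the cost of the one-time oracle construction whose bounds the thesis verifies.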
|
470 |
A probabilistic architecture for algorithm portfolios
Silverthorn, Bryan Connor 05 April 2013 (links)
Heuristic algorithms for logical reasoning are increasingly successful on computationally difficult problems such as satisfiability, and these solvers enable applications from circuit verification to software synthesis. Whether a problem instance can be solved, however, often depends in practice on whether the correct solver was selected and its parameters appropriately set. Algorithm portfolios leverage past performance data to automatically select solvers likely to perform well on a given instance. Existing portfolio methods typically select only a single solver for each instance. This dissertation develops and evaluates a more general portfolio method, one that computes complete solver execution schedules, including repeated runs of nondeterministic algorithms, by explicitly incorporating probabilistic reasoning into its operation. This modular architecture for probabilistic portfolios (MAPP) includes novel solutions to three issues central to portfolio operation: first, it estimates solver performance distributions from limited data by constructing a generative model; second, it integrates domain-specific information by predicting instances on which solvers exhibit similar performance; and, third, it computes execution schedules using an efficient and effective dynamic programming approximation. In a series of empirical comparisons designed to replicate past solver competitions, MAPP outperforms the most prominent alternative portfolio methods. Its success validates a principled approach to portfolio operation, offers a tool for tackling difficult problems, and opens a path forward in algorithm portfolio design.
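The scheduling step can be sketched as a small dynamic program: given tabulated probabilities that each solver finishes within a given budget (the numbers below are assumptions, where a real portfolio would estimate them from performance data), and treating solver outcomes as independent, choose an integer split of the time budget that minimizes the joint failure probability. This is a toy stand-in for MAPP's approximation, not its actual implementation.

```python
from itertools import product

# Assumed runtime distributions: F[i][t] = P(solver i succeeds within t units).
F = [
    [0.0, 0.3, 0.5, 0.6, 0.65],   # solver 0: strong early, flat tail
    [0.0, 0.1, 0.4, 0.55, 0.6],   # solver 1: slower start
]
T = 4  # total time budget

def best_schedule(F, T):
    """Dynamic program minimizing the joint failure probability."""
    fail = [1.0] * (T + 1)        # fail[b]: best failure prob. with budget b
    for Fi in F:
        fail = [min(fail[b - t] * (1.0 - Fi[t]) for t in range(b + 1))
                for b in range(T + 1)]
    return 1.0 - fail[T]

def brute_force(F, T):
    """Check: enumerate every integer split of the budget."""
    best = 0.0
    for alloc in product(range(T + 1), repeat=len(F)):
        if sum(alloc) <= T:
            miss = 1.0
            for Fi, t in zip(F, alloc):
                miss *= 1.0 - Fi[t]
            best = max(best, 1.0 - miss)
    return best

print(best_schedule(F, T), brute_force(F, T))   # both 0.7: give each solver 2 units
```

Note that the optimum here splits the budget across both solvers rather than committing to a single one, which is the advantage of schedule-based portfolios over pure solver selection.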
|