1

Early-Decision Decoding of LDPC Codes

Blad, Anton January 2009 (has links)
Since their rediscovery in 1995, low-density parity-check (LDPC) codes have received widespread attention as practical capacity-approaching code candidates. It has been shown that this class of codes can perform arbitrarily close to the channel capacity, and LDPC codes are also used or suggested for a number of important current and future communication standards. However, the problem of implementing an energy-efficient decoder has not yet been solved. Whereas the decoding algorithm is computationally simple, with uncomplicated arithmetic operations and low accuracy requirements, the random structure and irregularity of a theoretically well-defined code do not easily allow efficient VLSI implementations. Thus the LDPC decoding algorithm can be said to be communication-bound rather than computation-bound.

In this thesis, a modification to the sum-product decoding algorithm called early-decision decoding is suggested. The modification is based on the idea that the values of the bits in a block can be decided individually during decoding. As the sum-product decoding algorithm is a soft-decision decoder, a reliability can be defined for each bit. When the reliability of a bit is above a certain threshold, the bit can be removed from the rest of the decoding process, and thus the internal communication associated with the bit can be removed in subsequent iterations. However, the early-decision modification introduces an increased error probability. Thus, bounds on the achievable performance, as well as methods to detect graph inconsistencies resulting from erroneous decisions, are presented. Also, a hybrid decoder achieving a negligible performance penalty compared to the sum-product decoder is presented. With the hybrid decoder, the internal communication is reduced by up to 40% for a rate-1/2 code with a length of 1152 bits, whereas increasing the rate allows significantly higher gains.

The algorithms have been implemented in a Xilinx Virtex 5 FPGA, and the resulting slice utilization and energy dissipation have been estimated. Due to the increased logic overhead of the early-decision decoder, the slice utilization increases from 14.5% to 21.0%, and the logic energy dissipation reduction from 499 pJ to 291 pJ per iteration and bit is partly offset by the clock distribution power, which increases from 141 pJ to 191 pJ per iteration and bit. Still, the early-decision decoder shows a net 16% estimated decrease in energy dissipation.
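The early-decision idea described above lends itself to a compact sketch. The following toy decoder is a simplified illustration only (a min-sum variant on a hypothetical 6-bit code, with hypothetical threshold and parity-check matrix, not the thesis's actual decoder): bits whose log-likelihood-ratio magnitude exceeds a threshold are "decided", stop receiving messages, and have their LLRs frozen, which is what removes the associated internal communication.

```python
# Toy sketch of early-decision decoding (illustrative only; the code,
# threshold, and scheduling here are hypothetical, not the thesis's).
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],   # parity-check matrix of a toy code
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def decode_early_decision(llr_ch, H, threshold=8.0, max_iter=20):
    m, n = H.shape
    msgs = np.zeros((m, n))             # check-to-variable messages
    decided = np.zeros(n, dtype=bool)   # early-decided (frozen) bits
    llr = llr_ch.astype(float).copy()
    for _ in range(max_iter):
        new_msgs = msgs.copy()
        for c in range(m):
            nbrs = np.flatnonzero(H[c])
            for v in nbrs:
                if decided[v]:
                    continue            # no messages sent to decided bits
                # min-sum check-node update from extrinsic information
                others = [llr[u] - msgs[c, u] for u in nbrs if u != v]
                sign = 1.0
                for t in others:
                    sign = -sign if t < 0 else sign
                new_msgs[c, v] = sign * min(abs(t) for t in others)
        msgs = new_msgs
        active = np.flatnonzero(~decided)
        llr[active] = llr_ch[active] + msgs[:, active].sum(axis=0)
        decided |= np.abs(llr) > threshold   # freeze reliable bits
        hard = (llr < 0).astype(int)
        if not ((H @ hard) % 2).any():  # all parity checks satisfied
            return hard
    return (llr < 0).astype(int)
```

Once a bit is frozen, its column of messages stops being updated, so the per-iteration message traffic shrinks as decoding progresses, mirroring the communication reduction discussed in the abstract.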
2

Convergence Analysis for Inertial Krasnoselskii-Mann Type Iterative Algorithms

Huang, Wei-Shiou 16 February 2011 (has links)
We consider the problem of finding a common fixed point of an infinite family $\{T_n\}$ of nonlinear self-mappings of a closed convex subset $C$ of a real Hilbert space $H$. Namely, we want to find a point $x$ with the property (assuming such common fixed points exist): $$x \in \bigcap_{n=1}^{\infty} \mathrm{Fix}(T_n).$$ We will use the Krasnoselskii-Mann (KM) type inertial iterative algorithms of the form $$x_{n+1} = ((1-\alpha_n)I+\alpha_n T_n)y_n,\quad y_n = x_n + \beta_n(x_n-x_{n-1}). \eqno(*)$$ We discuss the convergence properties of the sequence $\{x_n\}$ generated by this algorithm (*). In particular, we prove that $\{x_n\}$ converges weakly to a common fixed point of the family $\{T_n\}$ under certain conditions imposed on the sequences $\{\alpha_n\}$ and $\{\beta_n\}$.
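The iteration (*) is easy to experiment with numerically. The sketch below is an illustrative toy, not part of the thesis: T is a single contractive (hence nonexpansive) affine map with fixed point (1, 1), and the parameter choices alpha_n = 1/2 and summable inertial weights beta_n = 1/(n+1)^2 are hypothetical.

```python
# Toy sketch of the inertial Krasnoselskii-Mann iteration (*):
#   y_n     = x_n + beta_n * (x_n - x_{n-1})          (inertial step)
#   x_{n+1} = (1 - alpha_n) * y_n + alpha_n * T(y_n)  (KM averaging)
# T and the parameter sequences below are hypothetical illustrations.

def T(x):
    # Nonexpansive (indeed contractive) affine map with fixed point (1, 1).
    return [1 + 0.9 * (x[0] - 1), 1 + 0.9 * (x[1] - 1)]

def inertial_km(x0, x_prev, n_iter=200):
    x_old, x = list(x_prev), list(x0)
    for n in range(1, n_iter + 1):
        alpha = 0.5                   # relaxation parameter alpha_n
        beta = 1.0 / (n + 1) ** 2     # summable inertial weights beta_n
        y = [x[i] + beta * (x[i] - x_old[i]) for i in range(2)]
        Ty = T(y)
        x_old, x = x, [(1 - alpha) * y[i] + alpha * Ty[i] for i in range(2)]
    return x

x_star = inertial_km([5.0, -3.0], [5.0, -3.0])
```

In this toy setting the iterates converge to the fixed point (1, 1); the thesis's results concern the much more delicate case of weak convergence for an infinite family of merely nonexpansive mappings.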
4

On the numerical solution of continuous coupled algebraic Riccati equations

Rajasingam, Prasanthan 01 May 2016 (has links)
In this dissertation we first derive a new unified upper solution bound for the continuous coupled algebraic Riccati equation, which arises from the optimal control of a Markovian jump linear system. In particular, we address the issue of rank deficiency with the control matrices. In the case of rank deficiency the existing matrix upper bounds are inapplicable. Moreover, our new result is not restricted to rank deficiency cases only. It now contains the existing results as special cases. Next, an iterative refinement is presented to improve our new unified matrix upper solution bounds. In particular, this iterative refinement determines a monotonically decreasing sequence of upper bounds for the solution of the continuous coupled algebraic Riccati equation. We formulate a new iterative algorithm by modifying this iterative refinement. We also prove that this new algorithm generates a monotonically decreasing sequence of matrix upper solution bounds that converges to the maximal solution of the continuous coupled algebraic Riccati equation. Furthermore, we prove the convergence of an accelerated Riccati iteration which computes a positive semidefinite solution of the continuous coupled algebraic Riccati equation. In particular, we establish sufficient conditions for the convergence of this algorithm. We also prove that for particular initial values this algorithm determines a monotonically increasing sequence of positive semidefinite matrices that converge to the minimal solution of the continuous coupled algebraic Riccati equation. Additionally, we show that for specific initial values this algorithm generates a monotonically decreasing sequence that converges to the maximal solution of the continuous coupled algebraic Riccati equation. In addition, we prove that this accelerated Riccati iteration converges faster than the Riccati iteration. Finally, we formulate a weighted modified accelerated Riccati iteration which is a more generalized Riccati type iteration. 
All of the existing Riccati iterations are now special cases of this algorithm. Furthermore, we establish sufficient conditions for the convergence of this algorithm and prove the monotonic convergence of the sequence it generates. We also discuss how the weight and other quantities affect its rate of convergence. Illustrative numerical examples are also presented.
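The flavor of such a Riccati-type iteration can be sketched on a toy scalar instance. Everything below is hypothetical illustration (scalar data, two Markov modes, a plain fixed-point scheme), not the dissertation's general matrix algorithms: starting from zero, each sweep solves a decoupled scalar Riccati equation with the coupling terms held at the previous iterate, producing a monotonically increasing sequence that converges to a solution.

```python
# Toy scalar sketch of a Riccati-type iteration for a continuous coupled
# algebraic Riccati equation (CCARE) with two Markov modes:
#   2*a[i]*x[i] - s[i]*x[i]**2 + q[i] + sum_j pi[i][j]*x[j] = 0,  i = 0, 1
# All data are hypothetical; the dissertation treats the matrix case.
import math

a = [-1.0, -0.5]          # mode dynamics (stable scalars)
s = [1.0, 1.0]            # s[i] = b[i]**2 / r[i]
q = [1.0, 1.0]            # state weights
pi = [[-1.0, 1.0],        # generator of the Markov jump process
      [1.0, -1.0]]

def riccati_iteration(n_iter=100):
    x = [0.0, 0.0]        # starting below the solution gives monotone increase
    for _ in range(n_iter):
        x_new = []
        for i in range(2):
            beta = 2 * a[i] + pi[i][i]
            # coupling terms frozen at the previous iterate
            c = q[i] + sum(pi[i][j] * x[j] for j in range(2) if j != i)
            # maximal root of s[i]*z**2 - beta*z - c = 0
            x_new.append((beta + math.sqrt(beta ** 2 + 4 * s[i] * c))
                         / (2 * s[i]))
        x = x_new
    return x

x = riccati_iteration()
residuals = [abs(2 * a[i] * x[i] - s[i] * x[i] ** 2 + q[i]
                 + sum(pi[i][j] * x[j] for j in range(2))) for i in range(2)]
```

At convergence the residual of each coupled equation vanishes, and the positive semidefiniteness claimed in the abstract reduces here to positivity of the scalar solution.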
5

Iterative algorithms for trust and reputation management and recommender systems

Ayday, Erman 10 November 2011 (has links)
This thesis investigates both theoretical and practical aspects of the design and analysis of iterative algorithms for trust and reputation management and recommender systems. It also studies the application of iterative trust and reputation management mechanisms in ad-hoc networks and P2P systems. First, an algebraic and iterative trust and reputation management scheme (ITRM) is proposed. The proposed ITRM can be applied to centralized schemes, in which a central authority collects the reports and forms the reputations of the service providers (sellers) as well as the report/rating trustworthiness of the (service) consumers (buyers). It is shown that ITRM is robust in filtering out the peers who provide unreliable ratings. Next, the first application of the belief propagation algorithm, a fully iterative probabilistic algorithm, to trust and reputation management (BP-ITRM) is proposed. In BP-ITRM, the reputation management problem is formulated as an inference problem, described as computing marginal likelihood distributions from complicated global functions of many variables. However, computing these marginal probability functions directly is computationally prohibitive for large-scale reputation systems. Therefore, the belief propagation algorithm is utilized to efficiently (in linear complexity) compute these marginal probability distributions. In BP-ITRM, the reputation system is modeled using a factor graph, and reputation values of the service providers (sellers) are computed by iterative probabilistic message passing between the factor and variable nodes on the graph. It is shown that BP-ITRM is reliable in filtering out malicious/unreliable reports. It is proven that BP-ITRM iteratively reduces the error in the reputation values of service providers due to malicious raters with high probability.
Further, comparison of BP-ITRM with some well-known and commonly used reputation management techniques (e.g., the Averaging Scheme, the Bayesian Approach and Cluster Filtering) indicates the superiority of the proposed scheme in terms of both robustness against attacks and efficiency. The introduction of belief propagation and iterative message-passing methods to trust and reputation management has opened up several research directions. Thus, next, the first application of the belief propagation algorithm in the design of recommender systems (BPRS) is proposed. In BPRS, recommendations (predicted ratings) for each active user are iteratively computed by probabilistic message passing between variable and factor nodes in a factor graph. It is shown that, as opposed to previous recommender algorithms, BPRS does not require solving the recommendation problem for all users if it wishes to update the recommendations for only a single active user using the most recent data (ratings). Further, BPRS computes the recommendations for each user with linear complexity, without requiring a training period, while remaining comparable to state-of-the-art methods such as the correlation-based neighborhood model (CorNgbr) and Singular Value Decomposition (SVD) in terms of rating and precision accuracy. This work also explores fundamental research problems related to the application of iterative and probabilistic reputation management systems in various fields (such as ad-hoc networks and P2P systems). A distributed malicious node detection mechanism is proposed for delay tolerant networks (DTNs) using ITRM, which enables every node to evaluate other nodes based on their past behavior, without requiring a central authority. Further, for the first time, the belief propagation algorithm is utilized in the design and evaluation of distributed trust and reputation management systems for P2P networks.
Several schemes are extensively simulated and are compared to demonstrate the effectiveness of the iterative algorithms and belief propagation on these applications.
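The core loop of an iterative trust-and-reputation scheme of the kind discussed above can be sketched generically. The update rules below are hypothetical (a simple trust-weighted averaging with a deviation-based trust penalty), meant only to convey the flavor of ITRM-style iterative filtering, not the thesis's actual algorithm.

```python
# Generic sketch in the spirit of iterative trust/reputation management:
# reputations are trust-weighted averages of ratings, and each rater's
# trust is lowered according to how far its ratings deviate from the
# consensus. Update rules and the penalty factor 10 are hypothetical.

def iterative_reputation(ratings, n_iter=20):
    # ratings[r][p] = rating given by rater r to provider p (or None)
    n_raters, n_prov = len(ratings), len(ratings[0])
    trust = [1.0] * n_raters
    rep = [0.0] * n_prov
    for _ in range(n_iter):
        for p in range(n_prov):
            num = sum(trust[r] * ratings[r][p] for r in range(n_raters)
                      if ratings[r][p] is not None)
            den = sum(trust[r] for r in range(n_raters)
                      if ratings[r][p] is not None)
            rep[p] = num / den
        for r in range(n_raters):
            devs = [(ratings[r][p] - rep[p]) ** 2 for p in range(n_prov)
                    if ratings[r][p] is not None]
            mse = sum(devs) / len(devs)
            trust[r] = 1.0 / (1.0 + 10.0 * mse)  # high deviation -> low trust
    return rep, trust

# Three consistent raters and one malicious rater inflating provider 1.
ratings = [[0.8, 0.2], [0.9, 0.3], [0.85, 0.25], [0.1, 1.0]]
rep, trust = iterative_reputation(ratings)
```

After a few iterations the deviating rater's trust collapses, so its inflated rating barely moves the reputations, which is the qualitative behavior (filtering out unreliable raters) that ITRM and BP-ITRM establish rigorously.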
6

Hybrid Steepest-Descent Methods for Variational Inequalities

Huang, Wei-ling 26 June 2006 (has links)
Assume that F is a nonlinear operator on a real Hilbert space H which is strongly monotone and Lipschitzian on a nonempty closed convex subset C of H. Assume also that C is the intersection of the fixed point sets of a finite number of nonexpansive mappings on H. We make a slight modification of the iterative algorithm in Xu and Kim (Journal of Optimization Theory and Applications, Vol. 119, No. 1, pp. 185-201, 2003), which generates a sequence {xn} from an arbitrary initial point x0 in H. The sequence {xn} is shown to converge in norm to the unique solution u* of the variational inequality, under conditions on the parameters different from those imposed by Xu and Kim. Applications to the constrained generalized pseudoinverse are included. The results presented in this paper are complementary to Xu and Kim's theorems (Journal of Optimization Theory and Applications, Vol. 119, No. 1, pp. 185-201, 2003).
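A hybrid steepest-descent iteration of the Xu-Kim type can be sketched on a toy problem. Everything below is a hypothetical illustration (a two-dimensional space, half-space projections as the nonexpansive mappings, F(x) = x - b, and the step-size rule 1/(n+2)), not the paper's actual scheme or parameter conditions: each step applies one of the cyclically chosen nonexpansive maps and then takes a small steepest-descent correction along F.

```python
# Toy sketch of a hybrid steepest-descent iteration:
#   x_{n+1} = T_{[n+1]} x_n - lam_n * mu * F(T_{[n+1]} x_n),
# where T_[n] cycles through nonexpansive maps whose common fixed-point
# set is C, and F is strongly monotone and Lipschitzian. All problem
# data here are hypothetical.

def project_halfspace(x, coord, bound):
    # Projection onto {x : x[coord] <= bound} -- a nonexpansive map.
    y = list(x)
    y[coord] = min(y[coord], bound)
    return y

def F(x):
    # F(x) = x - b with b = (3, 0.5): strongly monotone and Lipschitzian.
    return [x[0] - 3.0, x[1] - 0.5]

def hybrid_steepest_descent(x0, n_iter=2000, mu=1.0):
    x = list(x0)
    maps = [lambda v: project_halfspace(v, 0, 1.0),
            lambda v: project_halfspace(v, 1, 1.0)]
    for n in range(n_iter):
        lam = 1.0 / (n + 2)        # lam_n -> 0 with divergent sum
        z = maps[n % 2](x)         # T_{[n+1]} x_n (cyclic order)
        fz = F(z)
        x = [z[i] - lam * mu * fz[i] for i in range(2)]
    return x

u = hybrid_steepest_descent([4.0, 4.0])
```

Here C = {x : x[0] <= 1, x[1] <= 1}, and the variational-inequality solution for F(x) = x - b is the projection of b = (3, 0.5) onto C, namely (1, 0.5), which the iterates approach as lam_n vanishes.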
7

A Non-iterative Pressure Based Algorithm For The Computation Of Reacting Radiating Flows

Uygur, Ahmet Bilge 01 March 2007 (has links) (PDF)
A non-iterative pressure-based algorithm, which consists of splitting the solution of the momentum, energy, and species equations into a sequence of predictor-corrector stages, was developed for the simulation of transient reacting radiating flows. A semi-discrete approach called the Method of Lines (MOL), which enables implicit time integration at all splitting stages, was used for the solution of the conservation equations. The solution of the elliptic pressure equation for the determination of the pressure field was performed by a multi-grid solver (the MUDPACK package). Radiation calculations were carried out by coupling previously developed gray and non-gray radiation models with the algorithm. A first-order (global) reaction mechanism was employed to account for the chemistry. The predictions of the algorithm for two test cases, i) non-isothermal turbulent pipe flow and ii) a laminar methane-air diffusion flame, were benchmarked against experimental data and numerical solutions available in the literature, and the capability of the code to predict transient solutions was demonstrated on these test cases. Favorable agreement was obtained for both test cases. The effects of radiation and of the non-gray treatment of the radiative properties were investigated on the second test case. It was found that incorporation of radiation has a significant effect on the temperature and velocity fields, but its effect on species predictions is limited. Executions with both radiation models revealed that the non-gray radiation model considered in the present study produces results similar to those of the gray model at a considerably higher computational cost. The algorithm developed was found to be an efficient and versatile tool for the time-dependent simulation of different flow scenarios and constitutes an initial step towards the computation of transient turbulent combustion.
8

On a class of distributed algorithms over networks and graphs

Lee, Sang Hyun, 1977- 01 June 2011 (has links)
Distributed iterative algorithms are of great importance, as they are known to provide low-complexity, approximate solutions to what are otherwise high-dimensional intractable optimization problems. The theory of message-passing based algorithms is fairly well developed in the coding, machine learning and statistical physics literatures. Even though several applications of message-passing algorithms have already been identified, this work aims at establishing that a plethora of other applications exist where they can be of great importance. In particular, the goal of this work is to develop and demonstrate applications of this class of algorithms in network communications and computational biology. In the domain of communications, message-passing based algorithms provide distributed ways of inferring the optimal solution, without the aid of a central agent, for various optimization problems that arise in the resource allocation of communication networks. Our main framework is Affinity Propagation (AP), originally developed for clustering problems. We reinterpret this framework to unify the development of distributed algorithms for discrete resource allocation problems. Also, we consider a network-coded communication network, where continuous rate allocation is studied. We formulate an optimization problem with a linear cost function, and then utilize a Belief Propagation (BP) approach to determine a decentralized rate allocation strategy. Next, we move to the domain of computational biology, where graphical representations play a major role. First, we consider the motif finding problem with several DNA sequences. In effect, this is a sequence matching problem, which can be modeled using various graphical representations and solved using low-complexity algorithms based on message-passing techniques.
In addition, we address the application of message-passing algorithms to a DNA sequencing problem, where the one-dimensional structure of a single DNA sequence is identified. We reinterpret the problem as being equivalent to the decoding of a nonlinear code. Based on the iterative decoding framework, we develop an appropriate graphical model which enables us to derive a message-passing algorithm that improves the performance of DNA sequencing. Although this work spans the disparate application domains of communications, networks and computational biology, graphical models and distributed message-passing algorithms form a common underlying theme.
9

Low Complexity Iterative Algorithms for Channel Coding and Compressed Sensing

Danjean, Ludovic 29 November 2012 (has links)
Iterative algorithms are now widely used in all areas of signal processing and digital communications. In modern communication systems, iterative algorithms are used for decoding low-density parity-check (LDPC) codes, a popular class of error-correcting codes widely adopted for their exceptional error-rate performance. In a more recent field known as compressed sensing, iterative algorithms are used as a reconstruction method to recover a sparse signal from a linear set of measurements. This thesis primarily deals with the development of low-complexity iterative algorithms for the two aforementioned fields: the design of low-complexity decoding algorithms for LDPC codes, and the development and analysis of a low-complexity reconstruction algorithm called the Interval-Passing Algorithm (IPA) for compressed sensing.

In the first part of this thesis, we address decoding algorithms for LDPC codes. It is well known that LDPC codes suffer from the error-floor phenomenon in spite of their exceptional performance, where traditional iterative decoders based on belief propagation (BP) fail for certain low-noise configurations. Recently, a novel class of decoders called finite alphabet iterative decoders (FAIDs) was proposed, capable of surpassing BP in the error floor at much lower complexity. In this work, we focus on the problem of selecting particularly good FAIDs for column-weight-three codes over the binary symmetric channel (BSC). Traditional methods for decoder selection use asymptotic techniques such as density evolution, which do not guarantee good performance on finite-length codes, especially in the error-floor region. Instead, we propose a selection methodology that relies on the knowledge of potentially harmful topologies that could be present in a code, using the concept of noisy trapping sets. Numerical results show that FAIDs selected with our methodology outperform BP in the error floor on several codes.

In the second part of this thesis, we address iterative reconstruction algorithms for compressed sensing. Iterative algorithms have been proposed for compressed sensing in order to tackle the complexity of the LP reconstruction method. In this work, we modify and analyze a low-complexity reconstruction algorithm called the IPA, which uses sparse matrices as measurement matrices. Similar to what has been done for decoding algorithms in coding theory, we analyze the failures of the IPA and link them to the stopping sets of the binary representation of the sparse measurement matrices used. The performance of the IPA makes it a good trade-off between the complex $\ell_1$-minimization reconstruction and the very simple verification decoding.
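The interval message passing at the heart of the IPA can be sketched for the simplest setting. The code below is a simplified illustration only (nonnegative signal, binary measurement matrix, a plain sequential update schedule), not the exact algorithm analyzed in the thesis: each variable maintains an interval [lo, hi], and each measurement bounds a variable by the measured value minus the other variables' current bounds.

```python
# Simplified interval-passing sketch for recovering a nonnegative sparse
# signal x from y = A x, with A a binary (0/1) measurement matrix.
# The update schedule and details here are hypothetical simplifications
# of the IPA studied in the thesis.

def interval_passing(A, y, n_iter=20):
    m, n = len(A), len(A[0])
    lo = [0.0] * n                                   # lower bounds on x_v
    hi = [min(y[c] for c in range(m) if A[c][v])     # upper bounds on x_v
          for v in range(n)]
    for _ in range(n_iter):
        for v in range(n):
            checks = [c for c in range(m) if A[c][v]]
            # measurement c bounds x_v by y_c minus the other variables
            new_lo = max(max(y[c] - sum(hi[u] for u in range(n)
                             if A[c][u] and u != v) for c in checks), 0.0)
            new_hi = min(y[c] - sum(lo[u] for u in range(n)
                         if A[c][u] and u != v) for c in checks)
            lo[v], hi[v] = new_lo, new_hi
    return lo, hi

A = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]   # toy binary measurement matrix
y = [2.0, 2.0, 0.0]                     # measurements of x = [0, 2, 0]
lo, hi = interval_passing(A, y)
```

When the intervals collapse (lo = hi), the signal is recovered exactly; the failures analyzed in the thesis correspond to configurations, tied to stopping sets, where some intervals never collapse.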
10

Parallel applications execution in heterogeneous and volatile environments: mapping and virtualization

Miquée, Sébastien 25 January 2012 (has links)
Current technology allows scientists in several domains to obtain increasingly precise and voluminous data. At the same time, computing architectures evolve to provide ever more computing resources, notably through more powerful machines and their pooling. In this thesis, we first study the mapping of the tasks of asynchronous iterative applications onto heterogeneous and volatile environments. Our solution also overcomes the heterogeneity of the host machines while offering an easier implementation of fault-tolerance policies. The experiments we have conducted are encouraging and show that there is real potential for using such a platform to run scientific applications.
