21

Iterative decoding beyond belief propagation for low-density parity-check codes / Décodage itératif pour les codes LDPC au-delà de la propagation de croyances

Planjery, Shiva Kumar 05 December 2012
Les codes Low-Density Parity-Check (LDPC) sont au coeur de la recherche des codes correcteurs d'erreurs en raison de leur excellente performance de décodage en utilisant un algorithme de décodage itératif de type propagation de croyances (Belief Propagation - BP). Cet algorithme utilise la représentation graphique d'un code, dit graphe de Tanner, et calcule les fonctions marginales sur le graphe. Même si l'inférence calculée n'est exacte que sur un graphe acyclique (arbre), l'algorithme BP estime de manière très proche les marginales sur les graphes cycliques, et les codes LDPC peuvent asymptotiquement approcher la capacité de Shannon avec cet algorithme. Cependant, sur des codes de longueurs finies dont la représentation graphique contient des cycles, l'algorithme BP est sous-optimal et donne lieu à l'apparition du phénomène dit de plancher d'erreur. Le plancher d'erreur se manifeste par la dégradation soudaine de la pente du taux d'erreur dans la zone de fort rapport signal à bruit, où des structures néfastes au décodage, connues sous le nom de trapping sets et présentes dans le graphe de Tanner du code, entraînent un échec du décodage. De plus, les effets de la quantification introduite par l'implémentation en hardware de l'algorithme BP peuvent amplifier ce problème de plancher d'erreur. Dans cette thèse, nous introduisons un nouveau paradigme pour le décodage itératif à précision finie des codes LDPC sur le canal binaire symétrique. Ces nouveaux décodeurs, appelés décodeurs itératifs à alphabet fini (Finite Alphabet Iterative Decoders – FAID) pour préciser que les messages appartiennent à un alphabet fini, sont capables de surpasser l'algorithme BP dans la région du plancher d'erreur. Les messages échangés par les FAID ne sont pas des probabilités ou vraisemblances quantifiées, et les fonctions de mise à jour des noeuds de variable ne copient en rien le décodage par BP, ce qui contraste avec les décodeurs BP quantifiés traditionnels. En effet, les fonctions de mise à jour sont de simples tables de vérité conçues pour assurer une plus grande capacité de correction d'erreur en utilisant la connaissance de topologies potentiellement néfastes au décodage présentes dans un code donné. Nous montrons que, sur de multiples codes ayant un poids colonne de trois, il existe des FAID utilisant 3 bits de précision pouvant surpasser l'algorithme BP (implémenté en précision flottante) dans la zone de plancher d'erreur, sans aucun compromis sur la latence de décodage. C'est pourquoi les FAID obtiennent des performances supérieures au BP avec seulement une fraction de sa complexité. Par ailleurs, nous proposons dans cette thèse une décimation améliorée des FAID pour les codes LDPC, intégrée dans la mise à jour des noeuds de variable. La décimation consiste à fixer certains bits du code à une valeur particulière pendant le décodage et peut réduire de manière significative le nombre d'itérations requises pour corriger un nombre d'erreurs fixé, tout en maintenant les bonnes performances d'un FAID, le rendant plus à même d'être analysé. Nous illustrons cette technique pour des FAID utilisant 3 bits de précision sur des codes de poids colonne trois. Nous montrons également comment cette décimation peut être utilisée de manière adaptative pour améliorer les capacités de correction d'erreur des FAID. Le nouveau modèle proposé de décimation adaptative a, certes, une complexité un peu plus élevée, mais améliore significativement la pente du plancher d'erreur pour un FAID donné.
Sur certains codes à haut rendement, nous montrons que la décimation adaptative des FAID permet d'atteindre des capacités de correction d'erreur proches de la limite théorique du décodage au sens du maximum de vraisemblance. / At the heart of modern coding theory lies the fact that low-density parity-check (LDPC) codes can be efficiently decoded by message-passing algorithms which are traditionally based on the belief propagation (BP) algorithm. The BP algorithm operates on a graphical model of a code known as the Tanner graph, and computes marginals of functions on the graph. While inference using BP is exact only on loop-free graphs (trees), BP still provides surprisingly close approximations to exact marginals on loopy graphs, and LDPC codes can asymptotically approach Shannon's capacity under BP decoding. However, on finite-length codes whose corresponding graphs are loopy, BP is sub-optimal and therefore gives rise to the error floor phenomenon. The error floor is an abrupt degradation in the slope of the error-rate performance of the code in the high signal-to-noise regime, where certain harmful structures, generically termed trapping sets, present in the Tanner graph of the code cause the decoder to fail. Moreover, the effects of finite precision introduced during hardware realizations of BP can further contribute to the error floor problem. In this dissertation, we introduce a new paradigm for finite-precision iterative decoding of LDPC codes over the binary symmetric channel (BSC). These novel decoders, referred to as finite alphabet iterative decoders (FAIDs) to signify that the message values belong to a finite alphabet, are capable of surpassing BP in the error floor region. The messages propagated by FAIDs are not quantized probabilities or log-likelihoods, and the variable node update functions do not mimic the BP decoder, in contrast to traditional quantized BP decoders. Rather, the update functions are simple maps designed to ensure a higher guaranteed error correction capability by using the knowledge of potentially harmful topologies that could be present in a given code. We show that on several column-weight-three codes of practical interest, there exist 3-bit precision FAIDs that can surpass BP (floating-point) in the error floor without any compromise in decoding latency. Hence, they achieve superior performance compared to BP with only a fraction of its complexity. Additionally, we propose decimation-enhanced FAIDs for LDPC codes, where the technique of decimation is incorporated into the variable node update function of FAIDs. Decimation, which involves fixing certain bits of the code to a particular value during the decoding process, can significantly reduce the number of iterations required to correct a fixed number of errors while maintaining the good performance of a FAID, thereby making such decoders more amenable to analysis. We illustrate this for 3-bit precision FAIDs on column-weight-three codes. We also show how decimation can be used adaptively to further enhance the guaranteed error correction capability of FAIDs that are already good on a given code. The proposed adaptive decimation scheme has marginally higher complexity but can significantly improve the slope of the error floor performance of a particular FAID.
On certain high-rate column-weight-three codes of practical interest, we show that adaptive decimation-enhanced FAIDs can achieve a guaranteed error-correction capability that is close to the theoretical limit achieved by maximum-likelihood decoding.
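As a rough illustration of the decoder structure this abstract describes (and not the thesis's optimized update tables), the sketch below runs a FAID-style message-passing decoder on the BSC with a 7-level message alphabet {-3, ..., +3}. The clipped-sum variable-node rule and the constants C and clip are placeholder assumptions standing in for the designed lookup maps.

```python
import numpy as np

def faid_decode(H, y, iters=20, C=2, clip=3):
    """Sketch of FAID-style decoding on the BSC. H: (m, n) binary parity-check
    matrix; y: hard-decision received word in {0, 1}^n."""
    m, n = H.shape
    ch = 1 - 2 * y.astype(int)          # channel signs: bit 0 -> +1, bit 1 -> -1
    V = H * ch                          # variable-to-check messages, init to channel
    for _ in range(iters):
        # check-node rule: sign product times minimum magnitude (min-sum-like)
        Cmsg = np.zeros_like(V)
        for i in range(m):
            idx = np.flatnonzero(H[i])
            msgs = V[i, idx]
            for k, j in enumerate(idx):
                others = np.delete(msgs, k)
                s = 0 if np.any(others == 0) else np.prod(np.sign(others))
                Cmsg[i, j] = s * np.min(np.abs(others))
        # variable-node rule: in a real FAID this is a designed lookup table
        # over the finite alphabet; a clipped sum stands in here
        for j in range(n):
            idx = np.flatnonzero(H[:, j])
            msgs = Cmsg[idx, j]
            for k, i in enumerate(idx):
                V[i, j] = np.clip(C * ch[j] + (msgs.sum() - msgs[k]), -clip, clip)
        dec = (C * ch + Cmsg.sum(axis=0) <= 0).astype(int)   # tentative decision
        if not np.any((H @ dec) % 2):   # stop once the syndrome is zero
            return dec
    return dec

# usage: parity-check matrix of the (7,4) Hamming code, one flipped bit
H = np.array([[1,1,0,1,1,0,0], [1,0,1,1,0,1,0], [0,1,1,1,0,0,1]])
print(faid_decode(H, np.array([1,0,0,0,0,0,0])))   # recovers the all-zero word
```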
22

Mecânica estatística de sistemas de reputação em redes autônomas / Statistical mechanics of reputation systems in autonomous networks

Manoel, Antonio André Monteiro 20 April 2012
Dá-se o nome de sistemas de reputação a mecanismos em que membros de uma comunidade emitem avaliações sobre os demais e a partir destas se inferem quais dos membros podem ou não ser considerados confiáveis. Apresentamos, nesta dissertação de mestrado, um estudo sobre estes sistemas. Modela-se o problema de calcular reputações a partir de avaliações não-confiáveis como um problema de inferência estatística, que é então analisado com o uso de uma técnica conhecida como propagação de crenças, permitindo que obtenhamos estimativas. Em seguida, utilizamo-nos da relação existente entre problemas de inferência e mecânica estatística para realizar um estudo analítico mais profundo, por meio de uma generalização do método de cavidade. São traçados diagramas de fase, em que se observam regiões de parâmetros para as quais o problema torna-se mais difícil de resolver; esta análise nos dá alguma intuição sobre o problema, possibilitando que sejam propostas melhorias aos métodos existentes para tratá-lo. / Reputation systems are mechanisms in which members of a community rate one another, and the ratings are used to infer which members can and cannot be trusted. In this master's dissertation we present a study of these systems. The problem of computing reputations from unreliable ratings is modeled as a statistical inference problem and analyzed with a technique known as belief propagation, which yields estimates of the reputations. We then use the connection between inference problems and statistical mechanics to carry out a deeper analytical study by means of a generalization of the cavity method. Phase diagrams are drawn, identifying regions of the parameter space in which the problem becomes harder to solve; this analysis brings insight into the problem and suggests improvements to existing methods for treating it.
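To make the inference setup concrete, here is a toy sketch (not the dissertation's model): each member has a hidden binary trustworthiness, a rating is reliable only when the rater is trustworthy, and loopy sum-product belief propagation estimates each member's marginal reputation. The potential psi and its parameter eps are invented for illustration.

```python
import numpy as np
from collections import defaultdict

def psi(r, xi, xj, eps=0.1):
    # Toy potential: a trustworthy rater (xi=1) rates j correctly with
    # probability 1-eps (r=+1 iff j trustworthy); others rate at random.
    if xi == 1:
        return 1 - eps if (r == 1) == (xj == 1) else eps
    return 0.5

def bp_reputations(ratings, n, iters=50):
    """ratings: list of (rater, ratee, r) with r in {+1, -1}.
    Returns P(member is trustworthy) for each of the n members."""
    nbrs = defaultdict(list)
    for e, (i, j, _) in enumerate(ratings):
        nbrs[i].append(e)
        nbrs[j].append(e)
    m = np.ones((len(ratings), 2, 2))   # m[e, 0]: toward ratee, m[e, 1]: toward rater

    def incoming(v, skip=None):         # product of messages arriving at node v
        out = np.ones(2)
        for e in nbrs[v]:
            if e != skip:
                out *= m[e, 0] if ratings[e][1] == v else m[e, 1]
        return out

    for _ in range(iters):
        new = np.empty_like(m)
        for e, (i, j, r) in enumerate(ratings):
            hi, hj = incoming(i, e), incoming(j, e)
            for x in (0, 1):
                new[e, 0, x] = sum(psi(r, xi, x) * hi[xi] for xi in (0, 1))
                new[e, 1, x] = sum(psi(r, x, xj) * hj[xj] for xj in (0, 1))
            new[e] /= new[e].sum(axis=1, keepdims=True)
        m = new
    b = np.array([incoming(v) for v in range(n)])
    return b[:, 1] / b.sum(axis=1)

# usage: four ratings among three members
ratings = [(0, 1, -1), (1, 0, -1), (2, 0, +1), (2, 1, -1)]
print(bp_reputations(ratings, n=3))
```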
23

Learning object boundary detection from motion data

Ross, Michael G., Kaelbling, Leslie P. 01 1900
A significant barrier to applying the techniques of machine learning to the domain of object boundary detection is the need to obtain a large database of correctly labeled examples. Inspired by developmental psychology, this paper proposes that boundary detection can be learned from the output of a motion tracking algorithm that separates moving objects from their static surroundings. Motion segmentation solves the database problem by providing cheap, unlimited, labeled training data. A probabilistic model of the textural and shape properties of object boundaries can be trained from this data and then used to efficiently detect boundaries in novel images via loopy belief propagation. / Singapore-MIT Alliance (SMA)
24

Protein side-chain placement: probabilistic inference and integer programming methods

Hong, Eun-Jong, Lozano-Pérez, Tomás 01 1900
The prediction of energetically favorable side-chain conformations is a fundamental element in homology modeling of proteins and the design of novel protein sequences. The space of side-chain conformations can be approximated by a discrete space of probabilistically representative side-chain conformations (called rotamers). The problem is, then, to find a rotamer selection for each amino acid that minimizes a potential energy function. This is called the Global Minimum Energy Conformation (GMEC) problem, an NP-hard optimization problem. The Dead-End Elimination theorem together with the A* algorithm (DEE/A*) has been successfully applied to this problem. However, DEE fails to converge for some complex instances. In this paper, we explore two alternatives to DEE/A* for solving the GMEC problem. We use a probabilistic inference method, the max-product (MP) belief-propagation algorithm, to estimate (often exactly) the GMEC. We also investigate integer programming formulations to obtain the exact solution. There are known ILP formulations that can be directly applied to the GMEC problem. We review these formulations and compare their effectiveness using CPLEX optimizers. We also present preliminary work towards applying the branch-and-price approach to the GMEC problem. The preliminary results suggest that the max-product algorithm is very effective for the GMEC problem. Though the max-product algorithm is an approximate method, its speed and accuracy are comparable to those of DEE/A* in large side-chain placement problems and may be superior in sequence design. / Singapore-MIT Alliance (SMA)
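In the negative-log (min-sum) domain, the max-product updates referred to above take a compact form. The sketch below applies them to a toy rotamer-selection instance; the data layout (unary energy arrays E1, pairwise energy matrices E2) is an assumption for illustration, not the paper's implementation.

```python
import numpy as np

def min_sum_gmec(E1, E2, iters=30):
    """Min-sum (max-product) BP for rotamer selection.
    E1[i]: length-K_i unary energies; E2[(i, j)]: K_i x K_j pairwise energies."""
    nbrs = {i: [] for i in E1}
    for (i, j) in E2:
        nbrs[i].append(j)
        nbrs[j].append(i)

    def pair(i, j):
        return E2[(i, j)] if (i, j) in E2 else E2[(j, i)].T

    # m[(i, j)]: length-K_j message from residue i to residue j
    m = {(i, j): np.zeros(len(E1[j])) for i in nbrs for j in nbrs[i]}
    for _ in range(iters):
        new = {}
        for (i, j) in m:
            h = E1[i] + sum(m[(k, i)] for k in nbrs[i] if k != j)
            new[(i, j)] = (h[:, None] + pair(i, j)).min(axis=0)
            new[(i, j)] -= new[(i, j)].min()     # normalize for stability
        m = new
    # pick the rotamer minimizing each residue's min-marginal
    return {i: int(np.argmin(E1[i] + sum(m[(k, i)] for k in nbrs[i]))) for i in E1}

# usage: three residues with two rotamers each, chain interactions (a tree,
# so min-sum is exact); the GMEC here is rotamer 0 at every residue
E1 = {0: np.array([0.0, 1.0]), 1: np.array([0.5, 0.0]), 2: np.array([0.2, 0.0])}
E2 = {(0, 1): np.array([[0.0, 2.0], [2.0, 0.0]]),
      (1, 2): np.array([[0.0, 1.0], [1.0, 0.0]])}
print(min_sum_gmec(E1, E2))   # {0: 0, 1: 0, 2: 0}
```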
25

Nonparametric Message Passing Methods for Cooperative Localization and Tracking

Savic, Vladimir January 2012
The objective of this thesis is the development of cooperative localization and tracking algorithms using nonparametric message passing techniques. In contrast to the best-known techniques, the goal is to estimate the posterior probability density function (PDF) of the position of each sensor. This problem can be solved using a Bayesian approach, but it is intractable in the general case. Nevertheless, a particle-based approximation (via a nonparametric representation) and an appropriate factorization of the joint PDFs (using message passing methods) make the Bayesian approach feasible for inference in sensor networks. The well-known method for this problem, nonparametric belief propagation (NBP), can lead to inaccurate beliefs and possible non-convergence in loopy networks. Therefore, we propose four novel algorithms that alleviate these problems: nonparametric generalized belief propagation (NGBP) based on junction trees (NGBP-JT), NGBP based on pseudo-junction trees (NGBP-PJT), NBP based on spanning trees (NBP-ST), and uniformly-reweighted NBP (URW-NBP). We also extend NBP for cooperative localization in mobile networks. In contrast to previous methods, we use optional smoothing, provide a novel communication protocol, and increase the efficiency of the sampling techniques. Moreover, we propose novel algorithms for distributed tracking, in which the goal is to track a passive object that cannot locate itself. In particular, we develop distributed particle filtering (DPF) based on three asynchronous belief consensus (BC) algorithms: standard belief consensus (SBC), broadcast gossip (BG), and belief propagation (BP). Finally, the last part of this thesis presents an experimental analysis of some of the proposed algorithms, in which we found that results based on real measurements are very similar to results based on theoretical models.
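As a simplified illustration of the particle-based messages underlying NBP (not any of the specific algorithms proposed in the thesis), the snippet below constructs a single range-measurement message: particles drawn from node u's belief are displaced by the measured distance in random directions, yielding a sample-based representation of where node v may be. The Gaussian range-noise model and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def nbp_message(particles_u, weights_u, d_uv, sigma, n_out=200):
    """One NBP-style message from node u to node v given a range measurement
    d_uv with Gaussian noise sigma. Returns particles and weights for v."""
    idx = rng.choice(len(particles_u), size=n_out, p=weights_u / weights_u.sum())
    theta = rng.uniform(0.0, 2 * np.pi, n_out)       # unknown bearing
    r = d_uv + sigma * rng.standard_normal(n_out)    # noisy ring radius
    pts = particles_u[idx] + np.c_[r * np.cos(theta), r * np.sin(theta)]
    return pts, np.full(n_out, 1.0 / n_out)          # noise folded into sampling

# usage: node u believed near the origin, v measured at distance 1.0
pu = rng.standard_normal((500, 2)) * 0.1
pts, w = nbp_message(pu, np.ones(500), d_uv=1.0, sigma=0.05)
```

Node v's belief would then be formed by multiplying such messages from several neighbors, typically via kernel density estimates and resampling.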
26

Affinity Propagation: Clustering Data by Passing Messages

Dueck, Delbert 24 September 2009
Clustering data by identifying a subset of representative examples is important for detecting patterns in data and in processing sensory signals. Such "exemplars" can be found by randomly choosing an initial subset of data points as exemplars and then iteratively refining it, but this works well only if that initial choice is close to a good solution. This thesis describes a method called "affinity propagation" that simultaneously considers all data points as potential exemplars, exchanging real-valued messages between data points until a high-quality set of exemplars and corresponding clusters gradually emerges. Affinity propagation takes as input a set of pairwise similarities between data points and finds clusters on the basis of maximizing the total similarity between data points and their exemplars. Similarity can be simply defined as negative squared Euclidean distance for compatibility with other algorithms, or it can incorporate richer domain-specific models (e.g., translation-invariant distances for comparing images). Affinity propagation’s computational and memory requirements scale linearly with the number of similarities input; for non-sparse problems where all possible similarities are computed, these requirements scale quadratically with the number of data points. Affinity propagation is demonstrated on several applications from areas such as computer vision and bioinformatics, and it typically finds better clustering solutions than other methods in less time.
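For reference, the message updates sketched in this abstract have a well-known closed form (Frey and Dueck, 2007): r(i,k) <- s(i,k) - max_{k' != k}[a(i,k') + s(i,k')] and, for i != k, a(i,k) <- min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k))), with a(k,k) accumulating the positive responsibilities. A compact NumPy sketch with damping follows; tie handling and the final assignment step are simplified relative to production implementations.

```python
import numpy as np

def affinity_propagation(S, iters=200, damping=0.9):
    """Sketch of affinity propagation. S: (n, n) similarities; the diagonal
    S[k, k] holds the preferences that control how many exemplars emerge."""
    n = S.shape[0]
    R = np.zeros((n, n))                     # responsibilities r(i, k)
    A = np.zeros((n, n))                     # availabilities a(i, k)
    for _ in range(iters):
        # r(i, k) <- s(i, k) - max_{k' != k} [a(i, k') + s(i, k')]
        M = A + S
        top = M.max(axis=1)
        second = np.partition(M, -2, axis=1)[:, -2]
        best_other = np.where(M == top[:, None], second[:, None], top[:, None])
        R = damping * R + (1 - damping) * (S - best_other)
        # a(i, k) <- min(0, r(k, k) + sum_{i' not in {i, k}} max(0, r(i', k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())   # keep r(k, k) unclamped
        Anew = Rp.sum(axis=0)[None, :] - Rp
        diag = Anew.diagonal().copy()        # a(k, k) is not clamped at zero
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, diag)
        A = damping * A + (1 - damping) * Anew
    exemplars = np.flatnonzero((A + R).diagonal() > 0)
    labels = np.argmax(S[:, exemplars], axis=1) if len(exemplars) else None
    return exemplars, labels                 # labels index into exemplars

# usage: five 1-D points in two groups; expect one exemplar per group
X = np.array([0.0, 0.1, 0.2, 5.0, 5.1])[:, None]
S = -((X - X.T) ** 2)                        # negative squared Euclidean distance
np.fill_diagonal(S, np.median(S))            # shared preference
print(affinity_propagation(S))
```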
27

Estimation d'un mouvement de caméra et problèmes connexes / Camera motion estimation and related problems

Jonchery, Claire 06 November 2006
This thesis addresses the problem of estimating the motion of a camera filming a static scene from the resulting image sequence. The proposed method applies to the estimation of the motion between two consecutive frames and relies on the determination of a quadratic 2D deformation. From the estimated motion, we then study the problem of estimating the structure of the filmed scene. To do so, we apply a belief propagation method directly to a pair of images, without rectification, using the motion estimate. Finally, we examine the injectivity of the map that associates an optical flow with a camera motion and a scene structure. Given two camera motions, we describe the observation domain in which the generated flows may be identical, and the filmed surfaces that, combined with the two motions, produce these ambiguous flows.
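Assuming the deformation is the classical 8-parameter quadratic flow model (the thesis's exact parametrization and estimation criterion may differ), a least-squares fit from point correspondences looks like the following sketch.

```python
import numpy as np

# Quadratic 2D motion model, an assumption for illustration:
#   u = a1 + a2*x + a3*y + a7*x^2 + a8*x*y
#   v = a4 + a5*x + a6*y + a7*x*y + a8*y^2

def fit_quadratic_motion(p, q):
    """p, q: (N, 2) matched points in frames 1 and 2; returns the 8 parameters."""
    x, y = p[:, 0], p[:, 1]
    u, v = (q - p).T                    # observed displacements
    z, o = np.zeros_like(x), np.ones_like(x)
    rows_u = np.stack([o, x, y, z, z, z, x * x, x * y], axis=1)
    rows_v = np.stack([z, z, z, o, x, y, x * y, y * y], axis=1)
    a, *_ = np.linalg.lstsq(np.vstack([rows_u, rows_v]),
                            np.concatenate([u, v]), rcond=None)
    return a

# usage: synthetic correspondences generated by a known quadratic deformation
rng = np.random.default_rng(0)
p = rng.uniform(-1, 1, (100, 2))
a_true = np.array([0.1, 0.02, -0.01, -0.05, 0.0, 0.03, 0.004, -0.002])
x, y = p[:, 0], p[:, 1]
q = p + np.stack([a_true[0] + a_true[1]*x + a_true[2]*y + a_true[6]*x*x + a_true[7]*x*y,
                  a_true[3] + a_true[4]*x + a_true[5]*y + a_true[6]*x*y + a_true[7]*y*y],
                 axis=1)
print(np.allclose(fit_quadratic_motion(p, q), a_true))   # True
```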
28

Multi-scale error-correcting codes and their decoding using belief propagation

Yoo, Yong Seok 25 June 2014
This work is motivated by error-correcting codes in the brain. To counteract the effect of representation noise, a large number of neurons participate in encoding even low-dimensional variables. In many brain areas, the mean firing rates of neurons as a function of the represented variable, called tuning curves, have a unimodal shape centered at different values, defining a unary code. This dissertation focuses on a new type of neural code in which neurons have periodic tuning curves with a diversity of periods. Neurons that exhibit this tuning are the grid cells of the entorhinal cortex, which represent self-location in two-dimensional space. First, we investigate the mutual information between such multi-scale codes and the coded variable as a function of tuning curve width. For decoding, we consider maximum likelihood (ML) and plausible neural network (NN) based models. For unary neural codes, Fisher information increases with narrower tuning, regardless of the decoding method. By contrast, for the multi-scale neural code, the optimal tuning curve width depends on the decoding method. While narrow tuning is optimal for ML decoding, a finite width, matched to the statistics of the noise, is optimal with a NN decoder. This finding may explain why actual neural tuning curves have relatively wide tuning. Next, motivated by the observation that multi-scale codes involve non-trivial decoding, we examine a decoding algorithm based on belief propagation (BP), because BP promises certain gains in decoding efficiency. The decoding problem is first formulated as a subset selection problem on a graph and then approximately solved by BP. Even though the graph has many cycles, BP converges to a fixed point after a few iterations. The mean square error of BP approaches that of ML at high signal-to-noise ratios. Finally, using the multi-scale code, we propose a joint source-channel coding scheme that allows separate senders to transmit complementary information over additive Gaussian noise channels without cooperation. The receiver decodes one sender's codeword using the other as side information and achieves a lower distortion using the same number of transmissions. The proposed scheme offers a new framework for designing distributed joint source-channel codes for continuous variables.
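To illustrate the multi-scale code and ML decoding discussed above, here is a toy sketch assuming Poisson spiking and von Mises-shaped periodic tuning curves; the periods, gains, and widths below are invented for illustration, not the dissertation's values.

```python
import numpy as np

periods = np.array([0.3, 0.43, 0.62, 0.89])     # diversity of spatial periods
phases = np.linspace(0, 1, 8, endpoint=False)   # 8 phase offsets per period
GAIN, KAPPA = 20.0, 2.0                         # peak-rate scale, tuning width

def rates(x):
    # firing rate of every neuron (one per period/phase pair) at position x
    ph = 2 * np.pi * (x / periods[:, None] - phases[None, :])
    return GAIN * np.exp(KAPPA * (np.cos(ph) - 1))

def ml_decode(spike_counts, grid=np.linspace(0, 1, 2001)):
    # Poisson log-likelihood sum_i [n_i * log f_i(x) - f_i(x)], maximized on a grid
    ll = [np.sum(spike_counts * np.log(rates(x)) - rates(x)) for x in grid]
    return grid[int(np.argmax(ll))]

# usage: simulate a population response at x = 0.37 and decode it
rng = np.random.default_rng(1)
counts = rng.poisson(rates(0.37))
print(ml_decode(counts))    # estimate close to 0.37
```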
29

Belief Propagation Decoding of Finite-Length Polar Codes

Rajaie, Tarannom 01 February 2012
Polar codes, recently invented by Arikan, are the first class of codes known to achieve the symmetric capacity for a large class of channels. The symmetric capacity is the highest achievable rate subject to using the binary input letters of the channel with equal probability. Polar code construction is based on a phenomenon called channel polarization. The encoding as well as the decoding operation of polar codes can be implemented with O(N log N) complexity, where N is the blocklength of the code. In this work, we study the factor graph representations of finite-length polar codes and their effect on the belief propagation (BP) decoding process over the binary erasure channel (BEC). In particular, we study the parity-check-based (H-based) as well as the generator-based (G-based) factor graphs of polar codes. As these factor graphs are not unique for a code, we study and compare the performance of BP decoders on a number of well-known graphs. Error rates and complexities are reported for a number of cases. Comparisons are also made with the successive cancellation (SC) decoder. High error rates are related to the so-called stopping sets of the underlying graphs. We discuss the pros and cons of the BP decoder over the SC decoder for various code lengths. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2012-01-31 17:10:59.955
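Over the BEC, BP on a parity-check-based factor graph reduces to iterative erasure filling: any check with exactly one erased neighbor determines that bit, and decoding stalls exactly on a stopping set. A generic sketch follows, shown on a small example matrix rather than a polar code's H.

```python
import numpy as np

def peel_bec(H, y):
    """BP/peeling decoding over the BEC. H: binary parity-check matrix;
    y: received word with entries in {0, 1} or None (erasure)."""
    H = np.asarray(H)
    y = list(y)
    progress = True
    while progress:
        progress = False
        for row in H:
            erased = [j for j in np.nonzero(row)[0] if y[j] is None]
            if len(erased) == 1:    # check with one erased bit: solve it
                j = erased[0]
                known = sum(y[k] for k in np.nonzero(row)[0] if y[k] is not None)
                y[j] = known % 2
                progress = True
    return y    # any remaining None entries form a stopping set

# usage: two erased positions in a codeword of a small example code
H = [[1,1,0,1,1,0,0], [1,0,1,1,0,1,0], [0,1,1,1,0,0,1]]
print(peel_bec(H, [1, None, 0, None, 0, 0, 1]))   # fills in [1,0,0,1,0,0,1]
```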
