11 |
Computation with continuous mode CMOS circuits in image processing and probabilistic reasoning. Mroszczyk, Przemyslaw. January 2014
The objective of the research presented in this thesis is to investigate alternative ways of information processing employing asynchronous, data-driven, and analogue computation in massively parallel cellular processor arrays, with applications in machine vision and artificial intelligence. The use of cellular processor architectures with only local neighbourhood connectivity is considered in VLSI realisations of trigger-wave propagation for binary image processing and of Bayesian inference. Design issues critical to computational precision and system performance are extensively analysed, accounting for the non-ideal operation of MOS devices caused by second-order effects, noise, and parameter mismatch. In particular, CMOS hardware solutions for two specific tasks, binary image skeletonization and the sum-product algorithm for belief propagation in factor graphs, are considered, targeting efficient design in terms of processing speed, power, area, and computational precision. The major contributions of this research are in the area of continuous-time and discrete-time CMOS circuit design, with applications in moderate-precision analogue and asynchronous computation accounting for parameter variability. Various analogue and digital circuit realisations, operating in the continuous-time and discrete-time domains, are analysed in theory and verified using combined Matlab-Hspice simulations, providing a versatile framework for custom analyses, verification, and optimisation of the designed systems. Novel solutions that reduce the impact of parameter variability on circuit operation are presented and applied in the design of arithmetic circuits for matrix-vector operations and in data-driven asynchronous processor arrays for binary image processing.
Several mismatch optimisation techniques are demonstrated, based on a switched-current approach in the design of a current-mode Gilbert multiplier circuit, a novel biasing scheme in the design of tunable delay gates, and an averaging technique applied to analogue continuous-time circuit realisations of Bayesian networks. The most promising circuit solutions were implemented on the PPATC test chip, fabricated in a standard 90 nm CMOS process, and verified experimentally.
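The sum-product computation targeted by the analogue hardware above can be prototyped numerically before committing to circuits. The following Python sketch uses made-up factor values (not taken from the thesis) to pass a single message on a minimal two-variable factor graph and checks the resulting marginal against brute-force enumeration.

```python
import numpy as np

# Minimal sum-product example on a two-variable factor graph: unary
# factors f1(x1), f2(x2) and a pairwise factor g(x1, x2). All factor
# values are illustrative, not from the thesis.

f1 = np.array([0.6, 0.4])            # unary factor on x1
f2 = np.array([0.3, 0.7])            # unary factor on x2
g = np.array([[0.9, 0.1],            # g[x1, x2] favours x1 == x2
              [0.1, 0.9]])

# Message from factor g to variable x1: sum over x2 of g(x1, x2) * f2(x2)
m_g_to_x1 = g @ f2

# Marginal of x1: product of all incoming messages, normalised
marg_x1 = f1 * m_g_to_x1
marg_x1 /= marg_x1.sum()

# Brute-force check against the full joint distribution
joint = f1[:, None] * g * f2[None, :]
brute = joint.sum(axis=1) / joint.sum()
```

On a tree-structured graph such as this, the message-passing marginal matches exact enumeration, which is what the assertion-style check exploits.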
|
12 |
[en] PERFORMANCE ANALYSIS OF TURBO CODES / [pt] ANÁLISE DE DESEMPENHO DE CÓDIGOS TURBO. Amanda Cunha Silva. 08 January 2007
[en] Turbo codes are an efficient error-correcting technique that has been proposed for many communications standards. This technique achieves a performance that approaches the theoretical limits established by Information Theory. The excellent performance of turbo codes relies on two aspects: a coding structure composed of concatenated encoders and an iterative decoding procedure. In the literature, two approaches to turbo decoding are presented: one based on the encoder structure and another built around factor graph theory. Both approaches are discussed in this work. Performance evaluation for these codes is obtained through simulations, considering aspects such as encoder structure, modulation scheme, and decoding algorithm. Codes derived from turbo codes by puncturing and shortening are also studied in this work.
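The concatenated coding structure described in the abstract can be sketched in a few lines. The toy encoder below is a rate-1/3 parallel concatenation of two recursive systematic convolutional (RSC) encoders with generators (7,5) octal and a fixed illustrative interleaver; it is a minimal sketch of the turbo encoding principle, not the specific configuration evaluated in the thesis.

```python
# Toy rate-1/3 turbo encoder: two recursive systematic convolutional
# (RSC) encoders joined by an interleaver. The message bits and the
# permutation below are illustrative examples.

def rsc_parity(bits, state=(0, 0)):
    """Parity stream of an RSC encoder: feedback 1+D+D^2, forward 1+D^2."""
    s1, s2 = state                    # s1 = a_{k-1}, s2 = a_{k-2}
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2               # feedback bit a_k
        parity.append(a ^ s2)         # forward taps: a_k + a_{k-2}
        s1, s2 = a, s1
    return parity

def turbo_encode(bits, perm):
    """Systematic bits plus one parity stream per constituent encoder."""
    interleaved = [bits[i] for i in perm]
    return bits, rsc_parity(bits), rsc_parity(interleaved)

msg = [1, 0, 1, 1, 0, 0, 1, 0]
perm = [3, 7, 0, 5, 2, 6, 1, 4]       # fixed toy interleaver
sys_bits, p1, p2 = turbo_encode(msg, perm)
```

The interleaver between the two encoders is what decorrelates the parity streams and makes the iterative exchange of extrinsic information between the two constituent decoders effective.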
|
13 |
[en] IRREGULAR REPEAT ACCUMULATE CODES: DESIGN AND EVALUATION / [pt] CÓDIGOS IRA: PROJETO E AVALIAÇÃO. Mauro Quiles de Oliveira Lustosa. 10 January 2018
[en] Irregular Repeat-Accumulate (IRA) codes are motivated by the challenge of providing a class of codes that allow linear-time encoding and decoding while communicating reliably at rates close to channel capacity. They were introduced by Hui Jin, Khandekar, and McEliece in 2000; their article proves that IRA codes achieve capacity on the binary erasure channel and exhibit remarkably good performance on the AWGN (Additive White Gaussian Noise) channel. The theoretical developments supporting IRA codes stem from the efforts at developing capacity-achieving Low-Density Parity-Check (LDPC) codes. LDPC codes were first proposed by Robert Gallager in 1963 and, after lying dormant for a long period, became the subject of intense research in recent decades. Efforts by many researchers have developed their potential for channel coding in applications as diverse as satellite communications, wireless networks, and streaming over IP, as well as in distributed source coding. The goal of this dissertation is the evaluation of IRA codes and the effects of different graph construction methods on their performance. The use of many variations of the Progressive Edge-Growth (PEG) algorithm with IRA codes was tested in simulations on the AWGN channel.
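The linear-time encoding property of IRA codes follows directly from their repeat-permute-accumulate structure, which the following Python sketch illustrates. The repetition degrees, interleaver permutation, and grouping factor are made-up toy values, not a code construction from the dissertation.

```python
# Toy systematic IRA encoder: irregular repetition, interleaving,
# grouping (combiner), and a running-XOR accumulator. All parameters
# below are illustrative.

def ira_encode(info, degrees, perm, a=2):
    # Repeat: information bit i is copied degrees[i] times
    repeated = [b for b, d in zip(info, degrees) for _ in range(d)]
    # Permute the repeated stream (edge interleaver)
    permuted = [repeated[i] for i in perm]
    # Combine groups of `a` bits by XOR, then accumulate: each parity
    # bit is the XOR of the previous parity bit and the next group
    combined = [sum(permuted[j:j + a]) % 2
                for j in range(0, len(permuted), a)]
    parity, acc = [], 0
    for c in combined:
        acc ^= c
        parity.append(acc)
    return info + parity              # systematic codeword

info = [1, 0, 1, 1]
degrees = [2, 3, 2, 3]                # irregular repetition profile
perm = [4, 9, 1, 6, 3, 8, 0, 5, 2, 7]
codeword = ira_encode(info, degrees, perm)
```

Every stage is a single pass over the data, which is the linear-time encoding property the abstract refers to; the irregular degree profile is the parameter that density-evolution-style designs optimise.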
|
14 |
Adaptive Estimation using Gaussian Mixtures. Pfeifer, Tim. 25 October 2023
This thesis offers a probabilistic solution to robust estimation using a novel adaptive estimator.
Reliable state estimation is a mandatory prerequisite for autonomous systems interacting with the real world.
The presence of outliers challenges the Gaussian assumption of numerous estimation algorithms, resulting in a potentially skewed estimate that compromises reliability.
Many approaches attempt to mitigate erroneous measurements by using a robust loss function – which often comes with a trade-off between robustness and numerical stability.
The proposed approach is purely probabilistic and enables adaptive large-scale estimation with non-Gaussian error models.
The introduced Adaptive Mixture algorithm combines a nonlinear least squares backend with Gaussian mixtures as the measurement error model.
Factor graphs as graphical representations allow an efficient and flexible application to real-world problems, such as simultaneous localization and mapping or satellite navigation.
The proposed algorithms are constructed using an approximate expectation-maximization approach, which justifies their design probabilistically.
This expectation-maximization is further generalized to enable adaptive estimation with arbitrary probabilistic models.
Evaluating the proposed Adaptive Mixture algorithm in simulated and real-world scenarios demonstrates its versatility and robustness.
A synthetic range-based localization experiment shows that the estimator remains reliable even under extreme outlier ratios.
Real-world satellite navigation experiments prove its robustness in harsh urban environments.
The evaluation on indoor simultaneous localization and mapping datasets extends these results to typical robotic use cases.
The proposed adaptive estimator provides robust and reliable estimation under various instances of non-Gaussian measurement errors.
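The role of a Gaussian mixture as a robust measurement error model can be illustrated with the expectation step of EM: each residual is assigned a posterior responsibility under a narrow inlier component and a wide outlier component. The weights and standard deviations below are illustrative placeholders, not the tuned models from the thesis.

```python
import numpy as np

# E-step for a two-component, zero-mean Gaussian mixture error model:
# a narrow inlier and a wide outlier component. Parameters are
# illustrative only.

def gm_responsibilities(residuals, w=(0.9, 0.1), sigma=(1.0, 10.0)):
    """Posterior probability of each mixture component per residual."""
    r = np.asarray(residuals, dtype=float)
    dens = np.stack([wk * np.exp(-0.5 * (r / sk) ** 2)
                     / (np.sqrt(2.0 * np.pi) * sk)
                     for wk, sk in zip(w, sigma)])
    return dens / dens.sum(axis=0)    # normalise per residual

resp = gm_responsibilities([0.1, 0.5, 8.0])
```

In an iteratively reweighted least-squares backend, the inlier responsibilities `resp[0]` would down-weight suspect measurements smoothly, avoiding the hard inlier/outlier threshold that a fixed robust loss function implies.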
|