211 |
System Design Methodology Based on Reconfigurable and Programmable Platforms (Méthodologie de conception système à base de plateformes reconfigurables et programmables). Ghali, Khemaies, 01 March 2005
The work presented in this thesis concerns design-space exploration for SOC architectures aimed at telecommunication applications. The rapid evolution of semiconductors has made it possible to implement complete systems on a single chip. This has been enabled by design methodologies based on the reuse of existing components (IP, Intellectual Property) which, combined together, constitute the system; systems are differentiated by attaching proprietary IPs. Classical approaches based on the Y-chart model and co-design techniques proved insufficient once these IPs, initially delivered in hard, non-modifiable form (hard IP), were offered in parameterizable versions (soft IP) to allow better system sizing. Indeed, the modularity that soft IPs gain through their parameters creates an exploration space that is extremely large and therefore cannot be handled by ad hoc or interactive design techniques. The problem is thus the mathematical optimization of the parameters of all the soft IPs making up the SOC. This multidimensional performance problem is aggravated, for SOCs in embedded systems, by the need to also account for energy consumption and silicon area; it then becomes a multi-objective optimization.
This thesis addresses the problem in several steps. In a first step, exploration techniques for sizing a superscalar processor IP are proposed, taking three criteria into account: performance, energy consumption, and silicon area. Results obtained on the "MiBench" multimedia benchmarks of significant size yield a Pareto-optimal subset from which one or more efficient solutions can be selected for the target applications. The second step extends this framework by coupling multi-objective exploration with a hardware implementation on FPGA circuits, enabling exploration with hardware in the loop. The guiding principle, in contrast with explorations carried out at high abstraction levels (SystemC), is that an exploration is all the more effective as the values fed to the exploration algorithm are close to reality. The other consideration is that simulation-based SOC exploration remains problematic because of prohibitive simulation times, whereas direct execution is always faster and therefore allows broad, realistic explorations. This approach is applied to ESA's LEON v2.0 processor on Xilinx Virtex-II circuits, whose reconfigurability allows new configurations to be loaded during the exploration. Finally, the importance of mixed analog/digital SOCs led us to address the optimization of analog circuits on the same principle, using FPAA (Field Programmable Analog Array) circuits, which allow applications to be designed and implemented on reprogrammable analog hardware. This makes it possible to meet a given functionality by testing and exploring many configurations, implementing them physically in a programmable circuit, and at low cost.
The thesis concludes with the perspectives that these contributions open for SOC design methodologies in SOPC environments.
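To make the Pareto-based selection concrete, the sketch below filters a set of candidate configurations down to its non-dominated subset under the three criteria named above (delay, energy, silicon area). It is a minimal illustration, not the thesis tool: the configuration names and cost numbers are hypothetical.

```python
# Minimal Pareto-front filter for design-space exploration.
# Each candidate SoC configuration is scored on three costs to
# minimize: delay (inverse performance), energy, and silicon area.
# Configurations and values below are hypothetical.

def dominates(a, b):
    """True if cost vector a is at least as good as b everywhere
    and strictly better somewhere (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only the non-dominated (Pareto-optimal) configurations."""
    return [c for c in candidates
            if not any(dominates(o["cost"], c["cost"])
                       for o in candidates if o is not c)]

configs = [  # (delay [ns], energy [mJ], area [mm^2]) -- made-up numbers
    {"name": "2-issue, 8KB cache",  "cost": (12.0, 3.1, 4.0)},
    {"name": "4-issue, 16KB cache", "cost": (8.5,  4.8, 6.5)},
    {"name": "4-issue, 8KB cache",  "cost": (9.0,  4.2, 5.0)},
    {"name": "2-issue, 16KB cache", "cost": (12.5, 3.9, 5.5)},  # dominated
]

for c in pareto_front(configs):
    print(c["name"], c["cost"])
```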
212 |
VLSI Implementation of Key Components in A Mobile Broadband Receiver. Huang, Yulin, January 2009
The digital front-end and the Turbo decoder are two key components of a digital wireless communication system, and this thesis discusses implementation issues for both. The structure of a digital front-end for multi-standard radio supporting wireless standards such as IEEE 802.11n, WiMAX, and 3GPP LTE is investigated, following a top-down design method: an 802.11n digital down-converter is designed from a Matlab model through to a VHDL implementation, and both simulation and FPGA prototyping are carried out. As the other significant part of the thesis, a parallel Turbo decoder is designed and implemented for 3GPP LTE. The supported block size ranges from 40 to 6144 bits and the maximum number of iterations is eight; the decoder uses eight parallel SISO units to reach a throughput of up to 150 Mbit/s.
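As a rough illustration of the down-converter's signal path (NCO mixing to baseband, low-pass filtering, decimation), here is a floating-point behavioural sketch. It is not the thesis's fixed-point VHDL design; the sample rate, carrier frequency, decimation factor, and filter are hypothetical choices.

```python
import numpy as np

# Behavioural sketch of a digital down-converter (DDC):
# NCO mixing to baseband, FIR low-pass filtering, then decimation.
# All parameters are illustrative, not taken from the thesis design.

fs = 80e6          # input sample rate (hypothetical)
fc = 20e6          # carrier to remove (hypothetical)
decim = 4          # decimation factor

n = np.arange(4096)
rx = np.cos(2 * np.pi * fc / fs * n)          # stand-in received signal

nco = np.exp(-2j * np.pi * fc / fs * n)       # numerically controlled oscillator
baseband = rx * nco                           # complex mix down to 0 Hz

taps = np.sinc(np.arange(-32, 33) / decim) * np.hamming(65)
taps /= taps.sum()                            # unity-DC-gain low-pass FIR

filtered = np.convolve(baseband, taps, mode="same")
out = filtered[::decim]                       # decimated baseband output
print(out[:4])
```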
213 |
Code-aided synchronization for digital burst communications. Herzet, Cédric, 21 April 2006
This thesis deals with the synchronization of digital communication systems. Synchronization (from the Greek syn, together, and chronos, time) denotes the task of making two systems run at the same time. In communication systems, synchronizing the transmitter and the receiver requires accurately estimating a number of parameters such as the carrier frequency and phase offsets and the timing epoch.
In the early days of digital communications, synchronizers used to operate in either data-aided (DA) or non-data-aided (NDA) modes. However, with the recent advent of powerful coding techniques, these conventional synchronization modes have been shown to be unable to properly synchronize state-of-the-art receivers.
In this context, we investigate in this thesis a new family of synchronizers referred to as code-aided (CA) synchronizers. The idea behind CA synchronization is to benefit from the structure of the code used to protect the data in order to improve the estimation quality achieved by the synchronizers. In the first part of the thesis, we address the issue of turbo synchronization, i.e., the iterative synchronization of continuous parameters. In particular, we derive several mathematical frameworks enabling a systematic derivation of turbo synchronizers and a deeper understanding of their behavior. In the second part, we focus on the so-called CA hypothesis testing problem; in particular, we derive optimal solutions to this problem and propose efficient implementations of the corresponding algorithms. Finally, in the last part of the thesis, we derive theoretical lower bounds on the performance of turbo synchronizers.
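As a toy illustration of the code-aided idea for a single continuous parameter, the sketch below iterates between soft symbol estimates and a carrier-phase estimate, in the spirit of EM-based turbo synchronization. The decoder is faked by a simple soft demapper, and all signal parameters are hypothetical stand-ins.

```python
import numpy as np

# Toy EM-style code-aided phase estimation for BPSK.
# A real turbo synchronizer would obtain soft symbols from the
# decoder at each iteration; here a soft demapper plays that role.

rng = np.random.default_rng(0)
true_phase = 0.4
bits = rng.integers(0, 2, 200)
s = 2.0 * bits - 1.0                          # BPSK symbols
noise = 0.2 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))
r = s * np.exp(1j * true_phase) + noise       # received samples

phase = 0.0
for _ in range(5):
    # E-step (stand-in for decoder feedback): soft symbol estimates
    # E[s|r] = tanh(y / sigma^2) for BPSK, sigma^2 = 0.04 per dimension
    soft = np.tanh(np.real(r * np.exp(-1j * phase)) / 0.04)
    # M-step: re-estimate the phase using the soft symbols as pilots
    phase = np.angle(np.sum(r * soft))
    print(f"phase estimate: {phase:.3f}")     # approaches true_phase = 0.4
```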
214 |
Joint source-channel turbo techniques and variable length codes. Jaspar, Xavier, 08 April 2008
Efficient multimedia communication over mobile or wireless channels remains a challenging problem. So far, the industry has dealt with that problem mostly through a divide and conquer approach, considering separately the source of data (text, image, video, etc.) and the communication channel (electromagnetic waves across the air, a telephone line, a coaxial cable, etc.). The goal is always the same: to transmit (or store) more data reliably per unit of time, of energy, of physical medium, etc. With today's applications, the divide and conquer approach has, in a sense, started to show its limits.
Let us consider, for example, the digital transmission of an image. At the transmitter, the first main step is data compression, at the source level. The number of bits that are necessary to represent the image with a given level of quality is reduced, usually by removing details in the image that are invisible (or less visible) to the human eye. The second main step is data protection, at the channel level. The transmission is made ideally resistant to deteriorations caused by the channel, by implementing techniques such as time/frequency/space expansions. In a sense, the two steps are quite antagonistic --- we first compress then expand the original signal --- and have different goals --- compression enables to transfer more data per unit of time/energy/medium while protection enables to transfer data reliably. At the receiver, the "reversed" operations are implemented.
This separation into two steps dates back to Shannon's source and channel coding separation theorem of 1948 and has encouraged the division of the research community into two groups, one focusing on data compression, the other on data protection. This separation has also appealed to industry for the design, thereby supported by theory, of layered communication protocols. But the theorem holds only under asymptotic conditions that are rarely satisfied with today's multimedia content and mobile channels. Therefore, it is usually wise in practice to drop this strict separation and to allow at least some cross-layer cooperation between the source and channel layers.
This is what lies behind the words joint source-channel techniques.
As the name suggests, these techniques are optimized jointly, without a strict separation. Intuitively, since the optimization is less constrained from a mathematical standpoint, the solution can only be better or equivalent.
In this thesis, we investigate a promising subset of these techniques, based on the turbo principle and on variable length codes. The potential of this subset was illustrated for the first time in 2000, with an example that, since then, has been successfully improved in several directions. Unfortunately, most decoding algorithms have been so far developed on an ad hoc basis, without a unified view and often without specifying the approximations made. Besides, most code-related conclusions are based on simulations or on extrinsic information analysis. A theoretical framework on the error correcting properties of variable length codes in turbo systems is lacking.
The purpose of this work, in three parts, is to fill in these gaps up to a certain extent. The first part presents the literature in this field and attempts to give a unified overview. The second part proposes a transmission system that generalizes previous systems from the literature, with the simple addition of a repetition code. While most previous systems are designed for bit streams with a high level of residual redundancy, the proposed system has the interesting flexibility to handle easily different levels of redundancy. Its performance is then analyzed for small levels of redundancy, which is a case not tackled extensively in the literature. This analysis leads notably to the discovery of surprising interleaving gains with reversible variable length codes.
The third part develops the mathematical framework that was motivated during the second part but skipped on purpose for the sake of clarity. We first clarify several issues that arise with non-uniform bits and the extrinsic information charts, and propose and discuss two methods to compute these charts. Next, several theoretical results are stated on the robustness of variable length codes concatenated with linear error correcting codes. Notably, an approximate average distance spectrum of the concatenated code is rigorously developed. Together with the union bound, this spectrum provides upper bounds on the symbol and frame/packet error rates. These bounds are then analyzed from an interleaving gain standpoint and it is proved that the variable length code improves the interleaving gain if its spectrum is bounded.
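For orientation, the union-bound step mentioned above takes the standard generic form (the notation here is generic, not the thesis's exact statement): with \bar{A}_d the average distance spectrum of the concatenated code, R the code rate, and E_b/N_0 the AWGN signal-to-noise ratio per bit,

\[ P_e \;\le\; \sum_{d \ge d_{\min}} \bar{A}_d \, Q\!\left(\sqrt{2 d R E_b / N_0}\right), \qquad Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-u^2/2} \, du . \]

The interleaving-gain statements then follow from how \bar{A}_d scales with the interleaver length.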
215 |
Joint Equalization and Decoding via Convex Optimization. Kim, Byung Hak, May 2012
The unifying theme of this dissertation is the development of new solutions for decoding and inference problems based on convex optimization methods. The first part considers the joint detection and decoding problem for low-density parity-check (LDPC) codes on finite-state channels (FSCs). Hard-disk drives (or magnetic recording systems), where the required error rate (after decoding) is too low to be verified by simulation, are the most important applications of this research.
Recently, LDPC codes have attracted a lot of attention in the magnetic storage industry and some hard-disk drives have started using iterative decoding. Despite progress in the area of reduced-complexity detection and decoding algorithms, there has been some resistance to the deployment of turbo-equalization (TE) structures (with iterative detectors/decoders) in magnetic-recording systems because of error floors and the difficulty of accurately predicting performance at very low error rates.
To address this problem for channels with memory, such as FSCs, we propose a new decoding algorithm based on a well-defined convex optimization problem: the linear-programming (LP) formulation of the joint decoding problem for LDPC codes over FSCs. It exhibits two favorable properties: provable convergence and predictable error floors (via pseudo-codeword analysis).
Since general-purpose LP solvers are too complex to make the joint LP decoder feasible for practical purposes, we develop an efficient iterative solver for the joint LP decoder by taking advantage of its dual-domain structure. The main advantage of this approach is that it combines the predictability and superior performance of joint LP decoding with the computational complexity of TE.
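A toy version of the LP decoding idea (for a plain memoryless channel rather than an FSC, and with a general-purpose solver rather than the dissertation's dual-domain iterative one) can be sketched as follows; the parity-check matrix and LLR values are illustrative only.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Toy LP decoder (Feldman-style relaxation) for a tiny parity-check
# code. For each check, every odd-sized subset S of its bits gives
#   sum_{i in S} x_i - sum_{i in N(j)\S} x_i <= |S| - 1.
# The Hamming-style H and the channel LLRs below are illustrative.

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

llr = np.array([-1.2, 0.8, 0.3, -0.5, 1.1, -0.9, 0.4])  # made-up LLRs

A_ub, b_ub = [], []
for row in H:
    nbrs = np.flatnonzero(row)
    for k in range(1, len(nbrs) + 1, 2):          # odd-sized subsets
        for S in itertools.combinations(nbrs, k):
            a = np.zeros(H.shape[1])
            a[nbrs] = -1.0
            a[list(S)] = 1.0
            A_ub.append(a)
            b_ub.append(len(S) - 1)

# Minimizing llr . x favours x_i = 1 exactly where the LLR is negative.
res = linprog(llr, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, 1)] * H.shape[1])
print("LP solution:", np.round(res.x, 3))
```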
The second part of this dissertation considers the matrix completion problem: recovering a data matrix from incomplete or even corrupted entries. Recommender systems are good representatives of this problem, and this research is important for the design of information retrieval systems that require very high scalability. We show that our IMP algorithm reduces in practice the well-known cold-start problem associated with collaborative filtering systems.
216 |
Parallel VLSI Architectures for Multi-Gbps MIMO Communication Systems. January 2011
In wireless communications, the use of multiple antennas at both the transmitter and the receiver is a key technology for enabling high data rate transmission without additional bandwidth or transmit power. Multiple-input multiple-output (MIMO) schemes are widely used in many wireless standards, allowing higher throughput through spatial multiplexing techniques. MIMO soft detection poses significant challenges to MIMO receiver design because the detection complexity increases exponentially with the number of antennas. As next generation wireless systems push for multi-Gbps data rates, there is a great need for high-throughput, low-complexity soft-output MIMO detectors: a brute-force implementation of the optimal MIMO detection algorithm would consume enormous power and is not feasible in current technology.
We propose a reduced-complexity soft-output MIMO detector architecture based on a trellis-search method, converting the MIMO detection problem into a shortest-path problem. We introduce a path-reduction and a path-extension algorithm to reduce the search complexity while still maintaining sufficient soft information values for the detection, and we avoid the missing counter-hypothesis problem by keeping multiple paths during the trellis search. The proposed trellis-search algorithm is data-parallel and very suitable for high-speed VLSI implementation. Compared with conventional tree-search based detectors, the proposed trellis-based detector significantly improves detection throughput and area efficiency, and has great potential for next generation Gbps wireless systems by achieving very high throughput and good error performance.
The soft information generated by the MIMO detector is processed by a channel decoder, e.g. a low-density parity-check (LDPC) decoder or a Turbo decoder, to recover the original information bits. The channel decoder is another very computation-intensive block in a MIMO receiver SoC (system-on-chip). We present high-performance LDPC decoder and Turbo decoder architectures that achieve 1+ Gbps data rates, as well as a configurable decoder architecture that can be dynamically reconfigured to support both LDPC codes and Turbo codes across multiple 3G/4G wireless standards. We present ASIC and FPGA implementation results of the various MIMO detectors, LDPC decoders, and Turbo decoders, and discuss in detail their computational complexity and throughput performance.
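To illustrate the shortest-path view of detection on a toy scale, the sketch below runs a path-reduction (beam) search over a 2x2 QPSK system after QR decomposition. The antenna count, constellation, channel, and number of surviving paths are hypothetical, and a real soft-output detector would also retain counter-hypothesis paths for LLR generation.

```python
import numpy as np

# Toy path-reduction (beam) search for MIMO detection, in the spirit
# of a trellis/shortest-path formulation: after QR decomposition the
# metric accumulates layer by layer, and only the K best partial
# paths survive. The 2x2 QPSK setup and K are illustrative.

rng = np.random.default_rng(1)
QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
K = 2                                          # surviving paths per layer

H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
s = QPSK[rng.integers(0, 4, 2)]                # transmitted symbols
y = H @ s + 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

Q, R = np.linalg.qr(H)                         # R is upper triangular
z = Q.conj().T @ y

paths = [((), 0.0)]                            # (symbols so far, metric)
for layer in (1, 0):                           # detect the bottom row first
    nxt = []
    for syms, m in paths:
        for c in QPSK:
            cand = (c,) + syms                 # symbol vector grows upward
            full = np.zeros(2, complex)
            full[layer:] = cand
            inc = abs(z[layer] - R[layer, layer:] @ full[layer:]) ** 2
            nxt.append((cand, m + inc))
    paths = sorted(nxt, key=lambda p: p[1])[:K]

print("sent:", s, "\nbest path:", np.array(paths[0][0]))
```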
217 |
Review of the Turbomin 100: Pre-study and Proposed Improvements of the Turbomin 100 Instructional Jet Engine (Genomgång av Turbomin 100: Förstudie och föreslagna förbättringar av undervisningsjetmotor Turbomin 100). Lilliesköld, Anders, January 2010
This project thesis was written at the request of Mälardalen University, Västerås. The aeronautical engineering students at Mälardalen University and the pupils of Hässlö upper secondary school all get the opportunity to perform a computation lab with a real turbojet engine during their studies. The University's goal for the lab is to give the students applied experience of the theory taught in the course "Aircraft Engine Technology"; the pupils of Hässlö upper secondary school perform simpler calculations from the values measured on the equipment. The turbojet engine is located at Hässlö airport on the premises of Hässlö upper secondary school. Since its installation in 1989, the engine has lost both thrust and reliability, which makes the students' theoretical computations inaccurate: computations do not match the measured values. Moreover, if the engine became inoperative, the education would suffer.
The purpose of this project thesis is to find an upgrade solution that is suitable both economically and practically. The work was divided into three tasks:
- Estimate the cost of renovating the existing Turbomin 100 turbojet engine, find a suitable upgrade of the presentation of the measuring instruments to better clarify the lab, and weigh the respective disadvantages and benefits.
- Estimate the cost of sourcing new lab equipment matching the Turbomin 100 equipment and motivate why this would improve the lab. This need not be an off-the-shelf purchase; it can also mean developing an in-house solution. Again, weigh the respective disadvantages and benefits.
- Develop a new lab instruction matching one of the alternatives above.
The study results in several solutions.
Proposal 1: The existing Turbomin 100 is of such solid construction that only a few spare parts need to be replaced to restore its original characteristics. Using measurement equipment from Campbell Scientific, consisting of a datalogger and associated software, makes presentation on a computer possible, from which printouts can easily be made. This type of presentation would improve the students' understanding of where and why measurements are made in certain areas of the engine. The estimated price for this solution is 46,060 kr.
Proposal 2: New lab equipment could take two different forms. The first is to invest in two turbojet engines from JetCat with a thrust of 80 N each. Having two engines would secure operations: one engine stays operational while the other serves as a spare when the compulsory service after 50 hours of run time is due. This solution, together with the measurement equipment mentioned above, including pressure and temperature probes, would cost around 104,300 kr. The second is to invest in complete engine and measurement equipment from Turbine Technologies. Their turbojet engine comes in a test cabinet with all probes and instruments installed; a computer can also be connected to obtain readings digitally, making it possible to print or save the measured values. The quoted price for this solution is between 412,700 and 766,700 kr depending on the configuration.
The recommended solution is Proposal 1; the corresponding new lab instructions are given in Attachment H.
218 |
Iterative Timing Recovery for Magnetic Recording Channels with Low Signal-to-Noise Ratio. Nayak, Aravind Ratnakar, 07 July 2004
Digital communication systems invariably employ an underlying analog communication channel. At the transmitter, data is modulated to obtain an analog waveform which is input to the channel. At the receiver, the output of the channel needs to be mapped back into the discrete domain. To this effect, the continuous-time received waveform is sampled at instants chosen by the timing recovery block. Therefore, timing recovery is an essential component of digital communication systems.
A widely used timing recovery method is based on a phase-locked loop (PLL), which updates its timing estimates based on a decision-directed device. Timing recovery performance is a strong function of the reliability of decisions, and hence of the channel signal-to-noise ratio (SNR). Iteratively decodable error-control codes (ECCs) like turbo codes and LDPC codes allow operation at SNRs lower than ever before, which exacerbates the timing recovery problem.
We propose iterative timing recovery, where the timing recovery block, the equalizer and the ECC decoder exchange information, giving the timing recovery block access to decisions that are much more reliable than the instantaneous ones. This provides significant SNR gains at a marginal complexity penalty over a conventional turbo equalizer where the equalizer and the ECC decoder exchange information. We also derive the Cramer-Rao bound, which is a lower bound on the estimation error variance of any timing estimator, and propose timing recovery methods that outperform the conventional PLL and achieve the Cramer-Rao bound in some cases.
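A minimal sketch of the conventional decision-directed baseline (a first-order loop driven by a Mueller-Muller style timing error detector) is given below; the pulse shape, oversampling factor, and loop gain are illustrative choices, and the iterative schemes above replace the hard decisions used here with more reliable decoder feedback.

```python
import numpy as np

# Toy decision-directed timing loop (first-order PLL with a
# Mueller-Muller error detector) on oversampled, noise-free BPSK.
# All parameters here are illustrative, not from the thesis.

rng = np.random.default_rng(2)
sps, gain = 8, 0.05                           # samples per symbol, loop gain
bits = 2.0 * rng.integers(0, 2, 500) - 1.0

t = np.arange(-6 * sps, 6 * sps + 1) / sps
pulse = np.sinc(t) * np.hamming(t.size)       # windowed-sinc Nyquist pulse
up = np.zeros(bits.size * sps)
up[::sps] = bits
wave = np.convolve(up, pulse, mode="same")    # optimal instants: k * sps

tau = 3.0                                     # initial timing error (samples)
prev_x = prev_d = 0.0
for k in range(1, 480):
    x = wave[int(round(k * sps + tau))]       # nearest-sample interpolation
    d = np.sign(x)                            # tentative hard decision
    err = prev_d * x - d * prev_x             # Mueller-Muller error signal
    tau += gain * err                         # first-order loop update
    prev_x, prev_d = x, d
print(f"residual timing offset: {tau:.2f} samples")  # should approach 0
```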
At low SNR, timing recovery suffers from cycle slips, where the receiver drops or adds one or more symbols; consequently, the ECC decoder almost always fails to decode. Iterative timing recovery has the ability to correct cycle slips. To reduce the number of iterations, we propose cycle slip detection and correction methods. With iterative timing recovery, the PLL with cycle slip detection and correction recovers most of the SNR loss of the conventional receiver that separates timing recovery and turbo equalization.
219 |
Multiple-Input Multiple-Output Wireless Systems: Coding, Distributed Detection and Antenna Selection. Bahceci, Israfil, 26 August 2005
This dissertation studies a number of important issues that arise in multiple-input multiple-output wireless systems. First, wireless systems equipped with multiple transmit and multiple receive antennas are considered, where energy-based antenna selection is performed at the receiver. Three situations are considered: (i) selection over an i.i.d. MIMO fading channel, (ii) selection over a spatially correlated fading channel, and (iii) selection for space-time coded OFDM systems. In all cases, explicit upper bounds are derived, and it is shown that the proposed antenna selection achieves the same diversity order as a full-complexity MIMO system.
Next, the joint source-channel coding problem for MIMO antenna systems is studied and a turbo-coded multiple description code for multiple antenna transmission is developed. Simulations indicate that the proposed iterative joint source-channel decoding, which exchanges extrinsic information between the source code and the channel code, achieves better reconstruction quality than single-description codes at the same rate.
The rest of the dissertation deals with wireless networks, where two problems are studied: channel coding for cooperative diversity, and distributed detection in wireless sensor networks. First, a turbo-code based channel code for three-terminal full-duplex wireless relay channels is proposed, where both the source and the relay nodes employ turbo codes, together with an iterative turbo decoding algorithm exploiting the information arriving from both the source and relay nodes. Simulation results show that the proposed scheme can perform very close to the capacity of a wireless relay channel. Next, the parallel and serial binary distributed detection problems in wireless sensor networks are investigated, considering detection strategies based on single-bit and multiple-bit decisions. Expressions for the detection and false alarm rates are derived and used to design the optimal detection rules at all sensor nodes. Finally, an analog approach to distributed detection in wireless sensor networks is proposed, where each sensor node simply amplifies and forwards its sufficient statistic to the fusion center; this requires very simple processing at the local sensors. Numerical examples indicate that the analog approach is superior to the digital approach in many cases.
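The energy-based receive antenna selection studied in the first part can be sketched in a few lines; the dimensions and channel realization below are illustrative only.

```python
import numpy as np

# Toy energy-based receive antenna selection: out of N_r available
# receive antennas, keep the L whose channel rows carry the most
# energy (squared norm). Dimensions and the channel are illustrative.

rng = np.random.default_rng(3)
N_t, N_r, L = 2, 4, 2
H = (rng.standard_normal((N_r, N_t))
     + 1j * rng.standard_normal((N_r, N_t))) / np.sqrt(2)

energy = np.sum(np.abs(H) ** 2, axis=1)       # per-receive-antenna energy
chosen = np.sort(np.argsort(energy)[-L:])     # indices of the L best antennas
H_sel = H[chosen]                             # reduced channel seen afterwards

print("antenna energies:", np.round(energy, 2))
print("selected antennas:", chosen)
```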
220 |
Reduced Complexity Sequential Monte Carlo Algorithms for Blind Receivers. Ozgur, Soner, 10 April 2006
Monte Carlo algorithms can be used to estimate the state of a system given relative observations. In this dissertation, these algorithms are applied to physical layer communications system models to estimate channel state information, to obtain soft information about transmitted symbols or multiple access interference, or to obtain estimates of all of these by joint estimation.
Initially, we develop and analyze a multiple access technique utilizing mutually orthogonal complementary sets (MOCS) of sequences. These codes deliberately introduce inter-chip interference, which is naturally eliminated during processing at the receiver. However, channel impairments can destroy their orthogonality properties and additional processing becomes necessary.
We utilize Monte Carlo algorithms to perform joint channel and symbol estimation for systems utilizing MOCS sequences as spreading codes, and apply Rao-Blackwellization to reduce the required number of particles. However, dense signaling constellations, multiuser environments, and the interchannel interference introduced by the spreading codes all increase the dimensionality of the symbol state space significantly. A full maximum likelihood solution is computationally expensive and generally not practical; yet obtaining the optimum solution is critical, and looking at only a part of the symbol space is generally not a good solution. We have therefore sought algorithms that guarantee that the correct transmitted symbol is considered while only sampling a portion of the full symbol space. The performance of the proposed method is comparable to that of the maximum likelihood (ML) algorithm, but while the computational complexity of ML increases exponentially with the dimensionality of the problem, the complexity of our approach increases only quadratically.
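As a minimal sketch of the underlying sequential Monte Carlo machinery (here tracking only a scalar channel gain with known symbols, whereas the dissertation treats joint channel and symbol estimation with Rao-Blackwellization), consider:

```python
import numpy as np

# Minimal sequential Monte Carlo (particle filter) sketch: tracking a
# random-walk channel gain from noisy observations of known BPSK
# symbols. Model, noise levels, and particle count are illustrative.

rng = np.random.default_rng(4)
T, P = 100, 200                               # time steps, particles
sig_w, sig_v = 0.05, 0.2                      # process / observation noise

h = 1.0 + np.cumsum(sig_w * rng.standard_normal(T))   # true channel gain
s = 2.0 * rng.integers(0, 2, T) - 1.0                 # known BPSK symbols
y = h * s + sig_v * rng.standard_normal(T)            # observations

parts = np.ones(P)                            # particle states
for t in range(T):
    parts = parts + sig_w * rng.standard_normal(P)    # propagate
    w = np.exp(-0.5 * ((y[t] - parts * s[t]) / sig_v) ** 2)
    w /= w.sum()                              # normalized importance weights
    est = w @ parts                           # posterior-mean estimate
    u = (rng.random() + np.arange(P)) / P     # systematic resampling grid
    idx = np.minimum(np.searchsorted(np.cumsum(w), u), P - 1)
    parts = parts[idx]                        # resample to fight degeneracy

print(f"final estimate {est:.3f} vs true gain {h[-1]:.3f}")
```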
Markovian structures such as the one imposed by MOCS spreading sequences can be seen in other physical layer structures as well. We have applied this partitioning approach, with some modification, to blind equalization of frequency-selective fading channels and to multiple-input multiple-output receivers that track channel changes.
Additionally, we develop a method that obtains a metric for quantifying the convergence rate of Monte Carlo algorithms. Our approach yields an eigenvalue based method that is useful in identifying sources of slow convergence and estimation inaccuracy.