
Optimal erasure protection assignment for scalably compressed data over packet-based networks

Thie, Johnson, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW, January 2004
This research is concerned with the reliable delivery of scalable compressed data over lossy communication channels. Recent works proposed several strategies for assigning optimal code redundancies to elements of scalable data, which form a linear structure of dependency, under the assumption that all source elements are encoded onto a common group of network packets. Given large data and small network packets, such schemes require very long channel codes with high computational complexity. In networks with high loss, small packets are more desirable than long packets. The first contribution of this thesis is to propose a strategy for optimally assigning elements of the scalable data to clusters of packets, subject to constraints on packet size and code complexity. Given a packet cluster arrangement, the scheme then assigns optimal code redundancies to the source elements, subject to a constraint on transmission length. Experimental results show that the proposed strategy can outperform the previous code assignment schemes subject to the above-mentioned constraints, particularly at high channel loss rates. Secondly, we modify these schemes to accommodate complex structures of dependency. Source elements are allocated to clusters of packets according to their dependency structure, subject to constraints on packet size and channel codeword length. Given a packet cluster arrangement, the proposed schemes assign optimal code redundancies to the source elements, subject to a constraint on transmission length. Experimental results demonstrate the superiority of the proposed strategies for correctly modelling the dependency structure. The last contribution of this thesis is to propose a scheme for optimizing protection of scalable data where limited retransmission is possible. Previous work assumed that retransmission is not possible. For most real-time or interactive applications, however, retransmission of lost data may be possible up to some limit. In the present work we restrict our attention to streaming sources (e.g., video) where each source element can be transmitted in one or both of two time slots. An optimization algorithm determines the transmission and level of protection for each source element, using information about the success of earlier transmissions. Experimental results confirm the benefit of limited retransmission.
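The redundancy-assignment problem at the heart of this line of work is easy to illustrate. Below is a minimal sketch (a toy construction under assumptions of mine, not the thesis's algorithm): each layer of a scalable stream is striped across N packets with an ideal (N, k) erasure code, packet losses are i.i.d., and redundancy is added greedily wherever it buys the most expected utility per extra transmitted stripe.

```python
from math import ceil, comb

def p_decodable(N, k, p):
    """Probability that at least k of N packets survive i.i.d. loss rate p,
    i.e., that an ideal (N, k) erasure code decodes."""
    return sum(comb(N, r) * (1 - p) ** r * p ** (N - r) for r in range(k, N + 1))

def assign_redundancy(sizes, utilities, N, p, stripe_budget):
    """Greedily strengthen the code (decrement k) for whichever layer buys
    the most expected utility per extra transmitted stripe."""
    ks = [N] * len(sizes)                                # start with no redundancy
    used = sum(ceil(s / k) for s, k in zip(sizes, ks))
    while True:
        best = None
        for i, (s, k) in enumerate(zip(sizes, ks)):
            if k <= 1:
                continue
            cost = ceil(s / (k - 1)) - ceil(s / k)       # extra stripes needed
            if used + cost > stripe_budget:
                continue
            gain = utilities[i] * (p_decodable(N, k - 1, p) - p_decodable(N, k, p))
            score = gain / max(cost, 1e-9)               # free improvements win first
            if best is None or score > best[0]:
                best = (score, i, cost)
        if best is None:
            return ks
        _, i, cost = best
        ks[i] -= 1
        used += cost

# Three layers of a scalable stream: earlier layers are more valuable.
print(assign_redundancy(sizes=[40, 40, 40], utilities=[10.0, 3.0, 1.0],
                        N=20, p=0.2, stripe_budget=9))
```

The thesis's contribution goes further, allocating source elements to clusters of packets and handling general dependency structures and retransmission, all of which this sketch deliberately ignores.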

Low-Density Parity-Check Decoding Algorithms

Pirou, Florent, January 2004
Recently, low-density parity-check (LDPC) codes have attracted much attention because of their excellent error-correcting performance and highly parallelizable decoding scheme. However, the effective VLSI implementation of an LDPC decoder remains a big challenge and is a crucial issue in determining how well the benefits of LDPC codes can be exploited in real applications. In this master's thesis report, following a background on error-control coding, we describe low-density parity-check codes and their decoding algorithms, as well as the requirements and architectures of LDPC decoder implementations.
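For intuition, here is a toy hard-decision bit-flipping decoder, the simplest member of the iterative message-passing family such reports survey; belief propagation has the same check/variable structure but passes soft reliabilities instead of flip counts. The (7,4) Hamming parity-check matrix below is only a small stand-in for a real sparse LDPC matrix.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Iteratively flip the bits involved in the most unsatisfied checks."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x                     # all parity checks satisfied
        # count, for each bit, how many failing checks it participates in
        votes = H.T @ syndrome
        x[votes == votes.max()] ^= 1     # flip the worst offenders
    return x

# (7,4) Hamming parity-check matrix as a toy stand-in for an LDPC matrix
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.zeros(7, dtype=int)
received = codeword.copy()
received[2] ^= 1                         # one bit error
print(bit_flip_decode(H, received))      # recovers the all-zero codeword
```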

Flexible Constraint Length Viterbi Decoders On Large Wire-area Interconnection Topologies

Garga, Ganesh, 07 1900
To achieve the goal of efficient "anytime, anywhere" communication, it is essential to develop mobile devices which can efficiently support multiple wireless communication standards. Also, in order to efficiently accommodate the further evolution of these standards, it should be possible to modify/upgrade the operation of the mobile devices without having to recall previously deployed devices. This is achievable if as much functionality of the mobile device as possible is provided through software. A mobile device which fits this description is called a Software Defined Radio (SDR). Reconfigurable hardware-based solutions are an attractive option for realizing SDRs, as they can potentially provide a favourable combination of the flexibility of a DSP or a GPP and the efficiency of an ASIC. The work presented in this thesis discusses the development of efficient reconfigurable hardware for one of the most energy-intensive functionalities in the mobile device, namely Forward Error Correction (FEC). FEC is required in order to achieve reliable transfer of information at minimal transmit power levels, and is achieved by encoding the information in a process called channel coding. Previous studies have shown that the FEC unit accounts for around 40% of the total energy consumption of the mobile unit. In addition, modern wireless standards also place the requirement of flexibility on the FEC unit. Thus, the FEC unit of the mobile device represents a considerable amount of computing ability that needs to be accommodated into a very small power, area and energy budget. Two channel coding techniques have found widespread use in most modern wireless standards, namely convolutional coding and turbo coding. The Viterbi algorithm is most widely used for decoding convolutionally encoded sequences, and it can be applied iteratively to decode turbo codes. Hence, this thesis specifically focuses on developing architectures for flexible Viterbi decoders. Chapter 2 provides a description of the Viterbi and turbo decoding techniques.

The flexibility requirements placed on the Viterbi decoder by modern standards can be divided into two types: code rate flexibility and constraint length flexibility. The code rate dictates the number of received bits which are handled together as a symbol at the receiver; hence, code rate flexibility needs to be built into the basic computing units which are used to implement the Viterbi algorithm. The constraint length dictates the number of computations required per received symbol as well as the manner of transfer of results between these computations. Hence, assuming that multiple processing units are used to perform the required computations, supporting constraint length flexibility necessitates changes in the interconnection network connecting the computing units. A constraint length K Viterbi decoder needs 2^(K-1) computations to be performed per received symbol. The results of the computations are exchanged among the computing units in order to prepare for the next received symbol. The communication pattern according to which these results are exchanged forms a de Bruijn graph with 2^(K-1) nodes. This implies that providing constraint length flexibility requires being able to realize de Bruijn graphs of various sizes on the interconnection network connecting the processing units. This thesis focuses on providing constraint length flexibility in an efficient manner.

Quite clearly, the topology employed for interconnecting the processing units has a huge effect on the efficiency with which multiple constraint lengths can be supported. This thesis aims to explore the usefulness of interconnection topologies similar to the de Bruijn graph for building constraint length flexible Viterbi decoders. Five different topologies have been considered, discussed below under two headings.

De Bruijn network-based architectures: The interconnection network of chief interest in this thesis is the de Bruijn network itself, as it is identical to the communication pattern of a Viterbi decoder of a given constraint length. The problem of realizing flexible constraint length Viterbi decoders using a de Bruijn network has been approached in two different ways. The first is an embedding-theoretic approach, where the problem of supporting multiple constraint lengths on a de Bruijn network is seen as a problem of embedding smaller de Bruijn graphs on a larger de Bruijn graph. Mathematical manipulations are presented to show that this embedding can generally be accomplished with a maximum dilation given by a function of N, the number of computing nodes in the physical network, while simultaneously avoiding any congestion of the physical links. In this case, however, the mapping of the decoder states onto the processing nodes is assumed fixed. Another scheme is derived based on a variable assignment of decoder states onto computing nodes, which turns out to be more efficient than the embedding-based approach. For this scheme, the maximum number of cycles per stage is found to be limited to 2, irrespective of the maximum constraint length to be supported. In addition, it is also found to be possible to execute multiple smaller decoders in parallel on the physical network for smaller constraint lengths. Consequently, post logic synthesis, this architecture is found to be more area-efficient than the architecture based on the embedding-theoretic approach. It is also a more efficiently scalable architecture.

Alternative architectures: Several interconnection topologies are closely connected to the de Bruijn graph and hence form attractive alternatives for realizing flexible constraint length Viterbi decoders. We consider two more topologies from this class, namely the shuffle-exchange network and the flattened butterfly network. The variable state assignment scheme developed for the de Bruijn network is found to be directly applicable to the shuffle-exchange network; the average number of clock cycles per stage is limited to 4 in this case, again independent of the constraint length to be supported. On the flattened butterfly (which is actually identical to the hypercube), a state scheduling scheme similar to that of bitonic sorting is used. This architecture is found to offer the ideal throughput of one decoded bit every clock cycle, for any constraint length. For comparison with a more general-purpose topology, we consider a flexible constraint length Viterbi decoder architecture based on a 2D mesh, which is a popular choice for general-purpose applications as well as many signal processing applications. The state scheduling scheme used here is also similar to that used for bitonic sorting on a mesh. All the alternative architectures are capable of executing multiple smaller decoders in parallel on the larger interconnection network.

Inferences: Following logic synthesis and power estimation, it is found that the de Bruijn network-based architecture with the variable state assignment scheme yields the lowest area-time product, while the flattened butterfly network-based architecture yields the lowest area-(time)² product. This means that the de Bruijn network-based architecture is the best choice for moderate-throughput applications, while the flattened butterfly network-based architecture is the best choice for high-throughput applications. However, as the flattened butterfly network is less scalable in size than the de Bruijn network, it can be concluded that, among the architectures considered in this thesis, the de Bruijn network-based architecture with the variable state assignment scheme is overall an attractive choice for realizing flexible constraint length Viterbi decoders.
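To make the central structural claim concrete: in a constraint-length-K decoder, each state is the content of a (K-1)-bit shift register, so shifting in a new bit maps state s to 2s mod 2^(K-1) or 2s+1 mod 2^(K-1), which are exactly the edges of a de Bruijn graph. A minimal sketch (illustrative Python, not the thesis's hardware architecture):

```python
def de_bruijn_successors(s, K):
    """Successor states of s in a constraint-length-K Viterbi trellis."""
    n = 1 << (K - 1)
    return ((2 * s) % n, (2 * s + 1) % n)

def acs_step(metrics, branch_metric, K):
    """One add-compare-select stage over all 2^(K-1) states; results travel
    along de Bruijn edges, which is what a flexible interconnect must realize."""
    n = 1 << (K - 1)
    new = [float("inf")] * n
    for s in range(n):
        for bit, t in enumerate(de_bruijn_successors(s, K)):
            cand = metrics[s] + branch_metric(s, bit)
            if cand < new[t]:
                new[t] = cand
    return new

print(de_bruijn_successors(5, 4))   # state 0b101 -> states 0b010 and 0b011
```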

Control over Low-Rate Noisy Channels

Bao, Lei, January 2009
Networked embedded control systems are present almost everywhere. A recent trend is to introduce radio communication in these systems to increase mobility and flexibility. Network nodes, such as the sensors, are often simple devices with limited computing and transmission power and low storage capacity, so an important problem concerns how to optimize the use of resources to provide sustained overall system performance. The approach to this problem taken in the thesis is to analyze and design the communication and control application layers in an integrated manner. We focus in particular on cross-layer design techniques for closed-loop control over non-ideal communication channels, motivated by future control systems with very low-rate and highly quantized sensor communication over noisy links. Several fundamental problems in the design of source-channel coding and optimal control for these systems are discussed.

The thesis consists of three parts. The first and main part is devoted to the joint design of the coding and control for linear plants, whose state feedback is transmitted over a finite-rate noisy channel. The system performance is measured by a finite-horizon linear quadratic cost. We discuss equivalence and separation properties of the system, and conclude that although certainty equivalence does not hold in general, it can still be utilized, under certain conditions, to simplify the overall design by separating the estimation and the control problems. An iterative optimization algorithm for training the encoder-controller pairs, taking channel errors into account in the quantizer design, is proposed. Monte Carlo simulations demonstrate promising improvements in performance compared to traditional approaches.

In the second part of the thesis, we study the rate allocation problem for state feedback control of a linear plant over a noisy channel. Optimizing a time-varying communication rate, subject to a maximum average-rate constraint, can be viewed as a method to overcome the limited bandwidth and energy resources and to achieve better overall performance. The basic idea is to allow the sensor and the controller to communicate with a higher data rate when it is required. One general obstacle of optimal rate allocation is that it often leads to a non-convex and non-linear problem. We deal with this challenge by using high-rate theory and Lagrange duality. It is shown that the proposed method gives a good performance compared to some other rate allocation schemes.

In the third part, encoder-controller design for Gaussian channels is addressed. Optimizing for the Gaussian channel increases the controller complexity substantially because the channel output alphabet is now infinite. We show that an efficient controller can be implemented using Hadamard techniques. Thereafter, we propose a practical controller that makes use of both soft and hard channel outputs.
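The quantizer-training idea in the first part can be illustrated by a channel-optimized quantizer trained Lloyd-style, where the encoder minimizes expected distortion over channel errors rather than plain distortion. The scalar sketch below rests on assumptions of mine (i.i.d. Gaussian training data, a symmetric index-error channel) and is not the thesis's encoder-controller algorithm.

```python
import numpy as np

def train_covq(samples, M, P, iters=30):
    """Channel-optimized scalar quantizer. P[i, j] = P(index j received | i sent)."""
    codebook = np.linspace(samples.min(), samples.max(), M)
    for _ in range(iters):
        # Encoder: pick the index minimizing *expected* distortion over channel errors
        exp_dist = ((samples[:, None, None] - codebook[None, None, :]) ** 2
                    * P[None, :, :]).sum(axis=2)          # shape (N, M)
        idx = exp_dist.argmin(axis=1)
        # Decoder: centroid of everything that can arrive as index j
        w = P[idx, :]                                      # shape (N, M)
        codebook = (w * samples[:, None]).sum(0) / np.maximum(w.sum(0), 1e-12)
    return codebook

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
eps = 0.1                               # index-error probability (assumed)
P = np.full((4, 4), eps / 3)
np.fill_diagonal(P, 1 - eps)
print(train_covq(x, 4, P))              # codepoints pull inward as eps grows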

Key Agreement over Wiretap Models with Non-Causal Side Information

Zibaeenejad, Ali, January 2012
The security of information is an indispensable element of a communication system when transmitted signals are vulnerable to eavesdropping. This issue is a challenging problem in a wireless network, as propagated signals can easily be captured by unauthorized receivers, and so achieving perfectly secure communication over such a wiretap channel is highly desirable. Cryptographic algorithms, on the other hand, usually fail to attain this goal due to the following restrictive assumptions made in their design. First, wiretappers are assumed to have limited computational power and time. Second, each authorized party is assumed to have access to a reasonably large sequence of uniform random bits concealed from wiretappers. To guarantee the security of information, Information Theory (IT) offers the following two approaches based on physical-layer security. First, IT suggests using wiretap (block) codes to securely and reliably transmit messages over a noisy wiretap channel. No confidential common key is usually required for the wiretap codes. The secrecy problem investigates an optimum wiretap code that achieves the secrecy capacity of a given wiretap channel. Second, IT introduces key agreement (block) codes to exchange keys between legitimate parties over a wiretap model. The agreed keys are to be reliable, secure, and (uniformly) random, at least in an asymptotic sense, such that they can finally be employed in symmetric-key cryptography for data transmission. The key agreement problem investigates an optimum key agreement code that obtains the key capacity of a given wiretap model. In this thesis, we study the key agreement problem for two wiretap models: a Discrete Memoryless (DM) model and a Gaussian model. Each model consists of a wiretap channel paralleled with an authenticated public channel. The wiretap channel is from a transmitter, called Alice, to an authorized receiver, called Bob, and to a wiretapper, called Eve. The Probability Transition Function (PTF) of the wiretap channel is controlled by a random sequence of Channel State Information (CSI), which is assumed to be non-causally available at Alice. The capacity of the public channel is C_P₁∈[0,∞) in the forward direction from Alice to Bob and C_P₂∈[0,∞) in the backward direction from Bob to Alice. For each model, the key capacity as a function of the pair (C_P₁, C_P₂) is denoted by C_K(C_P₁, C_P₂). We investigate the forward key capacity of each model, i.e., C_K(C_P₁, 0), in this thesis. We also study key generation over the Gaussian model when Eve's channel is less noisy than Bob's. In the DM model, the wiretap channel is a Discrete Memoryless State-dependent Wiretap Channel (DM-SWC) in which Bob and Eve each may also have access to a sequence of Side Information (SI) dependent on the CSI. We establish a Lower Bound (LB) and an Upper Bound (UB) on the forward key capacity of the DM model. When the model is less noisy in Bob's favor, another UB on the forward key capacity is derived. The achievable key agreement code is asymptotically optimum as C_P₁→ ∞. For any given DM model, there also exists a finite capacity C⁰_P₁, determined by the DM-SWC, such that the forward key capacity is achievable if C_P₁≥ C⁰_P₁. Moreover, key generation saturates at capacity C_P₁= C⁰_P₁, and thus increasing the public channel capacity beyond C⁰_P₁ makes no improvement on the forward key capacity of the DM model.
If the CSI is fully known at Bob in addition to Alice, C⁰_P₁=0, and so the public channel has no contribution in key generation when the public channel is in the forward direction. The achievable key agreement code of the DM model exploits both a random generator and the CSI as resources for key generation at Alice. The randomness property of channel states can be employed for key generation, and so the agreed keys depend on the CSI in general. However, a message is independent of the CSI in a secrecy problem. Hence, we justify that the forward key capacity can exceed both the main channel capacity and the secrecy capacity of the DM-SWC. In the Gaussian model, the wiretap channel is a Gaussian State-dependent Wiretap Channel (G-SWC) with Additive White Gaussian Interference (AWGI) having average power Λ. For simplicity, no side information is assumed at Bob and Eve. Bob's channel and Eve's channel suffer from Additive White Gaussian Noise (AWGN), where the correlation coefficient between noise of Bob's channel and that of Eve's channel is given by ϱ. We prove that the forward key capacity of the Gaussian model is independent of ϱ. Moreover, we establish that the forward key capacity is positive unless Eve's channel is less noisy than Bob's. We also prove that the key capacity of the Gaussian model vanishes if the G-SWC is physically degraded in Eve's favor. However, we justify that obtaining a positive key capacity is feasible even if Eve's channel is less noisy than Bob's according to our achieved LB on the key capacity for case (C_P₁, C_P₂)→ (∞, ∞). Hence, the key capacity of the Gaussian model is a function of ϱ. In this thesis, an LB on the forward key capacity of the Gaussian model is achieved. For a fixed Λ, the achievable key agreement code is optimum for any C_P₁∈[0,∞) in both low Signal-to-Interference Ratio (SIR) and high SIR regimes. We show that the forward key capacity is asymptotically independent of C_P₁ and Λ as the SIR goes to infinity, and thus the public channel and the interference have negligible contributions in key generation in the high SIR regime. On the other hand, the forward key capacity is a function of C_P₁ and Λ in the low SIR regime. Contributions of the interference and the public channel in key generation are significant in the low SIR regime that will be illustrated by simulations. The proposed key agreement code asymptotically achieves the forward key capacity of the Gaussian model for any SIR as C_P₁→ ∞. Hence, C_K(∞,0) is calculated, and it is suggested as a UB on C_K(C_P₁,0). Using simulations, we also compute the minimum required C_P₁ for which the forward key capacity is upper bounded within a given tolerance. The achievable key agreement code is designed based on a generalized version of the Dirty Paper Coding (DPC) in which transmitted signals are correlated with the CSI. The correlation coefficient is to be determined by C_P₁. In contrast to the DM model, the LB on the forward key capacity of a Gaussian model is a strictly increasing function of C_P₁ according to our simulations. This fact is an essential difference between this model and the DM model. For C_P₁=0 and a fixed Λ, the forward key capacity of the Gaussian model exceeds the main channel capacity of the G-SWC in the low SIR regime. By simulations, we show that the interference enhances key generation in the low SIR regime. 
In this regime, we also justify that the positive effect of the interference on the (forward) key capacity is generally more than its positive effect on the secrecy capacity of the G-SWC, while the interference has no influence on the main channel capacity of the G-SWC.
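For reference, the secrecy-capacity benchmark that the abstract repeatedly compares the forward key capacity against is the classical Csiszár-Körner expression for a general wiretap channel with auxiliary variable V:

```latex
C_s = \max_{P_V,\, P_{X|V}} \bigl[\, I(V;Y) - I(V;Z) \,\bigr],
\qquad V \to X \to (Y,Z).
```

The abstract's point is that agreed keys, unlike transmitted messages, may depend on the CSI, which is why the forward key capacity can exceed this quantity as well as the main channel capacity.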

Coding Theorems via Jar Decoding

Meng, Jin, January 2013
In the development of digital communication and information theory, every channel decoding rule has resulted in a revolution at the time it was invented. In information theory, early channel coding theorems were established mainly by maximum likelihood decoding, while the arrival of typical sequence decoding signaled the era of multi-user information theory, in which achievability proofs became simple and intuitive. Practical channel code design, on the other hand, was based on minimum distance decoding at the early stage. The invention of belief propagation decoding with soft input and soft output, leading to the birth of turbo codes and low-density parity-check (LDPC) codes, which are indispensable coding techniques in current communication systems, changed the whole research area so dramatically that people started to use the term "modern coding theory" to refer to research based on this decoding rule. In this thesis, we propose a new decoding rule, dubbed jar decoding, which is expected to bring new insights to both code performance analysis and code design. Given any channel with input alphabet X and output alphabet Y, the jar decoding rule can be expressed simply as follows: upon receiving the channel output y^n ∈ Y^n, the decoder first forms a set (called a jar) of sequences x^n ∈ X^n considered to be close to y^n, and then picks any codeword (if one exists) inside this jar as the decoding output. The way the decoder forms the jar is defined independently of the actual channel code, and in certain cases even of the channel statistics. Under jar decoding, various coding theorems are proved in this thesis. First, focusing on the word error probability, jar decoding is shown to be near optimal through achievabilities proved via jar decoding and converses proved via a closely related proof technique dubbed the outer mirror image of the jar. Combining these achievability and converse theorems then yields a Taylor-type expansion of the optimal channel coding rate with finite block length, and it is demonstrated that jar decoding is optimal up to the second order of this expansion. The flexibility of jar decoding is then illustrated by proving LDPC coding theorems via jar decoding, where the bit error probability is concerned. Finally, we consider a coding scenario called interactive encoding and decoding, and show that jar decoding can also be used to prove coding theorems and guide code design in the scenario of two-way communication.
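As a concrete toy instance (a specialization of mine, not taken from the thesis): over a binary symmetric channel with crossover probability p, a natural jar around y^n is the Hamming ball of radius n(p + δ), and the decoder outputs any codeword that falls inside it.

```python
import numpy as np

def jar_decode(codebook, y, p, delta=0.05):
    """Jar rule for a BSC: the jar is the Hamming ball of radius n(p + delta)
    around y; any codeword inside the jar is a valid decoding output."""
    n = len(y)
    radius = n * (p + delta)
    for cw in codebook:
        if np.count_nonzero(cw != y) <= radius:
            return cw
    return None                          # empty jar: declare an erasure

cb = [np.array(c) for c in ([0, 0, 0, 0, 0], [1, 1, 1, 1, 1])]  # repetition code
y = np.array([1, 0, 1, 1, 1])                                    # noisy observation
print(jar_decode(cb, y, p=0.2))                                   # -> [1 1 1 1 1]
```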

Improving Error Performance in Bandwidth-Limited Baseband Channels

Alfaro Zavala, Juan Wilfredo, January 2012
Channel coding has long been used to improve error performance in communication systems. Typical methods based on added redundancy allow for error detection and correction; this improvement, however, comes at a cost in bandwidth. This thesis focuses on channel coding for the bandwidth-limited channel, where no bandwidth expansion is allowed. We first discuss the idea of coding for the bandwidth-limited channel from the signal-space point of view, where the purpose of coding is to maximize the Euclidean distance between constellation points without increasing the total signal power and under the condition that no extra bits can be added. We then view the problem from another angle and identify the tradeoffs between bandwidth and error performance. This thesis intends to find a simple way of improving error performance on the bandwidth-limited channel without the use of lattice codes or trellis-coded modulation. The proposed system is based on convolutional coding followed by multilevel transmission. It achieved a coding gain of 2 dB in Eb/N0, or equivalently approximately 2.7 dB in SNRnorm, without an increase in bandwidth. This coding gain is better than that obtained by the more sophisticated Gosset (E8) lattice code at the same error rate.
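A structural sketch of "convolutional coding followed by multilevel transmission" appears below; the specific rate-1/2 code and the 2-PAM-to-4-PAM comparison are illustrative assumptions of mine, not the thesis's exact design. The point is that doubling the number of levels absorbs the code's redundant bit, so the symbol rate, and hence the bandwidth, is unchanged.

```python
def conv_encode(bits, g=(0b111, 0b101)):
    """Classic constraint-length-3, rate-1/2 feedforward convolutional encoder."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out.extend(bin(state & gi).count("1") % 2 for gi in g)
    return out

# Gray mapping of coded bit pairs onto 4-PAM levels
GRAY_4PAM = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def transmit(bits):
    coded = conv_encode(bits)
    # Two coded bits per 4-PAM symbol: one information bit per symbol,
    # the same symbol rate (and bandwidth) as uncoded 2-PAM.
    return [GRAY_4PAM[(coded[i], coded[i + 1])] for i in range(0, len(coded), 2)]

print(transmit([1, 0, 1, 1]))   # -> [1, 3, -3, -1]
```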

Bit-interleaved coded modulation for hybrid RF/FSO systems

He, Xiaohui, 05 1900
In this thesis, we propose a novel architecture for hybrid radio frequency (RF)/free-space optics (FSO) wireless systems. Hybrid RF/FSO systems are attractive since the RF and FSO sub-systems are affected differently by weather and fading phenomena. We give a thorough introduction to RF and FSO technology and review the state of the art of hybrid RF/FSO systems. We show that a hybrid system robust to different weather conditions is obtained by joint bit-interleaved coded modulation (BICM) of the bit streams transmitted over the RF and FSO sub-channels. An asymptotic performance analysis reveals that a properly designed convolutional code can exploit the diversity offered by the independent sub-channels. Furthermore, we develop code design and power assignment criteria and provide an efficient code search procedure. The cut-off rate of the proposed hybrid system is also derived and compared to that of hybrid systems with perfect channel state information at the transmitter. Simulation results show that hybrid RF/FSO systems with BICM outperform previously proposed hybrid systems employing a simple repetition code and selection diversity.
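Schematically, the joint-BICM idea is a routing step: one coded bit stream is interleaved and then split across the two links, so the convolutional code sees the RF and FSO channel states as interleaved, independent fading. The sketch below (the interleaver and the even split are assumptions of mine) shows only that step.

```python
import random

def joint_bicm_transmit(coded_bits, seed=7):
    """Interleave one coded bit stream and split it across the two sub-channels."""
    rng = random.Random(seed)
    perm = list(range(len(coded_bits)))
    rng.shuffle(perm)                        # bit interleaver (shared with receiver)
    interleaved = [coded_bits[i] for i in perm]
    half = len(interleaved) // 2
    rf_bits, fso_bits = interleaved[:half], interleaved[half:]
    # rf_bits would be mapped to an RF constellation (e.g., QAM) and
    # fso_bits to an optical intensity modulation (e.g., on-off keying).
    return rf_bits, fso_bits

rf, fso = joint_bicm_transmit([1, 0, 1, 1, 0, 0, 1, 0])
print(rf, fso)
```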

Joint Compression and Digital Watermarking: Information-Theoretic Study and Algorithms Development

Sun, Wei, January 2006
In digital watermarking, a watermark is embedded into a covertext in such a way that the resulting watermarked signal is robust to certain distortion caused by either standard data processing in a friendly environment or malicious attacks in an unfriendly environment. The watermarked signal can then be used for different purposes ranging from copyright protection, data authentication, and fingerprinting to information hiding. In this thesis, digital watermarking is investigated from both an information-theoretic viewpoint and a numerical-computation viewpoint.

From the information-theoretic viewpoint, we first study a new digital watermarking scenario, in which watermarks and covertexts are generated from a joint memoryless watermark and covertext source. The configuration of this scenario differs from that treated in existing digital watermarking works, where watermarks are assumed independent of covertexts. In the case of public watermarking, where the covertext is not accessible to the watermark decoder, a necessary and sufficient condition is determined under which the watermark can be fully recovered with high probability at the end of watermark decoding after the watermarked signal is disturbed by a fixed memoryless attack channel. Moreover, by using similar techniques, a combined source coding and Gel'fand-Pinsker channel coding theorem is established, and an open problem proposed recently by Cox et al. is solved. Interestingly, from the sufficient and necessary condition we can show that, in light of the correlation between the watermark and covertext, watermarks can still be fully recovered with high probability even if the entropy of the watermark source is strictly above the standard public watermarking capacity.

We then extend the above watermarking scenario to a case of joint compression and watermarking, where the watermark and covertext are correlated, and the watermarked signal has to be further compressed. Given an additional constraint on the compression rate of the watermarked signals, a necessary and sufficient condition is again determined under which the watermark can be fully recovered with high probability at the end of public watermark decoding after the watermarked signal is disturbed by a fixed memoryless attack channel.

The above two joint compression and watermarking models are further investigated under a less stringent environment where the reproduced watermark at the end of decoding is allowed to be within certain distortion of the original watermark. Sufficient conditions are determined in both cases, under which the original watermark can be reproduced with distortion less than a given distortion level after the watermarked signal is disturbed by a fixed memoryless attack channel and the covertext is not available to the watermark decoder.

Watermarking capacities and joint compression and watermarking rate regions are often characterized and/or presented as optimization problems in information-theoretic research. However, this does not mean that they can be calculated easily. In this thesis we first derive closed forms of the watermarking capacities of private Laplacian watermarking systems with the magnitude-error distortion measure under a fixed additive Laplacian attack and a fixed arbitrary additive attack, respectively.
Then, based on the idea of the Blahut-Arimoto algorithm for computing channel capacities and rate-distortion functions, two iterative algorithms are proposed for calculating private watermarking capacities and the compression and watermarking rate regions of joint compression and private watermarking systems with finite alphabets. Finally, iterative algorithms are developed for calculating public watermarking capacities and the compression and watermarking rate regions of joint compression and public watermarking systems with finite alphabets, based on the Blahut-Arimoto algorithm and Shannon's strategy.
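For context, the Blahut-Arimoto iteration that these algorithms build on looks as follows in its plain channel-capacity form; this is a textbook version, not the watermarking variants developed in the thesis.

```python
import numpy as np

def blahut_arimoto(W, iters=500):
    """W[x, y] = P(y | x). Returns (capacity in nats, optimal input law)."""
    m = W.shape[0]
    p = np.full(m, 1.0 / m)
    cap = 0.0
    for _ in range(iters):
        q = p @ W                                        # induced output law
        with np.errstate(divide="ignore", invalid="ignore"):
            d = np.where(W > 0, W * np.log(W / q), 0.0).sum(axis=1)
        c = np.exp(d)                                    # exp of D(W(.|x) || q)
        cap = np.log(p @ c)                              # tight at convergence
        p = p * c / (p @ c)                              # multiplicative update
    return cap, p

W = np.array([[0.9, 0.1], [0.1, 0.9]])                   # BSC(0.1)
cap, p_opt = blahut_arimoto(W)
print(cap / np.log(2))                                    # ~0.531 bits = 1 - H_b(0.1)
```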

Lattice-Based Precoding and Decoding in MIMO Fading Systems

Taherzadeh, Mahmoud, January 2008
In this thesis, different aspects of lattice-based precoding and decoding for the transmission of digital and analog data over MIMO fading channels are investigated.

1) Lattice-based precoding in MIMO broadcast systems: A new viewpoint for adopting lattice reduction in communication over MIMO broadcast channels is introduced. Lattice basis reduction helps us to reduce the average transmitted energy by modifying the region which includes the constellation points. The new viewpoint helps us to generalize the idea of lattice-reduction-aided precoding to the case of unequal-rate transmission, and to obtain analytic results for the asymptotic behavior of the symbol error rate for lattice-reduction-aided precoding and the perturbation technique. Also, the outage probability for both cases of fixed-rate users and fixed sum-rate is analyzed. It is shown that the lattice-reduction-aided method, using the LLL algorithm, achieves the optimum asymptotic slope of symbol error rate (called the precoding diversity).

2) Lattice-based decoding in MIMO multiaccess systems and MIMO point-to-point systems: Diversity order and the diversity-multiplexing tradeoff are two important measures for the performance of communication systems over MIMO fading channels. For MIMO multiaccess systems (with single-antenna transmitters) and MIMO point-to-point systems with the V-BLAST transmission scheme, it is proved that lattice-reduction-aided decoding achieves the maximum receive diversity (which is equal to the number of receive antennas). It is also proved that naive lattice decoding (which discards the out-of-region decoded points) achieves the maximum diversity in V-BLAST systems. On the other hand, the inherent drawbacks of naive lattice decoding for general MIMO fading systems are investigated. It is shown that naive lattice decoding for MIMO systems has considerable deficiencies in terms of the diversity-multiplexing tradeoff: unlike the case of maximum-likelihood decoding, even perfect lattice space-time codes, which have the non-vanishing determinant property, cannot achieve the optimal diversity-multiplexing tradeoff under naive lattice decoding.

3) Lattice-based analog transmission over MIMO fading channels: The problem of finding delay-limited schemes for sending an analog source over MIMO fading channels is investigated in this part. First, the problem of robust joint source-channel coding over an additive white Gaussian noise channel is studied, and a new scheme is proposed which achieves the optimal slope of the signal-to-distortion ratio (SDR) curve, unlike previously known coding schemes. This idea is then extended to MIMO channels to construct lattice-based codes for joint source-channel coding over MIMO channels. Also, analogous to the diversity-multiplexing tradeoff, the asymptotic performance of MIMO joint source-channel coding schemes is characterized, and a concept called the diversity-fidelity tradeoff is introduced in this thesis.
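As a miniature of the first two themes (a toy construction of mine, not the thesis's schemes): for a 2x2 real-valued channel, Lagrange/Gauss reduction plays the role of the LLL algorithm, and naive rounding in the reduced basis already rescues cases where plain zero-forcing rounding fails.

```python
import numpy as np

def gauss_reduce(H):
    """2D Lagrange/Gauss reduction: returns unimodular T with H @ T reduced."""
    B, T = H.astype(float).copy(), np.eye(2)
    while True:
        if np.linalg.norm(B[:, 0]) > np.linalg.norm(B[:, 1]):
            B, T = B[:, ::-1].copy(), T[:, ::-1].copy()
        mu = round(B[:, 0] @ B[:, 1] / (B[:, 0] @ B[:, 0]))
        if mu == 0:
            return T
        B[:, 1] -= mu * B[:, 0]
        T[:, 1] -= mu * T[:, 0]

def lr_aided_zf(H, y):
    T = gauss_reduce(H)
    z = np.round(np.linalg.solve(H @ T, y))   # naive rounding, reduced basis
    return T @ z                               # back to the symbol lattice

H = np.array([[1.0, 0.9], [0.0, 0.1]])        # ill-conditioned channel
x = np.array([3.0, -1.0])                      # integer (PAM-like) symbols
y = H @ x + 0.06 * np.array([1.0, 1.0])        # received with small noise
print(np.round(np.linalg.solve(H, y)))         # plain ZF rounds to (3, 0): wrong
print(lr_aided_zf(H, y))                       # LR-aided: [ 3. -1.], correct
```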
