In this thesis, we propose several cross-layer operation aided schemes conceived for wireless networks. Cross-layer design may overcome the disadvantages of the network's layered architecture, which is most typically represented by the Transmission Control Protocol (TCP) / Internet Protocol (IP) suite. We invoke Fountain codes, which are also often referred to as rateless codes, for protecting file transfer at the application layer, since they are well suited to erasure channels. In Fountain code aided file transfer, the file is first partitioned into a number of blocks, each of which contains K packets. The Fountain encoder randomly selects several packets of a block and combines them using exclusive-OR additions in order to generate an encoded packet. The encoding continues until every block has been successfully received. In an 802.11 Wireless Local Area Network (WLAN) scenario, the packet size has to be appropriately chosen, since there is a trade-off between the packet size and the transmission efficiency, which is defined as the ratio of the number of primary information bits to the total number of transmitted bits, including headers, control packets and retransmitted replicas. In order to find the optimum packet size, the transmission efficiency is formulated as a function of the Packet Loss Ratio (PLR) at the application layer and of the total load imposed by a single packet. The PLR at the application layer depends on the packet size, on the retransmission mechanism of the 802.11 Medium Access Control (MAC) protocol and on the modulation scheme adopted by the physical layer. Apart from its source data, the total load imposed by an information packet also includes the control packets of the 802.11 MAC protocol, such as the Request To Send (RTS) / Clear To Send (CTS) messages, the retransmitted replicas and the Acknowledgement (ACK) messages. Based on these relations, the transmission efficiency may finally be expressed as a function of the packet size, and the optimum packet size may be determined by the numerical analysis of this function. Our simulation results confirmed that the highest transmission efficiency is indeed achieved when the optimum packet size is used.

Since turbo codes are capable of achieving a near-capacity performance, they may be successfully combined with Hybrid Automatic Repeat reQuest (HARQ) schemes. In this thesis, the classic Twin Component Turbo Codes (TCTCs) are extended to Multiple Component Turbo Codes (MCTCs). In order to apply classic two-dimensional Extrinsic Information Transfer (EXIT) charts for analyzing them, we divided an N-component MCTC into two logical parts, since otherwise an N-component scheme would require an N-dimensional EXIT chart. One of the parts is constituted by an individual Bahl, Cocke, Jelinek and Raviv (BCJR) decoder, while the other, so-called composite decoder consists of the remaining (N-1) components. The EXIT charts visualized the extrinsic information exchange between these two logical parts of MCTCs. Aided by this partitioning technique, we may find the so-called 'open-tunnel SNR threshold' of MCTCs, which is defined as the minimum SNR at which the EXIT chart of the specific coding rate used exhibits an open tunnel. It may be used as a metric for comparing the achievable performance to the Discrete-input Continuous-output Memoryless Channel's (DCMC) capacity.
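As a concrete illustration of the application-layer Fountain encoding summarised above, the short Python sketch below XOR-combines a randomly selected subset of a block's K source packets into one encoded packet. The uniform degree choice, the packet sizes and all names used here are illustrative assumptions rather than the specific encoder settings of the thesis.

```python
import random

def fountain_encode_packet(block: list[bytes], seed: int) -> tuple[list[int], bytes]:
    """Produce one encoded packet from a block of K equal-length source packets by
    XOR-combining a randomly selected subset of them. The uniform degree choice is
    purely illustrative; practical Fountain (e.g. LT) codes draw the degree from a
    carefully designed distribution such as the robust soliton."""
    rng = random.Random(seed)               # the seed lets the receiver reproduce the selection
    degree = rng.randint(1, len(block))     # how many source packets to combine
    chosen = rng.sample(range(len(block)), degree)
    encoded = bytearray(len(block[0]))
    for index in chosen:
        for i, byte in enumerate(block[index]):
            encoded[i] ^= byte              # exclusive-OR addition of the selected packets
    return chosen, bytes(encoded)

# Example: a block of K = 4 source packets of 8 bytes each (arbitrary toy data).
block = [bytes([k] * 8) for k in range(4)]
for seed in range(3):                       # the encoder can emit encoded packets indefinitely
    chosen, packet = fountain_encode_packet(block, seed)
    print(seed, chosen, packet.hex())
```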
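The packet-size trade-off discussed above may also be illustrated by a toy numerical model of the kind sketched below: the efficiency first grows with the payload size, because the fixed header and control overhead is amortised, and then falls again as the per-attempt error probability rises. The independent-bit-error PLR model and all parameter values are assumptions made purely for illustration; they do not reproduce the 802.11 analysis carried out in the thesis.

```python
# Purely illustrative parameters; they do not reproduce the thesis' 802.11 analysis.
HEADER_BITS = 400          # per-packet MAC/PHY header overhead (assumed)
CONTROL_BITS = 300         # RTS + CTS + ACK overhead per exchange (assumed)
BIT_ERROR_RATE = 1e-5      # post-demodulation bit error probability (assumed)
MAX_RETRANSMISSIONS = 4    # 802.11-style retry limit (assumed)

def attempt_failure_probability(payload_bits: int) -> float:
    """Probability that one transmission attempt is erroneous, assuming independent bit errors."""
    return 1.0 - (1.0 - BIT_ERROR_RATE) ** (payload_bits + HEADER_BITS)

def packet_loss_ratio(payload_bits: int) -> float:
    """Application-layer PLR: all (MAX_RETRANSMISSIONS + 1) attempts fail."""
    return attempt_failure_probability(payload_bits) ** (MAX_RETRANSMISSIONS + 1)

def transmission_efficiency(payload_bits: int) -> float:
    """Delivered information bits divided by all transmitted bits, counting the
    header, the control packets and the expected number of retransmitted replicas."""
    p_fail = attempt_failure_probability(payload_bits)
    expected_attempts = sum(p_fail ** i for i in range(MAX_RETRANSMISSIONS + 1))
    total_load = expected_attempts * (payload_bits + HEADER_BITS + CONTROL_BITS)
    return payload_bits * (1.0 - packet_loss_ratio(payload_bits)) / total_load

best = max(range(100, 20001, 100), key=transmission_efficiency)
print(f"optimum payload in this toy model: {best} bits, "
      f"efficiency {transmission_efficiency(best):.3f}, PLR {packet_loss_ratio(best):.1e}")
```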
Our simulation results showed that the achievable performance of MCTCs is closer to the DCMC capacity than that of non-systematic TCTCs, but somewhat further from it than that of systematic TCTCs, when generator polynomials having an arbitrary memory length, and hence an arbitrary complexity, are considered. However, for the lowest-memory octally represented polynomial (2,3)o, which implies the lowest possible complexity, MCTCs outperform both non-systematic and systematic TCTCs. Furthermore, MCTC aided HARQ schemes using the polynomial (2,3)o exhibit significantly better PLR and throughput performance than systematic as well as non-systematic TCTC aided HARQ schemes using the same polynomial. If systematic TCTC aided HARQ schemes relying on the polynomial (17,15)o are used as benchmarkers, MCTC aided HARQ schemes may significantly reduce the complexity without a substantial degradation of the PLR and throughput.

When combining turbo codes with HARQ, the associated complexity becomes a critical issue, since iterative decoding is immediately activated after each transmission. In order to reduce this complexity, an Early Stopping (ES) strategy was proposed in this thesis to replace the fixed number of BCJR operations invoked during each iterative decoding. By observing the EXIT charts of turbo codes, we note that the extrinsic Mutual Information (MI) increases along the decoding trajectory, regardless of whether the tunnel is open or closed. The ES aided MCTC HARQ scheme curtails the iterative decoding when the MI increase becomes smaller than a given threshold. This threshold was determined by off-line training in order to strike a trade-off between the throughput and the complexity. Our simulation results verified that the complexity of MCTC aided HARQ schemes may be reduced by as much as 80%, compared to that of systematic TCTC aided HARQ schemes using a fixed number of 10 BCJR operations.

Moreover, the complexity of turbo coded HARQ schemes may be further reduced by our Look-Up Table (LUT) based Deferred Iteration (DI) method. The DI method delays the iterative decoding until the receiver estimates that it has received sufficient information for successful decoding, which manifests itself as the emergence of an open tunnel in the EXIT chart corresponding to all the received replicas. Therefore, the specific MI at which a 'just' open tunnel emerges, when combined with all the previous (i-1) MIs, constitutes the threshold that has to be satisfied by the ith reception. More specifically, if the MI gleaned during the ith reception is higher than this threshold, the EXIT tunnel is deemed to be open and hence the iterative decoding is triggered. Otherwise, the tunnel is deemed to be closed and the iterative decoding is disabled, which reduces the complexity. The LUT stores all the possible MI thresholds of N-component MCTCs, which results in a large storage requirement, if N becomes high. Hence, an efficient LUT design was also proposed in this thesis. Our simulation results demonstrated that the achievable complexity reduction may be as high as 50%, compared to the schemes operating without the DI method.
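A minimal sketch of the Early Stopping rule described above is given below: the component decoders are activated in turn and decoding is curtailed once the MI gain per BCJR operation falls below a threshold. The round-robin activation order, the toy EXIT transfer function and the threshold value are illustrative assumptions rather than the exact procedure and trained threshold of the thesis.

```python
from typing import Callable, List, Tuple

def early_stopping_decode(component_decoders: List[Callable[[float], float]],
                          mi_threshold: float = 0.01,
                          max_bcjr_operations: int = 10) -> Tuple[float, int]:
    """Activate the component BCJR decoders in round-robin fashion and stop as soon
    as the gain in extrinsic mutual information (MI) per BCJR operation falls below
    `mi_threshold`, which is assumed here to stem from off-line training."""
    mi = 0.0            # a priori MI fed to the first component decoder
    operations = 0
    for op in range(max_bcjr_operations):
        decoder = component_decoders[op % len(component_decoders)]
        new_mi = decoder(mi)      # one BCJR operation: a priori MI -> extrinsic MI
        operations += 1
        if new_mi - mi < mi_threshold:
            break                 # the trajectory has (nearly) stalled: stop early
        mi = new_mi
    return mi, operations

# Stand-in for a component decoder's EXIT transfer function; a real receiver would
# instead estimate the extrinsic MI from the BCJR decoder's output LLRs.
def toy_exit_function(a_priori_mi: float) -> float:
    return min(1.0, 0.3 + 0.75 * a_priori_mi)

mi, ops = early_stopping_decode([toy_exit_function] * 3)
print(f"stopped after {ops} BCJR operations at MI = {mi:.2f}")
```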
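The decision taken by the DI method may likewise be sketched as a simple threshold lookup, as shown below: the MIs of the previous receptions index the LUT, which returns the minimum MI the new reception must contribute for the tunnel to be deemed open. The quantization step, the LUT contents and the helper names are made-up placeholders; the efficient LUT design proposed in the thesis is not reproduced here.

```python
from typing import Dict, Tuple

def quantize(mi: float, step: float = 0.05) -> float:
    """Quantize an MI value so that it can index the LUT."""
    return round(round(mi / step) * step, 2)

def trigger_iterative_decoding(prev_mis: Tuple[float, ...], new_mi: float,
                               lut: Dict[Tuple[float, ...], float]) -> bool:
    """Return True if iterative decoding should be activated, i.e. if the MI of the
    latest reception exceeds the stored 'just-open-tunnel' threshold associated with
    the (quantized) MIs of the previous receptions; otherwise decoding is deferred."""
    key = tuple(quantize(mi) for mi in prev_mis)
    threshold = lut.get(key, 1.0)   # unknown entry: be conservative and defer
    return new_mi >= threshold

# Toy LUT entry: after two receptions with MIs of about 0.30 and 0.45, a third
# reception needs an MI of at least 0.55 to open the tunnel (made-up value).
toy_lut = {(0.30, 0.45): 0.55}
print(trigger_iterative_decoding((0.29, 0.46), 0.60, toy_lut))   # True: decode now
print(trigger_iterative_decoding((0.29, 0.46), 0.40, toy_lut))   # False: defer decoding
```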
Identifier | oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:561493
Date | January 2010 |
Creators | Chen, Hong |
Contributors | Hanzo, Lajos ; Maunder, Robert |
Publisher | University of Southampton |
Source Sets | Ethos UK |
Detected Language | English |
Type | Electronic Thesis or Dissertation |
Source | https://eprints.soton.ac.uk/300510/ |