501

Sparse kernel feature extraction

Dhanjal, Charanpal January 2008 (has links)
The presence of irrelevant features in training data is a significant obstacle for many machine learning tasks, since it can decrease accuracy, make the learned model harder to understand, and increase computational and memory requirements. One approach to this problem is to extract appropriate features. General approaches such as Principal Components Analysis (PCA) are successful for a variety of applications; however, they can be improved upon by targeting feature extraction towards more specific problems. More recent work is more focused and considers sparser formulations, which potentially have improved generalisation. However, sparsity is not always efficiently implemented and frequently requires complex optimisation routines. Furthermore, one often does not have direct control over the sparsity of the solution. In this thesis, we address some of these problems, first by proposing a general framework for feature extraction which possesses a number of useful properties. The framework is based on Partial Least Squares (PLS), and one can choose a user-defined criterion to compute projection directions. It draws together a number of existing results and provides additional insights into several popular feature extraction methods. More specific feature extraction is considered for three objectives: matrix approximation, supervised feature extraction and learning the semantics of two-view data. Computational and memory efficiency is prioritised, as well as direct control of sparsity and simple implementations. For the matrix approximation case, an analysis of different orthogonalisation methods is presented in terms of the optimal choice of projection direction. The analysis results in a new derivation for Kernel Feature Analysis (KFA) and the formation of two novel matrix approximation methods based on PLS. In the supervised case, we apply the general feature extraction framework to derive two new methods based on maximising covariance and alignment respectively. Finally, we outline a novel sparse variant of Kernel Canonical Correlation Analysis (KCCA) which approximates a cardinality-constrained optimisation. This method, as well as a variant which performs feature selection in one view, is applied to an enzyme function prediction case study.
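The PLS-style deflation at the heart of the framework can be illustrated in a few lines. The following is a minimal linear sketch assuming a covariance criterion for the projection directions; the thesis works in kernel-defined feature spaces with user-defined criteria, so the names and structure here are illustrative, not the thesis code:

```python
import numpy as np

def pls_feature_extraction(X, y, k):
    """Extract k features by PLS-style deflation: choose the direction that
    maximises covariance with y, project onto it, then deflate X and repeat."""
    X = X - X.mean(axis=0)          # centre data and labels
    y = y - y.mean()
    directions, scores = [], []
    for _ in range(k):
        u = X.T @ y                 # covariance criterion (any criterion can be swapped in)
        u /= np.linalg.norm(u)
        t = X @ u                   # score vector: the extracted feature
        directions.append(u)
        scores.append(t)
        X = X - np.outer(t, t @ X) / (t @ t)   # rank-one deflation
    return np.column_stack(scores), np.column_stack(directions)
```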
502

An analytical inspection framework for evaluating the search tactics and user profiles supported by information seeking interfaces

Wilson, Max L. January 2009 (has links)
Searching is something we do every day, in both digital and physical environments. Whether we are searching for books in a library or information on the web, search is becoming increasingly important. For many years, however, the standard for search in software has been to provide a keyword search box that has, over time, been embellished with query suggestions, Boolean operators, and interactive feedback. More recent research has focused on designing search interfaces that better support exploration and learning. Consequently, the aim of this research has been to develop a framework that can reveal to designers how well their search interfaces support different styles of searching behaviour. The primary contribution of this research has been to develop a usability evaluation method, in the form of a lightweight analytical inspection framework, that can assess both search designs and fully implemented systems. The framework, called Sii, provides three types of analyses: 1) an analysis of the amount of support the different features of a design provide; 2) an analysis of the amount of support provided for 32 known search tactics; and 3) an analysis of the amount of support provided for 16 different searcher profiles, such as those who are finding, browsing, exploring, and learning. The design of the framework was validated by six independent judges, and the results were positively correlated with the results of empirical user studies. Further, early investigations showed that Sii can be learned in around one and a half hours and that, when using identical analysis results, different evaluators produce similar design revisions. For search experts building interfaces for their systems, Sii provides a Human-Computer Interaction evaluation method that addresses searcher needs rather than system optimisation. For Human-Computer Interaction experts designing novel interfaces that provide search functions, Sii provides the opportunity to assess designs using the knowledge and theories generated by the Information Seeking community. While the research reported here was conducted in controlled environments, future work is planned to investigate the use of Sii by independent practitioners on their own projects.
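To make the three analysis levels concrete, here is a hypothetical sketch of how feature-, tactic- and profile-level support scores could be aggregated. The actual scoring rules, tactics and profiles are defined in the thesis; every name below is an illustrative placeholder:

```python
# Hypothetical Sii-style aggregation: features support tactics, and
# profiles are scored over the tactics they rely on.
feature_support = {            # which tactics each interface feature supports
    "query_suggestions": {"refine", "broaden"},
    "facet_filters": {"refine", "survey"},
    "result_snippets": {"survey", "extract"},
}
profiles = {                   # which tactics each searcher profile relies on
    "finder": {"refine", "extract"},
    "explorer": {"broaden", "survey"},
}

def tactic_support(tactic):
    """Count how many interface features support a given tactic."""
    return sum(tactic in tactics for tactics in feature_support.values())

def profile_support(profile):
    """Average the support of the tactics a profile relies on."""
    tactics = profiles[profile]
    return sum(tactic_support(t) for t in tactics) / len(tactics)

for p in profiles:
    print(p, profile_support(p))
```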
503

Designing a resource-allocating codebook for patch-based visual object recognition

Ramanan, Amirthalingam January 2010 (has links)
The state-of-the-art approach in visual object recognition is the use of local information extracted at several points or image patches from an image. Local information at specific points can deal with object shape variability and partial occlusions. The underlying idea is that, in different images, the statistical distribution of the patches is different, which can be effectively exploited for recognition. In such a patch-based object recognition system, the key role of a visual codebook is to provide a way to map the low-level features into a fixed-length vector in histogram space to which standard classifiers can be directly applied. The discriminative power of a visual codebook determines the quality of the codebook model, whereas the size of the codebook controls the complexity of the model. Thus, the construction of a codebook plays a central role that affects the model's complexity. The construction of a codebook is an important step which is usually done by cluster analysis. However, clustering is a process that retains regions of high density in a distribution, so the resulting codebook need not have discriminant properties; clustering is also recognised as a computational bottleneck of such systems. This thesis demonstrates a novel approach, which we call the resource-allocating codebook (RAC), to constructing a discriminant codebook in a one-pass design procedure inspired by the resource-allocation network family of algorithms. The RAC approach slightly outperforms more traditional approaches owing to its tendency to spread the cluster centres over a broader range of the feature space, thereby including rare low-level features that density-preserving clustering-based codebooks miss. Our algorithm achieves this performance at drastically reduced computing times, because, apart from an initial scan through a small subset to determine length scales, each data item is processed only once. We illustrate some properties of our method and compare it to a closely related approach, the mean-shift clustering technique. A pruning strategy is employed to tackle outliers when assigning each feature in an image to the closest codeword to create a histogram representation: features whose distance from the closest codeword exceeds an empirical maximum are neglected. A recognition system that learns incrementally from training images, with the output classifier accounting for class-specific discriminant features, is also presented. Furthermore, we address an approach which, instead of clustering, adaptively constructs a codebook by computing Fisher scores between the classes of interest. This thesis also demonstrates a novel sequential hierarchical clustering technique that initially builds a hierarchical tree from a small subset of the data, while the remaining data are processed sequentially and the tree is adapted constructively. Evaluations show that this approach achieves comparable performance at reduced computational cost. Finally, for the classification stage, we demonstrate a new learning architecture for multi-class classification tasks using support vector machines. This technique is faster in testing than directed acyclic graph (DAG) SVMs, while maintaining performance comparable to standard multi-class classification techniques.
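The one-pass construction and the pruning rule translate directly into a short procedure. Below is a minimal sketch assuming Euclidean distances and a fixed allocation radius standing in for the length scale estimated from the initial subset scan; it is illustrative, not the thesis implementation:

```python
import numpy as np

def resource_allocating_codebook(descriptors, radius):
    """One-pass RAC-style codebook construction: if a descriptor lies farther
    than `radius` from every existing codeword, allocate it as a new codeword;
    otherwise absorb it. Each descriptor is processed exactly once."""
    codebook = [descriptors[0]]
    for x in descriptors[1:]:
        dists = np.linalg.norm(np.asarray(codebook) - x, axis=1)
        if dists.min() > radius:
            codebook.append(x)        # rare feature: allocate a new codeword
    return np.asarray(codebook)

def histogram(descriptors, codebook, d_max):
    """Bag-of-codewords histogram with the pruning rule described above:
    features farther than d_max from every codeword are neglected."""
    hist = np.zeros(len(codebook))
    for x in descriptors:
        dists = np.linalg.norm(codebook - x, axis=1)
        j = dists.argmin()
        if dists[j] <= d_max:
            hist[j] += 1
    return hist
```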
504

Near-capacity co-located and distributed MIMO systems

Kong, Lingkun January 2010 (has links)
Space-time transmission based co-located and distributed Multiple-Input Multiple-Output (MIMO) systems are investigated. Generally speaking, there are two types of fundamental gains when using multiple antennas in wireless communications systems: the multiplexing gain and the diversity gain. Spatial multiplexing techniques such as the Vertical Bell Labs LAyered Space-Time (V-BLAST) scheme exploit the associated multiplexing gain in terms of an increased bit rate, whereas spatial diversity techniques such as Space-Time Coding (STC) aim for achieving a diversity gain, which results in a reduced error rate. Firstly, we concentrate our attention on a novel space-time transmission scheme, namely on Generalized Multi-Layer Space-Time Codes (GMLST), which may be viewed as a composite of V-BLAST and STC, hence providing both multiplexing and diversity gains. The basic decoding procedure conceived for our GMLST arrangement is an ordered successive decoding scheme, which combines group interference nulling and interference cancellation. We apply a specifically designed power allocation scheme in order to avoid the overall system performance degradation incurred in the case of equal power allocation. Furthermore, the optimal decoding order is found in order to enhance the system's performance with the aid of the channel state information (CSI) at the receiver. However, our decoding scheme relying on power allocation or on the optimal decoding order does not take full advantage of the attainable receive antenna diversity. In order to make the most of this source of diversity, an iterative multistage Successive Interference Cancellation (SIC) detected GMLST scheme was proposed, which may achieve the full receive diversity after a number of iterations, while imposing only a fraction of the computational complexity of Maximum Likelihood (ML)-style joint detection. Furthermore, for the sake of taking full advantage of the available co-located MIMO channel capacity, we present a low-complexity iteratively detected space-time transmission architecture based on GMLST codes and IRregular Convolutional Codes (IRCCs). The GMLST arrangement is serially concatenated with a Unity-Rate Code (URC) and an IRCC, which are used to facilitate near-capacity operation with the aid of an EXtrinsic Information Transfer (EXIT) chart based design. Reduced-complexity iterative multistage SIC is employed in the GMLST decoder instead of the significantly more complex ML detection. For the sake of approaching the maximum attainable rate, iterative decoding is invoked to achieve decoding convergence by exchanging extrinsic information across the three serially concatenated component decoders. Finally, it is shown that the iteratively detected IRCC-URC-GMLST scheme using SIC strikes an attractive trade-off between the complexity imposed and the effective throughput attained, while achieving a near-capacity performance. The above-mentioned advances were also exploited in the context of near-capacity communications in distributed MIMO systems. Specifically, we proposed an Irregular Cooperative Space-Time Coding (Ir-CSTC) scheme, which combines the benefits of Distributed Turbo Codes (DTC) and serially concatenated schemes. Firstly, a serially concatenated scheme comprising an IRCC, a recursive URC and a STC was designed for the conventional single-relay-aided network, for employment at the source node.
The IRCC is optimized with the aid of EXIT charts for the sake of achieving near-error-free decoding at the relay node at a minimum source transmit power. During the relay's transmit period, another IRCC is amalgamated with a further STC, where the IRCC employed at the relay is further improved with the aid of a joint source-and-relay mode design procedure for the sake of approaching the relay channel's capacity. At the destination node, a novel three-stage iterative decoding scheme is constructed in order to achieve decoding convergence to an infinitesimally low Bit Error Ratio (BER) at channel Signal-to-Noise Ratios (SNRs) close to the relay channel's capacity. As a further contribution, an extended Ir-CSTC scheme is studied in the context of a twin-relay-aided network, where a successive relaying protocol is employed. As a benefit, the factor-two multiplexing loss of the single-relay-aided network, which is imposed by the creation of two-phase cooperation, is recovered by the successive relaying protocol with the aid of an additional relay. This technique is more practical than the creation of a full-duplex system, which is capable of transmitting and receiving at the same time. The generalized joint source-and-relay mode design procedure advocated relies on the proposed procedure of finding the optimal cooperative coding scheme, which performs close to the twin-relay-aided network's capacity. The corresponding simulation results verify that our proposed Ir-CSTC schemes are capable of near-capacity communications in both the single-relay-aided and the twin-relay-aided networks. Having studied diverse noise-limited single-user systems, we finally investigate a multiuser Space Division Multiple Access (SDMA) uplink system designed for an interference-limited scenario, where the multiple access interference (MAI) significantly degrades the overall system performance. For the sake of supporting rank-deficient overloaded systems, a maximum signal-to-interference-plus-noise ratio (MaxSINR) based SIC multiuser detection (MUD) algorithm is proposed for the multiple-antenna aided multi-user SDMA system, which is capable of striking a trade-off between interference suppression and noise enhancement. Furthermore, the multiuser SDMA system is combined with channel codes, which assist us in eliminating the typical error floors of rank-deficient systems. Referring to the Ir-CSTC scheme designed for the single-user scenario, relaying techniques are invoked in our channel-coded SDMA systems, which benefit from extra spatial diversity gains. In contrast to the single-user Ir-CSTC schemes, interference suppression is required at both the base station (BS) and the relaying mobile station (MS). Finally, a more practical scenario is considered, where the MSs have spatially correlated transmit antennas. In contrast to the conventional views, our simulation results suggest that the spatial correlation experienced at the transmitter is potentially beneficial in multiuser SDMA uplink systems, provided that efficient MUDs are invoked.
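The ordered nulling-and-cancellation idea underlying both V-BLAST and the GMLST decoder can be sketched for the simplest per-stream case with zero-forcing nulling. This is a textbook-style sketch under simplifying assumptions; the thesis decoder nulls and cancels whole layer groups and employs the MaxSINR/SIC variants described above:

```python
import numpy as np

def zf_sic_detect(H, y, constellation):
    """Ordered zero-forcing SIC detection: repeatedly null out undetected
    streams, detect the best-conditioned one, and cancel its contribution.

    H: (n_rx, n_tx) channel matrix; y: received vector;
    constellation: 1-D array of candidate symbols (e.g. BPSK: [-1, 1]).
    """
    H = H.astype(complex)
    y = y.astype(complex)
    n_tx = H.shape[1]
    remaining = list(range(n_tx))
    s_hat = np.zeros(n_tx, dtype=complex)
    for _ in range(n_tx):
        G = np.linalg.pinv(H[:, remaining])          # nulling matrix
        # detect the stream with the smallest post-nulling noise gain first
        k = int(np.argmin(np.sum(np.abs(G) ** 2, axis=1)))
        z = G[k] @ y
        idx = remaining[k]
        s_hat[idx] = constellation[np.argmin(np.abs(constellation - z))]
        y = y - H[:, idx] * s_hat[idx]               # cancel the detected stream
        remaining.pop(k)
    return s_hat

# usage: s = zf_sic_detect(H, y, np.array([-1.0, 1.0]))
```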
505

Hardware level countermeasures against differential power analysis

Baddam, Karthik January 2012 (has links)
Hardware implementations of mathematically secure algorithms unintentionally leak side-channel information that can be used to attack the device. Such attacks, known as side-channel attacks, are becoming an increasingly important aspect of designing security systems. In this thesis, power analysis attacks are discussed along with existing countermeasures. In the first part of the thesis, the theory and practice of side-channel attacks is introduced. In particular, it is shown that plain implementations of block ciphers are highly susceptible to power-analysis attacks. Dual-rail precharge (DRP) circuits have already been proposed as an effective countermeasure against power analysis attacks. DRP circuits suffer from an implementation problem: balancing the routing capacitance of differential signals. In this thesis we propose a new countermeasure, path switching, to address the routing problem in DRP circuits, which has very low overheads compared to existing methods. The proposed countermeasure is tested with simulations and experimentally on an FPGA board. Results from these tests show a minimum 75-fold increase in the number of power traces required for a first-order DPA attack. Some of the existing countermeasures to the routing problem in DRP circuits do not consider the coupling capacitance between differential signals. We therefore propose a second method, divided backend duplication, that effectively balances the routing of DRP circuits. This countermeasure is tested with simulations, and the results show a minimum 300-fold increase in the number of power traces required for a first-order DPA attack. Randomisation as a DPA countermeasure is also explored. It is found that randomising the power consumption of the cryptographic device itself has little impact on DPA, whereas randomising the occurrence of the intermediate results on which DPA relies is more effective at mitigating the attack.
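The first-order DPA attack used as the evaluation yardstick above follows a standard recipe: guess a key byte, predict an intermediate value's power contribution, and correlate the prediction against measured traces. A generic correlation-based sketch follows; the leakage model and trace format are assumptions for illustration, not details from the thesis:

```python
import numpy as np

def first_order_cpa(traces, plaintexts, leak_model):
    """First-order correlation DPA sketch: for each key-byte guess, correlate
    a hypothetical leakage against the measured traces; the correct guess
    produces the largest correlation peak.

    traces: (n_traces, n_samples) array of power measurements
    plaintexts: (n_traces,) array holding one plaintext byte per trace
    leak_model: function (plaintext_byte, key_guess) -> predicted leakage,
                e.g. the Hamming weight of an S-box output
    """
    t = traces - traces.mean(axis=0)            # centre every time sample
    peaks = np.zeros(256)
    for k in range(256):
        h = np.array([leak_model(p, k) for p in plaintexts], dtype=float)
        h -= h.mean()
        # Pearson correlation of the hypothesis with every time sample
        corr = (h @ t) / np.sqrt((h @ h) * (t * t).sum(axis=0) + 1e-12)
        peaks[k] = np.abs(corr).max()
    return int(peaks.argmax())                  # most likely key byte

# Hypothetical usage, assuming a 256-entry SBOX table is defined elsewhere:
# hw = lambda p, k: bin(SBOX[p ^ k]).count("1")
# key_byte = first_order_cpa(traces, plaintexts, hw)
```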
506

Comprehensive review of classification algorithms for high dimensional datasets

Syarif, Iwan January 2014 (has links)
Machine learning algorithms have been widely used to solve various kinds of data classification problems. Classification problems, especially for high-dimensional datasets, have attracted many researchers in the search for efficient approaches to address them. However, the classification problem becomes very complicated and computationally expensive when the number of possible combinations of variables is very high. In this research, we evaluate the performance of four basic classifiers (naïve Bayes, k-nearest neighbour, decision tree and rule induction), ensemble classifiers (bagging and boosting) and the Support Vector Machine (SVM). We also investigate two widely used feature selection algorithms: the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). Our experiments show that feature selection algorithms, especially GA and PSO, significantly reduce the number of features needed as well as greatly reducing the computational cost. Furthermore, these algorithms do not severely reduce the classification accuracy, and in some cases they improve it. On average, PSO reduced the attributes of the 9 datasets to 12.78% of the original attributes, whereas GA reduced them only to 30.52%. In terms of classification performance, GA is better than PSO: the datasets reduced by GA show better classification performance than their original versions on 5 of 9 datasets, whereas the datasets reduced by PSO improve on only 3 of 9. The total running time of the four basic classifiers (NB, kNN, DT and RI) on the 9 original datasets is 68,169 seconds, while the total running time of the same classifiers on GA-reduced datasets is 3,799 seconds and on PSO-reduced datasets only 326 seconds (more than 209 times faster). We applied ensemble classifiers such as bagging and boosting as a comparison. Our experiments show that bagging and boosting do not give a significant improvement: the average improvement of bagging over the nine datasets is only 0.85%, while boosting's average improvement is 1.14%. Ensemble classifiers (both bagging and boosting) outperform a single classifier on 6 of 9 datasets. SVM has been shown to perform much better when dealing with high-dimensional datasets and numerical features. Although SVM works reasonably well with default parameter values, its performance can be improved significantly using parameter optimization. Our experiments show that SVM parameter optimization using grid search always finds a near-optimal parameter combination within the given ranges, and it improves the accuracy significantly. Unfortunately, grid search is very slow; it is therefore practical only on low-dimensional datasets with few parameters. SVM parameter optimization using an Evolutionary Algorithm (EA) can be used to overcome this limitation, and EA has proven to be more stable than grid search. Based on average running time, EA is almost 16 times faster than grid search (294 seconds compared to 4,680 seconds). Overall, SVM with parameter optimization outperforms the other algorithms on 5 of 9 datasets. However, SVM does not perform well on datasets with non-numerical attributes.
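As a concrete illustration of the SVM parameter optimization discussed above, the following sketch runs a grid search over C and gamma with scikit-learn. The dataset and parameter ranges are illustrative stand-ins, not those used in the thesis; an EA-based search would replace the exhaustive grid with an evolving population of (C, gamma) candidates:

```python
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)   # illustrative dataset

# Exhaustive grid search over the RBF-SVM hyperparameters with 5-fold CV;
# the C/gamma ranges below are examples only.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": [1e-4, 1e-3, 1e-2, 1e-1]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```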
507

Serialization and asynchronous techniques for reliable network-on-chip communication

Ogg, Simon January 2009 (has links)
The Network-on-Chip (NoC) paradigm has been proposed as a potentially viable on-chip communication infrastructure for multiprocessor SoCs. This thesis investigates the development and validation of efficient links that improve NoC performance, power consumption and reliability. Emphasis is placed on low-level simulation and validation of the NoC links throughout, and gate-level circuits are given to provide practical implementations. The first part of the thesis investigates the use of compression in bit-serial point-to-point links as a means of increasing the available bandwidth of the links in a NoC. A bit-serial link reduces the cost of interconnect by reducing the number of wires, but at the expense of reduced throughput. Compression is used to improve the throughput of the serial link by reducing the amount of data transmitted through unused-significant-bit removal. The compression is performed in real time, and the overhead of the extra circuitry is small. The link is modelled in VHDL and simulated to check functionality and correct operation. The second part of the thesis investigates the development of serial asynchronous links to overcome issues such as the power and interconnect-area overhead of NoC links. Serialization reduces the interconnect cost of a link by reducing the number of wires needed. The combination of asynchronous circuitry and serialization allows a lower wiring area and reduced-power NoC link, in particular for increased link length. The serial asynchronous link is compared to a fully synchronous link of similar characteristics: power, area and throughput are compared between the asynchronous and synchronous solutions. Validation is performed on an FPGA to confirm the correct functionality of the serialized asynchronous link. Unreliability due to soft errors is becoming an issue with technology scaling. The third part of the thesis investigates a novel data coding technique for the asynchronous links developed earlier which offers resilience to soft errors. Resilience is achieved by coding the data using a symbol for each bit and a common reference, so that transient errors on the NoC link wires can be detected by comparing the symbols against the reference to establish both the validity and the value of the data. Practical circuits are shown and simulated, and area and power estimates are given.
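The unused-significant-bit-removal idea can be captured in a few lines: send a short length prefix followed by only the significant low-order bits of each word, so words with many leading zeros occupy fewer cycles on the serial wire. A software sketch of the encode/decode pair follows; the bit widths and framing are illustrative assumptions, whereas the thesis implements this as a real-time hardware circuit:

```python
def compress(word, prefix_bits=6):
    """Encode a 32-bit word as a length prefix plus its significant bits
    (leading zeros are never transmitted). Returns a '0'/'1' string."""
    n = max(word.bit_length(), 1)            # significant bits to send
    return format(n, f"0{prefix_bits}b") + format(word, f"0{n}b")

def decompress(bits, prefix_bits=6):
    """Recover the original word from the prefix and payload."""
    n = int(bits[:prefix_bits], 2)
    return int(bits[prefix_bits:prefix_bits + n], 2)

w = 0x0000003F
s = compress(w)
assert decompress(s) == w
print(len(s), "bits instead of 32")          # 12 bits instead of 32
```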
508

Towards an integrated atom chip

Lewis, Gareth Neil January 2009 (has links)
The field of atom chips is a relatively new area of research which is rapidly becoming of great interest to the scientific community. It started out as a small branch of cold atom physics and has quickly grown into a multidisciplinary subject, encompassing topics from fundamental atomic and quantum theory, optics and laser science, through to the engineering of ultra-sensitive sensors. In this thesis the first steps are taken towards a truly integrated atom chip device for real-world applications. Multiple devices are presented that allow the trapping, cooling, manipulation and counting of atoms. Each device provides a new component required for the integration and miniaturisation of atom chips into a single device capable of being used as a sensor. Initially, a wire trap was created capable of trapping and splitting a cloud of Bose-Einstein condensate (BEC) for use in atom interferometry. Using this chip, a BEC has been successfully created and trapped, and coherent splitting of the cloud has been achieved. Subsequently, the integration and simplification of the initial trapping process was addressed. In all experiments to date, atoms are initially collected from a warm vapour by a magneto-optical trap (MOT). This thesis presents a new approach in which microscopic pyramidal MOTs are integrated into the chip itself, greatly reducing the number of optical components and simplifying the process significantly. Also presented is a method for creating a planar-concave micro-cavity capable of single-atom detection. One such cavity consists of a concave mirror fabricated in silicon and the planar tip of an optical fibre. The performance of the resonators is highly dependent on the surface roughness and shape profile of the concave mirrors; therefore a detailed study of the fabrication technique and its effects on these parameters was undertaken. Using such cavities, single-atom detection has been shown to be possible, and the cavities have also been successfully integrated into an atom wire guide. Finally, a co-sputtered amorphous silicon/titanium (a-Si/Ti) nanocomposite material was created and studied for its use as a novel structural material, potentially suitable for integrated circuit (IC)/Micro-Electro-Mechanical Systems (MEMS) integration. The material's electrical and structural properties were investigated, and initial results suggest that a-Si/Ti has the potential to be a compelling structural material for future IC/MEMS integration. Building all of these devices required a full range of standard microfabrication techniques, as well as some non-standard processes, such as electrochemical deposition, that demanded considerable process development. This thesis thus presents a toolbox of fabrication techniques for creating various components, capable of different tasks, that can be integrated into a single device. Each component has been successfully demonstrated under laboratory conditions, representing a significant step towards a real-world atom chip device.
509

Beyond multi-class: structured learning for machine translation

Ni, Yizhao January 2010 (has links)
In this thesis, we explore and present machine learning (ML) approaches to a particularly challenging research area: machine translation (MT). The study aims at replacing or developing each component in the MT system with an appropriate discriminative model, where the ultimate goal is to create a powerful MT system with cutting-edge ML techniques. The study regards each sub-problem encountered in the MT field as a classification or regression problem. To model the specific mappings in MT tasks, the modern machine learning paradigm known as “structured learning” is pursued. This approach goes beyond classic multiclass pattern classification and explicitly models certain dependencies in the target domain. Different algorithmic variants are then proposed for constructing ML-based MT systems. The first application is a kernel-based MT system that projects both input and output into a very high-dimensional linguistic feature space and makes use of the maximum margin regression (MMR) technique to learn the relations between input and output; it is amongst the first MT systems built with pure ML techniques. The second application is a max-margin structure (MMS) approach to phrase translation probability modelling in an MT system. The architecture of this approach is shown to capture structural aspects of the problem domain, leading to demonstrable performance improvements in machine translation. Finally, the thesis describes the development of a phrase reordering model for machine translation, where we compare different ML methods and identify a particularly efficient paradigm for solving this problem.
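For the phrase reordering component, one simple baseline among the ML methods such a comparison might include is a multiclass mistake-driven classifier over reordering orientations. A plain perceptron sketch follows; the orientation labels and features are placeholder assumptions, not the thesis's models:

```python
import numpy as np

ORIENTATIONS = ["monotone", "swap", "discontinuous"]   # illustrative labels

def perceptron_train(examples, n_features, epochs=10):
    """Multiclass perceptron for phrase-orientation classification.

    examples: list of (feature_vector, orientation_index) pairs, where each
    feature vector encodes a phrase pair and its context.
    """
    W = np.zeros((len(ORIENTATIONS), n_features))
    for _ in range(epochs):
        for x, y in examples:
            y_hat = int(np.argmax(W @ x))
            if y_hat != y:               # mistake-driven update
                W[y] += x
                W[y_hat] -= x
    return W
```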
510

Closed-loop multiple antenna aided wireless communications using limited feedback

Yang, Du January 2010 (has links)
The aim of this thesis is to study the design of closed-loop multiple-antenna aided wireless communications relying on limited feedback. Multiple antennas may be employed at the transmitter, at the receiver, or at both; the receiver periodically feeds back some information about the time-varying wireless channel using a limited number of bits, and the transmitter then pre-processes the signals to be transmitted according to the received feedback information. This closed-loop multiple-antenna aided communication scheme is capable of significantly improving the attainable system performance, in terms of increasing the transmission rate or enhancing the transmission integrity. The goal of our research is the efficient acquisition and exploitation of the Channel State Information at the Transmitter (CSIT), with the aid of different transmit preprocessing algorithms. The transmission schemes investigated in this thesis include the Transmit Matched Filter (TxMF), the Transmit Eigen-Beamformer (TxEBF), the linear Multi-User Transmitter (MUT) and a recently proposed MIMO scheme called Spatial Modulation (SM). The entire process of CSIT acquisition is investigated, including pilot-assisted CSI estimation, CSI quantisation at the receiver, and CSI reconstruction at the transmitter. A number of novel designs are proposed in order to increase the CSI acquisition efficiency. A range of different CSI quantisers are detailed in Chapter 2, and their performance is evaluated throughout Chapters 3 to 6. Moreover, a pilot overhead reduction scheme is proposed in Chapter 3 for pilot-assisted CSI estimation over rapidly fading channels. A pilot symbol assisted rateless code is also proposed in Chapter 3, which exploits the available pilot symbols not only for channel estimation but also for channel decoding. Furthermore, an Extrinsic Information Transfer (EXIT) chart optimised Channel Impulse Response (CIR) quantiser is proposed in Chapter 5, which assists the system in maintaining the lowest possible CSI feedback overhead, while ensuring that an open EXIT tunnel is still attainable for the sake of achieving an infinitesimally low BER. A soft-decoding assisted MIMO CIR recovery scheme is proposed in Chapter 5, which minimises the CIRs' reconstruction error at the transmitter under noise-contaminated feedback. Last but not least, a CSI feedback scheme using channel prediction and predictive vector quantisation is also proposed in Chapter 5 for delayed feedback channels. Given the fed-back CSIT, a number of algorithms are proposed in order to exploit it efficiently. In Chapter 4 a novel Linear Dispersion Code (LDC) aided TxEBF scheme is proposed, which is capable of striking the required trade-off between the maximum attainable diversity gain and the capacity for an arbitrary number of transmit and receive antennas. In the same chapter, an application example is given in the form of a novel TxEBF aided video transmission scheme, where the encoded video source bits are transmitted through different eigen-beams according to their error sensitivity, so as to improve the decoded video quality at the receiver by employing unequal error protection. Moreover, a feedback-aided phase rotation scheme and a feedback-aided power allocation scheme are proposed in Chapter 6, which achieve beneficial transmit diversity and enhance the robustness of an SM aided MIMO system.
By examining the various schemes investigated throughout Chapters 3 to 6, our five-step guidelines conceived for the design of closed-loop MIMO systems using limited feedback are summarised as follows. The first step is to design appropriate transmit preprocessing schemes under the assumption of having perfect CSIT. The second step is to determine the specific type of the required feedback information, whose entropy has to be as low as possible. The third step is to design an efficient quantiser based on the statistical properties of the required feedback information; the distortion metric of the quantiser may be the conventional MSE metric, but the employment of a metric directly related to the data-link performance is preferable. The fourth step is to improve the efficiency and robustness of the quantiser by employing conventional source compression. Finally, the fifth step is the joint optimisation of the data transmission link and the CSI feedback link based on the ultimate target performance metric. This thesis concentrates on a Frequency Division Duplex (FDD) cellular communication scenario using digitised feedback information, where the Mobile Terminals (MTs) estimate the Down-Link (DL) channel, quantise the required information and feed it back to the Base Station (BS) over a bandwidth-limited feedback link, and the BS reconstructs the received CSI feedback information for the sake of improving the throughput or integrity of the DL transmission. Moreover, the wireless channels are assumed to be slow, frequency-flat/narrow-band Rayleigh fading channels, mostly assumed to be spatially independent; scenarios with spatial correlation and Line-Of-Sight (LOS) transmission are also considered. Furthermore, both the achievable capacity and the BER performance are evaluated. Iterative decoding is employed in conjunction with channel coding in order to approach the achievable capacity and improve the BER performance.
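The quantise, feed back, and reconstruct loop at the core of such limited-feedback systems can be illustrated with the simplest classical case, codebook-based transmit beamforming: the receiver picks the codeword best matched to its channel estimate and feeds back only the codeword index. A minimal sketch using a random codebook follows; the random design is an assumption for illustration, whereas the thesis builds its quantisers around the statistics of the feedback information:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_codebook(n_tx, bits):
    """Random unit-norm beamforming codebook with 2**bits entries
    (a common limited-feedback baseline, not a thesis-specific design)."""
    C = rng.standard_normal((2 ** bits, n_tx)) + 1j * rng.standard_normal((2 ** bits, n_tx))
    return C / np.linalg.norm(C, axis=1, keepdims=True)

n_tx, bits = 4, 3
codebook = make_codebook(n_tx, bits)

h = rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)  # DL channel at the MT

# Receiver: pick the codeword maximising beamforming gain, feed back its index.
idx = int(np.argmax(np.abs(codebook.conj() @ h)))

# Transmitter: reconstruct the precoder from the fed-back bits alone.
w = codebook[idx]
print(f"{bits}-bit feedback, gain = {abs(np.vdot(w, h))**2:.2f} "
      f"vs ideal {np.linalg.norm(h)**2:.2f}")
```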
