
Who Spoke What And Where? A Latent Variable Framework For Acoustic Scene Analysis

Sundar, Harshavardhan 26 March 2016 (has links) (PDF)
Speech is by far the most natural form of communication between human beings. It is intuitive, expressive and contains information at several cognitive levels. As humans, we are perceptive to several of these cognitive levels: in addition to the content of what is being spoken, we can gather information about the identity of the speaker, the speaker's gender, emotion, location and language. This makes speech-based human-machine interaction (HMI) both desirable and challenging, for the same set of reasons. For HMI to be natural for humans, it is imperative that a machine understand the information present in speech, at least at the level of speaker identity, language, location in space, and a summary of what is being spoken. Although one can draw parallels between human-human interaction and HMI, the two differ in their purpose. We interact with a machine mostly to get a task done more efficiently than is possible without it; thus, in HMI, controlling the machine in a specific manner is typically the primary goal. In this context, it can be argued that HMI with a limited vocabulary of specific commands suffices for more efficient use of the machine. In this thesis, we address the problem of "Who spoke what and where?", in the sense of a machine extracting the identities of the speakers, their locations in space and the keywords they spoke, thus considering three levels of information: speaker identity (who), location (where) and keywords (what). This could be addressed with the help of multiple sensors such as microphones, video cameras, proximity sensors and motion detectors, combining all these modalities; here, however, we explore the use of microphones alone. In practical scenarios, multiple people often talk at the same time. The goal of this thesis is therefore to detect all the speakers, their keywords, and their locations from mixture signals containing speech from simultaneous speakers. Addressing "Who spoke what and where?" using only microphone signals forms a part of acoustic scene analysis (ASA) of speech-based acoustic events. We divide the problem into two sub-problems: "Who spoke what?" and "Who spoke where?". Each is cast in a generic latent variable (LV) framework to capture information in speech at different levels: we associate an LV with each level and model the relationship between levels using conditional dependencies. The sub-problem "Who spoke what?" is addressed using a single-channel microphone signal, by modeling the mixture signal in terms of the LV mass function of speaker identity, the conditional mass function of the keyword given the speaker identity, and a speaker-specific-keyword model. The LV mass functions are estimated in a maximum likelihood (ML) framework using the expectation-maximization (EM) algorithm, with Student's-t mixture models (tMMs) as the speaker-specific-keyword models. Motivated by HMI in a home environment, we created our own database. On mixture signals containing two speakers uttering keywords simultaneously, the proposed framework achieves an accuracy of 82% in detecting both speakers and their respective keywords. The other sub-problem, "Who spoke where?", is addressed in two stages. In the first stage, the enclosure is discretized into sectors.
The speakers and the sectors in which they are located are detected using an approach similar to that employed for "Who spoke what?", with signals collected from a Uniform Circular Array (UCA). However, in place of the speaker-specific-keyword models, we use tMM-based speaker models trained on clean speech, along with a simple delay-and-sum beamformer (DSB). In the second stage, the speakers are localized within the active sectors using a novel region-constrained localization technique based on time difference of arrival (TDOA). Since the problem is a multi-label classification task, we use the average Hamming score (accuracy) as the performance metric. Although the proposed approach yields an accuracy of 100% in an anechoic setting for detecting both speakers and their corresponding sectors in two-speaker mixture signals, the performance degrades to 67% in a reverberant setting with a 60 dB reverberation time (RT60) of 300 ms. To improve the performance under reverberation, prior knowledge of the locations of the multiple sources is derived using a novel technique based on geometrical insights into TDOA estimation. With this prior knowledge, the accuracy of the proposed approach improves to 91%. It is worth noting that these accuracies are computed on mixture signals containing more than 90% overlap between the competing speakers. The proposed LV framework offers a convenient methodology for representing information at broad levels. In this thesis we have shown its use with three such levels, and it can be extended to more, making it applicable to a generic analysis of acoustic scenes consisting of broad classes of events. Not all levels are dependent on each other, so the LV dependencies can be reduced through independence assumptions, which leads to several smaller sub-problems, as shown above. The LV framework is also attractive for incorporating prior knowledge about the acoustic setting, which is combined with the evidence from the data to infer the presence of an acoustic event. The performance of the framework depends on the choice of the stochastic models used for the likelihood of the data given the acoustic events; at the same time, it provides a means to compare and contrast different stochastic models for representing this likelihood.
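To make the latent-variable decomposition concrete, the sketch below estimates the speaker mass function P(speaker) and the conditional keyword mass function P(keyword | speaker) from mixture frames via EM, using fixed Gaussian likelihoods as stand-ins for the trained Student's-t mixture speaker-specific-keyword models. All dimensions, model parameters and data are hypothetical; this illustrates only the E/M update structure, not the thesis's actual models or database.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Hypothetical setup: 2 speakers x 3 keywords; the speaker-specific-keyword models
# are stood in by fixed unit-covariance Gaussians (the thesis uses Student's-t mixtures).
S, K, D = 2, 3, 4
means = rng.normal(size=(S, K, D))
likelihood = lambda X, s, k: multivariate_normal.pdf(X, mean=means[s, k])

# Synthetic "mixture" frames drawn from two active (speaker, keyword) pairs.
X = np.vstack([rng.normal(means[0, 1], 1.0, size=(100, D)),
               rng.normal(means[1, 2], 1.0, size=(100, D))])

P_s = np.full(S, 1.0 / S)               # LV mass function of speaker identity
P_k_given_s = np.full((S, K), 1.0 / K)  # conditional mass function of keyword given speaker

for _ in range(50):                     # EM iterations
    # E-step: responsibility of each (speaker, keyword) pair for each frame
    resp = np.stack([[P_s[s] * P_k_given_s[s, k] * likelihood(X, s, k)
                      for k in range(K)] for s in range(S)])   # shape (S, K, T)
    resp /= resp.sum(axis=(0, 1), keepdims=True)
    # M-step: re-estimate the LV mass functions
    P_s = resp.sum(axis=(1, 2)) / resp.sum()
    P_k_given_s = resp.sum(axis=2) / resp.sum(axis=(1, 2))[:, None]

# Active speakers and their keywords correspond to the (s, k) pairs with large posterior mass.
print(np.round(P_s, 2))
print(np.round(P_k_given_s, 2))
```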

ASIC Implementation of A High Throughput, Low Latency, Memory Optimized FFT Processor

Kala, S 12 1900 (has links) (PDF)
The rapid advancements in semiconductor technology have led to a constant shrinking of transistor sizes as per Moore's Law. Wireless communications is one field which has seen explosive growth, thanks to the cramming of ever more transistors into a single chip. The design of these systems involves trade-offs between performance, area and power. The Fast Fourier Transform (FFT) is an important component in most wireless communication systems, and FFTs are widely used in applications such as OFDM transceivers, spectrum sensing in cognitive radio, image processing and radar signal processing. The FFT is the most compute-intensive and time-consuming operation in most of these applications. It is always a challenge to develop an architecture which gives high throughput while reducing latency without much area overhead. Next-generation wireless systems demand high transmission efficiency, and hence the FFT processor should be capable of computing much faster. Architectures based on smaller radices for computing longer FFTs are inefficient. In this thesis, a fully parallel unrolled FFT architecture based on a novel radix-4 engine is proposed which caters to a wide range of applications. The radix-4 butterfly unit takes all four inputs in parallel and can selectively produce one of the four outputs. The proposed architecture uses Radix-4^3 and Radix-4^4 algorithms for the computation of various FFT sizes. The Radix-4^4 block takes all 256 inputs in parallel and uses select control signals to generate one of the 256 outputs. In existing Cooley-Tukey architectures, the output of each stage has to be reordered before the next stage can start computation, which requires intermediate storage after each stage. In our architecture, each stage can directly generate the reordered outputs, and hence these buffers are reduced. A solution to the output reordering problem in Radix-4^3 and Radix-4^4 FFT architectures is also discussed in this work. Although the hardware complexity in terms of adders and multipliers is increased in our architecture, a significant reduction in the intermediate memory requirement is achieved. FFTs of sizes ranging from 64-point to 64K-point have been implemented in ASIC using UMC 130 nm CMOS technology. The data representation used in this work is fixed-point format, with a word length of 16 bits selected to obtain the maximum signal-to-quantization-noise ratio (SQNR). The architecture is found to be more suitable for computing FFTs of large sizes. For 4096-point and 64K-point FFTs, this design gives comparable throughput with a considerable reduction in area and latency when compared to state-of-the-art implementations. The 64K-point FFT architecture achieves a throughput of 1332 mega-samples per second with an area of 171.78 mm^2 and a total power of 10.7 W at 333 MHz.
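As a point of reference for the radix-4 decomposition, the following is a minimal recursive radix-4 decimation-in-time FFT in NumPy, showing the four-input butterfly and twiddle multiplication. It is purely an algorithmic sketch and does not reflect the unrolled hardware pipeline, the Radix-4^3/Radix-4^4 blocks, or the output-reordering scheme proposed in the thesis.

```python
import numpy as np

def fft_radix4(x):
    """Recursive radix-4 DIT FFT; the length must be a power of 4."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    # FFTs of the four decimated subsequences x[0::4], x[1::4], x[2::4], x[3::4]
    F = [fft_radix4(x[r::4]) for r in range(4)]
    k = np.arange(N // 4)
    W = np.exp(-2j * np.pi * k / N)              # twiddle factors W_N^k
    X = np.empty(N, dtype=complex)
    for q in range(4):                           # radix-4 butterfly per output group
        X[q * N // 4 + k] = sum(((-1j) ** (r * q)) * (W ** r) * F[r] for r in range(4))
    return X

x = np.random.randn(64) + 1j * np.random.randn(64)
assert np.allclose(fft_radix4(x), np.fft.fft(x))  # sanity check against NumPy's FFT
```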

Delay Differentiation By Balancing Weighted Queue Lengths

Chakraborty, Avijit 05 1900 (has links) (PDF)
Scheduling policies adopted for statistical multiplexing should provide delay differentiation between different traffic classes, where each class represents the aggregate traffic of individual applications having the same target queueing-delay requirement. We propose scheduling to optimally balance weighted mean instantaneous queue lengths, and later weighted mean cumulative queue lengths, as an approach to delay differentiation, where the class weights are set inversely proportional to the respective products of target delay and packet arrival rate. In particular, we assume a discrete-time, two-class, single-server queueing model with unit service time per packet, and provide a mathematical framework throughout our work. For i.i.d. Bernoulli packet arrivals, using a step-wise cost-dominance analysis based on instantaneous queue lengths alone, for a class of one-stage cost functions that are not necessarily convex, we find the structure of the total-cost optimal policies for a part of the state space. We then consider two particular one-stage cost functions to obtain two scheduling policies that are total-cost optimal over the whole state space. The policy for the absolute weighted difference cost function minimizes the stationary mean, and the policy for the weighted sum-of-squares cost function minimizes the stationary second-order moment, of the absolute value of the weighted difference of the queue lengths. For the weighted sum-of-squares cost function, the 'i.i.d. Bernoulli arrivals' assumption can be relaxed to either 'i.i.d. arrivals with general batch sizes' or 'Markovian zero-one arrivals' over all of the state space except the linear switching curve. We then show that the average cost, starting from any initial state, exists and is finite for every stationary work-conserving policy for our choices of the one-stage cost function; this holds for an arbitrary number of class queues and for any i.i.d. batch arrival processes with finite appropriate moments. We then use cumulative queue-length information in the one-step cost function of the optimization formulation and obtain an optimal myopic policy with three stages to go, for i.i.d. arrivals with general batch sizes. We show analytically that this policy achieves the given target delay ratio in the long run under a finite-buffer assumption, provided the feasibility conditions are satisfied. We take recourse to numerical value iteration to show the existence of the average cost for this policy. Simulations with varied class weights, for Bernoulli arrivals and for batch arrivals with Poisson batch sizes, show that this policy achieves mean queueing delays closer to the respective target delays than the policy obtained earlier. We also note that the coefficients of variation of the queueing delays of both classes using cumulative queue lengths are of the same order as those using instantaneous queue lengths. Moreover, the short-term behaviour of the optimal myopic policy using cumulative queue lengths is superior to the existing standard policy reported by Coffman and Mitrani by a factor in the range of 3 to 8. Though our policy performs marginally poorer than the value-iterated, sampled, and then stationarily employed policy, the latter lacks any closed-form structure. We then modify the definition of the third state variable and look to directly balance weighted mean delays.
We arrive at another optimal myopic policy with three stages to go, under which the error in the ratio of mean delays decreases with the window size, as opposed to the policy mentioned above, for which the error decreases only as the square root of the window size. We perform numerical value iteration to show the existence of the average cost and study the performance by simulation. The performance of our policy is comparable with that of the value-iterated, sampled, and then stationarily employed policy reported by Mallesh. We have then studied general inter-arrival-time processes and obtained the optimal myopic policy for the Pareto inter-arrival process in particular. We support with simulations that our policy fares similarly to the PAD policy reported by Dovrolis et al., which is primarily heuristic in nature. We then model possible packet errors in the multiplexed channel by either a Bernoulli process or a Markov-modulated Bernoulli process with two possible channel states. We also consider two possible round-trip-time values for the control information, namely zero and one slot. The policies that are next-stage optimal (for zero round-trip time) and two-stage optimal (for one-slot round-trip time) are obtained. Simulations with varied class weights for Bernoulli arrivals and batch arrivals with Poisson batch sizes show that these policies indeed achieve mean queueing delays very close to the respective target delays. We also obtain the structure of optimal policies with N = 2 + ⌈rtt⌉ stages to go for generic values of rtt, which need not be multiples of the time slot.
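The balancing idea can be illustrated with a toy discrete-time simulation of the two-class, single-server model: Bernoulli arrivals, unit service per slot, and a greedy rule that serves the class with the larger weighted instantaneous queue length, with weights set inversely proportional to the product of target delay and arrival rate. The parameter values are hypothetical, and the greedy rule is only a stand-in for the optimal policies derived in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters: two classes, Bernoulli arrivals, unit service per slot.
lam = np.array([0.30, 0.45])         # arrival probabilities per slot
target_delay = np.array([2.0, 6.0])  # target mean queueing delays (slots)
w = 1.0 / (target_delay * lam)       # class weights ~ 1 / (target delay x arrival rate)

T = 200_000
q = np.zeros(2)                      # instantaneous queue lengths
area = np.zeros(2)                   # time-accumulated queue lengths (for Little's law)
arrivals = np.zeros(2)

for _ in range(T):
    a = (rng.random(2) < lam).astype(float)
    q += a
    arrivals += a
    if q.sum() > 0:
        # Greedily balance w1*Q1 against w2*Q2: serve the class with larger weighted queue.
        c = int(np.argmax(w * q))
        if q[c] == 0:
            c = 1 - c
        q[c] -= 1
    area += q

# Approximate mean queueing delays via Little's law: E[delay] ~ E[Q] / lambda
mean_delay = area / np.maximum(arrivals, 1)
print("mean delays:", mean_delay,
      "achieved ratio:", mean_delay[0] / mean_delay[1],
      "target ratio:", target_delay[0] / target_delay[1])
```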

On The Best-m Feedback Scheme In OFDM Systems With Correlated Subchannels

Ananya, S N 03 1900 (has links) (PDF)
Orthogonal frequency division multiplexing (OFDM) in next-generation wireless systems provides high downlink data rates by employing frequency-domain scheduling and rate adaptation at the base station (BS). However, in order to control the significant feedback overhead required by these techniques, feedback reduction schemes are essential. Best-m feedback is one such scheme, implemented in OFDM standards such as Long Term Evolution: the subchannel (SC) power gains of only the m strongest SCs, together with their indices, are fed back to the BS. Two assumptions pervade most of the literature that analyzes best-m feedback in OFDM systems. The first is that the SC gains are uncorrelated; in practice, however, the SC gains are highly correlated, even for dispersive multipath channels. The second concerns the treatment of unreported SCs, which are not fed back by the best-m scheme: if no user reports an SC, no data transmission is assumed to occur on it. In this thesis, we eschew these assumptions and investigate best-m feedback in OFDM systems with correlated SC gains. We first characterize the average throughput as a function of the correlation and m. A uniform correlation model is assumed, i.e., the SC gains are correlated with each other by the same correlation coefficient. The system model incorporates greedy, modified proportional-fair, and round-robin schedulers, discrete rate adaptation, and non-identically distributed SC gains across users. We then generalize the model to account for feedback delay. We show in all these cases that correlation degrades the average throughput, and that this effect does not arise when users report all the SC power gains to the BS. In order to mitigate the reduction in average throughput caused by unreported SCs, we derive a novel, constrained minimum mean square error channel estimator for the best-m scheme to estimate the gains of these unreported SCs. The estimator exploits the additional information, unique to the best-m scheme, that the estimated SC power gains must be less than those that were reported. We then study its implications on the downlink average cell throughput, again for different schedulers. We show that our approach reduces the root mean square error and increases the average throughput compared to several approaches pursued in the literature. The more correlated the SC gains, the greater the improvement.
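A small simulation sketch of this setting is given below: uniformly correlated subchannel gains are generated via a common-plus-independent Gaussian construction, each user feeds back only its m strongest gains, and a greedy scheduler assigns each SC to the user with the largest reported gain, with unreported SCs carrying no data. All parameter values are hypothetical; the sketch only illustrates the throughput-versus-correlation trend, not the analytical characterization or the constrained MMSE estimator of the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def correlated_gains(n_users, n_sc, rho):
    """Uniform correlation model: each pair of complex SC gains has correlation rho."""
    u = rng.standard_normal((n_users, 1)) + 1j * rng.standard_normal((n_users, 1))
    v = rng.standard_normal((n_users, n_sc)) + 1j * rng.standard_normal((n_users, n_sc))
    h = np.sqrt(rho) * u + np.sqrt(1 - rho) * v
    return np.abs(h) ** 2 / 2            # exponentially distributed power gains, unit mean

def best_m_throughput(rho, n_users=10, n_sc=48, m=5, snr=5.0, trials=2000):
    rate = 0.0
    for _ in range(trials):
        g = correlated_gains(n_users, n_sc, rho)
        # Best-m feedback: each user reports only its m strongest SC gains.
        reported = np.zeros_like(g, dtype=bool)
        idx = np.argpartition(g, -m, axis=1)[:, -m:]
        np.put_along_axis(reported, idx, True, axis=1)
        fed_back = np.where(reported, g, 0.0)
        # Greedy scheduler: each SC goes to the user with the largest reported gain;
        # an SC reported by no user carries no data (the assumption studied in the thesis).
        best = fed_back.max(axis=0)
        rate += np.sum(np.log2(1 + snr * best[best > 0]))
    return rate / (trials * n_sc)

for rho in (0.0, 0.5, 0.9):
    print(f"rho={rho}: average throughput {best_m_throughput(rho):.3f} bit/s/Hz per SC")
```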

Finding A Subset Of Non-defective Items From A Large Population : Fundamental Limits And Efficient Algorithms

Sharma, Abhay 05 1900 (has links) (PDF)
Consider a large population containing a small number of defective items. A commonly encountered goal is to identify the defective items, for example, to isolate them. In the classical non-adaptive group testing (NAGT) approach, one groups the items into subsets, or pools, and runs a test for the presence of a defective item on each pool. Using the outcomes of the tests, a fundamental goal of group testing is to reliably identify the complete set of defective items with as few tests as possible. In contrast, this thesis studies a non-defective subset identification problem, where the primary goal is to identify a "subset" of "non-defective" items given the test outcomes. The main contributions of this thesis are as follows. We derive upper and lower bounds on the number of non-adaptive group tests required to identify a given number of non-defective items with arbitrarily small probability of incorrect identification as the population size goes to infinity. We show that an impressive reduction in the number of tests is achievable compared to the approach of first identifying all the defective items and then picking the required number of non-defective items from the complement set. For example, in the asymptotic regime with population size N → ∞, to identify L non-defective items out of a population containing K defective items, when the tests are reliable, our results show that O((K log K) L / N) measurements are sufficient when L ≪ N − K and K is fixed. In contrast, the number of tests necessary under the conventional approach grows with N as O(K log K log(N/K)). Our results are derived using a general sparse signal model, by virtue of which they are also applicable to other important sparse-signal-based applications such as compressive sensing. We present a bouquet of computationally efficient and analytically tractable non-defective subset recovery algorithms. By analyzing the probability of error of the algorithms, we obtain bounds on the number of tests required for non-defective subset recovery with arbitrarily small probability of error. By comparing with the information-theoretic lower bounds, we show that the upper bounds on the number of tests are order-wise tight up to a log(K) factor, where K is the number of defective items. Our analysis accounts for the impact of both additive noise (false positives) and dilution noise (false negatives). We also provide extensive simulation results that compare the relative performance of the different algorithms and provide further insights into their practical utility. The proposed algorithms significantly outperform the straightforward approaches of testing items one-by-one, and of first identifying the defective set and then choosing the non-defective items from the complement set, in terms of the number of measurements required to ensure a given success rate. We investigate the use of adaptive group testing in the application of finding a spectrum hole of a specified bandwidth within a given wideband of interest. We propose a group-testing-based spectrum hole search algorithm that exploits sparsity in the primary spectral occupancy by testing a group of adjacent sub-bands in a single test. This is enabled by a simple and easily implementable sub-Nyquist sampling scheme for signal acquisition by the cognitive radios. Energy-based hypothesis tests are used to provide an occupancy decision over the group of sub-bands, and this forms the basis of the proposed algorithm to find contiguous spectrum holes of a specified bandwidth.
We extend this framework to a multistage sensing algorithm that can be employed in a variety of spectrum sensing scenarios, including non-contiguous spectrum hole search. Our analysis allows one to identify the sparsity and SNR regimes where group testing can lead to significantly lower detection delays compared to a conventional bin-by-bin energy detection scheme. We illustrate the performance of the proposed algorithms via Monte Carlo simulations.
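For intuition, the sketch below simulates noiseless non-adaptive pooling with a random Bernoulli test matrix and applies the simplest non-defective certification rule: any item that participates in at least one negative pool is declared non-defective. This is not one of the thesis's noise-robust recovery algorithms; the population size, the number of tests and the pool-inclusion probability are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

N, K, L = 2000, 10, 50                 # population size, defectives, non-defectives wanted
defective = np.zeros(N, dtype=bool)
defective[rng.choice(N, K, replace=False)] = True

def certify_non_defective(n_tests, p=1.0 / (K + 1)):
    """Noiseless NAGT with a random Bernoulli pooling matrix: an item is certified
    non-defective if it belongs to at least one pool that tested negative."""
    A = rng.random((n_tests, N)) < p                       # n_tests x N pooling matrix
    y = (A.astype(int) @ defective.astype(int)) > 0        # pool outcome: any defective present?
    return A[~y].sum(axis=0) > 0                           # member of some negative pool

for n_tests in (20, 40, 80):
    certified = certify_non_defective(n_tests)
    print(f"{n_tests} tests -> {int(certified.sum())} items certified non-defective (target L = {L})")
```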

Optimal Amplify-And-Forward Relaying For Cooperative Communications And Underlay Cognitive Radio

Sainath, B 04 1900 (has links) (PDF)
Relay-assisted cooperative communication exploits spatial diversity to combat wireless fading, and is an appealing technology for next-generation wireless systems. Several relay cooperation protocols have been proposed in the literature. In amplify-and-forward (AF) relaying, which is the focus of this thesis, the relay amplifies the signal it receives from the source and forwards it to the destination. AF has been extensively studied on account of its simplicity, since the relay does not need to decode the received signal. We propose a novel optimal relaying policy for two-hop AF cooperative relay systems, in which an average-power-constrained relay adapts its gain and transmit power to minimize the fading-averaged symbol error probability (SEP) at the destination. Next, we consider a generalization of this policy in which the relay operates as an underlay cognitive radio (CR), a mode of communication that is relevant because it promises to address the spectrum shortage problem. Here, the relay adapts its gain as a function of its local channel gains to the source, the destination and the primary receiver, such that the average interference it causes to the primary receiver is also constrained. For both policies, we also present near-optimal, simpler relay gain adaptation policies that are easy to implement and that provide insights into the optimal policies. The SEPs and diversity orders of the policies are analyzed to quantify their performance. These policies generalize the conventional fixed-power and fixed-gain AF relaying policies considered in the cooperative and CR literature, and outperform them by 2.0-7.7 dB. This translates into significant energy savings at the source and relay, and motivates their use in next-generation wireless systems.
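As a baseline for comparison, the following Monte Carlo sketch estimates the fading-averaged SEP of conventional fixed-gain, two-hop AF relaying with BPSK over Rayleigh fading, i.e. the kind of non-adaptive policy that the proposed gain- and power-adaptation policies are reported to outperform. The gain normalization and detector are generic textbook choices, not the thesis's optimal policy.

```python
import numpy as np

rng = np.random.default_rng(4)

def af_sep(snr_db, n=200_000):
    """Fading-averaged SEP of BPSK over a two-hop, fixed-gain AF relay link."""
    snr = 10 ** (snr_db / 10)
    s = rng.choice([-1.0, 1.0], size=n)                          # BPSK symbols
    h1 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    h2 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    n1 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2 * snr)
    n2 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2 * snr)
    y_r = h1 * s + n1                                            # received at the relay
    g = 1.0 / np.sqrt(1.0 + 1.0 / snr)                           # fixed gain meeting E|g*y_r|^2 = 1
    y_d = h2 * g * y_r + n2                                      # received at the destination
    # Coherent detection assuming the destination knows the cascaded channel g*h1*h2
    s_hat = np.sign(np.real(np.conj(g * h1 * h2) * y_d))
    return np.mean(s_hat != s)

for snr_db in (5, 10, 15, 20):
    print(f"{snr_db} dB: SEP ~ {af_sep(snr_db):.4f}")
```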

Bitrate Reduction Techniques for Low-Complexity Surveillance Video Coding

Gorur, Pushkar January 2016 (has links) (PDF)
High-resolution surveillance video cameras are invaluable resources for effective crime prevention and forensic investigations. However, the increasing communication bandwidth requirements of high-definition surveillance videos severely limit the number of cameras that can be deployed. Higher bitrate also increases operating expenses due to higher data communication and storage costs. Hence, it is essential to develop low-complexity algorithms which reduce the data rate of the compressed video stream without affecting image fidelity. In this thesis, a computer-vision-aided H.264 surveillance video encoder and four associated algorithms are proposed to reduce the bitrate. The proposed techniques are (I) speeded-up foreground segmentation, (II) skip decision, (III) reference frame selection and (IV) face region-of-interest (ROI) coding. In the first part of the thesis, a modification to the adaptive Gaussian mixture model (GMM) based foreground segmentation algorithm is proposed to reduce computational complexity. This is achieved by replacing expensive floating-point computations with low-cost integer operations. To maintain accuracy, we compute periodic floating-point updates for the GMM weight parameter using the value of an integer counter. Experiments show speedups in the range of 1.33-1.44 on standard video datasets in which a large fraction of pixels are multimodal. In the second part, we propose a skip decision technique that uses a spatial sampler to sample pixels, which are then segmented using the speeded-up GMM algorithm. The storage pattern of the GMM parameters in memory is also modified to improve cache performance. Skip selection is performed using the segmentation results of the sampled pixels. In the third part, a reference frame selection algorithm is proposed to maximize the number of background macroblocks (MBs), i.e. MBs that contain background image content, in the Decoded Picture Buffer. This reduces the cost of coding uncovered background regions. Distortion over foreground pixels is measured to quantify the performance of the skip decision and reference frame selection techniques. Experimental results show bitrate savings of up to 94.5% over methods proposed in the literature on video surveillance datasets. The proposed techniques also provide up to a 74.5% reduction in compression complexity without increasing the distortion over the foreground regions of the video sequence. In the final part of the thesis, face and shadow region detection is combined with the skip decision algorithm to perform ROI coding for pedestrian surveillance videos. Since person identification requires high-quality face images, MBs containing face image content are encoded with a low quantization parameter (QP) setting, i.e. at high quality. Other regions of the body are treated as RORI (regions of reduced interest) and are encoded at low quality, while shadow regions are marked as Skip. Techniques that use only facial features to detect faces (e.g. the Viola-Jones face detector) are not robust in real-world scenarios. Hence, we propose to first detect pedestrians using deformable part models and to determine the face region from the deformed part locations. Detected pedestrians are tracked using an optical-flow-based tracker combined with a Kalman filter; the tracker improves accuracy and also avoids the need to run the object detector on already detected pedestrians. Shadow and skin detector scores are computed over superpixels.
Bilattice-based logic inference is used to combine the multiple likelihood scores and classify the superpixels as ROI, RORI or RONI. The coding mode and QP values of the MBs are determined using the superpixel labels. The proposed techniques provide a further reduction in bitrate of up to 50.2%.
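The integer-counter idea behind the speeded-up segmentation can be sketched as follows for a single background mode per pixel: per-frame work is limited to comparisons, integer match-count accumulation and a mean update, while the floating-point weight update w ← (1 − α)w + α·match is applied only once every few frames using the accumulated counts. The update period, thresholds and single-mode simplification are hypothetical; the thesis's algorithm operates on a full multimodal GMM.

```python
import numpy as np

rng = np.random.default_rng(5)

H, W = 120, 160
ALPHA, UPDATE_PERIOD, THRESH = 0.01, 16, 3.0     # hypothetical constants

mean = np.full((H, W), 128.0)                    # per-pixel background mean
var = np.full((H, W), 100.0)                     # per-pixel background variance (kept fixed here)
weight = np.full((H, W), 1.0)                    # per-pixel mode weight (float, refreshed sparsely)
match_count = np.zeros((H, W), dtype=np.int32)   # cheap integer accumulator

def process_frame(frame, t):
    global weight
    matched = (frame - mean) ** 2 < THRESH ** 2 * var
    match_count[...] += matched                  # integer-only bookkeeping every frame
    mean[matched] += ALPHA * (frame[matched] - mean[matched])
    if t % UPDATE_PERIOD == 0:                   # periodic floating-point weight refresh
        m = match_count / UPDATE_PERIOD          # average match indicator over the window
        decay = (1 - ALPHA) ** UPDATE_PERIOD
        weight = decay * weight + (1 - decay) * m
        match_count[...] = 0
    return ~matched                              # foreground mask

for t in range(1, 65):
    frame = 128 + 10 * rng.standard_normal((H, W))
    if t > 32:
        frame[40:80, 60:100] += 80               # a synthetic foreground object appears
    fg = process_frame(frame, t)

print("foreground pixels in the last frame:", int(fg.sum()))
```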

Data Fusion Based Physical Layer Protocols for Cognitive Radio Applications

Venugopalakrishna, Y R January 2016 (has links) (PDF)
This thesis proposes and analyzes data fusion algorithms that operate at the physical layer of a wireless sensor network, in the context of three applications of cognitive radios: 1. cooperative spectrum sensing via binary consensus; 2. multiple-transmitter localization and communication footprint identification; 3. target self-localization using beacon nodes. For the first application, a co-phasing based data combining scheme is studied under imperfect channel knowledge. The evolution of the network consensus state is modeled as a Markov chain, and the average transition probability matrix is derived. Using this, the average hitting time and average consensus duration are obtained, which are used to determine and optimize the performance of the consensus procedure. Second, using the fact that a typical communication footprint map admits a sparse representation, two novel compressed sensing based schemes are proposed to construct the map using 1-bit decisions from sensors deployed in a geographical area. The number of transmitters is determined using the K-means algorithm and a circular fitting technique, and a design procedure is proposed to determine the power thresholds for signal detection at the sensors. Third, an algorithm is proposed for the self-localization of a target node using power measurements from beacon nodes transmitting from known locations. The geographical area is overlaid with a virtual grid, and the problem is treated as one of testing overlapping subsets of grid cells for the presence of the target node. The column matching algorithm from the group testing literature is used to devise the target localization algorithm. The average probability of localizing the target within a grid cell is derived using tools from Poisson point processes and order statistics, and this quantity is used to determine the minimum node density required to localize the target within a grid cell with high probability. The performance of all the proposed algorithms is illustrated through Monte Carlo simulations.
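For the third application, a toy version of grid-cell testing with column matching might look like the sketch below: beacons at known locations define overlapping coverage "pools" over a virtual grid, the target's 1-bit beacon detections form the outcome vector, and the grid cell whose column agrees with the most outcomes is selected. The disc coverage model, grid size and noiseless detections are simplifying assumptions, not the thesis's power-measurement model.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical setup: unit square overlaid with a G x G virtual grid, beacon nodes at
# random known locations, and a disc coverage model with radius R.
G, R, N_BEACONS = 10, 0.35, 25
cells = np.stack(np.meshgrid(np.linspace(0.05, 0.95, G),
                             np.linspace(0.05, 0.95, G)), -1).reshape(-1, 2)
beacons = rng.random((N_BEACONS, 2))

# Group-testing matrix: test i (beacon i) "includes" grid cell j if the beacon covers it.
A = np.linalg.norm(beacons[:, None, :] - cells[None, :, :], axis=2) < R

target = rng.random(2)
y = np.linalg.norm(beacons - target, axis=1) < R      # beacons heard by the target (noiseless)

# Column matching: pick the grid cell whose column of A best matches the outcome vector y.
score = (A == y[:, None]).sum(axis=0)
cell_hat = cells[np.argmax(score)]
print("true target location:", np.round(target, 2),
      " estimated cell centre:", np.round(cell_hat, 2))
```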

Source And Channel Coding Techniques for The MIMO Reverse-link Channel

Ganesan, T January 2014 (has links) (PDF)
In wireless communication systems, the use of multiple antennas, also known as Multiple-Input Multiple-Output (MIMO) communications, is now a widely accepted and important technology for improving reliability and throughput performance. However, in order to achieve the performance gains predicted by the theory, the transmitter and receiver need to have accurate and up-to-date Channel State Information (CSI) to overcome the vagaries of the fading environment. Traditionally, the CSI is obtained at the receiver by sending a known training sequence in the forward-link direction. This CSI has to be conveyed to the transmitter via a low-rate, low-latency and noisy feedback channel in the reverse-link direction. This thesis addresses three key challenges in sending the CSI to the transmitter of a MIMO communication system over the reverse-link channel, and provides novel solutions to them. The first issue is that the available CSI at the receiver has to be quantized to a finite number of bits, sent over a noisy feedback channel, reconstructed at the transmitter, and used by the transmitter for precoding its data symbols. In particular, the CSI quantization technique has to be resilient to errors introduced by the noisy reverse-link channel, and it is of interest to design computationally simple, linear filters to mitigate these errors. The second issue addressed is the design of low-latency, low-decoding-complexity error correction codes to provide protection against fading conditions and noise in the reverse-link channel. The third issue is to improve the resilience of the reverse-link channel to fading. The solution to the first problem is obtained by proposing two classes of receive filtering techniques, where the output of the source decoder is passed through a filter designed to reduce the overall distortion, including the effect of the channel noise. This work combines high-resolution quantization theory and the optimal Minimum Mean Square Error (MMSE) filtering formulation to analyze, and optimize, the total end-to-end distortion. As a result, analytical expressions for the linear receive filters are obtained that minimize the total end-to-end distortion, given the quantization scheme and the source (channel state) distribution. The solution to the second problem is obtained by proposing a new family of error correction codes, termed trellis coded block codes, where a trellis code and a block code are concatenated in order to provide good coding gain as well as low-latency, low-complexity decoding. This code construction is made possible by the existence of a uniform partitioning of linear block codes. The solution to the third problem is obtained by proposing three novel transmit precoding methods that are applicable to time-division-duplex systems, where the channel reciprocity can be exploited in designing the precoding scheme. The proposed precoding methods convert the Rayleigh fading MIMO channel into parallel Additive White Gaussian Noise (AWGN) channels with fixed gain, while satisfying an average transmit power constraint. Moreover, the receiver does not need knowledge of the CSI in order to decode the received data. These precoding methods are also extended to Rayleigh fading multi-user MIMO channels. Finally, all the above methods are applied to the problem of designing a low-rate, low-latency code for the noisy and fading reverse-link channel that is used for sending the CSI.
Simulation results are provided to demonstrate the improvement in the forward-link data rate due to the proposed methods. Note that although the three solutions are presented in the context of CSI feedback in MIMO communications, their development is fairly general in nature and, consequently, they are potentially applicable to other communication systems as well.
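The first idea, receive filtering of quantized CSI sent over a noisy feedback link, can be illustrated in the scalar case: a Gaussian channel-state sample is quantized, its bits pass through a binary symmetric channel, and the decoder output is scaled by a linear MMSE (Wiener) coefficient estimated from the joint statistics. The codebook, bit labelling and BSC model are hypothetical stand-ins for the thesis's vector quantization and fading feedback channel.

```python
import numpy as np

rng = np.random.default_rng(7)

B, p, n = 4, 0.05, 200_000
levels = np.linspace(-3, 3, 2 ** B)                    # hypothetical uniform codebook

x = rng.standard_normal(n)                             # channel-state samples
idx = np.abs(x[:, None] - levels[None, :]).argmin(1)   # nearest-neighbour quantization

bits = (idx[:, None] >> np.arange(B)) & 1              # natural binary labelling
flips = rng.random((n, B)) < p                         # BSC errors on the feedback link
idx_rx = ((bits ^ flips) << np.arange(B)).sum(1)       # received (possibly corrupted) index
x_hat = levels[idx_rx]                                 # source-decoder output at the transmitter

# Linear MMSE receive "filter" (scalar Wiener coefficient): a = E[x*x_hat] / E[x_hat^2]
a = np.mean(x * x_hat) / np.mean(x_hat ** 2)
print("MSE without filter:", np.mean((x - x_hat) ** 2))
print("MSE with MMSE filter:", np.mean((x - a * x_hat) ** 2), " (a =", round(a, 3), ")")
```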

Classical Binary Codes And Subspace Codes in a Lattice Framework

Pai, Srikanth B January 2015 (has links) (PDF)
The classical binary error-correcting codes, and subspace codes for error correction in random network coding, are two different forms of error control coding. We identify common features between these two forms and study the relations between them with the aid of lattices. Lattices are partially ordered sets in which every pair of elements has a least upper bound and a greatest lower bound. We shall demonstrate that many questions connecting these forms have a natural motivation from the viewpoint of lattices. We shall show that a lattice framework captures the notion of the Singleton bound, where the bound is on the size of the code as a function of its parameters. For the most part, we consider a special type of lattice which has the geometric modular property. We will use the lattice framework to combine the two different forms and then, in order to demonstrate the utility of this unifying view, derive a general version of the Singleton bound. We will note that the Singleton bounds behave differently in certain respects because the binary coding framework is associated with a lattice that is distributive, and we shall demonstrate that the lack of distributivity gives rise to a weaker bound. We show that the Singleton bounds for classical binary codes, subspace codes, rank metric codes and Ferrers diagram rank metric codes can all be derived using a common technique. In the literature, Singleton bounds are derived for Ferrers diagram rank metric codes where the rank metric codes are linear; we introduce a generalized version of Ferrers diagram rank metric codes and obtain a Singleton bound for this version. Next, we shall prove a conjecture concerning the constraints of embedding a binary coding framework into a subspace framework. We shall prove a conjecture by Braun, Etzion and Vardy, which states that any such embedding which contains the full space in its range is constrained to have a particular size. Our proof uses a theorem due to Lovász, a subspace counting theorem for geometric modular lattices, to establish the conjecture. We shall further demonstrate that any code that achieves the conjectured size must be of a particular type, which turns out to be a natural distributive sub-lattice of the given geometric modular lattice.
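As a small concrete instance of the classical side of the story, the snippet below brute-force checks the Singleton bound d ≤ n − k + 1 for the binary [7, 4, 3] Hamming code; the lattice-theoretic generalization to subspace and rank metric codes developed in the thesis is, of course, not captured by this check.

```python
import numpy as np
from itertools import product

# Classical Singleton bound for an [n, k, d] binary linear code: d <= n - k + 1.
# Example: the [7, 4] Hamming code, whose minimum distance is 3.
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])
n, k = G.shape[1], G.shape[0]

codewords = [(np.array(m) @ G) % 2 for m in product([0, 1], repeat=k)]
d = min(int(c.sum()) for c in codewords if c.any())   # minimum distance = min nonzero weight

print(f"[n={n}, k={k}, d={d}]  Singleton bound d <= n - k + 1 = {n - k + 1}:", d <= n - k + 1)
```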
