1. Analyses and Cost Evaluation of Code Tree Search Algorithms

Mohan, Seshadri 09 1900

Codes with a tree structure find wide use in data compression and error correction. It is generally impractical to view and weigh all the branches in a code tree, so a search algorithm is employed that considers some branches but not others in a predetermined fashion. Traditionally, the efficiency of code tree search algorithms has been measured by the number of tree branches visited for a given level of performance. This measure does not indicate the true consumption of resources. Cost functions are defined based on the number of code tree paths retained, S, the length of the paths, L, and the number of code tree branches searched per branch released as output, E[C]. Using these cost functions, most of the existing algorithms, as well as some new algorithms proposed here, are compared.

The new algorithms include three metric-first algorithms. The first, the merge algorithm, uses an auxiliary list to store paths in addition to the main list used by the stack algorithm. It reduces the dependence of the product resource cost on S from O(S^2) for the stack algorithm to O(S^(4/3)); a generalization of this algorithm reduces the product cost further to O(S log S). The second algorithm uses a class of height-balanced trees, known as AVL trees, to store code tree paths, giving an alternative to the merge algorithm that also achieves O(S log S) cost.

The third algorithm, using the concepts of dynamic hashing and trie searching, modifies the Jelinek bucket algorithm by incorporating dynamic splitting and merging of buckets. This strategy maintains a balanced data structure and reduces the product cost still further compared with the first two algorithms.

We next turn to the analysis of the number of nodes visited during a search. Using the theory of multitype branching processes in random environments, an equation for node computation is derived for asymmetric source coding by the single stack algorithm. This equation is shown to be the stochastic analog of an equation for symmetric sources. Simulation results, obtained by encoding the Hamming source with the single stack algorithm, are used to optimize the performance of the algorithm with respect to the bias factor, stack length, and limit on computation. A modification to the algorithm that raises the barrier during forward motion gives better distortion performance.

The metric-first stack algorithm is used to encode a voiced speech sound. From experimental evidence, it is shown how to optimize the algorithm's SNR performance with respect to its storage, execution time, and node computation; for each of these, the optimal parameterization of the algorithm differs markedly. Similarities are pointed out between the results for speech and earlier theoretical results for the binary i.i.d. source with the Hamming distortion measure. It is shown that metric-first algorithms may perform better with "real life" sources such as speech than they do with artificial sources, and in view of this the algorithms proposed here take on added significance.

Thesis / Doctor of Philosophy (PhD)
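
To make the baseline concrete, the following is a minimal sketch (in Python) of a metric-first, stack-type code tree search. The names `metric_first_search`, `branches`, and `branch_metric` are illustrative placeholders rather than anything defined in the thesis, and a binary heap stands in for the ordered path list; the main list, merge lists, AVL trees, and buckets discussed above are alternative ways of maintaining exactly this ordered structure, which is where their costs differ.

```python
import heapq
import itertools

def metric_first_search(depth, branches, branch_metric):
    """Metric-first (stack-algorithm style) search of a code tree, as a minimal sketch.

    depth          -- target path length (number of branches in a released path)
    branches       -- symbols labelling the branches leaving each node
    branch_metric  -- branch_metric(path, symbol): metric increment for extending
                      `path` with `symbol`; smaller is better in this sketch
    Returns (path, metric) for the first full-length path released as output.
    """
    counter = itertools.count()            # tie-breaker so the heap never compares paths
    heap = [(0.0, next(counter), ())]      # (accumulated metric, tie, partial path)
    while heap:
        metric, _, path = heapq.heappop(heap)   # best partial path currently retained
        if len(path) == depth:
            return path, metric                 # release the first full-length path
        for sym in branches:                    # extend the best path by every branch
            heapq.heappush(
                heap,
                (metric + branch_metric(path, sym), next(counter), path + (sym,)),
            )
    return None, float("inf")
```

For binary source coding with the Hamming distortion measure, for example, `branches` would be (0, 1) and `branch_metric` could return the per-symbol distortion, possibly offset by a bias factor as in the biased metrics studied above.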

2. Coherent and non-coherent data detection algorithms in massive MIMO

Alshamary, Haider Ali Jasim 01 May 2017

Over the past few years there has been extensive growth in data traffic and in the number of data-consuming devices: billions of mobile data devices are connected to the global wireless network. Customers demand improved services and up-to-date applications, such as real-time video and gaming. These applications require reliable, high-data-rate wireless communication over high-throughput networks. One way to meet these requirements is to increase the number of transmit and/or receive antennas of the wireless communication system. Massive multiple-input multiple-output (MIMO) has emerged as a promising candidate technology for next-generation (5G) wireless communication. Massive MIMO increases the spatial multiplexing gain and the data rate by adding a very large number of antennas to the base station (BS) terminals of wireless communication systems. However, building efficient, low-complexity algorithms able to decode a large flow of transmitted signals, coherently or non-coherently, is a major challenge in massive MIMO. In this dissertation, we propose novel approaches to achieve optimal performance for joint channel estimation and signal detection in massive MIMO systems. The dissertation consists of three parts, distinguished by the number of users at the receiver side.

In the first part, we introduce a probabilistic approach to coherent signal detection using an optimized Markov chain Monte Carlo (MCMC) technique. Two factors determine how quickly the MCMC detector finds the optimal solution: the probability of encountering the optimal solution once the Markov chain has converged to its stationary distribution, and the mixing time of the chain. First, we compute the optimal value of the "temperature" parameter such that the Markov chain encounters the optimal solution with a probability that is only polynomially small. Second, we study the mixing time of the underlying Markov chain of the proposed MCMC detector.

The first part of the dissertation assumes the channel state information is known; the second part considers non-coherent signal detection. We develop optimal joint channel estimation and signal detection algorithms for massive single-input multiple-output (SIMO) wireless systems, proposing data detection algorithms that are exact in the sense of the generalized likelihood ratio test (GLRT). In addition to being optimal, the proposed tree-based algorithms have low expected complexity and handle general modulus constellations. More specifically, despite the large number of unknown channel coefficients in massive SIMO systems, we show that the expected computational complexity of these algorithms is linear in the number of receive antennas (N) and polynomial in the channel coherence time (T). We prove that as $N \rightarrow \infty$, the number of tested hypotheses for each coherence block equals $T$ times the cardinality of the modulus constellation. Simulation results show that the optimal non-coherent data detection algorithms achieve significant performance gains (up to 5 dB improvement in energy efficiency) with low computational complexity.

In the third part, we consider massive MIMO uplink wireless systems with time-division duplex (TDD) operation. We propose a GLRT-optimal algorithm for joint channel estimation and data detection in massive MIMO systems and show that its expected complexity grows only polynomially in the channel coherence time (T).
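
As a rough illustration of the coherent detector in the first part, the sketch below implements a generic Gibbs-sampling MIMO detector with a fixed "temperature" parameter, assuming the usual linear model y = Hs + n with known H. It is not the dissertation's optimized detector: the temperature is left as a free input here, whereas the dissertation computes its optimal value, and the function and parameter names are illustrative only.

```python
import numpy as np

def mcmc_mimo_detect(y, H, constellation, sweeps=100, temperature=1.0, rng=None):
    """Gibbs-sampling (MCMC) detector for y = H s + n with known H (coherent case).

    `temperature` is the tuning parameter discussed above, taken as a free input in
    this sketch. Returns the best symbol vector found, i.e. the s minimising ||y - H s||^2.
    """
    rng = np.random.default_rng() if rng is None else rng
    constellation = np.asarray(constellation)
    n_tx = H.shape[1]
    s = rng.choice(constellation, size=n_tx)          # random initial hypothesis
    best_s = s.copy()
    best_cost = np.linalg.norm(y - H @ s) ** 2

    for _ in range(sweeps):
        for k in range(n_tx):                         # one Gibbs sweep over the coordinates
            costs = np.empty(len(constellation))
            for i, sym in enumerate(constellation):
                s[k] = sym
                costs[i] = np.linalg.norm(y - H @ s) ** 2
            # conditional law of s[k] given the other symbols, at the chosen temperature
            logp = -costs / (2.0 * temperature ** 2)
            p = np.exp(logp - logp.max())
            s[k] = rng.choice(constellation, p=p / p.sum())
            cost = np.linalg.norm(y - H @ s) ** 2
            if cost < best_cost:                      # keep the best hypothesis seen so far
                best_cost, best_s = cost, s.copy()
    return best_s
```

For example, a normalized QPSK constellation could be passed as np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2).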
The proposed algorithm for the TDD uplink is novel in two respects: first, the transmitted signal can be drawn from any modulus constellation, constant or non-constant; second, the algorithm decodes the noisy signal received from a multiple-antenna array, offering an exact solution with complexity polynomial in the coherence block length. Simulation results demonstrate significant performance gains of our approach compared with suboptimal non-coherent detection schemes. To the best of our knowledge, this is the first algorithm that efficiently achieves GLRT-optimal non-coherent detection for massive MIMO systems with general constellations.
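
The GLRT criterion underlying the second and third parts can be illustrated with a deliberately naive sketch. Assuming the standard block-fading SIMO model Y = x h^T + W, the GLRT solution maximizes ||x^H Y||^2 / ||x||^2 over candidate sequences x; the code below enumerates hypotheses exhaustively, which is exactly the exponential cost that the exact tree-search algorithms described above avoid. The model, the function name, and the convention of pinning the first symbol are assumptions of this sketch, not details taken from the dissertation.

```python
import itertools
import numpy as np

def glrt_noncoherent_simo(Y, constellation):
    """Brute-force GLRT detector for one coherence block of a SIMO link (sketch).

    Assumed model: Y = x h^T + W, with Y of shape (T, N), x the length-T transmitted
    sequence drawn from `constellation`, and h the unknown length-N channel. For a
    fixed x the ML channel estimate is h^T = x^H Y / ||x||^2, so the GLRT reduces to
    maximising ||x^H Y||^2 / ||x||^2 over x. The first symbol is pinned to
    constellation[0] to remove the common phase ambiguity.
    """
    T, _ = Y.shape
    best_x, best_metric = None, -np.inf
    for tail in itertools.product(constellation, repeat=T - 1):
        x = np.array((constellation[0],) + tail)
        # GLRT metric: energy captured after projecting Y onto the hypothesis x
        metric = np.linalg.norm(np.conj(x) @ Y) ** 2 / np.linalg.norm(x) ** 2
        if metric > best_metric:
            best_metric, best_x = metric, x
    return best_x
```

For a constant-modulus constellation ||x||^2 = T is fixed, so the metric reduces to the correlation energy ||x^H Y||^2.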
