
A Conjugate Residual Solver with Kernel Fusion for massive MIMO Detection

Broumas, Ioannis. January 2023.
This thesis compares a GPU implementation of the Conjugate Residual method built as a sequence of generic library kernels against implementations of the method with custom kernels, in order to expose the performance gains of a key optimization strategy for memory-bound operations: kernel fusion, which makes efficient reuse of the processed data.

In massive MIMO, the iterative solver is employed at the linear detection stage to overcome the computational bottleneck of the matrix inversion required in the equalization process, which costs 𝒪(n³) for direct solvers. A detailed analysis is given of how one more of the Krylov subspace methods feasible for massive MIMO can be implemented on a GPU as a unified kernel. The thesis further shows that kernel fusion can improve execution performance not only when the input consists of large matrices and vectors, as in scientific computing, but also in massive MIMO and similar cases where the input is a large number of small matrices and vectors that must be processed in parallel.

More specifically, exploiting the small number of iterations the solver needs to reach a sufficiently close approximation of the exact solution in massive MIMO, and the case where the number of users matches the size of a warp, two approaches are proposed and tested that allow the algorithm to be fully unrolled and the separate kernels to be fused step by step into a single one, up to a top-down hard-coded implementation.

To address the algorithm's main computational burden, the matrix-vector product, further optimization techniques are proposed and tested to achieve high efficiency and high parallelism: two ways of utilizing the fast on-chip memories, namely preloading the matrix in shared memory and preloading the vector in shared memory.
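The abstract describes fusing the whole Conjugate Residual iteration into one kernel for warp-sized systems, with the matrix staged in shared memory. The following is a minimal CUDA sketch of that general idea, not the thesis code: it assumes a real-valued 32x32 system per warp, a fixed iteration count, and illustrative names (cr_fused, warp_dot); the actual data layout, complex arithmetic, and tuning in the thesis may differ.

// Minimal, illustrative sketch (assumed names and layout, not the thesis code):
// one warp solves one real-valued 32x32 system A*x = b with the Conjugate
// Residual method, with all iterations fused into a single kernel. The matrix
// is preloaded into shared memory; dot products use warp shuffles.
#include <cuda_runtime.h>
#include <cstdio>

constexpr int N     = 32;   // one unknown per lane (warp-sized system)
constexpr int ITERS = 3;    // a few iterations suffice in the massive MIMO setting

// Warp-wide dot product: every lane contributes a*b, result broadcast to all lanes.
__device__ float warp_dot(float a, float b) {
    float v = a * b;
    for (int off = 16; off > 0; off >>= 1)
        v += __shfl_down_sync(0xffffffffu, v, off);
    return __shfl_sync(0xffffffffu, v, 0);
}

__global__ void cr_fused(const float* __restrict__ A,   // [numSystems][N*N], row-major
                         const float* __restrict__ b,   // [numSystems][N]
                         float* __restrict__ x)         // [numSystems][N]
{
    __shared__ float As[N * N];        // matrix preloaded on chip
    const int sys  = blockIdx.x;       // one system per block (one warp)
    const int lane = threadIdx.x;      // 0..31

    for (int i = lane; i < N * N; i += N)   // coalesced preload of this system's matrix
        As[i] = A[sys * N * N + i];
    __syncwarp();

    // Each lane owns one element of the vectors x, r, p (and of A*r, A*p).
    float xi = 0.0f;                   // x0 = 0
    float ri = b[sys * N + lane];      // r0 = b - A*x0 = b
    float pi = ri;                     // p0 = r0

    // Row-wise matrix-vector product: lane i computes sum_j A[i][j] * v[j],
    // where v[j] lives in lane j's register and is broadcast with a shuffle.
    auto matvec = [&](float vj) {
        float acc = 0.0f;
        for (int j = 0; j < N; ++j)
            acc += As[lane * N + j] * __shfl_sync(0xffffffffu, vj, j);
        return acc;
    };

    float Ari = matvec(ri);            // A*r
    float Api = Ari;                   // A*p
    float rAr = warp_dot(ri, Ari);     // (r, A*r)

    for (int k = 0; k < ITERS; ++k) {
        float pAAp = warp_dot(Api, Api);
        if (pAAp == 0.0f) break;       // residual already (numerically) zero
        float alpha = rAr / pAAp;
        xi += alpha * pi;              // x   <- x + alpha*p
        ri -= alpha * Api;             // r   <- r - alpha*A*p
        Ari = matvec(ri);
        float rAr_new = warp_dot(ri, Ari);
        float beta = rAr_new / rAr;
        pi  = ri  + beta * pi;         // p   <- r + beta*p
        Api = Ari + beta * Api;        // A*p <- A*r + beta*A*p (no extra matvec)
        rAr = rAr_new;
    }
    x[sys * N + lane] = xi;
}

int main() {
    // Smoke test with a single system and A = I, so x should equal b.
    float hA[N * N] = {0}, hB[N], hX[N];
    for (int i = 0; i < N; ++i) { hA[i * N + i] = 1.0f; hB[i] = (float)i; }
    float *dA, *dB, *dX;
    cudaMalloc(&dA, sizeof(hA)); cudaMalloc(&dB, sizeof(hB)); cudaMalloc(&dX, sizeof(hX));
    cudaMemcpy(dA, hA, sizeof(hA), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, sizeof(hB), cudaMemcpyHostToDevice);
    cr_fused<<<1, N>>>(dA, dB, dX);    // one 32-thread block per system
    cudaMemcpy(hX, dX, sizeof(hX), cudaMemcpyDeviceToHost);
    printf("x[5] = %.1f (expected 5.0)\n", hX[5]);
    cudaFree(dA); cudaFree(dB); cudaFree(dX);
    return 0;
}

Each system is handled by one 32-thread block, and the dot products are done with warp shuffles, so no global-memory traffic or block-level synchronization is needed between the fused steps of an iteration.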
