21

Iterative equalization and decoding using reduced-state sequence estimation based soft-output algorithms

Tamma, Raja Venkatesh 30 September 2004 (has links)
We study and analyze the performance of iterative equalization and decoding (IED) using an M-BCJR equalizer. We use bit error rate (BER) and frame error rate simulations, together with extrinsic information transfer (EXIT) charts, to study and compare the performance of the M-BCJR and BCJR equalizers on precoded and non-precoded channels. Using EXIT charts, the achievable channel capacities with IED using the BCJR, M-BCJR, and MMSE LE equalizers are also compared. We predict the BER performance of IED using the M-BCJR equalizer from EXIT charts and explain the discrepancy between the observed and predicted performances by showing that the extrinsic outputs of the M-BCJR algorithm are not true log-likelihood ratios (LLRs). We show that the true LLRs can be estimated if the conditional distributions of the extrinsic outputs are known, and finally we design a practical estimator for computing the true LLRs from the extrinsic outputs of the M-BCJR equalizer.
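The final step above — recovering true LLRs from the equalizer's raw extrinsic outputs — can be illustrated with a short sketch (hypothetical code, not the estimator designed in the thesis): given training data, the true LLR of an extrinsic value e is log p(e | b=1) − log p(e | b=0), which a histogram-based lookup can approximate.

```python
import numpy as np

def true_llr_estimator(extrinsic, bits, n_bins=64):
    """Build a lookup that maps raw extrinsic outputs to true LLRs.

    Histogram sketch: estimate the conditional densities of the
    extrinsic output given the transmitted bit, then return their
    log-ratio, which is the true LLR by definition.
    """
    edges = np.linspace(extrinsic.min(), extrinsic.max(), n_bins + 1)
    p1, _ = np.histogram(extrinsic[bits == 1], bins=edges, density=True)
    p0, _ = np.histogram(extrinsic[bits == 0], bins=edges, density=True)
    eps = 1e-12  # guard against log(0) in empty bins
    table = np.log(p1 + eps) - np.log(p0 + eps)

    def correct(e):
        idx = np.clip(np.digitize(e, edges) - 1, 0, n_bins - 1)
        return table[idx]

    return correct

# Toy check: raw outputs that are twice as confident as they should be
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 100_000)
raw = 2.0 * (2.0 * (2 * bits - 1) + rng.normal(0.0, 1.5, bits.size))
correct = true_llr_estimator(raw, bits)
print(correct(np.array([-4.0, 0.0, 4.0])))  # corrected LLR values
```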
22

Low complexity differential geometric computations with applications to human activity analysis

January 2012 (has links)
abstract: In this thesis, we consider the problem of fast and efficient indexing techniques for time sequences that evolve on manifold-valued spaces. Manifolds are a convenient way to work with complex features that often do not live in Euclidean spaces, but computing standard notions such as geodesic distance and mean can become very involved due to the underlying non-linearity of the space. As a result, a complex task such as manifold sequence matching would require a very large number of computations, making it hard to use in practice. We believe that one can devise smart approximation algorithms for several classes of such problems that take into account the geometry of the manifold and maintain the favorable properties of the exact approach. This problem has several applications in human activity discovery and recognition, where many features and representations are naturally studied in a non-Euclidean setting. We propose a novel solution to the problem of indexing manifold-valued sequences: an intrinsic approach that maps sequences to a symbolic representation, which enables the deployment of fast and accurate algorithms for activity recognition, motif discovery, and anomaly detection. Toward this end, we present generalizations of the key concepts of piecewise aggregation and symbolic approximation to the case of non-Euclidean manifolds. Experiments show that one can replace expensive geodesic computations with much faster symbolic computations with little loss of accuracy in activity recognition and discovery applications. The proposed methods are ideally suited for real-time systems and resource-constrained scenarios. / Dissertation/Thesis / M.S. Electrical Engineering 2012
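To make the manifold generalization concrete, here is a minimal sketch (hypothetical code; the unit sphere and the codebook are stand-ins, not the thesis's actual feature spaces) of piecewise aggregation followed by symbolic assignment, with segment means and nearest-symbol lookups computed intrinsically:

```python
import numpy as np

def sphere_mean(points):
    """Projected (extrinsic) mean on the unit sphere: a fast, standard
    approximation of the intrinsic Karcher mean when the points lie on
    a common hemisphere."""
    m = points.mean(axis=0)
    return m / np.linalg.norm(m)

def symbolize(seq, codebook, n_segments):
    """Piecewise aggregation + symbolic approximation on the sphere.

    seq: (T, d) unit vectors; codebook: (K, d) unit-vector symbols.
    Each segment is summarized by its mean and assigned the codeword
    minimizing the geodesic distance arccos(<mean, codeword>).
    """
    symbols = []
    for seg in np.array_split(seq, n_segments):
        m = sphere_mean(seg)
        geo = np.arccos(np.clip(codebook @ m, -1.0, 1.0))
        symbols.append(int(np.argmin(geo)))
    return symbols

# Toy usage: a random spherical trajectory and a 3-symbol codebook
rng = np.random.default_rng(1)
seq = rng.normal(size=(100, 3))
seq /= np.linalg.norm(seq, axis=1, keepdims=True)
print(symbolize(seq, codebook=np.eye(3), n_segments=5))
```

Sequence matching can then operate on the short symbol strings instead of the raw manifold data, which is where the speed-up comes from.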
23

Low Complexity Optical Flow Using Neighbor-Guided Semi-Global Matching

January 2017 (has links)
abstract: Many real-time vision applications require accurate estimation of optical flow. This problem is quite challenging due to extremely high computation and memory requirements. This thesis focuses on designing low-complexity dense optical flow algorithms. First, a new method for optical flow based on Semi-Global Matching (SGM), a popular dynamic programming algorithm for stereo vision, is presented. In SGM, the disparity of each pixel is calculated by aggregating local matching costs over the entire image to resolve local ambiguity in texture-less and occluded regions. The proposed method, Neighbor-Guided Semi-Global Matching (NG-fSGM), achieves significantly lower complexity than SGM by 1) operating on a subset of the search space that has been aggressively pruned based on neighboring pixels' information, 2) using a simple cost aggregation function, and 3) approximating the aggregated cost array and embedding pixel-wise matching cost and flow computation in the aggregation step. Evaluation on the Middlebury benchmark suite showed that, compared to a prior SGM extension for optical flow, the basic NG-fSGM provides robust optical flow with a 0.53% accuracy improvement, a 40x reduction in the number of operations, and a 6x reduction in memory size. To further reduce the complexity, a sparse-to-dense flow estimation method is proposed; it reduces the number of operations and the memory size by 68% and 47%, respectively, with only 0.42% accuracy degradation relative to the basic NG-fSGM. A parallel block-based version of NG-fSGM is also proposed, in which the image is divided into overlapping blocks that are processed in parallel to improve throughput, latency, and power efficiency. To minimize the amount of overlap among blocks with minimal effect on accuracy, temporal information is used to estimate a flow map that guides flow vector selection for pixels along block boundaries. The proposed block-based NG-fSGM achieves a significant reduction in complexity with only 0.51% accuracy degradation compared to the basic NG-fSGM. / Dissertation/Thesis / Masters Thesis Computer Science 2017
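The pruning idea in step 1) can be sketched as follows (hypothetical code illustrating the principle, not the thesis's exact implementation): each pixel's candidate flow set is seeded from already-processed neighbors' best vectors, their small perturbations, and a few random probes, instead of a full search window.

```python
import numpy as np

def neighbor_guided_candidates(best_flow, y, x, n_random=4, rng=None):
    """Build a pruned candidate set of flow vectors for pixel (y, x).

    best_flow: (H, W, 2) array of the best flow found so far. Causal
    neighbors seed the candidates, small perturbations let the flow
    refine locally, and random probes allow escape from bad guesses.
    """
    rng = rng or np.random.default_rng()
    cands = {(0, 0)}  # zero motion is always worth evaluating
    for dy, dx in [(-1, 0), (0, -1), (-1, -1), (-1, 1)]:
        ny, nx = y + dy, x + dx
        if 0 <= ny < best_flow.shape[0] and 0 <= nx < best_flow.shape[1]:
            fy, fx = best_flow[ny, nx]
            for py, px in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
                cands.add((int(fy) + py, int(fx) + px))
    for _ in range(n_random):
        cands.add(tuple(int(v) for v in rng.integers(-8, 9, size=2)))
    return sorted(cands)

print(neighbor_guided_candidates(np.zeros((4, 4, 2)), 2, 2))
```

Matching costs are then computed and aggregated only over this small set rather than the full search window, which is what drives the reported reduction in operations.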
24

  • Diversidade multiusuário em sistemas cooperativos com múltiplos relays: um esquema de seleção eficiente e de baixa complexidade / Multiuser Diversity in Cooperative Multi-relay Systems: An Efficient Low-Complexity Selection Scheme

Marco Antonio Beserra de Melo 17 August 2012 (has links)
Fundação Cearense de Apoio ao Desenvolvimento Científico e Tecnológico / In this work, we propose an efficient low-complexity selection scheme for multiuser multi-relay downlink cooperative networks comprised of one source node, L destination nodes, and N relay nodes. The proposed scheme first selects the best destination node based on the channel quality of the direct links, and then selects the relay that yields the best path from the source to the selected destination. Assuming both decode-and-forward and amplify-and-forward relaying strategies, the performance of the considered system is investigated. Closed-form expressions for the outage probability are obtained and validated by means of Monte Carlo simulations. Comparisons with the optimal selection scheme show that the performance of the proposed scheme is very close to that of the optimal one, with the advantage of lower complexity. Furthermore, in our analysis, the source node may be equipped with either a single antenna or M antennas. An asymptotic analysis reveals that, regardless of the relaying strategy employed, the diversity order is L+N for the single-antenna source case, whereas it equals ML+N for the multiple-antenna case. The effects of the number of relay and destination nodes on the system performance, and their influence on the best relay position, are examined. In addition, a trade-off between system performance and spectral efficiency is observed when multiple antennas are employed at the source node.
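The two-step selection rule maps directly to a few lines of code; the sketch below (hypothetical variable names; instantaneous SNRs assumed known at the selecting node) uses the min of the two hops as the usual bottleneck measure of a dual-hop path:

```python
import numpy as np

def select_destination_and_relay(snr_direct, snr_sr, snr_rd):
    """Two-step low-complexity selection (sketch of the scheme's logic).

    snr_direct: (L,)   direct source->destination SNRs
    snr_sr:     (N,)   source->relay SNRs
    snr_rd:     (N, L) relay->destination SNRs
    The end-to-end quality of a dual-hop path is limited by its weaker
    hop, hence the min() in the relay selection step.
    """
    dest = int(np.argmax(snr_direct))            # best direct link
    path = np.minimum(snr_sr, snr_rd[:, dest])   # per-relay bottleneck
    relay = int(np.argmax(path))
    return dest, relay

rng = np.random.default_rng(2)
L, N = 4, 3  # Rayleigh fading -> exponentially distributed SNRs
dest, relay = select_destination_and_relay(
    rng.exponential(1.0, L), rng.exponential(1.0, N),
    rng.exponential(1.0, (N, L)))
print(f"destination {dest}, relay {relay}")
```

The cost is O(L + N) comparisons, versus the O(LN) exhaustive search over all destination-relay pairs that a jointly optimal selection would require.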
25

Low Complexity Precoder and Receiver Design for Massive MIMO Systems: A Large System Analysis using Random Matrix Theory

Sifaou, Houssem 05 1900 (has links)
Massive MIMO systems are a promising technology for next generations of wireless communication networks. Realizing the attractive merits promised by massive MIMO requires advanced linear precoding and receiving techniques to mitigate the interference in downlink and uplink transmissions. This work considers precoder and receiver design in massive MIMO systems. We first consider the design of the linear precoder and receiver that maximize the minimum signal-to-interference-plus-noise ratio (SINR) subject to a given power constraint. The analysis is carried out in the asymptotic regime in which the number of BS antennas and the number of users grow large with a bounded ratio. This allows us to leverage tools from random matrix theory to approximate the parameters of the optimal linear precoder and receiver by their deterministic approximations. Such a result is of practical interest, as it provides a handier way to implement the optimal precoder and receiver. To further reduce the complexity, we propose to apply the truncated polynomial expansion (TPE) concept on a per-user basis to approximate the inverse of the large matrices that appear in the expressions of the optimal linear transceivers. Using tools from random matrix theory, we determine deterministic approximations of the SINR and the transmit power in the asymptotic regime; the optimal per-user weight coefficients that solve the max-min SINR problem are then derived. Simulation results show that the proposed precoder and receiver provide close-to-optimal performance while significantly reducing the computational complexity. In the second part of this work, the per-user TPE technique is applied to the optimal linear precoding that minimizes the transmit power while satisfying a set of target SINR constraints, a problem that is receiving increasing interest with the emergence of green cellular networks. Closed-form expressions for the optimal parameters of the proposed low-complexity power-minimization precoding are derived. Numerical results show that it approximates well the performance of the optimal linear precoding while being more practical to implement.
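The TPE step admits a compact illustration (a toy sketch under simplifying assumptions, not the thesis's per-user derivation): the matrix inverse that dominates the cost of, e.g., a regularized zero-forcing precoder is replaced by a low-degree matrix polynomial, so only matrix-vector products are needed.

```python
import numpy as np

def tpe_solve(A, b, degree=8):
    """Approximate x = A^{-1} b by a truncated polynomial expansion.

    Uses the Neumann series A^{-1} = alpha * sum_k (I - alpha*A)^k b,
    which converges when the eigenvalues of alpha*A lie in (0, 2);
    alpha is chosen from the spectral norm to guarantee this for SPD A.
    """
    alpha = 1.0 / np.linalg.norm(A, ord=2)  # safe step size
    x = np.zeros_like(b)
    r = b.copy()                 # r holds (I - alpha*A)^k b
    for _ in range(degree + 1):
        x += alpha * r
        r = r - alpha * (A @ r)
    return x

# Toy check on an SPD matrix of the form H^H H + reg*I, as arises
# in regularized zero-forcing precoding
rng = np.random.default_rng(3)
H = rng.normal(size=(16, 8))
A = H.T @ H + 0.5 * np.eye(8)
b = rng.normal(size=8)
print(np.linalg.norm(tpe_solve(A, b, degree=30) - np.linalg.solve(A, b)))
```

In the max-min and power-minimization problems above, random matrix theory supplies the polynomial coefficients deterministically; the Neumann weights used here are simply the most basic valid choice.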
26

On the Performance of JPEG2000 and Principal Component Analysis in Hyperspectral Image Compression

Zhu, Wei 05 May 2007 (has links)
Because of the vast data volume of hyperspectral imagery, compression is a necessary process for hyperspectral data transmission, storage, and analysis. Three-dimensional discrete wavelet transform (DWT) based algorithms are of particular interest due to their excellent rate-distortion performance. This thesis investigates several issues surrounding efficient compression using JPEG2000. Firstly, the rate-distortion performance is studied when Principal Component Analysis (PCA) replaces the DWT for spectral decorrelation, with a focus on the use of a subset of principal components (PCs) rather than all of them. Secondly, the algorithms are evaluated in terms of data analysis performance, such as anomaly detection and linear unmixing, which is directly related to the useful information preserved. Thirdly, the performance of compressing radiance and reflectance data with or without bad-band removal is compared, and instructive suggestions are provided for practical applications. Finally, low-complexity PCA algorithms are presented to reduce the computational complexity and facilitate future hardware design.
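As a sketch of PCA-based spectral decorrelation with a PC subset (illustrative code; the thesis pairs this with JPEG2000 for the spatial coding, which is omitted here):

```python
import numpy as np

def pca_spectral_decorrelate(cube, k):
    """Spectral decorrelation of a hyperspectral cube, keeping k PCs.

    cube: (rows, cols, bands) array. Each pixel's spectrum is projected
    onto the top-k principal components; the k component images would
    then be handed to a 2-D coder such as JPEG2000.
    Returns the component images plus what is needed to reconstruct.
    """
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)      # small (b x b) problem
    w, V = np.linalg.eigh(cov)
    top = V[:, np.argsort(w)[::-1][:k]]       # top-k eigenvectors
    pcs = (X - mean) @ top                    # (pixels, k) scores
    return pcs.reshape(r, c, k), top, mean

def reconstruct(pcs, top, mean):
    r, c, k = pcs.shape
    X = pcs.reshape(-1, k) @ top.T + mean
    return X.reshape(r, c, top.shape[0])

# Toy usage on a synthetic cube with 64 bands, keeping 8 components
cube = np.random.default_rng(4).normal(size=(32, 32, 64))
pcs, top, mean = pca_spectral_decorrelate(cube, k=8)
print(pcs.shape, reconstruct(pcs, top, mean).shape)
```

Keeping k well below the number of bands concentrates most of the signal variance in a few component images while shrinking the data volume handed to the 2-D coder.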
27

FPGA realization of low-register systolic all-one-polynomial multipliers over GF(2^m) and their applications in trinomial multipliers

Chen, Pingxiuqi 08 June 2016 (has links)
No description available.
28

Design of Efficient Resource Allocation Algorithms for Wireless Networks: High Throughput, Small Delay, and Low Complexity

Ji, Bo 19 December 2012 (has links)
No description available.
29

Implementation of a Distributed Video Codec

Isik, Cem Vedat 01 February 2008 (has links) (PDF)
Current interframe video compression standards such as MPEG-4 and H.264 require a high-complexity encoder for predictive coding to exploit the similarities among successive video frames. This requirement is acceptable when the video sequence is encoded once and decoded many times. However, some emerging applications, such as video-based sensor networks, power-aware surveillance, and mobile video communication systems, require the computational complexity to be shifted from the encoder to the decoder. Distributed Video Coding (DVC) is a coding paradigm based on two information-theoretic results, the Slepian-Wolf and Wyner-Ziv theorems, which allow source statistics to be exploited at the decoder only. This architecture therefore enables very simple encoders to be used in video coding. Wyner-Ziv video coding is a particular case of DVC that deals with lossy source coding where side information is available only at the decoder. In this thesis, we implemented a DVC codec based on the DISCOVER (DIStributed COding for Video sERvices) project and carried out a detailed analysis of each block. Several algorithms were implemented for each block, and the results are compared in terms of rate-distortion performance. The implemented architecture is intended to serve as a testbed for future studies.
30

Codage d'images avec et sans pertes à basse complexité et basé contenu / Low-complexity content-based lossy and lossless image coding

Liu, Yi 18 March 2015 (has links)
This doctoral research project aims at designing an improved version of the still-image codec called LAR (Locally Adaptive Resolution), in terms of both compression performance and complexity. Several image compression standards have been proposed and are widely used in multimedia applications, but research continues toward higher coding quality and/or lower computational cost. JPEG was standardized twenty years ago, yet it remains the most widely used compression format today. Despite its better coding efficiency, the adoption of JPEG 2000 is limited by its higher computational cost compared to JPEG. In 2008, the JPEG committee announced a Call for Advanced Image Coding (AIC), aiming to standardize technologies going beyond the existing JPEG standards. The LAR codec was proposed as one response to this call. The LAR framework combines compression efficiency with a content-based representation, and supports both lossy and lossless coding within the same structure. However, at the beginning of this study, the LAR codec did not implement rate-distortion optimization (RDO), a shortcoming that was detrimental to LAR during the AIC evaluation step. Thus, in this work, we first characterize the impact of the codec's main parameters on compression efficiency, and then construct RDO models that configure the LAR parameters to achieve optimal or near-optimal coding efficiency. Further, based on these RDO models, a "quality constraint" method is introduced to encode an image at a given target MSE/PSNR. The accuracy of the proposed technique, estimated by the ratio between the error variance and the setpoint, is about 10%. In addition, subjective quality is taken into consideration and the RDO models are applied locally in the image rather than globally. The perceptual quality is visibly improved, with a significant gain measured by the objective quality metric SSIM (structural similarity). Aiming at a low-complexity and efficient image codec, a new lossless coding scheme is also proposed within the LAR framework. In this context, all the coding steps are modified for a better final compression ratio, and a new classification module is introduced to decrease the entropy of the prediction errors. Experiments show that this lossless codec achieves compression ratios equivalent to those of JPEG 2000, while saving on average 76% of the encoding and decoding time.
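The quality-constraint idea can be illustrated independently of LAR's internal models with a generic feedback loop (a sketch only: bisection with trial encodings stands in for LAR's model-based parameter prediction, and the toy quantizer below is not the LAR coder):

```python
import numpy as np

def encode(image, q):
    """Stand-in for a real coder: coarser quantization as q grows."""
    step = 2.0 ** q
    return np.round(image / step) * step

def psnr(a, b, peak=255.0):
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def encode_at_target_psnr(image, target_db, tol=0.25):
    """Bisection on the quality parameter to hit a PSNR setpoint.

    This works because PSNR decreases monotonically with the
    quantization step. LAR instead predicts the parameters directly
    from its RDO models, avoiding the repeated trial encodings here.
    """
    lo, hi = 0.0, 8.0           # quality-parameter search interval
    for _ in range(30):
        q = 0.5 * (lo + hi)
        got = psnr(image, encode(image, q))
        if abs(got - target_db) < tol:
            break
        if got > target_db:     # too good -> quantize more coarsely
            lo = q
        else:
            hi = q
    return q, got

img = np.random.default_rng(5).uniform(0, 255, (64, 64))
print(encode_at_target_psnr(img, target_db=40.0))
```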
