81

Algorithmes itératifs à faible complexité pour le codage de canal et le compressed sensing / Low Complexity Iterative Algorithms for Channel Coding and Compressed Sensing

Danjean, Ludovic 29 November 2012 (has links)
Iterative algorithms are now widely used in all areas of signal processing and digital communications. In modern communication systems, iterative algorithms are used for decoding low-density parity-check (LDPC) codes, a popular class of error-correcting codes now widely deployed for their exceptional error-rate performance. In the more recent field of compressed sensing, iterative algorithms are used as reconstruction methods to recover a sparse signal from a set of linear measurements. This thesis deals primarily with the development of low-complexity iterative algorithms for these two fields: the design of low-complexity decoding algorithms for LDPC codes, and the development and analysis of a low-complexity reconstruction algorithm, the Interval-Passing Algorithm (IPA), for compressed sensing. In the first part of this thesis, we address decoding algorithms for LDPC codes. It is well known that, in spite of their excellent performance, LDPC codes suffer from an error-floor phenomenon in which traditional iterative decoders based on belief propagation (BP) fail for certain low-noise configurations. Recently, a novel class of decoders called finite alphabet iterative decoders (FAIDs) was proposed that can surpass BP in the error floor at much lower complexity. In this work, we focus on the problem of selecting particularly good FAIDs for column-weight-three codes over the binary symmetric channel (BSC). Traditional methods for decoder selection use asymptotic techniques such as density evolution, which do not guarantee good performance on finite-length codes, especially in the error-floor region. Instead, we propose a selection methodology that relies on knowledge of potentially harmful topologies that could be present in a code, using the concept of noisy trapping sets. Numerical results show that FAIDs selected with this methodology outperform BP in the error floor on several codes. In the second part of this thesis, we address iterative reconstruction algorithms for compressed sensing. Iterative algorithms have been proposed for compressed sensing in order to reduce the complexity of reconstruction by linear programming (LP). In this work, we modify and analyze a low-complexity reconstruction algorithm, the IPA, which uses sparse matrices as measurement matrices. In analogy with the analysis of decoding algorithms in coding theory, we analyze the failures of the IPA and link them to the stopping sets of the binary representation of the sparse measurement matrices used. The performance of the IPA makes it a good trade-off between the complex L1-minimization reconstruction and the very simple verification decoding.
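The decoding algorithms discussed in this entry are all iterative message-passing schemes over a sparse parity-check matrix. As a rough, self-contained illustration of the hard-decision end of that family (not the FAIDs or the IPA studied in the thesis), the sketch below implements a generic bit-flipping decoder for the binary symmetric channel; the (7, 4) Hamming matrix in the example is only a toy stand-in for a real LDPC code.

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    """Hard-decision bit-flipping decoding of a binary linear code.

    H: (m, n) parity-check matrix with entries in {0, 1}.
    y: length-n hard-decision word received over a BSC.
    """
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H.dot(x) % 2                 # which parity checks fail
        if not syndrome.any():
            break                               # all checks satisfied
        votes = H.T.dot(syndrome)               # failed checks touching each bit
        x = (x + (votes == votes.max())) % 2    # flip the most-suspect bits
    return x

# toy example: (7, 4) Hamming code, single bit flipped by the channel
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)
received[2] ^= 1
print(bit_flip_decode(H, received))             # recovers the all-zero codeword
```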
82

Ανάλυση επιπτώσεων αριθμητικών προσεγγίσεων σε επαναληπτικούς αποκωδικοποιητές για γραμμικούς κώδικες διόρθωσης σφαλμάτων / Analysis of the Impact of Numerical Approximations on Iterative Decoders for Linear Error-Correcting Codes

Αστάρας, Στέφανος 21 February 2015 (has links)
In this thesis, we study the algorithms used for LDPC decoding, with emphasis on the codes of the 802.11n standard. We address the difficulties these algorithms pose for hardware implementation, especially those related to arithmetic computations, and propose practical solutions. Leveraging the results of extensive simulations, we determine the optimal parameters for the proposed implementations.
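The arithmetic simplifications such hardware decoders rely on typically combine coarse quantization of the channel LLRs with an approximate check-node rule such as normalized min-sum. The sketch below shows both ideas in isolation; the bit width, quantization step and scaling factor are illustrative placeholders, not the parameters selected in this thesis.

```python
import numpy as np

def quantize_llr(llr, n_bits=4, step=0.5):
    """Uniform fixed-point quantization of log-likelihood ratios."""
    max_mag = step * (2 ** (n_bits - 1) - 1)
    return np.clip(np.round(llr / step) * step, -max_mag, max_mag)

def check_node_min_sum(incoming, scale=0.75):
    """Normalized min-sum update at one check node.

    Replaces the exact sum-product (tanh-rule) update with a scaled
    sign/minimum rule; returns one extrinsic message per incoming edge.
    """
    incoming = np.asarray(incoming, dtype=float)
    signs = np.sign(incoming)
    signs[signs == 0] = 1.0
    total_sign = np.prod(signs)
    mags = np.abs(incoming)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]      # two smallest magnitudes
    out = np.full_like(mags, scale * min1) * total_sign * signs
    out[order[0]] = scale * min2 * total_sign * signs[order[0]]
    return out

msgs = quantize_llr(np.array([2.3, -0.4, 1.1, -3.7]))
print(check_node_min_sum(msgs))
```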
83

Ανάλυση, σχεδιασμός και υλοποίηση κωδίκων διόρθωσης λαθών για τηλεπικοινωνιακές εφαρμογές υψηλών ταχυτήτων / Analysis, Design and Implementation of Error-Correcting Codes for High-Speed Telecommunication Applications

Αγγελόπουλος, Γεώργιος 20 October 2009 (has links)
Almost all modern telecommunication systems designed for high-data-rate transmission have adopted error-correcting codes to improve reliability and reduce the required transmission power. One family of such codes with extremely good performance is the low-density parity-check (LDPC) codes. These are linear block codes with performance close to the theoretical Shannon limit, and the inherent parallelism of their decoding procedure makes them suitable for hardware implementation. In this thesis, we study the particular characteristics and parameters of these codes in order to understand their remarkable correcting capability. We then select a special category of LDPC codes whose parity-check matrices are specifically designed to ease hardware implementation, and design a high-throughput decoder for it. More specifically, we implement in VHDL a decoder for the rate-1/2, block-length-576-bit code of the WiMAX IEEE 802.16e standard, with the main goal of achieving very high throughput. For scheduling the transmission of messages between the nodes of the circuit we use two-phase scheduling, and we propose two modifications that accelerate the decoding process, resulting in 25% and 50% improvements in the time required for one iteration, with zero and a relatively small increase in chip area respectively. The design is fully synthesizable and its correct operation has been verified in logic simulation. Xilinx and Mentor Graphics tools were used during the design process.
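Because the scheduling modifications above pay off as throughput, it helps to see how the per-iteration time enters the throughput of a block decoder. The sketch below is only that arithmetic; the clock frequency, cycle counts and iteration count are invented placeholders, not figures measured in this work.

```python
def decoder_throughput(block_length_bits, f_clk_hz, cycles_per_iteration, iterations):
    """Rough throughput estimate for an iterative block decoder:
    one codeword is produced every iterations * cycles_per_iteration cycles."""
    return block_length_bits * f_clk_hz / (iterations * cycles_per_iteration)

# hypothetical numbers: a 576-bit block, 200 MHz clock, 10 iterations
baseline = decoder_throughput(576, 200e6, 24, 10)
# halving the per-iteration time doubles throughput at the same iteration count
improved = decoder_throughput(576, 200e6, 12, 10)
print(f"{baseline / 1e6:.0f} Mb/s -> {improved / 1e6:.0f} Mb/s")
```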
84

Αρχιτεκτονικές διόρθωσης λαθών βασισμένες σε κώδικες BCH / Error-Correction Architectures Based on BCH Codes

Σπουρλής, Γεώργιος 19 July 2012 (has links)
In the modern era, the need for data reliability in new telecommunication applications has driven the development and optimization of so-called error-correcting codes. These are systems capable of detecting and correcting errors introduced into the transmitted information, mainly over telecommunication networks, by environmental noise and, more specifically, by the transmission channel. There are several categories of such codes, depending on the structure and nature of the algorithms they use; the two main ones are convolutional codes and linear block codes, the latter being the focus of this work. The two codes used in this thesis, LDPC and BCH, both belong to the class of linear block codes. The first aim of this thesis is the design and implementation of a parametric encoding and decoding system for binary BCH codes of various sizes. Besides parameterization, emphasis was placed on low system complexity, high processing throughput and support for shortening. In a second phase, this BCH code was connected to an existing LDPC code and an additive white Gaussian noise (AWGN) channel, both designed in the context of other theses, in order to study the error-correction behavior of the overall system and, in particular, the reduction of the error-floor phenomenon observed with the LDPC code. The resource requirements of the system and the achieved processing throughput were also studied. The main BCH code parameters that can be varied are the codeword length and the achieved correction capability.
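For reference, systematic BCH encoding of the kind this parametric encoder generalizes amounts to polynomial division over GF(2). The sketch below encodes the classical binary (15, 7) double-error-correcting BCH code with generator g(x) = x^8 + x^7 + x^6 + x^4 + 1; it is a fixed-size toy and omits the parameterization and shortening support described above.

```python
def bch_15_7_encode(message_bits):
    """Systematic encoder for the binary (15, 7) BCH code (t = 2).

    message_bits: 7 bits, highest-degree coefficient first.
    Returns the 15-bit codeword: the message followed by 8 parity bits.
    """
    g = [1, 1, 1, 0, 1, 0, 0, 0, 1]            # g(x) = x^8 + x^7 + x^6 + x^4 + 1
    remainder = list(message_bits) + [0] * 8   # message shifted by x^8
    for i in range(len(message_bits)):         # long division over GF(2)
        if remainder[i] == 1:
            for j, gj in enumerate(g):
                remainder[i + j] ^= gj
    return list(message_bits) + remainder[-8:]

print(bch_15_7_encode([1, 0, 1, 1, 0, 0, 1]))  # 7 message bits + 8 parity bits
```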
85

Αποκωδικοποιητής μέγιστης πιθανοφάνειας για κώδικες LDPC και υλοποίηση σε FPGA / Maximum-Likelihood Decoder for LDPC Codes and FPGA Implementation

Μέρμιγκας, Παναγιώτης 07 June 2013 (has links)
In the first part of this thesis, the basic principles of coding theory and communication systems are introduced. To correct errors when transmitting over a noisy channel, channel coding with linear block codes is applied, and more specifically with low-density parity-check (LDPC) codes. The mathematical description of these codes is given and the relevant definitions and theorems are stated, together with the maximum-likelihood (ML) criterion on which the decoder developed here is based. The second part covers the simulation of the ML decoder in software and its hardware implementation on an FPGA, for the cases where either soft or hard information is used as the decoder's input. The decoder architecture and the design methodology are presented, along with design improvements that reduce the required chip area. The measured results of the two implementations are compared against an iterative decoder, bit-error-rate plots are produced, and the corresponding conclusions are drawn.
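Maximum-likelihood decoding of a linear block code can always be done by exhaustive search over the 2^k codewords, which is exactly what makes it a reference point rather than a practical competitor to iterative decoders. The sketch below shows that brute-force form with a soft (LLR) input, using a toy (7, 4) Hamming code; the soft values are arbitrary examples.

```python
import itertools
import numpy as np

def ml_decode(G, received_llr):
    """Brute-force ML decoding of a binary linear block code.

    G: (k, n) generator matrix over GF(2).
    received_llr: length-n soft input; positive values favour bit 0.
    Returns the codeword maximizing the correlation between its
    +/-1 (BPSK) image and the received LLRs.
    """
    k, _ = G.shape
    best, best_metric = None, -np.inf
    for msg in itertools.product([0, 1], repeat=k):
        cw = np.dot(msg, G) % 2
        metric = np.dot(1 - 2 * cw, received_llr)   # BPSK mapping 0 -> +1, 1 -> -1
        if metric > best_metric:
            best, best_metric = cw, metric
    return best

G = np.array([[1, 0, 0, 0, 1, 1, 0],     # systematic (7, 4) Hamming generator
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
llr = np.array([2.1, -1.3, 0.4, 3.0, 1.8, -0.2, 0.9])
print(ml_decode(G, llr))                 # the ML codeword for this soft input
```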
86

Uma proposta de método para melhoria de desempenho do codificador x264 baseada na análise do acesso ao barramento externo de memória / A Method for Improving the Performance of the x264 Encoder Based on an Analysis of External Memory Bus Accesses

Duma, Luiz Henrique 26 August 2011 (has links)
Digital video coding is essential for producing video for the Internet, TV channels and other media. Coding makes better use of storage, transmission and reception resources such as bandwidth. In embedded systems, such as the video cameras of mobile phones, limited resources constrain encoder performance. This work analyzes techniques for reducing external memory (RAM) accesses, specifically for the x264 encoder, and systematizes the use of software profiling tools with a focus on the hardware performance counters (HPCs) available in many modern processors. Through profiling and analysis of the encoder's performance counters, it was possible to establish a data-analysis method that guides the encoder implementation towards better performance. The results show an improvement of 16% to 18% in encoding time over a non-optimized encoder, while maintaining the same video quality as measured by objective metrics.
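The profiling workflow described above can be approximated with the standard Linux `perf` tool, which exposes the hardware performance counters the study relies on. The harness below is a rough, hypothetical sketch: the x264 command line, the input file name and the choice of generic cache events are placeholders, not the counters or test sequences used in this dissertation, and the CSV layout of `perf stat -x` output can vary between versions.

```python
import subprocess

def measure_cache_behaviour(cmd):
    """Run a command under `perf stat` and return its raw counter lines.

    perf writes its statistics to stderr, one counter per line when the
    -x field separator is used; callers should inspect the lines rather
    than rely on a fixed column layout.
    """
    perf_cmd = ["perf", "stat", "-x", ",",
                "-e", "cache-references,cache-misses", "--"] + cmd
    result = subprocess.run(perf_cmd, capture_output=True, text=True)
    return [line for line in result.stderr.splitlines() if line.strip()]

# hypothetical invocation: encode a test clip and discard the bitstream
for line in measure_cache_behaviour(
        ["x264", "--preset", "veryfast", "-o", "/dev/null", "input.y4m"]):
    print(line)
```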
87

Uma abordagem computacional para a análise de sequências de DNA por meio dos códigos corretores de erros / A computational approach for the analysis of DNA sequences using error correcting codes

Pereira, Diogo Guilherme, 1981- 08 January 2014 (has links)
Advisor: Reginaldo Palazzo Júnior / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: The benefits provided by the application of information theory to the analysis of genetic coding processes are evident. This work proposes the development of algorithms, and their computational implementation, for analyzing DNA sequences by means of BCH codes. The first program computes several generator polynomials that are used by the other programs. The second program uses these generator polynomials to analyze DNA sequences and to identify codewords in the form of new DNA sequences. The third program, a novel contribution, uses both the generator polynomials and the codewords to perform a decoding process aimed at tracking the mutations that may occur in DNA sequences. / Master's in Electrical Engineering, Telecommunications and Telematics
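One way to make the "codewords in the form of DNA sequences" idea concrete is to label nucleotides with bits and test whether the resulting binary word has a zero syndrome with respect to a BCH generator polynomial. The sketch below does this for the (15, 7) BCH code under a purine/pyrimidine labeling (A, G -> 0; C, T -> 1); both the labeling and the code size are illustrative assumptions, not the mappings or code lengths adopted in this dissertation.

```python
def gf2_remainder(bits, g):
    """Remainder of the polynomial given by `bits` modulo g(x) over GF(2)."""
    r = list(bits)
    for i in range(len(r) - len(g) + 1):
        if r[i]:
            for j, gj in enumerate(g):
                r[i + j] ^= gj
    return r[-(len(g) - 1):]

def is_bch_15_7_codeword(dna):
    """Check whether a 15-nucleotide sequence maps onto a codeword of the
    binary (15, 7) BCH code under an A,G -> 0 / C,T -> 1 labeling."""
    label = {"A": 0, "G": 0, "C": 1, "T": 1}
    bits = [label[base] for base in dna.upper()]
    g = [1, 1, 1, 0, 1, 0, 0, 0, 1]        # g(x) = x^8 + x^7 + x^6 + x^4 + 1
    return not any(gf2_remainder(bits, g))

print(is_bch_15_7_codeword("ACGTACGTACGTACG"))   # True only if the syndrome is zero
```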
88

Reliable Communications under Limited Knowledge of the Channel

Yazdani, Raman Unknown Date
No description available.
89

Polymorphic ASIC: For Video Decoding

Adarsha Rao, S J January 2013 (has links) (PDF)
Video applications are becoming ubiquitous due to an explosion in the number of devices with video capture and display capabilities. Traditionally, video applications have been implemented on a variety of devices, with each device targeting a specific application. Advances in technology, however, have created a need to support multiple applications on a single device such as a smartphone or tablet. Such convergence of applications necessitates interoperability among applications, performance that scales to the requirements of different applications, and a high degree of reconfigurability to accommodate the rapid evolution of application features. In addition, the low-power requirements of many video applications are very stringent. Conventional custom hardware implementations deliver high performance at low power consumption, while recent MPSoC implementations enable a high degree of interoperability and support application evolution. In this thesis, we combine the best features of the custom hardware and MPSoC approaches to design a Polymorphic ASIC: an integrated circuit designed to meet the requirements of several applications belonging to a particular domain. A Polymorphic ASIC consists of a fabric of computation, storage and communication resources from which applications are composed dynamically. Although video applications differ widely in their internal details of operation, at the heart of almost every one is a video codec (encoder and decoder), and the requirements of scalability, high performance and low power consumption are particularly stringent for video decoding. This thesis therefore focuses mainly on the architectural design of a Polymorphic ASIC for video decoding. We present a unified software and hardware architecture (USHA) for the Polymorphic ASIC. USHA is a tiled architecture that uses loosely coupled processor tiles and hardware tiles, which are software programmable and hardware reconfigurable respectively. The distinctive feature of the Polymorphic ASIC is the static partitioning of the application combined with dynamic mapping of application processes onto the computational tiles: depending on the application scenario, a process may be mapped onto either a hardware or a processor tile. The Polymorphic ASIC incorporates a network-on-chip (NoC) to achieve flexible communication across the tiles. Formulating a programming framework for the Polymorphic ASIC requires an implementation model that captures both the structure of video decoder applications and the properties of the architecture. We derive an implementation model based on a combination of parametric polyhedral process networks, stream-based functions and windowed dataflow models of computation. The implementation model leads to a process-network-oriented compilation flow that achieves realization-agnostic application partitioning and enables seamless migration across uniprocessor, multiprocessor, semi-hardware and full-hardware configurations of a video decoder. The thesis also presents an application QoS-aware scheduler that selects the decoder configuration that best meets the application's performance requirements, thereby enabling dynamic performance scaling. The memory hierarchy of the Polymorphic ASIC makes use of an application-specific cache. Through a combined analysis of miss rate and external memory bandwidth, we show that the degradation in decoder performance due to memory stall cycles depends on the properties of the video being decoded as well as on the behavior of the external memory interface. Based on this observation, we present a reconfigurable 2-D cache architecture that can adjust its parameters in accordance with the characteristics of the video stream being decoded. We validate the Polymorphic ASIC using a proof-of-concept implementation on an FPGA. The performance of an H.264 decoder on the Polymorphic ASIC is evaluated for uniprocessor, multiprocessor, hardware-accelerated and full-hardware configurations. The scaling in performance delivered by these configurations shows that the Polymorphic ASIC enables the application to achieve super-linear speedups [1]. The experimental results show that different implementations of an H.264 video decoder on the Polymorphic ASIC can deliver performance comparable to a wide spectrum of devices, ranging from embedded processors such as the ARM9 to MPSoCs such as the IBM Cell. We also present the energy consumption of various video decoder configurations on the Polymorphic ASIC, together with an application-to-configuration mapping aimed at minimizing the overall energy consumption.
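The QoS-aware scheduling described above amounts to choosing, at run time, the cheapest decoder configuration that still meets the application's frame-rate target. The sketch below illustrates only that selection rule; the configuration names mirror the four decoder configurations evaluated in the thesis, but the throughput and energy numbers are invented placeholders, not measured results.

```python
from dataclasses import dataclass

@dataclass
class DecoderConfig:
    name: str
    throughput_fps: float        # sustained decode rate for the target stream
    energy_per_frame_mj: float   # estimated energy per decoded frame

def select_configuration(configs, required_fps):
    """Pick the lowest-energy configuration that meets the QoS target,
    falling back to the fastest one if none is fast enough."""
    feasible = [c for c in configs if c.throughput_fps >= required_fps]
    if not feasible:
        return max(configs, key=lambda c: c.throughput_fps)
    return min(feasible, key=lambda c: c.energy_per_frame_mj)

configs = [
    DecoderConfig("uniprocessor",   12.0, 0.8),
    DecoderConfig("multiprocessor", 28.0, 1.1),
    DecoderConfig("hw-accelerated", 45.0, 0.6),
    DecoderConfig("full-hardware",  75.0, 0.4),
]
print(select_configuration(configs, required_fps=30).name)
```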
