151

Détection et conciliation d'erreurs intégrées dans un décodeur vidéo : utilisation des techniques d'analyse statistique / Error detection and concealment integrated in a video decoder : using technics of statistical analysis

Ekobo Akoa, Brice 31 October 2014 (has links)
This report presents the research conducted during my PhD, which aims to develop efficient algorithms for correcting errors in a digital image decoding process and to ensure a high level of visual quality in the decoded images. Statistical analysis techniques are studied to detect and conceal the artefacts, and a quality-control loop is implemented to monitor and correct the visual quality of the image. The manuscript consists of six chapters. The first chapter presents the principal state-of-the-art image quality assessment methods and introduces our proposal. This proposal is a video quality measurement tool (VQMT) that uses the human visual system to indicate the visual quality of a video (or an image). Three statistical learning models of the VQMT are designed. They are based on classification, artificial neural networks and non-linear regression, and are developed in the second, third and fourth chapters respectively. The fifth chapter presents the principal state-of-the-art image error concealment techniques. The sixth and last chapter uses the results of the four preceding chapters to design an algorithm for error concealment in images. The demonstration considers blur and noise artefacts and is based on the Wiener filter, optimized on the criterion of the local linear minimum mean square error. The results are presented and discussed to show how the VQMT improves the performance of the implemented algorithm for error concealment.
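As a rough illustration of the concealment step described above, the following Python sketch applies a locally adaptive Wiener (local linear MMSE) filter to a noisy grayscale frame. The window size and noise-variance estimate are illustrative assumptions, not the parameters used in the thesis.

```python
# A minimal sketch (not the thesis implementation) of a locally adaptive
# Wiener filter based on the local linear MMSE criterion.
import numpy as np
from scipy.ndimage import uniform_filter

def local_wiener(img, window=5, noise_var=None):
    """Denoise `img` with a locally adaptive Wiener (LMMSE) filter."""
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, window)
    local_sqr_mean = uniform_filter(img * img, window)
    local_var = local_sqr_mean - local_mean ** 2
    if noise_var is None:                       # crude estimate: average local variance
        noise_var = local_var.mean()
    gain = np.maximum(local_var - noise_var, 0.0) / np.maximum(local_var, 1e-12)
    return local_mean + gain * (img - local_mean)

# Usage: restored = local_wiener(noisy_frame, window=5)
```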
152

MPEG Z/Alpha and high-resolution MPEG / MPEG Z/Alpha och högupplösande MPEG-video

Ziegler, Gernot January 2003 (has links)
The progression of technical development has yielded practicable camera systems for the acquisition of so-called depth maps, images with depth information. Images and movies with depth information open the door for new types of applications in the area of computer graphics and computer vision, which implies that they will need to be processed in ever-increasing volumes. Increased depth-image processing puts forth the demand for a standardized data format for the exchange of image data with depth information, both still and animated; software to convert acquired depth data to such video formats is highly necessary. This diploma thesis sheds light on many of the issues that come with this new field, spanning from data acquisition over readily available software for data encoding to possible future applications. Further, a software architecture fulfilling all of the mentioned demands is presented. The encoder comprises a collection of UNIX programs that generate MPEG Z/Alpha, an MPEG-2 based video format. MPEG Z/Alpha contains, besides MPEG-2's standard data streams, one extra data stream to store image depth information (and transparency). The decoder suite, called TexMPEG, is a C library for the in-memory decompression of MPEG Z/Alpha. Much effort has been put into video decoder parallelization, and TexMPEG is now capable of decoding multiple video streams, not only internally in parallel, but also with inherent frame synchronization between MPEG videos decoded in parallel.
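The frame synchronization between parallel decoders mentioned above can be sketched as follows. This Python snippet only illustrates the lockstep idea with a barrier and a placeholder decode_frame() function; it does not reflect TexMPEG's actual C API.

```python
# A minimal sketch of frame-synchronized parallel decoding: each decoder
# thread produces one frame per step and waits at a barrier, so frame n of
# every stream is available before any stream starts frame n+1.
import threading

NUM_STREAMS = 2
NUM_FRAMES = 10
barrier = threading.Barrier(NUM_STREAMS)
frames = [[None] * NUM_FRAMES for _ in range(NUM_STREAMS)]

def decode_frame(stream_id, n):
    return f"stream{stream_id}-frame{n}"        # placeholder for real bitstream decoding

def decoder_worker(stream_id):
    for n in range(NUM_FRAMES):
        frames[stream_id][n] = decode_frame(stream_id, n)
        barrier.wait()                          # keep all streams in frame lockstep

threads = [threading.Thread(target=decoder_worker, args=(s,)) for s in range(NUM_STREAMS)]
for t in threads: t.start()
for t in threads: t.join()
```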
153

Performance Analysis of JavaScript

Smedberg, Fredrik January 2010 (has links)
In the last decade, web browsers have seen a remarkable increase in performance, especially in their JavaScript engines. JavaScript has over the years gone from being a slow and rather limited language to being feature-rich and fast. Its speed can be around the same as, or half of, comparable code written in C++, but this speed depends directly on the choice of web browser, and the best performance is seen in browsers using JIT compilation techniques. Even though the language has seen a dramatic increase in performance, there are still major problems regarding memory usage. JavaScript applications typically consume 3-4 times more memory than similar applications written in C++. Many browser vendors, like Opera Software, acknowledge this and are currently trying to optimize their memory usage, so this issue will hopefully be non-existent in the near future. Because the majority of scientific papers written about JavaScript only compare performance using the industry benchmarks SunSpider and V8, this thesis widens the scope. Those benchmarks give no information about how JavaScript stands in comparison to C#, C++ and other popular languages. To be able to compare that, I have implemented a GIF decoder, an XML parser and various elementary tests in both JavaScript and C++ to compare how far apart the languages are in terms of speed, memory usage and responsiveness.
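The elementary tests referred to above are essentially small timing and memory measurements. The sketch below shows such a harness in Python purely for illustration; the thesis itself compares JavaScript against C++, so both the harness and the sample workload here are assumptions.

```python
# A minimal sketch of an elementary micro-benchmark: best wall-clock time
# over a few repeats, plus the peak memory allocated during the runs.
import time
import tracemalloc

def benchmark(fn, *args, repeats=5):
    """Return (best wall time in seconds, peak traced memory in bytes) for fn(*args)."""
    best = float("inf")
    tracemalloc.start()
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return best, peak

# Example elementary test: building a list of one million squares.
elapsed, peak = benchmark(lambda n: [i * i for i in range(n)], 1_000_000)
print(f"best time: {elapsed:.4f} s, peak memory: {peak / 1e6:.1f} MB")
```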
154

Energy-Efficient Turbo Decoder for 3G Wireless Terminals

Al-Mohandes, Ibrahim January 2005 (has links)
Since its introduction in 1993, the turbo coding error-correction technique has generated tremendous interest due to its near-Shannon-limit performance. Two key innovations of turbo codes are parallel concatenated encoding and iterative decoding. In its IMT-2000 initiative, the International Telecommunication Union (ITU) adopted turbo coding as a channel coding standard for Third-Generation (3G) wireless high-speed (up to 2 Mbps) data services (cdma2000 in North America and W-CDMA in Japan and Europe). For battery-powered hand-held wireless terminals, energy consumption is a major concern. In this thesis, a new design for an energy-efficient turbo decoder that is suitable for 3G wireless high-speed data terminals is proposed. The Log-MAP decoding algorithm is selected for implementation of the constituent Soft-Input/Soft-Output (SISO) decoder; the algorithm is approximated by a fixed-point representation that achieves the best performance/complexity tradeoff. To attain energy reduction, a two-stage design approach is adopted. First, a novel dynamic-iterative technique that is appropriate for both good and poor channel conditions is proposed and applied to reduce the energy consumption of the turbo decoder. Second, a combination of architectural-level techniques is applied to obtain further energy reduction; these techniques also enhance the throughput of the turbo decoder and are area-efficient. The turbo decoder design is coded in the VHDL hardware description language, and then synthesized and mapped to a 0.18 µm CMOS technology using the standard-cell approach. The designed turbo decoder has a maximum data rate of 5 Mb/s (at an upper limit of five iterations) and is 3G-compatible. Results show that the adopted two-stage design approach reduces the energy consumption of the turbo decoder by about 65%. A prototype of the new turbo codec (encoder/decoder) system is implemented on a Xilinx XC2V6000 FPGA chip; the FPGA is then tested using the CMC Rapid Prototyping Platform (RPP). The test proves correct functionality of the turbo codec implementation, and hence the feasibility of the proposed turbo decoder design.
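At the core of the Log-MAP algorithm mentioned above is the max* (Jacobian logarithm) operation. The following sketch shows the exact form alongside a small lookup-table approximation of the kind commonly used in fixed-point hardware; the table size and step are illustrative, not the values chosen in the thesis.

```python
# A minimal sketch of the max* operation used in Log-MAP decoding, plus a
# lookup-table approximation of the correction term (Max-Log-MAP would drop
# the correction entirely). Table size and step are illustrative choices.
import math

def max_star_exact(a, b):
    """Exact Jacobian logarithm: ln(e^a + e^b)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# 8-entry correction table, indexed by |a - b| quantized in steps of 0.5.
CORR_TABLE = [math.log1p(math.exp(-0.5 * i)) for i in range(8)]

def max_star_lut(a, b):
    """max* with a lookup-table correction term, as in fixed-point Log-MAP."""
    diff = abs(a - b)
    idx = min(int(diff / 0.5), len(CORR_TABLE) - 1)
    return max(a, b) + CORR_TABLE[idx]

print(max_star_exact(1.2, 0.3), max_star_lut(1.2, 0.3))
```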
156

Area and energy efficient VLSI architectures for low-density parity-check decoders using an on-the-fly computation

Gunnam, Kiran Kumar 15 May 2009 (has links)
The VLSI implementation complexity of a low-density parity-check (LDPC) decoder is largely influenced by the interconnect and the storage requirements. This dissertation presents decoder architectures for regular and irregular LDPC codes that provide substantial gains over existing academic and commercial implementations. Several structured properties of LDPC codes and decoding algorithms are observed and used to construct hardware implementations with reduced processing complexity. The proposed architectures utilize an on-the-fly computation paradigm which permits the computations to be scheduled in a way that reduces memory requirements and re-computations. Using this paradigm, run-time configurable and multi-rate VLSI architectures for rate-compatible array LDPC codes and irregular block LDPC codes are designed. Rate-compatible array codes are considered for DSL applications. Irregular block LDPC codes are proposed for IEEE 802.16e, IEEE 802.11n, and IEEE 802.20. When compared with a recent implementation of an 802.11n LDPC decoder, the proposed decoder reduces the logic complexity by 6.45x and the memory complexity by 2x for a given data throughput. When compared to the latest reported multi-rate decoders, this decoder design has an area efficiency of around 5.5x and an energy efficiency of 2.6x for a given data throughput. The numbers are normalized for a 180 nm CMOS process. Properly designed array codes have low error floors and meet the requirements of magnetic-channel and other applications which need several Gbps of data throughput. A high-throughput, fixed-code architecture for array LDPC codes has been designed. No modification to the code is performed, as that can result in high error floors. This parallel decoder architecture has no routing congestion and is scalable to longer block lengths. When compared to the latest fixed-code parallel decoders in the literature, this design has an area efficiency of around 36x and an energy efficiency of 3x for a given data throughput. Again, the numbers are normalized for a 180 nm CMOS process. In summary, the design and analysis details of the proposed architectures are described in this dissertation, and results from extensive simulation and VHDL verification on FPGA and ASIC design platforms are also presented.
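One widely used way to cut the storage this dissertation targets is the min-sum check-node update in compressed form, where only the two smallest input magnitudes, their position, and a sign product need to be kept and the outgoing messages are regenerated on the fly. The Python sketch below illustrates this general technique; the offset value is an illustrative assumption, not taken from the dissertation.

```python
# A minimal sketch of an offset min-sum check-node update in compressed form:
# outgoing messages are rebuilt from two magnitudes and a sign product rather
# than stored per edge.
import numpy as np

def check_node_update(msgs, offset=0.15):
    """Return outgoing check-to-variable messages for one check node."""
    msgs = np.asarray(msgs, dtype=float)
    signs = np.sign(msgs)
    signs[signs == 0] = 1.0
    mags = np.abs(msgs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]   # only these two magnitudes
    total_sign = np.prod(signs)                   # and the sign product are stored
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        mag = min2 if i == order[0] else min1     # exclude the edge's own magnitude
        out[i] = total_sign * signs[i] * max(mag - offset, 0.0)
    return out

print(check_node_update([0.8, -1.3, 0.4, 2.1]))
```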
157

Low-power discrete Fourier transform and soft-decision Viterbi decoder for OFDM receivers

Suh, Sangwook 31 August 2011 (has links)
The purpose of this research is to present a low-power wireless communication receiver with enhanced performance, obtained by relieving the system complexity and performance degradation imposed by the quantization process. With the overwhelming demand for more reliable communication systems, the complexity required of modern communication systems has increased accordingly. A byproduct of this increase in complexity is a commensurate increase in the power consumption of the systems. Since Shannon's era, the mainstream methodology for ensuring the high reliability of communication systems has been based on the principle that the information signals flowing through the system are represented in digits. Consequently, systems have been heavily driven toward implementation with digital circuits, which is generally beneficial over analog implementations when digitally stored information is locally accessible, such as in memory systems. In communication systems, however, a receiver does not have direct access to the originally transmitted information. Since the signals received from a noisy channel are already continuous values with continuous probability distributions, we suggest a mixed-signal system in which the received continuous signals are fed directly into the analog demodulator and the subsequent soft-decision Viterbi decoder without any quantization involved. In this way, we claim that the redundant system complexity caused by the quantization process is eliminated, giving better power efficiency in wireless communication systems, especially for battery-powered mobile devices. This is also beneficial from a performance perspective, as it takes full advantage of the soft information flowing through the system.
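The soft-decision decoding referred to above can be illustrated with a small Viterbi decoder that uses unquantized received values directly in its branch metric. The following Python sketch uses a generic rate-1/2, constraint-length-3 convolutional code and squared Euclidean distances as an example; it is not the mixed-signal receiver designed in the thesis.

```python
# A minimal sketch of soft-decision Viterbi decoding for a rate-1/2, K=3
# convolutional code (generators 7 and 5 octal). The branch metric is the
# squared Euclidean distance to the soft (unquantized) received values.
import numpy as np

G = [0b111, 0b101]                 # generator polynomials (illustrative choice)
K = 3                              # constraint length
N_STATES = 1 << (K - 1)

def expected_symbols(state, bit):
    """Antipodal (+1/-1) output symbols for input `bit` from `state`."""
    reg = (bit << (K - 1)) | state
    return [1.0 - 2.0 * (bin(reg & g).count("1") & 1) for g in G]

def viterbi_soft(received):
    """Decode a sequence of soft symbol pairs; returns the bit decisions."""
    metrics = np.full(N_STATES, np.inf)
    metrics[0] = 0.0                                     # start in the all-zero state
    paths = [[] for _ in range(N_STATES)]
    for r in received:                                   # r is one pair of soft values
        new_metrics = np.full(N_STATES, np.inf)
        new_paths = [None] * N_STATES
        for state in range(N_STATES):
            if metrics[state] == np.inf:
                continue
            for bit in (0, 1):
                nxt = ((bit << (K - 1)) | state) >> 1
                bm = sum((ri - ei) ** 2 for ri, ei in zip(r, expected_symbols(state, bit)))
                m = metrics[state] + bm
                if m < new_metrics[nxt]:
                    new_metrics[nxt] = m
                    new_paths[nxt] = paths[state] + [bit]
        metrics, paths = new_metrics, new_paths
    return paths[int(np.argmin(metrics))]

# Usage: bits = viterbi_soft(noisy_bpsk_pairs), where each entry is a pair of
# unquantized channel observations for one encoded symbol.
```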
160

Polymorphic ASIC : For Video Decoding

Adarsha Rao, S J January 2013 (has links) (PDF)
Video applications are becoming ubiquitous in recent times due to an explosion in the number of devices with video capture and display capabilities. Traditionally, video applications are implemented on a variety of devices, with each device targeting a specific application. However, advances in technology have created a need to support multiple applications from a single device like a smartphone or tablet. Such convergence of applications necessitates support for interoperability among various applications, scalable performance to meet the requirements of different applications, and a high degree of reconfigurability to accommodate rapid evolution in application features. In addition, the low-power-consumption requirement is also very stringent for many video applications. Conventional custom hardware implementations of video applications deliver high performance at low power consumption, while recent MPSoC implementations enable a high degree of interoperability and are useful to support application evolution. In this thesis, we combine the best features of the custom hardware and MPSoC approaches to design a Polymorphic ASIC. A Polymorphic ASIC is an integrated circuit designed to meet the requirements of several applications belonging to a particular domain. It consists of a fabric of computation, storage and communication resources, using which applications are composed dynamically. Although different video applications differ widely in the internal details of operation, at the heart of almost every video application is a video codec (encoder and decoder). The requirements of scalability, high performance and low power consumption are very stringent for video decoding. Therefore this thesis focuses mainly on the architectural design of a Polymorphic ASIC for video decoding. We present a unified software and hardware architecture (USHA) for the Polymorphic ASIC. USHA is a tiled architecture which uses loosely coupled processor and hardware tiles that are software programmable and hardware reconfigurable, respectively. The distinctive feature of the Polymorphic ASIC is the static partitioning of the application and the dynamic mapping of application processes onto the computational tiles. Depending on the application scenario, a process may be mapped onto either a hardware or a processor tile. The Polymorphic ASIC incorporates a network-on-chip (NoC) to achieve flexible communication across different tiles. Formulation of a programming framework for the Polymorphic ASIC requires an implementation model that captures the structure of video decoder applications as well as the properties of the Polymorphic ASIC architecture. We derive an implementation model based on a combination of parametric polyhedral process networks, stream-based functions and windowed dataflow models of computation. The implementation model leads to a process-network-oriented compilation flow that achieves realization-agnostic application partitioning and enables seamless migration across uniprocessor, multi-processor, semi-hardware and full-hardware configurations of a video decoder. The thesis also presents an application QoS-aware scheduler that selects the decoder configuration that best meets the application performance requirements, thereby enabling dynamic performance scaling. The memory hierarchy of the Polymorphic ASIC makes use of an application-specific cache. Through a combined analysis of miss rate and external memory bandwidth, we show that the degradation in decoder performance due to memory stall cycles depends on the properties of the video being decoded as well as the behavior of the external memory interface. Based on this observation, we present the design of a reconfigurable 2-D cache architecture which can adjust its parameters in accordance with the characteristics of the video stream being decoded. We validate the Polymorphic ASIC using a proof-of-concept implementation on an FPGA. The performance of an H.264 decoder on the Polymorphic ASIC is evaluated for uniprocessor, multi-processor, hardware-accelerated and full-hardware configurations. The scaling in performance delivered by these configurations shows that the Polymorphic ASIC enables the application to achieve super-linear speedups [1]. The experimental results show that different implementations of an H.264 video decoder on the Polymorphic ASIC can deliver performance comparable to a wide spectrum of devices, ranging from embedded processors like the ARM9 to MPSoCs like the IBM Cell. We also present the energy consumption of various configurations of video decoders on the Polymorphic ASIC and an application-to-configuration mapping aimed at minimizing the overall energy consumption of a Polymorphic ASIC.
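The QoS-aware configuration selection described above can be sketched as a simple table lookup: pick the lowest-energy decoder configuration that still meets the target frame rate. The configuration names and numbers below are illustrative assumptions, not measurements from the thesis.

```python
# A minimal sketch of QoS-aware configuration selection: choose the
# lowest-energy configuration whose sustained frame rate meets the target.
from dataclasses import dataclass

@dataclass
class DecoderConfig:
    name: str
    fps: float          # sustained frames per second (illustrative)
    energy_mj: float    # energy per decoded frame, millijoules (illustrative)

CONFIGS = [
    DecoderConfig("uniprocessor",     9.0, 12.0),
    DecoderConfig("multi-processor", 24.0, 15.0),
    DecoderConfig("hw-accelerated",  48.0, 18.0),
    DecoderConfig("full-hardware",   75.0, 22.0),
]

def select_config(target_fps):
    """Return the lowest-energy configuration that meets `target_fps`, if any."""
    feasible = [c for c in CONFIGS if c.fps >= target_fps]
    return min(feasible, key=lambda c: c.energy_mj) if feasible else None

print(select_config(30.0).name)     # -> "hw-accelerated" under the assumed table
```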
