271

Rate Control Of MPEG-2 Video And JPEG Images

Selvaraj, V 07 1900
No description available.
272

Some New Approaches To Block Based Motion Estimation And Compensation For Video Compression

Rath, Gagan Bihari 04 1900
No description available.
273

Some New Methods For Improved Fractal Image Compression

Ramkumar, M 08 1900
No description available.
274

Video Compression Through Spatial Frequency Based Motion Estimation And Compensation

Menezes, Vinod 02 1900
No description available.
275

Temporal coherency in video tone mapping

Boitard, Ronan 16 October 2014
One of the main goals of digital imagery is to improve the capture and the reproduction of real or synthetic scenes on display devices with restricted capabilities. Standard imagery techniques are limited with respect to the dynamic range that they can capture and reproduce. High Dynamic Range (HDR) imagery aims at overcoming these limitations by capturing, representing and displaying the physical value of light measured in a scene. However, current commercial displays will not vanish instantly, so backward compatibility between HDR content and those displays is required. This compatibility is ensured through an operation called tone mapping, which retargets the dynamic range of HDR content to the restricted dynamic range of a display device. Although many tone mapping operators exist, they focus mostly on still images. The challenges of tone mapping HDR videos are more complex than those of still images because of the added temporal dimension. This work focuses on the preservation of temporal coherency when performing video tone mapping. Two main research avenues are investigated: the subjective quality of tone mapped video content and its compression efficiency. Indeed, tone mapping each frame of a video sequence independently leads to temporal artifacts. These artifacts impair the visual quality of the tone mapped video sequence and need to be reduced. Through experiments with HDR videos and Tone Mapping Operators (TMOs), we categorized temporal artifacts into six categories. We tested video tone mapping operators (techniques that take more than a single frame into account) on the different types of temporal artifact and observed that they could handle only three of the six types. Consequently, we designed a post-processing technique that adapts to any tone mapping operator and reduces the three types of artifact not dealt with. A subjective evaluation showed that our technique always preserves or increases the subjective quality of tone mapped content for the sequences and TMOs tested. The second topic investigated was the compression of tone mapped video content. So far, work on tone mapping and video compression has focused on optimizing a tone map curve to achieve high compression ratios. These techniques change the rendering of the video to reduce its entropy, thereby discarding any artistic intent or constraint on the final result. We therefore proposed a technique that reduces the entropy of a tone mapped video without altering its rendering; our method adapts the quantization so as to increase the correlation between successive frames of the video.
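To make the flicker problem concrete, here is a minimal Python sketch, not Boitard's actual post-processing method: it applies a Reinhard-style global tone map per frame and temporally smooths the frame key (its log-average luminance) with a leaky integrator. The `alpha` smoothing factor and the synthetic luminance frames are assumptions for illustration.

```python
import numpy as np

def log_average_luminance(lum, eps=1e-6):
    """Geometric mean of luminance: the frame 'key' used by global TMOs."""
    return float(np.exp(np.mean(np.log(lum + eps))))

def tone_map_frame(lum_hdr, key, a=0.18):
    """Reinhard-style global operator: scale by the key, then compress."""
    scaled = a * lum_hdr / key
    return scaled / (1.0 + scaled)  # maps [0, inf) into [0, 1)

def tone_map_video(hdr_frames, alpha=0.9):
    """Tone map a sequence; smoothing the key over time damps the
    brightness flicker caused by tone mapping frames independently."""
    smoothed_key, out = None, []
    for lum in hdr_frames:
        key = log_average_luminance(lum)
        if smoothed_key is None:
            smoothed_key = key
        else:  # leaky integration: a simple temporal-coherency fix
            smoothed_key = alpha * smoothed_key + (1.0 - alpha) * key
        out.append(tone_map_frame(lum, smoothed_key))
    return out

# A synthetic sequence with a sudden brightness jump between frames.
frames = [np.full((4, 4), 1.0), np.full((4, 4), 50.0), np.full((4, 4), 1.2)]
for i, ldr in enumerate(tone_map_video(frames)):
    print(f"frame {i}: mean tone-mapped luminance = {ldr.mean():.3f}")
```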
276

Lossy Video Compression

Šiška, Michal January 2011
This thesis deals with the description of lossy video compression. The theoretical part of the work describes the fundamentals of video compression and the standards for lossy as well as lossless video and still image compression. The practical part presents the design of a Java program that simulates an MPEG codec.
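A hedged sketch of the transform-and-quantize step at the heart of MPEG-style lossy coding (illustrative Python, not the thesis's Java simulator): an 8x8 block is transformed with the 2-D DCT, divided element-wise by a quantization matrix and rounded, which is the step where information is irreversibly discarded. The flat quantization matrix is an assumption; real codecs use perceptually tuned matrices.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2-D type-II DCT with orthonormal scaling (encoder-side transform)."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    """Inverse 2-D DCT (decoder-side transform)."""
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

# Flat quantization matrix (an assumption; real codecs use coarser
# steps at high frequencies, where errors are less visible).
Q = np.full((8, 8), 16.0)

block = np.random.randint(0, 256, size=(8, 8)).astype(float) - 128.0
quantized = np.round(dct2(block) / Q)         # the lossy (rounding) step
reconstructed = idct2(quantized * Q) + 128.0  # decoder reconstruction

print("max reconstruction error:", np.abs(reconstructed - (block + 128.0)).max())
```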
277

Optimization of multimedia applications on embedded multicore processors

Baaklini, Elias Michel 12 February 2014
Parallel computing is currently the dominant architecture in embedded systems. Concurrency improves system performance without increasing the clock speed, which keeps the power consumption of the system moderate. However, concurrency needs to be exploited in order to improve system performance in different application environments. Multimedia applications (real-time conversational services such as video conferencing, video telephony, etc.) have many new features that require complex computations compared to previous video coding standards. These applications present a challenging workload for future multiprocessors. Exploiting parallelism in multimedia applications can be done at the data and functional levels or by using different instruction sets and architectures. In this research, we design new parallel algorithms and mapping methodologies to exploit the parallelism naturally present in multimedia applications, specifically the H.264/AVC video decoder. We mainly target symmetric shared-memory multiprocessors (SMPs) for embedded devices such as ARM Cortex-A9 multicore chips. We evaluate our novel parallel algorithms for the H.264/AVC video decoder along different dimensions: memory load, energy consumption, and execution time.
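As a toy illustration of the component-level parallelism described above (not the thesis's actual decoder), the following Python sketch processes the three color planes of a frame concurrently, one task per plane; `process_plane` is a hypothetical stand-in for real per-plane decoding work, and Python threads are used only to show the task decomposition.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_plane(name, plane):
    """Hypothetical stand-in for a per-plane decoding stage (e.g. inverse
    transform plus deblocking); a real decoder does far more work here."""
    return name, np.clip(plane.astype(float) * 1.01, 0, 255)

def decode_frame_parallel(planes):
    """Handle the Y, Cb and Cr planes concurrently, one task per plane."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(process_plane, n, p) for n, p in planes.items()]
        return dict(f.result() for f in futures)

# A synthetic 4:2:0 frame: full-resolution luma, half-resolution chroma.
frame = {
    "Y":  np.random.randint(0, 256, (144, 176)),
    "Cb": np.random.randint(0, 256, (72, 88)),
    "Cr": np.random.randint(0, 256, (72, 88)),
}
decoded = decode_frame_parallel(frame)
print({name: plane.shape for name, plane in decoded.items()})
```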
278

Estimation of LRD present in H.264 video traces using wavelet analysis and proving the paramount of H.264 using OPF technique in wi-fi environment.

Jayaseelan, John January 2012
While there has always been tremendous demand for streaming video over wireless networks, the nature of the application still presents some challenging issues. Applications that transmit coded video over best-effort networks like the Internet must cope with changing network behaviour; in particular, the source encoder rate should be controlled based on feedback from a channel estimator that probes the network intermittently. The arrival of powerful video compression techniques such as H.264, together with advances in networking and telecommunications, opened up a whole new frontier for multimedia communications. The aim of this research is to transmit H.264 coded video frames over wireless networks with maximum reliability and efficiency, despite the major difficulties such sequences face in reaching their destination. The characteristics of H.264 coded video sequences are studied in detail, their suitability for transmission over wireless networks is examined, a new approach called Optimal Packet Fragmentation (OPF) is framed, and the H.264 coded sequences are tested in a simulated wireless environment. The research involves three major studies. The first concerns Long Range Dependence (LRD) and the ways in which self-similarity can be estimated. Among the estimators studied, the wavelet-based estimator is selected because wavelets capture both time and frequency features of the data and typically provide a richer picture than classical Fourier analysis. The wavelet estimator measures self-similarity through the Hurst parameter, which characterises how traffic behaves inside the network and must be calculated for more reliable transmission over the wireless network. The second part of the research compares the MPEG-4 and H.264 encoders to determine which provides better Quality of Service (QoS) and reliability; with the help of the Hurst parameter, it shows that H.264 is superior to MPEG-4. The third part is the core of this research: H.264 coded video frames are segmented into optimal packet sizes at the MAC layer for more efficient and reliable transfer over the wireless network. Finally, the H.264 encoded video frames, combined with Optimal Packet Fragmentation, are tested in an NS-2 simulated wireless network. The results demonstrate the superiority of the H.264 video encoder and the effectiveness of OPF.
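A minimal sketch of the wavelet-based Hurst estimation described above, in the style of the Abry-Veitch estimator (not necessarily the thesis's exact procedure): for LRD traffic, the variance of wavelet detail coefficients at octave j grows like 2^(j(2H-1)), so regressing the log2 energy against scale yields the Hurst parameter. It assumes the PyWavelets package and a synthetic trace.

```python
import numpy as np
import pywt

def hurst_wavelet(trace, wavelet='db3', max_level=8):
    """Abry-Veitch style estimator: the slope of log2(detail energy)
    versus octave equals 2H - 1 for long-range-dependent traffic."""
    coeffs = pywt.wavedec(trace, wavelet, level=max_level)
    details = coeffs[1:]  # coeffs[0] is the coarse approximation
    scales, log_energy = [], []
    # wavedec returns details coarsest-first; make octave 1 the finest
    for j, d in enumerate(reversed(details), start=1):
        scales.append(j)
        log_energy.append(np.log2(np.mean(d ** 2)))
    slope, _ = np.polyfit(scales, log_energy, 1)
    return (slope + 1.0) / 2.0  # slope = 2H - 1

# Sanity check: white noise has no long-range dependence (H ~ 0.5).
rng = np.random.default_rng(0)
print("estimated H for white noise:",
      round(hurst_wavelet(rng.normal(size=2**14)), 2))
```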
279

Investigating the Adaptive Loop Filter in Next Generation Video Coding

De La Rocha Gomes-Arevalillo, Alfonso January 2017
Current trends in video technologies and services are demanding higher bit rates, higher video resolutions and better video quality. This results in the need for a new generation of video coding techniques that increase the quality and compression rates of previous standards. Since the release of HEVC, ITU-T VCEG and ISO/IEC MPEG have been studying the potential need for standardization of future video coding technologies with a compression capability that significantly exceeds that of current standards. These new standardization and compression efforts are being implemented and evaluated over a software test model known as the Joint Exploration Model (JEM). One of the blocks being explored in JEM is an Adaptive Loop Filter (ALF) at the end of each frame's processing flow. ALF aims to minimize the error between original and decoded pixels using Wiener-based adaptive filter coefficients, reporting, in its JEM implementation, improvements of around 1% in the BD MS-SSIM rate. A lot of effort has been devoted to improving this block over the past years. However, current ALF implementations do not consider the potential use of adaptive QP algorithms at the encoder. Adaptive QP algorithms enable the use of different quality levels for the coding of different parts of a frame to enhance its subjective quality.
In this thesis, we explore potential improvements along different dimensions of JEM's Adaptive Loop Filter block, considering the potential use of adaptive QP algorithms. We explore a wide range of modifications to ALF's processing stages, the ones with the best results being (i) a QP-aware implementation of ALF where the filter coefficient estimation, the internal RD-optimization and the CU-level flag decision process are optimized for the use of adaptive QP, (ii) the optimization of ALF's standard block activity classification stage through the use of CU-level information given by the different QPs used in a frame, and (iii) the optimization of ALF's standard block activity classification stage in B-frames through the application of a correction weight on coded, i.e. not predicted, blocks of B-frames. Combined, these ALF modifications yielded improvements of 0.419% on average in the BD MS-SSIM rate of the luma channel, with the individual modifications contributing 0.252%, 0.085% and 0.082%, respectively. We thus concluded the importance of optimizing ALF for the potential use of adaptive-QP algorithms in the encoder, and the benefits of considering CU-level and frame-level metrics in ALF's block classification stage.
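To illustrate the Wiener-based idea behind ALF (a sketch, not JEM's implementation): collect the decoded-pixel neighbourhood of each position together with the corresponding original pixel, then solve a least-squares problem for the filter taps that minimize the reconstruction error. The 3x3 window and the synthetic frames below are assumptions for illustration.

```python
import numpy as np

def wiener_filter_coeffs(decoded, original, radius=1):
    """Least-squares estimate of filter taps minimizing the error between
    filtered decoded pixels and original pixels (the idea behind ALF)."""
    h, w = decoded.shape
    rows, targets = [], []
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = decoded[y - radius:y + radius + 1,
                            x - radius:x + radius + 1]
            rows.append(patch.ravel())       # neighbourhood as a row of A
            targets.append(original[y, x])   # original pixel as target b
    A, b = np.asarray(rows), np.asarray(targets)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs.reshape(2 * radius + 1, 2 * radius + 1)

# Synthetic example: the "decoded" frame is the original plus coding noise.
rng = np.random.default_rng(1)
original = rng.uniform(0, 255, size=(64, 64))
decoded = original + rng.normal(0, 5, size=original.shape)
print("estimated 3x3 filter taps:\n",
      wiener_filter_coeffs(decoded, original).round(3))
```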
280

AZIP, audio compression system: Research on audio compression, comparison of psychoacoustic principles and genetic algorithms

Chen, Howard 01 January 2005 (has links)
The purpose of this project is to investigate the differences between psychoacoustic principles and genetic algorithms (GA) for audio compression. Each is discussed separately. The review also compares the compression ratio and the quality of the decompressed files produced by the two methods.
