161

The design and implementation of a MPEG video system with transmission control and QoS support

Hui, Kin Cheung 01 January 2002 (has links)
No description available.
162

Cross Layer Design for Video Streaming over 4G Networks Using SVC

Radhakrishna, Rakesh January 2012 (has links)
Fourth Generation (4G) cellular technology, the Third Generation Partnership Project (3GPP) Long Term Evolution (LTE), offers high data rates to mobile users, and operators are trying to deliver a true mobile broadband experience over LTE networks. Mobile TV and Video on Demand (VoD) are expected to be the main revenue generators in the near future [36], and efficient video streaming over wireless is the key to enabling this. 3GPP recommends the H.264 baseline profile for all video-based services in Third Generation (3G) Universal Mobile Telecommunication System (UMTS) networks. However, LTE networks need to support mobile devices with widely different display resolution requirements, from low-resolution mobile phones to high-resolution laptops. Scalable Video Coding (SVC) is required to achieve this goal, and a feasibility study of SVC for LTE is one of the main agenda items of 3GPP Release 10. SVC enhances H.264 with a set of new profiles and encoding tools that can be used to produce scalable bit streams. This thesis proposes efficient adaptation methods for SVC video transmission over LTE networks. The advantages of SVC over H.264 are analyzed using real-time use cases of mobile video streaming. Further, we study cross-layer adaptation and scheduling schemes for delivering SVC video streams efficiently to users in LTE networks in unicast and multicast transmissions. We propose an SVC-based video streaming scheme for downlink unicast and multicast transmissions, with dynamic adaptation and a scheduling scheme based on channel quality information reported by users. Simulation results indicate improved video quality for a larger number of users in the coverage area and more efficient spectrum usage with the proposed methods.
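As a rough illustration of the kind of CQI-driven adaptation described in this abstract, the Python sketch below maps each user's reported Channel Quality Indicator (CQI) to an approximate sustainable rate and picks how many SVC layers to serve. The `CQI_TO_KBPS` and `LAYER_KBPS` tables and the `layers_for_user` helper are illustrative assumptions, not the scheduler proposed in the thesis.

```python
CQI_TO_KBPS = {1: 200, 4: 1200, 7: 3000, 10: 6000, 13: 9000, 15: 12000}  # assumed per-user capacity
LAYER_KBPS = [500, 1500, 4000]  # assumed incremental rates: base layer + two enhancement layers

def layers_for_user(cqi: int) -> int:
    """Serve the largest number of SVC layers whose cumulative rate fits the user's capacity."""
    capacity = max(rate for level, rate in CQI_TO_KBPS.items() if level <= cqi)
    served, cumulative = 0, 0
    for rate in LAYER_KBPS:
        cumulative += rate
        if cumulative > capacity:
            break
        served += 1
    return served  # 0 means even the base layer does not fit

for cqi in (3, 7, 15):
    print(f"CQI {cqi}: serve {layers_for_user(cqi)} SVC layer(s)")  # -> 0, 2, 3
```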
163

Computational effort analysis and control in High Efficiency Video Coding

Silva, Mateus Grellert da January 2014 (has links)
HEVC encoders impose several challenges in resource- and computation-constrained embedded applications, especially under real-time throughput constraints. To make HEVC encoding feasible in such scenarios, this work proposes a Computation Management Scheme (CMS) that dynamically adapts to varying compute capabilities. The encoder is assumed to be part of a larger system, which informs the CMS of its restrictions and requirements, such as CPU availability and target frame rate. To develop an effective scheme, an extensive computational effort analysis of the key HEVC encoding parameters is carried out. For this analysis, a platform-orthogonal metric called Arithmetic Complexity was defined; it is free of simulation-platform particularities such as the memory hierarchy and concurrent access to the processing unit, and can be easily adapted to various computing platforms. The results show that the proposed CMS provides average cycle savings of 40% at the cost of small rate-distortion penalties. The adaptability and controllability analyses show that the CMS quickly adapts to different constrained scenarios, e.g., when the available computational resources vary dynamically while a video is being encoded. Compared to the state of the art, the CMS achieves 44% encoding time savings while incurring a minor 2.9% increase in bitrate and a 6% increase in BD-bitrate.
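The control idea behind such a computation-management scheme can be sketched as a small feedback loop; the code below is only a sketch under assumptions (a hypothetical `encode_frame(frame, max_cu_depth)` hook and the CU-partitioning depth as the only complexity knob), not the CMS itself.

```python
import time

def encode_sequence(frames, encode_frame, target_fps=30.0, cpu_share=0.5):
    """Keep per-frame encoding time within the CPU budget granted by the host system."""
    budget = cpu_share / target_fps   # seconds the encoder may spend on each frame
    max_cu_depth = 3                  # 3 = full 64x64..8x8 quad-tree search, 0 = 64x64 only
    for frame in frames:
        start = time.perf_counter()
        encode_frame(frame, max_cu_depth)         # hypothetical encoder hook
        elapsed = time.perf_counter() - start
        # Simple feedback: shrink the partitioning search when over budget,
        # re-enable deeper partitions when there is slack.
        if elapsed > budget and max_cu_depth > 0:
            max_cu_depth -= 1
        elif elapsed < 0.8 * budget and max_cu_depth < 3:
            max_cu_depth += 1
```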
164

Reversão anaglífica em vídeos estereoscópicos / Anaglyphic reversion in stereoscopic videos

Felipe Maciel Rodrigues 24 May 2016 (has links)
Attention to 3D content production is currently high, mostly because of public acceptance of and interest in this kind of technology. New capture techniques, coding methods, and playback modes for 3D video, particularly stereoscopic video, have been emerging or being improved, with the goal of integrating this new technology with the available infrastructure. However, regarding advances in coding, each stereoscopic visualization method uses a different coding technique, which makes the methods incompatible and prevents the user from choosing how to view the content. One approach to this problem is to develop a generic technique, that is, one that works regardless of the visualization method and that, with suitable parameters, produces a stereoscopic video without significant loss of quality or of depth perception, the defining feature of this kind of content. The method proposed in this work, named HaaRGlyph, transforms a stereoscopic video pair into a single stream containing a specially coded anaglyph. This stream is not only compatible with the anaglyph visualization method but is also reversible to an approximation of the original stereo pair, enabling visualization independence. Moreover, HaaRGlyph achieves higher compression rates than the related work.
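To make the underlying problem concrete, the sketch below builds a classic red-cyan anaglyph from a stereo pair; it is not the HaaRGlyph method, only an illustration of why a plain anaglyph is not reversible and why a reversible stream must also carry the discarded channel data.

```python
import numpy as np

def make_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """left, right: HxWx3 RGB frames; returns a red-cyan anaglyph."""
    anaglyph = right.copy()          # cyan part: green and blue channels of the right view
    anaglyph[..., 0] = left[..., 0]  # red part: red channel of the left view
    return anaglyph

# The discarded data (left green/blue, right red) cannot be recovered from the anaglyph
# alone, so a reversible scheme must also encode that information; that is the gap a
# stream such as HaaRGlyph is designed to fill.
```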
165

Techniques d'amélioration des performances de compression dans le cadre du codage vidéo distribué / Techniques for improving the performance of distributed video coding

Abou El Ailah, Abdalbassir 14 December 2012 (has links)
Distributed Video Coding (DVC) is a recently proposed paradigm in video communication that fits emerging applications well, such as wireless video surveillance, multimedia sensor networks, wireless PC cameras, and mobile camera phones. These applications require low-complexity encoding while possibly affording high-complexity decoding. In DVC, Side Information (SI) is estimated at the decoder, using the available decoded frames, and used for the decoding and reconstruction of the other frames. In this PhD thesis, we propose new techniques to improve the quality of the SI. First, successive refinement of the SI is performed after each decoded DCT band. Then, a new scheme for SI generation based on backward and forward motion estimation and Quad-tree refinement is proposed. Furthermore, new methods for combining global and local motion estimation are proposed to further improve the SI, using the differences between corresponding blocks and a Support Vector Machine (SVM). In addition, algorithms are proposed to refine the fusion during the decoding process, and the segmented foreground objects of the reference frames are used in the combination of the global and local motion estimations, using elastic curves and object-based motion compensation. Extensive experiments show that the proposed techniques obtain important gains compared to the classical DISCOVER codec. Moreover, the performance of DVC with the proposed algorithms now outperforms that of H.264/AVC Intra and H.264/AVC No Motion for the tested sequences, and the gap with H.264/AVC in an Inter IB…IB configuration is significantly reduced.
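A minimal sketch of block-wise fusion of two side-information candidates is shown below. It assumes the decoder has already produced `si_global` and `si_local` together with per-pixel disagreement maps between the two motion-compensated reference frames (`diff_global`, `diff_local`); a simple per-block minimum stands in for the SVM-based decision used in the thesis.

```python
import numpy as np

def fuse_side_information(si_global, si_local, diff_global, diff_local, block=8):
    """Per block, keep the SI candidate whose compensated reference frames agree best."""
    fused = np.empty_like(si_global)
    height, width = si_global.shape[:2]
    for y in range(0, height, block):
        for x in range(0, width, block):
            area = (slice(y, min(y + block, height)), slice(x, min(x + block, width)))
            if diff_global[area].sum() <= diff_local[area].sum():
                fused[area] = si_global[area]
            else:
                fused[area] = si_local[area]
    return fused
```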
166

Light-field image and video compression for future immersive applications / Compression d'image et vidéo light-field pour les futures applications immersives

Dricot, Antoine 01 March 2017 (has links)
Evolutions in video technologies tend to offer increasingly immersive experiences. However, currently available 3D technologies are still very limited and only provide uncomfortable and unnatural viewing situations to the user. The next generation of immersive video technologies therefore appears as a major technical challenge, particularly with the promising light-field (LF) approach. The light-field represents all the light rays (i.e., in all directions) in a scene. New devices for sampling and capturing the light-field of a scene are emerging fast, such as camera arrays and plenoptic cameras based on micro-lens (lenticular) arrays. Several kinds of display systems target immersive applications, such as head-mounted displays and projection-based light-field displays, and promising target applications already exist (e.g., 360° video and virtual reality). For several years now, the light-field representation has been drawing interest from many companies and institutions, for example in the MPEG and JPEG groups. Light-field content has a specific structure and uses a massive amount of data, which represents a challenge for setting up future services. One of the main goals of this work is first to assess which technologies and formats are realistic or promising. The study is done through the lens of image and video compression, as compression efficiency is a key factor for enabling these services on the consumer market. Secondly, improvements and new coding schemes are proposed to increase compression performance and enable efficient light-field content transmission on future networks.
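As an illustration of the 4D light-field structure mentioned above, the sketch below reorganises a plenoptic (lenslet-based) raw image into an array L[u, v, s, t] of sub-aperture views. The square micro-lens size and the `lenslet_to_subapertures` helper are assumptions for the example; real devices additionally require devignetting and resampling of the hexagonal lenslet grid.

```python
import numpy as np

def lenslet_to_subapertures(raw: np.ndarray, lens_px: int) -> np.ndarray:
    """raw: (H, W) sensor image tiled into lens_px x lens_px micro-lens patches.
    Returns L[u, v, s, t]: (u, v) indexes the view, (s, t) the micro-lens position."""
    s, t = raw.shape[0] // lens_px, raw.shape[1] // lens_px
    patches = raw[: s * lens_px, : t * lens_px].reshape(s, lens_px, t, lens_px)
    return patches.transpose(1, 3, 0, 2)

raw = np.arange(36.0).reshape(6, 6)       # tiny synthetic sensor, 3x3 pixels per micro-lens
L = lenslet_to_subapertures(raw, lens_px=3)
print(L[1, 1])                            # the central sub-aperture view, shape (2, 2)
```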
167

Efficient Support for Application-Specific Video Adaptation

Huang, Jie 01 January 2006 (has links)
As video applications become more diverse, video must be adapted in different ways to meet the requirements of different applications when resources are insufficient. In this dissertation, we address two kinds of requirements that cannot be met by existing video adaptation technologies: (i) accommodating large variations in resolution and (ii) collecting video effectively in a multi-hop sensor network. In addition, we address the requirements for implementing video adaptation in a sensor network. Accommodating large variations in resolution is required by the existence of display devices with widely disparate screen sizes. Existing resolution adaptation technologies usually aim at adapting video between two resolutions. We examine the limitations that prevent these technologies from supporting a large number of resolutions efficiently, propose several hybrid schemes, and study their performance. Among these hybrid schemes, Bonneville, a framework that combines multiple encodings with limited scalability, can make good trade-offs when organizing compressed video to support a wide range of resolutions. Video collection in a sensor network requires adapting video in a multi-hop store-and-forward network with multiple video sources. This task cannot be supported effectively by existing adaptation technologies, which are designed for real-time streaming applications from a single source over IP-style end-to-end connections. We propose to adapt video in the network instead of at the network edge, and we propose a framework, Steens, to compose adaptation mechanisms on multiple nodes. We design two signaling protocols in Steens to coordinate multiple nodes. Our simulations show that in-network adaptation can use buffer space on intermediate nodes for adaptation and achieve better video quality than conventional network-edge adaptation. They also show that explicit collaboration among multiple nodes through signaling can improve video quality, waste less bandwidth, and maintain bandwidth-sharing fairness. Implementing video adaptation in a sensor network requires system support for programmability, retaskability, and high performance. We propose Cascades, a component-based framework, to provide this support. A prototype implementation of Steens in this framework shows a performance overhead of less than 5% compared to a hard-coded C implementation.
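The in-network adaptation idea can be sketched as an intermediate node trimming a buffered group of frames to fit its outgoing link budget, dropping the least important frames first. The frame priorities, sizes, and the `adapt_in_network` helper below are illustrative assumptions, not the Steens framework itself.

```python
def adapt_in_network(frames, budget_bytes):
    """frames: list of {'id', 'size', 'priority'} dicts; lower priority value = more important."""
    kept, used = [], 0
    for frame in sorted(frames, key=lambda f: f["priority"]):
        if used + frame["size"] <= budget_bytes:
            kept.append(frame)
            used += frame["size"]
    return sorted(kept, key=lambda f: f["id"])   # restore transmission order

gop = [{"id": 0, "size": 9000, "priority": 0},   # I-frame
       {"id": 1, "size": 3000, "priority": 2},   # B-frame
       {"id": 2, "size": 5000, "priority": 1}]   # P-frame
print([f["id"] for f in adapt_in_network(gop, budget_bytes=15000)])  # -> [0, 2]
```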
168

Reliability of Pre-Service Teachers Coding of Teaching Videos Using Video-Annotation Tools

Dye, Brigham R. 18 July 2007 (has links) (PDF)
Teacher education programs that aspire to help pre-service teachers develop expertise must help students engage in deliberate practice along the dimensions of teaching expertise. However, field teaching experiences often lack the quantity and quality of feedback needed for meaningful teaching practice. The limited availability of supervising teachers makes it difficult to personally observe and evaluate each student teacher's field teaching performances. Furthermore, when a supervising teacher debriefs such an observation, the supervising teacher and student may struggle to communicate meaningfully about the teaching performance, because they often have very different perceptions of the same performance. Video analysis tools show promise for improving the quality of the feedback student teachers receive on their teaching by providing a common reference for evaluative debriefing and by allowing students to generate their own feedback by coding videos of their own teaching. This study investigates the reliability of pre-service teacher coding using a video analysis tool. It found that students were moderately reliable coders when coding video of an expert teacher (49%-68%). However, when the reliability of students' coding of their own teaching videos was audited, students showed a high degree of accuracy (91%). These contrasting findings suggest that coding reliability scores may not be simple indicators of student understanding of the teaching competencies represented by a coding scheme; reliability scores may also be subject to extraneous factors. For example, reliability scores in this study were influenced by differences in the technical aspects of how students implemented the coding system and by how coding proficiency was measured. Because this study also suggests that students can be taught to improve their coding reliability, further research may improve reliability scores, and make them a more valid reflection of student understanding of teaching competency, by training students in the technical aspects of implementing a coding system.
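The reliability figures above can be expressed with standard agreement measures; the sketch below computes percent agreement and Cohen's kappa between a student's codes and a reference coding. The category labels are hypothetical examples.

```python
from collections import Counter

def percent_agreement(codes_a, codes_b):
    return sum(a == b for a, b in zip(codes_a, codes_b)) / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    n = len(codes_a)
    p_o = percent_agreement(codes_a, codes_b)               # observed agreement
    count_a, count_b = Counter(codes_a), Counter(codes_b)
    p_e = sum(count_a[c] * count_b[c] for c in set(codes_a) | set(codes_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)                          # chance-corrected agreement

student = ["praise", "question", "question", "feedback", "praise"]
expert  = ["praise", "question", "feedback", "feedback", "praise"]
print(percent_agreement(student, expert))        # 0.8
print(round(cohens_kappa(student, expert), 2))   # 0.71
```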
169

Motion Estimation and Compensation in the Redundant Wavelet Domain

Cui, Suxia 02 August 2003 (has links)
Although wavelet-based coding has been the preferred approach to still-image compression for nearly a decade, it has been slow to emerge for video, due primarily to the fact that the shift variance of the discrete wavelet transform hinders the motion estimation and compensation crucial to modern video coders. Recently it has been recognized that a redundant, or overcomplete, wavelet transform is shift invariant and thus permits motion prediction in the wavelet domain. In this dissertation, other uses for the redundancy of overcomplete wavelet transforms in video coding are explored. First, it is demonstrated that the redundant-wavelet domain facilitates the placement of an irregular triangular mesh on video images, thereby exploiting transform redundancy to implement geometries for motion estimation and compensation more general than the traditional block structure widely employed. As the second contribution of this dissertation, a new form of multihypothesis prediction, redundant-wavelet multihypothesis, is presented. This new approach to motion estimation and compensation produces motion predictions that are diverse in transform phase, increasing prediction accuracy. Finally, it is demonstrated that the proposed redundant-wavelet strategies complement existing advanced video-coding techniques and produce significant performance improvements in a battery of experimental results.
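A toy example of the property this dissertation builds on: one level of the undecimated (redundant) Haar transform is shift invariant, so shifting the input simply shifts the subband coefficients, which is what makes motion search in this domain well defined. The circular boundary handling below is an assumption for brevity.

```python
import numpy as np

def undecimated_haar(signal):
    """One level of the redundant Haar transform (no downsampling, circular boundary)."""
    shifted = np.roll(signal, -1)
    return (signal + shifted) / 2.0, (signal - shifted) / 2.0   # lowpass, highpass

x = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 6.0, 0.0, 2.0])
low_x, high_x = undecimated_haar(x)
low_s, high_s = undecimated_haar(np.roll(x, 1))   # transform of the shifted input

# The subbands of the shifted signal are exactly the shifted subbands of the original.
print(np.allclose(low_s, np.roll(low_x, 1)), np.allclose(high_s, np.roll(high_x, 1)))  # True True
```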
170

Exploiting Region Of Interest For Improved Video Coding

Gopalan, Ramya 28 September 2009 (has links)
No description available.
