About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Adaptação de vídeo ao vivo apoiada em informações de contexto / Live video adaptation based on context information

Marcelo Garcia Manzato 22 September 2006 (has links)
This dissertation presents a mechanism for the automatic adaptation of live MPEG-4 video to the current needs and capabilities of users and of the system. One of the challenges in this area is capturing and representing the information needed to perform the adaptation. Using techniques from context-aware computing, an extensible model for representing devices was developed, together with automatic and semi-automatic methods for capturing the required information. The work adopts the video transcoding (recoding) model, which can introduce delays that make live video adaptation impractical for interactive applications; accordingly, the dissertation evaluates the impact of transcoding on the total end-to-end delay perceived by the user.
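The trade-off the abstract describes can be illustrated with a toy delay budget; the component figures below are illustrative assumptions, not measurements from the dissertation:

```python
# Hypothetical delay components in milliseconds; values are illustrative only.
def end_to_end_delay(capture, encode, network, decode, render, transcode=0.0):
    """Total one-way delay perceived by the user, in milliseconds."""
    return capture + encode + transcode + network + decode + render

baseline = end_to_end_delay(capture=10, encode=40, network=30, decode=15, render=5)
with_transcoding = end_to_end_delay(capture=10, encode=40, network=30, decode=15,
                                    render=5, transcode=120)
overhead = with_transcoding - baseline  # extra latency added by recoding
```

Under these assumed numbers the recoding step dominates the budget, which is why the dissertation measures its contribution to the end-to-end delay separately.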
72

Aplicabilidade do antígeno tetânico conjugado com derivados do Monometoxi-polietilenoglicol. / Applicability of tetanus antigen conjugated to derivatives of monomethoxypolyethylene glycol.

Sally Müller Affonso Prado 10 September 2008 (has links)
Monomethoxypolyethylene glycol succinimidyl propionic acid (mPEG-SPA, 5 and 20 kDa) was evaluated as an adjuvant and as an inhibitor of the neurotoxic activity of tetanus toxin (TxT), adsorbed or not onto Al(OH)3, to which the polymer was conjugated. Sample toxicity was assessed by LD50, showing that the neurotoxic activity of TxT was inhibited. The subcutaneous route was more effective in inducing a response to TxT treated with mPEG-SPA, while the adjuvant effect of Al(OH)3 was evidenced by the intramuscular route. Thirty horses underwent a selective immunization scheme; the eighteen selected were divided into groups immunized with TxT conjugated to mPEG-SPA 5,000 and 5,000(2X), and with TxT adsorbed or not. The horses' sera were analyzed by the ToBI test, which followed the development of the immune response, and also by immunodiffusion, electrophoresis and immunoblotting; the latter indicated a probable antigenic superiority of fluid TxT relative to the adjuvanted preparations. mPEG-SPA conjugation proved effective for the production of therapeutic anti-tetanus serum for human use.
73

Datenübertragung per RTSP / Data transmission via RTSP

Lötzsch, Steffen 28 February 2002 (has links)
This thesis analyzes the Real-Time Streaming Protocol (RTSP) as well as the Real-Time Transport Protocol (RTP). The syntax and semantics of presentation descriptions in the Session Description Protocol (SDP) format are introduced, along with a short overview of the MPEG-1 video sequence format. The analysis phase examines the current state of the existing inline MPEG-1 player and the multicast MPEG server; from this, the goals for implementing a new Java applet and an RTSP server are defined, together with the steps by which they are to be reached. The implementation phase describes the development of an RTSP client and an RTP client in Java, and how these clients were used, on the basis of the inline MPEG-1 player, to build a Java applet that can receive and play MPEG-1 video sequences from a media server. The design and implementation of an RTSP media server are also described. The thesis concludes with a ready-to-use system of RTSP server and RTSP client that can be deployed on a web front end without additional client software.
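As a rough illustration of the RTSP request framing the thesis builds on (RFC 2326), the sketch below formats DESCRIBE, SETUP and PLAY requests; the server URL, track name and session id are hypothetical, and no connection is actually made:

```python
def rtsp_request(method, url, cseq, headers=None):
    """Format an RTSP/1.0 request: request line, CSeq, extra headers,
    terminated by an empty line (CRLF framing per RFC 2326)."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    return "\r\n".join(lines) + "\r\n\r\n"

url = "rtsp://mediaserver.example/video.mpg"  # hypothetical media server
describe = rtsp_request("DESCRIBE", url, 1, {"Accept": "application/sdp"})
setup = rtsp_request("SETUP", url + "/track1", 2,
                     {"Transport": "RTP/AVP;unicast;client_port=5004-5005"})
play = rtsp_request("PLAY", url, 3, {"Session": "12345678", "Range": "npt=0-"})
```

A real client would send these over TCP, parse the SDP body of the DESCRIBE reply, and receive the media itself on the RTP ports negotiated in SETUP.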
74

An Ontology-based Multimedia Information Management System

Tarakci, Hilal 01 August 2008 (has links) (PDF)
In order to manage the content of multimedia data, the content must be annotated. Although any user-defined annotation is acceptable, it is preferable for systems to agree on the same annotation format. MPEG-7 is a widely accepted standard for multimedia content annotation. However, in MPEG-7, semantically identical metadata can be represented in multiple ways due to the lack of precise semantics in its XML-based syntax, which prevents metadata interoperability. To overcome this problem, the MPEG-7 standard has been translated into an ontology. In this thesis, the MPEG-7 ontology is used as an upper ontology, and user-defined ontologies are attached to it via a user-friendly interface, so that MPEG-7-based ontologies are built automatically. The proposed system is an ontology-based multimedia information management framework, owing to its modular architecture, its natural integration with domain-specific ontologies, and its automatic harmonization of the MPEG-7 ontology with domain-specific ontologies. Integration is carried out by importing domain ontologies through the same user-friendly interface, which keeps the system independent of any particular application domain.
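The harmonization step might be sketched as follows; this is a minimal stand-in using plain tuples as RDF-style triples rather than a real ontology library, and the namespace URIs and class names are invented for illustration:

```python
# Hypothetical namespace URIs; the real MPEG-7 ontology URI depends on the
# particular MPEG-7-to-OWL translation used by the framework.
MPEG7 = "http://example.org/mpeg7#"
DOMAIN = "http://example.org/soccer#"

def attach_domain_class(graph, domain_class, mpeg7_class):
    """Harmonize a domain-ontology class with the MPEG-7 upper ontology by
    asserting an rdfs:subClassOf link, mirroring the import step described
    in the abstract."""
    graph.add((domain_class, "rdfs:subClassOf", mpeg7_class))

graph = set()
attach_domain_class(graph, DOMAIN + "Goal", MPEG7 + "Event")
attach_domain_class(graph, DOMAIN + "Player", MPEG7 + "Agent")
```

With the subclass links in place, annotations written against the domain ontology can be queried through the MPEG-7 vocabulary, which is what makes the attached ontologies interoperable.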
75

Modélisation et animation interactive de visages virtuels de dessins animés / Modeling and interactive animation of virtual cartoon faces

Monjaux, Perrine 10 December 2007 (has links) (PDF)
The production of 2D animated cartoons, which still follows a workflow established in the 1920s, involves a very large number of human skills and distinct trades. In contrast, the production of 3D computer-animated films, by exploiting the most recent 3D modeling and animation technologies and tools, largely frees itself from this artisanal component and now competes with the traditional cartoon industry in terms of production time and cost.
The challenges facing the 2D cartoon industry are therefore:
1. Reuse of content, following the "create once, render many" access paradigm;
2. Easy exchange and transmission of content, which requires a single representation format;
3. Efficient and economical content production, which calls for automated computer animation.
In this competitive context, this thesis, carried out within the TOON industrial project funded by Quadraxis (www.quadraxis.com) and supported by the Agence Nationale de Valorisation de la Recherche (ANVAR), aims to contribute to the development of a platform for the reconstruction, deformation and animation of 3D face models for 2D cartoons. As the carriers of speech and expression, faces require particular attention so that their modeling and animation conform to the wishes of cartoon creators.
A state of the art of the methods, tools and systems for creating and animating 3D facial models is presented and discussed against the specific constraints that govern the rules of 2D cartoon creation and the traditional production chain. Having identified the technological obstacles to overcome, our contributions are:
- a method for designing 3D virtual faces from, on the one hand, a seamless 3D model suited to animation without discontinuities and, on the other, a set of 2D drawings representing the facial features;
- a procedure for creating key poses, using several non-rigid deformation methods;
- a 3D animation module compliant with the MPEG-4/AFX standard.
These developments, integrated into a first prototype of the FaceTOON platform, show a 20% time saving over the whole production chain while ensuring full application interoperability via the MPEG-4 standard.
76

Model- and image-based scene representation.

January 1999 (has links)
Lee Kam Sum. Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. Includes bibliographical references (leaves 97-101). Abstracts in English and Chinese.
Contents: 1. Introduction (video representation using panorama mosaic and 3D face model; mosaic-based video representation; 3D human face modeling) -- 2. Background (video representation using mosaic images; traditional video compression; 3D face model reconstruction via multiple views: shape from silhouettes, head and face model reconstruction, reconstruction using a generic model) -- 3. System overview (panoramic video coding process; 3D face model reconstruction process) -- 4. Panoramic video representation (mosaic construction and cylindrical projection; foreground segmentation and registration; MPEG-1 compression of the foreground regions with I/P/B frames; video stream reconstruction) -- 5. Three-dimensional human face modeling (capturing images; head shape estimation and model deformation; face organ shaping and positioning; reconstruction with intrinsic and extrinsic parameters; essential matrix estimation and recovery of 3D coordinates; integration of head shape and face organs; texture mapping) -- 6. Experimental results and discussion (compression improvement from foreground extraction; video compression performance; quality of the reconstructed sequence; 3D face model reconstruction) -- 7. Conclusion and future directions -- Bibliography.
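The cylindrical projection used for mosaic construction in chapter 4 of the contents above follows a standard warp; a sketch, assuming the focal length f is given in pixels and image coordinates are measured from the optical centre (this is the textbook formula, not code from the thesis):

```python
import math

def cylindrical_projection(x, y, f):
    """Map an image-plane point (x, y), measured from the image centre, onto a
    cylinder of radius f (focal length in pixels): the standard warp applied
    before stitching frames into a cylindrical panorama mosaic."""
    theta = math.atan2(x, f)      # angle around the cylinder axis
    h = y / math.hypot(x, f)      # normalized height on the cylinder surface
    return f * theta, f * h

# The optical centre maps to the origin of the cylindrical mosaic.
u, v = cylindrical_projection(0.0, 0.0, f=500.0)
```

After each frame is warped this way, pure camera panning reduces to a horizontal translation in the cylindrical domain, which is what makes mosaic registration tractable.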
77

Foreground/background video coding for video conferencing = 應用於視訊會議之前景/後景視訊編碼

January 2002 (has links)
Lee Kar Kin Edwin. Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. Includes bibliographical references (leaves 129-134). Text in English; abstracts in English and Chinese.
Contents: 1. Introduction (reviews of transform-based and content-based video coding; objectives; thesis outline) -- 2. Incorporation of DC coefficient restoration (DCCR) into foreground/background (FB) coding: FB coding in H.263 sequences, DCCR, the combined DCCRFB coder (methodology, implementation, experimental results), and a block selection scheme for DCCRFB coding -- 3. Chin contour estimation on foreground human faces: least-mean-square estimation of chin location; chin contour estimation using a chin edge detector and contour modeling (face segmentation and facial organ extraction, search window identification, edge detection, contour modeling); experimental results -- 4. Wire-frame model deformation and face animation using FAP: wire-frame model selection and FDP generation; global and local deformation; face animation experiments -- 5. Conclusions and future developments -- Appendix A: H.263 bitstream syntax -- Appendix B: excerpt of the FAP specification table -- Bibliography.
78

Robust and efficient techniques for automatic video segmentation.

January 1998 (has links)
by Lam Cheung Fai. Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 174-179). Abstract also in Chinese.
Contents: 1. Introduction (problem definition; motivation; problems: illumination changes and motion in videos, variations in scene characteristics, high algorithm complexity, heterogeneous approaches; objectives and approaches) -- 2. Related work (algorithms for uncompressed videos: pixel-based, histogram-based, motion-based and color-ratio-based; algorithms for compressed videos: JPEG image sequences, MPEG videos, VQ-compressed videos; frame difference analysis for scene cuts and gradual transitions; speedup techniques) -- 3. Analysis and enhancement of existing algorithms (frame difference metrics and analysis methods; pair-wise pixel comparison; color histogram comparison; pair-wise block-based comparison of DCT coefficients; pair-wise pixel comparison of DC-images; global threshold and sliding window detection; enhancements including histogram equalization and the DD and LA methods) -- 4. Color difference histogram (CDH) (definition, sparse distribution and resolution; CDH-based inter-frame similarity measure; computational cost and discriminating power; insensitivity to illumination changes; orientation and motion invariance; scene cut detection performance; time complexity; extension to DCT-compressed images) -- 5. Scene change detection (the DD method for scene cuts, 1-frame transitions and gradual transitions; local thresholding; experimental results for CDH+DD and CDH+DL and for DD on other features) -- 6. Motion vector based approach (MPEG-I video stream format; deriving frame differences from motion vector counts; experiments, enhancements and limitations) -- 7. Conclusion and future work -- Appendix A: sample videos -- Appendix B: list of abbreviations -- Bibliography.
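A simplified version of histogram-based scene cut detection with a global threshold, as surveyed in the thesis, can be sketched as follows (this is the baseline method, not the thesis's CDH or DD refinement):

```python
def histogram_difference(h1, h2):
    """Sum of absolute bin differences between two frame histograms,
    normalized to [0, 1] (0 = identical, 1 = disjoint content)."""
    total = sum(h1)
    return sum(abs(a - b) for a, b in zip(h1, h2)) / (2 * total)

def detect_scene_cuts(histograms, threshold=0.5):
    """Global-threshold scene cut detection over consecutive frame
    histograms: report a cut wherever the difference exceeds the threshold."""
    cuts = []
    for i in range(1, len(histograms)):
        if histogram_difference(histograms[i - 1], histograms[i]) > threshold:
            cuts.append(i)
    return cuts

# Two identical frames, then an abrupt change of content.
frames = [[8, 0, 0, 0], [8, 0, 0, 0], [0, 0, 0, 8]]
cuts = detect_scene_cuts(frames)
```

The weaknesses the thesis targets are visible even here: a global illumination change shifts every bin and can exceed the threshold without any real cut, which motivates the illumination-insensitive color difference histogram and the adaptive DD analysis.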
79

Error-resilient coding tools in MPEG-4.

January 1998 (has links)
by Cheng Shu Ling. Thesis submitted July 1997; Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 70-71). Abstract also in Chinese.
Contents: 1. Introduction (the JPEG image coding standard; the MPEG video coding standard: history, compression algorithm overview, further features) -- 2. Error resiliency (traditional approaches: channel coding, ARQ, multi-layer coding, error concealment; MPEG-4 work on error resilience: resynchronization, data recovery, error concealment) -- 3. Fixed-length codes (Tunstall code; Lempel-Ziv codes LZ-77 and LZ-78; simulation setup, results and concluding remarks) -- 4. Self-synchronizable codes (Scholtz synchronizable code: definition, construction procedure, synchronizer, effects of errors; simulation setup, results and concluding remarks) -- 5. Conclusions -- References.
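The Tunstall code covered in chapter 3 maps variable-length source strings to fixed-length codewords, which is what makes it attractive for error resilience: a bit error corrupts one codeword without shifting the parse. A minimal construction sketch, under the standard greedy expansion rule:

```python
import heapq

def tunstall(probs, codeword_bits):
    """Build a Tunstall dictionary for a memoryless source: repeatedly expand
    the most probable parse-tree leaf into its |alphabet| children until the
    2**codeword_bits codeword table has no more room, then assign each leaf a
    fixed-length binary codeword."""
    max_leaves = 2 ** codeword_bits
    # Max-heap of leaves, keyed on negated probability.
    heap = [(-p, sym) for sym, p in probs.items()]
    heapq.heapify(heap)
    leaves = len(probs)
    while leaves + len(probs) - 1 <= max_leaves:
        neg_p, word = heapq.heappop(heap)   # most probable leaf
        leaves += len(probs) - 1            # it splits into |alphabet| children
        for sym, p in probs.items():
            heapq.heappush(heap, (neg_p * p, word + sym))
    words = sorted(word for _, word in heap)
    return {w: format(i, f"0{codeword_bits}b") for i, w in enumerate(words)}

table = tunstall({"a": 0.7, "b": 0.3}, codeword_bits=2)
```

For the skewed source above the dictionary contains longer strings of the frequent symbol, so on average each fixed-length codeword covers more than one source symbol.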
80

Impact of Acknowledgments on Application Performance in 4G LTE Networks

Levasseur, Brett Michael 21 May 2014 (has links)
4G LTE is a new cellular network standard designed to provide both the capacity and the Quality of Service (QoS) needed to support multimedia applications. Recent research in LTE has explored modifications to the current QoS setup, creating MAC-layer schedulers and modifying the QoS architecture. What has not been fully explored, however, is the effect of LTE retransmission choices and capabilities on QoS. This thesis examines the impact of using acknowledgments to recover data lost over the wireless interface on VoIP, FTP and MPEG video applications. Issues explored include the interaction between application performance, network transport protocols, LTE acknowledgment mode, and wireless conditions. Simulations show that LTE retransmissions improve FTP throughput by 0.1 to 0.8 Mb/s. For delay-sensitive applications such as VoIP and video, the benefit of retransmissions depends on the loss rate. When the wireless loss rate is below 20%, VoIP performs similarly with and without LTE retransmissions; at higher loss rates, LTE retransmissions add delay, degrading VoIP quality by 71%. For UDP video, the choice of retransmissions makes little difference when the wireless loss rate is below 10%; at higher loss rates, frame arrival delay increases by up to 539% with LTE retransmissions, while the video frame rate decreases by up to 34% without them. LTE providers should therefore configure their networks with retransmission policies appropriate for the type of application traffic: this thesis shows that VoIP, FTP and video each require different configurations in the LTE network layers.
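The delay-versus-loss trade-off the thesis measures can be illustrated with a toy stop-and-wait ARQ model; this is a deliberate simplification of LTE RLC acknowledged mode, with illustrative parameters rather than values from the simulations:

```python
def expected_delivery(loss_rate, tx_delay_ms, max_retx):
    """Expected per-packet delay (conditioned on eventual success) and
    residual loss for a simple stop-and-wait ARQ model: each attempt takes
    tx_delay_ms and independently fails with probability loss_rate."""
    delay = 0.0
    p_reach_attempt = 1.0
    for attempt in range(max_retx + 1):
        p_success_here = p_reach_attempt * (1 - loss_rate)
        delay += p_success_here * (attempt + 1) * tx_delay_ms
        p_reach_attempt *= loss_rate
    residual_loss = p_reach_attempt          # every attempt failed
    delivered = 1 - residual_loss
    return delay / delivered, residual_loss

d0, l0 = expected_delivery(0.2, 10.0, max_retx=0)   # no link retransmissions
d3, l3 = expected_delivery(0.2, 10.0, max_retx=3)   # acknowledged-mode style
```

Even this crude model reproduces the qualitative result above: retransmissions drive residual loss toward zero at the cost of higher and more variable delay, which helps FTP but can hurt VoIP and video once the loss rate is high.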
