1

THE DYNAMICS AND INTRICACIES OF 3DTV BROADCASTING – A Survey : 3DTV Broadcasting

Mandadi, Vittal Reddy, January 2014
It is predicted that three-dimensional television (3DTV) will enter the market within ten years. 3D imaging and display is one of the most important applications of information systems, with a broad range of uses in computer displays, TV, video, robotics, metrology, reconnaissance, and medicine. This master thesis project focuses on a review of the state of the art in 3DTV technologies, challenges, and possible approaches, including, for example but not limited to:
- Standards and specifications for 3DTV cameras, signal processing, compression, transmission, and displays
- Established research institutes, companies, organizations, international conferences, journals, magazines, and well-known projects in the area of 3DTV worldwide
- Products, prototypes, and platforms for 3DTV, including the names, producers, brief descriptions of the products or prototypes, key features and data sheets, as well as the price
- 3D research methodologies and performance measures
- 3DTV capture and representation techniques
- 3DTV coding techniques, mainly compression
- 3DTV transmission techniques (mainly TV broadcasting, but also internet and mobile)
- 3DTV display techniques
- 3DTV rendering techniques
- Forward and backward compatibility
In addition, an optional task of the project is to estimate the implementation complexity of the 3D encoding, processing, transmission, receiving, and re-representation algorithms that are currently most popular and that may be used in the future.
2

Space carving of multi-view video plus depth sequences for the representation and transmission of 3DTV and FTV content

Alj, Youssef, 16 May 2013
3D video has attracted growing interest in recent years. Thanks to the recent development of stereoscopic and auto-stereoscopic displays, 3D video provides a realistic depth perception to the user and allows virtual navigation around the observed scene. Nevertheless, several technical challenges remain. These challenges relate either to scene acquisition and representation on the one hand, or to data transmission on the other. In the context of natural scene representation, research activities have been strengthened worldwide in order to address these issues. The methods proposed in the literature can be image-based, geometry-based, or representations combining both image and geometry. The approach adopted in this thesis is a hybrid method that relies on Multi-view Video plus Depth (MVD) sequences, in order to preserve the photorealism of the observed scene, combined with a geometric model based on a triangular mesh, thereby enforcing the compactness of the representation. We assume that the provided depth maps are reliable and that the cameras used during acquisition are calibrated, so the camera parameters are known, but the corresponding images are not necessarily rectified. We therefore consider the general case in which the cameras may be either parallel or convergent. The contributions of this thesis are the following. First, a volumetric scheme dedicated to fusing the depth maps into a single surface mesh is proposed. Second, a new multi-view texture mapping scheme is proposed. Finally, after these two modeling steps, we address the transmission itself and compare the performance of the proposed modeling scheme with a scheme based on the MPEG-MVC standard, the state of the art in multi-view video compression.
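To make the volumetric fusion idea above more concrete, here is a rough Python sketch of a generic truncated-signed-distance fusion of calibrated depth maps followed by mesh extraction. It only illustrates the general technique, not the thesis' actual scheme; the camera model, volume bounds, grid resolution and truncation distance are assumed example values.

```python
# Illustrative sketch (not the thesis' exact scheme): fuse calibrated depth maps
# into a volumetric signed-distance field, then extract a triangle mesh from it.
import numpy as np
from skimage.measure import marching_cubes  # assumes scikit-image is available

def fuse_depth_maps(depth_maps, intrinsics, extrinsics, grid_res=64,
                    bounds=((-1, 1), (-1, 1), (0, 2)), trunc=0.05):
    """Accumulate truncated signed distances from several calibrated depth maps."""
    xs, ys, zs = [np.linspace(lo, hi, grid_res) for lo, hi in bounds]
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    pts = np.stack([X, Y, Z, np.ones_like(X)], axis=-1).reshape(-1, 4)  # homogeneous

    tsdf = np.zeros(len(pts))
    weight = np.zeros(len(pts))
    for depth, K, T in zip(depth_maps, intrinsics, extrinsics):
        cam = (T @ pts.T).T[:, :3]               # world -> camera coordinates (T is 4x4)
        z = cam[:, 2]
        uv = (K @ cam.T).T                       # pinhole projection (K is 3x3)
        u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
        v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
        h, w = depth.shape
        valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        d = np.full(len(pts), np.nan)
        d[valid] = depth[v[valid], u[valid]]
        sdf = np.clip(d - z, -trunc, trunc) / trunc   # truncated signed distance, in [-1, 1]
        ok = valid & ~np.isnan(sdf)
        tsdf[ok] += sdf[ok]
        weight[ok] += 1.0

    tsdf = np.where(weight > 0, tsdf / np.maximum(weight, 1), 1.0)  # unseen voxels = outside
    volume = tsdf.reshape(grid_res, grid_res, grid_res)
    verts, faces, _, _ = marching_cubes(volume, level=0.0)          # zero crossing = surface
    return verts, faces
```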
3

A Scalable Coding Approach for High Quality Depth Image Compression

Li, Yun, Sjöström, Mårten, Jennehag, Ulf, Olsson, Roger, January 2012
The distortion introduced by traditional video encoders (e.g. H.264) at depth discontinuities can cause disturbing effects in the synthesized view. The proposed scheme aims at preserving the most significant depth transitions for better view synthesis. Furthermore, it has a scalable structure. The scheme extracts edge contours from a depth image and represents them by chain code. The chain code and the sampled depth values on each side of the edge contour are encoded by differential and arithmetic coding. The depth image is reconstructed by diffusion of edge samples and uniform sub-samples from the low-quality depth image. At low bit rates, the proposed scheme outperforms HEVC intra at the edges in the synthesized views, which correspond to the significant discontinuities in the depth image. The overall quality is also better with the proposed scheme at low bit rates for content with distinct depth transitions. © 2012 IEEE.
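As an illustration of the chain-code representation mentioned above (the paper's exact codec is not reproduced here), the sketch below converts an ordered edge contour into a Freeman chain code and then differentially codes it, producing the kind of low-entropy symbol stream an arithmetic coder would subsequently compress.

```python
# Minimal sketch: Freeman chain code of an ordered edge contour plus differential coding.
import numpy as np

# 8-connected neighbour offsets (row, col) -> chain-code symbols 0..7
OFFSET_TO_CODE = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
                  (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def freeman_chain_code(contour):
    """contour: list of (row, col) integer pixel positions along a traced edge."""
    return [OFFSET_TO_CODE[(r1 - r0, c1 - c0)]
            for (r0, c0), (r1, c1) in zip(contour[:-1], contour[1:])]

def differential_code(codes):
    """Differences between consecutive symbols, modulo 8; small values dominate,
    which suits subsequent entropy (e.g. arithmetic) coding."""
    return [codes[0]] + [(b - a) % 8 for a, b in zip(codes[:-1], codes[1:])]

# Usage on a small hand-made contour
contour = [(5, 5), (5, 6), (5, 7), (4, 8), (3, 8), (2, 8)]
chain = freeman_chain_code(contour)           # [0, 0, 1, 2, 2]
print(chain, differential_code(chain))        # [0, 0, 1, 2, 2] [0, 0, 1, 1, 0]
```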
4

View Rendering for 3DTV

Muddala, Suryanarayana Murthy, January 2013
Three-dimensional (3D) technologies are advancing rapidly. Three-Dimensional Television (3DTV) aims at creating a 3D experience for the home user. Moreover, multiview autostereoscopic displays provide a depth impression without requiring any special glasses and can be viewed from multiple locations. One of the key issues in the 3DTV processing chain is content generation from the available input data formats: video plus depth and multiview video plus depth. This data allows virtual views to be produced using depth-image-based rendering. Although depth-image-based rendering is an efficient method, it is known for the appearance of artifacts such as cracks, coronas and empty regions in rendered images. While several approaches have tackled the problem, reducing the artifacts in rendered images is still an active field of research.

Two problems are addressed in this thesis in order to achieve better 3D video quality in the context of view rendering: firstly, how to improve the quality of rendered views using a direct approach (i.e. without applying specific processing steps for each artifact), and secondly, how to fill large missing areas in a visually plausible manner using neighbouring details from around the missing regions. This thesis introduces a new depth-image-based rendering method and a depth-based texture inpainting method in order to address these two problems. The first problem is solved by an edge-aided rendering method that relies on the principles of forward warping and one-dimensional interpolation. The second problem is addressed by a depth-included curvature inpainting method that uses texture details at the appropriate depth level around disocclusions.

The proposed edge-aided rendering and depth-included curvature inpainting methods are evaluated and compared with state-of-the-art methods. The results show an increase in objective quality and a visual gain over the reference methods. The quality gain is encouraging, as the edge-aided rendering method omits the specific processing steps used to remove rendering artifacts. Moreover, the results show that large disocclusions can be effectively filled using the depth-included curvature inpainting approach. Overall, the proposed approaches improve content generation for 3DTV and, additionally, for free viewpoint television.
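As background to the forward warping the thesis builds on, the sketch below shows a plain forward-warping baseline for a rectified, parallel camera setup: each pixel is shifted by its disparity and a z-buffer keeps the closest surface. It is not the edge-aided method itself, and the focal length and baseline are assumed example values.

```python
# Illustrative forward-warping baseline for DIBR (rectified, parallel cameras assumed).
import numpy as np

def forward_warp(color, depth, focal=525.0, baseline=0.05):
    """Shift each pixel by disparity = focal * baseline / depth; holes remain unfilled."""
    h, w, _ = color.shape
    virtual = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf)                 # z-buffer: closest surface wins
    rows, cols = np.mgrid[0:h, 0:w]
    disparity = focal * baseline / np.maximum(depth, 1e-6)
    target = np.round(cols - disparity).astype(int)
    for r, c, t in zip(rows.ravel(), cols.ravel(), target.ravel()):
        if 0 <= t < w and depth[r, c] < zbuf[r, t]:
            zbuf[r, t] = depth[r, c]
            virtual[r, t] = color[r, c]
    return virtual, zbuf                           # pixels with zbuf == inf are holes/cracks
```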
5

View synthesis with depth-image-based rendering (DIBR)

Oliveira, Adriano Quilião de, January 2016
This dissertation investigates solutions to the general problem of generating synthetic views from a set of images using the depth-image-based rendering (DIBR) approach. This approach uses a compact format for the 3D image representation, composed basically of two images: a color image for the reference view and a grayscale image with the disparity available for each pixel. Solutions to this problem benefit applications such as Free Viewpoint Television. The biggest challenge is filling in regions without projection information in the new viewpoint, usually called holes, as well as handling other artifacts such as cracks and ghosts that occur due to occlusions and errors in the disparity map. In this dissertation we present techniques for the removal and treatment of each of these classes of potential artifacts. The set of proposed methods shows improved results compared to the current state of the art in synthetic view generation with the DIBR model on the Middlebury dataset, considering the SSIM and PSNR metrics.
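As a minimal illustration of how the holes mentioned above can be treated, the following baseline fills unwritten pixels from the horizontally adjacent pixel with the larger depth (the background side). This is a common simple baseline, not the dissertation's method; the inputs are a warped view and its depth buffer, with holes marked by infinite depth.

```python
# Minimal hole-filling baseline: propagate the background-side neighbour along each row.
import numpy as np

def fill_holes_background(virtual, zbuf):
    """virtual: warped color image (H, W, 3); zbuf: per-pixel depth, inf where unwritten."""
    filled = virtual.copy()
    h, w = zbuf.shape
    for r in range(h):
        for c in range(w):
            if np.isinf(zbuf[r, c]):                       # unwritten -> hole
                left, right = c - 1, c + 1
                while left >= 0 and np.isinf(zbuf[r, left]):
                    left -= 1
                while right < w and np.isinf(zbuf[r, right]):
                    right += 1
                candidates = [i for i in (left, right) if 0 <= i < w]
                if candidates:
                    # prefer the neighbour that lies farther away (the background)
                    src = max(candidates, key=lambda i: zbuf[r, i])
                    filled[r, c] = filled[r, src]
    return filled
```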
6

Edge-aided virtual view rendering for multiview video plus depth

Muddala, Suryanarayana Murthy, Sjöström, Mårten, Olsson, Roger, Tourancheau, Sylvain, January 2013
Depth-Image-Based Rendering (DIBR) of virtual views is a fundamental method in three-dimensional (3D) video applications for producing different perspectives from texture and depth information, in particular from the multi-view plus depth (MVD) format. Artifacts are still present in virtual views as a consequence of imperfect rendering using existing DIBR methods. In this paper, we propose an alternative DIBR method for MVD. In the proposed method we introduce an edge pixel and interpolate pixel values in the virtual view using the actual projected coordinates from two adjacent views, by which cracks and disocclusions are automatically filled. In particular, we propose a method to merge pixel information from the two adjacent views in the virtual view before the interpolation; we apply a weighted averaging of projected pixels within the range of one pixel in the virtual view. We compared virtual view images rendered by the proposed method to the corresponding view images rendered by state-of-the-art methods. Objective metrics demonstrated an advantage of the proposed method for most of the investigated media contents. Subjective test results showed preference for different methods depending on media content, and the test could not demonstrate a significant difference between the proposed method and the state-of-the-art methods.
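The sketch below illustrates, under assumed details, the kind of weighted merge described in the abstract: samples projected from the reference views land at sub-pixel positions in a virtual-view row, contribute to neighbouring integer pixels with weights that fall to zero one pixel away, and are then normalized. It is an illustrative reconstruction, not the authors' implementation.

```python
# Sketch: merge projected samples from adjacent views into one virtual-view row
# by weighted averaging within a one-pixel support.
import numpy as np

def merge_projected_row(width, positions, colors):
    """positions: sub-pixel x-coordinates of projected samples from both reference views;
    colors: corresponding color values, shape (N, 3)."""
    acc = np.zeros((width, 3))
    wsum = np.zeros(width)
    for x, col in zip(positions, colors):
        base = int(np.floor(x))
        for px in (base, base + 1):                      # the two pixels within one-pixel range
            if 0 <= px < width:
                w = max(0.0, 1.0 - abs(x - px))          # weight falls to zero one pixel away
                acc[px] += w * np.asarray(col, dtype=float)
                wsum[px] += w
    out = np.where(wsum[:, None] > 0, acc / np.maximum(wsum[:, None], 1e-9), 0.0)
    return out, wsum                                     # wsum == 0 marks remaining gaps
```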
7

Depth Map Upscaling for Three-Dimensional Television : The Edge-Weighted Optimization Concept

Schwarz, Sebastian, January 2012
With the recent comeback of three-dimensional (3D) movies to the cinemas, there have been increasing efforts to spread the commercial success of 3D to new markets. The possibility of a 3D experience at home, such as three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Scene depth information plays a crucial role in all parts of the distribution chain, from content capture via transmission to the actual 3D display. This depth information is transmitted in the form of depth maps and is accompanied by corresponding video frames, i.e. for Depth Image Based Rendering (DIBR) view synthesis. Nonetheless, scenarios do exist for which the original spatial resolutions of depth maps and video frames do not match, e.g. sensor-driven depth capture or asymmetric 3D video coding. This resolution discrepancy is a problem, since DIBR requires accordance between the video frame and the depth map. A considerable amount of research has been conducted into ways to match low-resolution depth maps to high-resolution video frames. Many proposed solutions utilize corresponding texture information in the upscaling process; however, they mostly fail to check this information for validity. In the pursuit of better 3DTV quality, this thesis presents the Edge-Weighted Optimization Concept (EWOC), a novel texture-guided depth upscaling approach that addresses the lack of information validation. EWOC uses edge information from video frames as guidance in the depth upscaling process and, additionally, confirms this information against the original low-resolution depth. Over the course of four publications, EWOC is applied to 3D content creation and distribution. Various guidance sources, such as different color spaces or texture pre-processing, are investigated. An alternative depth compression scheme, based on depth map upscaling, is proposed, and extensions for increased visual quality and computational performance are presented in this thesis. EWOC was evaluated and compared with competing approaches, with the main focus consistently on the visual quality of rendered 3D views. The results show an increase in both objective and subjective visual quality compared to state-of-the-art depth map upscaling methods. This quality gain motivates the choice of EWOC in applications affected by low-resolution depth. In the end, EWOC can improve 3D content generation and distribution, enhancing the 3D experience to boost the commercial success of 3DTV.
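To illustrate the general idea of texture-guided depth upscaling (this is a simplified joint-bilateral-style stand-in, not EWOC itself), the sketch below upscales a low-resolution depth map and re-estimates each pixel as an average weighted by luminance similarity in the high-resolution guide frame; the scale factor, kernel radius and edge sensitivity are assumed example parameters, and the guide is assumed to have exactly the upscaled resolution.

```python
# Simplified texture-guided depth upscaling sketch: nearest-neighbour upscale,
# then edge-aware averaging steered by the high-resolution luminance guide.
import numpy as np

def edge_weighted_upscale(depth_lr, guide, scale=2, radius=2, sigma_edge=10.0):
    """depth_lr: low-resolution depth map; guide: high-resolution grayscale frame
    with shape (depth_lr.shape[0]*scale, depth_lr.shape[1]*scale)."""
    h, w = guide.shape
    depth = np.kron(depth_lr, np.ones((scale, scale)))[:h, :w]   # naive replication upscale
    out = np.empty_like(depth)
    for r in range(h):
        for c in range(w):
            r0, r1 = max(0, r - radius), min(h, r + radius + 1)
            c0, c1 = max(0, c - radius), min(w, c + radius + 1)
            patch = depth[r0:r1, c0:c1]
            diff = guide[r0:r1, c0:c1].astype(float) - float(guide[r, c])
            wgt = np.exp(-(diff ** 2) / (2.0 * sigma_edge ** 2))  # small weight across edges
            out[r, c] = np.sum(wgt * patch) / np.sum(wgt)
    return out
```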
