Spelling suggestions: "subject:"free viewpoint"" "subject:"tree viewpoint""
1 |
A Cluster based Free Viewpoint Video System using Region-tree based Scene Reconstruction
Lei, Cheng (Unknown Date)
No description available.
|
2 |
Action History Volume for Spatiotemporal Editing of 3D Video in Multi-party Interaction Scenes / 複数人物インタラクションシーンにおけるAction History Volumeを用いた3次元ビデオの時空間編集
Shi, Qun, 24 September 2014 (has links)
Kyoto University / 0048 / Doctoral program (new system) / Doctor of Informatics / 甲第18615号 / 情博第539号 / 新制||情||96 (University Library) / 31515 / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / Examiners: Prof. 松山 隆司 (chair), Prof. 美濃 導彦, Assoc. Prof. 中澤 篤志, Lect. 延原 章平 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
|
3 |
Object Extraction for Virtual-viewpoint Video Synthesis / 仮想視点映像の合成を目的としたオブジェクト抽出
Sankoh, Hiroshi, 25 May 2015 (has links)
Kyoto University / 0048 / Doctoral program (new system) / Doctor of Informatics / 甲第19202号 / 情博第586号 / 新制||情||102 (University Library) / 32194 / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / Examiners: Prof. 美濃 導彦 (chair), Prof. 松山 隆司, Prof. 田中 克己 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
|
4 |
Novel Image Interpolation Schemes with Applications to Frame Rate Conversion and View Synthesis
Rezaee Kaviani, Hoda, January 2018 (has links)
Image interpolation is the process of generating a new image from a set of available images. The available images may be taken with one camera at different times, or with multiple cameras from different viewpoints. The interpolation problem in the first scenario is usually called Frame Rate-Up Conversion (FRUC), and in the second, view synthesis.
This thesis focuses on image interpolation and addresses both the FRUC and view synthesis problems. We propose a novel FRUC method using optical flow motion estimation and a patch-based reconstruction scheme. FRUC interpolates new frames between the original frames of a video, increasing the frame rate and improving motion continuity.
In our approach, forward and backward motion vectors are first obtained using an optical flow algorithm, and reconstructed versions of the current and previous frames are generated by our patch-based reconstruction scheme.
Using the original and reconstructed versions of the current and previous frames, two mismatch masks are obtained. Then two versions of the middle frame are generated using a patch-based scheme, with the estimated motion vectors and the current and previous frames. Finally, a middle mask, which identifies the mismatch areas between the two middle frames, is constructed. Using these three masks, the best candidates for interpolation are selected and fused to obtain the final middle frame.
Due to the patch-based nature of our interpolation scheme, most of the holes and cracks are filled by overlapping patches.
Although there is always a chance of holes remaining, they are fewer and smaller than those produced by pixel-based mapping. The rare remaining holes are filled using existing hole-filling algorithms. With fewer and smaller holes, simpler hole-filling algorithms can be applied and the overall complexity of the required post-processing decreases.
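The bidirectional, patch-based middle-frame synthesis described above can be sketched as follows. This is a toy NumPy illustration under simplifying assumptions (single-channel frames, a precomputed dense flow field, fixed-size patches, a trivial averaging fallback for holes); the function name and parameters are illustrative, not the thesis's actual implementation:

```python
import numpy as np

def interpolate_middle_frame(prev, curr, fwd_flow, patch=4):
    """Synthesize the frame halfway between prev and curr by splatting
    patches from both frames along (halved) motion vectors.
    fwd_flow[y, x] = (dy, dx) maps prev toward curr."""
    h, w = prev.shape
    acc = np.zeros((h, w), dtype=np.float64)   # accumulated intensities
    wgt = np.zeros((h, w), dtype=np.float64)   # overlap counts
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            dy, dx = fwd_flow[y, x]            # flow at the patch origin
            # half-step forward from prev, half-step backward from curr
            for src, sy, sx in ((prev, y + dy // 2, x + dx // 2),
                                (curr, y - dy // 2, x - dx // 2)):
                sy, sx = int(sy), int(sx)
                if 0 <= sy <= h - patch and 0 <= sx <= w - patch:
                    acc[sy:sy+patch, sx:sx+patch] += src[y:y+patch, x:x+patch]
                    wgt[sy:sy+patch, sx:sx+patch] += 1.0
    hole = wgt == 0                            # overlapping patches fill most pixels
    acc[~hole] /= wgt[~hole]
    acc[hole] = 0.5 * (prev[hole] + curr[hole])  # trivial fallback for rare holes
    return acc
```

The overlap counter `wgt` is where the hole-reduction effect of patch-based splatting shows up: any pixel covered by at least one patch needs no separate hole filling.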
View synthesis is the process of generating a new (virtual) view using available ones. Depending on the amount of available geometric information, view synthesis techniques can be divided into three categories: Image Based Rendering (IBR), Depth Image Based Rendering (DIBR), and Model Based Rendering (MBR).
We introduce an adaptive patch-based scheme for IBR. This patch-based scheme reduces the size and number of holes during reconstruction. The patch size is chosen based on edge information for better reconstruction, especially near boundaries. In the first stage of the algorithm, disparity is obtained using optical flow estimation. Then reconstructed versions of the left and right views are generated using our adaptive patch-based algorithm. The mismatches between each view and its reconstructed version are obtained in the mismatch detection steps.
This stage outputs two masks, which help with the refinement of disparities and the selection of the best patches for the final synthesis. Finally, the remaining holes are filled using our simple hole-filling scheme and the refined disparities. The adaptive version still benefits from the overlapping effect of the patches for hole reduction. However, compared with our fixed-size version, it gives better reconstruction near edges and object boundaries, and inside highly textured areas.
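The edge-driven patch-size selection can be illustrated with a minimal sketch: small patches where local gradient energy is high, large ones in flat regions. The sizes and threshold below are arbitrary placeholders, not the thesis's settings:

```python
import numpy as np

def adaptive_patch_size(image, y, x, small=2, large=8, edge_thresh=10.0):
    """Pick a patch size for position (y, x): small patches where local
    gradient energy is high (edges / texture), large ones in flat regions.
    Thresholds and sizes are illustrative only."""
    h, w = image.shape
    y0, y1 = max(0, y - large // 2), min(h, y + large // 2)
    x0, x1 = max(0, x - large // 2), min(w, x + large // 2)
    win = image[y0:y1, x0:x1].astype(np.float64)
    gy, gx = np.gradient(win)              # finite-difference gradients
    energy = np.mean(np.hypot(gy, gx))     # mean gradient magnitude
    return small if energy > edge_thresh else large
```

Smaller patches near edges let the reconstruction follow object boundaries, while larger patches in flat areas keep the overlap-based hole reduction cheap.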
We also propose an adaptive patch-based scheme for DIBR. The proposed method avoids unnecessary warping, which is a computationally expensive step in DIBR. We divide nearby views into blocks and warp only the center of each block. For better reconstruction near edges and depth discontinuities, the block size is selected adaptively. In the blending step, an approach is introduced to calculate and refine the blending weights. Many existing DIBR schemes warp all pixels of nearby views during interpolation, which is unnecessary. We show that with our adaptive patch-based scheme it is possible to reduce the number of required warping operations without degrading the overall quality compared with existing schemes. / Thesis / Doctor of Philosophy (PhD)
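The block-center warping idea above (one depth-to-disparity computation per block instead of per pixel) can be sketched as follows; fixed-size blocks, a purely horizontal camera shift, and the last-write-wins occlusion handling are simplifications of my own, not the thesis's method:

```python
import numpy as np

def warp_view_blockwise(src, depth, baseline, focal, block=4):
    """Forward-warp src toward a horizontally shifted virtual view,
    computing the disparity only at each block's center pixel and
    translating the whole block by it. Parameters are illustrative."""
    h, w = src.shape
    out = np.zeros_like(src)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            cy, cx = y + block // 2, x + block // 2
            # disparity from the pinhole model at the block center only
            d = focal * baseline / max(depth[min(cy, h-1), min(cx, w-1)], 1e-6)
            tx = x + int(round(d))             # horizontal shift only
            if 0 <= tx <= w - block:
                out[y:y+block, tx:tx+block] = src[y:y+block, x:x+block]
                filled[y:y+block, tx:tx+block] = True
    return out, filled                         # holes are where filled is False
```

For a `block` of 4, this performs 1/16 as many disparity computations as per-pixel warping, which is the cost saving the abstract refers to.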
|
5 |
Real-time Arbitrary View Rendering From Stereo Video And Time-of-flight Camera
Ates, Tugrul Kagan, 01 January 2011 (has links) (PDF)
Generating in-between images from multiple views of a scene is a crucial task for both the computer vision and computer graphics fields. Photorealistic rendering, 3DTV and robot navigation are some of the many applications that benefit from arbitrary view synthesis, provided it is achieved in real time. Most modern commodity computer architectures include programmable processing chips, called Graphics Processing Units (GPUs), which are specialized in rendering computer-generated images. These devices achieve high computational throughput by processing arrays of data in parallel, which makes them ideal for real-time computer vision applications. This thesis focuses on an arbitrary view rendering algorithm that uses two high-resolution color cameras along with a single low-resolution time-of-flight depth camera, matched to the programming paradigms of GPUs to achieve real-time processing rates. The proposed method is divided into two stages: depth estimation through fusion of stereo vision and time-of-flight measurements forms the data acquisition stage, and the second stage is intermediate view rendering from 3D representations of the scene. The ideas presented are examined in a common experimental framework and the practical results attained are put forward. Based on the experimental results, it can be concluded that it is possible to realize the content production and display stages of a free-viewpoint system in real time using only low-cost commodity computing devices.
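The fusion stage can be caricatured as a per-pixel blend of the two depth sources: the low-resolution time-of-flight map is upsampled to stereo resolution and mixed in wherever stereo matching is unreliable. This is a minimal NumPy sketch with an assumed per-pixel stereo confidence in [0, 1]; the thesis performs the fusion on the GPU with a more elaborate model:

```python
import numpy as np

def fuse_depths(stereo_depth, tof_depth_lowres, stereo_conf):
    """Fuse a dense stereo depth map with a low-resolution time-of-flight
    depth map: nearest-neighbour upsample the ToF map to stereo
    resolution, then blend per pixel by the stereo confidence."""
    h, w = stereo_depth.shape
    lh, lw = tof_depth_lowres.shape
    ys = np.arange(h) * lh // h                # nearest-neighbour row map
    xs = np.arange(w) * lw // w                # nearest-neighbour column map
    tof_up = tof_depth_lowres[np.ix_(ys, xs)]  # upsampled ToF depth
    return stereo_conf * stereo_depth + (1.0 - stereo_conf) * tof_up
```

In textureless regions, where stereo confidence drops toward zero, the fused depth falls back to the time-of-flight measurement.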
|
6 |
3D Image Processing and Communication in Camera Sensor Networks: Free Viewpoint Television Networking
Teratani, Mehrdad, 09 1900 (links) (PDF)
info:eu-repo/semantics/nonPublished
|
7 |
Renderização interativa de câmeras virtuais a partir da integração de múltiplas câmeras esparsas por meio de homografias e decomposições planares da cena / Interactive virtual camera rendering from multiple sparse cameras using homographies and planar scene decompositions
Silva, Jeferson Rodrigues da, 10 February 2010 (has links)
Image-based rendering techniques allow the synthesis of novel scene views from a set of images of the scene, acquired from different viewpoints. By extending these techniques to make use of videos, we can allow navigation in time and space of a scene acquired by multiple cameras. In this work, we tackle the problem of generating novel photorealistic views of dynamic scenes, containing independently moving objects, from videos acquired by multiple cameras with different viewpoints. The challenges presented by the problem include the fusion of images from multiple cameras while minimizing the brightness and color differences between them, the detection and extraction of the moving objects, and the rendering of novel views combining a static scene model with approximate models of the moving objects. It is also important to be able to generate novel views at interactive frame rates, allowing a user to navigate and interact with the rendered scene. The applications of these techniques are diverse and include entertainment, with interactive digital television that lets the user choose the viewpoint while watching movies or sports events, and virtual-reality training simulations, where it is important to have realistic scenes reconstructed from real ones. We present a color calibration algorithm for minimizing the color and brightness differences between images acquired from cameras whose colors were not calibrated. We also describe a method for interactive novel view rendering of dynamic scenes that provides novel views with quality similar to that of the scene videos.
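A common baseline for this kind of inter-camera color calibration is a per-channel linear (gain and offset) mapping fitted by least squares over corresponding pixels in the overlapping region. The sketch below shows that baseline, as a simplified stand-in for the thesis's algorithm, not a reproduction of it:

```python
import numpy as np

def fit_gain_offset(src_vals, ref_vals):
    """Least-squares fit of ref ≈ g * src + o for one color channel.
    src_vals, ref_vals: (N,) arrays of corresponding pixel intensities
    sampled from the region seen by both cameras."""
    A = np.stack([src_vals, np.ones_like(src_vals)], axis=1)
    (g, o), *_ = np.linalg.lstsq(A, ref_vals, rcond=None)
    return g, o

def apply_gain_offset(img, g, o):
    """Apply the fitted mapping, clipping to the valid 8-bit range."""
    return np.clip(g * img.astype(np.float64) + o, 0, 255)
```

Fitting one (g, o) pair per channel and per camera brings every stream toward a common reference camera's color response before the views are fused.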
|
8 |
Free Viewpoint TV
Hussain, Mudassar, January 2010 (has links)
This thesis concerns free viewpoint TV, in which users can switch between multiple streams to find views of their own choice. The goal is to provide fast switching between streams, so that users experience less delay when changing views. In this thesis we discuss different video stream switching methods in detail, and then the issues related to those methods, including transmission and switching. We also discuss different scenarios for fast stream switching that make the service more interactive by minimizing delays. Stream switching time differs between live and recorded events. Quality of service (QoS) is another factor to consider, and it can be improved by assigning priorities to packets. We discuss simultaneous stream transmission methods that rely on prediction and on reduced-quality streams to provide fast switching. We present a prediction algorithm for viewpoint prediction, propose a system model for fast viewpoint switching, and evaluate simultaneous stream transmission methods for free viewpoint TV. Finally, we draw conclusions and propose future work. / Degree project
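The prediction-based prefetching idea can be sketched with a deliberately simple stand-in predictor: assume the user keeps panning in the same direction (constant velocity over viewpoint indices) and buffer the predicted stream plus its neighbours, possibly at reduced quality. Both functions are hypothetical illustrations, not the thesis's actual algorithm:

```python
def predict_next_view(history):
    """Constant-velocity prediction of the next viewpoint index from the
    last two viewpoints the user selected."""
    if len(history) < 2:
        return history[-1] if history else 0
    return history[-1] + (history[-1] - history[-2])

def streams_to_prefetch(history, n_views, margin=1):
    """Prefetch the predicted view plus `margin` neighbours on each side,
    clamped to the available camera range, so a view switch can land on
    an already buffered stream instead of waiting for a new one."""
    p = predict_next_view(history)
    lo = max(0, p - margin)
    hi = min(n_views - 1, p + margin)
    return list(range(lo, hi + 1))
```

When the prediction is right, switching latency is only the local buffer swap; when it is wrong, the system falls back to requesting the stream, which is the trade-off the evaluation in the thesis measures.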
|
9 |
From images to point clouds: practical considerations for three-dimensional computer vision
Herrera Castro, D. (Daniel), 04 August 2015
Abstract
Three-dimensional scene reconstruction has been an important area of research for many decades. It has a myriad of applications ranging from entertainment to medicine. This thesis explores the 3D reconstruction pipeline and proposes novel methods to improve many of the steps necessary to achieve a high quality reconstruction. It proposes novel methods in the areas of depth sensor calibration, simultaneous localization and mapping, depth map inpainting, point cloud simplification, and free-viewpoint rendering.
Geometric camera calibration is necessary in every 3D reconstruction pipeline. This thesis focuses on the calibration of depth sensors. It presents a review of sensor models and how they can be calibrated. It then examines the case of the well-known Kinect sensor and proposes a novel calibration method using only planar targets.
Reconstructing a scene using only color cameras entails different challenges than using depth sensors. Moreover, online applications require real-time response and must update the model as new frames are received. The thesis examines these challenges and presents a novel simultaneous localization and mapping system using only color cameras. It adaptively triangulates points based on the detected baseline while still utilizing non-triangulated features for pose estimation.
The thesis then addresses extrapolating missing information in depth maps. It presents three novel methods for depth map inpainting. The first uses random sampling to fit planes in the missing regions. The second uses a second-order prior aligned with intensity edges. The third learns natural filters to apply a Markov random field with a joint intensity and depth prior.
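The first inpainting idea, random-sampling plane fitting, can be sketched as a standard RANSAC plane fit over the valid depth samples around a hole, after which the hole pixels are read off the fitted plane. This is a compact generic sketch of that technique, not the thesis's implementation:

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.05, rng=None):
    """Fit z = a*x + b*y + c to (N, 3) points with RANSAC; returns (a, b, c)."""
    rng = rng or np.random.default_rng(0)
    best, best_inliers = None, -1
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        A = np.column_stack([p[:, 0], p[:, 1], np.ones(3)])
        try:
            abc = np.linalg.solve(A, p[:, 2])   # plane through the 3 samples
        except np.linalg.LinAlgError:
            continue                            # degenerate (collinear) sample
        resid = np.abs(points[:, 0] * abc[0] + points[:, 1] * abc[1]
                       + abc[2] - points[:, 2])
        inliers = int((resid < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = abc, inliers
    return best

def fill_hole_with_plane(depth, hole_mask, abc):
    """Fill masked pixels with depths sampled from the fitted plane."""
    ys, xs = np.nonzero(hole_mask)
    out = depth.copy()
    out[ys, xs] = abc[0] * xs + abc[1] * ys + abc[2]
    return out
```

The plane hypothesis is a reasonable prior for the missing regions common in depth maps, which often fall on walls, floors, and other near-planar surfaces.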
This thesis also looks at the issue of reducing the quantity of 3D information to a manageable size. It looks at how to merge depth maps from multiple views without storing redundant information. It presents a method to discard this redundant information while still maintaining the naturally variable resolution.
Finally, transparency estimation is examined in the context of free-viewpoint rendering. A procedure to estimate transparency maps for the foreground layers of a multi-view scene is presented. The results obtained reinforce the need for a high-accuracy 3D reconstruction pipeline including all the previously presented steps.
|