1 |
A Cluster based Free Viewpoint Video System using Region-tree based Scene Reconstruction. Lei, Cheng. Unknown Date
No description available.
|
2 |
MULTIPOINT MEASURING SYSTEM FOR VIDEO AND SOUND - 100-camera and microphone system -. Fujii, Toshiaki, Mori, Kensaku, Takeda, Kazuya, Mase, Kenji, Tanimoto, Masayuki, Suenaga, Yasuhito. 12 1900 (has links)
No description available.
|
3 |
Supporting collaborative practices across wall-sized displays with video-mediated communication / Communication médiatisée par la vidéo pour les pratiques collaboratives à distance entre murs d’écrans. Avellino, Ignacio. 12 December 2017 (has links)
Collaboration can take many forms, for which technology has long provided digital support. But when collaborators are located remotely, to what extent does technology support these activities? In this dissertation, I argue that the success of a telecommunications system does not depend on its capacity to imitate co-located conditions, but on its ability to support the collaborative practices that emerge from the specific characteristics of the technology. I explore this using wall-sized displays as a collaborative technology. I started by observing collaborators perform their daily work at a distance using prototypes. I then conducted experiments and found that people can accurately interpret remote deictic instructions and direct gaze when performed by a remote collaborator through video, even when this video is not placed directly in front of the observer. Based on these findings, I built CamRay, a telecommunication system that uses an array of cameras to capture users' faces as they physically navigate data on a wall-sized display, and presents this video on a remote wall-sized display on top of existing content. I propose two ways of displaying video: Follow-Local, where the video feed of the remote collaborator follows the local user, and Follow-Remote, where it follows the remote user. I find that Follow-Remote preserves the spatial relations between the remote speaker and the content, supporting pointing gestures, while Follow-Local enables virtual face-to-face conversations, supporting representational gestures. Finally, I summarize these findings to inform the design of future systems for remote collaboration across wall-sized displays.
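A minimal sketch, not the actual CamRay implementation, of how the two video-placement policies described in this abstract could be expressed. It assumes the wall display and the camera row are parameterised by a normalised horizontal coordinate in [0, 1] and that user positions come from an external tracker; the function and parameter names are hypothetical.

```python
def choose_video_placement(mode, local_user_x, remote_user_x, num_cameras):
    """Return (display_x, camera_index) for the incoming video window."""
    # The remote collaborator is captured by the camera nearest to where they
    # stand, so their face is filmed roughly head-on as they walk along the wall.
    camera_index = int(round(remote_user_x * (num_cameras - 1)))

    if mode == "follow-remote":
        # Align the video window with the remote user's position, preserving
        # the spatial relation between speaker and content (deictic pointing
        # gestures remain interpretable).
        display_x = remote_user_x
    elif mode == "follow-local":
        # Keep the video window in front of the local viewer, creating a
        # virtual face-to-face configuration.
        display_x = local_user_x
    else:
        raise ValueError(f"unknown mode: {mode}")
    return display_x, camera_index

# Example: a 6-camera row, local viewer at the left third of the wall,
# remote collaborator near the right edge of their own wall.
print(choose_video_placement("follow-remote", 0.33, 0.8, 6))  # (0.8, 4)
print(choose_video_placement("follow-local", 0.33, 0.8, 6))   # (0.33, 4)
```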
|
4 |
Fotografování s využitím světelného pole / Light field photography. Svoboda, Karel. January 2016 (has links)
The aim of this thesis is to explain terms such as light field, plenoptic camera, and digital lens, and to describe the principle of rendering the resulting images with a selectable plane of focus, depth of field, perspective shifts, and a partial change of viewing angle. The main outputs of the thesis are scripts for rendering images from a Lytro camera and an interactive application that demonstrates the principles of plenoptic sensing.
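The refocusing principle summarized here is commonly implemented as a shift-and-sum over the sub-aperture views extracted from the plenoptic raw image. The sketch below illustrates that general technique with plain numpy and integer-pixel shifts; it is not the thesis's actual Lytro rendering scripts, and the function and parameter names are hypothetical.

```python
import numpy as np

def refocus(subaperture_images, uv_coords, alpha):
    """Synthetic refocusing by shift-and-sum over sub-aperture views.

    subaperture_images : list of HxW grayscale views, one per sub-aperture
    uv_coords          : list of (u, v) offsets of each view on the aperture
                         plane, expressed in pixels of maximum parallax
    alpha              : focal-plane parameter; 1.0 keeps the captured focus,
                         other values move the synthetic focal plane
    """
    acc = np.zeros_like(subaperture_images[0], dtype=np.float64)
    for img, (u, v) in zip(subaperture_images, uv_coords):
        # Each view is translated proportionally to its aperture offset;
        # points on the chosen focal plane line up and stay sharp, while
        # everything else averages out into defocus blur.
        shift_x = (1.0 - 1.0 / alpha) * u
        shift_y = (1.0 - 1.0 / alpha) * v
        acc += np.roll(np.roll(img, int(round(shift_y)), axis=0),
                       int(round(shift_x)), axis=1)
    return acc / len(subaperture_images)
```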
|
5 |
Feature Based Image Mosaicing using Regions of Interest for Wide Area Surveillance Camera Arrays with Known Camera Ordering. Ballard, Brett S. 16 May 2011 (has links)
No description available.
|
6 |
Increasing temporal, structural, and spectral resolution in images using exemplar-based priors. Holloway, Jason. 16 September 2013 (has links)
In the past decade, camera manufacturers have offered smaller form factors, smaller pixel sizes (leading to higher resolution images), and faster processing chips to increase the performance of consumer cameras.
However, these conventional approaches have neither capitalized on the spatio-temporal redundancy inherent in images nor adequately addressed the problem of finding 3D point correspondences for cameras sampling different bands of the visible spectrum. In this thesis, we pose the following question: given the repetitious nature of image patches, and appropriate camera architectures, can statistical models be used to increase temporal, structural, or spectral resolution? While many techniques have been suggested to tackle individual aspects of this question, the proposed solutions either require prohibitively expensive hardware modifications or rest on overly simplistic assumptions about the geometry of the scene.
We propose a two-stage solution to facilitate image reconstruction: 1) design a linear camera system that optically encodes scene information, and 2) recover the full scene information using prior models learned from the statistics of natural images. By leveraging the tendency of small regions to repeat throughout an image or video, we are able to learn prior models from patches pulled from exemplar images.
The quality of this approach is demonstrated in two application domains: high-speed video acquisition using low-speed video cameras, and multi-spectral fusion using an array of cameras. We also investigate a conventional approach for finding 3D correspondences that enables a generalized assorted array of cameras to operate in multiple modalities, including multi-spectral, high dynamic range, and polarization imaging of dynamic scenes.
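As a deliberately simplified illustration of the exemplar-based idea, the sketch below reconstructs each patch of a degraded image by averaging its nearest neighbours among patches pulled from an exemplar image. This crude nearest-neighbour stand-in is not the learned statistical prior models or the camera architectures proposed in the thesis, and all names are hypothetical.

```python
import numpy as np

def extract_patches(image, size=8, stride=4):
    """Collect overlapping size x size patches from a grayscale image."""
    h, w = image.shape
    patches = []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(image[y:y + size, x:x + size].ravel())
    return np.array(patches)

def reconstruct_with_exemplars(degraded, exemplar, size=8, k=5):
    """Replace every non-overlapping patch of `degraded` with the mean of its
    k nearest patches drawn from `exemplar`: repeated structure in the
    exemplar fills in detail the degraded observation lost."""
    bank = extract_patches(exemplar, size=size, stride=2)
    out = np.zeros_like(degraded, dtype=np.float64)
    h, w = degraded.shape
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            patch = degraded[y:y + size, x:x + size].ravel()
            # Nearest neighbours by Euclidean distance in patch space.
            dists = np.linalg.norm(bank - patch, axis=1)
            nearest = bank[np.argsort(dists)[:k]]
            out[y:y + size, x:x + size] = nearest.mean(axis=0).reshape(size, size)
    return out
```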
|