About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Automatic Camera Control for Capturing Collaborative Meetings

Ranjan, Abhishek 25 September 2009 (has links)
The growing size of organizations is making it increasingly expensive to attend meetings and difficult to retain what happened in them. Meeting video capture systems exist to support video conferencing for remote participation or archiving for later review, but they have been regarded as ineffective, for two reasons. First, the conventional approach of capturing video with a single static camera fails to capture both focus and context. Second, a single static view is often monotonous, making the video onerous to review. Human camera operators are often employed to capture more effective videos with changing views, but this approach is expensive. In this thesis, we argue that camera views can be changed automatically to produce meeting videos effectively and inexpensively. We automate camera view control by automatically determining the visual focus of attention as a function of time and moving the camera to capture it. To determine the visual focus of attention for different meetings, we conducted experiments and interviewed television production professionals who capture meeting videos. Television production principles were also used to frame shots appropriately and to switch between shots. An evaluation of the automatic camera control system indicated significant benefits over a conventional static camera view. Applying television production principles resolved various issues related to shot stability and screen motion, and the performance of the resulting automatic camera control approached that of a trained human camera crew. To further reduce the cost of automation, we also explored the application of computer vision and audio tracking. The results of these explorations provide empirical evidence for the utility of automatic camera control and encourage future research in this area, and the successful application of television production principles to automatic camera control suggests various ways to handle the issues involved in the automation process.
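As a purely illustrative sketch of the kind of automation described above (not the system built in the thesis), the Python snippet below switches between a few camera presets based on a per-frame estimate of the visual focus of attention, and enforces a minimum shot duration so the output is not jittery, in the spirit of the television production principles mentioned in the abstract. The preset names, confidence threshold and timing constant are hypothetical.

```python
from dataclasses import dataclass

MIN_SHOT_SECONDS = 4.0   # television-style constraint: avoid rapid cutting

@dataclass
class ShotState:
    current_target: str = "wide"   # start on an establishing wide shot
    last_switch_time: float = 0.0

def choose_shot(state: ShotState, focus_target: str, confidence: float, t: float) -> str:
    """Return the camera preset to use at time t.

    focus_target: estimated visual focus of attention (e.g. "speaker_A");
    confidence: how sure the estimator is (0..1). Both are assumed inputs
    from an upstream audio/vision tracker.
    """
    # Fall back to the wide shot when the focus estimate is unreliable.
    desired = focus_target if confidence > 0.6 else "wide"

    # Only cut if the desired shot changed and the current shot has been
    # held long enough (shot-stability rule).
    if desired != state.current_target and t - state.last_switch_time >= MIN_SHOT_SECONDS:
        state.current_target = desired
        state.last_switch_time = t
    return state.current_target

# Example: a stream of (time, focus, confidence) estimates.
state = ShotState()
for t, focus, conf in [(0.0, "speaker_A", 0.9), (1.0, "speaker_B", 0.8),
                       (5.0, "speaker_B", 0.9), (9.5, "wide", 0.3)]:
    print(t, choose_shot(state, focus, conf, t))
```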
2

Virtual camera control using dynamic spatial partitions / Contrôle de caméra virtuelle à base de partitions spatiales dynamiques

Lino, Christophe 03 October 2013 (has links)
Virtual camera control is nowadays an essential component of many computer graphics applications. Despite its importance, current approaches remain limited in their expressiveness, interactivity and performance. Typically, elements of directorial style and genre cannot easily be modeled or simulated because existing systems cannot simultaneously control viewpoint computation, camera path planning and editing. Second, existing work does not fully explore the creative potential of coupling a human with an intelligent system to assist users in the complex task of designing cinematographic sequences. Finally, most techniques rely on computationally expensive optimization performed in a 6D search space, which prevents their use in real-time contexts. In this thesis, we first propose a unifying approach that handles four key aspects of cinematography (viewpoint computation, camera path planning, editing and visibility computation) in an expressive model that accounts for some elements of directorial style. We then propose a workflow that combines automated intelligence with user interaction. Finally, we present a novel and efficient approach to virtual camera control that reduces the search space from 6D to 3D and has the potential to replace a number of existing formulations.
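To picture how the camera search space can shrink from 6D to 3D, consider the simplified stand-in below (an assumption-laden sketch, not necessarily the parametrization developed in the thesis): the camera position is described by three parameters around the midpoint of two targets, and the remaining orientation degrees of freedom are fixed by a look-at constraint with zero roll.

```python
import numpy as np

def look_at(eye: np.ndarray, target: np.ndarray, up=np.array([0.0, 1.0, 0.0])):
    """Build a camera rotation matrix whose -Z axis points from eye to target."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    # Columns are the camera's right, up and backward axes in world space.
    return np.stack([right, true_up, -forward], axis=1)

def viewpoint_3d(a: np.ndarray, b: np.ndarray, distance: float, theta: float, phi: float):
    """Map a 3-parameter viewpoint (distance, theta, phi) around two targets
    a and b to a full 6-DOF camera pose. Orientation is fully determined by
    looking at the midpoint with zero roll, so only 3 DOF remain to search.
    """
    mid = 0.5 * (a + b)
    offset = distance * np.array([np.cos(phi) * np.sin(theta),
                                  np.sin(phi),
                                  np.cos(phi) * np.cos(theta)])
    eye = mid + offset
    return eye, look_at(eye, mid)

# Example: two characters one unit apart, camera 3 units away, slightly elevated.
eye, rot = viewpoint_3d(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                        distance=3.0, theta=np.radians(30), phi=np.radians(15))
print(eye, rot, sep="\n")
```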
3

Data-driven virtual cinematography / Cinématographie virtuelle basée données

Sanokho, Cunka 17 February 2016 (has links)
Automated camera control techniques are key components of virtual cinematography systems, providing means to select appropriate viewpoints in a 3D scene and to efficiently review the content of a 3D environment. In this work we present two contributions. First, we propose an example-driven on-screen balance metric that estimates how well balanced the composition of a shot is. Our metric accounts for a large set of visual features, including the size, silhouette, position and saliency of target objects, together with metrics related to characters' positions, orientations and gaze. The process consists of annotating well-balanced images in order to estimate automatically how each visual feature influences balance in each image. We then rely on this database of annotated images to (i) estimate how well balanced new images are by comparing their visual features, and (ii) automatically optimize viewpoints in a 3D scene to enforce balance. Second, we present Camera Motion Graphs, a technique to easily and efficiently generate cinematographic sequences in real-time dynamic 3D environments. A camera motion graph consists of (i) pieces of original camera trajectories attached to one or multiple targets, (ii) generated continuous transitions between camera trajectories and (iii) transitions representing cuts between camera trajectories. Pieces of original camera trajectories are built by extracting camera motions from real movies using vision-based techniques, or by relying on motion capture with a virtual camera system. A transformation is proposed to express all camera trajectories in a normalized representation, making camera paths easily adaptable to new 3D environments through a specific retargeting technique. The camera motion graph is then constructed by sampling all pairs of camera trajectories and evaluating the possibility and quality of continuous or cut transitions. Results illustrate the simplicity of the technique, its adaptability to different 3D environments and its efficiency.
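A rough sketch of the data structure described above might look as follows (an illustrative simplification in Python, not the authors' implementation): nodes hold trajectory clips expressed relative to their targets, edges mark continuous transitions or cuts, and a sequence is generated by walking the graph and concatenating clips.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple
import random

@dataclass
class TrajectoryClip:
    name: str
    # One camera pose per frame, expressed in the local frame of the target(s);
    # here simply (x, y, z, yaw, pitch) tuples for brevity.
    poses: List[Tuple[float, float, float, float, float]]

@dataclass
class CameraMotionGraph:
    clips: Dict[str, TrajectoryClip] = field(default_factory=dict)
    # edges[src] -> list of (dst, kind) where kind is "continuous" or "cut"
    edges: Dict[str, List[Tuple[str, str]]] = field(default_factory=dict)

    def add_clip(self, clip: TrajectoryClip):
        self.clips[clip.name] = clip
        self.edges.setdefault(clip.name, [])

    def add_transition(self, src: str, dst: str, kind: str):
        self.edges[src].append((dst, kind))

    def generate(self, start: str, num_clips: int):
        """Random walk over the graph, concatenating clip poses."""
        sequence, current = [], start
        for _ in range(num_clips):
            sequence.extend(self.clips[current].poses)
            options = self.edges.get(current)
            if not options:
                break
            current, _kind = random.choice(options)
        return sequence

# Example with two hypothetical clips extracted from real footage.
g = CameraMotionGraph()
g.add_clip(TrajectoryClip("dolly_in", [(0, 1.7, 4 - 0.1 * i, 0, 0) for i in range(30)]))
g.add_clip(TrajectoryClip("orbit_left", [(0.1 * i, 1.7, 3, 2 * i, 0) for i in range(30)]))
g.add_transition("dolly_in", "orbit_left", "cut")
g.add_transition("orbit_left", "dolly_in", "continuous")
print(len(g.generate("dolly_in", num_clips=3)), "poses in the generated sequence")
```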
4

Active visual scene exploration

Sommerlade, Eric Chris Wolfgang January 2011 (has links)
This thesis addresses information-theoretic methods for the control of one or several active cameras in the context of visual surveillance. This approach has two advantages. First, any system dealing with real inputs must take into account noise in both the measurements and the underlying system model. Second, camera control in surveillance often has to serve different, potentially conflicting objectives. Information-theoretic metrics not only provide a way to assess the uncertainty in the current state estimate, but also a means to choose the observation parameters that optimally reduce this uncertainty. The latter property allows sensing actions to be compared with respect to different objectives, so that a preference among objectives can be specified and the generated control fulfils these objectives accordingly. The thesis argues for the utility of information-theoretic approaches to the control of visual surveillance systems by addressing the following objectives in particular. The first is how to choose the zoom setting of a single camera to optimally track a single target with a Kalman filter. Here the emphasis is on arbitrating between the risk of losing track due to noise in the observation process and the information gained from the higher accuracy of a successful observation. The resulting method adds a running average of the Kalman filter's innovation to the observation noise, which not only improves tracking performance in the case of unexpected target motions but also permits a higher maximum zoom setting. The second major contribution is a term that addresses exploration of the supervised area in an information-theoretic manner. The reasoning behind this term is to model the appearance of new targets in the supervised environment and to use this as prior uncertainty about the occupancy of areas currently not under observation. Furthermore, this term uses the performance of an object detection method to gauge the information that observations of a single location can yield. Additionally, the thesis shows experimentally that a preference for control objectives can be set using a single scalar value that linearly combines the objective functions of the two conflicting objectives, detection and exploration, and results in the desired control behaviour. The third contribution is an objective function that addresses classification methods. The thesis shows in detail how to derive the information that can be gained from classifying a single target, taking its gaze direction into account. Quantitative and qualitative validation shows the increase in performance compared to standard methods.
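The zoom-selection trade-off of the first contribution can be pictured with the toy sketch below, which assumes a 1D Kalman state and hypothetical models for how zoom affects measurement noise and the chance of keeping the target in view (neither model is from the thesis): the expected entropy reduction weighs the information gained by a successful observation against the risk of losing the target.

```python
import numpy as np

def expected_info_gain(prior_var: float, zoom: float) -> float:
    """Expected reduction in entropy (nats) of a 1D target estimate.

    Hypothetical sensor model: measurement variance shrinks with zoom,
    while the probability of the target staying in the narrower field of
    view drops. Both models are placeholders for illustration only.
    """
    meas_var = 1.0 / zoom                      # more zoom -> more precise measurement
    p_in_view = np.exp(-0.15 * (zoom - 1.0))   # more zoom -> easier to lose the target

    # Scalar Kalman update of the variance if the observation succeeds.
    posterior_var = prior_var * meas_var / (prior_var + meas_var)
    gain_if_observed = 0.5 * np.log(prior_var / posterior_var)
    return p_in_view * gain_if_observed        # no gain if the target is missed

prior_var = 2.0
zooms = np.linspace(1.0, 10.0, 19)
best = max(zooms, key=lambda z: expected_info_gain(prior_var, z))
print(f"best zoom setting for prior variance {prior_var}: {best:.1f}")
```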
5

Modeling of a Gyro-Stabilized Helicopter Camera System Using Neural Networks

Layshot, Nicholas Joseph 01 December 2010 (has links) (PDF)
On-board gimbal systems for camera stabilization in helicopters are typically based on linear models. Such models, however, are inaccurate due to system nonlinearities and complexities. As an alternative approach, artificial neural networks can provide a more accurate model of the gimbal system thanks to their nonlinear mapping and generalization capabilities. This thesis investigates the application of artificial neural networks to modeling the inertial characteristics (on the azimuth axis) of the inner gimbal in a gyro-stabilized multi-gimbal system. The neural network is trained with time-domain data obtained from the gyro rate sensors of an actual camera system. The network's performance is evaluated and compared with measured data and with a traditional linear model. Computer simulation results show that the neural network model fits the measured data well and significantly outperforms the traditional model.
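A minimal sketch of this kind of data-driven modeling is shown below, using synthetic data in place of the real gyro recordings and a generic scikit-learn regressor rather than the network architecture used in the thesis: windows of past azimuth-rate samples and commands are used to predict the next rate sample.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for gyro rate data: a first-order lag driven by a command.
T, dt, tau = 2000, 0.01, 0.15
command = np.sin(np.linspace(0, 20 * np.pi, T)) + 0.1 * rng.standard_normal(T)
rate = np.zeros(T)
for k in range(1, T):
    rate[k] = rate[k - 1] + dt / tau * (command[k - 1] - rate[k - 1])
rate += 0.01 * rng.standard_normal(T)          # sensor noise

# NARX-style windowing: predict the next rate from past rates and commands.
H = 5
X = np.column_stack([np.column_stack([rate[i:T - H + i] for i in range(H)]),
                     np.column_stack([command[i:T - H + i] for i in range(H)])])
y = rate[H:]

split = int(0.8 * len(y))
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("test R^2:", model.score(X[split:], y[split:]))
```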
6

A head-in-hand metaphor for user-centric direct camera control in virtual reality

Günther, Tobias, Querner, Erich, Groh, Rainer 17 May 2021 (has links)
The explorative examination of constructed 3D models in immersive environments requires suitable user-centric interaction methods. Novel concepts for virtual camera control in particular can offer advantages, e.g. for the analysis of model details. We extend the known concept of the camera-in-hand metaphor and implement a multidimensional viewport control technique that can be used with common head-mounted displays and VR controllers. With our head-in-hand view, the user is able to control the virtual camera directly by hand without losing the flexibility of head movements. To ensure convenient operation, the method restricts certain rotation parameters and smooths jerky gestures of the user's hand. In addition, we discuss implications and improvement potential of the proposed concept as well as adverse effects on the user, such as motion sickness.
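The core of the head-in-hand mapping could be sketched roughly as follows (a simplified, hypothetical version; the real HMD/controller APIs and the authors' exact filtering are not reproduced): the camera position follows the hand controller, head rotation is added on top, roll is clamped, and jerky hand motion is damped with an exponential filter.

```python
import numpy as np

SMOOTHING = 0.2          # 0 = frozen, 1 = raw controller input
MAX_ROLL_DEG = 10.0      # restrict camera roll to keep the horizon stable

class HeadInHandCamera:
    def __init__(self):
        self.position = np.zeros(3)
        self.yaw_pitch_roll = np.zeros(3)   # degrees, hypothetical Euler convention

    def update(self, hand_pos, hand_ypr, head_ypr):
        """hand_pos/hand_ypr come from the VR controller, head_ypr from the HMD."""
        # Exponential smoothing damps jerky hand gestures.
        self.position += SMOOTHING * (np.asarray(hand_pos) - self.position)
        target_ypr = np.asarray(hand_ypr) + np.asarray(head_ypr)   # head rotates on top of hand
        target_ypr[2] = np.clip(target_ypr[2], -MAX_ROLL_DEG, MAX_ROLL_DEG)
        self.yaw_pitch_roll += SMOOTHING * (target_ypr - self.yaw_pitch_roll)
        return self.position, self.yaw_pitch_roll

cam = HeadInHandCamera()
for frame in range(3):
    pos, ypr = cam.update(hand_pos=[0.2 * frame, 1.4, 0.1],
                          hand_ypr=[30.0, 0.0, 25.0],     # excessive roll gets clamped
                          head_ypr=[5.0 * frame, -10.0, 0.0])
    print(frame, pos.round(3), ypr.round(1))
```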
7

A hierarchical neural network approach to learning sensor planning and control

Löfwenberg, Nicke January 2023 (has links)
The ability to search its environment is one of the most fundamental skills of any living creature. Visual search in particular is abundantly common across almost all animals. This searching is generally active in nature: vision does not simply react to incoming stimuli but actively scans the environment for potential stimuli, for example by moving the head or eyes. Automatic visual search is likewise a crucial and powerful tool in a wide variety of fields. However, performing such an active search is a nontrivial problem for many machine learning approaches. The added complexity of choosing which area to observe, as well as the common case of a camera with an adaptive field of view, further complicates the problem. Hierarchical reinforcement learning has in recent years proven to be a particularly powerful means of solving hard machine learning problems through a divide-and-conquer methodology, where one highly complex task is broken down into smaller sub-tasks which on their own may be more easily learnable. In this thesis, we present a hierarchical reinforcement learning system for solving a visual search problem in a stationary camera environment with adjustable pan, tilt and field-of-view capabilities. The hierarchical model also incorporates non-reinforcement-learning agents in its workflow to better exploit the strengths of different agents and form a more powerful overall model. The model is then compared to a non-hierarchical baseline as well as to some learning-free approaches.
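A skeleton of such a hierarchy might look like the sketch below. It is illustrative only: in the thesis the policies are learned with reinforcement learning, whereas here the high-level policy is a random chooser and the detector is a stub, so only the division of labour is shown.

```python
import random
from dataclasses import dataclass

@dataclass
class PTZCommand:
    pan: float   # degrees
    tilt: float  # degrees
    fov: float   # field of view, degrees

class HighLevelPolicy:
    """Chooses which coarse region to inspect next. In the thesis this would be
    a trained RL agent; here it is a random placeholder."""
    def __init__(self, regions):
        self.regions = regions
    def select(self, history):
        unexplored = [r for r in self.regions if r not in history]
        return random.choice(unexplored or self.regions)

class LowLevelController:
    """Turns a chosen region into concrete pan/tilt/fov commands."""
    def execute(self, region) -> PTZCommand:
        pan, tilt = region
        return PTZCommand(pan=pan, tilt=tilt, fov=20.0)

def detector(cmd: PTZCommand) -> bool:
    """Stub for a (non-RL) object detector run on the captured frame."""
    return random.random() < 0.1

regions = [(p, t) for p in (-60, 0, 60) for t in (-15, 15)]
high, low, history = HighLevelPolicy(regions), LowLevelController(), []
for step in range(10):
    region = high.select(history)
    history.append(region)
    cmd = low.execute(region)
    if detector(cmd):
        print(f"target found at pan={cmd.pan}, tilt={cmd.tilt}")
        break
else:
    print("no target found in", len(history), "views")
```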
8

互動敘事中客製化之虛擬拍攝實驗平台 / An Experimental Platform for Customized Virtual Cinematography in Interactive Storytelling

賴珮君, Lai, Pei Chun Unknown Date (has links)
The recent advances in computing technologies and human-computer interaction have attracted much attention to the development of interactive digital storytelling (IDS), especially in the design of novel computer games. This trend brings not only new opportunities but also new technological challenges to virtual camera planning. Our research aims at building an experimental platform for customized virtual camera planning through the analysis of the screenplay of an interactive story. By adopting the domain knowledge of camera control in existing films, we hope to design a computer-assisted system that allows an author to easily experiment with different styles of virtual cameras within the same story. We design an experimental platform based on "The Theater" IDS, which currently uses a pre-authored way to specify the camera position. In the proposed system, we allow an author to quickly customize virtual camera shooting according to the context of a story fragment and let the computer generate appropriate camera configurations automatically. We use an example story to verify the effectiveness of the system through experiments.
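One way to picture the customized shooting styles described above (purely illustrative; the actual system analyzes the screenplay within The Theater) is a small lookup from story context to framing parameters, which a planner would then turn into concrete camera placements. The table entries below are invented.

```python
# Hypothetical style table: story context -> framing parameters.
SHOT_STYLES = {
    ("dialogue", "calm"):    {"shot_size": "medium", "angle_deg": 0,   "distance_m": 2.5},
    ("dialogue", "tense"):   {"shot_size": "close",  "angle_deg": 10,  "distance_m": 1.2},
    ("action",   "excited"): {"shot_size": "wide",   "angle_deg": -15, "distance_m": 5.0},
}
DEFAULT_STYLE = {"shot_size": "medium", "angle_deg": 0, "distance_m": 3.0}

def plan_shot(scene_type: str, emotion: str) -> dict:
    """Pick framing parameters for a story fragment; a real planner would then
    solve for a concrete camera position that satisfies them."""
    return SHOT_STYLES.get((scene_type, emotion), DEFAULT_STYLE)

print(plan_shot("dialogue", "tense"))
print(plan_shot("monologue", "sad"))   # falls back to the default style
```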
9

Detekce nánosu UV lepidla / UV adhesive coating detection

Pavelka, Radek January 2018 (has links)
This diploma thesis focuses on the design of a camera-based inspection system for detecting defects that appear during the application of a UV-luminescent glue to the bottom of a paper bag. As part of this thesis, an application was developed for the Baumer VCXG-53C industrial camera, implementing two different inspection methods: 2D cross-correlation image pattern matching against a previously user-defined pattern, and measurement of the glue area in a binary segmented image. The result of this work is a fully developed inspection system, ready to be put into operation on the customer's production line.
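A rough sketch of the two inspection methods is given below, using OpenCV as a stand-in; the thesis's actual implementation against the Baumer camera is not reproduced. It shows normalized cross-correlation template matching against a user-defined pattern, and glue-area measurement on a binary segmented image, on a synthetic test frame.

```python
import cv2
import numpy as np

def pattern_match_score(image_gray: np.ndarray, template_gray: np.ndarray) -> float:
    """Best normalized cross-correlation score of the template in the image."""
    result = cv2.matchTemplate(image_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, _ = cv2.minMaxLoc(result)
    return max_val

def glue_area_px(image_gray: np.ndarray, threshold: int = 128) -> int:
    """Area of the segmented glue trace in pixels (UV-lit glue assumed bright)."""
    _, binary = cv2.threshold(image_gray, threshold, 255, cv2.THRESH_BINARY)
    return int(cv2.countNonZero(binary))

# Synthetic example: a dark bag bottom with a bright glue stripe.
frame = np.zeros((200, 300), dtype=np.uint8)
cv2.rectangle(frame, (50, 90), (250, 110), 255, thickness=-1)   # glue stripe
pattern = frame[80:120, 40:260].copy()                          # user-defined pattern

score = pattern_match_score(frame, pattern)
area = glue_area_px(frame)
print(f"match score: {score:.2f}, glue area: {area} px")
# A defect would be flagged when the score or the area falls below set limits.
```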
