
Improved detection and tracking of objects in surveillance video

Denman, Simon Paul January 2009 (has links)
Surveillance networks are typically monitored by a few people, viewing several monitors displaying the camera feeds. It is then very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data, to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as a means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies. To recognize an event people and objects must be tracked. Tracking also enhances the performance of tasks such as crowd analysis or human identification. Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or due to the detection routines being unable to split occluded regions into their component objects. Particle filters can be used as a self contained tracking system, and make it unnecessary for the task of detection to be carried out separately except for an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point each frame. Such systems however do not easily allow for multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects. This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter. A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO - Evaluation for video understanding) database, and significant improvement in detection and tracking performance is demonstrated when compared to a baseline system. A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and maintenance of identity is handled by the underlying tracking system; and the tracking system is able to benefit from the improved performance in uncertain conditions arising from occlusion and noise provided by a particle filter. The system is evaluated using the ETISEO database. The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. It is shown that a middle fusion scheme yields the best results and demonstrates a significant improvement in performance when compared to a system using either mode individually.
Findings from the thesis contribute to improving the performance of semi-automated video processing and therefore improve security in areas under surveillance.
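
The condensation-style particle filter described above evaluates, for every frame, how likely an object is to sit at each particle's location using extracted image features. The following is a minimal sketch of one predict-weight-resample cycle, assuming a colour-histogram likelihood and Gaussian motion noise; both choices, and all names and parameters, are illustrative assumptions rather than details taken from the thesis.

```python
import numpy as np

def particle_filter_step(particles, weights, frame, ref_hist, motion_std=5.0):
    """One predict-weight-resample cycle for a single tracked object.

    particles : (N, 2) array of candidate (x, y) positions
    weights   : (N,) normalised weights from the previous frame
    frame     : current image as an (H, W, 3) uint8 array
    ref_hist  : 16-bin reference colour histogram of the tracked object
    """
    n = len(particles)
    h, w = frame.shape[:2]

    # Predict: diffuse particles with Gaussian motion noise, keep them in frame.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], 8, w - 9)
    particles[:, 1] = np.clip(particles[:, 1], 8, h - 9)

    # Weight: compare a local colour histogram with the reference
    # (Bhattacharyya similarity, used here purely as an example feature).
    new_weights = np.empty(n)
    for i, (x, y) in enumerate(particles.astype(int)):
        patch = frame[y - 8:y + 8, x - 8:x + 8]
        hist, _ = np.histogram(patch, bins=16, range=(0, 255), density=True)
        new_weights[i] = np.sum(np.sqrt(hist * ref_hist))
    new_weights += 1e-12
    new_weights /= new_weights.sum()

    # Resample: draw particles proportionally to their weights.
    idx = np.random.choice(n, size=n, p=new_weights)
    return particles[idx], np.full(n, 1.0 / n)
```

The SCF itself additionally delegates mode creation, deletion and identity maintenance to the surrounding tracking system, which a bare loop like this does not capture.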

Markerless augmented reality on ubiquitous mobile devices with integrated sensors

Van Wyk, Carel 03 1900 (has links)
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2011. / ENGLISH ABSTRACT: The computational power of mobile smart-phone devices is ever increasing and high-end phones become more popular amongst consumers every day. The technical specifications of a high-end smart-phone today rival those of a home computer system of only a few years ago. Powerful processors, combined with cameras and ease of development, encourage an increasing number of Augmented Reality (AR) researchers to adopt mobile smart-phones as an AR platform. Implementation of marker-based Augmented Reality systems on mobile phones is mostly a solved problem. Markerless systems still offer challenges due to increased processing requirements. Some researchers adopt purely computer vision based markerless tracking methods to estimate camera pose on mobile devices. In this thesis we propose the use of a hybrid system that employs both computer vision and the integrated sensors present in most new smartphones to facilitate pose estimation. We estimate three of the six degrees of freedom of pose using integrated sensors and estimate the remaining three using feature tracking. A proof-of-concept hybrid system is implemented as part of this thesis. / AFRIKAANSE OPSOMMING: Die berekeningskrag van nuwe-generasie selfone neem elke dag toe en kragtige "slim-fone" word al hoe meer populêr onder verbruikers. Die tegniese spesifikasies van 'n nuwe slim-foon vandag is vergelykbaar met die van 'n persoonlike rekenaar van slegs 'n paar jaar gelede. Die kombinasie van kragtige verwerkers, kameras en die gemaklikheid waarmee programmatuur op hierdie toestelle ontwikkel word, maak dit 'n aantreklike ontwikkelingsplatform vir navorsers in Toegevoegde Realiteit. Die implimentering van 'n merker-gebaseerde Toegevoegde Realiteitstelsel op selfone is 'n probleem wat reeds grotendeels opgelos is. Merker-vrye stelsels, aan die ander kant, bied steeds interessante uitdagings omdat hulle meer prosesseringskrag vereis. 'n Paar navorsers het reeds rekenaarvisie-gebaseerde merker-vrye stelsels aangepas om op selfone te funksioneer. In hierdie tesis stel ons die ontwikkeling voor van 'n hibriede stelsel wat gebruik maak van rekenaarvisie sowel as geintegreerde sensore in die foon om die berekening van kamera-orientasie te vergemaklik. Ons gebruik geintegreerde sensore om drie uit ses vryheidsgrade van orientasie te bereken, terwyl die oorblywende drie met behulp van rekenaarvisie-tegnieke bepaal word. 'n Prototipe stelsel is ontwikkel as deel van hierdie tesis.
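
To illustrate the 3 + 3 degree-of-freedom split described in the abstract: the sketch below assumes orientation comes from the phone's integrated sensors as roll/pitch/yaw angles, and translation is then recovered linearly from tracked 2D-3D feature correspondences under a pinhole camera model. The angle convention, function names and solver are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

def rotation_from_sensors(roll, pitch, yaw):
    """Camera rotation assembled from sensor-reported angles (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def skew(v):
    """Cross-product matrix of a 3-vector."""
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def translation_from_features(R, K, world_pts, image_pts):
    """Least-squares translation given rotation, intrinsics and 2D-3D matches."""
    A, b = [], []
    for X, x in zip(world_pts, image_pts):
        x_n = np.linalg.solve(K, np.append(x, 1.0))   # normalised image ray
        S = skew(x_n)
        A.append(S)                                   # S @ t = -S @ R @ X
        b.append(-S @ (R @ X))
    A, b = np.vstack(A), np.hstack(b)
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t
```

With the rotation fixed by the sensors, each correspondence contributes linear constraints on the translation, which keeps the per-frame cost low enough for a mobile device.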

Tracking of Ground Vehicles : Evaluation of Tracking Performance Using Different Sensors and Filtering Techniques

Homelius, Marcus January 2018 (has links)
It is crucial to find a good balance between positioning accuracy and cost when developing navigation systems for ground vehicles. In open sky or even in a semi-urban environment, a single global navigation satellite system (GNSS) constellation performs sufficiently well. However, the positioning accuracy decreases drastically in urban environments. Because of the limitation in tracking performance for standalone GNSS, particularly in cities, many solutions are now moving toward integrated systems that combine complementary sensors. In this master's thesis the improvement of tracking performance for a low-cost ground vehicle navigation system is evaluated when complementary sensors are added and different filtering techniques are used. How the GNSS aided inertial navigation system (INS) is used to track ground vehicles is explained in this thesis. This has proven to be a very effective way of tracking a vehicle through GNSS outages. Measurements from an accelerometer and a gyroscope are used as inputs to inertial navigation equations. GNSS measurements are then used to correct the tracking solution and to estimate the biases in the inertial sensors. When velocity constraints on the vehicle’s motion in the y- and z-axis are included, the GNSS aided INS has shown very good performance, even during long GNSS outages. Two versions of the Rauch-Tung-Striebel (RTS) smoother and a particle filter (PF) version of the GNSS aided INS have also been implemented and evaluated. The PF has proven to be computationally demanding in comparison with the other approaches, and a real-time implementation on the considered embedded system is not feasible. The RTS smoother has been shown to give a smoother trajectory, but a lot of extra information needs to be stored and the position accuracy is not significantly improved. Moreover, map matching has been combined with GNSS measurements and estimates from the GNSS aided INS. The Viterbi algorithm is used to output the road segment identification numbers of the most likely path and then the estimates are matched to the closest position on these roads. A suggested solution to acquire reliable tracking with high accuracy in all environments is to run the GNSS aided INS in real-time in the vehicle and simultaneously send the horizontal position coordinates to a back office where map information is kept and map matching is performed.
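
The velocity constraints mentioned above are commonly applied as a pseudo-measurement in the EKF: the body-frame lateral (y) and vertical (z) velocities are "measured" as zero whenever the vehicle is assumed not to slip or leave the road. Below is a hedged sketch of that update step; the state layout, noise level and function name are assumptions, not taken from the thesis.

```python
import numpy as np

def nonholonomic_update(x, P, C_nb, meas_noise=0.1):
    """EKF pseudo-measurement update enforcing zero lateral/vertical velocity.

    x    : state vector, assumed layout [position(0:3), velocity(3:6), ...] in the navigation frame
    P    : state covariance matrix
    C_nb : rotation matrix from the navigation frame to the body frame
    """
    n = len(x)
    # Measurement model: body-frame y and z velocity components, expected to be zero.
    H = np.zeros((2, n))
    H[:, 3:6] = C_nb[1:3, :]             # rows of C_nb selecting body y and z
    z = np.zeros(2)                      # pseudo-measurement: no side/vertical slip
    R = (meas_noise ** 2) * np.eye(2)

    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(n) - K @ H) @ P
    return x_new, P_new
```

During a GNSS outage this constraint, together with the inertial propagation, is what keeps the lateral drift of the aided INS bounded.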

Tecnologia assistiva para detecção de quedas : desenvolvimento de sensor vestível integrado ao sistema de casa inteligente

Torres, Guilherme Gerzson January 2018 (has links)
O uso de tecnologias assistivas objetivando proporcionar melhor qualidade de vida a idosos está em franca ascensão. Uma das linhas de pesquisa nessa área é o uso de dispositivos para detecção de quedas de idosos, um problema cuja ocorrência é cada vez maior devido a diversos fatores, incluindo maior longevidade, maior número de pessoas vivendo sozinhas na velhice, entre outros. Este trabalho apresenta o desenvolvimento de um dispositivo vestível, um nó sensor de redes de sensores sem fio de ultra-baixo consumo. Também descreve a expansão de um sistema KNX, ao qual o dispositivo é integrado. O dispositivo é capaz de identificar quedas, auxiliando no monitoramento de idosos e, por sua vez, aumentando a segurança dos mesmos. O monitoramento é realizado através de acelerômetro e giroscópio de 3 eixos, acoplados ao peito do usuário, capaz de detectar quedas através de um algoritmo de análise de limites determinados a partir da fusão dos dados dos sensores. O sensor vestível utiliza tecnologia EnOcean, que propicia conexão sem fio com um sistema de automação de casas inteligentes, de acordo com a norma KNX, através da plataforma Home Assistant. Telegramas de alarmes são automaticamente enviados no caso de detecção de quedas, e acionam um atuador pertencente ao sistema KNX. Além de validar a tecnologia EnOcean para uso em dispositivos vestíveis, o protótipo desenvolvido não indicou nenhum falso positivo através de testes realizados com dois usuários de características corporais diferentes, onde foram reproduzidos 100 vezes cada um dos oito tipos de movimentos (quatro movimentos de quedas e quatro de não quedas). Os testes realizados com o dispositivo revelaram sensibilidade e especificidade de até 96% e 100%, respectivamente. / The use of assistive technologies to provide quality of life for the elderly is increasing. One of the research lines in this area is the use of devices for fall detection, which is a growing problem due to many factors, including greater longevity and more elders living alone, among others. This work presents the development of a wearable device, a sensor node for ultra-low-power wireless sensor networks. It also describes the expansion of a KNX system into which the device is integrated. The device is able to detect falls, which can aid the monitoring of elderly people and improve their safety. The monitoring is done through a 3-axis accelerometer and gyroscope attached to the user's chest. Fall detection is performed by a threshold algorithm based on the fusion of the sensor data. The wearable sensor is an EnOcean node, which provides a wireless connection to a smart home automation system, according to the KNX standard, through the Home Assistant platform. Alarm telegrams are automatically sent when a fall is detected and fire an actuator that is part of the KNX system. In addition to validating EnOcean technology for use in wearable devices, the developed prototype did not indicate any false positives in tests performed with two users of different body characteristics, in which each of the eight types of movement (four fall movements and four non-fall movements) was reproduced 100 times. The tests done with the device revealed sensitivity and specificity of up to 96% and 100%, respectively.
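
As a rough illustration of the threshold-based detection described above: the work derives its limits from the fused chest-worn sensor data, whereas the values and windowing below are generic placeholders.

```python
import numpy as np

def detect_fall(acc, gyro, acc_thresh=2.5, gyro_thresh=240.0):
    """Flag a fall when total acceleration (in g) and angular rate (deg/s)
    both exceed fixed thresholds at the same sample (a simplification of a
    windowed check).

    acc  : (N, 3) accelerometer samples in g
    gyro : (N, 3) gyroscope samples in deg/s
    """
    acc_mag = np.linalg.norm(acc, axis=1)      # total acceleration magnitude
    gyro_mag = np.linalg.norm(gyro, axis=1)    # total angular-rate magnitude
    impact = acc_mag > acc_thresh              # impact-like spike
    rotation = gyro_mag > gyro_thresh          # rapid orientation change
    return bool(np.any(impact & rotation))
```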

Détection de personnes pour des systèmes de videosurveillance multi-caméra intelligents / People detection methods for intelligent multi-Camera surveillance systems

Mehmood, Muhammad Owais 28 September 2015 (has links)
La détection de personnes dans les vidéos est un défi bien connu du domaine de la vision par ordinateur avec un grand nombre d'applications telles que le développement de systèmes de surveillance visuels. Même si les détecteurs monoculaires sont plus simples à mettre en place, ils sont dans l'incapacité de gérer des scènes complexes avec des occultations, une grande densité de personnes ou des scènes avec beaucoup de profondeur de champ menant à une grande variabilité dans la taille des personnes. Dans cette thèse, nous étudions la détection de personnes multi-vues et notamment l'utilisation de cartes d'occupation probabilistes créées en fusionnant les différentes vues grâce à la connaissance de la géométrie du système. La détection à partir de ces cartes d'occupation amène cependant des fausses détections (appelées « fantômes ») dues aux différentes projections. Nous proposons deux nouvelles techniques afin de remédier à ce phénomène et améliorer la détection des personnes. La première utilise une déconvolution par un noyau dont la forme varie spatialement tandis que la seconde est basée sur un principe de validation d'hypothèse. Ces deux approches n'utilisent volontairement pas l'information temporelle qui pourra être réintroduite par la suite dans des algorithmes de suivi. Les deux approches ont été validées dans des conditions difficiles présentant des occultations, une densité de personnes plus ou moins élevée et de fortes variations dans les réponses colorimétriques des caméras. Une comparaison avec d'autres méthodes de l'état de l'art a également été menée sur trois bases de données publiques, validant les méthodes proposées pour la surveillance d'une gare et d'un aéroport. / People detection is a well-studied open challenge in the field of Computer Vision, with applications such as visual surveillance systems. Monocular detectors have a limited ability to handle occlusion, clutter, scale and density. The ubiquitous presence of cameras and computational resources fuels the development of multi-camera detection systems. In this thesis, we study multi-camera people detection; specifically, the use of multi-view probabilistic occupancy maps based on the camera calibration. Occupancy maps allow multi-view geometric fusion of several camera views. Detection with such maps creates several false detections ("ghosts"), and we study this phenomenon and the resulting ghost-pruning problem. Further, we propose two novel techniques to improve multi-view detection based on: (a) kernel deconvolution, and (b) occupancy shape modeling. We perform non-temporal, multi-view reasoning in occupancy maps to recover accurate positions of people in challenging conditions such as occlusion, clutter, lighting and camera variations. We show improvements in people detection across three challenging datasets for visual surveillance, including comparison with state-of-the-art techniques. We show the application of this work in exigent transportation scenarios, i.e. people detection for surveillance at a train station and at an airport.
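
Below is a coarse sketch of the ground-plane fusion idea behind such probabilistic occupancy maps, assuming per-camera foreground masks and known cell-to-pixel projections; the evidence values, projection interface and normalisation are placeholders, not the thesis's exact model.

```python
import numpy as np

def occupancy_map(fg_masks, ground_to_image, grid_shape):
    """Fuse foreground masks from several calibrated cameras into a
    ground-plane occupancy map.

    fg_masks        : list of binary foreground masks, one per camera
    ground_to_image : list of functions mapping a ground cell (i, j) to pixel
                      coordinates (u, v) in the matching camera, or None when
                      the cell is outside that camera's view
    grid_shape      : (rows, cols) of the ground-plane grid
    """
    occ = np.ones(grid_shape)
    for mask, project in zip(fg_masks, ground_to_image):
        h, w = mask.shape
        view = np.full(grid_shape, 0.5)          # uninformative prior when unseen
        for i in range(grid_shape[0]):
            for j in range(grid_shape[1]):
                uv = project(i, j)
                if uv is not None:
                    u, v = int(uv[0]), int(uv[1])
                    if 0 <= v < h and 0 <= u < w:
                        # soft evidence: a foreground pixel supports occupancy
                        view[i, j] = 0.9 if mask[v, u] else 0.1
        occ *= view                              # independent-view product fusion
    return occ / occ.max()
```

Spurious maxima in such a fused map are exactly the "ghosts" that the two proposed techniques, kernel deconvolution and occupancy shape modeling, aim to prune.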

Human Inspired Control System for an Unmanned Ground Vehicle

January 2015 (has links)
abstract: In this research work, a novel control system strategy for the robust control of an unmanned ground vehicle is proposed. This strategy is motivated by efforts to mitigate the problem for scenarios in which the human operator is unable to properly communicate with the vehicle. This novel control system strategy consisted of three major components: I.) two independent intelligent controllers, II.) an intelligent navigation system, and III.) an intelligent controller tuning unit. The inner workings of the first two components are based on the Brain Emotional Learning (BEL) model, a mathematical model of the amygdala-orbitofrontal system, a region of the mammalian brain known to be responsible for emotional learning. Simulation results demonstrated the implementation of the BEL model to be very robust, efficient, and adaptable to dynamical changes in its application as a controller and as a sensor fusion filter for an unmanned ground vehicle. These results were obtained with significantly less computational cost when compared to traditional methods for control and sensor fusion. For the intelligent controller tuning unit, the implementation of a human emotion recognition system was investigated. This system was utilized for the classification of driving behavior. Results from experiments showed that the affective states of the driver are accurately captured. However, the driver's affective state is not a good indicator of the driver's driving behavior. As a result, an alternative method for classifying driving behavior from the driver's brain activity was explored. This method proved to be successful at classifying the driver's behavior. It obtained results comparable to those of the common approach based on vehicle parameters. This alternative approach has the advantage of directly classifying driving behavior from the driver, which is of particular use in the UGV domain because the operator's information is readily available. The classified driving mode was used to tune the controllers' performance to a desired mode of operation. Such qualities are required for a contingency control system that would allow the vehicle to operate with no operator inputs. / Dissertation/Thesis / Doctoral Dissertation Engineering 2015
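
For orientation, a generic Brain Emotional Learning unit of the kind referenced above can be sketched as two learned pathways: an excitatory amygdala path driven by a reward-like cue and an inhibitory orbitofrontal path that corrects over-reaction. The update rules, class layout and learning rates below follow the commonly cited BEL formulation and are only an assumed illustration, not this dissertation's implementation.

```python
import numpy as np

class BELController:
    """Simplified Brain Emotional Learning (BEL) unit."""

    def __init__(self, n_inputs, alpha=0.1, beta=0.05):
        self.v = np.zeros(n_inputs)   # amygdala gains
        self.w = np.zeros(n_inputs)   # orbitofrontal (inhibitory) gains
        self.alpha = alpha            # amygdala learning rate
        self.beta = beta              # orbitofrontal learning rate

    def step(self, s, reward):
        """s: sensory input vector, reward: emotional cue signal."""
        a = self.v @ s                # amygdala response
        o = self.w @ s                # orbitofrontal response
        out = a - o                   # model output (control signal)
        # The amygdala only learns to increase its response (monotonic learning).
        self.v += self.alpha * s * max(0.0, reward - a)
        # The orbitofrontal path tracks the mismatch and inhibits over-reaction.
        self.w += self.beta * s * (out - reward)
        return out
```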

Localização e planejamento de caminhos para um robô humanóide e um robô escravo com rodas

Santana, André Macêdo 10 July 2007 (has links)
Made available in DSpace on 2014-12-17T14:55:02Z (GMT). No. of bitstreams: 1 AndreMS.pdf: 553881 bytes, checksum: b2c5c5f17b0c8205f6f6da967252fae4 (MD5) Previous issue date: 2007-07-10 / This work presents the localization and path planning systems for two robots: a non-instrumented humanoid and a slave wheeled robot. The localization of the wheeled robot is obtained using odometry information and landmark detection. This information is fused using an Extended Kalman Filter. The relative position of the humanoid is acquired by fusing (using another Kalman Filter) the wheeled robot pose with the characteristics of the landmark on the back of the humanoid. Knowing the wheeled robot position and the humanoid's position relative to it, we obtain the absolute position of the humanoid. The path planning system was developed to provide the cooperative movement of the two robots, incorporating the visibility restrictions of the robotic system. / Esse trabalho apresentará os sistemas de localização e planejamento de caminho para um sistema robótico formado por um humanóide não instrumentado e um robô escravo com rodas. O objetivo do sistema é efetuar a navegação do humanóide, que não possui sensores mas que pode ser remotamente controlado por infra-vermelhos, utilizando um robô escravo com rodas. O robô com rodas deverá se posicionar atrás do humanóide e, através da imagem, estabelecer o posicionamento relativo do humanóide em relação a ele. A localização do robô com rodas será obtida fundindo informações de odometria e detecção de marcos utilizando o Filtro de Kalman Extendido. A posição relativa do humanóide será encontrada a partir da fusão da pose do robô com rodas juntamente com as características do marco fixado nas costas do humanóide utilizando outro Filtro de Kalman. Sabendo a posição do robô com rodas e a posição relativa do humanóide em relação a ele tem-se a posição absoluta do humanóide. O planejador de caminho foi desenvolvido de forma a proporcionar a movimentação cooperativa dos dois robôs, incorporando as restrições de visibilidade existente do sistema robótico.
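
The record states that the humanoid's absolute pose follows from composing the wheeled robot's estimated pose with the relative pose measured from the landmark on the humanoid's back. A planar sketch of that composition is shown below; the 2-D (x, y, theta) state layout is an assumption for illustration.

```python
import numpy as np

def humanoid_absolute_pose(robot_pose, rel_pose):
    """Compose the wheeled robot's absolute pose (x, y, theta) with the
    humanoid's pose measured relative to the robot, giving the humanoid's
    absolute pose in the same frame (planar case)."""
    x, y, th = robot_pose
    rx, ry, rth = rel_pose
    c, s = np.cos(th), np.sin(th)
    return np.array([x + c * rx - s * ry,    # rotate the relative offset into
                     y + s * rx + c * ry,    # the world frame and translate
                     th + rth])              # headings simply add
```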

Filtres de Kalman étendus reposant sur une variable d'erreur non linéaire avec applications à la navigation / Non-linear state error based extended Kalman filters with applications to navigation

Barrau, Axel 15 September 2015 (has links)
Cette thèse étudie l'utilisation de variables d'erreurs non linéaires dans la conception de filtres de Kalman étendus (EKF). La théorie des observateurs invariants sur les groupes de Lie sert de point de départ au développement d'un cadre plus général mais aussi plus simple, fournissant des variables d'erreur non linéaires assurant la propriété nouvelle et surprenante de suivre une équation différentielle (partiellement) linéaire. Ce résultat est mis à profit pour prouver, sous des hypothèses naturelles d'observabilité, la stabilité de l'EKF invariant (IEKF) une fois adapté à la classe de systèmes (non-invariants) introduite. Le gain de performance remarquable par rapport à l'EKF classique est illustré par des applications à des problèmes industriels réels, réalisées en partenariat avec l'entreprise SAGEM. Dans une seconde approche, les variables d'erreurs sont étudiées en tant que processus stochastiques. Pour les observateurs convergeant globalement si les bruits sont ignorés, on montre que les ajouter conduit la variable d'erreur à converger en loi vers une distribution limite indépendante de l'initialisation. Ceci permet de choisir des gains à l'avance en optimisant la densité asymptotique. La dernière approche adoptée consiste à prendre un peu de recul vis-à-vis des groupes de Lie, et à étudier les EKF utilisant des variables d'erreur non linéaires de façon générale. Des propriétés globales nouvelles sont obtenues. En particulier, on montre que ces méthodes permettent de résoudre le célèbre problème de fausse observabilité créé par l'EKF s'il est appliqué aux questions de localisation et cartographie simultanées (SLAM). / The present thesis explores the use of non-linear state errors to devise extended Kalman filters (EKFs). First we depart from the theory of invariant observers on Lie groups and propose a more general yet simpler framework allowing us to obtain non-linear error variables having the novel unexpected property of being governed by a (partially) linear differential equation. This result is leveraged to ensure local stability of the invariant EKF (IEKF) under standard observability assumptions, when extended to this class of (non-invariant) systems. Real applications to some industrial problems in partnership with the company SAGEM illustrate the remarkable performance gap over the conventional EKF. A second route we investigate is to turn the noise on and consider the invariant errors as stochastic processes. Convergence in law of the error to a fixed probability distribution, independent of the initialization, is obtained if the error with noise turned off is globally convergent, which in turn allows one to assess gains in advance that minimize the error's asymptotic dispersion. The last route consists in stepping back a little and exploring general EKFs (beyond the Lie group case) relying on a non-linear state error. Novel mathematical (global) properties are derived. In particular, these methods are shown to remedy the famous problem of false observability created by the EKF if applied to simultaneous localization and mapping (SLAM), which is a novel result.
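
For readers unfamiliar with the invariant-EKF setting, the kind of non-linear error variable discussed here can be written, in the standard notation of the invariant-observer literature (not reproduced from the thesis itself), as:

```latex
% Illustrative only: standard invariant-EKF notation.
\begin{aligned}
  \eta_t &= \hat{X}_t^{-1} X_t
    &&\text{left-invariant error between true state $X_t$ and estimate $\hat{X}_t$}\\[2pt]
  \dot{X}_t &= f_{u_t}(X_t), \qquad
  f_u(ab) = f_u(a)\,b + a\,f_u(b) - a\,f_u(\mathrm{Id})\,b
    &&\text{group-affine dynamics}\\[2pt]
  \dot{\eta}_t &= g_{u_t}(\eta_t) := f_{u_t}(\eta_t) - f_{u_t}(\mathrm{Id})\,\eta_t
    &&\text{error dynamics, independent of $\hat{X}_t$}
\end{aligned}
```

The property exploited by the IEKF is that, for group-affine dynamics, this error evolves independently of the estimated trajectory, which is what makes the stability guarantees and the SLAM consistency result possible.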

Gaining Depth : Time-of-Flight Sensor Fusion for Three-Dimensional Video Content Creation

Schwarz, Sebastian January 2014 (has links)
The successful revival of three-dimensional (3D) cinema has generated a great deal of interest in 3D video. However, contemporary eyewear-assisted displaying technologies are not well suited for the less restricted scenarios outside movie theaters. The next generation of 3D displays, autostereoscopic multiview displays, overcome the restrictions of traditional stereoscopic 3D and can provide an important boost for 3D television (3DTV). Then again, such displays require scene depth information in order to reduce the amount of necessary input data. Acquiring this information is quite complex and challenging, thus restricting content creators and limiting the amount of available 3D video content. Nonetheless, without broad and innovative 3D television programs, even next-generation 3DTV will lack customer appeal. Therefore simplified 3D video content generation is essential for the medium's success. This dissertation surveys the advantages and limitations of contemporary 3D video acquisition. Based on these findings, a combination of dedicated depth sensors, so-called Time-of-Flight (ToF) cameras, and video cameras, is investigated with the aim of simplifying 3D video content generation. The concept of Time-of-Flight sensor fusion is analyzed in order to identify suitable courses of action for high quality 3D video acquisition. In order to overcome the main drawback of current Time-of-Flight technology, namely the high sensor noise and low spatial resolution, a weighted optimization approach for Time-of-Flight super-resolution is proposed. This approach incorporates video texture, measurement noise and temporal information for high quality 3D video acquisition from a single video plus Time-of-Flight camera combination. Objective evaluations show benefits with respect to state-of-the-art depth upsampling solutions. Subjective visual quality assessment confirms the objective results, with a significant increase in viewer preference by a factor of four. Furthermore, the presented super-resolution approach can be applied to other applications, such as depth video compression, providing bit rate savings of approximately 10 percent compared to competing depth upsampling solutions. The work presented in this dissertation has been published in two scientific journals and five peer-reviewed conference proceedings.  In conclusion, Time-of-Flight sensor fusion can help to simplify 3D video content generation, consequently supporting a larger variety of available content. Thus, this dissertation provides important inputs towards broad and innovative 3D video content, hopefully contributing to the future success of next-generation 3DTV.
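
Weighted-optimization depth upsampling of the kind described above is often posed as minimising an energy of the following general form; the specific terms and weights used in the dissertation may differ, so this is only an illustrative template.

```latex
% Illustrative template only; the thesis's exact formulation may differ.
E(D) = \sum_{p} w_{\mathrm{noise}}(p)\,\bigl(D(p) - D_{\mathrm{ToF}}(p)\bigr)^{2}
     + \lambda \sum_{p} \sum_{q \in \mathcal{N}(p)} w_{\mathrm{tex}}(p,q)\,\bigl(D(p) - D(q)\bigr)^{2}
     + \mu \sum_{p} \bigl(D(p) - D_{t-1}(p)\bigr)^{2}
```

Here D is the upsampled depth map: the first term anchors it to the noise-weighted Time-of-Flight measurements, the second discourages smoothing across video-texture edges, and the third enforces temporal consistency with the previous frame, mirroring the video texture, measurement noise and temporal information mentioned in the abstract.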
