21

3-D Face Recognition using the Discrete Cosine Transform (DCT)

Hantehzadeh, Neda 01 January 2009 (has links)
Face recognition can be used in various biometric applications, ranging from identifying criminals entering an airport to identifying an unconscious patient in a hospital. With the introduction of 3-dimensional scanners in the last decade, researchers have begun to develop new methods for 3-D face recognition. This thesis focuses on 3-D face recognition using the one- and two-dimensional Discrete Cosine Transform (DCT). A feature-ranking-based dimensionality reduction strategy is introduced to select the DCT coefficients that yield the best classification accuracies. Two forms of 3-D representation are used: point cloud and depth map images. These representations are extracted from the original VRML files in a face database and are normalized during the extraction process. Classification accuracies exceeding 97% are obtained using the point cloud images in conjunction with the 2-D DCT.
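As an illustration of this kind of feature extraction, the sketch below computes a 2-D DCT of each depth-map image and keeps only a handful of top-ranked coefficients; the variance-based ranking and the nearest-neighbour classifier are assumptions made for the example, not the thesis's exact pipeline (Python with NumPy/SciPy).

```python
import numpy as np
from scipy.fft import dctn

def dct_features(depth_maps):
    """depth_maps: (N, H, W) array of normalised depth-map (or point-cloud) images."""
    return np.stack([dctn(img, norm="ortho").ravel() for img in depth_maps])

def select_coefficients(train_feats, n_keep=200):
    # rank DCT coefficients by their variance over the training set (assumed criterion)
    return np.argsort(train_feats.var(axis=0))[::-1][:n_keep]

def classify(train_feats, train_labels, test_feat, keep):
    # 1-nearest-neighbour match in the reduced coefficient space
    dists = np.linalg.norm(train_feats[:, keep] - test_feat[keep], axis=1)
    return train_labels[int(np.argmin(dists))]
```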
22

[en] GENERATING SUPERRESOLVED DEPTH MAPS USING LOW COST SENSORS AND RGB IMAGES / [pt] GERAÇÃO DE MAPAS DE PROFUNDIDADE SUPER-RESOLVIDOS A PARTIR DE SENSORES DE BAIXO CUSTO E IMAGENS RGB

LEANDRO TAVARES ARAGAO DOS SANTOS 11 January 2017 (has links)
Applications of three-dimensional reconstruction of real scenes are numerous. The advent of low-cost depth sensors such as the Kinect suggests the development of reconstruction systems cheaper than those that already exist. However, the data provided by this device still fall far short of those produced by more sophisticated systems. In academia and industry, initiatives such as those of Tong et al. [1] and Cui et al. [2] set out to address this problem. Building on a study of that work, this thesis modifies the super-resolution algorithm described by Mitzel et al. [3] so that its computations also take into account the color images provided by the device, following the approach of Cui et al. [2]. This change improves the super-resolved depth maps produced, mitigating interference caused by sudden movements in the captured scene. The tests performed confirm the improvement of the generated maps and analyze the impact of CPU and GPU implementations of the algorithms in this super-resolution step. The work is restricted to this step; the subsequent stages of 3D reconstruction were not implemented.
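One common way of letting the RGB stream steer depth super-resolution, sketched here purely as an illustration of the idea rather than the thesis's modified Mitzel et al. algorithm, is joint bilateral upsampling of the low-resolution Kinect depth with the registered colour image as guide (assumes OpenCV with the opencv-contrib ximgproc module).

```python
import numpy as np
import cv2

def guided_depth_upsample(depth_lr, rgb_hr, sigma_space=5.0, sigma_color=0.1):
    """depth_lr: low-res Kinect depth; rgb_hr: registered high-res colour image (BGR uint8)."""
    h, w = rgb_hr.shape[:2]
    depth_up = cv2.resize(depth_lr, (w, h), interpolation=cv2.INTER_LINEAR).astype(np.float32)
    guide = cv2.cvtColor(rgb_hr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    # smooth the upsampled depth while respecting colour edges in the guide image
    return cv2.ximgproc.jointBilateralFilter(guide, depth_up, d=-1,
                                             sigmaColor=sigma_color, sigmaSpace=sigma_space)
```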
23

Structure from Forward Motion / 3D-struktur från framåtrörelse

Svensson, Fredrik January 2010 (has links)
This master's thesis investigates the difficulties of constructing a depth map using a single low-resolution grayscale camera mounted at the front of a car. The goal is to produce a depth map in real time to assist other algorithms in the car's safety system. This has proven difficult with the evaluated combination of camera position and choice of algorithms. The main problem is estimating an accurate optical flow; another is handling moving objects. The conclusion is that the implementations, mainly triangulation of corresponding points tracked with a Lucas-Kanade tracker, provide information of too poor quality to be useful for the car's safety system.
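A minimal sketch of the evaluated pipeline, tracking points with a pyramidal Lucas-Kanade tracker and triangulating them against the known forward motion; the calibration matrix K and the relative pose (R, t) from the vehicle's ego-motion are assumed to be available.

```python
import numpy as np
import cv2

def depth_from_forward_motion(img0, img1, K, R, t):
    """img0, img1: consecutive grayscale frames (uint8); (R, t): motion from frame 0 to frame 1."""
    pts0 = cv2.goodFeaturesToTrack(img0, maxCorners=500, qualityLevel=0.01, minDistance=7)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, pts0, None)
    ok = status.ravel() == 1
    good0 = pts0[ok].reshape(-1, 2).T.astype(np.float64)   # 2xN points in frame 0
    good1 = pts1[ok].reshape(-1, 2).T.astype(np.float64)   # 2xN tracked points in frame 1
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])      # projection matrix of frame 0
    P1 = K @ np.hstack([R, t.reshape(3, 1)])               # projection matrix of frame 1
    X = cv2.triangulatePoints(P0, P1, good0, good1)        # homogeneous 4xN points
    X = X[:3] / X[3]
    return good0.T, X[2]                                   # image points and their depths
```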
24

Gaining Depth: Time-of-Flight Sensor Fusion for Three-Dimensional Video Content Creation

Schwarz, Sebastian January 2014 (has links)
The successful revival of three-dimensional (3D) cinema has generated a great deal of interest in 3D video. However, contemporary eyewear-assisted displaying technologies are not well suited for the less restricted scenarios outside movie theaters. The next generation of 3D displays, autostereoscopic multiview displays, overcome the restrictions of traditional stereoscopic 3D and can provide an important boost for 3D television (3DTV). At the same time, such displays require scene depth information in order to reduce the amount of necessary input data. Acquiring this information is quite complex and challenging, thus restricting content creators and limiting the amount of available 3D video content. Nonetheless, without broad and innovative 3D television programs, even next-generation 3DTV will lack customer appeal. Therefore, simplified 3D video content generation is essential for the medium's success. This dissertation surveys the advantages and limitations of contemporary 3D video acquisition. Based on these findings, a combination of dedicated depth sensors, so-called Time-of-Flight (ToF) cameras, and video cameras is investigated with the aim of simplifying 3D video content generation. The concept of Time-of-Flight sensor fusion is analyzed in order to identify suitable courses of action for high quality 3D video acquisition. In order to overcome the main drawbacks of current Time-of-Flight technology, namely high sensor noise and low spatial resolution, a weighted optimization approach for Time-of-Flight super-resolution is proposed. This approach incorporates video texture, measurement noise and temporal information for high quality 3D video acquisition from a single video plus Time-of-Flight camera combination. Objective evaluations show benefits with respect to state-of-the-art depth upsampling solutions. Subjective visual quality assessment confirms the objective results, with a significant increase in viewer preference by a factor of four. Furthermore, the presented super-resolution approach can be applied to other applications, such as depth video compression, providing bit rate savings of approximately 10 percent compared to competing depth upsampling solutions. The work presented in this dissertation has been published in two scientific journals and five peer-reviewed conference proceedings. In conclusion, Time-of-Flight sensor fusion can help to simplify 3D video content generation, consequently supporting a larger variety of available content. Thus, this dissertation provides important inputs towards broad and innovative 3D video content, hopefully contributing to the future success of next-generation 3DTV.
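The sketch below illustrates the flavour of such a weighted optimisation: ToF samples are trusted according to an assumed per-pixel confidence, smoothing is damped across video-texture edges, and a few Jacobi-style sweeps stand in for the dissertation's full optimisation (which also exploits temporal information). Parameter values are illustrative.

```python
import numpy as np
import cv2

def weighted_tof_upsample(depth_lr, conf_lr, video_hr, iters=100, lam=0.1):
    """depth_lr, conf_lr: low-res ToF depth and confidence; video_hr: registered colour frame."""
    h, w = video_hr.shape[:2]
    d = cv2.resize(depth_lr, (w, h), interpolation=cv2.INTER_NEAREST).astype(np.float32)
    c = cv2.resize(conf_lr, (w, h), interpolation=cv2.INTER_NEAREST).astype(np.float32)
    gray = cv2.cvtColor(video_hr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gy, gx = np.gradient(gray)
    w_edge = np.exp(-0.05 * np.hypot(gx, gy))       # suppress smoothing across texture edges
    x = d.copy()
    for _ in range(iters):
        # mean of the 4-neighbourhood (simple wrap-around borders for brevity)
        nb = (np.roll(x, 1, 0) + np.roll(x, -1, 0) + np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4.0
        # balance the confidence-weighted data term against the edge-weighted smoothness term
        x = (c * d + lam * w_edge * nb) / (c + lam * w_edge + 1e-8)
    return x
```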
25

Dynamická prezentace fotografií s využitím hloubkové mapy / Dynamic Image Presentations Using Depth Maps

Hanzlíček, Jiří January 2019 (has links)
This master's thesis focuses on the dynamic presentation of still photographs using a depth map. It presents an algorithm for creating a spatial model that is used to render the input photograph so that the movement of a virtual camera creates a parallax effect arising from depth in the image. The thesis also presents an approach for filling in the missing data in the model: guided texture synthesis is used for this problem, with renderings of the model itself serving as guides. The additional information in the model allows the virtual camera to move more freely. The resulting camera movement can be saved as a simple video sequence for presenting the input photograph.
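A toy version of the parallax rendering step: each pixel is shifted in proportion to its inverse depth as the virtual camera translates, and the disocclusion holes that appear are exactly the missing data the thesis fills with guided texture synthesis. Focal length and camera offset are illustrative values.

```python
import numpy as np

def render_parallax(image, depth, cam_offset=0.02, focal=800.0):
    """image: (H, W, 3) photograph; depth: (H, W) depth map in scene units."""
    h, w = depth.shape
    out = np.zeros_like(image)
    ys, xs = np.mgrid[0:h, 0:w]
    shift = np.round(cam_offset * focal / np.maximum(depth, 1e-3)).astype(int)
    xs_new = np.clip(xs + shift, 0, w - 1)
    order = np.argsort(depth.ravel())[::-1]       # splat far-to-near so near pixels win
    out[ys.ravel()[order], xs_new.ravel()[order]] = image[ys.ravel()[order], xs.ravel()[order]]
    return out                                    # unwritten (zero) pixels are disocclusion holes
```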
26

Vybrané problémy analýzy fotogrammetrických systémů / Selected Problems in Photogrammetric Systems Analysis

Boleček, Libor January 2015 (has links)
This dissertation deals with selected topics in digital photogrammetry. The first part of the thesis defines the subject and describes the current state of the art. The following chapters address four partial, mutually related goals. The first area is the design of a method for finding corresponding points in images; two new methods were proposed, the first converting the images into false colors and the second using a probabilistic model obtained from known pairs of corresponding points. The second topic is the analysis of the accuracy of the resulting reconstruction of spatial points. The influence of various factors on reconstruction accuracy is analyzed in turn, with the key area being the effect of camera misalignment and of errors in determining corresponding points. The third topic is the creation of depth maps, for which two procedures were proposed: the first combines a passive and an active method, while the second builds on a passive method and exploits the continuity of the depth map. The last chosen area of interest is the assessment of 3D video quality; subjective tests of 3D perception were carried out and statistically evaluated for various display systems as a function of viewing angle.
27

Analyse quantifiée de l'asymétrie de la marche par application de Poincaré / Quantified analysis of gait asymmetry using the Poincaré map

Brignol, Arnaud 08 1900 (has links)
Gait plays an important part in daily life, and the process appears easy and natural for healthy people. However, different kinds of disorders (neurological, muscular, orthopedic...) can impede the gait cycle to such an extent that walking becomes tedious or even impossible. This project applies Poincaré plot analysis to assess the gait asymmetry of a patient from a depth map acquired with a Kinect sensor. To validate the approach, 17 healthy subjects walked on a treadmill under two conditions: normal walking, and walking with a 5 cm thick sole under one foot. The Poincaré descriptors are applied so as to assess the variability between a step and the corresponding complete gait cycle. The results show that the variability obtained in this way significantly discriminates a normal walk from a walk with a sole. This method, both simple to implement and precise enough to detect gait asymmetry, seems promising as an aid to clinical diagnosis.
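For reference, the standard Poincaré descriptors can be computed in a few lines; the step-duration series below is made up, and the exact pairing of a step against the full gait cycle follows the thesis rather than this generic lag-one version.

```python
import numpy as np

def poincare_descriptors(series):
    """series: per-step measurements, e.g. step durations in seconds."""
    x, y = np.asarray(series[:-1], float), np.asarray(series[1:], float)
    sd1 = np.std((y - x) / np.sqrt(2))   # short-term variability (width of the Poincare cloud)
    sd2 = np.std((y + x) / np.sqrt(2))   # long-term variability (length of the cloud)
    return sd1, sd2, sd1 / sd2           # the ratio is one common variability/asymmetry index

# made-up example: alternating short and long steps suggest an asymmetric gait
print(poincare_descriptors([0.52, 0.61, 0.50, 0.63, 0.51, 0.62]))
```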
28

A Book Reader Design for Persons with Visual Impairment and Blindness

Galarza, Luis E. 16 November 2017 (has links)
The objective of this dissertation is to provide a new design approach to a fully automated book reader for individuals with visual impairment and blindness that is portable and cost effective. This approach relies on the geometry of the design setup and provides the mathematical foundation for integrating, in a unique way, a 3-D space surface map from a low-resolution time-of-flight (ToF) device with a high-resolution image as a means to enhance the reading accuracy of images warped by the page curvature of bound books and magazines. The merits of this low-cost but effective automated book reader design include: (1) a seamless registration process of the two imaging modalities, so that the low-resolution (160 x 120 pixels) height map acquired by an Argos3D-P100 camera accurately covers the entire book spread as captured by the high-resolution image (3072 x 2304 pixels) of a Canon G6 camera; (2) a mathematical framework for overcoming the difficulties associated with the curvature of open bound books, a process referred to as the dewarping of the book spread images; and (3) an image correction performance comparison between uniform and full height maps to determine which map provides the highest Optical Character Recognition (OCR) reading accuracy possible. The design concept could also be applied to the challenging process of book digitization. The method depends on the geometry of the book reader setup for acquiring a 3-D map that yields high reading accuracy once appropriately fused with the high-resolution image. The experiments were performed on a dataset of 200 pages with their corresponding computed and co-registered height maps, which are made available to the research community (cate-book3dmaps.fiu.edu). Improvements to the character reading accuracy due to the correction steps were quantified by introducing the corrected images to an OCR engine and tabulating the number of misrecognized characters. Furthermore, the resilience of the book reader was tested by introducing a rotational misalignment to the book spreads and comparing the OCR accuracy to that obtained with the standard alignment. The standard alignment yielded an average reading accuracy of 95.55% with the uniform height map (i.e., the height values of the central row of the 3-D map are replicated to approximate all other rows), and 96.11% with the full height maps (i.e., each row has its own height values as obtained from the 3-D camera). When the rotational misalignments were taken into account, the results produced average accuracies of 90.63% and 94.75% for the same respective height maps, proving the added resilience of the full height map method to potential misalignments.
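A minimal sketch of the registration idea: the 160 x 120 height map is warped onto the 3072 x 2304 image with a homography estimated from a set of cross-modal correspondences (assumed to be given here), so that every high-resolution pixel receives an interpolated page height for the subsequent dewarping. This is an illustration of the fusion step only, not the dissertation's full geometric model.

```python
import numpy as np
import cv2

def register_height_map(height_lr, pts_tof, pts_cam, hi_res_shape=(2304, 3072)):
    """pts_tof, pts_cam: Nx2 corresponding points in the ToF map and the camera image."""
    H, _ = cv2.findHomography(np.float32(pts_tof), np.float32(pts_cam), cv2.RANSAC)
    rows, cols = hi_res_shape
    # warp and interpolate so every high-resolution pixel gets a page-height estimate
    return cv2.warpPerspective(height_lr.astype(np.float32), H, (cols, rows),
                               flags=cv2.INTER_CUBIC)
```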
29

Dense Depth Map Estimation For Object Segmentation In Multi-view Video

Cigla, Cevahir 01 August 2007 (has links) (PDF)
In this thesis, novel approaches for dense depth field estimation and object segmentation from mono, stereo and multiple views are presented. In the first stage, a novel graph-theoretic color segmentation algorithm is proposed, in which the popular Normalized Cuts [6] segmentation algorithm is improved with some modifications to its graph structure. Segmentation is obtained by recursive partitioning of the weighted graph. Simulation results comparing the proposed segmentation scheme with some well-known segmentation methods, such as Recursive Shortest Spanning Tree [3], Mean-Shift [4] and conventional Normalized Cuts, show clear improvements over these traditional methods. The proposed region-based approach is also utilized during the dense depth map estimation step, based on a novel modified plane- and angle-sweeping strategy. In the proposed dense depth estimation technique, the whole scene is assumed to be region-wise planar, and 3D models of these plane patches are estimated by a greedy search algorithm that also considers a visibility constraint. In order to refine the depth maps and relax the planarity assumption, two refinement techniques, based on region splitting and on pixel-based optimization via Belief Propagation [32], are applied in the final step. Finally, the image segmentation algorithm is extended to object segmentation in multi-view video using the additional depth and optical flow information. Optical flow is estimated via two different methods, a KLT tracker and region-based block matching, and the two are compared. The experimental results indicate an improvement in segmentation performance from the use of depth and motion information.
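As a simplified illustration of the sweeping idea, the sketch below performs a fronto-parallel plane sweep: each depth hypothesis induces a homography, the neighbouring view is warped onto the reference view, and the per-pixel winning hypothesis forms the dense map. The thesis sweeps planes per segmented region and enforces a visibility constraint, both of which this toy omits; (R, t) is assumed to map reference-camera coordinates to the neighbouring camera.

```python
import numpy as np
import cv2

def plane_sweep(ref, nbr, K, R, t, depths):
    """ref, nbr: grayscale views; (R, t): reference-to-neighbour transform; depths: hypotheses."""
    h, w = ref.shape
    cost = np.zeros((len(depths), h, w), np.float32)
    n = np.array([[0.0, 0.0, 1.0]])                        # fronto-parallel plane normal
    for i, d in enumerate(depths):
        # homography induced by the plane n.X = d in the reference frame
        H = K @ (R + t.reshape(3, 1) @ n / d) @ np.linalg.inv(K)
        warped = cv2.warpPerspective(nbr, H, (w, h),
                                     flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        diff = np.abs(ref.astype(np.float32) - warped.astype(np.float32))
        cost[i] = cv2.blur(diff, (5, 5))                   # window-aggregated photometric cost
    return np.asarray(depths)[np.argmin(cost, axis=0)]     # per-pixel winning depth
```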
30

Depth-based 3D videos: quality measurement and synthesized view enhancement

Solh, Mashhour M. 13 December 2011 (has links)
Three-dimensional television (3DTV) is believed to be the future of television broadcasting, replacing current 2D HDTV technology. In the future, 3DTV will bring a more life-like and visually immersive home entertainment experience, in which users will have the freedom to navigate through the scene and choose a different viewpoint. A desired view can be synthesized at the receiver side using depth image-based rendering (DIBR). While this approach has many advantages, one of the key challenges in DIBR is generating high quality synthesized views. This work presents novel methods to measure and enhance the quality of 3D videos generated through DIBR. For quality measurement, we describe a novel method to characterize and measure distortions introduced by the multiple cameras used to capture stereoscopic images. In addition, we present an objective quality measure for DIBR-based 3D videos by evaluating the elements of visual discomfort in stereoscopic 3D videos. We also introduce a new concept called the ideal depth estimate, and define the tools to estimate that depth. Full-reference and no-reference profiles for calculating the proposed measures are also presented. Moreover, we introduce two innovative approaches to improve the quality of the synthesized views generated by DIBR. The first approach is based on hierarchical blending of the background and foreground information around the disocclusion areas, which produces a natural-looking synthesized view with seamless hole-filling. This approach yields virtual images that are free of any geometric distortions, unlike other algorithms that preprocess the depth map, and, in contrast to other hole-filling approaches, it is not sensitive to depth maps with a high percentage of bad pixels from stereo matching. The second approach further enhances the results through a depth-adaptive preprocessing of the color images. Finally, we propose an enhancement to depth estimation that uses monocular depth cues from luminance and chrominance. The estimated depth is evaluated using our quality measure, and the hole-filling algorithm is used to generate synthesized views, demonstrating how our quality measures and enhancement algorithms can help in the development of high quality stereoscopic depth-based synthesized videos.
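A bare-bones DIBR warp for context: the reference view is shifted per pixel by a disparity derived from its depth, a z-buffer resolves overlaps, and the disoccluded pixels left as holes are what the dissertation's hierarchical hole-filling then synthesises. Baseline and focal length are illustrative values.

```python
import numpy as np

def dibr_warp(color, depth, baseline=0.05, focal=1000.0):
    """color: (H, W, 3) reference view; depth: (H, W) depth map in scene units."""
    h, w = depth.shape
    virt = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf, np.float32)
    disp = baseline * focal / np.maximum(depth, 1e-3)      # per-pixel disparity
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.round(xs - disp).astype(int)                   # target column in the virtual view
    valid = (xt >= 0) & (xt < w)
    for y, x, xv in zip(ys[valid], xs[valid], xt[valid]):
        if depth[y, x] < zbuf[y, xv]:                      # z-buffer: keep the nearest surface
            zbuf[y, xv] = depth[y, x]
            virt[y, xv] = color[y, x]
    return virt, np.isinf(zbuf)                            # virtual view and disocclusion mask
```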
