  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Fully digital, phase-domain ΔΣ 3D range image sensor in 130nm CMOS imaging technology

Walker, Richard John January 2012 (has links)
Three-Dimensional (3D) optical range-imaging is a field experiencing rapid growth, expanding into a wide variety of machine vision applications, most recently including consumer gaming. Time of Flight (ToF) cameras, akin to RADAR with light, sense distance by measuring the round trip time of modulated Infra-Red (IR) illumination light projected into the scene and reflected back to the camera. Such systems generate 'depth maps' without requiring the complex processing utilised by other 3D imaging techniques such as stereo vision and structured light. Existing range-imaging solutions within the ToF category either perform demodulation in the analogue domain, and are therefore susceptible to noise and non-uniformities, or digitally detect individual photons using a Single Photon Avalanche Diode (SPAD), generating large volumes of raw data. In both cases, external processing is required to calculate a distance estimate from this raw information. To address these limitations, this thesis explores alternative system architectures for ToF range imaging. Specifically, a new pixel concept is presented, coupling a SPAD for accurate detection of the arrival time of photons to an all-digital Phase-Domain Delta-Sigma (PDΔΣ) loop for the first time. This loop processes the SPAD pulses locally, converging on an estimate of the mean phase of the incoming photons with respect to the outgoing illumination light. A 128×96 pixel sensor was created to demonstrate this principle. By incorporating all of the steps in the range-imaging process – from time-resolved photon detection with SPADs, through phase extraction with the in-pixel phase-domain ΔΣ loop, to depth map creation with on-chip decimation filters – this sensor is the first fully integrated 3D camera-on-a-chip to be published. It is implemented in a 130nm CMOS imaging process, the most modern technology used in 3D imaging work presented to date, enabled by the recent availability of a very low noise SPAD structure in this process. Excellent linearity of ±5mm is obtained, although the 1σ repeatability error was limited to 160mm by a number of factors. While the dimensions of the current pixel prevent the implementation of very high resolution arrays, the all-digital nature of this technique will scale well if manufactured in a more advanced CMOS imaging process such as the 90nm or 65nm nodes, and repartitioning of the logic could enhance fill factor further. The presented characterisation results nevertheless serve as a first validation of a new concept in 3D range-imaging, and proposals for its future refinement are presented.
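
The phase measurement underlying such a ToF pixel reduces to a simple phase-to-distance relation. A minimal sketch follows (Python; the 20 MHz modulation frequency is an illustrative assumption, not a value from the thesis):

    import math

    C = 299_792_458.0  # speed of light, m/s

    def phase_to_distance(phase_rad, f_mod_hz):
        # The round trip adds a phase delay of 4*pi*f_mod*d/c, hence
        # d = c * phase / (4*pi*f_mod); range is unambiguous up to c/(2*f_mod).
        return C * phase_rad / (4.0 * math.pi * f_mod_hz)

    f_mod = 20e6                                   # assumed 20 MHz modulation
    print(phase_to_distance(math.pi / 2, f_mod))   # ~1.87 m
    print(C / (2 * f_mod))                         # unambiguous range ~7.5 m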
2

Concurrent validity and reliability of a time-of-flight camera on measuring muscle's mechanical properties during sprint running

Stattin, Sebastian January 2019 (has links)
Recent advancements in 3D data gathering have made it possible to measure the distance to an object at different time stamps through the use of time-of-flight cameras. The purpose of this study was therefore to investigate the validity and reliability of a time-of-flight camera on different mechanical sprint properties of the muscle. Fifteen male football players performed four 30 m maximal sprint bouts, which were simultaneously recorded with a time-of-flight camera and a 1080 Sprint device. By fitting an exponential function to the collected position- and velocity-time data from both devices, the following variables were derived and analyzed: maximal velocity (vmax), time constant (t), theoretical maximal force (F0), theoretical maximal velocity (V0), peak power output (Pmax), F-V mechanical profile (Sfv) and decrease in ratio of force (Drf). The results showed strong correlation in vmax along with a fairly small standard error of estimate (SEE) (r = 0.817, SEE = 0.27 m/s), while t displayed moderate correlation and a relatively high SEE (r = 0.620, SEE = 0.12 s). Furthermore, moderate mean bias (>5%) was revealed for most of the variables, except for vmax and V0. The within-session reliability, assessed with the intraclass correlation coefficient (ICC) and standard error of measurement (SEM), ranged from excellent to poor: Pmax displayed excellent reliability (ICC = 0.91, SEM = 72 W), while vmax demonstrated moderate reliability (ICC = 0.61, SEM = 0.26 m/s) and t poor reliability (ICC = 0.44, SEM = 0.11 s). In conclusion, these findings showed that in its current state, the time-of-flight camera is not a reliable or valid device for estimating different mechanical properties of the muscle during sprint running using Samozino et al.'s computations. Further development is needed.
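
For context, Samozino-style analyses derive the F-V variables above from a mono-exponential velocity-time model. A minimal sketch of that computation (Python; air resistance is neglected here for brevity, whereas the published method includes it, and all numbers are invented):

    import numpy as np
    from scipy.optimize import curve_fit

    def velocity_model(t, vmax, tau):
        # Mono-exponential speed-time model: v(t) = vmax * (1 - exp(-t/tau))
        return vmax * (1.0 - np.exp(-t / tau))

    def sprint_profile(t, v, mass):
        (vmax, tau), _ = curve_fit(velocity_model, t, v, p0=(9.0, 1.0))
        F0 = mass * vmax / tau   # theoretical maximal horizontal force (N)
        V0 = vmax                # theoretical maximal velocity (m/s)
        Pmax = F0 * V0 / 4.0     # apex of the parabolic power-velocity curve (W)
        Sfv = -F0 / V0           # slope of the linear F-V profile
        return vmax, tau, F0, V0, Pmax, Sfv

    # Synthetic data: a 75 kg athlete with vmax = 9 m/s, tau = 1.2 s
    t = np.linspace(0.1, 6.0, 60)
    v = 9.0 * (1.0 - np.exp(-t / 1.2)) + np.random.normal(0.0, 0.05, t.size)
    print(sprint_profile(t, v, mass=75.0))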
3

Holoscopic 3D imaging and display technology : camera/processing/display

Swash, Mohammad Rafiq January 2013 (has links)
Holoscopic 3D imaging, or 'integral imaging', was first proposed by Lippmann in 1908. It has become an attractive technique for creating full-colour 3D scenes that exist in space. It uses a single camera aperture to record the spatial information of a real scene, together with a regularly spaced microlens array that mimics the fly's-eye principle, physically duplicating the light field; in this sense it is a true 3D imaging technique. While stereoscopic and multiview 3D imaging systems, which simulate the two-view mechanism of human vision, are widely available in the commercial market, holoscopic 3D imaging technology is still in the research phase. The aim of this research is to investigate the spatial resolution of holoscopic 3D imaging and display technology, covering the holoscopic 3D camera, processing and display. A smart microlens array architecture is proposed that doubles the horizontal spatial resolution of the holoscopic 3D camera by trading horizontal and vertical resolution; in particular, it overcomes the unbalanced pixel aspect ratio of unidirectional holoscopic 3D images. In addition, omnidirectional holoscopic 3D computer graphics rendering techniques are proposed that simplify rendering complexity and facilitate holoscopic 3D content generation. A holoscopic 3D image stitching algorithm is proposed that widens the overall viewing angle of the holoscopic 3D camera aperture, and pre-processing filters for holoscopic 3D images are proposed for spatial data alignment and 3D image data processing. A dynamic hyperlinker tool is also developed that makes interactive holoscopic 3D video content searchable and browsable. Finally, novel pixel mapping techniques are proposed that improve spatial resolution and visual definition in space. For instance, 4D-DSPM raises the 3D pixel density from 44 to 176 3D-PPI horizontally, achieving a spatial resolution of 1365 × 384 3D pixels where the traditional mapping yields 341 × 1536 3D pixels. In addition, distributed pixel mapping is proposed that improves the quality of the holoscopic 3D scene in space by creating RGB colour-channel elemental images.
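
The resolution figures quoted above amount to a factor-four trade between the two axes; a few lines make the arithmetic explicit (illustrative only, echoing the abstract's numbers):

    # The 4D-DSPM figures correspond to a 4-way trade of vertical for
    # horizontal 3D resolution (numbers taken from the abstract).
    factor = 4
    h, v = 341, 1536                  # traditional 3D-pixel resolution
    print(h * factor, v // factor)    # 1364 x 384, ~ the reported 1365 x 384
    print(44 * factor)                # 3D-PPI: 44 -> 176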
4

Precision analysis of 3D camera

Peppa, Maria Valasia January 2013 (has links)
Three-dimensional mapping is becoming an increasingly attractive product. Many devices, such as laser scanners or stereo systems, provide 3D scene reconstruction. A newer type of active sensor, the Time of Flight (ToF) camera, obtains direct depth observations (the third coordinate) at a high video rate, which is useful for interactive robotic and navigation applications. The high frame rate, combined with the low weight and compact design of ToF cameras, makes them an alternative 3D measuring technology. However, a deep understanding of the errors involved in ToF camera observations is essential in order to improve their accuracy and enhance ToF camera performance. This thesis addresses the depth error characteristics of the SR4000 ToF camera and proposes error models for compensating their impact. The work first investigates the error sources, their characteristics and how they influence the depth measurements; the practical part covers this analysis via experiments. Finally, simple methods are proposed to reduce the depth error so that the ToF camera can be used for high-accuracy applications. The overall result indicates that the depth acquired by the ToF camera deviates by several centimeters: specifically, the SR4000 shows errors up to 35 cm over the working range of 1-8 m. After error compensation, the depth offset fluctuates within 15 cm over the same working range. The error is smaller when the camera is set up close to the test field than when it is further away.
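
A compensation of the kind described can be as simple as fitting a range-dependent error curve on reference observations and subtracting it. A minimal sketch, with invented numbers rather than the SR4000 data:

    import numpy as np

    d_measured = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])  # m
    d_true = np.array([0.97, 1.96, 2.94, 3.93, 4.94, 5.95, 6.97, 7.99])

    # Model the systematic error as a cubic function of measured range.
    coeffs = np.polyfit(d_measured, d_measured - d_true, deg=3)

    def compensate(d):
        # Subtract the fitted error from a raw depth reading.
        return d - np.polyval(coeffs, d)

    print(compensate(d_measured) - d_true)  # residuals after compensation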
5

Investigating Simultaneous Localization and Mapping for an Automated Guided Vehicle

Manhed, Joar January 2019 (has links)
The aim of the thesis is to apply simultaneous localization and mapping (SLAM) to automated guided vehicles (AGVs) in a Robot Operating System (ROS) environment. Different sensor setups are used and evaluated. The SLAM applications used are the open-source solution Cartographer and the commercial SLAM built into Intel's T265 tracking camera. The sensor setups are evaluated based on how accurately the localization recovers the pose of the AGV in comparison with another positioning system acting as ground truth.
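
One common way to score such a localization against ground truth is the absolute trajectory error. A minimal sketch, assuming the two trajectories are already time-synchronized and expressed in the same frame (the thesis's exact evaluation may differ):

    import numpy as np

    def ate_rmse(estimated, ground_truth):
        # estimated, ground_truth: (N, 2) or (N, 3) arrays of positions
        err = np.linalg.norm(estimated - ground_truth, axis=1)
        return float(np.sqrt(np.mean(err ** 2)))

    est = np.array([[0.0, 0.0], [1.02, 0.01], [2.05, -0.02]])
    gt = np.array([[0.0, 0.0], [1.00, 0.00], [2.00, 0.00]])
    print(ate_rmse(est, gt))  # ~0.034 m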
6

A Generic Gesture Recognition Approach based on Visual Perception

Hu, Gang 22 June 2012 (has links)
Recent developments in hardware have allowed computer vision technologies to analyze complex human activities in real time. High-quality computer algorithms for human activity interpretation are required by many emerging applications, such as patient behavior analysis, surveillance, gesture-controlled video games, and other human-computer interface systems. Despite great efforts over the past decades, it is still a challenging task to provide a generic gesture recognition solution that can facilitate the development of different gesture-based applications. Human vision is able to perceive scenes continuously, recognize objects and grasp motion semantics effortlessly. Neuroscientists and psychologists have tried to understand and explain how exactly the visual system works, and theories and hypotheses on visual perception such as visual attention and the Gestalt laws of perceptual organization (PO) have been established, shedding some light on the fundamental mechanisms of human visual perception. In this dissertation, inspired by those visual attention models, we attempt to model and integrate important visual perception discoveries into a generic gesture recognition framework, the fundamental component of full-tier human activity understanding tasks. Our approach handles challenging tasks by: (1) organizing the complex visual information into a hierarchical structure including low-level feature, object (human body), and 4D spatiotemporal layers; (2) extracting bottom-up shape-based visual salience entities at each layer according to PO grouping laws; (3) building shape-based hierarchical salience maps in favor of high-level tasks for visual feature selection, by manipulating attention conditions with top-down knowledge about gestures and body structures; and (4) modeling gesture representations by a set of perceptual gesture salience entities (PGSEs) that provide qualitative gesture descriptions in 4D space for recognition tasks. Unlike other existing approaches, our gesture representation method encodes both extrinsic and intrinsic properties and reflects the way humans perceive the visual world, reducing the semantic gaps. Experimental results show our approach outperforms existing alternatives and has great potential in real-time applications.
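
The top-down modulation in step (3) follows a familiar attention-model pattern: bottom-up feature maps are combined under task-supplied weights. A generic sketch of that pattern, not the dissertation's exact formulation:

    import numpy as np

    def salience_map(feature_maps, top_down_weights):
        # Weighted sum of normalized bottom-up channels; weights encode
        # top-down task knowledge (e.g. favor motion for gestures).
        acc = None
        for name, fmap in feature_maps.items():
            w = top_down_weights.get(name, 1.0)
            norm = fmap / (fmap.max() + 1e-9)
            acc = w * norm if acc is None else acc + w * norm
        return acc / acc.max()

    maps = {"edges": np.random.rand(48, 64), "motion": np.random.rand(48, 64)}
    weights = {"motion": 2.0, "edges": 0.5}
    print(salience_map(maps, weights).shape)  # (48, 64)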
7

Study of the measurement uncertainty of point clouds created with the Matterport Pro2 3D camera during IR scanning under different lighting conditions / Studie av mätosäkerhet hos punktmoln skapade med Matterport Pro2 3D-kamera vid IR-skanning i olika ljusförhållanden

Belander West, Markus January 2020 (has links)
Due to the technological development within 3D scanning over the last decade, the use of point-cloud data has increased significantly. A plethora of methods and instruments are used to generate these point clouds, commonly photogrammetry, terrestrial laser scanning and mobile laser scanning. The newer mobile scanning systems usually rely on a SLAM algorithm so that the scanner can map its surroundings correctly while being moved, typically positioning itself with an IMU (inertial navigation) or with cameras and triangulation. With new algorithms and equipment, scanning systems keep improving, and more and more systems are developed, often for a specific area of use. The Matterport Pro2 3D camera tested in this project is such a system, developed mainly for visualising and creating digital models of housing through scanning, RGB-D and 360° images; the models are generated both as point clouds and as mesh models. The project examines how different lighting conditions affect the result when creating 3D models with the Pro2 camera. Measured distances between targets placed around the test room were used to check the point clouds for errors. In total, the room was scanned five times at illuminance levels varying from 1 to 800 lux, and the deviations between the measured distances and the point-cloud distances were compared to determine which point cloud deviated the least. The results indicate that an illuminance of about 30-60 lux gave the best result; no significant difference in measurement uncertainty could be seen between the other light levels. Furthermore, the deviations show signs of a considerable systematic error, which is not entirely unexpected and has been demonstrated in a previous study of the same camera. This means the camera needs to be calibrated before being used for scanning that requires low measurement uncertainty.
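
The evaluation idea is straightforward to express: for each scan (lighting level), compare inter-target distances from the point cloud with the reference distances. A minimal sketch with invented numbers (the thesis used five scans at 1-800 lux):

    import numpy as np

    reference = np.array([2.50, 3.10, 4.25, 5.40])  # reference distances, m

    scans = {  # illuminance (lux) -> distances measured in the point cloud, m
        1: np.array([2.54, 3.16, 4.32, 5.49]),
        45: np.array([2.51, 3.11, 4.27, 5.42]),
        800: np.array([2.53, 3.15, 4.30, 5.47]),
    }

    for lux, measured in scans.items():
        mad = np.mean(np.abs(measured - reference))
        print(f"{lux:>4} lux: mean abs deviation {mad * 100:.1f} cm")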
8

Color Fusion and Super-Resolution for Time-of-Flight Cameras

Zins, Matthieu January 2017 (has links)
The recent emergence of time-of-flight cameras has opened up new possibilities in the world of computer vision. These compact sensors, capable of recording the depth of a scene in real time, are very advantageous in many applications, such as scene or object reconstruction. This thesis first addresses the problem of fusing depth data with color images. A complete process to combine a time-of-flight camera with a color camera is described and its accuracy is evaluated. The results show that satisfactory precision is reached and that the calibration step is very important. The second part of the work applies super-resolution techniques to the time-of-flight camera in order to improve its low resolution. Different types of super-resolution algorithms exist, but this thesis focuses on the combination of multiple shifted depth maps. The proposed framework consists of two steps: registration and reconstruction. Different methods for each step are tested and compared according to the improvement achieved in terms of level of detail, sharpness and noise reduction. The results show that Lucas-Kanade performs best for the registration and that non-uniform interpolation gives the best results for the reconstruction. Finally, a few suggestions are made about future work and extensions of our solutions.
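
The reconstruction step named above, fusing shifted low-resolution depth maps by non-uniform interpolation, can be sketched in a few lines. Here the sub-pixel shifts are assumed known; in practice a registration method such as Lucas-Kanade would estimate them (an invented example, not the thesis code):

    import numpy as np
    from scipy.interpolate import griddata

    def fuse_depth_maps(lowres_maps, shifts, scale=2):
        h, w = lowres_maps[0].shape
        pts, vals = [], []
        for depth, (dy, dx) in zip(lowres_maps, shifts):
            # Place each low-res sample at its shifted high-res location.
            ys, xs = np.mgrid[0:h, 0:w]
            pts.append(np.column_stack([(ys + dy).ravel() * scale,
                                        (xs + dx).ravel() * scale]))
            vals.append(depth.ravel())
        # Non-uniform interpolation of the scattered samples onto a fine grid.
        grid_y, grid_x = np.mgrid[0:h * scale, 0:w * scale]
        return griddata(np.vstack(pts), np.concatenate(vals),
                        (grid_y, grid_x), method="linear")

    maps = [np.random.rand(20, 20) for _ in range(4)]
    shifts = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
    print(fuse_depth_maps(maps, shifts).shape)  # (40, 40)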
9

Human Motion Tracking Using 3D Camera / Följning av människa med 3D-kamera

Karlsson, Daniel January 2010 (has links)
The interest in video surveillance has increased in recent years. Cameras are now installed in e.g. stores, arenas and prisons, and the video data is analyzed to detect abnormal or undesirable events such as thefts, fights and escapes. At the Informatics Unit at the division of Information Systems, FOI in Linköping, algorithms are developed for automatic detection and tracking of humans in video data. This thesis deals with the target tracking problem when a 3D camera is used. A 3D camera creates images whose pixels represent the ranges to the scene, and in recent years new camera systems have emerged where the range images are delivered at up to video rate (30 Hz). One goal of the thesis is to determine how range data affects the frequency with which the measurement update part of the tracking algorithm must be performed. The performance of the 2D tracker and the 3D tracker is evaluated with both simulated data and measured data from a 3D camera. It is concluded that the errors in the estimated image coordinates are independent of whether range data is available or not; the small angle and the relatively large distance to the target explain the good performance of the 2D tracker. The 3D tracker, however, shows superior tracking ability (much smaller tracking error) if the comparison is made in world coordinates.
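
In a generic linear Kalman filter, the measurement update in question is the same step whether the measurement carries two components (image coordinates) or three (image coordinates plus range). A minimal sketch of that pattern, with made-up numbers and not the thesis's actual models:

    import numpy as np

    def kf_update(x, P, z, H, R):
        # x: state mean, P: covariance, z: measurement,
        # H: measurement model, R: measurement noise covariance
        y = z - H @ x                       # innovation
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        return x + K @ y, (np.eye(len(x)) - K @ H) @ P

    x = np.zeros(3)      # illustrative state: (u, v, range)
    P = np.eye(3)
    H2 = np.eye(3)[:2]   # 2D tracker: image coordinates only
    H3 = np.eye(3)       # 3D tracker: image coordinates plus range
    x2, _ = kf_update(x, P, np.array([1.0, 2.0]), H2, 0.1 * np.eye(2))
    x3, _ = kf_update(x, P, np.array([1.0, 2.0, 5.0]), H3, 0.1 * np.eye(3))
    print(x2, x3)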
