121

Precision analysis of 3D camera

Peppa, Maria Valasia January 2013 (has links)
Three-dimensional mapping is becoming an increasingly attractive product. Many devices, such as laser scanners and stereo systems, provide 3D scene reconstruction. A newer type of active sensor, the Time of Flight (ToF) camera, obtains direct depth observations (the third coordinate) at a high video rate, which is useful for interactive robotics and navigation applications. The high frame rate, combined with the low weight and compact design of ToF cameras, makes them an alternative 3D measuring technology. However, a deep understanding of the errors involved in ToF camera observations is essential in order to improve their accuracy and enhance camera performance. This thesis addresses the depth error characteristics of the SR4000 ToF camera and proposes error models for compensating for their impact. The thesis first investigates the error sources, their characteristics, and how they influence the depth measurements; the practical part then examines this analysis through experiments. Finally, the work proposes simple methods to reduce the depth error so that the ToF camera can be used for high-accuracy applications. Overall, the results indicate that the depth acquired by the ToF camera deviates by several centimetres: the SR4000 shows an error of about 35 cm over the working range of 1-8 m. After error compensation the depth offset fluctuates within 15 cm over the same working range. The error is smaller when the camera is set up close to the test field than when it is further away.
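As a rough illustration of this kind of depth error compensation, the sketch below fits a per-range polynomial correction against reference depths and applies it to a depth map. The coefficients, array sizes, and synthetic offsets are placeholders for illustration, not the SR4000 error model derived in the thesis.

```python
import numpy as np

# Hypothetical per-range depth correction for a ToF camera. The synthetic
# offset below is a placeholder; in practice the correction would be fitted
# from measurements of a known test field at several ranges.

def fit_depth_correction(measured_depth_m, true_depth_m, degree=3):
    """Fit a polynomial mapping measured ToF depth to corrected depth."""
    coeffs = np.polyfit(measured_depth_m, true_depth_m, degree)
    return np.poly1d(coeffs)

def apply_depth_correction(depth_map_m, correction):
    """Apply the fitted correction to every pixel of a depth map."""
    return correction(depth_map_m)

# Usage with synthetic calibration pairs (illustrative values only)
measured = np.linspace(1.0, 8.0, 30)
true = measured - 0.02 * measured**2 + 0.05   # made-up systematic offset
correction = fit_depth_correction(measured, true)
corrected = apply_depth_correction(np.full((144, 176), 4.2), correction)
```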
122

HOT CAMERA DESIGN FOR A 1000 HOUR VENUSIAN SURFACE LANDER

Martin, Keith R. 29 January 2019 (has links)
No description available.
123

Automatic Camera Calibration Techniques for Collaborative Vehicular Applications

Tummala, Gopi Krishna 19 June 2019 (has links)
No description available.
124

High-speed Imaging with Less Data

Baldwin, Raymond Wesley 09 August 2021 (has links)
No description available.
125

Tower-Tracking Heliostat Array

Masters, Joel T 01 March 2011 (has links) (PDF)
This thesis presents a method of tracking and correcting for the swaying of a central receiver tower in concentrated solar power plants. The method uses a camera with image processing algorithms to detect movement of the center of the tower. A prototype was constructed using a CMOS camera connected to a microcontroller that controls the movements of three surrounding heliostats. The prototype uses blob-tracking algorithms to detect and correct for movements of a colored model target. The model was able to detect movements of the tower with an average error of 0.32 degrees, and was able to orient the surrounding heliostats to within 1.2 degrees of accuracy indoors and 2.6 degrees outdoors.
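As a rough sketch of the pixel-to-angle conversion behind such a correction, the snippet below turns the tracked blob centroid's displacement into a tower sway angle and a heliostat correction. The field-of-view and resolution values are assumptions, not the prototype's actual camera parameters.

```python
import numpy as np

# Minimal sketch: track the colored target's blob centroid, convert its
# pixel displacement to an angle, and re-aim each heliostat by half that
# angle (reflection geometry: the mirror normal rotates by half the
# change in target direction).

H_FOV_DEG = 60.0        # assumed horizontal field of view
IMAGE_WIDTH_PX = 640    # assumed sensor width in pixels
DEG_PER_PIXEL = H_FOV_DEG / IMAGE_WIDTH_PX

def tower_sway_deg(reference_centroid_x, current_centroid_x):
    """Angular displacement of the tower target as seen by the camera."""
    return (current_centroid_x - reference_centroid_x) * DEG_PER_PIXEL

def heliostat_correction_deg(sway_deg):
    """A mirror must rotate by half the change in the target direction."""
    return 0.5 * sway_deg

sway = tower_sway_deg(reference_centroid_x=320.0, current_centroid_x=326.4)
print(f"sway {sway:.2f} deg -> mirror correction {heliostat_correction_deg(sway):.2f} deg")
```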
126

Layered Sensing Using Master-Slave Cameras

McLemore, Donald Rodney, Jr. 01 October 2009 (has links)
No description available.
127

Measurement of range of motion of human finger joints, using a computer vision system

Ben-Naser, Abdusalam January 2011 (has links)
Assessment of finger range of motion (ROM) is often required for monitoring the effectiveness of rehabilitative treatments and for evaluating patients' functional impairment. Several devices are used to measure this motion, such as wire tracing, tracing onto paper, and mechanical and electronic goniometry. These devices are quite cheap, apart from electronic goniometry; however, their drawbacks are a lack of accuracy and the time-consuming nature of the measurement process. The work described in this thesis considers the design, implementation and validation of a new medical measurement system for evaluating the range of motion of the human finger joints in place of the current measurement tools. The proposed system is a non-contact measurement device based on computer vision technology and has many advantages over existing devices: it achieves better accuracy, can be operated by a semi-skilled person, and saves time for the evaluator. The computer vision system in this study consists of CCD cameras to capture the images, a frame grabber to convert the analogue signals from the cameras into digital signals that can be manipulated by a computer, an ultraviolet (UV) light to illuminate the measurement space, software to process the images and perform the required computation, and a darkened enclosure to accommodate the cameras and UV light and to shield the working area from undesirable ambient light. Two techniques were used to calibrate the cameras: Direct Linear Transformation and Tsai's method. A calibration piece suited to this application was designed and manufactured, and a steel hand model was used to measure the finger joint angles. The average error in measuring the finger angles with this system was around 1 degree, compared with 5 degrees for the existing techniques.
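As an illustration of how a joint angle can be recovered once 3D point positions are available from the calibrated cameras, the sketch below computes the angle between the two segments meeting at a joint. This is a generic geometric sketch, not the thesis' exact processing pipeline, and the point names are hypothetical.

```python
import numpy as np

# Illustrative computation of a finger joint angle from three 3-D points
# reconstructed by a calibrated camera pair (e.g., markers on the proximal
# segment, at the joint, and on the distal segment).

def joint_angle_deg(p_proximal, p_joint, p_distal):
    """Angle between the two segments meeting at the joint, in degrees."""
    u = np.asarray(p_proximal, float) - np.asarray(p_joint, float)
    v = np.asarray(p_distal, float) - np.asarray(p_joint, float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Example: a joint flexed to roughly 90 degrees
print(joint_angle_deg([0, 0, 30], [0, 0, 0], [28, 0, 1]))
```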
128

Object Detection and Tracking Using Uncalibrated Cameras

Amara, Ashwini 14 May 2010 (has links)
This thesis considers the problem of tracking an object in world coordinates using measurements obtained from multiple uncalibrated cameras. A general approach to tracking the location of a target involves several phases, including calibrating the cameras, detecting the object's feature points over frames, tracking the object over frames, and analyzing the object's motion and behavior. The approach presented here contains two stages. First, the problem of camera calibration using a calibration object is studied; this stage retrieves the camera parameters from the known 3D locations of ground data and their corresponding image coordinates. The second stage develops an automated system to estimate the trajectory of the object in 3D from image sequences, achieved by combining, adapting and integrating several state-of-the-art algorithms. Synthetic data based on a nearly constant velocity object motion model is used to evaluate the performance of the camera calibration and state estimation algorithms.
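As a sketch of the calibration stage described above, the snippet below shows the classic Direct Linear Transformation formulation that recovers a 3x4 projection matrix from known 3D points on a calibration object and their image coordinates. It is the textbook linear method under the stated assumptions, not necessarily the exact algorithm used in the thesis.

```python
import numpy as np

# DLT-style calibration: each 3D-2D correspondence contributes two linear
# equations in the 12 entries of the projection matrix P; the solution is
# the last right singular vector of the stacked system.

def estimate_projection_matrix(points_3d, points_2d):
    """points_3d: (N, 3), points_2d: (N, 2), N >= 6. Returns a 3x4 P."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, float)
    _, _, vt = np.linalg.svd(A)      # null-space direction of A
    return vt[-1].reshape(3, 4)
```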
129

AUTOMATED SYSTEM FOR IDENTIFYING USABLE SENSORS IN A LARGE SCALE SENSOR NETWORK FOR COMPUTER VISION

Aniesh Chawla (6630980) 11 June 2019 (has links)
Numerous organizations around the world deploy sensor networks, especially visual sensor networks, for applications such as monitoring traffic, security, and emergencies. With advances in computer vision technology, the potential applications of these sensor networks have expanded, leading to increased demand for the deployment of large-scale sensor networks.
Sensors in a large network differ in location, position, hardware, and other respects. These differences lead to varying usefulness, as the sensors provide different quality of information. As an example, consider the cameras deployed by the Department of Transportation (DOT): we want to know whether the same traffic cameras could also be used to monitor the damage from a hurricane.
Presently, significant manual effort is required to identify useful sensors for different applications; no automated system exists that determines the usefulness of sensors for a given application. Previous work on visual sensor networks focuses on the dependability of sensors based only on infrastructural and system issues such as network congestion, battery failures, and hardware failures, and does not consider the quality of information from the sensor network. In this paper, we present an automated system that identifies the most useful sensors in a network for a given application. We evaluate our system on 2,500 real-time live sensors from four cities for traffic monitoring and people counting applications, and compare the result of our automated system with a manual score for each camera.
The results suggest that the proposed system reliably finds useful sensors and that its output matches the manual scoring system. They also show that a camera network deployed for one application can be useful for another application.
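As a hedged sketch of how such a ranking might work, the snippet below scores each camera by the average number of application-relevant detections per sampled frame and ranks the network. The scoring rule and identifiers are assumptions for illustration, not the system's actual metric.

```python
from statistics import mean

# Hypothetical usefulness ranking: sample frames from each camera, run the
# target application's detector offline, and rank cameras by how much
# usable signal they provide for that application.

def score_camera(detection_counts):
    """Average number of relevant detections per sampled frame."""
    return mean(detection_counts) if detection_counts else 0.0

def rank_cameras(detections_by_camera, top_k=10):
    """detections_by_camera: {camera_id: [detections per sampled frame]}."""
    scored = {cam: score_camera(counts)
              for cam, counts in detections_by_camera.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Usage with made-up camera IDs and counts
ranked = rank_cameras({"cam_17": [3, 5, 4], "cam_02": [0, 1, 0]}, top_k=2)
```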
130

The Application of Index Based, Region Segmentation, and Deep Learning Approaches to Sensor Fusion for Vegetation Detection

Stone, David L. 01 January 2019 (has links)
This thesis investigates the application of index-based, region segmentation, and deep learning methods to the sensor fusion of omnidirectional (O-D) infrared (IR) sensors, Kinect sensors, and O-D vision sensors to increase the level of intelligent perception for unmanned robotic platforms. The goals of this work are first to provide a more robust calibration approach and improve the calibration of low-resolution, noisy IR O-D cameras, and then to explore the best approach to sensor fusion for vegetation detection. We examined index-based, region segmentation, and deep learning methods and compared them, with the goal of a significant reduction in false positives while maintaining reasonable vegetation detection. The results are as follows. Direct Spherical Calibration of the IR camera provided more consistent and robust calibration board capture and produced the best overall calibration results, with sub-pixel accuracy. The best approach to sensor fusion for vegetation detection was the deep learning approach; the three methods are detailed in the following chapters, with the results summarized here. The modified Normalized Difference Vegetation Index approach achieved 86.74% recognition with 32.5% false positives, with peaks to 80%. Thermal Region Fusion (TRF) achieved a lower recognition rate of 75.16% but reduced false positives to 11.75% (a 64% reduction). Our Deep Learning Fusion Network (DeepFuseNet) showed the best results, with a significant (92%) reduction in false positives compared to the modified normalized difference vegetation index approach; recognition was 95.6% with 2% false positives.
Current approaches focus primarily on O-D color vision for localization, mapping, and tracking, and do not adequately address the application of these sensors to vegetation detection. We demonstrate the contrast between current approaches and our deep sensor fusion (DeepFuseNet) for vegetation detection. The combination of O-D IR and O-D color vision, coupled with deep learning for the extraction of vegetation material type, has great potential for robot perception. This thesis looks at two architectures: 1) autoencoder feature extractors feeding a deep Convolutional Neural Network (CNN) fusion network (DeepFuseNet), and 2) bottleneck CNN feature extractors feeding the same deep CNN fusion network, for the fusion of O-D IR and O-D visual sensors. We show that the vegetation recognition rate and the number of false detects inherent in classical indices-based spectral decomposition are greatly improved using our DeepFuseNet architecture.
We first investigate the calibration of the omnidirectional infrared (IR) camera for intelligent perception applications. The edge boundaries in low-resolution omnidirectional (O-D) IR images are not as sharp as those from color vision cameras, and as a result the standard calibration methods were harder to use and less accurate with the low definition of the omnidirectional IR camera. To more fully address omnidirectional IR camera calibration, we propose a new calibration grid center coordinates control point discovery methodology and a Direct Spherical Calibration (DSC) approach for a more robust and accurate calibration. DSC addresses the limitations of the existing methods by using the spherical coordinates of the centroid of the calibration board to directly triangulate the location of the camera center and iteratively solve for the camera parameters. We compare DSC to three baseline visual calibration methodologies and augment them with additional output of the spherical results for comparison. We also examine the optimum number of calibration boards, using an evolutionary algorithm and Pareto optimization to find the best combination of accuracy, methodology, and number of calibration boards. The benefits of DSC are more efficient calibration board geometry selection and better accuracy than the three baseline visual calibration methodologies.
In the context of vegetation detection, the fusion of omnidirectional (O-D) infrared (IR) and color vision sensors may increase the level of vegetation perception for unmanned robotic platforms. A literature search found no significant research in our area of interest: the fusion of O-D IR and O-D color vision sensors for the extraction of feature material type has not been adequately addressed. We augment indices-based spectral decomposition with IR region-based spectral decomposition to address the number of false detects inherent in indices-based spectral decomposition alone. Our work shows that fusing the Normalized Difference Vegetation Index (NDVI) from the O-D color camera with the thresholded IR signature region associated with vegetation minimizes the number of false detects seen with NDVI alone. The contribution of this work is the demonstration of two new techniques, including the Thresholded Region Fusion (TRF) technique for the fusion of O-D IR and O-D color; we also examine the Kinect vision sensor fused with the O-D IR camera. Our experimental validation demonstrates a 64% reduction in false detects with our method compared to classical indices-based detection.
Finally, we compare our DeepFuseNet results with our previous work using NDVI and IR region-based spectral fusion. The current work shows that fusing the O-D IR and O-D visual streams with our DeepFuseNet deep learning approach outperforms the previous NDVI fused with far-infrared region segmentation; our experimental validation demonstrates a 92% reduction in false detects compared to classical indices-based detection. This work contributes a new technique for the fusion of O-D vision and O-D IR sensors using two deep CNN feature extractors feeding into a fully connected CNN fusion network (DeepFuseNet).
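As an illustration of the index-plus-thermal fusion idea, the sketch below computes NDVI per pixel and intersects the resulting mask with a thermal-range mask, in the spirit of the TRF approach. The threshold values and band names are assumptions for illustration, not the parameters fitted in the thesis.

```python
import numpy as np

# Compute NDVI from co-registered red and near-infrared bands, then keep
# only pixels whose thermal signature also falls inside an assumed
# vegetation band, suppressing NDVI-only false positives.

def ndvi(nir, red, eps=1e-6):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def fused_vegetation_mask(nir, red, thermal, ndvi_thresh=0.3,
                          thermal_low=280.0, thermal_high=305.0):
    """Intersect an NDVI mask with a thermal-range mask (values assumed)."""
    ndvi_mask = ndvi(nir, red) > ndvi_thresh
    thermal_mask = (thermal > thermal_low) & (thermal < thermal_high)
    return ndvi_mask & thermal_mask
```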
