131

The Application of Index Based, Region Segmentation, and Deep Learning Approaches to Sensor Fusion for Vegetation Detection

Stone, David L. 01 January 2019 (has links)
This thesis investigates the application of index-based, region segmentation, and deep learning methods to the sensor fusion of omnidirectional (O-D) infrared (IR) sensors, Kinect sensors, and O-D vision sensors to increase the level of intelligent perception for unmanned robotic platforms. The goal of this work was first to provide a more robust calibration approach that improves the calibration of low-resolution, noisy O-D IR cameras, and then to explore the best approach to sensor fusion for vegetation detection. We compared index-based, region segmentation, and deep learning methods, aiming for a significant reduction in false positives while maintaining reasonable vegetation detection. The results are as follows. Direct Spherical Calibration of the IR camera provided more consistent and robust calibration board capture and gave the best overall calibration results, with sub-pixel accuracy. The best approach to sensor fusion for vegetation detection was the deep learning approach; the three methods are detailed in the following chapters, with the results summarized here. The modified Normalized Difference Vegetation Index approach achieved 86.74% recognition with 32.5% false positives (with peaks to 80%). Thermal Region Fusion (TRF) achieved a lower recognition rate of 75.16% but reduced false positives to 11.75% (a 64% reduction). Our Deep Learning Fusion Network (DeepFuseNet) showed the best results, with a significant (92%) reduction in false positives compared to the modified NDVI approach, at 95.6% recognition with 2% false positives. Current approaches are primarily focused on O-D color vision for localization, mapping, and tracking and do not adequately address the application of these sensors to vegetation detection. We demonstrate the contrast between current approaches and our deep sensor fusion (DeepFuseNet) for vegetation detection. The combination of O-D IR and O-D color vision, coupled with deep learning for the extraction of vegetation material type, has great potential for robot perception. This thesis examines two architectures: 1) autoencoder feature extractors feeding a deep convolutional neural network (CNN) fusion network (DeepFuseNet), and 2) bottleneck CNN feature extractors feeding a deep CNN fusion network (DeepFuseNet), both for the fusion of O-D IR and O-D visual sensors. We show that the vegetation recognition rate and the number of false detects inherent in classical index-based spectral decomposition are greatly improved using our DeepFuseNet architecture. We first investigate the calibration of the omnidirectional IR camera for intelligent perception applications. Edge boundaries in low-resolution O-D IR images are not as sharp as in color vision cameras, so standard calibration methods were harder to use and less accurate with the low definition of the omnidirectional IR camera. To address omnidirectional IR camera calibration more fully, we propose a new calibration-grid center-coordinate control point discovery methodology and a Direct Spherical Calibration (DSC) approach for a more robust and accurate method of calibration.
DSC addresses the limitations of existing methods by using the spherical coordinates of the centroid of the calibration board to directly triangulate the location of the camera center and iteratively solve for the camera parameters. We compare DSC to three baseline visual calibration methodologies and augment them with additional output of the spherical results for comparison. We also determine the optimum number of calibration boards using an evolutionary algorithm and Pareto optimization to find the best combination of accuracy, methodology, and number of calibration boards. The benefits of DSC are more efficient calibration board geometry selection and better accuracy than the three baseline visual calibration methodologies. In the context of vegetation detection, the fusion of O-D IR and color vision sensors may increase the level of vegetation perception for unmanned robotic platforms. A literature search found no significant research in our area of interest: the fusion of O-D IR and O-D color vision sensors for the extraction of feature material type has not been adequately addressed. We augment index-based spectral decomposition with IR region-based spectral decomposition to address the number of false detects inherent in index-based spectral decomposition alone. Our work shows that fusing the Normalized Difference Vegetation Index (NDVI) from the O-D color camera with the thresholded IR signature region associated with the vegetation minimizes the number of false detects seen with NDVI alone. The contribution of this work is the demonstration of two new techniques: the Thresholded Region Fusion (TRF) technique for the fusion of O-D IR and O-D color, and the fusion of the Kinect vision sensor with the O-D IR camera. Our experimental validation demonstrates a 64% reduction in false detects with our method compared to classical index-based detection. Finally, we compare our DeepFuseNet results with our previous work on NDVI and IR region-based spectral fusion. This work shows that the fusion of the O-D IR and O-D visual streams using our DeepFuseNet deep learning approach outperforms the previous NDVI fused with far-infrared region segmentation. Our experimental validation demonstrates a 92% reduction in false detects with our method compared to classical index-based detection. This work contributes a new technique for the fusion of O-D vision and O-D IR sensors using two deep CNN feature extractors feeding into a fully connected CNN network (DeepFuseNet).
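As an illustration of the index-based baseline that the fusion methods above are compared against, here is a minimal sketch of NDVI computation gated by a thresholded IR region; the threshold values and array inputs are illustrative assumptions, not the thesis's parameters or code.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)  # eps guards against divide-by-zero

def fused_vegetation_mask(nir, red, thermal, ndvi_thresh=0.3, t_low=0.2, t_high=0.6):
    """Index-based detection gated by a thresholded IR signature region, in the
    spirit of thresholded-region fusion: pixels must both score high on NDVI
    and fall inside the expected thermal band for vegetation."""
    veg = ndvi(nir, red) > ndvi_thresh
    ir_region = (thermal > t_low) & (thermal < t_high)
    return veg & ir_region  # the IR gate suppresses NDVI false positives
```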
132

Multi-camera Video Surveillance: Detection, Occlusion Handling, Tracking And Event Recognition

Akman, Oytun 01 August 2007 (has links) (PDF)
In this thesis, novel methods for background modeling, tracking, occlusion handling, and event recognition in multi-camera configurations are presented. As the initial step, the building blocks of typical single-camera surveillance systems, namely moving object detection, tracking, and event recognition, are discussed, and various widely accepted methods for these building blocks are tested to assess their performance. Next, for multi-camera surveillance systems, background modeling, occlusion handling, tracking, and event recognition for two-camera configurations are examined. Various foreground detection methods are discussed, and a background modeling algorithm based on a multivariate mixture of Gaussians is proposed. During the occlusion handling studies, a novel method for segmenting occluded objects is proposed, in which a top view of the scene, free of occlusions, is generated from multi-view data. The experiments indicate that the occlusion handling algorithm operates successfully on various test data. A novel tracking method using multi-camera configurations is also proposed. The main idea of employing multiple cameras is to fuse the 2D information coming from the cameras into 3D information for better occlusion handling and seamless tracking. The proposed algorithm is tested on different data sets and shows clear improvement over a single-camera tracker. Finally, multi-camera trajectories of objects are classified by the proposed multi-camera event recognition method, in which concatenated trajectories from different views are used to train Gaussian mixture hidden Markov models. The experimental results indicate an improvement in multi-camera event recognition performance over event recognition using a single camera.
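For reference, a minimal sketch of mixture-of-Gaussians background subtraction using OpenCV's stock MOG2 implementation is shown below; the thesis proposes its own multivariate mixture-of-Gaussians model, so this is a standard stand-in, and the file name and parameters are assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.avi")  # hypothetical input sequence
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=16, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # per-pixel foreground/background decision
    # clean up noise, then extract candidate moving objects as contours
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    objects = [c for c in contours if cv2.contourArea(c) > 100]
cap.release()
```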
133

Structure-from-motion For Systems With Perspective And Omnidirectional Cameras

Bastanlar, Yalin 01 July 2009 (has links) (PDF)
In this thesis, a pipeline for structure-from-motion with mixed camera types is described, and methods are proposed to make the steps of this pipeline effective and automatic. These steps can be summarized as calibration, feature point matching, epipolar geometry and pose estimation, triangulation, and bundle adjustment. We worked with catadioptric omnidirectional and perspective cameras and employed the sphere camera model, which encompasses single-viewpoint catadioptric systems as well as perspective cameras. For calibration of the sphere camera model, a new technique is proposed that has the advantage of linear and automatic parameter initialization. The projection of 3D points onto a catadioptric image is represented linearly with a 6x10 projection matrix using lifted coordinates. This projection matrix is computed from an adequate number of 3D-2D correspondences and decomposed to obtain the intrinsic and extrinsic parameters; a non-linear optimization is then performed to refine the parameters. For feature point matching between hybrid camera images, the scale invariant feature transform (SIFT) is employed, and a method is proposed to improve the SIFT matching output. With the proposed approach, omnidirectional-perspective matching performance increases significantly, enabling automatic point matching. In addition, the use of virtual camera plane (VCP) images is evaluated; these are perspective images produced by unwarping the corresponding region of the omnidirectional image. The hybrid epipolar geometry is estimated using random sample consensus (RANSAC), and alternative pose estimation methods are evaluated. A weighting strategy for iterative linear triangulation, which improves structure estimation accuracy, is proposed. Finally, multi-view structure-from-motion (SfM) is performed by adding views to the structure one by one. To refine the structure estimated with multiple views, the sparse bundle adjustment method is employed, modified to use the sphere camera model. Experiments on simulated and real images are conducted for the proposed approaches. The results of hybrid multi-view SfM with real images are also demonstrated, emphasizing the cases where it is advantageous to use omnidirectional cameras together with perspective cameras.
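To make the lifted-coordinate formulation concrete, the sketch below builds the second-order (Veronese) lifting and estimates the 6x10 projection matrix linearly from 3D-2D correspondences via SVD; this is a generic reconstruction of the stated idea under our own assumptions, not the thesis's implementation.

```python
import numpy as np

def lift(v):
    """Second-order lifting: all degree-2 monomials v[i]*v[j], i <= j.
    A homogeneous image point (3-vector) lifts to 6 coordinates; a
    homogeneous 3D point (4-vector) lifts to 10, hence the 6x10 matrix."""
    n = len(v)
    return np.array([v[i] * v[j] for i in range(n) for j in range(i, n)])

def estimate_projection(img_pts, world_pts):
    """Linear estimate of the 6x10 matrix P with lifted(x) ~ P @ lifted(X).
    Each correspondence yields pairwise proportionality constraints; P is
    the null vector of the stacked system (at least 12 points needed)."""
    rows = []
    for x, X in zip(img_pts, world_pts):
        xh, Xh = lift(x), lift(X)            # 6- and 10-vectors
        for i in range(6):
            for j in range(i + 1, 6):
                r = np.zeros((6, 10))
                r[j] = xh[i] * Xh            # xh[i]*(P@Xh)[j]
                r[i] = -xh[j] * Xh           #  - xh[j]*(P@Xh)[i] = 0
                rows.append(r.ravel())
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    return Vt[-1].reshape(6, 10)             # defined up to scale
```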
134

3D Reconstruction in Scanning Electron Microscope: from image acquisition to dense point cloud

Kudryavtsev, Andrey 31 October 2017 (has links)
The goal of this work is to obtain a 3D model of an object from multiple views acquired with a Scanning Electron Microscope (SEM). For this, the technique of 3D reconstruction, a well-known application of computer vision, is used. However, due to the specificities of image formation in the SEM, and at the microscale in general, existing techniques are not applicable to SEM images. The main reasons for this are the parallel projection and the problems of calibrating the SEM as a camera. In this work we therefore developed a new algorithm that achieves 3D reconstruction in the SEM while taking these issues into account. Moreover, as the reconstruction is obtained through camera autocalibration, no calibration object is required. The final output of the presented techniques is a dense point cloud, which may contain millions of points, corresponding to the surface of the object.
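The key obstacle named above, parallel projection, can be seen in a minimal sketch contrasting the affine model that SEM imaging approximates with the perspective model that standard SfM assumes (illustrative only; R, t, and the magnification are placeholder parameters):

```python
import numpy as np

def parallel_project(X, R, t, m=1.0):
    """Parallel (affine) projection, as in SEM imaging: after the rigid
    motion, depth is simply discarded, so images carry no perspective
    depth cue and standard SfM formulations break down."""
    Xc = X @ R.T + t           # Nx3 points in the camera frame
    return m * Xc[:, :2]       # keep (x, y); z never enters

def perspective_project(X, R, t, f=1.0):
    """Pinhole projection, assumed by classical SfM pipelines."""
    Xc = X @ R.T + t
    return f * Xc[:, :2] / Xc[:, 2:3]  # division by depth: the cue SEM lacks
```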
135

Fusion of Stationary Monocular and Stereo Camera Technologies for Traffic Parameters Estimation

Ali, Syed Musharaf 07 March 2017 (has links)
Modern intelligent transportation systems (ITS) rely on reliable and accurate estimates of traffic parameters. Travel speed, traffic flow, and traffic state classification are the main parameters of interest. These parameters can be estimated through efficient vision-based algorithms and appropriate camera sensor technology. With advances in camera technologies and increasing computing power, the use of monocular vision, stereo vision, and camera sensor fusion technologies has been an active research area in the field of ITS. In this thesis, we investigated stationary monocular and stereo camera technology for traffic parameter estimation. Stationary camera sensors provide large spatial-temporal information about the road section with relatively low installation costs. Two novel scientific contributions for vehicle detection and recognition are proposed. The first is the use of stationary stereo camera technology, and the second is the fusion of monocular and stereo camera technologies. A vision-based ITS consists of several hardware and software components, and the overall performance of such a system depends not only on these individual modules but also on their interaction. Therefore, a systematic approach considering all essential modules was chosen instead of focusing on one element of the complete system chain. This led to detailed investigations of several core algorithms, e.g. background subtraction, histogram-based fingerprints, and data fusion methods. From experimental results on standard datasets, we concluded that the proposed fusion-based approach, combining monocular and stereo camera technologies, performs better than either technology alone for vehicle detection and vehicle recognition. Moreover, this research work has the potential to provide a low-cost vision-based solution for online traffic monitoring systems in urban and rural environments.
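As a hedged illustration of decision-level fusion of the two camera technologies (not the thesis's actual algorithm), the sketch below keeps detections that agree across the monocular and stereo streams; the IoU threshold and confidence weighting are assumptions.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float, float]  # x1, y1, x2, y2, confidence

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def fuse(mono: List[Box], stereo: List[Box], thresh: float = 0.5) -> List[Box]:
    """Keep monocular detections confirmed by a stereo detection, averaging
    confidences; unconfirmed detections are dropped as likely false alarms."""
    fused = []
    for m in mono:
        match = max(stereo, key=lambda s: iou(m, s), default=None)
        if match is not None and iou(m, match) >= thresh:
            fused.append((*m[:4], (m[4] + match[4]) / 2))
    return fused
```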
136

The development of a Hardware-in-the-Loop test setup for event-based vision near space objects

van den Boogaard, Rik January 2023 (has links)
The purpose of this thesis work was to develop a Hardware-in-the-Loop imaging setup that enables experimenting with an event-based and a frame-based camera under simulated space conditions. The generated data sets were used to compare visual navigation algorithms, specifically an event-based and a frame-based feature detection and tracking algorithm. The comparative analyses of the feature detection and tracking algorithms were used to gain insight into the feasibility of event-based vision near space objects. Event-based cameras differ from frame-based cameras in that they produce an asynchronous and independent stream of events caused by brightness changes at each pixel instead of capturing images at a fixed rate. The setup design is based on a theoretical framework incorporating optical calculations. These calculations indicated that the asteroid model needed to be scaled down by a factor of 3192 to fit inside the camera's depth of view, resulting in a scaled Bennu asteroid 16.44 centimeters in size. Three experiments were conducted with the cameras under test to generate data sets. Applying a feature detection and tracking algorithm to both camera data sets revealed that the frame-based camera algorithm outperforms the event-based camera algorithm in the absolute number of tracked features, computation time, and robustness across various scenarios. However, when considering the percentage of tracked features relative to the total detected features, the event-based algorithm tracks a significantly higher percentage of features for at least one key frame than the frame-based algorithm. The comparative analysis of the experiments performed in space-simulated conditions during this project showed that the feasibility of an event-based camera using solely events is low compared to the frame-based camera.
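The quoted scale factor can be sanity-checked against Bennu's mean diameter of roughly 525 m (an assumption on our part; the abstract does not restate the reference size):

```latex
\[
d_{\text{model}} = \frac{d_{\text{Bennu}}}{s}
                 \approx \frac{525\,\mathrm{m}}{3192}
                 \approx 0.164\,\mathrm{m} \approx 16.4\,\mathrm{cm},
\]
```

which is consistent with the 16.44 cm model size reported above.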
137

Cybersecurity Evaluation of an IP Camera

Stroeven, Tova, Söderman, Felix January 2022 (has links)
The prevalence of affordable internet-connected cameras has provided many with new possibilities, including keeping a watchful eye on property and family members from afar. In order to avoid serious breaches of privacy, it is necessary to consider whether these devices are secure. This project aims to evaluate the cybersecurity of one such device, an IP camera from Biltema. This was done by performing an extensive analysis of the camera, determining possible vulnerabilities, and performing penetration tests based on the identified vulnerabilities. The tests included capturing and analyzing network traffic, attempting to crack the camera credentials, and attempting to disable the camera completely. The conclusions were that the camera should not be used for any security applications and is unsuitable in situations where one's privacy is important. / Bachelor's thesis in electrical engineering 2022, KTH, Stockholm
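A minimal sketch of the traffic-capture step is shown below, using the scapy library to flag unencrypted HTTP traffic to the camera (which could expose credentials); the camera address and port are illustrative assumptions, not details from the report.

```python
from scapy.all import sniff, IP, TCP  # common Python traffic-analysis library

CAMERA_IP = "192.168.1.10"  # hypothetical address of the camera under test

def inspect(pkt):
    """Flag plaintext HTTP packets to or from the camera."""
    if IP in pkt and CAMERA_IP in (pkt[IP].src, pkt[IP].dst):
        if TCP in pkt and 80 in (pkt[TCP].sport, pkt[TCP].dport):
            print("unencrypted HTTP involving camera:", pkt.summary())

# capture live traffic filtered to the camera host (requires root privileges)
sniff(filter=f"host {CAMERA_IP}", prn=inspect, store=False)
```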
138

Markerless Motion Capture and Analysis System to Enhance Exercise Professional Effectiveness: Preliminary Study

Hanson, Andrew Todd January 2016 (has links)
No description available.
139

Remote Imaging System Acquisition (RISA) Space Environment Multispectral Imager

Lizarrage, Adrian, Lynn, Brittany, Lange, Jeremiah 10 1900 (has links)
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California / The purpose of the NASA Remote Imaging System Acquisition space camera prototype is to integrate multiple optical instruments into a small wireless system using radiation-tolerant components. This stage of prototyping was the development of a broadband variable-focus camera that can transmit data wirelessly. A liquid lens in conjunction with a cerium-doped double-Gauss lens eliminates traditional focusing mechanisms.
140

Modelling and Measurements of MAST Neutron Emission

Klimek, Iwona January 2016 (has links)
Measurements of neutron emission from a fusion plasma can provide a wealth of information on the underlying temporal, spatial, and energy distributions of reacting ions and how they are affected by a wide range of magnetohydrodynamic (MHD) instabilities. This thesis focuses on the interpretation of experimental measurements recorded by neutron flux monitors, with and without spectroscopic capabilities, installed on the Mega Ampere Spherical Tokamak (MAST). In particular, temporally and spatially resolved measurements of the neutron rate from the neutron camera, which also possesses spectroscopic capabilities, are combined with temporally resolved measurements of the total neutron rate provided by the absolutely calibrated fission chamber in order to study the properties of fast ion distributions in different plasma scenarios. The first part of the thesis describes in detail two forward modelling methods, which employ a set of interconnected codes developed to interpret experimental observations such as neutron count rate profiles and recoil proton pulse height spectra provided by the neutron camera. In the second part of the thesis, the developed methods are applied to model the neutron camera observations performed in a variety of plasma scenarios. The first method, which involves only the TRANSP/NUBEAM and LINE2 codes, was used to validate the neutron count rate profiles measured by the neutron camera in three different plasma scenarios covering the wide range of total neutron rates typically observed on MAST. In addition, the first framework was applied to model the changes in the total and local neutron rates caused by the fishbone instability, as well as to estimate the hydrogen-to-deuterium ion ratio. The second modelling method, which involves TRANSP/NUBEAM, LINE2, DRESS, and NRESP, was used to validate the measured recoil proton pulse height spectra in an MHD-quiescent plasma scenario.
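As a generic illustration of one forward-modelling comparison step (an assumption about the workflow, not the thesis's code), the sketch below scales a modelled neutron count-rate profile to the measured camera data by weighted least squares and reports the goodness of fit:

```python
import numpy as np

def fit_scale(model, measured, sigma):
    """Least-squares scale s minimizing chi^2 = sum((s*m - d)^2 / sigma^2).
    Setting d(chi^2)/ds = 0 gives s = sum(w*m*d) / sum(w*m^2), w = 1/sigma^2."""
    w = 1.0 / np.asarray(sigma) ** 2
    m, d = np.asarray(model), np.asarray(measured)
    s = np.sum(w * m * d) / np.sum(w * m * m)
    chi2 = np.sum(w * (s * m - d) ** 2)
    return s, chi2
```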
