11

Towards 3D reconstruction of outdoor scenes by mmw radar and a vision sensor fusion / Reconstruction 3D des scènes urbaines par fusion de donnée d'un radar hyperfréquence et de vision

El Natour, Ghina 14 December 2016 (has links)
The main goal of this PhD work is to develop 3D mapping methods for large-scale environments by combining a panoramic MMW radar and optical cameras. Unlike existing multi-sensor fusion methods such as SLAM (simultaneous localization and mapping), we want to build an RGB-D-type sensor that directly provides depth measurements enhanced with appearance information (color, texture). After modeling the geometry of the radar/camera system, we propose a novel calibration method using point correspondences. To obtain these correspondences, we designed special targets allowing accurate point detection by both the radar and the camera. The proposed approach was developed so that it can be carried out by non-expert operators in an unconstrained environment. Secondly, a 3D point reconstruction method is developed based on radar and image point correspondences. A theoretical analysis of the combined uncertainties of the two sensors, together with experimental results, shows that the proposed method outperforms conventional stereoscopic triangulation for the distant points typically encountered when mapping large-scale outdoor scenes. Finally, we propose an efficient strategy for automatic matching of camera and radar data. This strategy uses two calibrated cameras. Taking into account the heterogeneity of camera and radar data, the developed algorithm starts by segmenting the radar data into polygonal regions. Thanks to the calibration, the envelope of each region is projected into the two images to define smaller regions of interest, which are in turn segmented into polygonal regions, generating a shortlist of candidate matches. A similarity criterion based on both cross-correlation and the epipolar constraint is applied to validate or reject region pairs. As long as this criterion is not met, the regions are themselves subdivided by further segmentation. This process favors the matching of large regions first, with the goal of obtaining a map made of locally dense patches. The proposed methods were tested on both synthetic and real experimental data. The results are encouraging and, in our view, show the feasibility of radar and vision sensor fusion for the 3D mapping of large-scale urban environments.
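The depth-from-radar idea behind the reconstruction can be illustrated with a minimal sketch, assuming made-up names and calibration numbers and a simplified ray-sphere intersection rather than the thesis' actual estimator: the camera fixes the bearing of a point while the radar range fixes its depth, which is why accuracy degrades more slowly with distance than stereoscopic triangulation.

```python
import numpy as np

def backproject_with_radar_range(pixel, K, radar_center_cam, radar_range):
    """Recover a 3D point (camera frame) by intersecting the viewing ray of
    `pixel` with the sphere of radius `radar_range` centred on the radar,
    whose origin in camera coordinates is `radar_center_cam`.
    Illustrative simplification, not the thesis' exact method."""
    # Unit viewing ray through the pixel, using the camera intrinsics K.
    d = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    d /= np.linalg.norm(d)
    c = np.asarray(radar_center_cam, dtype=float)

    # Solve ||s*d - c||^2 = radar_range^2 for the depth s along the ray.
    b = -2.0 * (d @ c)
    c0 = c @ c - radar_range**2
    disc = b**2 - 4.0 * c0
    if disc < 0:
        return None                   # the ray never reaches the measured range
    s = (-b + np.sqrt(disc)) / 2.0    # keep the forward intersection
    return s * d

# Toy numbers (placeholders, not calibration results from the thesis).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
point = backproject_with_radar_range((350, 230), K, np.array([0.5, 0.0, 0.0]), 20.0)
print(np.round(point, 2))
```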
12

Multi-sensor system for the automated detection of vein ore deposits and rare earth elements in a mine / Multisensorsystem für die automatisierte Detektion von Gangerzlagerstätten und seltenen Erden in einer Mine

Varga, Sebastian 29 July 2016 (has links) (PDF)
As part of UPNS4D+, I work on the automated underground detection of vein ore deposits and rare earth elements in a mine. For the detection I use a multi-sensor system consisting of a hyperspectral camera, an RGB camera and a laser scanner. The basis for combining hyperspectral image processing with an RGB camera can be found in industrial automated sorting plants, where the combination enables, for example, the automatic sorting of waste materials. In remote sensing, hyperspectral imagery has been used to detect geological features for several decades, beginning with the Landsat satellites of the 1970s, which used spectral information to map surface geology. I have tested hyperspectral imaging in the Reiche Zeche mine and can show that pyrite can be detected underground.
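For illustration only, a generic spectral-angle-mapper test, a standard mineral-mapping technique in remote sensing; it is not necessarily the processing chain used in this work, and the reference spectrum and threshold below are placeholders.

```python
import numpy as np

def spectral_angle(pixel_spectrum, reference_spectrum):
    """Angle (radians) between a measured spectrum and a library reference;
    smaller angles mean a closer spectral match."""
    p = np.asarray(pixel_spectrum, dtype=float)
    r = np.asarray(reference_spectrum, dtype=float)
    cos = np.clip(p @ r / (np.linalg.norm(p) * np.linalg.norm(r)), -1.0, 1.0)
    return np.arccos(cos)

def detect_mineral(cube, reference_spectrum, max_angle=0.1):
    """Flag pixels of a hyperspectral cube (H x W x bands) whose spectral
    angle to the reference (e.g. a pyrite library spectrum) is small.
    The threshold is an arbitrary placeholder."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    angles = np.array([spectral_angle(s, reference_spectrum) for s in flat])
    return (angles < max_angle).reshape(h, w)

# Toy usage with random data standing in for a real cube and library spectrum.
rng = np.random.default_rng(0)
cube = rng.random((4, 4, 50))
pyrite_ref = rng.random(50)
print(detect_mineral(cube, pyrite_ref, max_angle=0.6).sum(), "candidate pixels")
```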
13

Combined sensor of dielectric constant and visible and near infrared spectroscopy to measure soil compaction using artificial neural networks

Al-Asadi, Raed January 2014 (has links)
Soil compaction is a widespread problem in agricultural soils that has negative agronomic and environmental impacts. The former may lead to poor crop growth and yield, whereas the latter may lead to poor soil hydraulic properties and a high risk of flooding, soil erosion and degradation. The alleviation of soil compaction must therefore be carried out on a regular basis. One of the main parameters used to quantify soil compaction is soil bulk density (BD), and mapping within-field variation in BD is a main requirement for within-field management of compaction. The aim of this research was to develop a new approach for the measurement of soil BD as an indicator of soil compaction. The research relies on the fusion of visible and near-infrared spectroscopy (vis-NIRS) data, used to measure soil gravimetric moisture content (ω), with frequency domain reflectometry (FDR) data, used to measure soil volumetric moisture content (θv). The estimated ω and θv for the same undisturbed soil samples, collected from selected locations, textures, soil moisture contents and land-use systems, were used to derive soil BD. A total of 1013 samples were collected from 32 sites in England and Wales. Two calibration techniques for vis-NIRS were evaluated, namely partial least squares regression (PLSR) and artificial neural networks (ANN). ThetaProbe calibration was performed using the general formula (GF), soil-specific calibration (SSC), the output voltage (OV) and artificial neural networks (ANN). ANN analyses for both ω and θv were based either on a single input variable or on multiple input variables (data fusion). The effects of texture, moisture content and land use on the prediction accuracy of ω, θv and BD were evaluated to arrive at the best experimental conditions for measuring BD with the proposed new system. A prototype was developed, tested under laboratory conditions and implemented in situ for mapping ω, θv and BD. When using the entire dataset, results proved that high measurement accuracy can be obtained for ω and θv with PLSR and the best-performing traditional ThetaProbe calibration, with R2 values of 0.91 and 0.97 and root mean square errors of prediction (RMSEp) of 0.027 g g-1 and 0.019 cm3 cm-3, respectively. However, the ANN data-fusion method resulted in improved accuracy (R2 = 0.98, with RMSEp of 0.014 g g-1 and 0.015 cm3 cm-3, respectively). This data-fusion approach also gave the best accuracy for BD assessment when only vis-NIRS spectra and the ThetaProbe output voltage were used as input data (R2 = 0.81 and RMSEp = 0.095 g cm-3). The impact of moisture level on BD prediction revealed that accuracy improved with increasing soil moisture, with RMSEp values of 0.081, 0.068 and 0.061 g cm-3 for average ω of 0.11, 0.20 and 0.28 g g-1, respectively. The influence of soil texture was examined in relation to clay content (%). Clay content positively affected vis-NIRS accuracy for ω measurement, while no obvious impact on the dielectric sensor readings was observed; hence, soil texture had no clear influence on the accuracy of BD prediction, although RMSEp values for BD assessment ranged from 0.046 to 0.115 g cm-3. The effect of land use on BD prediction showed that measurements in grassland soils were more accurate than in arable soils, with RMSEp values of 0.083 and 0.097 g cm-3, respectively. The prototype measuring system showed moderate accuracy during the laboratory test and encouraging precision for measuring soil BD in the field test, with RMSEp of 0.077 and 0.104 g cm-3 for arable and grassland soils, respectively. Further development of the prototype measuring system is expected to improve the prediction accuracy of soil BD. It can be concluded that BD can be measured accurately by combining the vis-NIRS and FDR techniques based on an ANN data-fusion approach.
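The physical relation that allows BD to be derived from the two moisture estimates is θv = ω · BD / ρw. A minimal sketch, with illustrative numbers rather than values from the thesis dataset:

```python
def bulk_density(theta_v, omega, rho_w=1.0):
    """Derive soil bulk density (g cm^-3) from volumetric moisture theta_v
    (cm^3 cm^-3, e.g. from the FDR/ThetaProbe) and gravimetric moisture
    omega (g g^-1, e.g. predicted from vis-NIR spectra), using
    theta_v = omega * BD / rho_w  =>  BD = theta_v * rho_w / omega,
    where rho_w is the density of water (~1 g cm^-3)."""
    if omega <= 0:
        raise ValueError("gravimetric moisture must be positive")
    return theta_v * rho_w / omega

# Illustrative values: omega = 0.20 g g^-1 predicted by vis-NIRS,
# theta_v = 0.28 cm^3 cm^-3 from the dielectric sensor.
print(round(bulk_density(0.28, 0.20), 2), "g cm^-3")  # -> 1.4 g cm^-3
```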
14

Comparison of Linear-Correction Spherical-Interpolation Location Methods in Multi-Sensor Environments

Yu, Cheng-lung 22 August 2007 (has links)
In an indoor environment, a multi-sensor system can be an efficient solution for the target location process in terms of estimation cost, because such sensors have the advantages of low power, simplicity, low cost, and low operational complexity. However, the location method and the placement of the sensors have a great impact on location performance. Based on the time difference of arrival (TDOA), the present research uses the linear-correction spherical-interpolation (LCSI) method to estimate target locations. The method combines the linear-correction least-squares (LCLS) method and the spherical-interpolation (SI) method. The SI method avoids the usual iterative, nonlinear minimization and consequently produces better results under noise interference and unfavorable target-sensor geometry; it is therefore used in place of the LS part of the LCLS method, and the result is named the LCSI method. The objective is to correct the SI estimate to achieve better performance. In addition to these performance issues, the limitations of the methods are also examined. The geometric dilution of precision (GDOP) of the TDOA location method in the 3-D scenario is demonstrated, together with its effect on location performance both inside and outside the multi-sensor formation. Programmed 3-D scenarios are used in the simulations, in which three different multi-sensor formations and two different target heights are investigated. The simulation results for the various location methods show that LCSI has advantages over the other methods in wireless TDOA location.
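For orientation, here is a simplified closed-form TDOA solver in the same spherical least-squares family. It solves the linearized range-difference equations jointly for the position and the reference range; it is not the exact LCSI algorithm (which corrects the SI estimate), so treat it as a sketch with assumed geometry.

```python
import numpy as np

def tdoa_unconstrained_ls(sensors, range_diffs):
    """Simplified closed-form TDOA localisation (an unconstrained variant of
    the spherical least-squares family; NOT the exact LCSI algorithm).
    sensors:     (M, 3) array, sensors[0] is the reference sensor.
    range_diffs: (M-1,) measured range differences d_i = r_i - r_1.
    Solves  s_i . x + d_i * r_1 = (||s_i||^2 - d_i^2) / 2
    jointly for the target position x and the reference range r_1."""
    s0 = np.asarray(sensors, dtype=float)
    s = s0 - s0[0]                     # put the reference sensor at the origin
    d = np.asarray(range_diffs, dtype=float)
    A = np.hstack([s[1:], d[:, None]])              # (M-1, 4)
    b = 0.5 * (np.sum(s[1:] ** 2, axis=1) - d**2)   # (M-1,)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3] + s0[0], sol[3]

# Toy usage: 5 sensors, noiseless range differences to a known target.
sensors = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10], [10, 10, 10]], float)
target = np.array([3.0, 4.0, 2.0])
r = np.linalg.norm(sensors - target, axis=1)
x_hat, r1_hat = tdoa_unconstrained_ls(sensors, r[1:] - r[0])
print(np.round(x_hat, 3))   # ~ [3. 4. 2.]
```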
15

Performance Analysis of Closed-Form Least-Squares TDOA Location Methods in Multi-Sensor Environments

Ou, Wen-chin 26 July 2006 (has links)
In an indoor environment, the multi-sensor system has been shown to be an efficient solution for the target locating process in terms of estimation cost. However, the placement of the sensors has a great impact on location performance in an indoor environment. Based on the time difference of arrival (TDOA), closed-form least-squares location methods, including the spherical-interpolation (SI) and spherical-intersection (SX) methods, are used to estimate target locations. Both methods avoid the usual iterative, nonlinear minimization; consequently, under noise interference the two methods produce different results. In addition to these issues, the limitations of the methods are also examined. The geometric dilution of precision (GDOP) effects of TDOA location on location performance, both inside and outside the multi-sensor formation, have previously been studied in the 2-D scenario. This thesis extends the GDOP analysis to 3-D scenarios, analyzes the differences, and proposes suitable configurations. Programmed 3-D scenario simulations are used in this research, designed around multiple sensor arrays and the freedom of movement of a target; the setup specifies the degree of multi-sensor separation and the distances from targets to the sensor array. A suitable location algorithm and optimal multi-sensor deployments for an indoor environment are proposed according to the simulation results.
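A rough sketch of how GDOP can be evaluated for a 3-D TDOA layout, assuming equal and uncorrelated measurement errors (a simplification that ignores the correlation introduced by the common reference sensor); the sensor coordinates below are placeholders.

```python
import numpy as np

def tdoa_gdop(sensors, target):
    """Illustrative GDOP for a TDOA system in 3-D: build the Jacobian of the
    range-difference measurements w.r.t. the target position and return
    sqrt(trace((H^T H)^-1))."""
    s = np.asarray(sensors, dtype=float)
    x = np.asarray(target, dtype=float)
    u = (x - s) / np.linalg.norm(x - s, axis=1, keepdims=True)  # unit LOS vectors
    H = u[1:] - u[0]        # gradient of d_i = r_i - r_1 w.r.t. the target position
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

# Compare a target inside vs. outside a sensor formation.
sensors = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10], [10, 10, 10]], float)
print(tdoa_gdop(sensors, [5, 5, 5]))     # inside the formation
print(tdoa_gdop(sensors, [40, 40, 5]))   # outside: expect a larger GDOP
```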
16

Algorithms and performance optimization for distributed radar automatic target recognition

Wilcher, John S. 08 June 2015 (has links)
This thesis focuses upon automatic target recognition (ATR) with radar sensors. Recent advancements in ATR have included the processing of target signatures from multiple, spatially-diverse perspectives. The advantage of multiple perspectives in target classification results from the angular sensitivity of reflected radar transmissions: by viewing the target at different angles, the classifier has a better opportunity to distinguish between target classes. This thesis extends recent advances in multi-perspective target classification by: 1) leveraging bistatic target reflectivity signatures observed from multiple, spatially-diverse radar sensors; and 2) employing a statistical distance measure to identify radar sensor locations yielding improved classification rates. The algorithms provided in this thesis use high resolution range (HRR) profiles – formed by each participating radar sensor – as input to a multi-sensor classification algorithm derived using the fundamentals of statistical signal processing. Improvements to target classification rates are demonstrated for multiple configurations of transmitter, receiver, and target locations. These improvements are shown to emanate from the multi-static characteristics of a target class's range profile and not merely from non-coherent gain. The significance of dominant scatterer reflections is revealed in both classification performance and the "statistical distance" between target classes. Numerous simulations have been performed to interrogate the robustness of the derived classifier. Errors in target pose angle and the inclusion of camouflage, concealment, and deception (CCD) effects are considered in assessing the validity of the classifier. Different transmitter and receiver combinations and low signal-to-noise ratios are analyzed in the context of deterministic, Gaussian, and uniform target pose uncertainty models. Performance metrics demonstrate increases in classification rates of up to 30% for multiple-transmit, multiple-receive platform configurations when compared to multi-sensor monostatic configurations. A distance measure between probable target classes is derived using information-theoretic techniques pioneered by Kullback and Leibler. The derived measure is shown to suggest radar sensor placements yielding better target classification rates. The predicted placements consider two-platform and three-platform configurations in a single-transmit, multiple-receive environment. Significant improvements in classification rates are observed when compared to ad hoc sensor placement. In one study, platform placements identified by the distance measure algorithm are shown to produce classification rates exceeding those of 98.8% of all possible platform placements.
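As a hedged illustration of the "statistical distance" idea, the textbook Kullback-Leibler divergence between Gaussian class models can be used to rank candidate sensor placements. The thesis derives its own measure, so the functions and feature models below are assumptions for illustration only.

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL divergence KL(N0 || N1) between two multivariate Gaussians, used here
    as a stand-in statistical distance between target-class feature models."""
    k = mu0.size
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0) + diff @ cov1_inv @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def rank_placements(class_models_by_placement):
    """Rank candidate sensor placements by the symmetrised divergence between
    the two target classes they would observe: larger separation should mean
    easier classification."""
    scores = {}
    for name, ((m0, c0), (m1, c1)) in class_models_by_placement.items():
        scores[name] = gaussian_kl(m0, c0, m1, c1) + gaussian_kl(m1, c1, m0, c0)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example with two hypothetical placements and 2-D HRR-derived features.
placements = {
    "placement_A": ((np.array([0.0, 0.0]), np.eye(2)),
                    (np.array([1.0, 0.5]), np.eye(2))),
    "placement_B": ((np.array([0.0, 0.0]), np.eye(2)),
                    (np.array([3.0, 1.0]), np.eye(2))),
}
print(rank_placements(placements))  # placement_B should rank first
```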
17

Multi-Sensor Data Fusion for Vehicular Navigation Applications

Iqbal, Umar 08 August 2012 (has links)
The Global Positioning System (GPS) is widely used in land vehicles but suffers accuracy deterioration in urban canyons, mostly due to satellite signal blockage and multipath. To obtain accurate, reliable, and continuous positioning solutions, GPS is usually augmented with inertial sensors, including accelerometers and gyroscopes, to monitor both the translational and rotational motions of a moving vehicle. Owing to space and cost requirements, micro-electro-mechanical-system (MEMS) inertial sensors, which are typically inexpensive, are presently installed in land vehicles for various purposes and can be integrated with GPS for navigation. Kalman filtering (KF) is usually used to perform this integration. However, the complex error characteristics of these MEMS-based sensors lead to divergence of the positioning solution. Furthermore, the residual GPS pseudorange correlated errors are usually ignored, reducing the overall GPS positioning accuracy. This thesis targets enhancing the performance of integrated MEMS-based INS/GPS navigation systems by exploring new non-linear modelling approaches that can deal with the non-linear and correlated parts of the INS and GPS errors. The research approach relies on a reduced inertial sensor system (RISS) incorporating a single-axis gyroscope, the vehicle odometer, and accelerometers, integrated with GPS in one of two schemes: either loosely coupled, where GPS position and velocity are used for the integration, or tightly coupled, where GPS pseudoranges and pseudorange rates are utilized. A new method based on parallel cascade identification (PCI) is developed in this research to enhance the performance of the KF by modelling azimuth errors for the loosely-coupled RISS/GPS integration scheme. In addition, PCI is utilized to model the residual GPS pseudorange correlated errors, and this thesis develops a method to augment a PCI-based model of these errors to a tightly-coupled KF. To take full advantage of the PCI-based models, this thesis also explores the particle filter (PF) as a non-linear integration scheme capable of accommodating arbitrary sensor characteristics, motion dynamics, and noise distributions. The performance of the proposed methods is examined through several road-test experiments in land vehicles involving different types of inertial sensors and GPS receivers. / Thesis (Ph.D, Electrical & Computer Engineering) -- Queen's University, 2012-07-31 16:09:16.559
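A minimal loosely-coupled update is sketched below: dead-reckoned position/velocity (standing in for the RISS mechanization) corrected by GPS position fixes through a standard Kalman filter. All matrices and noise values are placeholders, and the thesis' PCI and particle-filter extensions are not reproduced.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Standard Kalman prediction step."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Minimal loosely-coupled illustration: a 2-D position/velocity state is
# propagated by dead reckoning and corrected whenever a GPS fix is available.
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
Q = 0.05 * np.eye(4)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # GPS measures position only
R = 4.0 * np.eye(2)                                  # ~2 m GPS position noise

x, P = np.zeros(4), 10.0 * np.eye(4)
gps_fixes = [np.array([1.0, 0.5]), np.array([2.1, 1.1]), None, np.array([4.0, 2.2])]
for z in gps_fixes:
    x, P = kf_predict(x, P, F, Q)
    if z is not None:        # GPS outage: prediction only, inertial data bridges the gap
        x, P = kf_update(x, P, z, H, R)
print(np.round(x, 2))
```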
18

Multi-sensor Information Fusion for Classification of Driver's Physiological Sensor Data

Barua, Shaibal January 2013 (has links)
Physiological sensor signal analysis is common practice in the medical domain for the diagnosis and classification of various physiological conditions. Clinicians frequently use physiological sensor signals to diagnose an individual's psychophysiological parameters, i.e., stress, tiredness, fatigue, etc. However, parameters obtained from physiological sensors can vary with an individual's age, gender, physical condition and so on, and analyzing data from a single sensor could mislead the diagnosis. One proposition today is that sensor signal fusion can provide a more reliable and efficient outcome than data from a single sensor, and it is becoming significant in numerous diagnostic fields, including medical diagnosis and classification. Case-Based Reasoning (CBR) is another well-established and recognized method in the health sciences. Here, an entropy-based algorithm, multivariate multiscale entropy analysis, has been selected to fuse multiple sensor signals. Other physiological sensor measurements are also taken into consideration for system evaluation. A CBR system is proposed to classify 'healthy' and 'stressed' persons using both the fused features and other physiological features, i.e., Heart Rate Variability (HRV), Respiratory Sinus Arrhythmia (RSA) and Finger Temperature (FT). The evaluation and performance analysis of the system have been carried out, and the classification results based on data fusion and physiological measurements are presented in this thesis.
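To illustrate the entropy-based fusion feature, here is a univariate multiscale sketch (coarse-graining plus sample entropy); the thesis uses the multivariate variant, so this only shows the underlying idea, with assumed parameters and a synthetic signal.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Naive sample entropy of a 1-D series (tolerance r as a fraction of the
    series' standard deviation)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)
            c += np.sum(d <= tol) - 1          # exclude the self-match
        return c
    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3, 4, 5)):
    """Coarse-grain the series at each scale (non-overlapping means) and
    compute sample entropy; the resulting curve is the fused feature."""
    x = np.asarray(x, dtype=float)
    out = []
    for tau in scales:
        n = len(x) // tau
        coarse = x[:n * tau].reshape(n, tau).mean(axis=1)
        out.append(sample_entropy(coarse))
    return np.array(out)

# Synthetic signal standing in for a physiological series.
rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.3 * rng.standard_normal(1000)
print(np.round(multiscale_entropy(signal), 3))
```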
19

Filtrage PHD multicapteur avec application à la gestion de capteurs / Multi-sensor PHD filtering with application to sensor management

Delande, Emmanuel 30 January 2012 (has links)
The aim of multi-object filtering is to address the multiple-target detection and/or tracking problem. This thesis focuses on the Probability Hypothesis Density (PHD) filter, a well-known tractable approximation of the Random Finite Set (RFS) filter when the observation process is realized by a single sensor. The first part proposes a rigorous construction of the exact multi-sensor PHD filter and its simplified expression, without approximation, through a joint partitioning of the target state space and the sensors. With this new method, the exact multi-sensor PHD filter can be propagated in simple surveillance scenarios. The second part deals with the sensor management problem in the PHD framework. At each iteration, the Balanced Explorer and Tracker (BET) builds a prediction of the posterior multi-sensor PHD thanks to the Predicted Ideal Measurement Set (PIMS) and produces a multi-sensor control according to a few simple operational principles adapted to surveillance activities.
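For context, the standard single-sensor Gaussian-mixture PHD update is sketched below to show what the filter propagates: an intensity whose integral is the expected number of targets. The exact multi-sensor construction and the BET/PIMS management loop studied in the thesis are considerably more involved and are not reproduced here; the parameter values are placeholders.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gm_phd_update(weights, means, covs, Z, H, R, p_d=0.9, clutter=1e-3):
    """Single-sensor Gaussian-mixture PHD measurement update (standard form)."""
    new_w, new_m, new_P = [], [], []
    # Missed-detection terms keep every prior component with reduced weight.
    for w, m, P in zip(weights, means, covs):
        new_w.append((1.0 - p_d) * w); new_m.append(m); new_P.append(P)
    # Detection terms: one updated copy of every component per measurement.
    for z in Z:
        ws, ms, Ps = [], [], []
        for w, m, P in zip(weights, means, covs):
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            ws.append(p_d * w * multivariate_normal.pdf(z, mean=H @ m, cov=S))
            ms.append(m + K @ (z - H @ m))
            Ps.append((np.eye(len(m)) - K @ H) @ P)
        norm = clutter + sum(ws)
        new_w += [wi / norm for wi in ws]; new_m += ms; new_P += Ps
    return new_w, new_m, new_P

# Toy usage: two prior components, one position measurement.
H, R = np.eye(2), 0.5 * np.eye(2)
w, m, P = gm_phd_update([0.5, 0.5],
                        [np.array([0.0, 0.0]), np.array([5.0, 5.0])],
                        [np.eye(2), np.eye(2)],
                        [np.array([0.2, -0.1])], H, R)
print(round(sum(w), 3), "expected number of targets after the update")
```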
20

Multi Sensor Multi Object Tracking in Autonomous Vehicles

Surya Kollazhi Manghat (8088146) 06 December 2019 (has links)
Self-driving cars, which transport themselves with their own intelligence and take appropriate actions at the right time, are becoming more popular nowadays. Safety is the key factor in the driving environment: a single failed action can cause many fatalities. Computer vision plays a major part in achieving this safety by helping the autonomous vehicle perceive its surroundings. Object detection is a popular technique for capturing the surroundings of an autonomous car, while tracking plays an important role by providing the dynamics of the detected objects. Autonomous cars combine a variety of sensors such as radar, LiDAR, sonar, GPS, odometry and inertial measurement units to perceive their surroundings. Driver-assistance technologies like Adaptive Cruise Control, Forward Collision Warning (FCW) and Collision Mitigation by Braking (CMbB) ensure safety while driving. Perceiving information from the environment involves setting up sensors on the car; these sensors collect data that is further processed to take actions. The sensor system can consist of a single sensor or multiple sensors. Different sensors have different strengths and weaknesses, which makes their combination important for technologies like autonomous driving. Each sensor has a limit on the accuracy of its readings, so a multi-sensor system can help overcome these deficiencies. This thesis is an attempt to develop a multi-sensor multi-object tracking method to perceive the surroundings of the ego vehicle. While object detection gives information about the presence of objects in a frame, object tracking goes beyond simple observation to the more useful task of monitoring objects over time. Experimental results on the KITTI dataset indicate that the proposed state estimation system for multi-object tracking works well in various challenging environments.
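A minimal single-hypothesis tracking step is sketched below: constant-velocity Kalman tracks associated to fused detections with the Hungarian algorithm. Class names, gates and noise values are assumptions for illustration, not the thesis' actual state-estimation design.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

class Track:
    """Constant-velocity Kalman track over 2-D position (a simplified stand-in
    for a full multi-object tracker state)."""
    F = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
    Q, R = 0.1 * np.eye(4), 0.5 * np.eye(2)

    def __init__(self, z):
        self.x = np.array([z[0], z[1], 0.0, 0.0])
        self.P = np.eye(4)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

def associate(tracks, detections, gate=3.0):
    """Hungarian assignment of fused detections (e.g. LiDAR + camera boxes
    projected to ground positions) to predicted tracks, gated by distance."""
    cost = np.array([[np.linalg.norm(t.x[:2] - d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]

# Toy frame: two existing tracks, two new fused detections.
tracks = [Track(np.array([0.0, 0.0])), Track(np.array([10.0, 5.0]))]
detections = [np.array([0.4, 0.1]), np.array([10.2, 5.3])]
for t in tracks:
    t.predict()
for ti, di in associate(tracks, detections):
    tracks[ti].update(detections[di])
print([np.round(t.x[:2], 2) for t in tracks])
```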
