1 |
Distance Estimation of Two Distance Sensors. Vamsi Bhargav, Kamuju; Aditya Pavan Kumar, Yenuga. January 2022 (has links)
In the modern world, sensors play an important role: they help acquire information about a process, such as temperature, velocity, distance, etc. Based on the information acquired from the sensors, decisions can be made, for example to increase the heating in a building or to accelerate a car. In many cases a single sensor type cannot provide enough information for complex decision making, for example when the physical properties of the process lie outside the measurement range of the sensor. As a result, in order to achieve the desired performance levels, a combination of sensors should be used in an integrated manner. Sensor-generated data need to be processed into information through appropriate decision-making models in order to improve overall performance. Here we compare two sensors, a short-range and a long-range sensor, and calculate the distance from both sensors to the same object using an Arduino UNO microcontroller. The sensors used in our work have an overlapping, or common, interval in their measurement ranges. We therefore investigate how to decide on the distance to an object when the data acquired from both sensors lie in that common range.
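The decision in the common range can be sketched as a simple fusion rule. The range limits and noise figures below are illustrative assumptions, not values from the thesis:

```python
# Hypothetical sketch: fuse a short-range and a long-range reading when
# both fall inside the sensors' common measurement interval.
# Range limits and variances below are illustrative assumptions.

SHORT_MIN, SHORT_MAX = 2.0, 400.0    # cm, e.g. an ultrasonic sensor
LONG_MIN, LONG_MAX = 100.0, 1200.0   # cm, e.g. a time-of-flight sensor
SHORT_VAR, LONG_VAR = 1.0, 4.0       # assumed measurement variances

def fuse_distance(short_cm, long_cm):
    """Return a single distance estimate from two sensor readings."""
    short_ok = SHORT_MIN <= short_cm <= SHORT_MAX
    long_ok = LONG_MIN <= long_cm <= LONG_MAX
    if short_ok and long_ok:
        # Both readings valid: inverse-variance weighted average, so the
        # less noisy sensor dominates the combined estimate.
        w_s, w_l = 1.0 / SHORT_VAR, 1.0 / LONG_VAR
        return (w_s * short_cm + w_l * long_cm) / (w_s + w_l)
    if short_ok:
        return short_cm
    if long_ok:
        return long_cm
    return None  # neither sensor can see the object

print(fuse_distance(150.0, 160.0))  # both valid -> 152.0, weighted toward the short sensor
```

Outside the common interval, the rule degenerates to trusting whichever sensor is in range, which matches the motivation for combining the two.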
|
2 |
Vehicle Perception: Localization, Mapping with Detection, Classification and Tracking of Moving Objects. Vu, Trung-Dung. 18 September 2009 (has links) (PDF)
Perceiving and understanding the environment surrounding a vehicle is a very important step in building driving-assistance systems or autonomous vehicles. In this thesis, we study the problems of simultaneous localization and mapping (SLAM) with detection, classification and tracking of moving objects in dynamic outdoor environments, focusing on the laser scanner as the main perception sensor. If these tasks can be accomplished reliably in real time, a vast range of automotive applications opens up. The first contribution of this research is a grid-based approach that solves SLAM together with the detection of moving objects. To correct the vehicle location obtained from odometry, we introduce a new fast incremental scan-matching method that works reliably in dynamic outdoor environments. Once a good vehicle location is estimated, the surrounding map is updated incrementally and moving objects are detected without a priori knowledge of the targets. Experimental results on datasets collected in different scenarios demonstrate the efficiency of the method. The second contribution builds on the first: once a good vehicle localization and a reliable map are obtained, we focus on moving objects and present a method for their simultaneous detection, classification and tracking. A model-based approach interprets the laser measurement sequence over a sliding window of time as hypotheses of moving-object trajectories. The data-driven Markov chain Monte Carlo (DDMCMC) technique solves the data association in the spatio-temporal space to find the most likely solution efficiently. We test the proposed algorithm on real-life urban traffic data and present promising results. The third contribution is the integration of our perception module on a real vehicle for a particular automotive safety application named Pre-Crash. This work was performed in the framework of the European project PReVENT-ProFusion in collaboration with Daimler AG. A comprehensive experimental evaluation based on relevant crash and non-crash scenarios confirms the robustness and reliability of the proposed method.
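The incremental map update in grid-based SLAM of this kind is commonly done with per-cell log-odds Bayesian updates. A minimal sketch follows; the inverse sensor model probabilities are illustrative assumptions, not the thesis's values:

```python
import math

# Minimal occupancy-grid cell update in log-odds form, the standard
# building block of grid-based SLAM approaches. The inverse sensor
# model values (P_HIT, P_MISS) are illustrative assumptions.

def logit(p):
    return math.log(p / (1.0 - p))

P_HIT, P_MISS = 0.7, 0.4  # assumed inverse sensor model

def update_cell(log_odds, observed_occupied):
    """Bayesian update of one grid cell's occupancy in log-odds form."""
    return log_odds + (logit(P_HIT) if observed_occupied else logit(P_MISS))

def occupancy_probability(log_odds):
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

# A cell hit by three consecutive beam endpoints becomes increasingly
# likely to be occupied; starting from the uninformed prior p = 0.5.
l = 0.0
for _ in range(3):
    l = update_cell(l, observed_occupied=True)
print(round(occupancy_probability(l), 3))  # -> 0.927
```

Cells whose occupancy flips between scans despite a well-estimated pose are candidates for moving objects, which is the intuition behind detecting movers without a priori target models.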
|
3 |
A novel 3D recovery method by dynamic (de)focused projection. Lertrusdachakul, Intuon. 30 November 2011 (links) (PDF)
This thesis presents a novel 3D recovery method based on structured light. The method unifies depth from focus (DFF) and depth from defocus (DFD) techniques through the use of a dynamic (de)focused projection. With this approach, the image acquisition system is specifically constructed to keep the whole object sharp in all captured images; therefore, only the projected patterns experience different defocus deformations according to the object's depth. When the projected patterns are out of focus, their Point Spread Function (PSF) is assumed to follow a Gaussian distribution. The final depth is computed by analyzing the relationship between the sets of PSFs obtained from different blurs and the variation of the object's depth. Our depth estimation can be employed as a stand-alone strategy: it does not suffer from occlusion or correspondence problems, and it handles textureless and partially reflective surfaces. Experimental results on real objects demonstrate the effective performance of our approach, providing reliable depth estimation and competitive computation time. It uses fewer input images than DFF and, unlike DFD, it ensures that the PSF is locally unique.
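The final step, mapping an estimated Gaussian blur width to a depth, can be sketched as a calibrated lookup. The calibration table below is a made-up example for illustration, not data from the thesis:

```python
# Illustrative sketch of the depth-from-defocus mapping described above:
# the projected pattern's blur is modeled as a Gaussian PSF, and a
# calibration maps the estimated sigma to depth. The table is invented.

CALIBRATION = [  # (psf_sigma_px, depth_mm), assumed monotonic
    (0.5, 100.0),
    (1.0, 150.0),
    (2.0, 250.0),
    (4.0, 450.0),
]

def depth_from_sigma(sigma):
    """Linearly interpolate depth from an estimated PSF sigma."""
    pts = CALIBRATION
    if sigma <= pts[0][0]:
        return pts[0][1]
    for (s0, d0), (s1, d1) in zip(pts, pts[1:]):
        if s0 <= sigma <= s1:
            t = (sigma - s0) / (s1 - s0)
            return d0 + t * (d1 - d0)
    return pts[-1][1]

print(depth_from_sigma(1.5))  # halfway between sigma 1.0 and 2.0 -> 200.0
```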
|
4 |
A novel 3D recovery method by dynamic (de)focused projection / Nouvelle méthode de reconstruction 3D par projection dynamique (dé)focalisée. Lertrusdachakul, Intuon. 30 November 2011 (links)
This thesis presents a novel 3D recovery method based on structured light. The method unifies depth from focus (DFF) and depth from defocus (DFD) techniques through the use of a dynamic (de)focused projection. With this approach, the image acquisition system is constructed to keep the whole object sharp in all captured images, so only the projected patterns experience different defocus deformations according to the object's depth. When the projected patterns are out of focus, their Point Spread Function (PSF) is assumed to follow a Gaussian distribution. The final depth is computed from the relationship between the sets of PSFs obtained at different blur levels and the variation of the object's depth. This depth estimation can be employed independently: it does not suffer from occlusion or correspondence problems, and it handles textureless and semi-reflective surfaces. Experimental results on real objects demonstrate the effectiveness of the approach, offering reliable depth estimation and reduced computation time. The method uses fewer images than DFF approaches and, unlike DFD approaches, it ensures that the PSF is locally unique.
|
5 |
Three-Dimensional Hand Tracking and Surface-Geometry Measurement for a Robot-Vision System. Liu, Chris Yu-Liang. 17 January 2009 (links)
Tracking human motion and identifying and recognizing objects are important in many applications, including motion capture for human-machine interaction systems. This research is part of a global project to enable a service robot to recognize new objects and perform different object-related tasks based on task guidance and demonstration provided by a general user. It consists of the calibration and testing of two vision systems that form part of a robot-vision system. First, real-time tracking of a human hand is achieved using images acquired from three calibrated, synchronized cameras; the hand pose is determined from the positions of physical markers and input to the robot system in real time. Second, a multi-line laser-camera range sensor is designed, calibrated, and mounted on a robot end-effector to provide three-dimensional (3D) geometry information about objects in the robot environment; the sensor includes two cameras to provide stereo vision. For 3D hand tracking, a novel score-based scheme is presented that employs dynamic multi-threshold marker detection, a stereo camera-pair utilization scheme, and marker matching and labeling using epipolar geometry and hand-pose axis analysis, enabling real-time hand tracking under occlusion and non-uniform lighting. For surface-geometry measurement with the multi-line laser range sensor, two approaches to two-dimensional (2D) to 3D coordinate mapping are analyzed, using Bezier surface fitting and neural networks, respectively. The neural-network approach proved the more viable of the two, worth future exploration for its lower 3D reconstruction error and its consistency across different regions of the object space.
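The Bezier-surface-fitting alternative mentioned above boils down to evaluating a tensor-product Bezier patch at an image coordinate once the control points have been fitted. A minimal sketch using de Casteljau's algorithm; the 3x3 control grid is a made-up example, not calibration data from the thesis:

```python
# Sketch of 2D-to-3D mapping via a fitted Bezier patch: an image
# coordinate (u, v) in [0, 1]^2 is mapped to a calibrated value by
# evaluating the patch. The control grid below is invented.

def de_casteljau(points, t):
    """Evaluate a 1D Bezier curve of arbitrary degree at parameter t."""
    pts = list(points)
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

def bezier_patch(ctrl, u, v):
    """Tensor-product patch: evaluate each row along u, then across rows along v."""
    row_vals = [de_casteljau(row, u) for row in ctrl]
    return de_casteljau(row_vals, v)

# Assumed 3x3 grid of fitted depth values (one scalar per control point).
CTRL = [
    [10.0, 12.0, 14.0],
    [11.0, 13.0, 15.0],
    [12.0, 14.0, 16.0],
]

print(bezier_patch(CTRL, 0.0, 0.0))  # corner control point -> 10.0
print(bezier_patch(CTRL, 0.5, 0.5))  # patch center -> 13.0
```

In practice each output coordinate (X, Y, Z) would get its own fitted patch; the neural-network alternative replaces this fixed polynomial form with a learned mapping.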
|
7 |
Senzorika a řízení pohonů 4 kolového mobilního robotu / Sensors and motor control of a four-wheeled mobile robot. Zatloukal, Jiří. January 2013 (links)
This diploma thesis deals with the design and realization of the sensor and drive system of a four-wheeled mobile robot. The control unit is a miniature computer, a Raspberry Pi. The robot will later be used for environment mapping and localization; for this purpose it exploits different types of sensors, whose data are processed by an Xmega microcontroller. Another microcontroller, together with a DRV8432 H-bridge, is used to control the direct-current drives.
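The drive-control layer of such a differential robot can be sketched as mapping a desired forward speed and turn rate to signed PWM duty cycles for the H-bridge. The function names and scaling constants are assumptions for illustration, not from the thesis:

```python
# Hypothetical sketch of differential-drive motor control: (forward, turn)
# commands become per-side duty cycles in [-1, 1]. The sign would drive the
# H-bridge direction input and the magnitude the PWM duty. MAX_SPEED is an
# assumed constant, not a value from the thesis.

MAX_SPEED = 1.0  # m/s, assumed top speed at 100% duty

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def drive_to_duty(forward, turn):
    """Map (forward m/s, turn m/s differential) to (left, right) duty."""
    left = clamp((forward - turn) / MAX_SPEED, -1.0, 1.0)
    right = clamp((forward + turn) / MAX_SPEED, -1.0, 1.0)
    return left, right

print(drive_to_duty(0.5, 0.0))   # straight ahead at half duty -> (0.5, 0.5)
print(drive_to_duty(0.0, 0.25))  # spin in place -> (-0.25, 0.25)
```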
|
8 |
Localisation et cartographie simultanées en environnement extérieur à partir de données issues d'un radar panoramique hyperfréquence / Simultaneous localization and mapping in extensive outdoor environments from hyper-frequency radar measurements. Gérossier, Franck. 5 June 2012 (links)
Simultaneous Localization And Mapping (SLAM) is one of the main topics investigated in the field of autonomous mobile robots. It permits the localization and mapping of a robot in a large outdoor environment, using exteroceptive (laser, camera, radar, etc.) and proprioceptive (odometer, gyroscope, etc.) sensors. The objective of this PhD thesis is to develop an innovative SLAM that uses a frequency-modulated continuous-wave (FMCW) radar as the exteroceptive sensor. Microwave radar provides an alternative solution for environmental imaging and overcomes shortcomings of laser, video and sonar sensors, such as their high sensitivity to atmospheric conditions. However, data obtained with this rotating range sensor are adversely affected by the vehicle's own movement. To manage this, we propose: an on-the-fly correction of the rotation distortion, using an algorithm fed by the proprioceptive sensors' measurements; a new simultaneous localization and mapping technique named RS-SLAM-FMT, which performs scan matching on the observations within an EKF-SLAM estimation framework; the first use in SLAM of the Fourier-Mellin Transform, which provides an accurate and efficient way of computing the rigid transformation between consecutive scans; an experimental tool to determine the covariance matrix associated with the observations, based on an uncertainty analysis of Fourier-Mellin image registration; robustness tests of the SLAM algorithm in real-life conditions, including environments with few points of interest, trajectories driven at high speed, and peri-urban environments with a high density of moving objects; and a real-time implementation of RS-SLAM-FMT on a mobile exploration vehicle operating in an extensive outdoor environment.
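The translation-recovery core of Fourier-Mellin registration is phase correlation. A minimal sketch follows, recovering only the shift between two scans rendered as 2D images; a full Fourier-Mellin match would additionally estimate rotation and scale on log-polar magnitude spectra:

```python
import numpy as np

# Phase correlation: the FFT-based registration building block at the
# heart of Fourier-Mellin scan matching, here for translation only.

def phase_correlation(a, b):
    """Return the (row, col) shift that maps image b onto image a."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12       # keep phase information only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices to signed shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Demo: shift a random "scan image" by (3, -5) and recover the shift.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(3, -5), axis=(0, 1))
print(phase_correlation(shifted, img))  # -> (3, -5)
```

The sharpness of the correlation peak also gives a handle on registration uncertainty, which is the kind of analysis the covariance-estimation tool above builds on.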
|
9 |
Perception de l'environnement par radar hyperfréquence. Application à la localisation et la cartographie simultanées, à la détection et au suivi d'objets mobiles en milieu extérieur / Perception of the environment with a hyper-frequency radar: application to simultaneous localization and mapping, and to detection and tracking of moving objects in outdoor environments. Vivet, Damien. 5 December 2011 (links)
In the context of outdoor mobile robotics, perception and localization are essential for the autonomous navigation of a vehicle. The objective of this PhD is to develop a simultaneous localization and mapping approach for dynamic outdoor environments with detection and tracking of moving objects (SLAMMOT), using a single rotating exteroceptive radar sensor in realistic driving conditions, at around 30 km/h. At such speeds, the data obtained with a rotating range sensor are corrupted by the vehicle's own displacement. This distortion, usually considered a disturbance, is analyzed here as a source of information. The study also evaluates the potential of frequency-modulated continuous-wave (FMCW) radar technology for an autonomous robotic vehicle in extended outdoor environments. In this work, we propose: an on-the-fly distortion correction using proprioceptive sensors, leading to a localization and mapping (SLAM) application; a line-based SLAM evaluation method; the exploitation of the distortion itself for proprioceptive purposes in localization and mapping; an odometry principle based on the Doppler velocimetry provided by the radar sensor; and detection and tracking of mobile objects (DATMO) with a single radar sensor.
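The Doppler-based odometry principle rests on the standard radar relation v_r = f_d * c / (2 * f0) between a Doppler shift and the radial velocity it encodes. A back-of-the-envelope sketch; the carrier frequency is an assumption for illustration, not the sensor actually used in the thesis:

```python
# Doppler velocimetry sketch: a measured Doppler shift on a static target
# gives the vehicle's radial velocity toward it, v_r = f_d * c / (2 * f0).
# The 24 GHz carrier is an assumed value (a common FMCW radar band).

C = 3.0e8    # speed of light, m/s
F0 = 24.0e9  # assumed radar carrier frequency, Hz

def radial_velocity(doppler_shift_hz):
    """Radial velocity (m/s) from a measured Doppler shift (Hz)."""
    return doppler_shift_hz * C / (2.0 * F0)

# A vehicle driving at 30 km/h (~8.33 m/s) toward a static target:
f_d = 2.0 * F0 * (30.0 / 3.6) / C
print(round(f_d, 1), "Hz")                    # Doppler shift of the echo
print(round(radial_velocity(f_d), 2), "m/s")  # recovers ~8.33 m/s
```

Aggregating such per-target radial velocities over a full radar revolution is what allows ego-motion to be estimated without wheel odometry.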
|
10 |
Design of Intelligent Internet of Things and Internet of Bodies Sensor Nodes. Shitij Tushar Avlani (11037774). 23 July 2021 (links)
Energy-efficient communication has remained the primary bottleneck in achieving fully energy-autonomous IoT nodes. Several scenarios, including In-Sensor Analytics (ISA), Collaborative Intelligence (CI) and Context-Aware Switching (CAS) of the cluster head during CI, have been explored to trade off the energies required for communication and computation in a wireless sensor network deployed as a mesh for multi-sensor measurement. A real-time co-optimization algorithm was developed to minimize the energy consumption of the network and maximize the overall battery lifetime of individual nodes.

The difficulty of achieving the design goals of lifetime, information accuracy, transmission distance and cost with traditional battery-powered devices has driven significant research into energy-harvesting wireless sensor nodes. This challenge is amplified by the inherently power-intensive nature of long-range communication when sensor networks must span vast areas such as agricultural fields and remote terrain. Solar power is a common energy source in wireless sensor nodes, but it is unreliable owing to power fluctuations caused by changing seasons and weather conditions. This thesis tackles these issues by presenting a perpetually powered, energy-harvesting sensor node that uses a minimally sized solar cell and is capable of long-range communication by dynamically co-optimizing energy consumption and information transfer, termed Energy-Information Dynamic Co-Optimization (EICO). This energy-information intelligence is achieved by adaptively duty-cycling information transfer based on the total energy available from the harvester and the charge-storage element, optimizing the sensor node's energy consumption while employing event-driven communication to minimize the loss of information. We show results of continuous monitoring across 1 km without replacing the battery, while maintaining an information accuracy of at least 95%.

Decades of continuous scaling in semiconductor technology have drastically reduced the cost and size of unit computing. This has enabled the design of small-form-factor wearable devices that communicate with each other to form a network around the body, commonly known as a Wireless Body Area Network (WBAN). These devices have found significant medical application, such as reading surface biopotential signals for monitoring, diagnosis and therapy; one such device, for the management of oropharyngeal swallowing disorders, is described in this thesis. Radio transmission over air is the common method of communication among these devices, but in recent years Human Body Communication (HBC) has shown great promise as a replacement for wireless communication in a WBAN. However, few studies in the literature systematically examine the channel loss of capacitive HBC for wearable devices over a wide frequency range with different terminations at the receiver, partly because an accurate study requires miniaturized wearable devices. This thesis therefore measures the channel loss of capacitive HBC from 100 kHz to 1 GHz for both high-impedance and 50 Ohm terminations using wearable, battery-powered devices, which are mandatory for accurate measurement of the HBC channel loss because of ground-coupling effects. The measured results provide consistent wearable, wide-frequency HBC channel-loss data and could serve as a backbone for the emerging field of HBC by aiding the selection of an appropriate operating frequency and termination.

Lastly, the power and security benefits of human body communication are demonstrated by extending it to animals (animal body communication, ABC). A sub-cubic-inch, custom-designed sensor node is built from off-the-shelf components, capable of sensing and transmitting biopotential signals through the body of a rat at significantly lower power than traditional wireless transmission. In-vivo experimental analysis shows that ABC successfully transmits acquired electrocardiogram (EKG) signals through the body with a correlation accuracy of more than 99% compared with traditional wireless communication modalities, at a 50x reduction in power consumption.
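The adaptive duty cycling described for the EICO node can be sketched as an energy-budgeted transmission schedule with an event-driven override. The thresholds, intervals, and function names below are illustrative assumptions, not values from the thesis:

```python
# Hypothetical sketch of energy-aware adaptive duty cycling: the time
# between periodic transmissions stretches as the harvested and stored
# energy budget shrinks, while detected events are always reported.
# All thresholds and intervals are invented for illustration.

FULL_ENERGY = 100.0  # arbitrary energy units in storage when full

def transmit_interval(stored_energy, harvest_rate):
    """Seconds between periodic transmissions, given the energy budget."""
    budget = stored_energy / FULL_ENERGY + min(harvest_rate, 1.0)
    if budget >= 1.5:
        return 10      # plenty of energy: report often
    if budget >= 0.75:
        return 60
    if budget >= 0.25:
        return 300
    return 3600        # nearly depleted: report hourly at most

def should_transmit_now(event_detected, stored_energy):
    """Event-driven override: report events whenever any energy remains."""
    return event_detected and stored_energy > 0.0

print(transmit_interval(90.0, 0.8))  # sunny, nearly full -> 10 s
print(transmit_interval(10.0, 0.0))  # dark, nearly empty -> 3600 s
```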
|