1 |
3D Map Construction and Data Analysis by LiDAR for Vehicles
Tai, Chia-Hui 03 September 2012 (has links)
LiDAR (Light Detection and Ranging) has become an important and widely applicable measurement technique. The rise of 3D visualization has increased the value of LiDAR measurement for 3D reconstruction technology, since abundant surface features are implicit in the point cloud data. Combined with camera imagery for real-time rendering, LiDAR becomes even more functional.
This thesis proposes and designs a system for vehicles that combines a laser range finder with a 3D visual interface, and that is also equipped with a rotary encoder and an inertial measurement unit for dead reckoning (DR). Through a 2D-to-3D coordinate transform, the 3D coordinates of each point are calculated and combined with color information captured from a camera to collect 3D color point clouds. Such a system is also called a Mobile Mapping System (MMS). In addition, the mapping system uses Direct Memory Access to display the point cloud synchronously in the 3D visual system.
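The 2D-to-3D coordinate transform described above can be sketched as follows. This is a minimal illustration under assumed conventions, not the thesis's implementation: the scan-plane orientation, the flat-ground assumption, and the function name `scan_point_to_world` are all assumptions made for the example.

```python
import math

def scan_point_to_world(r, beam_angle, pose):
    """Project one 2D laser return (range r, beam angle) into world
    coordinates, given the dead-reckoned vehicle pose (x, y, heading).

    Assumed convention: the scanner sweeps in the vehicle's x-z plane,
    so each beam contributes a forward offset and a height; the vehicle
    moves on a flat ground plane (illustrative, not from the thesis).
    """
    x, y, heading = pose
    # Polar-to-Cartesian in the sensor frame.
    forward = r * math.cos(beam_angle)
    height = r * math.sin(beam_angle)
    # Rotate the forward offset by the vehicle heading and translate;
    # the height is carried through unchanged (flat-ground assumption).
    wx = x + forward * math.cos(heading)
    wy = y + forward * math.sin(heading)
    return (wx, wy, height)
```

Each transformed point would then be tagged with the pixel color sampled from the camera image to form the 3D color point cloud.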
Beyond point cloud collection, the system also reconstructs surfaces from the point cloud data. The surface reconstruction is based on nearest-neighbor interpolation, where two factors govern the interpolation process: the angle and the distance between two consecutive sample points in the point sequence. The reconstruction of the point cloud and the calibration of the DR not only confirm the accuracy of the 3D point cloud map but also support the "New Geography" of the 3D electronic map. This research builds an independent Mobile Mapping System.
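The two interpolation criteria named above, angle and distance between consecutive sample points, might look like the following sketch. The thresholds and the function name are illustrative assumptions, not values from the thesis.

```python
import math

def connect_samples(p, q, prev, max_dist=0.5, max_turn_deg=35.0):
    """Decide whether consecutive scan points p and q belong to the same
    surface patch, using the two criteria described above: the gap
    between the points and the turning angle of the point sequence.
    Thresholds are illustrative placeholders."""
    if math.dist(p, q) > max_dist:
        return False          # too far apart: likely a depth discontinuity
    if prev is None:
        return True           # no earlier point, so no angle to check
    # Turning angle between segments prev->p and p->q.
    a1 = math.atan2(p[1] - prev[1], p[0] - prev[0])
    a2 = math.atan2(q[1] - p[1], q[0] - p[0])
    turn = abs((a2 - a1 + math.pi) % (2 * math.pi) - math.pi)
    return math.degrees(turn) <= max_turn_deg
```

Points that pass both checks would be joined into the same interpolated surface; a failed check starts a new patch.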
|
2 |
COMPARISON OF THREE OBSTACLE AVOIDANCE METHODS FOR AN AUTONOMOUS GUIDED VEHICLE
MODI, SACHIN BRISMOHAN 16 September 2002 (has links)
No description available.
|
3 |
Development of an End-effector Sensory Suite for a Rehabilitation Robot
Stiber, Stephanie A. 19 July 2006 (has links)
This research presents an approach to assisting the control and operation of a rehabilitation robot manipulator in executing simple grasping tasks for persons with severe disabilities. It outlines the development of an end-effector sensory suite that includes the BarrettHand end-effector, a laser range finder, and a low-cost camera.
The approach taken in this research differs greatly from currently available rehabilitation robot arms in that it requires minimal user instruction, is easy to operate, and is more effective for persons with severe disabilities. A thorough study of the currently available systems (the Manus, Raptor, and Kares II arms) is also presented.
In order to test the end-effector sensory suite, experiments were performed to locate the centroid of an object of interest and direct the robot end-effector toward it with minimal error. Analyses of the centroid location data, performed to verify accuracy, are also presented.
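A minimal version of the centroid computation used to aim the end-effector might look like this, assuming the object of interest has already been segmented from the camera image into a binary mask (the segmentation step itself is not shown and is not described in the abstract):

```python
def centroid(mask):
    """Centroid (row, col) of the pixels flagged truthy in a binary
    mask: the mean of the row indices and of the column indices over
    all object pixels. A stand-in for the step described above."""
    row_sum = col_sum = count = 0
    for r, row in enumerate(mask):
        for c, hit in enumerate(row):
            if hit:
                row_sum += r
                col_sum += c
                count += 1
    if count == 0:
        raise ValueError("no object pixels in mask")
    return row_sum / count, col_sum / count
```

The returned image coordinates would then be mapped, together with the laser range reading, into a target pose for the BarrettHand.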
The long-term goal of this research is to significantly enhance the ability of severely disabled persons to perform activities of daily living using wheelchair-mounted robot arms. The sensory suite developed through this project is expected to be integrated into a seven-degree-of-freedom wheelchair-mounted robot arm currently under development at the Rehabilitation Robots Laboratory at the University of South Florida.
|
4 |
Adapting Monte Carlo Localization to Utilize Floor and Wall Texture Data
Krapil, Stephanie 01 September 2014 (has links)
Monte Carlo Localization (MCL) is an algorithm that allows a robot to determine its location when provided a map of its surroundings. Particles, consisting of a location and an orientation, represent possible positions where the robot could be on the map. The probability of the robot being at each particle is calculated based on sensor input.
Traditionally, MCL only utilizes the position of objects for localization. This thesis explores using wall and floor surface textures to help the algorithm determine locations more accurately. Wall textures are captured by using a laser range finder to detect patterns in the surface. Floor textures are determined by using an inertial measurement unit (IMU) to capture acceleration vectors which represent the roughness of the floor. Captured texture data is classified by an artificial neural network and used in probability calculations.
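One measurement-update-and-resampling step of plain MCL can be sketched as follows. The Gaussian range likelihood and the `expected_at` map lookup are illustrative stand-ins for a sensor model; the thesis's texture-augmented variant would additionally fold the neural-network texture classification into the particle weights.

```python
import math
import random

def mcl_step(particles, weights, measured, expected_at, sigma=0.2):
    """One MCL iteration: reweight each particle by how well the map's
    predicted range at that pose matches the measured range, then
    resample in proportion to the weights. Names and the noise model
    are illustrative, not from the thesis."""
    new_w = []
    for p, w in zip(particles, weights):
        err = measured - expected_at(p)
        # Gaussian measurement likelihood.
        new_w.append(w * math.exp(-err * err / (2 * sigma * sigma)))
    total = sum(new_w)
    new_w = [w / total for w in new_w]
    # Importance resampling: draw particles in proportion to weight.
    resampled = random.choices(particles, weights=new_w, k=len(particles))
    return resampled, [1.0 / len(particles)] * len(particles)
```

The robot's estimated position is then a weighted mean over the surviving particles, or over the top-ranked subset, as in the experiments below.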
The best variations of Texture MCL improved accuracy by 19.1% and 25.1% when all particles and the top fifty particles, respectively, were used to calculate the robot's estimated position. All implementations achieved comparable speeds when run in real time on board a robot.
|
5 |
Mapeamento de ambientes externos utilizando robôs móveis / Outdoor mapping using mobile robots
Hata, Alberto Yukinobu 24 May 2010 (has links)
Autonomous mobile robotics is a relatively recent research area focused on building mechanisms capable of executing tasks without a human controller. In general, mobile robotics deals with three fundamental problems: environment mapping, robot localization, and navigation. Without these elements, a robot could hardly move autonomously from one place to another. One open problem in this area is the operation of mobile robots in outdoor environments such as parks and urban areas, where the scenario is considerably more complex than in indoor environments such as offices and houses. For example, outdoors the sensors are subject to weather conditions (sunlight, rain, and snow), and the navigation algorithms must handle a much larger number of obstacles (people, animals, and vegetation).
This dissertation presents the development of a system that classifies the navigability of irregular terrain such as streets and sidewalks. The scenario is mapped with a robotic platform equipped with a laser range finder directed at the ground. Two terrain-mapping algorithms were developed: one for visualizing fine details of the environment, generating a point cloud map, and another for visualizing regions that are suitable or unsuitable for robot traffic, resulting in a navigability map. For the navigability map, supervised machine learning methods classify the terrain as navigable (flat regions), partially navigable (grass, gravel), or non-navigable (obstacles). The methods employed were artificial neural networks and support vector machines. The classification results obtained by both were then compared to determine the most appropriate technique for this task.
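As a rough illustration of the classification stage, here is a nearest-centroid stand-in for the neural-network and SVM classifiers compared in the dissertation. The two features (roughness, height variance) and the centroid values are invented for the example; the actual feature set and trained models are not given in the abstract.

```python
def classify_terrain(roughness, height_var, centroids=None):
    """Assign a terrain patch to the nearest class centroid in a
    2-D feature space. A toy substitute for the trained ANN/SVM
    classifiers; all numbers here are illustrative."""
    if centroids is None:
        centroids = {
            "navigable": (0.02, 0.01),            # flat pavement
            "partially_navigable": (0.15, 0.08),  # grass, gravel
            "non_navigable": (0.60, 0.50),        # obstacles
        }
    sample = (roughness, height_var)
    # Pick the label whose centroid has the smallest squared distance.
    return min(
        centroids,
        key=lambda label: sum(
            (a - b) ** 2 for a, b in zip(sample, centroids[label])
        ),
    )
```

A real pipeline would learn the decision boundaries from labeled laser data rather than hand-pick centroids, which is precisely what the compared ANN and SVM methods do.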
|
6 |
Observatoire de trajectoire de piétons à l'aide d'un réseau de télémètre laser à balayage : application à l'intérieur des bâtiments / Pedestrian path monitoring using a scanning laser rangefinder network : application inside buildings
Adiaviakoye, Ladji 10 September 2015 (has links)
In everyday life we witness surprising choreography in the movements of pedestrian crowds, yet the mechanisms underlying human crowd dynamics remain poorly understood. One way of observing pedestrians is to take measurements under real conditions (e.g., in an airport or a train station); the trajectory taken, the speed, and the acceleration are the basic data for such an analysis. Our work is set in this context, closely combining observation in natural settings with controlled experiments.
We propose a system for tracking multiple pedestrians in a closed environment using a network of scanning laser rangefinders, advancing the state of the art on four fronts. First, we introduce an automatic data fusion method that discriminates static objects (walls, poles, etc.) and also increases the detection rate. Second, we propose a non-parametric detection method based on a model of human gait; the algorithm estimates the position of a pedestrian whether stationary or moving. Finally, our tracking relies on the Rao-Blackwellized Monte Carlo Data Association method, with the particular ability to track a variable number of pedestrians. The algorithm was evaluated quantitatively through social-behavior experiments at different density levels. These experiments took place in a school; nearly 300 pedestrians were tracked, about thirty of them simultaneously.
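The front end of such a pipeline, flagging laser beams that fall short of the static background (walls, poles) and grouping them into pedestrian candidates, might be sketched as follows. The thresholds and names are illustrative; the thesis's actual detector is the more sophisticated gait-model method, followed by Rao-Blackwellized data association for tracking.

```python
def detect_foreground_clusters(ranges, background, threshold=0.3, min_beams=3):
    """Compare a scan against a background scan of the empty scene:
    beams noticeably shorter than the background indicate something in
    front of it. Runs of consecutive hits become candidate detections.
    Threshold values are illustrative placeholders."""
    clusters, current = [], []
    for i, (r, b) in enumerate(zip(ranges, background)):
        if b - r > threshold:      # beam blocked before the background
            current.append(i)
        else:
            if len(current) >= min_beams:
                clusters.append(current)
            current = []
    if len(current) >= min_beams:  # flush a run ending at the last beam
        clusters.append(current)
    return clusters
```

Each cluster of beam indices would then be converted to a position estimate and handed to the tracker.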
|
7 |
Multi-sources fusion based vehicle localization in urban environments under a loosely coupled probabilistic framework / Localisation de véhicules intelligents par fusion de données multi-capteurs en milieu urbain
Wei, Lijun 17 July 2013 (has links)
In dense urban environments (e.g., streets lined with tall buildings), the vehicle localization provided by a Global Positioning System (GPS) receiver may be inaccurate or even unavailable because of signal reflection (multipath) or poor satellite visibility. To improve the accuracy and robustness of assisted navigation systems, and thereby guarantee driving safety and service continuity on the road, this thesis presents a vehicle localization approach that exploits the redundancy and complementarity of multiple sensors. First, GPS localization is complemented by onboard dead-reckoning (DR) sensors (inertial measurement unit, odometer, gyroscope), stereovision-based visual odometry, horizontal laser range finder (LRF) scan alignment, and map matching against a 2D GIS road network, together providing a coarse vehicle pose estimate.
A sensor selection step validates the coherence of the observations from the different sensors; only information from the validated sensors is combined under a loosely coupled probabilistic framework with an information filter. Then, during long GPS outages, the accumulated localization error of the DR-only methods is bounded by adding a GIS building map layer. Two onboard LRFs (one horizontal, one vertical) are mounted on the roof of the vehicle to detect building facades in the urban environment. The detected facades are projected onto the 2D ground plane and associated with the GIS building map layer to correct the vehicle pose error, especially the lateral error. The facade landmarks extracted from the vertical LRF scans are stored in a new GIS map layer. The proposed approach is tested and evaluated on real data sequences. The experimental results show that fusing the stereoscopic system and the LRF with GPS keeps the vehicle localized during short GPS outages and corrects GPS errors such as jumps; the road map yields an approximate vehicle position by projecting the estimate onto the corresponding road segment; and integrating the building information refines the initial pose estimate, particularly when GPS signals are lost for a long time.
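The information-filter fusion at the heart of the loosely coupled framework can be illustrated in one dimension: in information form, contributions from independent validated sensors simply add. This is a minimal sketch of the principle, not the thesis's multi-dimensional filter.

```python
def fuse_information(estimates):
    """Fuse independent 1-D position estimates given as (value, variance)
    pairs. In information form, each sensor contributes information
    1/variance and information-weighted state value/variance; both sums
    accumulate, and the fused estimate is recovered at the end."""
    info = 0.0        # accumulated information (inverse variance)
    info_state = 0.0  # accumulated information-weighted state
    for value, var in estimates:
        info += 1.0 / var
        info_state += value / var
    return info_state / info, 1.0 / info
```

Note that the fused variance is always smaller than any individual sensor's variance, which is why adding a validated sensor never hurts; the sensor selection step exists precisely to keep incoherent observations out of this sum.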
|
8 |
Sensor Fusion with Coordinated Mobile Robots / Sensorfusion med koordinerade mobila robotar
Holmberg, Per January 2003 (has links)
Robust localization is a prerequisite for mobile robot autonomy. In many situations the GPS signal is not available, and an additional localization system is therefore required. A simple approach is localization by dead reckoning with wheel encoders, but it accumulates large estimation errors. With an exteroceptive sensor such as a laser range finder, natural landmarks in the robot's environment can be extracted from raw range data. Landmarks are extracted with the Hough transform and a recursive line-segment algorithm. By applying data association and Kalman filtering along with process models, the landmarks can be combined with wheel encoders to estimate the global position of the robot. If several robots cooperate, better position estimates can be expected, because robots can serve as mobile landmarks and one robot can supervise the movement of another. The centralized Kalman filter presented in this master's thesis systematically treats robots and extracted landmarks so that the benefits of multiple robots are exploited. Experiments in different indoor environments with two robots show that long distances can be traveled while the positional uncertainty is kept low. The reduction in positional uncertainty gained from cooperating robots is also demonstrated in an experiment.
In addition to the localization algorithms, a typical autonomous robot task, change detection, is solved. The change detection method, which requires robust localization, is intended for surveillance. The implemented algorithm accounts for measurement and positional uncertainty when determining whether something in the environment has changed. Consecutive true changes, as well as sporadic false changes, are detected in an illustrative experiment.
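The Hough-transform landmark extraction mentioned above can be sketched as a voting procedure over line parameters (theta, rho): each range point votes for every line that could pass through it, and the dominant bin is taken as a wall candidate. The resolutions and the function name are illustrative choices, not the thesis's settings.

```python
import math

def hough_peak(points, n_theta=180, rho_res=0.25, rho_max=10.0):
    """Vote each 2-D point into a discretized (theta, rho) accumulator
    using the normal line form rho = x*cos(theta) + y*sin(theta), and
    return the parameters of the most-voted line. Bin resolutions are
    illustrative placeholders."""
    n_rho = int(2 * rho_max / rho_res)
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int((rho + rho_max) / rho_res)
            if 0 <= r < n_rho:
                acc[(t, r)] = acc.get((t, r), 0) + 1
    (t_best, r_best), _ = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * t_best / n_theta, r_best * rho_res - rho_max
```

In a full pipeline the recursive line-segment algorithm would then split the points supporting this line into actual wall segments, which become the landmarks fed to the Kalman filter.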
|
10 |
Design and Test of Algorithms for the Evaluation of Modern Sensors in Close-Range Photogrammetry / Entwicklung und Test von Algorithmen für die 3D-Auswertung von Daten moderner Sensorsysteme in der Nahbereichsphotogrammetrie
Scheibe, Karsten 01 December 2006 (has links)
No description available.
|