141

Automatic geo-referencing by integrating camera vision and inertial measurements

Randeniya, Duminda I. B 01 June 2007 (has links)
An alternative sensor system to the inertial measurement unit (IMU) is essential for intelligent land navigation systems when the vehicle travels in a GPS-deprived environment. The sensor system used to update the IMU for a reliable navigation solution has to be a passive one that does not depend on any outside signal. This dissertation presents the results of an effort in which position and orientation data from vision and inertial sensors are integrated. Information from a sequence of images captured by a monocular camera attached to a survey vehicle, at a maximum frequency of 3 frames per second, was used to correct the inertial system installed in the same vehicle for its inherent error accumulation. Specifically, the rotations and translations estimated from point correspondences tracked through a sequence of images were used in the integration. However, two types of tasks need to be performed for such an effort. The first is calibration: estimating the intrinsic properties of the vision sensors (cameras), such as the focal length and lens distortion parameters, and determining the transformation between the camera and the inertial systems. Calibrating a two-sensor system under indoor conditions does not provide an appropriate and practical transformation for use in outdoor maneuvers, due to invariable differences between outdoor and indoor conditions. Moreover, the use of custom calibration objects under outdoor operational conditions is not feasible, because the larger field of view requires relatively large calibration objects. Hence calibration becomes one of the critical issues, particularly if the integrated system is used in Intelligent Transportation Systems applications; in order to successfully estimate the rotations and translations from the vision system, the calibration has to be performed prior to the integration process. The second task is the effective fusion of the inertial and vision sensor systems.
The automated algorithm that identifies point correspondences in images enables its use in real-time autonomous driving maneuvers. In order to verify the accuracy of the established correspondences, independent constraints such as epipolar lines and correspondence flow directions were used. A pre-filter was also utilized to smooth out the noise associated with the vision sensor (camera) measurements. A novel approach was used to obtain the geodetic coordinates, i.e. latitude, longitude and altitude, from the normalized translations determined from the vision sensor. Finally, the position estimates based on the vision sensor were integrated with those of the inertial system in a decentralized format using a Kalman filter. The vision/inertial integrated position estimates were successfully compared with those from 1) the inertial/GPS system output and 2) an actual survey performed on the same roadway. This comparison demonstrates that vision can in fact be used successfully to supplement the inertial measurements during potential GPS outages. The derived intrinsic properties and the transformation between the individual sensors were also verified during two separate test runs on an actual roadway section.
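The decentralized fusion described above can be sketched in miniature: two independent position estimates (one from the drifting INS, one from the vision pipeline) are combined by inverse-covariance weighting, the core operation a decentralized Kalman filter performs per measurement. This is an illustrative example with hypothetical numbers, not the dissertation's implementation.

```python
import numpy as np

def fuse_estimates(x_ins, P_ins, x_vis, P_vis):
    """Fuse two independent position estimates (e.g. INS and vision)
    by inverse-covariance weighting."""
    # Information (inverse-covariance) form: certainties add.
    I_ins = np.linalg.inv(P_ins)
    I_vis = np.linalg.inv(P_vis)
    P_fused = np.linalg.inv(I_ins + I_vis)
    x_fused = P_fused @ (I_ins @ x_ins + I_vis @ x_vis)
    return x_fused, P_fused

# Hypothetical 2D position estimates (metres) with covariances.
x_ins = np.array([10.0, 5.0]); P_ins = np.diag([4.0, 4.0])   # drifting INS
x_vis = np.array([10.6, 4.8]); P_vis = np.diag([1.0, 1.0])   # vision fix
x, P = fuse_estimates(x_ins, P_ins, x_vis, P_vis)
# The fused estimate lies between the two, weighted toward the
# lower-variance vision fix, and its covariance shrinks below both inputs.
```

The same weighting falls out of a Kalman update in which one estimate plays the role of prior and the other of measurement, which is why a decentralized architecture can fuse locally filtered outputs rather than raw sensor data.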
142

Estimation of Local Map from Radar Data / Skattning av lokal karta från radardata

Moritz, Malte, Pettersson, Anton January 2014 (has links)
Autonomous features in vehicles are already a big part of the automotive field, and many companies are now looking for ways to make vehicles fully autonomous. Autonomous vehicles need information about the surrounding environment. This information is extracted from exteroceptive sensors, and today vehicles often use laser scanners for this purpose. Laser scanners are, however, expensive and fragile; it is therefore interesting to investigate whether cheaper radar sensors could be used. One big challenge for autonomous vehicles is to use the exteroceptive sensors to extract a position of the vehicle and at the same time obtain a map of the environment. The area of Simultaneous Localization and Mapping (SLAM) is well explored for laser scanners but much less so for radar. This thesis investigates whether it is possible to use radar sensors on a truck to create a map of the area where the truck drives. The truck was equipped with ego-motion sensors and radars, and the data from them was fused to obtain a position of the truck and a map of the surrounding environment, i.e. a SLAM algorithm was implemented. The map is represented by an Occupancy Grid Map (OGM), which should contain only static objects. The OGM is updated probabilistically using a binary Bayes filter. To localize the truck with the help of motion sensors, an Extended Kalman Filter (EKF) is used together with the map and a scan-matching method. All these methods are put together to create a SLAM algorithm. A range-rate filter is used to filter out noise and non-static measurements from the radar. The results of this thesis show that it is possible to use radar sensors to create a map of a truck's surroundings. The quality of the map is considered good, and details such as the space between parked trucks, signs and light posts can be distinguished.
It has also been shown that methods with low performance on their own can, together with other methods, work very well in the SLAM algorithm. Overall the SLAM algorithm works well, but positioning problems may occur when driving in unexplored areas with few objects. A real-time system has also been implemented, and the map can be viewed while the truck is manoeuvred.
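The binary Bayes update of an Occupancy Grid Map mentioned above is usually done in log-odds form, where independent pieces of evidence simply add. The sketch below illustrates the idea; the grid size and the inverse sensor model value of 0.7 are hypothetical, not the thesis's tuned parameters.

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

class OccupancyGridMap:
    """Minimal occupancy grid updated with a binary Bayes filter
    in log-odds form, mapping only static objects."""
    def __init__(self, width, height, p_prior=0.5):
        self.l_prior = logit(p_prior)
        self.grid = [[self.l_prior] * width for _ in range(height)]

    def update(self, row, col, p_meas):
        # Binary Bayes filter: log-odds of independent evidence add,
        # after subtracting the prior so it is not double-counted.
        self.grid[row][col] += logit(p_meas) - self.l_prior

    def probability(self, row, col):
        l = self.grid[row][col]
        return 1.0 - 1.0 / (1.0 + math.exp(l))

ogm = OccupancyGridMap(10, 10)
for _ in range(3):                  # three radar hits on the same cell
    ogm.update(2, 3, p_meas=0.7)
# Repeated hits drive the occupancy probability well above the 0.5 prior,
# while unobserved cells stay at the prior.
```

The log-odds form keeps each update to a single addition, which is what makes probabilistic mapping cheap enough to run per radar detection in real time.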
143

Improved detection and tracking of objects in surveillance video

Denman, Simon Paul January 2009 (has links)
Surveillance networks are typically monitored by a few people, viewing several monitors displaying the camera feeds. It is then very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data, to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as a means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies. To recognise an event, people and objects must be tracked. Tracking also enhances the performance of tasks such as crowd analysis or human identification. Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or due to the detection routines being unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, and make it unnecessary for the task of detection to be carried out separately except for an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point each frame. Such systems however do not easily allow for multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects. This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter.
A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO - Evaluation for video understanding) database, and significant improvement in detection and tracking performance is demonstrated when compared to a baseline system. A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and maintenance of identity is handled by the underlying tracking system, and the tracking system is able to benefit from the improved performance in uncertain conditions arising from occlusion and noise provided by a particle filter. The system is evaluated using the ETISEO database. The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. It is shown that a middle fusion scheme yields the best results and demonstrates a significant improvement in performance when compared to a system using either mode individually. Findings from the thesis contribute to improving the performance of semi-automated video processing and therefore improve security in areas under surveillance.
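A Condensation-style particle filter of the kind described above repeats a predict-weight-resample cycle each frame. The following is a generic one-dimensional sketch, not the Scalable Condensation Filter itself; the Gaussian likelihood, the motion noise, and all numbers are assumptions for illustration.

```python
import math
import random

def particle_filter_step(particles, weights, observation, likelihood, motion):
    """One predict-weight-resample cycle of a particle (Condensation) filter.
    `likelihood(p, z)` scores how well particle p explains observation z;
    `motion(p)` propagates a particle with process noise."""
    # Predict: diffuse particles through the motion model.
    particles = [motion(p) for p in particles]
    # Weight: evaluate the observation likelihood per particle.
    weights = [w * likelihood(p, observation)
               for p, w in zip(particles, weights)]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw particles proportional to their weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)

# Toy example: track a stationary object at position 5.0 on a line.
random.seed(0)
particles = [random.uniform(0, 10) for _ in range(500)]
weights = [1.0 / 500] * 500
for _ in range(3):
    particles, weights = particle_filter_step(
        particles, weights, observation=5.0,
        likelihood=lambda p, z: math.exp(-(p - z) ** 2),
        motion=lambda p: p + random.gauss(0, 0.1))
estimate = sum(particles) / len(particles)   # concentrates near 5.0
```

The extracted image feature only enters through the likelihood function, which is why such a filter keeps working when detection is unreliable but also why it does not, by itself, maintain object identity across multiple targets.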
144

Markerless augmented reality on ubiquitous mobile devices with integrated sensors

Van Wyk, Carel 03 1900 (has links)
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2011. / ENGLISH ABSTRACT: The computational power of mobile smart-phone devices is ever increasing, and high-end phones become more popular amongst consumers every day. The technical specifications of a high-end smart-phone today rival those of a home computer system of only a few years ago. Powerful processors, combined with cameras and ease of development, encourage an increasing number of Augmented Reality (AR) researchers to adopt mobile smart-phones as an AR platform. Implementation of marker-based Augmented Reality systems on mobile phones is mostly a solved problem. Markerless systems still offer challenges due to increased processing requirements. Some researchers adopt purely computer-vision-based markerless tracking methods to estimate camera pose on mobile devices. In this thesis we propose the use of a hybrid system that employs both computer vision and the integrated sensors present in most new smartphones to facilitate pose estimation. We estimate three of the six degrees of freedom of pose using integrated sensors and estimate the remaining three using feature tracking. A proof-of-concept hybrid system is implemented as part of this thesis.
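The split of the six degrees of freedom described above can be sketched as assembling a pose matrix from two sources: a rotation from the phone's integrated sensors and a translation from feature tracking. The Z-Y-X Euler convention and the function name below are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

def pose_from_hybrid(roll, pitch, yaw, t):
    """Assemble a 4x4 camera pose from three rotational DoF (e.g. from
    accelerometer/magnetometer) and three translational DoF (e.g. from
    feature tracking). Angles in radians, translation as a 3-vector."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx           # Z-Y-X Euler convention (an assumption)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# A quarter-turn about the vertical axis plus a small translation.
T = pose_from_hybrid(0.0, 0.0, np.pi / 2, np.array([1.0, 2.0, 0.5]))
```

The attraction of the hybrid split is that the vision pipeline then only has to solve a three-dimensional estimation problem per frame, which matters on a phone-class processor.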
145

Tracking of Ground Vehicles : Evaluation of Tracking Performance Using Different Sensors and Filtering Techniques

Homelius, Marcus January 2018 (has links)
It is crucial to find a good balance between positioning accuracy and cost when developing navigation systems for ground vehicles. In open sky or even in a semi-urban environment, a single global navigation satellite system (GNSS) constellation performs sufficiently well. However, the positioning accuracy decreases drastically in urban environments. Because of the limited tracking performance of standalone GNSS, particularly in cities, many solutions are moving toward integrated systems that combine complementary sensors. In this master thesis, the improvement in tracking performance of a low-cost ground vehicle navigation system is evaluated when complementary sensors are added and different filtering techniques are used. The thesis explains how a GNSS-aided inertial navigation system (INS) is used to track ground vehicles; this has proven to be a very effective way of tracking a vehicle through GNSS outages. Measurements from an accelerometer and a gyroscope are used as inputs to inertial navigation equations. GNSS measurements are then used to correct the tracking solution and to estimate the biases in the inertial sensors. When velocity constraints on the vehicle's motion in the y- and z-axes are included, the GNSS-aided INS has shown very good performance, even during long GNSS outages. Two versions of the Rauch-Tung-Striebel (RTS) smoother and a particle filter (PF) version of the GNSS-aided INS have also been implemented and evaluated. The PF has proven computationally demanding in comparison with the other approaches, and a real-time implementation on the considered embedded system is not feasible. The RTS smoother gives a smoother trajectory, but a lot of extra information needs to be stored and the position accuracy is not significantly improved. Moreover, map matching has been combined with GNSS measurements and estimates from the GNSS-aided INS.
The Viterbi algorithm is used to output the road-segment identification numbers of the most likely path, and the estimates are then matched to the closest positions on these roads. A suggested solution for reliable, high-accuracy tracking in all environments is to run the GNSS-aided INS in real time in the vehicle and simultaneously send the horizontal position coordinates to a back office where map information is kept and map matching is performed.
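The Viterbi step above can be sketched as a standard dynamic program over road segments: each time step scores how well each segment explains the position estimate, a transition model penalises implausible segment changes, and backtracking recovers the most likely segment sequence. The toy log-likelihood numbers below are assumptions for illustration.

```python
def viterbi(obs_loglik, trans_loglik, n_segments):
    """Most likely sequence of road-segment IDs given per-step observation
    log-likelihoods obs_loglik[t][s] and a segment-to-segment transition
    model trans_loglik[prev][next]."""
    score = [obs_loglik[0][s] for s in range(n_segments)]
    back = []
    for t in range(1, len(obs_loglik)):
        new_score, ptr = [], []
        for s in range(n_segments):
            # Best predecessor segment for segment s at time t.
            best_prev = max(range(n_segments),
                            key=lambda p: score[p] + trans_loglik[p][s])
            new_score.append(score[best_prev] + trans_loglik[best_prev][s]
                             + obs_loglik[t][s])
            ptr.append(best_prev)
        score = new_score
        back.append(ptr)
    # Backtrack from the best final segment.
    path = [max(range(n_segments), key=lambda s: score[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Two segments, three position fixes: the vehicle starts near segment 0
# and then the fixes favour segment 1; staying on a segment is cheap.
obs = [[-0.1, -2.0], [-2.0, -0.1], [-2.0, -0.1]]
trans = [[-0.1, -1.0], [-1.0, -0.1]]
path = viterbi(obs, trans, n_segments=2)   # -> [0, 1, 1]
```

Snapping each estimate to the nearest point on its decoded segment then yields the map-matched trajectory.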
146

Tecnologia assistiva para detecção de quedas : desenvolvimento de sensor vestível integrado ao sistema de casa inteligente

Torres, Guilherme Gerzson January 2018 (has links)
The use of assistive technologies to provide quality of life for the elderly is increasing. One line of research in this area is the use of devices for fall detection, an increasingly frequent problem due to many factors, including greater longevity and more elders living alone, among others. This work presents the development of a wearable device, a sensor node for ultra-low-power wireless sensor networks. It also describes the expansion of a KNX system into which the device is integrated. The device is able to detect falls, which can aid the monitoring of elderly people and improve their safety. The monitoring is done through a 3-axis accelerometer and gyroscope attached to the user's chest; falls are detected by a threshold algorithm based on the fusion of the sensor data. The wearable sensor uses EnOcean technology, which provides a wireless connection to a smart-home automation system compliant with the KNX standard, through the Home Assistant platform. Alarm telegrams are automatically sent when a fall is detected, triggering an actuator that is part of the KNX system. In addition to validating EnOcean technology for use in wearable devices, the developed prototype did not indicate any false positives in tests performed with two users of different body characteristics, in which each of eight types of movement (four fall movements and four non-fall movements) was reproduced 100 times. The tests revealed sensitivity and specificity of up to 96% and 100%, respectively.
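A threshold algorithm over fused accelerometer and gyroscope data, as described above, can be sketched as follows. The threshold values, window handling, and function name are illustrative assumptions, not the tuned parameters of this work.

```python
import math

def detect_fall(accel_samples, gyro_samples,
                accel_thresh=2.5, gyro_thresh=240.0):
    """Threshold-based fall detector over a window of chest-worn
    3-axis accelerometer (in g) and gyroscope (deg/s) samples.
    A fall shows up as a high acceleration-magnitude peak combined
    with a large angular rate (the trunk rotating toward the ground)."""
    a_peak = max(math.sqrt(ax**2 + ay**2 + az**2)
                 for ax, ay, az in accel_samples)
    w_peak = max(math.sqrt(wx**2 + wy**2 + wz**2)
                 for wx, wy, wz in gyro_samples)
    return a_peak > accel_thresh and w_peak > gyro_thresh

# Walking: moderate acceleration, little rotation -> no alarm.
walking = detect_fall([(0.1, 0.2, 1.1)], [(20.0, 10.0, 5.0)])
# Forward fall: impact spike plus fast trunk rotation -> alarm.
falling = detect_fall([(0.3, 2.8, 1.5)], [(260.0, 40.0, 15.0)])
```

Requiring both conditions at once is what suppresses false positives from everyday movements such as sitting down quickly, which may exceed one threshold but rarely both.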
147

Détection de personnes pour des systèmes de videosurveillance multi-caméra intelligents / People detection methods for intelligent multi-Camera surveillance systems

Mehmood, Muhammad Owais 28 September 2015 (has links)
People detection is a well-studied open challenge in the field of computer vision, with applications such as visual surveillance systems. Monocular detectors are simpler to deploy but have limited ability to handle occlusion, clutter, scale and density, as well as scenes with a large depth of field, which leads to large variability in the size of people. The ubiquitous presence of cameras and computational resources fuels the development of multi-camera detection systems. In this thesis, we study multi-camera people detection; specifically, the use of multi-view probabilistic occupancy maps built by fusing the different views through the known geometry (calibration) of the camera system. Detection on such maps, however, produces false detections ("ghosts") caused by the different projections, and we study this phenomenon and its pruning. We propose two novel techniques to improve multi-view detection, based on (a) deconvolution with a spatially varying kernel, and (b) occupancy shape modelling with hypothesis validation. Both approaches deliberately avoid temporal information, which can be reintroduced later by tracking algorithms, and perform multi-view reasoning in the occupancy maps to recover accurate positions of people in challenging conditions such as occlusion, clutter, high person density, lighting changes, and strong variations in the colour responses of the cameras. We show improvements in people detection across three challenging public datasets for visual surveillance, including comparison with state-of-the-art techniques, and demonstrate the application of this work in exigent transportation scenarios: people detection for surveillance at a train station and at an airport.
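A probabilistic occupancy map of the kind described above fuses per-camera foreground evidence on a common ground plane. The sketch below combines per-view foreground probabilities by multiplying likelihood ratios under a conditional-independence assumption; this is a common simplification for illustration, not this thesis's exact formulation, and all numbers are hypothetical.

```python
import numpy as np

def fuse_occupancy(views, prior=0.01):
    """Fuse per-camera foreground evidence into a ground-plane occupancy
    map. Each view is an (H, W) array of P(foreground | camera) for every
    ground cell, obtained by projecting that cell into the camera image.
    Views are assumed conditionally independent given occupancy."""
    odds = prior / (1.0 - prior)
    for v in views:
        v = np.clip(v, 1e-6, 1 - 1e-6)
        odds = odds * (v / (1.0 - v))   # multiply per-view likelihood ratios
    return odds / (1.0 + odds)

# Two hypothetical camera views over a 1x3 strip of ground cells:
view_a = np.array([[0.9, 0.5, 0.1]])
view_b = np.array([[0.8, 0.5, 0.1]])
occ = fuse_occupancy([view_a, view_b])
# Only the cell seen as foreground by both cameras gains clear occupancy;
# cells each camera is unsure about stay near the prior.
```

Ghosts arise exactly because this fusion cannot distinguish a true person from the intersection of two foreground rays belonging to different people, which is what the pruning techniques target.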
148

Human Inspired Control System for an Unmanned Ground Vehicle

January 2015 (has links)
abstract: In this research work, a novel control system strategy for the robust control of an unmanned ground vehicle is proposed. This strategy is motivated by efforts to mitigate the problem in scenarios where the human operator is unable to properly communicate with the vehicle. The novel control system strategy consists of three major components: I) two independent intelligent controllers, II) an intelligent navigation system, and III) an intelligent controller tuning unit. The inner workings of the first two components are based on Brain Emotional Learning (BEL), a mathematical model of the amygdala-orbitofrontal system, a region of the mammalian brain known to be responsible for emotional learning. Simulation results demonstrated the implementation of the BEL model to be very robust, efficient, and adaptable to dynamical changes in its application as a controller and as a sensor fusion filter for an unmanned ground vehicle. These results were obtained with significantly less computational cost than traditional methods for control and sensor fusion. For the intelligent controller tuning unit, the implementation of a human emotion recognition system was investigated and utilized for the classification of driving behavior. Experiments showed that the affective states of the driver are accurately captured; however, the driver's affective state is not a good indicator of driving behavior. As a result, an alternative method for classifying driving behavior from the driver's brain activity was explored. This method proved successful at classifying the driver's behavior, obtaining results comparable to the common approach based on vehicle parameters. This alternative approach has the advantage of classifying driving behavior directly from the driver, which is of particular use in the UGV domain because the operator's information is readily available.
The classified driving mode was used to tune the controllers' performance to a desired mode of operation. Such qualities are required for a contingency control system that would allow the vehicle to operate with no operator inputs. / Dissertation/Thesis / Doctoral Dissertation Engineering 2015
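The BEL model underlying the first two components is commonly formulated with excitatory "amygdala" weights that learn monotonically toward a reward signal and inhibitory "orbitofrontal" weights that suppress over-responding. The sketch below follows that common formulation; the learning rates, the reward shaping, and the omission of the thalamic shortcut input are simplifications and assumptions, not this dissertation's implementation.

```python
class BELController:
    """Sketch of the Brain Emotional Learning (BEL) computational model:
    an amygdala/orbitofrontal weight pair per sensory input."""
    def __init__(self, n_inputs, alpha=0.1, beta=0.05):
        self.v = [0.0] * n_inputs   # amygdala (excitatory) weights
        self.w = [0.0] * n_inputs   # orbitofrontal (inhibitory) weights
        self.alpha, self.beta = alpha, beta

    def step(self, s, reward):
        a = sum(vi * si for vi, si in zip(self.v, s))   # amygdala output
        o = sum(wi * si for wi, si in zip(self.w, s))   # orbitofrontal output
        e = a - o                                       # model (control) output
        # The amygdala only learns upward, toward the reward ...
        for i, si in enumerate(s):
            self.v[i] += self.alpha * si * max(0.0, reward - a)
        # ... while the orbitofrontal part corrects over-responding.
        for i, si in enumerate(s):
            self.w[i] += self.beta * si * (e - reward)
        return e

# With a constant stimulus and reward, the output converges to the reward.
bel = BELController(n_inputs=1)
output = 0.0
for _ in range(200):
    output = bel.step([1.0], reward=1.0)
```

The appeal for control and sensor fusion is visible in the arithmetic: each step is a handful of multiply-adds per input, far cheaper than a covariance update.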
149

Localização e planejamento de caminhos para um robô humanóide e um robô escravo com rodas / Localization and path planning for a humanoid robot and a slave wheeled robot

Santana, André Macêdo 10 July 2007 (has links)
This work presents the localization and path planning systems for two robots: a non-instrumented humanoid and a slave wheeled robot. The humanoid has no sensors of its own but can be remotely controlled via infrared; the wheeled robot positions itself behind the humanoid and uses its camera to establish the humanoid's position relative to it. The localization of the wheeled robot is obtained by fusing odometry information and landmark detections using an Extended Kalman Filter. The relative position of the humanoid is acquired by fusing (with another Kalman filter) the wheeled robot's pose with the characteristics of a landmark placed on the humanoid's back. Knowing the wheeled robot's position and the humanoid's position relative to it, the absolute position of the humanoid is obtained.
The path planning system was developed to provide cooperative movement of the two robots, incorporating the visibility restrictions of the robotic system.
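The odometry-plus-landmark fusion described above can be sketched as one EKF cycle: dead-reckon the odometry, then correct with a measured range to a known landmark, linearizing the range function about the prediction. The motion model, noise values, and landmark position below are illustrative assumptions, not this work's parameters.

```python
import numpy as np

def ekf_localize(x, P, u, Q, z, landmark, R):
    """One EKF cycle for planar localization: predict with an odometry
    displacement u, then correct with a measured range z to a known
    landmark. State x = (x, y); P, Q, R are covariances."""
    # Predict: dead-reckon the odometry, inflating uncertainty.
    x = x + u
    P = P + Q
    # Update: linearize the range measurement about the prediction.
    dx, dy = x[0] - landmark[0], x[1] - landmark[1]
    r = np.hypot(dx, dy)
    H = np.array([[dx / r, dy / r]])       # Jacobian of the range
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + (K @ np.array([z - r])).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x = np.array([0.0, 0.0]); P = np.eye(2)
x, P = ekf_localize(x, P, u=np.array([1.0, 0.0]), Q=0.1 * np.eye(2),
                    z=4.2, landmark=(6.0, 0.0), R=np.array([[0.05]]))
# The shorter-than-predicted range pulls the estimate toward the landmark
# and shrinks the uncertainty along the landmark direction.
```

The second Kalman filter in the system plays the same role with a different measurement model: the camera's view of the landmark on the humanoid's back takes the place of the range sensor.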
