11

Socially aware robot navigation

Antonucci, Alessandro 03 November 2022 (has links)
A growing number of applications involving autonomous mobile robots will require them to navigate environments shared with humans. In those situations, the robot's actions are socially acceptable if they reflect the behaviours that humans would produce in similar conditions. The robot must therefore perceive the people in the environment and react correctly based on their actions and their relevance to its mission. To advance human-robot interaction, the proposed research focuses on efficient robot motion algorithms, covering all the tasks needed in the whole process, such as obstacle detection, human motion tracking and prediction, and socially aware navigation. The final framework presented in this thesis is a robust and efficient solution enabling the robot to correctly understand human intentions and consequently perform safe, legible, and socially compliant actions. The structure of the thesis retraces the different steps of the framework through the presentation of the algorithms and models developed, and the experimental evaluations carried out both in simulation and on real robotic platforms, showing the performance obtained in real time in complex scenarios where humans are present and play a prominent role in the robot's decisions. The proposed implementations are all based on insightful combinations of traditional model-based techniques and machine learning algorithms, fused to effectively solve the human-aware navigation problem. This synergy of the two methodologies gives greater flexibility and generalization than the navigation approaches proposed so far, while maintaining an accuracy and reliability not always displayed by learning methods.
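The model-based half of a framework like this typically starts from a short-horizon prediction of each person's motion, which the planner then checks its own candidate paths against. A minimal sketch (the constant-velocity model, the 0.1 s step, the function names, and the comfort-radius idea are illustrative assumptions, not details from the thesis):

```python
import numpy as np

def predict_trajectory(position, velocity, horizon, dt=0.1):
    """Propagate a detected pedestrian forward under a constant-velocity
    model, yielding the short-term forecast a planner can reason about."""
    steps = int(horizon / dt)
    times = dt * np.arange(1, steps + 1)
    return position + np.outer(times, velocity)

def min_clearance(robot_path, human_path):
    """Closest approach between two time-synchronized trajectories; a
    socially aware planner would reject candidate plans whose clearance
    falls below a personal-space comfort radius."""
    return float(np.min(np.linalg.norm(robot_path - human_path, axis=1)))
```

In a full system the constant-velocity predictor would be only the fallback; the thesis combines such model-based components with learned ones.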
12

Low-Observable Object Detection and Tracking Using Advanced Image Processing Techniques

Li, Meng 21 August 2014 (has links)
No description available.
13

A Hybrid Tracking Approach for Autonomous Docking in Self-Reconfigurable Robotic Modules

Sohal, Shubhdildeep Singh 02 July 2019 (has links)
Active docking in modular robotic systems has received a lot of interest recently as it allows small versatile robotic systems to coalesce and achieve the structural benefits of larger robotic systems. This feature enables reconfigurable modular robotic systems to bridge the gap between small agile systems and larger robotic systems. The proposed self-reconfigurable mobile robot design exhibits dual mobility using a tracked drive for longitudinal locomotion and wheeled drive for lateral locomotion. The two degrees of freedom (DOF) docking interface referred to as GHEFT (Genderless, High strength, Efficient, Fail-Safe, high misalignment Tolerant) allows for an efficient docking while tolerating misalignments in 6-DOF. In addition, motion along the vertical axis is also achieved via an additional translational DOF, allowing for toggling between tracked and wheeled locomotion modes by lowering and raising the wheeled assembly. This thesis also presents a visual-based onboard Hybrid Target Tracking algorithm to detect and follow a target robot leading to autonomous docking between the modules. As a result of this proposed approach, the tracked features are then used to bring the robots in sufficient proximity for the docking procedure using Image Based Visual Servoing (IBVS) control. Experimental results to validate the robustness of the proposed tracking method, as well as the reliability of the autonomous docking procedure, are also presented in this thesis. / Master of Science / Active docking in modular robotic systems has received a lot of interest recently as it allows small versatile robotic systems to coalesce and achieve the structural benefits of larger robotic systems. This feature enables reconfigurable modular robotic systems to bridge the gap between small agile systems and larger robotic systems. Such robots can prove useful in environments that are either too dangerous or inaccessible to humans. 
Therefore, in this research, several specific hardware and software development aspects related to self-reconfigurable mobile robots are proposed. In terms of hardware development, a robotic module was designed that is symmetrically invertible and exhibits dual mobility using a tracked drive for longitudinal locomotion and wheeled drive for lateral locomotion. Such interchangeable mobility is important when the robot operates in a constrained workspace. The mobile robot also has an integrated two degrees of freedom (DOF) docking mechanism referred to as GHEFT (Genderless, High strength, Efficient, Fail-Safe, high misalignment Tolerant). The docking interface allows for efficient docking while tolerating misalignments in 6-DOF. In addition, motion along the vertical axis is performed via an additional translational DOF, allowing for lowering and raising the wheeled assembly. The robot is equipped with sensors to provide positional feedback of the joints relative to the target robot. In terms of software development, a visual-based onboard Hybrid Target Tracking algorithm for high-speed, consistent tracking of colored targets is also presented in this work. The proposed technique is used to detect and follow a colored target attached to the target robot, leading to autonomous docking between the modules using Image Based Visual Servoing (IBVS). Experimental results to validate the robustness of the proposed tracking approach, as well as the reliability of the autonomous docking procedure, are also presented in the thesis. The thesis is concluded with discussions about future research in both structured and unstructured terrains.
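The IBVS step mentioned above has a classic textbook form: stack the interaction (image Jacobian) matrix of each tracked image point and command the camera twist v = -λ L⁺ (s − s*). A minimal numpy sketch of that law (feature coordinates, depths, and the gain value are illustrative assumptions, not values from the thesis):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point at depth Z,
    relating the 6-DOF camera twist to the image-point velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classic IBVS law v = -gain * L^+ (s - s*): stack one interaction
    matrix per tracked feature and return a 6-DOF camera twist that
    drives the image error to zero."""
    error = (features - desired).ravel()
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -gain * np.linalg.pinv(L) @ error
```

With at least three non-collinear points the stacked matrix constrains all six twist components; at the desired feature configuration the commanded velocity vanishes.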
14

Representation and Interpretation of Manual and Non-Manual Information for Automated American Sign Language Recognition

Parashar, Ayush S 09 July 2003 (has links)
Continuous recognition of sign language has many practical applications and can help to improve the quality of life of deaf persons by facilitating their interaction with the hearing populace in public situations. This has led to some research in automated continuous American Sign Language (ASL) recognition. But most work in continuous ASL recognition has only used top-down Hidden Markov Model (HMM) based approaches, and there is no work using facial information, which is considered to be fairly important. In this thesis, we explore a bottom-up approach based on the use of Relational Distributions and the Space of Probability Functions (SoPF) for intermediate-level ASL recognition. We also use non-manual information, firstly to decrease the number of deletion and insertion errors, and secondly to find whether the ASL sentence contains 'Negation', for which we use motion trajectories of the face. The experimental results show: The SoPF representation works well for ASL recognition. The accuracy based on the number of deletion errors is 95% when considering the 8 most probable signs in the sentence, and 88% when considering the 6 most probable signs. Using facial or non-manual information increases the top-6 accuracy from 88% to 92%; thus the face does carry information. It is difficult to directly combine the manual information (from hand motion) with the non-manual (facial) information to improve the accuracy, for the following two reasons: first, manual images are not synchronized with the non-manual images; for example, the same facial expression is not present at the same manual position in two instances of the same sentence. Second, finding the facial expression related to a sign is hard when a strong non-manual indicating 'Assertion' or 'Negation' is present in the sentence,
because in such cases the facial expressions are totally dominated by the face movements, indicated by 'head shakes' or 'head nods'. Of the 30 sentences containing 'Negation', 27 are correctly recognized with the help of the motion trajectories of the face.
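Relational distributions of the kind named above can be sketched as histograms of pairwise point offsets, compared as nearby points in the Space of Probability Functions. A hedged numpy illustration (the bin count, offset span, and L1 comparison are assumptions for illustration, not the thesis's parameters):

```python
import numpy as np

def relational_distribution(points, bins=8, span=1.0):
    """Histogram of pairwise point-to-point offsets: a relational
    distribution capturing configuration irrespective of absolute
    image position (it is translation invariant by construction)."""
    diffs = points[:, None, :] - points[None, :, :]
    diffs = diffs[~np.eye(len(points), dtype=bool)]  # drop self-pairs
    hist, _, _ = np.histogram2d(diffs[:, 0], diffs[:, 1], bins=bins,
                                range=[[-span, span], [-span, span]])
    return hist / hist.sum()

def sopf_distance(p, q):
    """L1 distance between two relational distributions; small distances
    in the Space of Probability Functions mean similar configurations."""
    return float(np.abs(p - q).sum())
```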
15

Extraction and retrieval of temporal information

Llidó Escrivá, Dolores Maria 20 September 2002 (has links)
Esta tesis intenta demostrar cómo los sistemas de Recuperación de Información (RI) y los sistemas de Detección de Sucesos (TDT - Topic Detection and Tracking) mejoran si se añade una componente temporal extraída automáticamente del texto, a la cual denominaremos periodo de suceso. Este atributo representa el espacio de tiempo en el que transcurre el suceso principal relatado en cada documento. Con este propósito la tesis ha cubierto los siguientes objetivos: * Definición de un modelo de tiempo para representar y manipular las referencias temporales que aparecen en un texto. * Desarrollo de una aplicación para la extracción de expresiones temporales lingüísticas y el reconocimiento del intervalo absoluto que referencian según el calendario Gregoriano. * Implementación de un sistema para la extracción automática del periodo de suceso. * Modificación de los actuales sistemas de RI, TDT para incluir la información temporal extraída con las herramientas anteriores.
16

An Investigation Of Jamming Techniques Through A Radar Receiver Simulation

Kirkpantur-cadallli, Atiye Asli 01 December 2007 (has links) (PDF)
In this study, various jamming techniques and their effects on detection and tracking performance have been investigated through a radar receiver simulation that models a search radar during target acquisition and a single-target tracking radar during track operation. The radar is modeled as looking at airborne targets, so clutter is not considered. Customized algorithms have been developed for the detection of target azimuth angle, range, and Doppler velocity within the modeled geometry and chosen radar parameters. The effects of varying parameters such as the jamming-to-signal ratio (JSR) and the jamming signal's Doppler shift have been examined in the analysis of jamming effectiveness.
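The role of the JSR can be illustrated with a back-of-the-envelope model that treats noise-like jamming as extra receiver noise (the ~13 dB single-pulse detection threshold is a common rule of thumb, assumed here rather than taken from the study):

```python
import math

def jsr_db(j_power, s_power):
    """Jamming-to-signal ratio in dB at the victim receiver."""
    return 10.0 * math.log10(j_power / s_power)

def detectable(s_power, noise_power, j_power, threshold_db=13.0):
    """Crude detectability check: fold noise-like jamming into the noise
    floor and compare the resulting signal-to-interference-plus-noise
    ratio (SINR) against a single-pulse detection threshold."""
    sinr_db = 10.0 * math.log10(s_power / (noise_power + j_power))
    return sinr_db >= threshold_db
```

Raising the jammer power until the SINR drops below threshold is exactly the effect swept in the thesis's JSR analysis; deception techniques (range/velocity gate pull-off) instead attack the tracking loops and need the full receiver simulation.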
17

Detecting And Tracking Moving Objects With An Active Camera In Real Time

Karakas, Samet 01 September 2011 (has links) (PDF)
Moving object detection techniques can be divided into two categories based on the type of camera: static or active. Methods for static cameras detect moving objects from the changing regions of the video frame, but the same approach is not suitable for active cameras, where moving object detection generally needs more complex algorithms and unique solutions. The aim of this thesis work is real-time detection and tracking of moving objects with an active camera. For this purpose, feature-based algorithms are implemented, owing to their computational efficiency, with SURF (Speeded Up Robust Features) as the main feature extractor. The algorithm is developed in C++, making extensive use of the OpenCV library. It is capable of detecting and tracking moving objects using a PTZ (Pan-Tilt-Zoom) camera at a frame rate of approximately 5 fps and a resolution of 640x480.
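The core of such a feature-based tracker is matching descriptors between consecutive frames; with an active camera, matched background features give the ego-motion and the outliers reveal independently moving objects. In current OpenCV builds SURF lives in the opencv-contrib `xfeatures2d` module, so the matching step is sketched here in plain numpy with Lowe's ratio test (the descriptor values and the 0.7 ratio are illustrative assumptions):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.7):
    """Lowe-style ratio-test matching between two descriptor sets
    (rows are feature vectors, e.g. 64-D SURF descriptors). Keeps only
    pairs whose best match is clearly better than the runner-up."""
    # Pairwise Euclidean distances between every descriptor in a and b.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    order = np.argsort(dists, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(desc_a))
    keep = dists[rows, best] < ratio * dists[rows, second]
    return [(int(i), int(best[i])) for i in np.flatnonzero(keep)]
```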
18

Representation and interpretation of manual and non-manual information for automated American Sign Language recognition [electronic resource] / by Ayush S Parashar.

Parashar, Ayush S. January 2003 (has links)
Title from PDF of title page. / Document formatted into pages; contains 80 pages. / Thesis (M.S.C.S.)--University of South Florida, 2003. / Includes bibliographical references. / Text (Electronic thesis) in PDF format. / ABSTRACT: Continuous recognition of sign language has many practical applications and can help to improve the quality of life of deaf persons by facilitating their interaction with the hearing populace in public situations. This has led to some research in automated continuous American Sign Language recognition. But most work in continuous ASL recognition has only used top-down Hidden Markov Model (HMM) based approaches for recognition. There is no work on using facial information, which is considered to be fairly important. In this thesis, we explore a bottom-up approach based on the use of Relational Distributions and the Space of Probability Functions (SoPF) for intermediate-level ASL recognition. We also use non-manual information, firstly to decrease the number of deletion and insertion errors, and secondly to find whether the ASL sentence contains 'Negation', for which we use motion trajectories of the face. / ABSTRACT: The experimental results show: - The SoPF representation works well for ASL recognition. The accuracy based on the number of deletion errors is 95% when considering the 8 most probable signs in the sentence, and 88% when considering the 6 most probable signs. - Using facial or non-manual information increases the top-6 accuracy from 88% to 92%. Thus the face does carry information. - It is difficult to directly combine the manual information (from hand motion) with the non-manual (facial) information to improve the accuracy, for the following two reasons: 1. Manual images are not synchronized with the non-manual images; for example, the same facial expression is not present at the same manual position in two instances of the same sentence. 2. Finding the facial expression related to a sign is hard when a strong non-manual indicating 'Assertion' or 'Negation' is present in the sentence. / ABSTRACT: In such cases the facial expressions are totally dominated by the face movements, indicated by 'head shakes' or 'head nods'. - Of the 30 sentences containing 'Negation', 27 are correctly recognized with the help of the motion trajectories of the face. / System requirements: World Wide Web browser and PDF reader. / Mode of access: World Wide Web.
19

Moving Object Identification And Event Recognition In Video Surveillance Systems

Orten, Burkay Birant 01 August 2005 (has links) (PDF)
This thesis is devoted to the problems of defining and developing the basic building blocks of an automated surveillance system. As its initial step, a background-modeling algorithm is described for segmenting moving objects from the background; it is capable of adapting to dynamic scene conditions, as well as determining shadows of the moving objects. After obtaining binary silhouettes for targets, object association between consecutive frames is achieved by a hypothesis-based tracking method. Both of these tasks provide basic information for higher-level processing, such as activity analysis and object identification. In order to recognize the nature of an event occurring in a scene, hidden Markov models (HMM) are utilized. To this end, object trajectories obtained through a successful track are written as sequences of flow vectors that capture the details of instantaneous velocity and location information. HMMs are trained with sequences obtained from usual motion patterns, and abnormality is detected by measuring the distance to these models. Finally, MPEG-7 visual descriptors are utilized in a regional manner for object identification. Color structure and homogeneous texture parameters of the independently moving objects are extracted, and classifiers such as the Support Vector Machine (SVM) and the Bayesian plug-in (Mahalanobis distance) are utilized to test the performance of the proposed person identification mechanism. The simulation results for all of the above building blocks are promising, indicating the possibility of constructing a fully automated surveillance system in the future.
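Scoring a flow-vector sequence against a trained HMM, as described above, reduces to the forward algorithm's log-likelihood; abnormality is flagged when this score falls far below the values seen for usual motion patterns. A minimal sketch for a discrete-observation HMM, using the standard scaled forward recursion (the toy parameters in the usage below are illustrative assumptions):

```python
import numpy as np

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM
    (start: initial state probs, trans: state transition matrix,
    emit: per-state emission probs), via the scaled forward algorithm."""
    alpha = start * emit[:, obs[0]]
    scale = alpha.sum()
    loglik = np.log(scale)
    alpha /= scale
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        scale = alpha.sum()
        loglik += np.log(scale)   # accumulate in log space to avoid underflow
        alpha /= scale
    return loglik
```

In the surveillance setting each observation would be a quantized flow vector, one trained model would exist per usual motion pattern, and a trajectory scoring poorly against all of them would be reported as abnormal.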
20

Detecção e rastreamento de íris para implementação de uma interface homem-computador

Fernandes Junior, Valmir 10 August 2010 (has links)
This work presents an iris detection and tracking technique for use in a human-computer interface that allows people with restricted mobility, including those without mobility in the shoulders, to control the mouse pointer by moving only their eyes, without expensive equipment. The only data input is an ordinary webcam, with no optical zoom, special lighting, or restraint of the user's face. The mouse is moved directly: the cursor is positioned at the location estimated by the technique. For the iris detection tests, 60 images were used: 90.83% of the irises were identified correctly, 4.17% were missed, and 5% were false positives (irises estimated in the wrong place). With images taken directly from the webcam, irises were found correctly in 87.5% of cases, missed in 11.11%, and falsely detected in 1.39%; the average time between positioning and a click is about 20 seconds.
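Direct cursor positioning of this kind typically rests on a calibration map from iris coordinates in the camera frame to screen coordinates. A hedged sketch using a least-squares affine fit (the affine model, function names, and calibration points are assumptions for illustration, not necessarily the thesis's method):

```python
import numpy as np

def calibrate(eye_points, screen_points):
    """Fit an affine map from iris centres (camera frame) to screen
    coordinates by least squares, from a short calibration sequence in
    which the user looks at known screen targets."""
    A = np.hstack([eye_points, np.ones((len(eye_points), 1))])
    M, *_ = np.linalg.lstsq(A, screen_points, rcond=None)
    return M

def to_screen(M, eye_xy):
    """Map the current iris position straight to the screen point where
    the cursor should be placed (direct positioning, not relative)."""
    return np.append(eye_xy, 1.0) @ M
```

Four or more calibration targets over-determine the six affine parameters, so the least-squares fit also averages out some detection jitter.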
