241

Spoken Resistance: Slam Poetry Performance as a Diasporic Response to Discursive Violence

Lindeman, Harriet 01 January 2017 (has links)
This project foregrounds the work and perspectives of spoken word poets of Middle Eastern and North African (MENA) descent in connection with the NYC slam poetry scene. I trace the parallel racialization of MENA diaspora communities in the US and the development of slam poetry as a space for raising “othered” voices. Through ethnographic analysis, I consider slam poetry as a site of intersectional struggle, arguing that the engagement of MENA diaspora poets with this scene reveals the ways in which poetry both constitutes resistance to discursive violence through representation and works to mobilize audiences against tangible structures of violence.
242

Indexation d'une base de données images : application à la localisation et la cartographie fondées sur des radio-étiquettes et des amers visuels pour la navigation d'un robot en milieu intérieur / Indexation of an image data base : application to the localization and the mapping from RFID tags and visual landmarks for the indoor navigation of a robot

Raoui, Younès 29 April 2011 (has links)
This thesis concerns indexing techniques for image databases and localization methods for mobile robotics. It links the Perception work of the Robotics and Artificial Intelligence group at LAAS-CNRS in Toulouse with the data-mining research carried out at the LIMAIARF lab of the University of Rabat. Over the past decade, vision has become an essential source of sensory data on mobile robots: in particular, it provides representations of the environment in which a robot must move, either as geometric maps or as appearance-based models. Concerning vision, only appearance-based representations are considered here; they consist of a database of images acquired during supervised motions performed in a learning phase. The robot localizes itself by searching this database for the image that most closely resembles the current view: the techniques used for this are indexing methods, similar to those used for data mining on the Internet. An indexing approach based on interest points extracted from colour and textured images is proposed. In addition, an RFID (Radio Frequency IDentification) navigation technique is presented, based on particle filtering (Monte Carlo localization) applied either in an intuitive or in a more formal way. Finally, very preliminary results are given on combining RFID and visual sensing in order to improve the accuracy of the mobile robot's localization.
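
The appearance-based localization described above reduces, at its core, to nearest-neighbour retrieval over local image descriptors. A minimal sketch of that retrieval step, assuming generic ORB descriptors from OpenCV rather than the colour/texture interest points actually proposed in the thesis:

    import cv2
    import numpy as np

    def build_index(reference_images):
        """Extract ORB descriptors for each image acquired during the learning phase."""
        orb = cv2.ORB_create(nfeatures=500)
        index = []
        for img in reference_images:
            _, desc = orb.detectAndCompute(img, None)
            index.append(desc)
        return index

    def localize(current_image, index):
        """Return the index of the reference image most similar to the current view."""
        orb = cv2.ORB_create(nfeatures=500)
        _, query = orb.detectAndCompute(current_image, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        scores = []
        for desc in index:
            if desc is None or query is None:
                scores.append(0)
                continue
            matches = matcher.match(query, desc)
            scores.append(len(matches))  # more descriptor matches = more similar view
        return int(np.argmax(scores))

A real indexing method would organize the descriptors (for example in a vocabulary tree or inverted file) rather than scanning the whole database linearly as this sketch does.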
243

Localização e mapeamento simultâneos com auxílio visual omnidirecional. / Simultaneous localization and mapping with omnidirectional vision.

Guizilini, Vitor Campanholo 12 August 2008 (has links)
Simultaneous localization and mapping, known as the SLAM problem, is one of the major challenges currently facing autonomous mobile robotics. The problem arises from the difficulty a robot faces when navigating an unknown environment, building a map of the regions it has already visited while localizing itself within that map. The errors accumulated because of the imprecision of the sensors used to estimate localization and mapping prevent reliable results after sufficiently long periods of navigation when the measurements are used directly. SLAM algorithms try to eliminate these errors by solving both problems simultaneously, using the information from one step to increase the accuracy of the other and vice versa. One way to achieve this is to establish landmarks in the environment that the robot can use as reference points to localize itself as it navigates. This work presents a solution to the SLAM problem that uses an omnidirectional vision sensor to establish these landmarks. Vision systems allow the extraction of landmarks natural to the environment that can be matched robustly from different points of view, and omnidirectional vision widens the robot's field of view, increasing the number of landmarks observed at each instant. When a landmark is detected it is added to the robot's map of the environment, and when it is later recognized the robot can use that information to refine its localization and mapping estimates, eliminating accumulated errors and keeping the estimates accurate even after long periods of navigation. This solution was tested in real navigation situations, and the results show a significant improvement over those obtained by using the collected sensor information directly.
244

High precision monocular visual odometry / Estimação 3D aplicada a odometria visual

Pereira, Fabio Irigon January 2018 (has links)
Recovering depth information from two-dimensional images is an important problem in computer vision with many applications: robotics, the entertainment industry, medical diagnosis and prosthesis design, and even interplanetary exploration benefit from vision-based 3D estimation. The problem can be divided into two interdependent steps: estimating the position and orientation of the camera at the moment each image was produced, and estimating the three-dimensional structure of the scene. This work focuses on computer vision techniques used to estimate the trajectory of a vehicle equipped with a camera, a problem known as visual odometry. To obtain objective measures of efficiency and accuracy, and to compare the results with the state of the art, a high-precision dataset widely used by the scientific community was selected. In the course of this work, new techniques for feature tracking, camera pose estimation, 3D point position computation and scale recovery are proposed. The results achieved outperform the best-ranked works on the chosen dataset at the time of publication of this thesis.
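
As a rough illustration of the pipeline the abstract outlines (feature tracking, camera pose estimation, 3D point computation), here is a two-frame sketch built on standard OpenCV calls; the images, the intrinsic matrix K and the use of ORB matching are assumptions for illustration, and the thesis's scale-recovery step is omitted since two-view monocular geometry only yields translation up to scale:

    import cv2
    import numpy as np

    def two_frame_odometry(img0, img1, K):
        """Estimate relative camera motion between two frames and triangulate points."""
        orb = cv2.ORB_create(2000)
        kp0, d0 = orb.detectAndCompute(img0, None)
        kp1, d1 = orb.detectAndCompute(img1, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d0, d1)
        p0 = np.float32([kp0[m.queryIdx].pt for m in matches])
        p1 = np.float32([kp1[m.trainIdx].pt for m in matches])
        # Essential matrix with RANSAC, then relative rotation/translation (up to scale)
        E, mask = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, mask = cv2.recoverPose(E, p0, p1, K, mask=mask)
        # Triangulate the correspondences into 3D points in the first camera's frame
        P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P1 = K @ np.hstack([R, t])
        pts4d = cv2.triangulatePoints(P0, P1, p0.T, p1.T)
        pts3d = (pts4d[:3] / pts4d[3]).T
        return R, t, pts3d

A full visual-odometry system chains such relative estimates frame to frame and refines them, which is where the tracking, pose-estimation and scale-recovery contributions of the thesis come in.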
245

Motion Conflict Detection and Resolution in Visual-Inertial Localization Algorithm

Wisely Babu, Benzun 30 July 2018 (has links)
In this dissertation, we focus on conflicts that occur due to disagreeing motions in multi-modal localization algorithms. In spite of recent achievements in robust localization by means of multi-sensor fusion, these algorithms are not applicable to all environments. This is primarily attributed to the following fundamental assumptions: (i) the environment is predominantly stationary, (ii) only ego-motion of the sensor platform exists, and (iii) multiple sensors are always in agreement with each other regarding the observed motion. Recent studies have shown how to relax the static-environment assumption using outlier rejection techniques and dynamic object segmentation. Additionally, to handle non-ego-motion, approaches that extend the localization algorithm to multi-body tracking have been studied. However, no attention has been given to conditions where multiple sensors contradict each other with regard to the motions observed. Vision-based localization has become an attractive approach for both indoor and outdoor applications due to the large information bandwidth provided by images and the reduced cost of the cameras used. To improve robustness and overcome the limitations of vision, an Inertial Measurement Unit (IMU) may be used. Even though visual-inertial localization has better accuracy and improved robustness due to the complementary nature of the camera and the IMU, it is affected by disagreements in motion observations. We term such dynamic situations environments with motion conflict, because they arise when multiple different but self-consistent motions are observed by different sensors. Tightly coupled visual-inertial fusion approaches that disregard such challenging situations exhibit drift that can lead to catastrophic errors. We provide a probabilistic model for motion conflict, and a novel algorithm to detect and resolve motion conflicts is presented. Our method for detecting motion conflicts is based on per-frame positional estimate discrepancy and per-landmark reprojection errors. Motion conflicts are resolved by eliminating inconsistent IMU and landmark measurements. Finally, a Motion Conflict aware Visual Inertial Odometry (MC-VIO) algorithm that combines both detection and resolution of motion conflict was implemented. Both quantitative and qualitative evaluations of MC-VIO on visually and inertially challenging datasets were performed. Experimental results indicate that the MC-VIO algorithm reduces the absolute trajectory error by 70% and the relative pose error by 34% in scenes with motion conflict, in comparison to the reference VIO algorithm. Motion conflict detection and resolution enables the application of visual-inertial localization algorithms to real dynamic environments. This paves the way for articulated object tracking in robotics, and it may also find numerous applications in active long-term augmented reality.
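
The detection step described above (per-frame positional discrepancy plus per-landmark reprojection errors) could be sketched roughly as below; the thresholds, the `project` callback and the pose representation are assumptions for illustration, not the MC-VIO implementation:

    import numpy as np

    def detect_motion_conflict(landmarks_3d, observations_2d, project,
                               pose_imu, pose_vision,
                               reproj_thresh_px=4.0, pose_thresh_m=0.05):
        """Flag a motion conflict and mark landmark measurements consistent with the IMU motion.

        Poses are assumed to be arrays whose first three entries are the position;
        project(pose, X) is assumed to map a 3D point to pixel coordinates for a given pose.
        """
        # Per-frame discrepancy between the IMU-predicted and vision-estimated positions
        frame_conflict = np.linalg.norm(pose_imu[:3] - pose_vision[:3]) > pose_thresh_m

        # Per-landmark reprojection error under the IMU-predicted pose
        consistent = []
        for X, uv in zip(landmarks_3d, observations_2d):
            err = np.linalg.norm(project(pose_imu, X) - uv)
            consistent.append(err < reproj_thresh_px)
        consistent = np.array(consistent)

        landmark_conflict = consistent.mean() < 0.5  # most landmarks disagree with the IMU motion
        return frame_conflict or landmark_conflict, consistent

In the spirit of the resolution step, the measurements marked inconsistent would simply be excluded from the fusion update for that frame.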
246

Poetry Slam. En studie av vilken betydelse Poetry Slam har som litteratur och som identitetsskapande verksamhet för ett antal tävlingsdeltagare. / Poetry Slam : A study of Poetry Slam’s literary significance and its identity defining impact on those participating in Poetry Slam contests.

Brummer Pind, Daniella January 2007 (has links)
Today, the definition of literature has changed. Walter J. Ong writes about the secondary orality that has emerged as a result of new technical innovations in our writing tools. This also transforms our attitude towards literature, and we can see a return to literature based on verbal characteristics. According to Hans Hertel, this verbal renaissance reflects modern people's need to come closer to one another and create togetherness. This master's thesis adheres to these views and argues that Poetry Slam is a manifestation of these theories. It also argues that Poetry Slam cannot be viewed solely as a literary movement: besides offering literary teaching practices to large numbers of individuals, performing Poetry Slam also facilitates personal development. The thesis is an interview-based study that aims to ascertain the importance of Poetry Slam as literature and its effect on the identity-defining processes of those participating in Poetry Slam contests. The line of questioning is grounded in Pierre Bourdieu's literary theories and incorporates Thomas Ziehe's theories of identity creation in modern-day society. The subjects interviewed have a thorough understanding of how to assess verbal art forms, and share the view that poetry is a wider art form than literature. Despite this, the literature in question is interpreted and evaluated positively. Several of the interviewees firmly believe that participating in Poetry Slam sessions will further their careers as poets, which leads to a conflict between their goals and the aim and purpose of Poetry Slam. / Thesis level: D
247

Stereo Camera Pose Estimation to Enable Loop Detection / Estimering av kamera-pose i stereo för att återupptäcka besökta platser

Ringdahl, Viktor January 2019 (has links)
Visual Simultaneous Localization And Mapping (SLAM) allows for three-dimensional reconstruction from a camera's output and simultaneous positioning of the camera within the reconstruction. With use cases ranging from autonomous vehicles to augmented reality, the SLAM field has garnered interest both commercially and academically. A SLAM system performs odometry as it estimates the camera's movement through the scene. The incremental estimation of odometry is not error free and exhibits drift over time, with map inconsistencies as a result. Detecting the return to a previously seen place, a loop, means that this new information regarding our position can be incorporated to correct the trajectory retroactively. Loop detection can also facilitate relocalization if the system loses tracking due to e.g. heavy motion blur. This thesis proposes an odometric system making use of bundle adjustment within a keyframe-based stereo SLAM application. The system is capable of detecting loops by utilizing the FAB-MAP algorithm. Two aspects of the system are evaluated, the odometry and the capability to relocate, both using the EuRoC MAV dataset, with an absolute trajectory RMS error ranging from 0.80 m to 1.70 m for the machine hall sequences. The capability to relocate is evaluated using a novel methodology that can be interpreted intuitively. Results are given for different levels of strictness to encompass different use cases. The method makes use of reprojection of points seen in keyframes to define whether a relocalization is possible or not. The system shows a capability to relocate in up to 85% of all cases when a keyframe exists that can project 90% of its points into the current view. Errors in estimated poses were found to be correlated with the relative distance, with errors of less than 10 cm in 23% to 73% of all cases. The evaluation of the whole system is augmented with an evaluation of local image descriptors and pose estimation algorithms. The SIFT descriptor was found to perform best overall, but is demanding to compute; BRISK was deemed the best alternative for a fast yet accurate descriptor. A conclusion that can be drawn from this thesis is that FAB-MAP works well for detecting loops as long as the addition of keyframes is handled appropriately.
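
The relocalization criterion used in the evaluation — a keyframe is considered usable when a large enough fraction (e.g. the 90% mentioned above) of its points project into the current view — can be sketched as follows, assuming a simple pinhole model and a world-to-camera pose (R, t); the actual implementation details are not given in the abstract:

    import numpy as np

    def can_relocalize(keyframe_points_3d, pose, K, image_size, min_fraction=0.9):
        """Check whether enough of a keyframe's landmarks reproject into the current view."""
        R, t = pose                      # assumed world-to-camera rotation and translation
        w, h = image_size
        visible = 0
        for X in keyframe_points_3d:     # 3D points as numpy arrays in world coordinates
            Xc = R @ X + t               # transform the point into the camera frame
            if Xc[2] <= 0:               # point lies behind the camera
                continue
            u, v, _ = (K @ Xc) / Xc[2]   # pinhole projection to pixel coordinates
            if 0 <= u < w and 0 <= v < h:
                visible += 1
        return visible / max(len(keyframe_points_3d), 1) >= min_fraction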
248

Mapeado 3D con robots autónomos mediante visión estéreo / 3D mapping with autonomous robots using stereo vision

Sáez Martínez, Juan Manuel 23 September 2005 (has links)
No description available.
249

An evaluation and comparison of long term simultaneous localization and mapping algorithms

Conte Marza, Fabián Alejandro January 2018 (has links)
Ingeniero Civil Eléctrico / This work consists of the generation of a dataset with a corresponding ground truth and the use of the ORB-SLAM (Oriented FAST and Rotated BRIEF Simultaneous Localization And Mapping) and LOAM (Lidar Odometry And Mapping) algorithms, as a way to better understand the SLAM problem and to compare the estimated trajectories against the ground truth. To make the generated dataset easier to understand, the functionality of the different sensors is explained; the sensors used are a LIDAR, a stereo camera and a GPS. The work has two main stages. First, the GPS is studied to establish the different ways of extracting data from the device. One way is to run a ROS node that, through a Bluetooth connection, publishes a message that can be read; the other is to press the GPS power button three times, which starts logging data to the micro SD card. While the first method provides more information, it is less reliable: empty messages may be stored or data may be lost, affecting the sampling rate. In the end, a combination of both methods is implemented, modifying the bag file (a file format that can hold any kind of information such as images, video or text) with the data stored on the micro SD card. A test dataset is generated near the University of Chile to verify that the bag file is recorded correctly; from these tests it is concluded that, to obtain better GPS performance and therefore a better ground truth, the data should be collected in areas with few tall buildings. Finally, using the dataset and the ground truth, the Absolute Trajectory Error (ATE) is computed as a method of comparing the trajectories produced by the two algorithms; the ATE is described as the amount of energy that would be required to transform the estimated trajectory into the ground truth. Given certain limitations in extracting the estimated paths, the comparison was made on two small test datasets with few loops in the traversed path. In this situation LOAM gives better results than ORB-SLAM, but in an environment with more loops and a longer trajectory ORB-SLAM would be expected to give better results.
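
For reference, the common way to compute the ATE is to rigidly align the estimated trajectory to the ground truth (Horn/Umeyama closed form) and take the RMSE of the remaining position differences. A small sketch of that computation, which may differ in detail from the procedure used in this thesis:

    import numpy as np

    def absolute_trajectory_error(estimated, ground_truth):
        """RMSE of position error after optimal rigid alignment (no scale).

        estimated, ground_truth: (N, 3) arrays of time-matched positions.
        """
        mu_e = estimated.mean(axis=0)
        mu_g = ground_truth.mean(axis=0)
        E = estimated - mu_e
        G = ground_truth - mu_g
        U, _, Vt = np.linalg.svd(E.T @ G)    # cross-covariance of the centred trajectories
        S = np.eye(3)
        if np.linalg.det(U @ Vt) < 0:        # keep a proper rotation (det = +1)
            S[2, 2] = -1
        R = Vt.T @ S @ U.T                   # rotation mapping the estimate onto the ground truth
        t = mu_g - R @ mu_e
        aligned = (R @ estimated.T).T + t
        return np.sqrt(np.mean(np.sum((aligned - ground_truth) ** 2, axis=1)))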
250

Développement d'un capteur composite Vision/Laser à couplage serré pour le SLAM d'intérieur / Development of a tightly coupled vision/laser composite sensor for indoor SLAM

Gallegos Garrido, Gabriela 17 June 2011 (has links) (PDF)
For three decades, autonomous navigation in unknown environments has been one of the main research topics of the mobile robotics community. In the absence of prior knowledge of the environment, the localization and mapping tasks, which are strongly interdependent, must be carried out simultaneously; this problem is known as SLAM (Simultaneous Localization And Mapping). To obtain accurate information about their environment, mobile robots are equipped with a set of sensors, called the perception system, which allows them to perform precise localization and a reliable, consistent reconstruction of their environment. We argue that a perception system composed of the robot's odometry, an omnidirectional camera and a 2D laser range finder is sufficient to solve the SLAM problem robustly. In this context, we propose an appearance-based approach to solve SLAM and to produce a reliable 3D reconstruction of the environment. The approach relies on a tight coupling between the laser and omnidirectional sensors, exploiting as much as possible the complementary strengths of the two sensor types. An original and generic robot-centred representation is proposed: an augmented spherical view is built by projecting the laser range measurements and an estimate of the ground-plane position into the omnidirectional image. Our appearance-based localization method minimizes a non-linear cost function built directly from this augmented spherical view. However, as in all recursive optimization methods, convergence problems can arise when the initialization is far from the solution; this also holds for our method, where an initialization sufficiently close to the solution is needed to ensure fast convergence and to reduce the computational cost. To this end, an improved PSM algorithm is used to provide a prediction of the robot's displacement.
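
The appearance-based cost mentioned above is not written out in the abstract; one common way to express such a direct cost over the augmented spherical view, given here only as an assumed form, is

    \hat{T} = \arg\min_{T} \sum_{i} \left\| I\big(w(T;\, p_i, d_i)\big) - I^{*}(p_i) \right\|^{2}

where I* is the reference augmented spherical view, I the current one, p_i its pixels, d_i the laser depths projected into it, and w the warp induced by the candidate pose T. The improved PSM step would supply the initial T from which this non-linear minimization starts.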
