Search results for subject: "robotics vision"
1 |
A vision system for a robot working in a semi-structured environment. Ginige, A. January 1986.
No description available.
|
2 |
Recognising activities by jointly modelling actions and their effects. Vafeias, Efstathios. January 2015.
With the rapid adoption of consumer technologies, including inexpensive but powerful hardware, robotics appears poised at the cusp of widespread deployment in human environments. A key barrier that still prevents this is machine understanding and interpretation of human activity through a perceptual medium such as computer vision or RGB-D sensing, for example with the Microsoft Kinect sensor. This thesis contributes novel video-based methods for activity recognition. Specifically, the focus is on activities that involve interactions between the human user and objects in the environment. Based on streams of poses and object tracks, machine learning models are developed to recognise these interactions. The thesis's main contributions are (1) a new model of interactions that explicitly learns human-object relationships through a latent distributed representation, (2) a practical framework for labelling chains of manipulation actions in temporally extended activities, and (3) an unsupervised sequence segmentation technique that relies on slow feature analysis and spectral clustering. These techniques are validated by experiments on publicly available data sets, such as the Cornell CAD-120 activity corpus, one of the most extensive publicly available data sets of this kind that is also annotated with ground-truth information. Our experiments demonstrate the advantages of the proposed methods over state-of-the-art alternatives from the recent literature on sequence classifiers.
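As an illustration of contribution (3), the sketch below shows one way slow feature analysis and spectral clustering can be combined to segment a pose stream. This is a minimal reading of the abstract, not the thesis's implementation; the window length, component count, and cluster count are assumptions.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def slow_features(X, n_components=3):
    """Linear SFA: project a whitened signal onto its slowest directions."""
    X = X - X.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(X.T))          # whiten the input
    keep = d > 1e-8
    Z = X @ (E[:, keep] / np.sqrt(d[keep]))
    dZ = np.diff(Z, axis=0)                     # temporal derivative
    _, U = np.linalg.eigh(np.cov(dZ.T))         # smallest eigenvalues = slowest
    return Z @ U[:, :n_components]

def segment_sequence(X, n_segments=4, window=10):
    """Label each frame window by clustering its mean slow-feature vector."""
    S = slow_features(X)
    feats = np.array([S[t:t + window].mean(axis=0)
                      for t in range(len(S) - window)])
    return SpectralClustering(n_clusters=n_segments,
                              affinity='nearest_neighbors').fit_predict(feats)

# Usage on synthetic "pose" data: 300 frames of an 8-dimensional feature stream.
labels = segment_sequence(np.random.randn(300, 8))
```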
|
3 |
An intelligent sample changer. Angelikaki, C. January 1988.
No description available.
|
4 |
An application of structured light techniques to the examination of holes and concavities. Michell, V. A. S. January 1987.
No description available.
|
5 |
Agent-based 3D visual tracking. Cheng, Tak Keung. Date unknown.
We describe our overall approach to building robot vision systems, and the conceptual system architecture: a network of agents that run in parallel and cooperate to achieve the system's goals. We present the current state of the 3D Feature-Based Tracker, a robot vision system for tracking and segmenting the 3D motion of objects using image input from a calibrated stereo pair of video cameras. The system runs in a multi-level cycle of prediction and verification or correction. The currently modelled 3D positions and velocities of the feature points are extrapolated a short time into the future to yield predictions of 3D position. These 3D predictions are projected into the two stereo views and used to guide a fast, highly focused visual search for the feature points. The image positions at which the features are re-acquired are back-projected into 3D space to update the 3D positions and velocities. At a higher level, features are dynamically grouped into clusters with common 3D motion. Predictions from the cluster level can be fed back to the lower level to correct errors in the point-wise tracking.
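The following sketch illustrates that predict/verify/correct cycle for a single feature point. The constant-velocity model, the smoothing factor, and the 2D search routine are illustrative assumptions, not the system's actual code.

```python
import numpy as np

def project(P, X):
    """Project 3D point X through a 3x4 pinhole camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one point from two views."""
    A = np.vstack([u1[0] * P1[2] - P1[0],
                   u1[1] * P1[2] - P1[1],
                   u2[0] * P2[2] - P2[0],
                   u2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

def track_step(X, V, dt, P1, P2, search_fn, alpha=0.5):
    """One predict/verify/correct cycle for a single feature point.
    search_fn(view, predicted_pixel) is the focused 2D feature search."""
    X_pred = X + V * dt                          # constant-velocity prediction
    u1 = search_fn(0, project(P1, X_pred))       # re-acquire in left view
    u2 = search_fn(1, project(P2, X_pred))       # re-acquire in right view
    X_new = triangulate(P1, P2, u1, u2)          # back-project into 3D
    V_new = alpha * V + (1 - alpha) * (X_new - X) / dt   # smoothed velocity update
    return X_new, V_new
```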
|
6 |
Řízení mobilního robota / Mobile robot control. Franěk, Dominik. January 2011.
The goal of this work is the design and realization of an autonomous mobile robot capable of navigation and map building, using a stereoscopic camera and the Robot Operating System (ROS).
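A minimal sketch of the kind of ROS node such a system might start from, assuming rectified stereo image topics and OpenCV block matching; the topic names and matcher parameters are illustrative, not taken from the thesis.

```python
import rospy, cv2, message_filters
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

rospy.init_node('stereo_mapper')
bridge = CvBridge()
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def callback(left_msg, right_msg):
    left = bridge.imgmsg_to_cv2(left_msg, 'mono8')
    right = bridge.imgmsg_to_cv2(right_msg, 'mono8')
    disparity = stereo.compute(left, right)   # raw input for depth / mapping
    rospy.loginfo('disparity range: %d..%d', disparity.min(), disparity.max())

# Assumed topic names; synchronize the two views before matching.
left_sub = message_filters.Subscriber('/stereo/left/image_rect', Image)
right_sub = message_filters.Subscriber('/stereo/right/image_rect', Image)
sync = message_filters.ApproximateTimeSynchronizer([left_sub, right_sub], 10, 0.05)
sync.registerCallback(callback)
rospy.spin()
```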
|
7 |
Exploration architecturale pour la conception d'un système sur puce de vision robotique, adéquation algorithme-architecture d'un système embarqué temps-réel / Architectural exploration for the design of a robotic vision System-on-Chip, algorithm-architecture adequacy of a real-time embedded system. Lefebvre, Thomas. 24 September 2012.
This Ph.D. thesis stands at the crossroads of three scientific domains: algorithm-architecture adequacy, bio-inspired vision systems in mobile robotics, and image processing. The goal is to make a robot autonomous in its visual perception by integrating into the robot this cognitive task, usually executed on remote processing servers. To achieve this goal, the design approach follows a path of algorithm-architecture adequacy, in which the different image-processing steps of the vision system are minutely analysed. The image-processing tasks are adapted and implemented on an embedded architecture in order to respect the real-time constraints imposed by the robotic context. Mobile robotics is an academic research topic that builds on bio-mimetic approaches. The artificial vision system studied in our context uses a bio-inspired multi-resolution approach, based on the extraction and formatting of interest zones of the image. Because of the complexity of these tasks and the many constraints due to the autonomy of the robot, the implementation of this vision system requires a rigorous and complete procedure for software and hardware architectural exploration. This design-space exploration process is presented in this document. The results of this exploration have led to the design of an architecture primarily based on parameterisable and scalable dedicated hardware processing units (IPs), implemented on an FPGA reconfigurable circuit. These IPs and the inner workings of each of them are described in the document. The impact of their architectural parameters on FPGA resource usage is studied for the main processing units. The implementation of the remaining software part is presented for several potential FPGA platforms. The performance achieved by this architectural solution is finally presented. These results allow us to conclude that the proposed solution allows the vision system to be embedded in mobile robots within the imposed real-time constraints.
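As a software analogue of the multi-resolution interest-zone extraction described above (which the thesis implements as dedicated FPGA hardware IPs), the sketch below builds an image pyramid and keeps local maxima of a difference-of-Gaussians response. The filter sizes, threshold, and number of levels are assumptions.

```python
import cv2
import numpy as np

def interest_zones(gray, levels=4, thresh=10.0):
    """Collect (x, y, level) interest points across an image pyramid."""
    points = []
    img = gray.astype(np.float32)
    for level in range(levels):
        # Difference-of-Gaussians as a cheap band-pass saliency response.
        dog = cv2.GaussianBlur(img, (5, 5), 1.0) - cv2.GaussianBlur(img, (9, 9), 2.0)
        response = np.abs(dog)
        # Keep pixels that are the maximum of their 3x3 neighbourhood.
        local_max = response == cv2.dilate(response, np.ones((3, 3), np.uint8))
        ys, xs = np.where(local_max & (response > thresh))
        scale = 2 ** level
        points += [(x * scale, y * scale, level) for x, y in zip(xs, ys)]
        img = cv2.pyrDown(img)   # move to the next, coarser resolution
    return points
```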
|
8 |
Uso de filtro de Kalman e visão computacional para a correção de incertezas de navegação de robôs autônomos / Use of Kalman filter and computer vision for the correction of uncertainties in the navigation of autonomous robots. Diogenes, Luciana Claudia Martins Ferreira. 12 August 2018.
Advisors: Paulo Roberto Gardel Kurka, Helder Anibal Hermini / Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica / Issue date: 2008 / Doctorate in Mechanical Engineering: Solid Mechanics and Mechanical Design
Abstract: The work establishes a collection of procedures for image-based navigation and control of an autonomous robot. Intensity maps obtained from the images of two cameras are converted into depth maps, which give the robot information about its position in an environment composed of distinct objects. A two-wheeled, differentially driven robot model is used, allowing the navigation process to fuse the information obtained by the cameras with the odometry data of the robot's motion. A linear Kalman filter is used in this fusion process to obtain optimal estimates of the robot's position, based on the images observed by the cameras and the odometry measured by the wheel rotation encoders. Computational simulations are carried out of the task of acquiring and processing images of a simplified two-dimensional environment, as well as of the navigation procedure with sensor fusion. The simulations test the viability and robustness of the navigation procedure in the presence of uncertainties in the camera-based position measurements as well as in the odometry measurements.
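A minimal sketch of the fusion scheme the abstract describes, assuming a pose state (x, y, theta), wheel-encoder increments for the prediction step, and a direct (x, y) position measurement from the vision system. The wheelbase b and the noise covariances Q and R are placeholders.

```python
import numpy as np

def odometry_predict(x, P, dl, dr, b, Q):
    """Predict the pose from left/right wheel increments (dl, dr), wheelbase b."""
    theta = x[2]
    ds, dth = (dl + dr) / 2.0, (dr - dl) / b
    x_new = x + np.array([ds * np.cos(theta + dth / 2),
                          ds * np.sin(theta + dth / 2),
                          dth])
    F = np.array([[1, 0, -ds * np.sin(theta + dth / 2)],   # motion Jacobian
                  [0, 1,  ds * np.cos(theta + dth / 2)],
                  [0, 0, 1]])
    return x_new, F @ P @ F.T + Q

def camera_update(x, P, z, R):
    """Correct the pose with a direct (x, y) measurement from the cameras."""
    H = np.array([[1.0, 0, 0], [0, 1.0, 0]])
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x_new = x + K @ (z - H @ x)                    # innovation-weighted correction
    return x_new, (np.eye(3) - K @ H) @ P
```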
|
9 |
Ultra Low Latency Visual Servoing for High Speed Object Tracking Using Multi Focal Length Camera Arrays. McCown, Alexander Steven. 01 July 2019.
In high-speed applications of visual servoing, latency from the recognition algorithm can cause significant degradation in response time. Hardware acceleration allows recognition algorithms to be applied directly during the raster scan from the image sensor, thereby removing virtually all video processing latency. This paper examines one such method, along with an analysis of the design decisions made to optimize it for use during high-speed airborne object tracking tests for the US military. Designing test equipment for defense use involves working around unique challenges that arise from many details being deemed classified or highly sensitive. Designing a tracking system without knowing exact numbers for the speed, mass, distance, or nature of the objects being tracked requires a flexible control system that can be easily tuned after installation. To further improve accuracy and allow rapid tuning to a yet-undisclosed set of parameters, a machine-learning-powered auto-tuner is developed and implemented as a control loop optimizer.
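The recognition algorithm itself is withheld, but the latency argument can be illustrated with a toy streaming computation: when processing is applied in raster order, one pixel at a time, the result is available the moment the last pixel arrives, with no frame buffering. A hypothetical example, not the thesis's algorithm:

```python
import numpy as np

def raster_scan_centroid(frame, thresh=128):
    """Simulate an FPGA-style streaming pass over one frame: each pixel is
    inspected once, in raster order, with constant work per pixel."""
    count = sum_x = sum_y = 0
    rows, cols = frame.shape
    for y in range(rows):                 # raster scan: row by row
        for x in range(cols):
            if frame[y, x] > thresh:      # pixel belongs to the bright target
                count += 1
                sum_x += x
                sum_y += y
    if count == 0:
        return None
    return (sum_x / count, sum_y / count)  # centroid ready as the scan ends
```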
|
10 |
Semi-Autonomous Vehicle Intelligence: Real-Time Target Tracking for Vision-Guided Autonomous Vehicles. Anderson, Jonathan D. 16 March 2007.
Unmanned vehicles (UVs) are seeing more widespread use in the military, scientific, and civil sectors in recent years. These UVs range from unmanned air and ground vehicles to surface and underwater vehicles. Each of these UVs has its own inherent strengths and weaknesses, from payload to freedom of movement. Research in this field is growing primarily because of the National Defense Act of 2001, which mandates that one-third of all military vehicles be unmanned by 2015. Research using small UVs, in particular, is growing because small UVs can go places that may be too dangerous for humans. Because of the limitations inherent in small UVs, including power consumption and payload, the selection of lightweight and low-power sensors and processors becomes critical. Low-power CMOS cameras and real-time vision processing algorithms can provide fast and reliable information to the UVs. These vision algorithms often require computational power that limits their use in traditional general-purpose processors running conventional software. The latest developments in field-programmable gate arrays (FPGAs) provide an alternative for hardware/software co-design of complicated real-time vision algorithms. By tracking features from one frame to another, it becomes possible to perform many different high-level vision tasks, including object tracking and following. This thesis describes a vision guidance system for unmanned vehicles in general and the FPGA hardware implementation that runs its vision tasks in real time. The guidance system uses an object-following algorithm to provide the information that allows the UV to follow a target. The heart of the object-following algorithm is a real-time rank transform, which converts the image into a more robust representation that maintains the edges found in the original image. A minimum sum-of-absolute-differences algorithm is used to determine the best correlation between frames, and the output of this correlation is used to update the track of the moving target. Control code can use this information to move the UV in pursuit of a moving target such as another vehicle.
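A software sketch of the two techniques named above, the rank transform and minimum-SAD block matching; the FPGA implementation pipelines this work per pixel, and the window radius and search range used here are assumptions.

```python
import numpy as np

def rank_transform(img, r=2):
    """Replace each pixel with the count of neighbours darker than it; the
    result depends only on local ordering, so it is robust to gain and bias."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = img[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = np.sum(patch < img[y, x])
    return out

def best_match(prev_rank, curr_rank, box, search=8):
    """Minimum sum-of-absolute-differences search around the previous box."""
    x0, y0, bw, bh = box
    template = prev_rank[y0:y0 + bh, x0:x0 + bw].astype(np.int32)
    best, best_pos = None, (x0, y0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            xs, ys = x0 + dx, y0 + dy
            if xs < 0 or ys < 0 or ys + bh > curr_rank.shape[0] or xs + bw > curr_rank.shape[1]:
                continue
            sad = np.abs(curr_rank[ys:ys + bh, xs:xs + bw].astype(np.int32) - template).sum()
            if best is None or sad < best:
                best, best_pos = sad, (xs, ys)   # keep the lowest-SAD offset
    return best_pos
```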
|