11

Going further with direct visual servoing / Aller plus loin avec les asservissements visuels directs

Bateux, Quentin 12 February 2018 (has links)
In this thesis we focus on visual servoing (VS) techniques, which are critical for many robotic vision applications, with an emphasis on direct VS. To improve the state of the art of direct methods, we tackle several components of traditional VS control laws. We first propose a generic framework for considering histograms as a new visual servoing feature. It allows the definition of efficient control laws by permitting a choice among any type of histogram to describe images, from intensity and color histograms to Histograms of Oriented Gradients. A novel direct visual servoing control law is then proposed, based on a particle filter that replaces the optimization part of classical VS tasks, making it possible to accomplish tasks associated with highly non-linear and non-convex cost functions. The particle filter estimate can be computed in real time through image transfer techniques that evaluate the camera motions associated with the displacements of the considered visual features in the image. Lastly, we present a novel way of modeling the visual servoing problem through deep learning and Convolutional Neural Networks, to alleviate the difficulty of modeling non-convex problems with classical analytic methods. Using image transfer techniques, we propose a method to quickly generate large training datasets in order to fine-tune network architectures pre-trained on related tasks and solve VS tasks. We show that this method can be applied both to model known static scenes and, more generally, to model relative pose estimation between pairs of viewpoints of arbitrary scenes.
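To make the histogram-feature and particle-filter ideas concrete, here is a minimal Python/NumPy sketch of an intensity-histogram feature and one predict/weight/resample iteration over candidate camera poses. The `render` callable stands in for the image-transfer step, and all names, the exponential weighting, and the noise scale are our illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def histogram_feature(image, bins=32):
    # Normalized grayscale-intensity histogram used as the visual feature.
    hist, _ = np.histogram(image.ravel(), bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def histogram_cost(current, desired):
    # Distance between current and desired features; the control law
    # drives the camera so that this cost decreases.
    return float(np.sum((histogram_feature(current) - histogram_feature(desired)) ** 2))

def particle_filter_step(poses, desired_image, render, noise_scale=0.01):
    # One predict/weight/resample cycle over candidate camera poses.
    # `render(pose)` is an assumed callable that predicts (via image
    # transfer) the view seen from `pose`.
    poses = poses + np.random.normal(0.0, noise_scale, poses.shape)
    costs = np.array([histogram_cost(render(p), desired_image) for p in poses])
    weights = np.exp(-costs / (costs.mean() + 1e-12))
    weights /= weights.sum()
    idx = np.random.choice(len(poses), size=len(poses), p=weights)
    return poses[idx]
```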
12

Structure from Motion Using Optical Flow Probability Distributions

Merrell, Paul Clark 18 March 2005 (has links)
Several novel structure from motion algorithms are presented that are designed to more effectively manage the problem of noise. In many practical applications, structure from motion algorithms fail to work properly because of the noise in the optical flow values. Most structure from motion algorithms implicitly assume that the noise is identically distributed and white. Both assumptions are false: some points can be tracked more easily than others, and some points can be tracked more easily in a particular direction. The accuracy of each optical flow value can be quantified using an optical flow probability distribution. By using optical flow probability distributions in place of optical flow estimates in a structure from motion algorithm, a better understanding of the noise is developed and a more accurate solution is obtained. Two different methods of calculating the optical flow probability distributions are presented: the first calculates non-Gaussian probability distributions and the second calculates Gaussian probability distributions. Three different methods for calculating structure from motion using these probability distributions are presented. The first works on two frames and can handle any kind of noise; the second works on two frames and is restricted to Gaussian noise; the final method works on multiple frames and uses Gaussian noise. A simulation was created to directly compare the performance of methods that use optical flow probability distributions against methods that do not. The simulation results show that the methods which use the probability distributions better estimate the camera motion and the structure of the scene.
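The Gaussian case that the second and third methods address can be sketched as a generalized least-squares motion estimate, in which each flow measurement is weighted by its inverse covariance so that poorly tracked points (or poorly tracked directions) contribute less. The Jacobian/flow/covariance interface below is an assumed simplification, not the thesis's code.

```python
import numpy as np

def weighted_motion_estimate(jacobians, flows, covariances):
    # Generalized least squares:
    #   (sum_i J_i^T C_i^-1 J_i) x = sum_i J_i^T C_i^-1 f_i
    # jacobians:   (2, k) arrays mapping k motion parameters to predicted flow
    # flows:       (2,) measured optical flow vectors
    # covariances: (2, 2) Gaussian flow covariances, one per tracked point
    k = jacobians[0].shape[1]
    H = np.zeros((k, k))
    b = np.zeros(k)
    for J, f, C in zip(jacobians, flows, covariances):
        W = np.linalg.inv(C)       # confidence = inverse flow covariance
        H += J.T @ W @ J
        b += J.T @ W @ f
    return np.linalg.solve(H, b)   # estimated motion parameters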
13

An Onboard Vision System for Unmanned Aerial Vehicle Guidance

Edwards, Barrett Bruce 17 November 2010 (has links) (PDF)
The viability of small Unmanned Aerial Vehicles (UAVs) as a stable platform for specific application use has been significantly advanced in recent years. Initial focus of lightweight UAV development was to create a craft capable of stable and controllable flight. This is largely a solved problem. Currently, the field has progressed to the point that unmanned aircraft can be carried in a backpack, launched by hand, weigh only a few pounds and be capable of navigating through unrestricted airspace. The most basic use of a UAV is to visually observe the environment and use that information to influence decision making. Previous attempts at using visual information to control a small UAV used an off-board approach where the video stream from an onboard camera was transmitted down to a ground station for processing and decision making. These attempts achieved limited results as the two-way transmission time introduced unacceptable amounts of latency into time-sensitive control algorithms. Onboard image processing offers a low-latency solution that will avoid the negative effects of two-way communication to a ground station. The first part of this thesis will show that onboard visual processing is capable of meeting the real-time control demands of an autonomous vehicle, which will also include the evaluation of potential onboard computing platforms. FPGA-based image processing will be shown to be the ideal technology for lightweight unmanned aircraft. The second part of this thesis will focus on the exact onboard vision system implementation for two proof-of-concept applications. The first application describes the use of machine vision algorithms to locate and track a target landing site for a UAV. GPS guidance was insufficient for this task. A vision system was utilized to localize the target site during approach and provide course correction updates to the UAV. The second application describes a feature detection and tracking sub-system that can be used in higher level application algorithms.
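As a rough illustration of the detect-then-track structure such a feature subsystem implements (realized in the thesis in FPGA logic rather than software), here is a sketch using OpenCV; the detector choice and parameters are our assumptions.

```python
import cv2

def track_features(prev_gray, next_gray, max_corners=200):
    # Detect corners in the previous frame and track them into the next
    # frame with pyramidal Lucas-Kanade optical flow.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    good = status.ravel() == 1
    return pts[good].reshape(-1, 2), nxt[good].reshape(-1, 2)
```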
14

Reconhecimento visual de gestos para imitação e correção de movimentos em fisioterapia guiada por robô / Visual gesture recognition for mimicking and correcting movements in robot-guided physiotherapy

Gambirasio, Ricardo Fibe 16 November 2015 (has links)
This dissertation develops a robotic system to guide patients through physiotherapy sessions. The proposed system uses the humanoid robot NAO, and it analyses patients' movements to guide, correct, and motivate them during a session. First, the system learns a correct physiotherapy exercise by observing a physiotherapist perform it; second, it demonstrates the exercise so that the patient can reproduce it; and finally, it corrects any mistakes the patient makes during the exercise. The correct exercise is captured via a Kinect sensor and divided into a sequence of states in the spatio-temporal dimension using k-means clustering. Those states compose a finite state machine that is used to verify whether the patient's movements are correct. The transition from one state to the next corresponds to the partial movements that compose the learned exercise, and occurs only when the robot observes the patient execute the partial movement correctly; otherwise, the system suggests a correction, returns to the same state, and asks the patient to try again. The system was tested with multiple patients undergoing physiotherapeutic treatment for motor impairments. Based on the results obtained, the system achieved high precision and recall across all partial movements. The emotional impact of the treatment on patients was also measured, via questionnaires administered before and after treatment and via software that recognizes emotions from video taken during treatment, showing a positive emotional impact that could help motivate physiotherapy patients, improving their motivation and recovery.
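A minimal sketch of the learn-then-verify loop described above, assuming each Kinect frame is a flattened joint-position vector; the number of states, the distance tolerance, and all names are illustrative assumptions rather than the system's actual code.

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_exercise(therapist_frames, n_states=8):
    # Cluster the therapist's skeleton frames into spatio-temporal states,
    # then collapse consecutive duplicates into the exercise's state sequence.
    km = KMeans(n_clusters=n_states, n_init=10).fit(therapist_frames)
    labels = km.labels_
    sequence = [labels[0]] + [l for p, l in zip(labels, labels[1:]) if l != p]
    return km, sequence

def verify_frame(km, sequence, state_idx, patient_frame, tol=0.5):
    # Advance the finite state machine only when the patient's frame is
    # close enough to the next expected state; otherwise stay and correct.
    target = km.cluster_centers_[sequence[state_idx]]
    if np.linalg.norm(patient_frame - target) < tol:
        return state_idx + 1, None                   # partial movement accepted
    return state_idx, "suggest correction and retry"
```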
16

Inferência dos ângulos críticos de voo por associação do fluxo óptico com a geometria da cena

Lima, Milton Macena Ramos de 26 March 2013 (has links)
The three attitude parameters, or critical flight parameters (Attack, Yaw and Roll), are the angles that describe the rotational movements of an aerial vehicle in three-dimensional space. From the estimates of these angles, flight stabilization can be accomplished; the need for it varies for each type of aerial vehicle in inverse proportion to the stability provided by its mechanical characteristics. Moreover, associating the critical angles with other flight parameters, such as altitude and velocity, enables the description of all vehicle movements and, consequently, the execution of a predetermined flight path. Critical flight parameters can be inferred by electromechanical inertial sensors, by the Global Positioning System, or by visual perception. Several authors note that electromechanical sensors fail and the Global Positioning System becomes unavailable for some period of time under various environmental conditions, leading to loss of orientation and, in the case of unmanned vehicles, to accidents, given the lack of support from a pilot in these situations. In this work, two techniques for estimating these parameters, both based on robotic vision, are integrated: the first is based on the position and inclination of the horizon with respect to the aerial vehicle; the second is based on the estimation of the optical flow of the scene in front of the aerial vehicle. The use of optical flow provides an estimate of the yaw parameter, which the horizon-based approach cannot deliver, and also enables the estimation of the critical flight parameters when the horizon is not visible. The two techniques are combined with camera parameters and integrated to make the estimation of the flight angles more robust. Experiments conducted on real images from fixed-wing aircraft flights demonstrated the method's efficacy in open, unstructured environments.
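The horizon-based half of the method can be illustrated with a short sketch that recovers roll and pitch from a detected horizon segment under a pinhole camera model; yaw, as the abstract notes, is left to the optical-flow component. The parameterization is our assumption.

```python
import numpy as np

def attitude_from_horizon(x1, y1, x2, y2, cx, cy, fy):
    # Roll is the inclination of the horizon segment (x1, y1)-(x2, y2);
    # pitch follows from the vertical offset between the horizon and the
    # principal point (cx, cy) through the focal length fy.
    # Assumes a non-vertical horizon (x1 != x2).
    roll = np.arctan2(y2 - y1, x2 - x1)
    t = (cx - x1) / (x2 - x1)            # horizon height at the image center
    y_h = y1 + t * (y2 - y1)
    pitch = np.arctan2(y_h - cy, fy)
    return roll, pitch                   # yaw is unobservable from the horizon
```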
17

Identifikace 3D objektů pro robotické aplikace / Identification of 3D objects for Robotic Applications

Hujňák, Jaroslav January 2020 (has links)
This thesis focuses on robotic 3D vision for application in bin picking. A new method based on Conformal Geometric Algebra (CGA) is proposed and tested for the identification of spheres in point clouds created with a 3D scanner. The speed, precision and scalability of this method are compared to those of a traditional descriptor-based method. It is shown that CGA maintains the same precision as the traditional method in a much shorter time. The CGA-based approach appears promising for future use in robotic 3D vision for the identification and localization of spheres.
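Part of the appeal of CGA here is that the conformal point embedding makes the sphere constraint linear, so a sphere can be recovered with a single least-squares solve. The sketch below uses that standard embedding and is our illustration, not necessarily the thesis's exact algorithm.

```python
import numpy as np

def fit_sphere_cga(points):
    # In CGA a point p embeds as P = p + e0 + 0.5*|p|^2 * e_inf, and a sphere
    # with center c and radius r is S = c + e0 + 0.5*(|c|^2 - r^2) * e_inf.
    # P . S = 0 exactly when p lies on the sphere, which is linear in
    # (c, d) with d = 0.5*(|c|^2 - r^2):  p . c - d = 0.5*|p|^2.
    p = np.asarray(points, dtype=float)           # (N, 3) scanned points
    A = np.hstack([p, -np.ones((len(p), 1))])     # unknowns: c_x, c_y, c_z, d
    b = 0.5 * np.sum(p * p, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c, d = sol[:3], sol[3]
    r = np.sqrt(max(float(c @ c) - 2.0 * d, 0.0))
    return c, r                                   # center and radius
```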
18

Time-Dependent Data: Classification and Visualization

Tanisaro, Pattreeya 14 November 2019 (has links)
Analyzing the immensity of data in space and time is a challenging task. In this thesis, time-dependent data is explored in various directions, with studies focused on data visualization, feature extraction, and data classification. The data used in the studies comes from various well-recognized archives and has been the basis of numerous research efforts; its characteristics range from univariate to multivariate time series, and from hand gestures to unconstrained views of general human movements. The experiments covered more than one hundred datasets. In addition, we also discuss the application of visual analytics to video data. Two approaches are proposed to create a feature vector for time-dependent data classification: one is designed especially for a bio-inspired model of human motion recognition, and the other is a subspace-based approach for arbitrary data characteristics. The extracted feature vectors of both approaches can be easily visualized in two-dimensional space. For classification, we experimented with various known models and offer a simple model using data in subspaces for lightweight computation. Furthermore, this method allows a data analyst to inspect feature vectors and detect anomalies in a large collection of data simultaneously. Various classification techniques were compared and the findings summarized. The studies can thus assist a researcher in picking an appropriate technique when setting up a model for a given characteristic of temporal data, and offer a new perspective for analyzing time series data. This thesis comprises two parts: the first gives an overview of time-dependent data and of this thesis with its focus on classification; the second covers the collection of seven publications.
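One plausible reading of the subspace-based feature extraction (our assumption, not the thesis's exact formulation) is a PCA-style projection of each series: the leading basis gives a fixed-length feature regardless of series length, and a two-dimensional projection of it can be plotted directly.

```python
import numpy as np

def subspace_feature(series, dim=2):
    # series: (T, C) multivariate time series. Center it, take the leading
    # right singular vectors, and use the flattened (dim, C) basis as a
    # fixed-length feature; X @ Vt[:2].T also gives a 2-D trajectory to plot.
    X = series - series.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:dim].ravel()
```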
19

Variational aleatoric uncertainty calibration in neural regression

Bhatt, Dhaivat 07 1900 (has links)
Calibrated and reliable confidence measures are a prerequisite for most robotic perception systems, since they are needed by the sensor fusion and planning components downstream. This is particularly true for safety-critical applications such as self-driving cars. In the context of deep learning, the sources of predictive uncertainty are categorized into epistemic and aleatoric uncertainty; there is also distributional uncertainty associated with out-of-distribution data. Epistemic uncertainty, also known as knowledge uncertainty, arises from noise in the model structure and parameters and can be reduced with more labeled data. Aleatoric uncertainty represents the inherent ambiguity in the input data and is generally irreducible in nature. Several methods exist for estimating aleatoric uncertainty through modified network structures or loss functions. In general, however, these methods lack calibration, meaning that the estimated uncertainties do not accurately represent the empirical data uncertainty. Current approaches to calibrating aleatoric uncertainty either require a held-out calibration dataset or modify the model parameters after training, and many add extra computation at inference time. To alleviate these issues, this thesis proposes a simple and effective method for training a calibrated neural regressor, designed from the first principles of calibration. Our key insight is that calibration can be achieved by imposing constraints across multiple examples, such as those in a mini-batch, as opposed to existing approaches that impose constraints only on a per-sample basis. By enforcing the distribution of outputs of the neural regressor (the proposal distribution) to resemble a target distribution by minimizing an f-divergence, we obtain significantly better-calibrated models than prior approaches. Our approach, f-Cal, is simple to implement or add to existing models, and it outperforms existing calibration methods on the large-scale real-world tasks of object detection and depth estimation. f-Cal can be implemented in 10-15 lines of PyTorch code and can be integrated with any probabilistic neural regressor in a minimally invasive way. This thesis also explores the estimation of distributional uncertainty for object detection, employing methods designed for classification setups. In particular, we attempt to detect out-of-distribution (OOD) samples, examples that are not part of the training data distribution, and establish a background-OOD problem that hampers the applicability of distributional uncertainty methods in object detection.
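A sketch consistent with the abstract's "10-15 lines of PyTorch" claim: the z-scores of a calibrated Gaussian regressor should be standard normal across a mini-batch, so a divergence penalty on their empirical moments is added to the usual negative log-likelihood. The KL instance (one member of the f-divergence family), the moment-matching simplification, and the weight `lam` are our assumptions.

```python
import torch

def f_cal_loss(mu, sigma, y, lam=0.1):
    # Gaussian negative log-likelihood (per-sample fit term).
    z = (y - mu) / sigma
    nll = (0.5 * z ** 2 + torch.log(sigma)).mean()
    # Calibration term enforced across the mini-batch: if the regressor is
    # calibrated, z ~ N(0, 1), so penalize KL(N(mean(z), var(z)) || N(0, 1)).
    m, v = z.mean(), z.var()
    kl = 0.5 * (v + m ** 2 - 1.0 - torch.log(v))
    return nll + lam * kl
```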
20

Interactive 3D Reconstruction / Interaktive 3D-Rekonstruktion

Schöning, Julius 23 May 2018 (has links)
Applicable image-based reconstruction of three-dimensional (3D) objects offers many interesting industrial as well as private use cases, such as augmented reality, reverse engineering, 3D printing and simulation tasks. Unfortunately, image-based 3D reconstruction is not yet applicable to these quite complex tasks, since the resulting 3D models are single, monolithic objects without any division into logical or functional subparts. This thesis aims at making image-based 3D reconstruction feasible, such that captures from standard cameras can be used for creating functional 3D models. The research presented in the following does not focus on fine-tuning algorithms to achieve minor improvements, but evaluates the entire processing pipeline of image-based 3D reconstruction and contributes at four critical points where significant improvement can be achieved through advanced human-computer interaction: (i) As the starting point of any 3D reconstruction process, the object of interest (OOI) to be reconstructed needs to be annotated. For this task, a novel pixel-accurate OOI annotation method, framed as an interactive process, is presented, and an appropriate software solution is released. (ii) To improve the interactive annotation process, traditional interface devices, like mouse and keyboard, are supplemented with human sensory data to achieve closer user interaction. (iii) In practice, a major obstacle is the lack of a standard file format for annotations, which has led to numerous proprietary solutions. Therefore, a uniform standard file format is implemented and used for prototyping the first gaze-improved computer vision algorithms. As a sideline of this research, analogies between the close interaction of humans and computer vision systems and 3D perception are identified and evaluated. (iv) Finally, to reduce the processing time of the underlying algorithms used for 3D reconstruction, the ability of artificial neural networks to reconstruct 3D models of unknown OOIs is investigated. Summarizing, the improvements gained show that applicable image-based 3D reconstruction is within reach, but nowadays feasible only with supporting human-computer interaction. Two software solutions are implemented, one for visual video analytics and one for spare-part reconstruction. In the future, automated 3D reconstruction that produces functional 3D models can be achieved only when algorithms become capable of acquiring semantic knowledge. Until then, the world knowledge provided to the 3D reconstruction pipeline by human-computer interaction is indispensable.
