161

Assessment of a Low Cost IR Laser Local Tracking Solution for Robotic Operations

Du, Minzhen 14 May 2021 (has links)
This thesis assessed the feasibility of using an off-the-shelf virtual reality tracking system as a low-cost, precise pose estimation solution for robotic operations in both indoor and outdoor environments. Such a tracking solution could assist critical operations related to planetary exploration missions, parcel handling and delivery, and wildfire detection and early-warning systems. The boom in virtual reality experiences has accelerated the development of various low-cost, precise indoor tracking technologies. For this thesis we chose to adapt the SteamVR Lighthouse system developed by Valve, which uses photodiodes on the trackers to detect rotating IR laser sheets emitted from anchored base stations, also known as lighthouses. Some previous research used the first generation of lighthouses, which has several limitations in the communication from lighthouses to the tracker; a NASA study reported poor tracking performance under sunlight. We chose the second-generation lighthouses, which improve that communication, and performed various experiments to assess their performance outdoors, including under sunlight. The study had two stages. The first stage focused on a controlled indoor environment, with an Unmanned Aircraft System (UAS) flying repeatable patterns while tracked simultaneously by the Lighthouse system and a reference indoor tracking system; it showed that the tracking precision of the Lighthouse system is comparable to an industry-standard indoor tracking solution. The second stage focused on outdoor experiments, comparing UAS flights between day and night conditions and assessing positioning accuracy with a CNC machine under indoor and outdoor conditions. The results showed matching performance between day and night, still comparable to the industry-standard indoor tracking solution down to centimeter precision, and matching the simulated CNC trajectory down to millimeter precision. There remains room for improvement in the experimental method and equipment used, and the tracking system itself needs improvements prior to adoption in real-world applications. / Master of Science
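The day/night and indoor comparisons above reduce to measuring how far the tracked positions deviate from a reference trajectory. A minimal sketch of such a comparison (illustrative only; the function name and synthetic data are assumptions, not code or data from the thesis):

```python
import numpy as np

def trajectory_rmse(estimated, reference):
    """Root-mean-square error between two time-aligned Nx3 position tracks (metres)."""
    estimated = np.asarray(estimated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    errors = np.linalg.norm(estimated - reference, axis=1)  # per-sample 3D error
    return float(np.sqrt(np.mean(errors ** 2)))

# Synthetic example: a circular reference flight vs. a track with ~1 cm per-axis noise
t = np.linspace(0, 2 * np.pi, 200)
reference = np.stack([np.cos(t), np.sin(t), np.full_like(t, 1.0)], axis=1)
rng = np.random.default_rng(0)
estimated = reference + rng.normal(scale=0.01, size=reference.shape)
print(f"RMSE: {trajectory_rmse(estimated, reference):.4f} m")  # ≈ 0.017 m
```

Centimeter-level agreement, as reported for the day/night comparison, would show up here as an RMSE on the order of 0.01 m.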
162

Accident Reconstruction in Ice Hockey: A Pipeline using Pose and Kinematics Estimation to Personalize Finite Element Human Body Models / Rekonstruktion av olyckor i ishockey: En pipeline som använder pose- och kinematikuppskattning för att anpassa finita element humanmodeller

Even, Azilis Emma Sulian January 2024 (has links)
Ice hockey is a sport whose athletes are at high risk of traumatic head injury due to the violence of potential impacts with other athletes, the ice, or the glass during games. Developing the best protective strategies for players requires a deep understanding of accident mechanisms during ice hockey games. Accident reconstruction using the finite element (FE) method enables systematic analysis of impact cases, but requires input data on the circumstances of the accidents. This project therefore focused on extracting the position and velocity of the players involved from readily available videos of ice hockey accidents using motion tracking methods. The project comprised two parts: pose estimation and velocity estimation. The pose estimation aimed to align a human body model (HBM) with the players' poses; the key steps included estimating 2D joints from impact images, estimating the players' 3D poses, inferring skeletons, and aligning the results with the baseline HBM via pelvic registration. The velocity estimation defined the initial conditions for simulating the collision; the key steps included identifying the players' 2D joints across impact video frames, tracking the players using a simplified pelvis projection on the rink plane, and estimating the players' velocity using a homography to locate them on the ice hockey rink. Both parts were then applied to accident cases from a video database of collisions that occurred during a hockey league season. The cases in which the pipeline was fully applied ultimately yielded LS-DYNA positioning files for the Total Human Model for Safety (THUMS), and problematic cases were used to map the limits of the chosen methodology. These limitations were mostly linked to the quality of the source videos, which depends heavily on their source and may not be controllable. Selection criteria are therefore required, such as checking the videos' blurriness and quality and the viewing angles, to ensure as few occlusions as possible. Overall, this project produced a working semi-automatic pipeline for pose and velocity estimation in contact-sport collisions, as well as a first set of personalized input data that should allow the reconstruction of ice hockey accidents using FE simulations.
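The velocity-estimation step described above, projecting the pelvis onto the rink plane via a homography and differencing positions across frames, can be sketched as follows. The homography, pixel track, and frame rate here are toy assumptions, not data from the project:

```python
import numpy as np

def to_rink(H, pixel_xy):
    """Map an image point to rink-plane coordinates through a 3x3 homography H."""
    p = H @ np.array([pixel_xy[0], pixel_xy[1], 1.0])
    return p[:2] / p[2]  # dehomogenize

def player_velocity(H, pixels, fps):
    """Estimate mean rink-plane speed (m/s) from per-frame pelvis pixel positions."""
    positions = np.array([to_rink(H, px) for px in pixels])
    displacements = np.diff(positions, axis=0)           # metres between frames
    speeds = np.linalg.norm(displacements, axis=1) * fps # metres per second
    return float(speeds.mean())

# Identity homography (pixels already in metres); 0.1 m per frame at 30 fps
H = np.eye(3)
track = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0), (0.3, 0.0)]
print(player_velocity(H, track, fps=30))  # ≈ 3.0 m/s
```

In the real pipeline the homography would be estimated from known rink landmarks (face-off circles, blue lines) rather than assumed.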
163

A Composite Field-Based Learning Framework for Pose Estimation and Object Detection : Exploring Scale Variation Adaptations in Composite Field-Based Pose Estimation and Extending the Framework for Object Detection / En sammansatt fältbaserad inlärningsramverk för posuppskattning och objektdetektering : Utforskning av skalvariationsanpassningar i sammansatt fältbaserad posuppskattning och utvidgning av ramverket för objektdetektering

Guo, Jianting January 2024 (has links)
This thesis addresses the concurrent challenges of multi-person 2D pose estimation and object detection within a unified bottom-up framework. Our foundation is OpenPifPaf, a recently proposed pose estimation framework grounded in composite fields. OpenPifPaf employs the Composite Intensity Field (CIF) for precise joint localization and the Composite Association Field (CAF) for seamless joint connectivity. To assess the model's robustness to scale variance, a Feature Pyramid Network (FPN) is incorporated into the baseline. Additionally, we present a variant of OpenPifPaf known as CifDet. CifDet uses the Composite Intensity Field to classify and detect object centers, then regresses bounding boxes from the identified centers. Furthermore, we introduce CifCafDet, an extended version of CifDet specifically tailored for enhanced object detection, designed to tackle the challenges inherent in object detection tasks more effectively. The baseline OpenPifPaf model outperforms most existing bottom-up pose estimation methods and achieves results comparable to some state-of-the-art top-down methods on the COCO keypoint dataset. Its variant, CifDet, adapts OpenPifPaf's composite field-based architecture for object detection tasks. Further modifications yield CifCafDet, which outperforms CifDet on the MS COCO detection dataset, suggesting its viability as a multi-task framework.
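The CifDet idea of detecting object centers in an intensity field and regressing boxes from them can be illustrated with a toy decoder. The field layout, stride, and threshold below are assumptions for illustration only, not OpenPifPaf's actual implementation:

```python
import numpy as np

def decode_centers(confidence, offsets, sizes, stride=8, threshold=0.5):
    """Toy CifDet-style decoding: each confident cell of a low-resolution field
    votes for an object centre; offsets refine the centre within the cell and
    sizes give the box width/height, all scaled by the network stride."""
    boxes = []
    ys, xs = np.where(confidence > threshold)
    for y, x in zip(ys, xs):
        cx = (x + offsets[0, y, x]) * stride
        cy = (y + offsets[1, y, x]) * stride
        w, h = sizes[:, y, x] * stride
        boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2,
                      confidence[y, x]))  # (x1, y1, x2, y2, score)
    return boxes

# One confident cell at (row 3, col 5) with a sub-cell offset and a 2x3-cell box
conf = np.zeros((8, 8)); conf[3, 5] = 0.9
off = np.zeros((2, 8, 8)); off[:, 3, 5] = [0.5, 0.25]
size = np.zeros((2, 8, 8)); size[:, 3, 5] = [2.0, 3.0]
print(decode_centers(conf, off, size))  # one box centred at (44, 26), 16x24 px
```

The real decoder additionally applies non-maximum suppression and uses the composite field's learned uncertainty; both are omitted here.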
164

Fusion de données visuo-inertielles pour l'estimation de pose et l'autocalibrage / Visuo-inertial data fusion for pose estimation and self-calibration

Scandaroli, Glauco Garcia 14 June 2013 (has links)
Systems with multiple sensors can provide information unavailable from a single source, and complementary sensory characteristics can improve accuracy and robustness to many vulnerabilities. Explicit pose measurements typically offer either high frequency or high precision; visuo-inertial sensors provide both. Vision algorithms accurately measure pose, though at low frequency, and limit the drift caused by integrating inertial data, while inertial measurement units yield incremental displacements at high frequency that initialize the vision algorithms and compensate for momentary loss of sight. This thesis analyzes two aspects of that problem. First, we survey direct visual tracking methods for pose estimation and propose a new technique based on normalized cross-correlation with region- and pixel-wise weighting, together with a Newton-like optimization. This method can accurately estimate pose under severe illumination changes. Second, we investigate the data fusion problem from a control point of view. The main results are novel observers for concurrent estimation of pose, IMU bias, and self-calibration. We analyze the rotational dynamics using tools from nonlinear control and provide stable observers on the group of rotation matrices. Additionally, we analyze the translational dynamics using tools from linear time-varying systems and propose sufficient conditions for uniform observability. The observability analyses allow us to prove uniform stability of the proposed observers. The proposed visual method and nonlinear observers are tested and compared to classical methods using several simulations and experiments with real visuo-inertial data.
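The normalized cross-correlation at the heart of the proposed direct visual tracking can be sketched as follows. This is a minimal illustration of why the score is invariant to affine illumination changes; the patch data are made up:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Zero-mean normalized cross-correlation of two equally sized patches.
    Returns a score in [-1, 1], invariant to gain and offset changes in brightness."""
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a = a - a.mean()                      # remove offset
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0                        # flat patch: correlation undefined
    return float(a @ b / denom)           # remove gain via normalization

rng = np.random.default_rng(1)
template = rng.random((8, 8))
brighter = 2.0 * template + 0.5           # gain/offset illumination change
print(ncc(template, brighter))            # ≈ 1.0: NCC ignores gain and offset
```

This invariance is what lets an NCC-based tracker keep working under the severe illumination changes mentioned above, where a plain sum-of-squared-differences criterion would fail.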
165

3D detection and pose estimation of medical staff in operating rooms using RGB-D images / Détection et estimation 3D de la pose des personnes dans la salle opératoire à partir d'images RGB-D

Kadkhodamohammadi, Abdolrahim 01 December 2016 (has links)
In this thesis, we address the two problems of person detection and pose estimation in Operating Rooms (ORs), key ingredients in the development of surgical assistance applications. We perceive the OR using compact RGB-D cameras that can be conveniently integrated into the room. These sensors provide complementary information about the scene, which enables us to develop methods that cope with the numerous challenges present in the OR, e.g., clutter, textureless surfaces, and occlusions. We present novel part-based approaches that take advantage of depth, multi-view, and temporal information to construct robust human detection and pose estimation models. Evaluation is performed on new single- and multi-view datasets recorded in operating rooms. We demonstrate very promising results and show that our approaches outperform state-of-the-art methods on this challenging data acquired during real surgeries.
166

Enabling physical action in computer mediated communication : an embodied interaction approach

Khan, Muhammad Sikandar Lal January 2015 (has links)
No description available.
167

Estudo de uma técnica para o tratamento de dead-times em operações de rastreamento de objetos por servovisão

Saqui, Diego 22 May 2014 (has links)
Visual servoing is a technique that uses computer vision to acquire visual information (via camera) and a closed-loop control system to control robots. One typical application of visual servoing is tracking objects on conveyors in industrial environments. Compared with other sensor types, visual servoing has the advantage of obtaining a large amount of information from the environment and offering greater flexibility in operations. A disadvantage is the delays, known as dead-times or time-delays, that can occur while processing visual information in computer vision tasks or in other control-system tasks demanding heavy processing. Dead-times in visual servoing applied to industrial operations, such as tracking objects on conveyors, are critical and can reduce production capacity in manufacturing environments. Several methodologies for this problem can be found in the literature, many of them based on the Kalman filter. In this work, a technique based on the Kalman filter formulation, previously studied for predicting the future pose of objects in linear motion, was selected. This methodology was studied in detail, tested through simulations, and analyzed for other motion types and several applications. Three types of experiments were conducted: one for different types of motion and two others applied to different signals in the velocity controller. The results for object motion showed that the technique can estimate the future pose of objects moving linearly or along smooth curves, but is inefficient for drastic changes in motion. Regarding the signal to be filtered in the velocity controller, the methodology proved applicable (under the motion conditions above) only for estimating the object's pose after the dead-times caused by computer vision; this estimate is then used to compute the future error between the object and the robotic manipulator, which in turn is used to compute the robot's velocity. Applying the methodology directly to the error used in the velocity computation did not produce good results. Overall, the methodology is suitable for tracking objects moving linearly or along smooth curves, as in the case of objects transported by conveyors in industrial environments. / Financiadora de Estudos e Projetos
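The core of the selected methodology, a Kalman filter that tracks an object in roughly linear motion and extrapolates its pose across a known dead-time, can be sketched as follows. This is a simplified 1-D constant-velocity illustration with assumed noise parameters, not the exact formulation used in the thesis:

```python
import numpy as np

def kalman_cv_track(measurements, dt, q=1e-3, r=1e-2):
    """1-D constant-velocity Kalman filter over position measurements.
    Returns the final state [position, velocity] and its covariance."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition (constant velocity)
    H = np.array([[1.0, 0.0]])             # we measure position only
    Q = q * np.eye(2)                      # process noise (assumed)
    R = np.array([[r]])                    # measurement noise (assumed)
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    for z in measurements[1:]:
        x = F @ x                          # predict
        P = F @ P @ F.T + Q
        y = z - H @ x                      # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ y                      # update
        P = (np.eye(2) - K @ H) @ P
    return x, P

def predict_ahead(x, dead_time):
    """Extrapolate the filtered state across a known dead-time."""
    F = np.array([[1.0, dead_time], [0.0, 1.0]])
    return F @ x

# Object on a conveyor at 0.2 m/s, sampled at 20 Hz; compensate a 100 ms dead-time
dt = 0.05
zs = [0.2 * dt * k for k in range(40)]
x, _ = kalman_cv_track(zs, dt)
print(predict_ahead(x, 0.1)[0])  # ≈ 0.41 m, the expected position 0.1 s ahead
```

As the thesis observes, this prediction works for linear motion and smooth curves but degrades under drastic motion changes, since the constant-velocity model no longer holds.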
168

3D Pose estimation of continuously deformable instruments in robotic endoscopic surgery / Mesure par vision de la position d'instruments médicaux flexibles pour la chirurgie endoscopique robotisée

Cabras, Paolo 24 February 2016 (has links)
Knowing the 3D position of robotized instruments can be useful in the surgical context, e.g., for automatic control or gesture guidance. We propose two methods to infer the 3D pose of a single-bending-section instrument equipped with colored markers, using only the images provided by the monocular camera embedded in the endoscope. A graph-based method segments the markers, and their corners are extracted by detecting color transitions along Bézier curves fitted on edge points. These features are used to estimate the 3D pose of the instrument with an adaptive model that accounts for the mechanical play of the system. Since this approach can be affected by model uncertainties, the image-to-3D function can instead be learned from a training set. We studied and improved two techniques for this: a Radial Basis Function network with Gaussian kernels and Locally Weighted Projection Regression. The proposed methods are validated on a robotic experimental cell and on in-vivo sequences.
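The idea of learning the image-to-3D function with a Gaussian-kernel RBF network can be sketched on a toy 1-D regression problem. The target function, centre placement, bandwidth, and regularization below are assumptions for illustration, not values from the thesis:

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Gaussian RBF features: one column per centre."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rbf_fit(X, Y, centers, sigma, reg=1e-6):
    """Ridge-regularized least-squares weights mapping RBF features to targets."""
    Phi = rbf_design(X, centers, sigma)
    A = Phi.T @ Phi + reg * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ Y)

def rbf_predict(X, centers, sigma, W):
    return rbf_design(X, centers, sigma) @ W

# Toy stand-in for the image-to-3D map: learn y = sin(x) from 50 samples
X = np.linspace(-3, 3, 50)[:, None]
Y = np.sin(X)
centers = np.linspace(-3, 3, 15)[:, None]
W = rbf_fit(X, Y, centers, sigma=0.5, reg=1e-8)
pred = rbf_predict(np.array([[1.0]]), centers, 0.5, W)
print(pred[0, 0])  # ≈ sin(1.0) ≈ 0.84
```

In the actual application the inputs would be image features and the outputs 3D pose parameters; the appeal of the learned map is that it sidesteps the geometric model and its mechanical-play uncertainties.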
169

Fusion de données capteurs visuels et inertiels pour l'estimation de la pose d'un corps rigide / Rigid body pose estimation using fusion of inertial and visual sensor data

Seba, Ali 16 June 2015 (has links)
This thesis addresses the problem of estimating the pose (relative position and orientation) of a rigid body moving in 3D space by fusing data from inertial and visual sensors. The inertial measurements come from an Inertial Measurement Unit (IMU) composed of 3-axis gyroscopes and 3-axis accelerometers. The visual data come from a camera positioned on the moving rigid body, which provides images of the perceived visual field; implicit measurements of the directions of lines, assumed fixed in the scene, projected onto the image plane are used in the attitude estimation algorithm. The first step was to handle the visual measurement over long sequences using image features: a line-tracking algorithm was proposed, based on the optical flow of points extracted from the tracked lines and a matching approach that minimizes the Euclidean distance. Next, an observer designed in SO(3) was proposed to estimate the relative orientation of the rigid body in the 3D scene by fusing the output of the line-tracking algorithm with gyroscope data. The observer gain was derived using a Multiplicative Extended Kalman Filter (MEKF), and the sign ambiguity arising from the implicit measurement of line directions was taken into account in the observer design. Finally, the estimation of the relative position and absolute velocity of the rigid body in the 3D scene was addressed. Two observers were proposed. The first is a cascaded observer that decouples attitude estimation from position estimation: the attitude estimate feeds a nonlinear observer that uses accelerometer measurements to estimate the relative position and absolute velocity of the rigid body. The second observer, designed directly in SE(3), estimates the pose by fusing inertial data (accelerometers, gyroscopes) and visual data with an MEKF. The performance of the proposed methods is illustrated and validated by various simulation results.
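The rotation observer described above, gyroscope integration corrected by a vector measurement, can be illustrated with a minimal passive complementary filter on SO(3) in the spirit of Mahony-style observers. The gain, time step, and scenario are assumptions for illustration, not the thesis's MEKF design:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix such that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Rodrigues formula: exponential map from so(3) to SO(3)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def observer_step(R_hat, omega, v_body, v_ref, dt, k=2.0):
    """One step of a passive complementary filter on SO(3): integrate the gyro
    rate, corrected by the mismatch between a reference direction measured in
    the body frame and its current estimate."""
    v_hat = R_hat.T @ v_ref                   # predicted body-frame direction
    correction = k * np.cross(v_body, v_hat)  # innovation term in so(3)
    return R_hat @ so3_exp((omega + correction) * dt)

# True attitude fixed at identity, gyro reads zero; start from a 0.5 rad yaw error
R_hat = so3_exp(np.array([0.0, 0.0, 0.5]))
v_ref = np.array([1.0, 0.0, 0.0])             # fixed direction in the scene
for _ in range(200):
    R_hat = observer_step(R_hat, np.zeros(3), v_ref, v_ref, dt=0.05)
print(np.round(R_hat, 3))  # converges toward the identity (yaw error decays)
```

A single direction leaves rotation about that direction unobservable, which mirrors why the thesis carefully analyzes observability and the sign ambiguity of line-direction measurements.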
170

Odhad pózy kamery z přímek pomocí přímé lineární transformace / Camera Pose Estimation from Lines using Direct Linear Transformation

Přibyl, Bronislav Unknown Date (has links)
This doctoral thesis deals with camera pose estimation from correspondences of 3D and 2D lines, i.e., the Perspective-n-Line (PnL) problem. It focuses on scenarios with a large number of lines, which can be solved efficiently by methods exploiting a linear formulation of PnL. Until now, only methods working with correspondences of 3D points and 2D lines were known. Based on this observation, two new methods built on the Direct Linear Transformation (DLT) algorithm were proposed: DLT-Plücker-Lines, which works with correspondences of 3D and 2D lines, and DLT-Combined-Lines, which works with both 3D point to 2D line and 3D line to 2D line correspondences. In the latter case, the redundant 3D information is exploited to reduce the minimum number of required line correspondences to 5 and to improve the accuracy of the method. The proposed methods were thoroughly tested under various conditions, including simulated and real data, and compared with the best existing PnL methods. DLT-Combined-Lines achieves results better than or comparable to the best existing methods while remaining very fast. This thesis also introduces a unified framework for describing DLT-based camera pose estimation methods; both proposed methods are defined within this framework.
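The DLT machinery underlying these methods can be illustrated with the classic point-based variant, which recovers the 3x4 projection matrix from 3D-2D point correspondences by solving a homogeneous linear system; the line-based formulations of the thesis build analogous systems from Plücker coordinates. All data below are synthetic:

```python
import numpy as np

def dlt_camera_matrix(points_3d, points_2d):
    """Classic point-based DLT: recover the 3x4 projection matrix P (up to
    scale) from n >= 6 correspondences by solving the homogeneous system A p = 0."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.array(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)   # right null vector = flattened P

# Synthetic check: project known 3D points with a known P, then recover it
P_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, 20.0],
                   [0.0, 0.0, 1.0, 2.0]])
rng = np.random.default_rng(2)
pts3d = rng.uniform(-1, 1, size=(8, 3))
proj = (P_true @ np.c_[pts3d, np.ones(8)].T).T
pts2d = proj[:, :2] / proj[:, 2:3]
P_est = dlt_camera_matrix(pts3d, pts2d)
P_est *= P_true[2, 3] / P_est[2, 3]   # fix the arbitrary scale for comparison
print(np.allclose(P_est, P_true, atol=1e-4))  # True
```

With noisy data the recovered matrix is only a least-squares estimate, which is why practical DLT pipelines normalize coordinates and often refine the result nonlinearly.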
