
Fusion de données visuo-inertielles pour l'estimation de pose et l'autocalibrage / Visuo-inertial data fusion for pose estimation and self-calibration

Scandaroli, Glauco Garcia 14 June 2013 (has links)
Systems with multiple sensors can provide information unavailable from a single source, and complementary sensory characteristics can improve accuracy and robustness to many vulnerabilities as well. Explicit pose measurements are usually either high-frequency or precise, but visuo-inertial sensors offer both features: vision algorithms accurately measure pose at low frequencies but limit the drift due to the integration of inertial data, while inertial measurement units yield incremental displacements at high frequencies that initialize the vision algorithms and compensate for momentary loss of sight. This thesis analyzes two aspects of the problem. First, we survey direct visual tracking methods for pose estimation and propose a new technique based on normalized cross-correlation, region- and pixel-wise weighting, and a Newton-like optimization. This method accurately estimates pose under severe illumination changes. Second, we investigate the data fusion problem from a control point of view. The main results are novel observers for the concurrent estimation of pose, IMU bias, and self-calibration parameters. We analyze the rotational dynamics using tools from nonlinear control and provide stable observers on the group of rotation matrices. Additionally, we analyze the translational dynamics using tools from linear time-varying systems and propose sufficient conditions for uniform observability. These observability analyses allow us to prove uniform stability of the proposed observers. The visual method and the nonlinear observers are tested and compared to classical methods in simulations and in experiments with real visuo-inertial data.
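The illumination robustness of correlation-based direct tracking comes from a basic property: normalized cross-correlation is invariant to affine changes in intensity (gain and offset). A minimal sketch of the score itself, not the thesis's weighted, Newton-optimized formulation:

```python
import numpy as np

def normalized_cross_correlation(patch, template):
    """Zero-mean normalized cross-correlation (NCC) between two
    equally sized image patches. The score is unchanged when the
    patch intensities undergo an affine change a*I + b with a > 0,
    which is why NCC-based trackers tolerate strong lighting shifts."""
    p = np.asarray(patch, dtype=float) - np.mean(patch)
    t = np.asarray(template, dtype=float) - np.mean(template)
    denom = np.linalg.norm(p) * np.linalg.norm(t)
    if denom == 0.0:
        return 0.0  # one of the patches is constant
    return float(np.dot(p.ravel(), t.ravel()) / denom)
```

A patch equal to the template up to gain and offset scores exactly +1; an inverted patch scores -1.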

3D detection and pose estimation of medical staff in operating rooms using RGB-D images / Détection et estimation 3D de la pose des personnes dans la salle opératoire à partir d'images RGB-D

Kadkhodamohammadi, Abdolrahim 01 December 2016 (has links)
In this thesis, we address the two problems of person detection and pose estimation in Operating Rooms (ORs), which are key ingredients in the development of surgical assistance applications. We perceive the OR using compact RGB-D cameras that can be conveniently integrated into the room. These sensors provide complementary information about the scene, which enables us to develop methods that can cope with the numerous challenges present in the OR, e.g. clutter, textureless surfaces, and occlusions. We present novel part-based approaches that take advantage of depth, multi-view, and temporal information to construct robust human detection and pose estimation models. Evaluation is performed on new single- and multi-view datasets recorded in operating rooms. We demonstrate very promising results and show that our approaches outperform state-of-the-art methods on this challenging data acquired during real surgeries.

Dynamic Headpose Classification and Video Retargeting with Human Attention

Anoop, K R January 2015 (has links) (PDF)
Over the years, extensive research has been devoted to the study of people's head pose, owing to its relevance in security, human-computer interaction, and advertising, as well as in cognitive, neuro- and behavioural psychology. One of the main goals of this thesis is to estimate people's 3D head orientation as they move freely in naturalistic settings such as parties and supermarkets. Head pose classification from surveillance images acquired with distant, large field-of-view cameras is difficult because the captured faces are low-resolution and blurred in appearance. Labelling sufficient training data for head pose estimation in such settings is also difficult, due to the motion of the targets and the large possible range of head orientations. Domain adaptation approaches are useful for transferring knowledge from the training source to test target data with different attributes, minimizing target-labelling effort in the process. This thesis examines the use of transfer learning for efficient multi-view head pose classification. The relationship between head pose and facial appearance is first learned from many labelled examples in the source data; domain adaptation techniques are then employed to transfer this knowledge to the target data. Three challenging situations are addressed: (I) the ranges of head poses in the source and target images differ; (II) the source images capture a stationary person while the target images capture a moving person whose facial appearance varies due to changing perspective and scale; and (III) a combination of (I) and (II). All proposed transfer learning methods are thoroughly tested and benchmarked on DPOSE, a newly compiled dataset for head pose classification. This thesis also introduces Covariance Profiles (CPs), a novel signature representation for describing object sets with covariance descriptors. A CP is well suited to representing a set of similarly related objects: it posits that the covariance matrices pertaining to a specific entity share the same eigen-structure. Such a representation is not only compact but also eliminates the need to store all the training data. Experiments with CPs on images as well as videos are shown for applications such as object-track clustering and head pose estimation. In the second part, human gaze is explored for interest-point detection in video retargeting. Regions of a video stream that attract human interest contribute significantly to human understanding of the video, and predicting salient and informative Regions of Interest (ROIs) from a sequence of eye movements is a challenging problem. This thesis proposes an interactive human-in-the-loop framework that models eye movements and predicts visual saliency in yet-unseen frames. Eye-tracking data and video content are used to model visual attention in a manner that accounts for temporal discontinuities due to sudden eye movements, noise, and behavioural artefacts. Gaze buffering is proposed for eye-gaze analysis and its fusion with content-based features; the method uses eye-gaze information together with bottom-up and top-down saliency to boost the importance of image pixels. The resulting robust visual-saliency prediction is instantiated for content-aware video retargeting.
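The covariance descriptors that CPs build on summarize a set of feature vectors sampled from an object by their covariance matrix. A minimal sketch of such a descriptor (a generic region covariance, not the thesis's CP construction with shared eigen-structure):

```python
import numpy as np

def covariance_descriptor(features):
    """Region covariance descriptor: `features` is an (n, d) array of
    d-dimensional feature vectors (e.g. intensity, gradients, pixel
    position) sampled from an object region. Returns the d x d sample
    covariance matrix, a compact signature of the region."""
    f = np.asarray(features, dtype=float)
    centered = f - f.mean(axis=0)
    return centered.T @ centered / (f.shape[0] - 1)
```

The descriptor is symmetric positive semi-definite and has fixed size d x d regardless of how many samples the region contains, which is what makes it compact.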

Estudo de uma técnica para o tratamento de dead-times em operações de rastreamento de objetos por servovisão / A study of a technique for handling dead-times in visual-servoing object tracking operations

Saqui, Diego 22 May 2014 (has links)
Visual servoing is a technique that uses computer vision to acquire visual information (from a camera) and a closed-loop control system to control robots. One typical application of visual servoing is tracking objects on conveyors in industrial environments. Compared with other types of sensors, visual servoing has the advantages of obtaining a large amount of information from the environment and of greater flexibility in operations. A disadvantage is the delays, known as dead-times or time-delays, that can occur during the treatment of visual information in computer vision tasks or in other control-system tasks that require large processing capacity. Dead-times in visual servoing applied to industrial operations, such as tracking objects on conveyors, are critical and can negatively affect production capacity in manufacturing environments. Several methodologies for this problem can be found in the literature, often based on the Kalman filter. In this work, a technique was selected based on a Kalman filter formulation that had already been studied for predicting the future pose of objects in linear motion. This methodology was studied in detail, then tested and analyzed through simulations of other motions and some applications. Three types of experiments were generated: one for different types of motion, and two others applying the filter to different signals in the velocity control system. The results for object motion show that the technique is able to estimate the future pose of objects moving linearly or along smooth curves, but it is inefficient under drastic changes in motion. With respect to the signal to be filtered in the velocity controller, the methodology proved applicable (under the motion conditions above) only for estimating the pose of the object after the dead-times caused by computer vision; this estimate is subsequently used to compute the future error between the object and the robotic manipulator, which in turn is used to compute the robot's velocity. Applying the methodology directly to the error used to compute the velocity applied to the robot did not produce good results. Overall, the methodology is applicable to tracking objects with linear motion and smooth curves, as in the case of objects transported by conveyors in industrial environments. (Funding: Financiadora de Estudos e Projetos.)
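The dead-time compensation idea can be sketched with the prediction half of a constant-velocity Kalman filter: the state is simply propagated forward over the dead-time before being used to compute the tracking error. A generic illustration, not the thesis's exact formulation:

```python
import numpy as np

def predict_ahead(x, dt, steps):
    """Propagate a constant-velocity state x = [px, py, vx, vy]
    forward by `steps` intervals of length `dt` (i.e. across the
    dead-time) and return the predicted 2D position."""
    F = np.array([[1.0, 0.0, dt, 0.0],   # px += vx * dt
                  [0.0, 1.0, 0.0, dt],   # py += vy * dt
                  [0.0, 0.0, 1.0, 0.0],  # vx constant
                  [0.0, 0.0, 0.0, 1.0]]) # vy constant
    x = np.asarray(x, dtype=float)
    for _ in range(steps):
        x = F @ x
    return x[:2]
```

Because the model assumes constant velocity, this prediction is accurate for linear motion and smooth curves but degrades under abrupt motion changes, which is consistent with the results reported above.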

Posicionamento e movimentação de um robô humanóide utilizando imagens de uma câmera móvel externa / Positioning and movement of a humanoid robot using images from an external mobile camera

Nogueira, Marcelo Borges 20 December 2005 (has links)
This work proposes a method to localize a simple humanoid robot, without embedded sensors, using images taken from an external camera together with image processing techniques. Once the robot is localized relative to the camera, and supposing the position of the camera relative to the world is known, we can compute the position of the robot relative to the world. To let the camera move within the workspace, we place it on a second, wheeled mobile robot equipped with a precise localization system. Once the humanoid is localized in the workspace, we can take the necessary actions to move it; simultaneously, we move the camera robot so that it keeps a good view of the humanoid. The main contributions of this work are: the idea of using another mobile robot to aid the navigation of a humanoid robot that has no advanced embedded electronics; the choice of intrinsic and extrinsic calibration methods appropriate to the task, especially in the real-time part; and the collaborative algorithm for simultaneous navigation of the two robots. (Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior.)
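The localization chain described above is a composition of rigid transforms: the camera robot's localization gives the camera's pose in the world, vision gives the humanoid's pose in the camera frame, and the humanoid's world pose follows by matrix multiplication. A sketch in 2D homogeneous coordinates (the frame names and numbers are illustrative, not from the thesis):

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 3x3 matrix for a planar pose (x, y, heading theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

# Pose of the camera in the world (from the wheeled robot's localization)
world_T_cam = se2(2.0, 1.0, np.pi / 2)
# Pose of the humanoid in the camera frame (from image processing)
cam_T_hum = se2(1.0, 0.0, 0.0)
# Pose of the humanoid in the world: compose the two transforms
world_T_hum = world_T_cam @ cam_T_hum
```

With the camera at (2, 1) facing along the world y-axis and the humanoid 1 m ahead of it, the composition places the humanoid at world position (2, 2).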

3D Pose estimation of continuously deformable instruments in robotic endoscopic surgery / Mesure par vision de la position d'instruments médicaux flexibles pour la chirurgie endoscopique robotisée

Cabras, Paolo 24 February 2016 (has links)
Knowing the 3D position of robotized instruments can be very useful in a surgical context, e.g. for their automatic control or for gesture guidance. We propose two methods to infer the 3D pose of an instrument with a single bending section, equipped with colored markers, using only the images provided by the monocular camera embedded in the endoscope. A graph-based method is used to segment the markers, and their apparent corners are extracted by detecting color transitions along Bézier curves fitted to edge points. These features are used to estimate the 3D pose of the instrument with an adaptive model that takes into account the mechanical play of the system. Since this approach can be affected by uncertainties in the geometric model, the image-to-3D-pose function can instead be learned from a training set. Two techniques were studied and improved for this purpose: Radial Basis Function networks with Gaussian kernels and Locally Weighted Projection Regression. The proposed methods are validated on a robotic experimental cell and on in-vivo sequences.
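The learned image-to-3D mapping can be illustrated with a basic Gaussian-kernel RBF regressor: fix a set of centers, evaluate the kernel activations over the training inputs, and solve for the output weights by least squares. A generic sketch under assumed centers and kernel width, not the thesis's improved variant:

```python
import numpy as np

def rbf_fit(X, Y, centers, gamma):
    """Fit output weights W of a Gaussian RBF network mapping rows of
    X (n, din) to rows of Y (n, dout), with fixed centers (k, din)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-gamma * d2)                 # (n, k) kernel activations
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return W

def rbf_predict(X, centers, gamma, W):
    """Evaluate the fitted network at new inputs X."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ W
```

With the centers placed at the training inputs themselves, the network interpolates a smooth target function almost exactly at those points.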

Fusion de données capteurs visuels et inertiels pour l'estimation de la pose d'un corps rigide / Rigid body pose estimation using fusion of inertial and visual sensor data

Seba, Ali 16 June 2015 (has links)
This thesis addresses the problem of estimating the pose (relative position and orientation) of a rigid body moving in 3D space by fusing data from inertial and visual sensors. The inertial measurements are provided by an IMU (Inertial Measurement Unit) composed of 3-axis gyroscopes and 3-axis accelerometers. The visual data come from a camera positioned on the moving body, which provides images of the perceived visual field. The implicit measurements of the directions of lines, assumed fixed in the scene and projected onto the image plane, are used in the attitude estimation algorithm. The approach first addresses the problem of obtaining the visual measurement over a long sequence using image features: a line tracking algorithm is proposed, based on the optical flow of points extracted from the lines to be tracked, with matching performed by minimization of the Euclidean distance. Next, an observer designed on SO(3) is proposed to estimate the relative orientation of the rigid body in the 3D scene by fusing the output of the line tracking algorithm with gyroscope data. The observer gain is derived using a Multiplicative Extended Kalman Filter (MEKF), and the sign ambiguity due to the implicit measurement of line directions is taken into account in the observer design. Finally, the estimation of the relative position and absolute velocity of the rigid body in the 3D scene is addressed. Two observers are proposed. The first is a cascaded observer that decouples attitude estimation from position estimation: the attitude observer's estimate feeds a nonlinear observer that uses accelerometer measurements to provide estimates of the relative position and absolute velocity of the rigid body. The second observer, designed directly on SE(3), uses an MEKF to estimate position and orientation simultaneously by fusing inertial data (accelerometers, gyroscopes) with visual data. The performance of the proposed methods is illustrated and validated by various simulation results.
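Attitude observers on SO(3) propagate the rotation estimate with the gyroscope through the exponential map, which keeps the estimate on the rotation group instead of drifting off it as naive entry-wise integration would. A minimal propagation step; the correction term driven by the line measurements, which is central to the thesis's observer, is omitted here:

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix such that skew(w) @ v == cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Rodrigues formula: matrix exponential of skew(w)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def propagate(R, omega, dt):
    """One gyro integration step: body angular rate omega held over dt."""
    return R @ so3_exp(omega * dt)
```

Each step multiplies by an exact rotation, so the estimate stays orthonormal by construction.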

Extraction de comportements reproductibles en avatar virtuel / Extraction of reproducible behaviours in a virtual avatar

Dare, Kodjine 10 1900 (has links)
Given an image depicting a person, we (human beings) can visualize the different parts of the person in three dimensions despite the two-dimensional aspect of the image. This perceptual skill is mastered through years of observing humans; yet while such estimation is easy for human beings, it can be challenging for machines. 3D human pose estimation uses a 3D skeleton to represent the posture of the human body. In this thesis, we describe an approach that estimates poses from video with the objective of reproducing the observed movements with a virtual avatar. We pursue two main objectives. First, we extract initial body-part coordinates in 2D using a method that predicts joint locations via part affinity fields (PAFs), and then estimate 3D body-part coordinates with a full 3D human mesh reconstruction approach supplemented by the previously estimated 2D coordinates. Second, we explore the reconstruction of a virtual avatar from the extracted 3D coordinates, so as to transfer the human movements to the animated avatar; this allows the behavioral dynamics of a human to be extracted. Thanks to this supplement of 2D coordinates, our multi-stage approach yields better estimation and extraction results than similar solutions. With the final extracted coordinates, we apply a per-frame transfer of the positions to the skeleton of a virtual avatar in order to reproduce the movements extracted from the video.
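Transferring per-frame 3D joint positions onto an avatar skeleton typically goes through joint angles rather than raw positions. One minimal ingredient is the interior angle at a joint computed from three 3D keypoints (an illustrative helper, not the thesis's retargeting pipeline):

```python
import numpy as np

def joint_angle(parent, joint, child):
    """Interior angle (radians) at `joint` formed by the two bones
    joint->parent and joint->child, each endpoint a 3D keypoint."""
    u = np.asarray(parent, float) - np.asarray(joint, float)
    v = np.asarray(child, float) - np.asarray(joint, float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # clip guards against round-off pushing cos_a slightly past +/-1
    return float(np.arccos(np.clip(cos_a, -1.0, 1.0)))
```

A fully extended limb gives an angle of pi; a limb bent at a right angle gives pi/2.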

Estimation de pose 2D par réseau convolutif / 2D pose estimation with a convolutional network

Huppé, Samuel 04 1900 (has links)
Magic: The Gathering is an imperfect-information, stochastic, collectible card game invented by Richard Garfield in 1993. The goal of this project is to propose a machine learning pipeline capable of detecting and localizing Magic cards within an image typical of the game's tournaments. This is a 2D object pose problem with four degrees of freedom, namely translation in x and y, rotation, and scale, in a context where cards can overlap one another. We tackle this problem with deep learning, using a combination of two separate neural networks trained on synthetic data which, collectively, identify the cards and regress these pose parameters. Through the course of the project, we developed a method of realistic synthetic data generation to train both models, and we show that the synthetic dataset is realistic enough for the networks to generalize to real-world images: the final pipeline recovers, with a very good degree of precision, the poses of cards within real images. The results show that our pose subnetwork is able to predict position within half a pixel, rotation within one degree, and scale within 2 percent.
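The four-degree-of-freedom pose in question is a 2D similarity transform: a rotation-scale matrix applied to each card point plus a translation. A sketch with hypothetical values, not the project's network outputs:

```python
import numpy as np

def similarity_transform(points, tx, ty, theta, scale):
    """Apply a 4-DoF 2D pose (translation tx, ty, rotation theta,
    uniform scale) to an (n, 2) array of points, e.g. card corners."""
    c, s = np.cos(theta), np.sin(theta)
    R = scale * np.array([[c, -s],
                          [s,  c]])
    return np.asarray(points, float) @ R.T + np.array([tx, ty])
```

For example, the point (1, 0) rotated by 90 degrees, scaled by 2, and translated by (3, 4) lands at (3, 6).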

Skeleton Tracking for Sports Using LiDAR Depth Camera / Skelettspårning för sport med LiDAR-djupkamera

Efstratiou, Panagiotis January 2021 (has links)
Skeletal tracking can be accomplished by deploying human pose estimation strategies. Deep learning has been shown to be the paramount approach in this realm, and in combination with a light-detection-and-ranging (LiDAR) depth camera, the development of a markerless motion analysis software system appears feasible. The project utilizes a trained convolutional neural network to track humans during sport activities and to provide feedback after biomechanical analysis. Implementations of four filtering methods suited to the nature of the movement are presented: a Kalman filter, a fixed-interval smoother, a Butterworth filter, and a moving average filter. The software appears practicable in the field, evaluating videos at 30 Hz, as demonstrated on indoor cycling and hammer throwing events. With a non-static camera and a standstill, upright person, the system behaves quite well, with mean absolute errors of 8.32% and 6.46% with respect to the left and right knee angles, respectively. An impeccable system would benefit not only the sports domain but also the health industry as a whole.
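Of the four filters listed, the moving average is the simplest to sketch: each smoothed joint-angle sample is the mean of a sliding window of recent raw samples (a generic illustration; the window length is an assumption):

```python
import numpy as np

def moving_average(signal, window):
    """Causal moving-average filter: out[i] is the mean of the last
    `window` samples of `signal` up to and including index i."""
    x = np.asarray(signal, dtype=float)
    out = np.empty_like(x)
    for i in range(len(x)):
        lo = max(0, i - window + 1)  # shorter window at the start
        out[i] = x[lo:i + 1].mean()
    return out
```

A constant signal passes through unchanged, while a step in the raw joint angle is spread over `window` samples, which is the smoothing/latency trade-off inherent to this filter.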
