61

Identification, classification and modelling of Traditional African dances using deep learning techniques

Adebunmi Elizabeth Odefunso (10711203) 06 May 2021 (has links)
Human action recognition continues to evolve and is examined better using deep learning techniques. Several successes have been recorded in the field of action recognition, but very few have focused on dance. This is because dance actions, especially Traditional African dances, are long and involve fast movement of body parts. This research proposes a novel framework that applies data science algorithms to the field of cultural preservation, using various deep learning techniques to identify, classify, and model Traditional African dances from videos. Traditional African dances are an important part of African culture and heritage, and digital preservation of these dances in their myriad forms is an open problem. The dance dataset was constituted from freely available YouTube videos. Three Traditional African dances – Adowa, Bata and Swange – were used for the dance classification process. Two Convolutional Neural Network (CNN) models were used for the classification, achieving accuracies of 97% and 98% respectively. Sound classification of the Adowa, Bata and Swange drum ensembles was also carried out, achieving an accuracy of 96%. Human pose estimation algorithms were applied to the Sinte dance, yielding a model of Sinte dance that can be exported to other environments.
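The abstract names CNNs for the video classification but not their architectures; as a minimal, hedged sketch of the general approach, a frame-level CNN classifier over RGB frames could look like the following. The layer sizes, input resolution, and majority-vote aggregation are illustrative assumptions, not the thesis design.

```python
# Minimal sketch (not the thesis architecture): a frame-level CNN that
# classifies video frames into three dance classes (Adowa, Bata, Swange).
# A video-level label could then be obtained by majority vote over frames.
import torch
import torch.nn as nn

class DanceFrameCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, H, W) RGB frames
        return self.classifier(self.features(x).flatten(1))

model = DanceFrameCNN()
frames = torch.randn(8, 3, 224, 224)   # dummy batch of 8 frames
logits = model(frames)                 # (8, 3) class scores
print(logits.argmax(dim=1))            # predicted dance class per frame
```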
62

Classification of the different movements (walk/trot/canter) and data collection of pose estimation

Sjöström, Moa January 2020 (has links)
Pose estimation uses computer vision to predict how a body moves: a neural network estimates the likelihood of different poses, and the most likely pose is selected. With DeepLabCut, an open-source software package for 3D animal pose estimation, information about animal behaviour and movement can be extracted. In this report, pose estimation of a horse's four hooves is used. By looking at the positions of the hooves, different gaits can be identified. Horses used for riding in the major disciplines in Sweden have three gaits: walk, trot, and canter. Walk is a four-beat gait, trot is two-beat, and canter is three-beat, which can be used to classify the gaits. By examining each hoof's vertical position over time and fitting a sine wave to the data, it is possible to see the phase differences between the hooves' movements. For walk and trot there was a clear pattern that was easy to identify and corresponded well to the theory of equine movement. For canter, our pre-trained model lacked accuracy, so the output data were insufficient; it was therefore not possible to find a significant pattern for canter that corresponds to the theory. The Fourier transform was also tested to classify the gaits: when plotted, it was possible to distinguish the gaits, but not reliably enough across horses of different sizes running at different paces. It was also possible to sum the data for all four hooves, fit a sine wave to the summed signal, and compare it with the sine waves fitted to each hoof separately. Depending on the gait, the frequency of the summed sine wave differed from that of the individual hooves, and the gaits could be identified.
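The thesis code is not reproduced in the abstract; as a hedged sketch of the sine-fitting step it describes, one can fit A·sin(2πft + φ) + c to each hoof's vertical trace and compare the recovered phases. The stride frequency, noise level, and synthetic traces below are illustrative assumptions standing in for DeepLabCut output.

```python
# Minimal sketch (not the thesis code): fit a sine wave to a hoof's vertical
# position trace and read off the phase, so that phase differences between
# hooves can be compared across gaits (e.g. ~pi apart in a two-beat trot).
import numpy as np
from scipy.optimize import curve_fit

def sine(t, amp, freq, phase, offset):
    return amp * np.sin(2 * np.pi * freq * t + phase) + offset

# Synthetic stand-ins for two hoof traces (in practice: DeepLabCut y-coords).
fps = 60.0
t = np.arange(0, 3, 1 / fps)
stride_hz = 1.7  # assumed stride frequency
front = sine(t, 5.0, stride_hz, 0.0, 100.0) + np.random.normal(0, 0.3, t.size)
hind = sine(t, 5.0, stride_hz, np.pi, 100.0) + np.random.normal(0, 0.3, t.size)

p0 = [5.0, stride_hz, 0.0, 100.0]  # rough initial guess for the fit
(_, _, phi_front, _), _ = curve_fit(sine, t, front, p0=p0)
(_, _, phi_hind, _), _ = curve_fit(sine, t, hind, p0=p0)

# Phase difference, wrapped to [0, 2*pi); ~pi here, as in a two-beat trot.
dphi = (phi_hind - phi_front) % (2 * np.pi)
print(f"phase difference: {dphi:.2f} rad")
```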
63

Angles-Only Navigation for Autonomous Orbital Rendezvous

Woffinden, David Charles 01 December 2008 (has links)
The proposed thesis of this dissertation has both a practical element and a theoretical component, which together aim to answer key questions related to the use of angles-only navigation for autonomous orbital rendezvous. The first and fundamental principle of this work argues that an angles-only navigation filter can determine the relative position and orientation (pose) between two spacecraft well enough to perform the necessary maneuvers and close-proximity operations for autonomous orbital rendezvous. Second, the implementation of angles-only navigation for on-orbit applications is often viewed with skepticism because of its perceived inability to determine the relative range between two vehicles. This assumed, yet little-understood, subtlety can be formally characterized with a closed-form analytical observability criterion which specifies the necessary and sufficient conditions for determining the relative position and velocity with only angular measurements. With a mathematical expression for the observability criterion, one can 1) identify the orbital rendezvous trajectories and maneuvers that ensure the relative position and velocity are observable for angles-only navigation, 2) quantify the degree or level of observability, and 3) compute optimal maneuvers that maximize observability. In summary, the objective of this dissertation is to provide both a practical and a theoretical foundation for the advancement of autonomous orbital rendezvous through the use of angles-only navigation.
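The dissertation's closed-form criterion is not reproduced here; as a hedged numerical illustration of the underlying idea, the sketch below discretizes Clohessy-Wiltshire relative-motion dynamics, linearizes azimuth/elevation measurements along a nominal trajectory, and inspects the smallest singular value of the observability Gramian. On an unmaneuvered trajectory the Gramian is singular (range is unobservable from angles alone), while a maneuver restores observability. The mean motion, time step, and maneuver values are illustrative assumptions.

```python
# Hedged numerical sketch (not Woffinden's closed-form criterion): check the
# observability of relative position/velocity from angle-only measurements by
# accumulating an observability Gramian along a Clohessy-Wiltshire trajectory.
import numpy as np
from scipy.linalg import expm

n = 0.0011   # assumed mean motion [rad/s], roughly LEO
dt = 10.0    # assumed measurement interval [s]

# Clohessy-Wiltshire dynamics, state x = [rx, ry, rz, vx, vy, vz]
A = np.array([
    [0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 1],
    [3 * n**2, 0, 0, 0, 2 * n, 0],
    [0, 0, 0, -2 * n, 0, 0],
    [0, 0, -n**2, 0, 0, 0],
])
Phi_dt = expm(A * dt)  # state transition matrix over one step

def angle_jacobian(r):
    """Jacobian of azimuth/elevation w.r.t. the 6-state (angles-only H)."""
    x, y, z = r
    s2 = x**2 + y**2
    rho2 = s2 + z**2
    s = np.sqrt(s2)
    H = np.zeros((2, 6))
    H[0, :3] = [-y / s2, x / s2, 0.0]                                # d(az)/dr
    H[1, :3] = [-x * z / (s * rho2), -y * z / (s * rho2), s / rho2]  # d(el)/dr
    return H  # velocity columns are zero: angles depend on position only

def gramian(x0, burns={}):
    """Accumulate W = sum Phi_k^T H_k^T H_k Phi_k; burns maps step -> dv."""
    x, Phi, W = x0.copy(), np.eye(6), np.zeros((6, 6))
    for k in range(360):
        H = angle_jacobian(x[:3])
        W += Phi.T @ H.T @ H @ Phi
        x = Phi_dt @ x
        if k in burns:
            x[3:] += burns[k]  # impulsive maneuver changes the nominal path
        Phi = Phi_dt @ Phi
    return W

x0 = np.array([0.0, -1000.0, 0.0, 0.0, 0.0, 0.0])  # 1 km behind on the v-bar
sv_coast = np.linalg.svd(gramian(x0), compute_uv=False)
sv_burn = np.linalg.svd(gramian(x0, burns={120: np.array([0.1, 0.0, 0.0])}),
                        compute_uv=False)
print("smallest singular value, coast:", sv_coast[-1])  # ~0: unobservable
print("smallest singular value, burn: ", sv_burn[-1])   # maneuver helps
```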
64

Hand gesture recognition using sEMG and deep learning

Nasri, Nadia 17 June 2021 (has links)
No description available.
65

Fine-Grained Hand Pose Estimation System based on Channel State Information

Yao, Weijie January 2020 (has links)
No description available.
66

Contributions to 3D object recognition and 3D hand pose estimation using deep learning techniques

Gomez-Donoso, Francisco 18 September 2020 (has links)
In this thesis, a study of two blossoming fields of artificial intelligence is carried out. The first part of the present document is about 3D object recognition methods. Object recognition in general is about providing an intelligent system with the ability to understand what objects appear in its input data. Any robot, from industrial robots to social robots, could benefit from such a capability to improve its performance and carry out high-level tasks. In fact, this topic has been studied extensively, and some object recognition methods in the state of the art outperform humans in terms of accuracy. Nonetheless, these methods are image-based, namely, they focus on recognizing visual features. This could be a problem in some contexts, as there exist objects that look like other, different objects. For instance, a social robot might recognize a face in a picture, or an intelligent car might recognize a pedestrian on a billboard. A potential solution to this issue is to involve three-dimensional data, so that the systems focus not on visual features but on topological features. Thus, in this thesis, a study of 3D object recognition methods is carried out. The approaches proposed in this document, which take advantage of deep learning methods, take point clouds as input and are able to provide the correct category. We evaluated the proposals on a range of public challenges, datasets, and real-life data with high success. The second part of the thesis is about hand pose estimation. This is also an interesting topic, which focuses on providing the hand's kinematics. A range of systems, from human-computer interaction and virtual reality to social robots, could benefit from such a capability, for instance to interface with a computer and control it through seamless hand gestures, or to interact with a social robot that understands human non-verbal communication. Thus, in the present document, hand pose estimation approaches are proposed. It is worth noting that the proposals take color images as input and are able to provide the 2D and 3D hand pose in the image plane and in Euclidean coordinate frames. Specifically, the hand poses are encoded as a collection of points representing the joints of a hand, from which the full hand pose can easily be reconstructed. The methods are evaluated on custom and public datasets, and integrated into a robotic hand teleoperation application with great success.
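The abstract does not name the specific networks used; as a hedged sketch of the general technique (deep classification directly on point clouds), a PointNet-style model applies a shared per-point MLP followed by a symmetric max-pool, making the prediction invariant to point ordering. All layer sizes and the class count are illustrative assumptions.

```python
# Minimal PointNet-style sketch (not the thesis architecture): classify a raw
# point cloud by applying a shared per-point MLP, then a permutation-invariant
# max-pool over points, then a small classification head.
import torch
import torch.nn as nn

class PointCloudClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Conv1d with kernel size 1 == the same MLP applied to every point.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(1024, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        # pts: (batch, 3, num_points) xyz coordinates
        feat = self.point_mlp(pts)     # (batch, 1024, num_points)
        glob = feat.max(dim=2).values  # order-invariant global feature
        return self.head(glob)         # (batch, num_classes)

model = PointCloudClassifier()
cloud = torch.randn(4, 3, 1024)  # dummy batch of 4 point clouds
print(model(cloud).shape)        # torch.Size([4, 10])
```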
67

Observability based Optimal Path Planning for Multi-Agent Systems to Aid in Relative Pose Estimation

Boyinine, Rohith 28 June 2021 (has links)
No description available.
68

Vision-Based Rendering: Using Computational Stereo to Actualize IBR View Synthesis

Steele, Kevin L. 14 August 2006 (has links) (PDF)
Computer graphics imagery (CGI) has enabled many useful applications in training, defense, and entertainment. One such application, CGI simulation, is a real-time system that allows users to navigate through and interact with a virtual rendition of an existing environment. Creating such systems is difficult, but particularly burdensome is the task of designing and constructing the internal representation of the simulation content. Authoring this content on a computer usually requires great expertise and many man-hours of labor. Computational stereo and image-based rendering offer possibilities for automatically creating simulation content without user assistance. However, these technologies have largely been limited to creating content from only a few photographs, severely limiting the simulation experience. The purpose of this dissertation is to enable the process of automated content creation for large numbers of photographs. The workflow goal consists of a user photographing any real-world environment intended for simulation, and then loading the photographs into the computer. The theoretical and algorithmic contributions of the dissertation are then used to transform the photographs into the data required for real-time exploration of the photographed locale. This permits a rich simulation experience without the laborious effort required to author the content manually. To approach this goal we make four contributions to the fields of computer vision and image-based rendering: an improved point-correspondence methodology, an adjacency-graph construction algorithm for unordered photographs, a pose estimation ordering for unordered image sets, and an image-based rendering algorithm that interpolates omnidirectional images to synthesize novel views. We encapsulate our contributions into a working system that we call Vision-Based Rendering (VBR). With our VBR system we are able to automatically create simulation content from a large unordered collection of input photographs. However, there are severe restrictions on the type of image content our present system can accurately simulate. Photographs containing large regions of high-frequency detail are incorporated very accurately, but images with smooth color gradations, including most indoor photographs, create distracting artifacts in the final simulation. Thus our system is a significant and functional step toward the ultimate goal of simulating any real-world environment.
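The dissertation's adjacency-graph algorithm is not reproduced here; the sketch below shows, under assumed thresholds, the common shape of such a step: match local features between every pair of unordered photographs and connect pairs whose match count clears a threshold. The feature type (ORB) and the `min_matches` value are assumptions, not the dissertation's choices.

```python
# Hedged sketch (not the dissertation's algorithm): build an adjacency graph
# over an unordered photo collection by ORB feature matching; two images are
# adjacent if they share enough matched keypoints.
import itertools
import cv2

def build_adjacency(image_paths, min_matches=50):
    orb = cv2.ORB_create(nfeatures=2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    descriptors = {}
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if img is None:
            descriptors[path] = None  # unreadable file: leave it isolated
            continue
        _, des = orb.detectAndCompute(img, None)
        descriptors[path] = des

    graph = {path: [] for path in image_paths}
    for a, b in itertools.combinations(image_paths, 2):
        if descriptors[a] is None or descriptors[b] is None:
            continue
        matches = matcher.match(descriptors[a], descriptors[b])
        if len(matches) >= min_matches:  # likely overlapping views
            graph[a].append(b)
            graph[b].append(a)
    return graph

# Usage: graph = build_adjacency(["img0.jpg", "img1.jpg", "img2.jpg"])
```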
69

Light-weighted Deep Learning for LiDAR and Visual Odometry Fusion in Autonomous Driving

Zhang, Dingnan 20 December 2022 (has links)
No description available.
70

Automated Implementation of the Edinburgh Visual Gait Score (EVGS)

Ramesh, Shri Harini 14 July 2023 (has links)
Analyzing a person's gait is important in determining their physical and neurological health. However, typical motion analysis laboratories exist only in urban specialty care facilities and can be expensive due to the specialized personnel and technology needed for these examinations. Many patients, especially those who reside in underdeveloped or isolated locations, find it impractical to travel to such facilities. With the help of recent developments in high-performance computing and artificial intelligence models, it is now feasible to evaluate human movement using digital video. Over the past 20 years, various visual gait analysis tools and scales have been developed. A study of the literature, together with discussions with physicians who are domain experts, revealed that the Edinburgh Visual Gait Score (EVGS) is one of the most effective scales currently available. Clinical implementations of EVGS currently rely on human scoring of videos. In this thesis, an algorithmic implementation of EVGS scoring based on handheld smartphone video was developed. Walking gait was recorded using a handheld smartphone at 60 Hz as participants walked along a hallway. Body keypoints representing joints and limb segments were then identified using the OpenPose BODY_25 pose estimation model. A new algorithm was developed to identify foot events and strides from the keypoints and to determine EVGS parameters at the relevant strides. The stride identification results were compared with ground-truth foot events that were manually labeled through direct observation, and the EVGS results were compared with evaluations by human scorers. Stride detection was accurate to within 2 to 5 frames. The level of agreement between the scorers and the algorithmic EVGS score was strong for 14 of the 17 parameters, and the algorithmic EVGS results were highly correlated with scorers' scores (r > 0.80) for eight of the 17 parameters. Smartphone-based remote motion analysis with an automated implementation of the EVGS could be employed in a patient's own neighborhood, eliminating the need to travel. These results demonstrate the viability of automated EVGS for remote human motion analysis.
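The thesis's foot-event algorithm is not detailed in the abstract; as a hedged illustration of the general idea, heel strikes can be estimated from a pose-estimation trace by finding local extrema of the heel's vertical image coordinate. The keypoint index follows the published BODY_25 layout; the smoothing and peak-detection parameters are assumptions.

```python
# Hedged sketch (not the thesis algorithm): estimate heel-strike frames from
# OpenPose BODY_25 keypoints by locating local extrema in the heel's vertical
# image coordinate. In image coordinates y grows downward, so a heel strike
# shows up as a local maximum of y; distance/prominence values are assumed.
import numpy as np
from scipy.signal import find_peaks, savgol_filter

R_HEEL = 24  # BODY_25 keypoint index for the right heel
FPS = 60     # recording rate of the handheld smartphone video

def heel_strikes(keypoints: np.ndarray) -> np.ndarray:
    """keypoints: (num_frames, 25, 2) array of (x, y) per BODY_25 joint."""
    y = keypoints[:, R_HEEL, 1]
    y = savgol_filter(y, window_length=11, polyorder=3)  # light smoothing
    # Heel strike ~ heel lowest in the image (largest y); require strides at
    # least 0.4 s apart, a loose assumption for walking gait.
    peaks, _ = find_peaks(y, distance=int(0.4 * FPS), prominence=1.0)
    return peaks

# Usage with a dummy trace (one stride per second for five seconds):
frames = np.zeros((300, 25, 2))
frames[:, R_HEEL, 1] = 400 + 20 * np.sin(2 * np.pi * np.arange(300) / FPS)
print(heel_strikes(frames))  # approximate heel-strike frame indices
```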
