151

Angles-Only Navigation for Autonomous Orbital Rendezvous

Woffinden, David Charles 01 December 2008 (has links)
The thesis of this dissertation has both a practical element and a theoretical component, which together aim to answer key questions related to the use of angles-only navigation for autonomous orbital rendezvous. The first and fundamental claim of this work is that an angles-only navigation filter can determine the relative position and orientation (pose) between two spacecraft well enough to perform the necessary maneuvers and close-proximity operations for autonomous orbital rendezvous. Second, the implementation of angles-only navigation for on-orbit applications is often viewed with skepticism because of its perceived inability to determine the relative range between two vehicles. This assumed, yet little-understood, subtlety can be formally characterized with a closed-form analytical observability criterion which specifies the necessary and sufficient conditions for determining the relative position and velocity with only angular measurements. With a mathematical expression of the observability criterion in hand, it can be used to 1) identify the orbital rendezvous trajectories and maneuvers that ensure the relative position and velocity are observable for angles-only navigation, 2) quantify the degree or level of observability, and 3) compute optimal maneuvers that maximize observability. In summary, the objective of this dissertation is to provide both a practical and a theoretical foundation for the advancement of autonomous orbital rendezvous through the use of angles-only navigation.
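The range-unobservability issue described in this abstract can be illustrated numerically: under linearized (Clohessy-Wiltshire) relative dynamics, scaling the entire relative state leaves every bearing angle unchanged, so the observability Gramian built from angle measurements is rank-deficient along the state direction. The sketch below is a minimal numerical demonstration of that effect, not the dissertation's closed-form criterion; the mean motion, measurement times, and initial state are illustrative assumptions.

```python
import numpy as np

def cw_stm(n, t):
    """Closed-form Clohessy-Wiltshire state transition matrix for the
    relative state [x, y, z, vx, vy, vz] (radial, along-track, cross-track)."""
    c, s = np.cos(n * t), np.sin(n * t)
    return np.array([
        [4 - 3*c,      0, 0,    s/n,         2*(1 - c)/n,     0],
        [6*(s - n*t),  1, 0,    2*(c - 1)/n, (4*s - 3*n*t)/n, 0],
        [0,            0, c,    0,           0,               s/n],
        [3*n*s,        0, 0,    c,           2*s,             0],
        [6*n*(c - 1),  0, 0,   -2*s,         4*c - 3,         0],
        [0,            0, -n*s, 0,           0,               c],
    ])

def bearing_jacobian(p):
    """Jacobian of (azimuth, elevation) w.r.t. the 6-state; the angles
    depend only on the relative position p = (x, y, z)."""
    x, y, z = p
    rxy2 = x*x + y*y
    rxy = np.sqrt(rxy2)
    r2 = rxy2 + z*z
    H = np.zeros((2, 6))
    H[0, 0], H[0, 1] = -y / rxy2, x / rxy2
    H[1, 0] = -x * z / (r2 * rxy)
    H[1, 1] = -y * z / (r2 * rxy)
    H[1, 2] = rxy / r2
    return H

n = 0.0011   # mean motion (rad/s), roughly low Earth orbit (assumed value)
x0 = np.array([100.0, 200.0, 50.0, 0.1, -0.2, 0.05])  # illustrative state (m, m/s)

# Accumulate the observability Gramian over one set of bearing measurements.
W = np.zeros((6, 6))
for t in np.linspace(60.0, 3600.0, 60):
    Phi = cw_stm(n, t)
    H = bearing_jacobian((Phi @ x0)[:3])
    W += Phi.T @ H.T @ H @ Phi

# Angles are scale-invariant, so scaling the whole state changes no
# measurement: x0 lies in the Gramian's null space (rank deficiency).
sv = np.linalg.svd(W, compute_uv=False)
print("smallest/largest singular value:", sv[-1] / sv[0])
print("||W x0||:", np.linalg.norm(W @ x0))
```

Without maneuvers, the smallest singular value collapses to numerical zero along the state direction, matching the abstract's point that maneuvers are what render the relative range observable.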
152

Hand gesture recognition using sEMG and deep learning

Nasri, Nadia 17 June 2021 (has links)
No description available.
153

Fine-Grained Hand Pose Estimation System based on Channel State Information

Yao, Weijie January 2020 (has links)
No description available.
154

Contributions to 3D object recognition and 3D hand pose estimation using deep learning techniques

Gomez-Donoso, Francisco 18 September 2020 (has links)
In this thesis, a study of two burgeoning fields of artificial intelligence is carried out. The first part of the present document is about 3D object recognition methods. Object recognition in general is about giving an intelligent system the ability to understand what objects appear in its input data. Any robot, from industrial robots to social robots, could benefit from such a capability to improve its performance and carry out high-level tasks. In fact, this topic has been studied extensively, and some object recognition methods in the state of the art outperform humans in terms of accuracy. Nonetheless, these methods are image-based, namely, they focus on recognizing visual features. This can be a problem in some contexts, as there exist objects that look like other, different objects: for instance, a social robot that recognizes a face in a picture, or an intelligent car that recognizes a pedestrian on a billboard. A potential solution to this issue is to involve three-dimensional data, so that systems focus not on visual features but on topological features. Thus, in this thesis, a study of 3D object recognition methods is carried out. The approaches proposed in this document, which take advantage of deep learning methods, take point clouds as input and are able to provide the correct category. We evaluated the proposals on a range of public challenges, datasets, and real-life data with high success. The second part of the thesis is about hand pose estimation. This is also an interesting topic, one that focuses on recovering the hand's kinematics. A range of systems, from human-computer interaction and virtual reality to social robots, could benefit from such a capability: for instance, to interface with a computer and control it through seamless hand gestures, or to interact with a social robot able to understand human non-verbal communication.
It is worth noting that the proposals take color images as input and are able to provide 2D and 3D hand poses in the image-plane and Euclidean coordinate frames. Specifically, the hand poses are encoded as a collection of points representing the joints of a hand, from which the full hand pose can be easily reconstructed. The methods are evaluated on custom and public datasets, and integrated with a robotic hand teleoperation application with great success.
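A key requirement for classifiers that take point clouds as input is invariance to point ordering. The following toy sketch (random, untrained weights; ten hypothetical classes) shows the PointNet-style pattern of a shared per-point network followed by symmetric pooling; it illustrates the idea only and is not the thesis's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy encoder: a shared per-point MLP followed by a symmetric (max)
# pooling step, so the output is independent of point ordering.
W1 = rng.normal(size=(3, 64));   b1 = rng.normal(size=64)
W2 = rng.normal(size=(64, 128)); b2 = rng.normal(size=128)
Wc = rng.normal(size=(128, 10))  # 10 hypothetical object classes

def relu(a):
    return np.maximum(a, 0.0)

def classify(points):
    """points: (N, 3) array of xyz coordinates -> class scores (10,)."""
    h = relu(points @ W1 + b1)   # per-point features, shared weights
    h = relu(h @ W2 + b2)
    g = h.max(axis=0)            # symmetric pooling over all points
    return g @ Wc                # class scores

cloud = rng.normal(size=(256, 3))
scores = classify(cloud)
print("predicted class:", int(np.argmax(scores)))
```

Because the only cross-point operation is the max pooling, shuffling the input rows leaves the scores unchanged, which is the property that makes raw point clouds usable without voxelization or ordering heuristics.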
155

Observability based Optimal Path Planning for Multi-Agent Systems to aid In Relative Pose Estimation

Boyinine, Rohith 28 June 2021 (has links)
No description available.
156

Towards a framework for multi class statistical modelling of shape, intensity and kinematics in medical images

Fouefack, Jean-Rassaire 14 February 2022 (has links)
Statistical modelling has become a ubiquitous tool for analysing morphological variation of bone structures in medical images. For radiological images, the shape, the relative pose between bone structures, and the intensity distribution are key features, often modelled separately. A wide range of research has reported methods that incorporate these features as priors for machine learning purposes. Statistical shape, appearance (intensity profile in images), and pose models are popular priors for explaining variability across a sample population of rigid structures. However, a principled and robust way to combine shape, pose, and intensity features has been elusive for four main reasons: 1) heterogeneity of the data (data with linear and non-linear natural variation across features); 2) sub-optimal representation of three-dimensional Euclidean motion; 3) artificial discretization of the models; and 4) lack of an efficient transfer learning process to project observations into the latent space. This work proposes a novel statistical modelling framework for multiple bone structures. The framework provides a latent space embedding shape, pose, and intensity in a continuous domain, allowing for new approaches to skeletal joint analysis from medical images. First, a robust registration method for multi-volumetric shapes is described. Both sampling-based and parametric registration algorithms are proposed, which allow the establishment of dense correspondence across volumetric shapes (such as tetrahedral meshes) while preserving the spatial relationship between them. Next, the framework for developing statistical shape-kinematics models from in-correspondence multi-volumetric shapes embedding the image intensity distribution is presented. The framework incorporates principal geodesic analysis and a non-linear metric for modelling the spatial orientation of the structures.
More importantly, because all the features live in a joint statistical space over a continuous domain, the framework permits on-demand marginalisation to a region or feature of interest without training separate models. Thereafter, automated prediction of the structures in images is facilitated by a model-fitting method leveraging the models as priors in a Markov chain Monte Carlo approach. The framework is validated using controlled experimental data, and the results demonstrate superior performance in comparison with state-of-the-art methods. Finally, the application of the framework to analysing computed tomography images is presented. The analyses include estimation of shape, kinematic, and intensity profiles of bone structures in the shoulder and hip joints. For both these datasets, the framework is demonstrated for segmentation, registration, and reconstruction, including the recovery of patient-specific intensity profiles. The presented framework realises a new paradigm in modelling multi-object shape structures, allowing for probabilistic modelling of not only shape, but also relative pose and intensity, as well as the correlations that exist between them. Future work will aim to optimise the framework for clinical use in medical image analysis.
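The core statistical-shape-model mechanics (mean shape plus principal modes, with projection of observations into a latent space) can be sketched with plain linear PCA. Note this toy example uses synthetic landmark vectors and a linear model, whereas the thesis uses principal geodesic analysis on a non-linear space that also embeds pose and intensity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training set: 20 "shapes", each a flattened vector of 15
# 3D landmarks (45 numbers), generated from 3 latent modes of variation.
true_modes = rng.normal(size=(3, 45))
latents = rng.normal(size=(20, 3))
shapes = latents @ true_modes + rng.normal(size=45)  # shared mean offset

# Build the shape model: mean + orthonormal principal components.
mean = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
k = 3
components = Vt[:k]                  # (k, 45) modes of variation

def project(shape):
    """Map an observed shape vector into the k-dim latent space."""
    return (shape - mean) @ components.T

def reconstruct(alpha):
    """Map latent coordinates back into shape space."""
    return mean + alpha @ components

obs = shapes[0]
alpha = project(obs)
print("reconstruction error:", np.linalg.norm(reconstruct(alpha) - obs))
```

The marginalisation idea mentioned in the abstract corresponds, in this linear toy, to restricting `mean` and `components` to the coordinates of a region of interest without refitting the model.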
157

Vision-Based Rendering: Using Computational Stereo to Actualize IBR View Synthesis

Steele, Kevin L. 14 August 2006 (has links) (PDF)
Computer graphics imagery (CGI) has enabled many useful applications in training, defense, and entertainment. One such application, CGI simulation, is a real-time system that allows users to navigate through and interact with a virtual rendition of an existing environment. Creating such systems is difficult, but particularly burdensome is the task of designing and constructing the internal representation of the simulation content. Authoring this content on a computer usually requires great expertise and many man-hours of labor. Computational stereo and image-based rendering offer possibilities to automatically create simulation content without user assistance. However, these technologies have largely been limited to creating content from only a few photographs, severely limiting the simulation experience. The purpose of this dissertation is to enable the process of automated content creation for large numbers of photographs. The workflow goal consists of a user photographing any real-world environment intended for simulation, and then loading the photographs into the computer. The theoretical and algorithmic contributions of the dissertation are then used to transform the photographs into the data required for real-time exploration of the photographed locale. This permits a rich simulation experience without the laborious effort required to author the content manually. To approach this goal we make four contributions to the fields of computer vision and image-based rendering: an improved point correspondence methodology, an adjacency graph construction algorithm for unordered photographs, a pose estimation ordering for unordered image sets, and an image-based rendering algorithm that interpolates omnidirectional images to synthesize novel views. We encapsulate our contributions into a working system that we call Vision-Based Rendering (VBR). 
With our VBR system we are able to automatically create simulation content from a large unordered collection of input photographs. However, there are severe restrictions in the type of image content our present system can accurately simulate. Photographs containing large regions of high frequency detail are incorporated very accurately, but images with smooth color gradations, including most indoor photographs, create distracting artifacts in the final simulation. Thus our system is a significant and functional step toward the ultimate goal of simulating any real-world environment.
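The adjacency-graph contribution for unordered photographs can be sketched as: link two images when their feature-match count clears a threshold, then traverse the graph to find the views reachable from a seed image (a natural input to a pose-estimation ordering). The match counts and threshold below are hypothetical placeholders, not the dissertation's algorithm or data.

```python
from collections import defaultdict, deque

# Hypothetical pairwise feature-match counts between unordered photos.
match_counts = {
    ("img0", "img1"): 412, ("img1", "img2"): 88,
    ("img0", "img2"): 12,  ("img2", "img3"): 305,
    ("img3", "img4"): 7,
}
THRESHOLD = 50   # assumed minimum matches to call two views adjacent

# Build the adjacency graph from pairs with enough matches.
graph = defaultdict(set)
for (a, b), count in match_counts.items():
    if count >= THRESHOLD:
        graph[a].add(b)
        graph[b].add(a)

def reachable(start):
    """Breadth-first set of photos connected to `start`; views outside
    this set cannot be posed relative to it from matches alone."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable("img0")))  # img4 is isolated by the threshold
```

A breadth-first traversal like this also yields one plausible pose-estimation ordering: each newly visited image is registered against an already-posed neighbor.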
158

Smart Phone-based Indoor Guidance System for the Visually Impaired

Taylor, Brandon Lee 13 March 2012 (has links) (PDF)
A smart-phone-camera-based indoor guidance system to aid the visually impaired is presented. Most proposed systems for aiding the visually impaired with indoor navigation are not feasible for widespread use due to cost, usability, or portability. We use a smart-phone vision-based system to create an indoor guidance system that is simple, accessible, inexpensive, and discreet, to help the visually impaired navigate unfamiliar environments such as public buildings. The system consists of a smart phone and a server. The smart phone transmits pictures of the user's location to the server. The server processes the images and matches them to a database of stored images of the building. After matching features, the location and orientation of the person are calculated using 3D location correspondence data stored for the features of each image. Positional information is then transmitted back to the smart phone and communicated to the user via text-to-speech. This thesis focuses on developing the vision technology for this unique application rather than building the complete system. Experimental results demonstrate the ability of the system to quickly and accurately determine the pose of the user in a university building.
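The step of computing location and orientation from stored 2D-3D correspondences can be sketched with a direct linear transform (DLT) estimate of the camera projection matrix. The thesis does not specify its solver, and the camera matrix and points below are synthetic, so treat this purely as an illustration of pose-from-correspondences.

```python
import numpy as np

rng = np.random.default_rng(2)

def dlt_projection(X, x):
    """Estimate a 3x4 projection matrix P from n >= 6 correspondences
    between 3D points X (n, 3) and image points x (n, 2) via SVD."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        P3 = np.array([Xw, Yw, Zw, 1.0])
        rows.append([*P3, 0.0, 0.0, 0.0, 0.0, *(-u * P3)])
        rows.append([0.0, 0.0, 0.0, 0.0, *P3, *(-v * P3)])
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)   # null vector of A = flattened P

def project(P, X):
    """Project 3D points through P, returning pixel coordinates."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    ph = Xh @ P.T
    return ph[:, :2] / ph[:, 2:3]

# Synthetic "database" features: 3D points and their image observations
# under a known camera (intrinsics and pose folded into P_true).
P_true = np.array([[800.0,   0.0, 320.0, 10.0],
                   [  0.0, 800.0, 240.0, -5.0],
                   [  0.0,   0.0,   1.0,  4.0]])
X = rng.uniform(-1, 1, size=(12, 3))
x = project(P_true, X)

P_est = dlt_projection(X, x)
err = np.abs(project(P_est, X) - x).max()
print("max reprojection error (px):", err)
```

In the system described above, `X` would come from the per-image 3D correspondence database and `x` from features matched in the user's photograph; the recovered matrix then yields the user's position and orientation.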
159

Machine Learning Aided Millimeter Wave System for Real Time Gait Analysis

Alanazi, Mubarak Alayyat 10 August 2022 (has links)
No description available.
160

Automated touch-less customer order and robot delivery system design at Kroger

Shan, Xingjian 22 August 2022 (has links)
No description available.
