About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

A distributive approach to tactile sensing for application to human movement

Mikov, Nikolay January 2015 (has links)
This thesis investigates the clinical applicability of a novel sensing technology in the areas of postural steadiness and stroke assessment. The mechanically simple Distributive Tactile Sensing approach is applied to extract motion information from flexible surfaces in order to identify parameters and disorders of human movement in real time. The thesis reports on the design, implementation and testing of smart platform devices developed for discrimination applications through the use of linear and non-linear data interpretation techniques and neural networks for pattern recognition. Mathematical models of elastic plates, based on finite element and finite difference methods, are developed and described. The models are used to identify design parameters of sensing devices by investigating the sensitivity and accuracy of Distributive Tactile Sensing surfaces. Two experimental devices were constructed for the investigation: a sensing floor platform for standing applications and a sensing chair for sitting applications. Using a linear approach, the sensing floor platform is developed to detect the centre of pressure, an important parameter widely used in the assessment of postural steadiness. It is demonstrated that the locus of the centre of pressure can be determined with an average deviation of 1.05 mm from that of a commercial force platform in a balance test conducted with five healthy volunteers, which amounts to 0.4% of the sensor range. The sensing chair uses neural networks for pattern recognition to identify the level of motor impairment in people with stroke through the performance of a functional reaching task while sitting. Clinical studies with six stroke survivors have shown the robustness of the sensing technique in dealing with the range of possible motions in the reaching task investigated. 
The work of this thesis demonstrates that the novel Distributive Tactile Sensing approach is well suited to clinical and home applications such as screening and rehabilitation systems. Mechanical simplicity is a merit of the approach and has the potential to lead to versatile, low-cost units.
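The centre-of-pressure parameter described above is conventionally computed as the force-weighted mean of the load positions on the platform. The sketch below illustrates that computation; the sensor layout, taxel positions, and readings are invented for illustration, not taken from the thesis.

```python
# Sketch: centre-of-pressure (CoP) estimate from a set of force readings on
# a sensing platform, as used in postural-steadiness assessment.
# All positions and forces below are hypothetical.

def centre_of_pressure(readings):
    """readings: list of ((x_mm, y_mm), force_N) tuples from the platform.
    Returns the force-weighted mean position (x, y) in mm."""
    total = sum(f for _, f in readings)
    if total == 0:
        raise ValueError("no load on platform")
    x = sum(pos[0] * f for pos, f in readings) / total
    y = sum(pos[1] * f for pos, f in readings) / total
    return x, y

# Example: load concentrated toward one foot shifts the CoP that way.
cop = centre_of_pressure([((0.0, 0.0), 200.0), ((100.0, 0.0), 600.0)])
```

Tracking this point over time yields the sway locus that the thesis compares against a commercial force platform.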
12

Effects of Tactile Scanning Speed on Human Surface-Roughness Discrimination (ヒトの表面粗さ弁別に及ぼす触運動速度の影響)

大岡, 昌博, OHKA, Masahiro, 川村, 拓也, KAWAMURA, Takuya, 宮岡, 徹, MIYAOKA, Tetsu, 三矢, 保永, MITSUYA, Yasunaga 01 1900 (has links)
No description available.
13

Virtual Shape Presentation Using a Distributed Pressure Display Device (分布圧覚ディスプレイ装置による仮想形状呈示)

大岡, 昌博, OHKA, Masahiro, 毛利, 行宏, MOURI, Yukihiro, 杉浦, 徳宏, SUGIURA, Tokuhiro, 三矢, 保永, MITSUYA, Yasunaga, 古賀, 浩嗣, KOGA, Hiroshi 10 1900 (has links)
No description available.
14

Towards Haptic Intelligence for Artificial Hands: Development and Use of Deformable, Fluidic Tactile Sensors to Relate Action and Perception

January 2013 (has links)
abstract: Human fingertips contain thousands of specialized mechanoreceptors that enable effortless physical interactions with the environment. Haptic perception capabilities enable grasp and manipulation in the absence of visual feedback, as when reaching into one's pocket or wrapping a belt around oneself. Unfortunately, state-of-the-art artificial tactile sensors and processing algorithms are no match for their biological counterparts. Tactile sensors must not only meet stringent practical specifications for everyday use, but their signals must be processed and interpreted within hundreds of milliseconds. Control of artificial manipulators, ranging from prosthetic hands to bomb-defusal robots, requires a constant reliance on visual feedback that is not entirely practical. To address this, we conducted three studies aimed at advancing artificial haptic intelligence. First, we developed a novel, robust, microfluidic tactile sensor skin capable of measuring normal forces on flat or curved surfaces, such as a fingertip. The sensor consists of microchannels in an elastomer filled with a liquid metal alloy. The fluid serves as both electrical interconnects and tunable capacitive sensing units, and enables functionality despite substantial deformation. The second study investigated the use of a commercially available, multimodal tactile sensor (BioTac sensor, SynTouch) to characterize edge orientation with respect to a body-fixed reference frame, such as a fingertip. Trained on data from a robot testbed, a support vector regression model was developed to relate haptic exploration actions to perception of edge orientation. The model performed comparably to humans for estimating edge orientation. Finally, the robot testbed was used to perceive small, finger-sized geometric features. 
The efficiency and accuracy of different haptic exploratory procedures and supervised learning models were assessed for estimating feature properties such as type (bump, pit), order of curvature (flat, conical, spherical), and size. This study highlights the importance of tactile sensing in situations where other modalities fail, such as when the finger itself blocks line of sight. Insights from this work could be used to advance tactile sensor technology and haptic intelligence for artificial manipulators that improve quality of life, such as prosthetic hands and wheelchair-mounted robotic hands. / Dissertation/Thesis / Ph.D. Mechanical Engineering 2013
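The second study above fits a regression model mapping tactile features to edge orientation. The thesis uses support vector regression on BioTac data; as a dependency-free stand-in, the sketch below fits a one-variable least-squares line on made-up (feature, angle) pairs to show the same action-to-perception mapping idea.

```python
# Sketch: regression from a scalar tactile feature to edge orientation.
# A plain least-squares fit stands in for the support vector regression
# used in the thesis; the training pairs are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical training pairs: (electrode-derived feature, edge angle in deg).
feature = [0.0, 1.0, 2.0, 3.0]
angle = [0.0, 30.0, 60.0, 90.0]
a, b = fit_line(feature, angle)
predicted = a * 1.5 + b  # estimate orientation for a new contact
```

In practice an SVR with a nonlinear kernel (e.g. scikit-learn's `svm.SVR`) would replace `fit_line`, trained on the full multimodal feature vector.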
15

Adaptive Grasping Using Tactile Sensing

Hyttinen, Emil January 2017 (has links)
Grasping novel objects is challenging because of incomplete object data and because of uncertainties inherent in real world applications. To robustly perform grasps on previously unseen objects, feedback from touch is essential. In our research, we study how information from touch sensors can be used to improve grasping novel objects. Since it is not trivial to extract relevant object properties and deduce appropriate actions from touch sensing, we employ machine learning techniques to learn suitable behaviors. We have shown that grasp stability estimation based on touch can be improved by including an approximate notion of object shape. Further, we have devised a method to guide local grasp adaptations based on our stability estimation method. Grasp corrections are found by simulating tactile data for grasps in the vicinity of the current grasp. We present several experiments to demonstrate the applicability of our methods. The thesis is concluded by discussing our results and suggesting potential topics for further research.
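The core idea above, combining a touch-based stability estimate with an approximate shape notion, can be caricatured as a weighted score. The feature values, weights, and threshold in this sketch are invented for illustration and are not the learned model from the thesis.

```python
# Sketch: grasp-stability estimate improved by a shape prior. A toy linear
# score combines a tactile-contact feature with an approximate-shape term;
# all weights and inputs are hypothetical.

def stability_score(contact_quality, shape_prior, w_contact=0.7, w_shape=0.3):
    """Weighted combination in [0, 1]; higher means more stable."""
    return w_contact * contact_quality + w_shape * shape_prior

def is_stable(contact_quality, shape_prior, threshold=0.5):
    return stability_score(contact_quality, shape_prior) >= threshold

# Good contact but uncertain shape: still predicted stable here.
stable = is_stable(contact_quality=0.8, shape_prior=0.4)
```

The thesis learns this mapping from data rather than hand-picking weights; candidate grasp corrections are then ranked by scoring simulated tactile data for nearby grasps.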
16

Third Generation Tactile Imaging System with New Interface, Calibration Method and Wear Indication

Moser, William R. January 2017 (has links)
During a clinical breast exam, a doctor palpates the breast and uses factors such as the estimated size and stiffness of subcutaneous inclusions to determine whether they may be malignant tumors. The Tactile Imaging System (TIS) under development at the Control, Sensing, Networking and Perception Laboratory (CSNAP) is an effort to provide accurate and consistent characterization of inclusions. The sensing principle of the TIS is based on Total Internal Reflection (TIR) of light in a Polydimethylsiloxane (PDMS) optical waveguide positioned in front of a digital camera. When the PDMS is pressed against an object of greater stiffness it deforms, causing some light to escape the waveguide and be sensed by the camera. An algorithm maps the light pattern caused by the deformation, and the force applied during image acquisition, to estimates of the size, depth and stiffness of the inclusion based on a kernel model. The Third Generation Experimental TIS (TIS 3E) is an effort to improve the performance, repeatability, and usability of the system. Performance is increased through a new graphical user interface (GUI) allowing fine tuning of camera parameters, and through interchangeable sensing probes for varying PDMS waveguides. Repeatability is improved with a digitally controlled lighting system, hardware-triggered force sensing, and an online PDMS lighting and condition monitoring system, lowering the overall measurement error of the system. Usability is improved by a new chassis, reducing the device size and weight by 50 percent. The accuracy of the TIS 3E is comparable to the maximum accuracy of the TIS 1E and exceeds its minimum accuracy. The measurement frequency was also increased from 10 Hz to 50 Hz. The TIS 3E will provide an accurate, consistent data acquisition platform for future tactile imaging research efforts. / Electrical and Computer Engineering
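The TIR principle above ultimately requires a calibration from the escaped-light pattern to applied force. The sketch below shows one minimal form such a calibration could take, a piecewise-linear lookup from integrated pixel intensity to contact force; the calibration points are invented and do not come from the TIS.

```python
# Sketch: toy calibration from integrated light intensity (arbitrary units)
# to contact force, as one ingredient of a TIR-based tactile imager.
# The calibration table below is hypothetical.

def interp(x, table):
    """Piecewise-linear interpolation through sorted (x, y) points."""
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside calibration range")

calibration = [(0.0, 0.0), (100.0, 2.0), (300.0, 10.0)]  # intensity -> N
force = interp(200.0, calibration)
```

The real system combines this kind of force estimate with the spatial light pattern in a kernel model to recover inclusion size, depth and stiffness.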
17

Multi-Directional Slip Detection Between Artificial Fingers and a Grasped Object

January 2012 (has links)
abstract: Effective tactile sensing in prosthetic and robotic hands is crucial for improving the functionality of such hands and enhancing the user's experience. Thus, improving the range of tactile sensing capabilities is essential for developing versatile artificial hands. Multimodal tactile sensors called BioTacs, which include a hydrophone and a force electrode array, were used to understand how grip force, contact angle, object texture, and slip direction may be encoded in the sensor data. Findings show that slip induced under conditions of high contact angles and grip forces resulted in significant changes in both AC and DC pressure magnitude and rate of change in pressure. Slip induced under conditions of low contact angles and grip forces resulted in significant changes in the rate of change in electrode impedance. Slip in the distal direction of a precision grip caused significant changes in pressure magnitude and rate of change in pressure, while slip in the radial direction of the wrist caused significant changes in the rate of change in electrode impedance. A strong relationship was established between slip direction and the rate of change in ratios of electrode impedance for radial and ulnar slip relative to the wrist. Consequently, establishing multiple thresholds or establishing a multivariate model may be a useful method for detecting and characterizing slip. Detecting slip for low contact angles could be done by monitoring electrode data, while detecting slip for high contact angles could be done by monitoring pressure data. Predicting slip in the distal direction could be done by monitoring pressure data, while predicting slip in the radial and ulnar directions could be done by monitoring electrode data. / Dissertation/Thesis / M.S. Bioengineering 2012
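The findings above suggest a simple decision rule: monitor electrode-impedance rate for low contact angles and pressure rate for high ones, with slip flagged when a threshold is crossed. The sketch below encodes that rule; the thresholds, sampling interval, and the 45-degree split are invented placeholders, not values from the thesis.

```python
# Sketch: threshold-based slip detection on BioTac-style signals,
# switching the monitored channel on contact angle. All numeric
# thresholds here are hypothetical.

def rate(samples, dt):
    """Finite-difference rate of change over the last two samples."""
    return (samples[-1] - samples[-2]) / dt

def slip_detected(pressure, impedance, contact_angle_deg, dt=0.01,
                  p_thresh=50.0, z_thresh=20.0):
    if contact_angle_deg >= 45.0:  # high contact angle: watch pressure
        return abs(rate(pressure, dt)) > p_thresh
    return abs(rate(impedance, dt)) > z_thresh  # low angle: watch electrodes

# Low contact angle: a fast impedance change trips the detector.
detected = slip_detected(pressure=[10.0, 10.2], impedance=[5.0, 5.5],
                         contact_angle_deg=30.0)
```

As the abstract notes, a multivariate model over both channels may be preferable to independent thresholds.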
18

Deep Learning for 3D Perception: Computer Vision and Tactile Sensing

Garcia-Garcia, Alberto 23 October 2019 (has links)
The care of dependent people (for reasons of aging, accidents, disabilities or illnesses) is one of the top-priority lines of research for the European countries, as stated in the Horizon 2020 goals. In order to minimize the cost and the intrusiveness of therapies for care and rehabilitation, it is desirable that such care be administered at the patient's home. The natural solution for this environment is an indoor mobile robotic platform. Such a robotic platform for home care needs to solve, to a certain extent, a set of problems that lie at the intersection of multiple disciplines, e.g., computer vision, machine learning, and robotics. At that crossroads, one of the most notable challenges (and the one we will focus on) is scene understanding: the robot needs to understand the unstructured and dynamic environment in which it navigates and the objects with which it can interact. To achieve full scene understanding, various tasks must be accomplished. In this thesis we will focus on three of them: object class recognition, semantic segmentation, and grasp stability prediction. The first refers to the process of categorizing an object into a set of classes (e.g., chair, bed, or pillow); the second goes one level beyond object categorization and aims to provide a per-pixel dense labeling of each object in an image; the third consists of determining whether an object that has been grasped by a robotic hand is in a stable configuration or will fall. This thesis presents contributions towards solving those three tasks using deep learning as the main tool for the recognition, segmentation, and prediction problems. All those solutions share one core observation: they all rely on tridimensional data inputs to leverage that additional dimension and its spatial arrangement. 
The four main contributions of this thesis are as follows: first, we present a set of architectures and data representations for 3D object classification using point clouds; second, we carry out an extensive review of the state of the art in semantic segmentation datasets and methods; third, we introduce a novel, large-scale, photorealistic synthetic dataset for solving various robotic and vision problems together; finally, we propose a novel method and representation to deal with tactile sensors and learn to predict grasp stability.
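The core observation above, that 3D inputs carry exploitable spatial arrangement, can be illustrated with the simplest tridimensional representation: a voxel-occupancy grid computed from a point cloud. This toy sketch is an illustration of the representation idea only, not one of the point-cloud architectures studied in the thesis.

```python
# Sketch: voxel-occupancy feature from a 3D point cloud, a minimal example
# of a tridimensional data representation for object classification.
# The point cloud below is invented.

def voxelize(points, grid=2, size=1.0):
    """Map 3D points in [0, size)^3 to occupied cells of a grid^3 lattice."""
    cell = size / grid
    occ = set()
    for x, y, z in points:
        occ.add((int(x // cell), int(y // cell), int(z // cell)))
    return occ

cloud = [(0.1, 0.1, 0.1), (0.9, 0.9, 0.9), (0.15, 0.2, 0.1)]
occupied = voxelize(cloud)
```

A classifier would consume such occupancy grids (or, in the thesis, raw point clouds) so that spatial structure, not just appearance, informs the prediction.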
19

Tactile Sensing and Position Estimation Methods for Increased Proprioception of Soft-Robotic Platforms

Day, Nathan McClain 01 July 2018 (has links)
Soft robots have the potential to transform the way robots interact with their environment. This is due to their low inertia and inherent ability to more safely interact with the world without damaging themselves or the people around them. However, existing sensing for soft robots has at least partially limited their ability to control interactions with their environment. Tactile sensors could enable soft robots to sense interaction, but most tactile sensors are made from rigid substrates and are not well suited to applications for soft robots that can deform. In addition, the benefit of being able to cheaply manufacture soft robots may be lost if the tactile sensors that cover them are expensive and their resolution does not scale well for manufacturability. Soft robots not only need to know their interaction forces due to contact with their environment, they also need to know where they are in Cartesian space. Because soft robots lack a rigid structure, traditional methods of joint estimation found in rigid robots cannot be employed on soft robotic platforms. This requires a different approach to soft robot pose estimation. This thesis will discuss both tactile force sensing and pose estimation methods for soft robots. A method to make affordable, high-resolution, tactile sensor arrays (manufactured in rows and columns) that can be used for sensorizing soft robots and other soft bodies is developed. However, the construction results in a sensor array that exhibits significant amounts of cross-talk when two taxels in the same row are compressed. Using the same fabric-based tactile sensor array construction design, two different methods for cross-talk compensation are presented. The first uses a mathematical model to calculate the change in resistance of each taxel directly. The second method introduces additional simple circuit components that enable us to isolate each taxel electrically and relate voltage to force directly. 
This thesis also discusses various approaches in soft robot pose estimation along with a method for characterizing sensors using machine learning. Particular emphasis is placed on the effectiveness of parameter-based learning versus parameter-free learning, in order to determine which method of machine learning is more appropriate and accurate for soft robot pose estimation. Various machine learning architectures, such as recursive neural networks and convolutional neural networks, are also tested to demonstrate the most effective architecture to use for characterizing soft-robot sensors.
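The cross-talk problem described for the row/column array arises because reading one taxel also measures parasitic "sneak" paths through neighbouring taxels; the mathematical-model approach inverts that effect. The two-resistor example below illustrates the inversion for a single sneak path and is a deliberate simplification, not the thesis's full array model.

```python
# Sketch: cross-talk in a resistive row/column taxel array. The apparent
# resistance of a taxel is its true resistance in parallel with a sneak
# path through neighbours; model-based compensation inverts this.
# Resistance values are hypothetical.

def parallel(*rs):
    """Equivalent resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in rs)

def measured(r_target, r_sneak_path):
    """Apparent resistance seen by the readout electronics."""
    return parallel(r_target, r_sneak_path)

def compensate(r_meas, r_sneak_path):
    """Invert the parallel combination to recover the true resistance."""
    return 1.0 / (1.0 / r_meas - 1.0 / r_sneak_path)

r_true, r_sneak = 1000.0, 3000.0
r_meas = measured(r_true, r_sneak)        # what the ADC would report
r_recovered = compensate(r_meas, r_sneak)  # model-based correction
```

The thesis's second method sidesteps this computation entirely by adding circuit components that electrically isolate each taxel.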
20

Shape sensing of deformable objects for robot manipulation

Sanchez Loza, Jose Manuel 24 May 2019 (has links)
Deformable objects are ubiquitous in our daily lives. On a given day, we manipulate clothes into uncountable configurations to dress ourselves, tie the shoelaces on our shoes, pick up fruits and vegetables without damaging them, and fold receipts into our wallets. All of these tasks involve manipulating deformable objects and can be performed by an able person without any trouble; robots, however, have yet to reach the same level of dexterity. Unlike rigid objects, for which robots now approach human performance in some tasks, deformable objects must be controlled to account not only for their pose but also for their shape. This extra constraint, controlling an object's shape, renders techniques used for rigid objects largely inapplicable to deformable objects. Furthermore, the behavior of deformable objects differs widely among them; e.g., the shape of a cable or of clothes is significantly affected by gravity, while gravity might not affect the configuration of other deformable objects such as food products. Thus, different approaches have been designed for specific classes of deformable objects. 
In this thesis we seek to address these shortcomings by proposing a modular approach to sensing the shape of an object while it is manipulated by a robot. The modularity of the approach is inspired by a programming paradigm that has increasingly been applied to software development in robotics and aims to achieve more general solutions by separating functionalities into components. These components can then be interchanged based on the specific task or object at hand, providing a modular way to sense the shape of deformable objects. 
To validate the proposed pipeline, we implemented three different applications. Two applications focused exclusively on estimating the object's deformation using either tactile or force data; the third consisted in controlling the deformation of an object. An evaluation of the pipeline, performed on a set of elastic objects for all three applications, shows promising results for an approach that makes no use of visual information and hence could be greatly improved by the addition of this modality.
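The component-based design described above, where functionality is split into interchangeable parts behind a common interface, can be sketched as follows. The class and method names are hypothetical, chosen for illustration; they are not the thesis's actual component names.

```python
# Sketch: a modular shape-sensing pipeline with interchangeable estimator
# components (e.g. tactile-based vs force-based deformation estimation).
# All names are invented for illustration.

class TactileShapeEstimator:
    """Estimates deformation from tactile-sensor data."""
    def estimate(self, data):
        return "deformation-from-tactile"

class ForceShapeEstimator:
    """Estimates deformation from force/torque data."""
    def estimate(self, data):
        return "deformation-from-force"

class ShapeSensingPipeline:
    """Swap the estimator per task or object without touching the rest."""
    def __init__(self, estimator):
        self.estimator = estimator

    def run(self, data):
        return self.estimator.estimate(data)

# Choosing a modality is a one-line change at construction time.
result = ShapeSensingPipeline(TactileShapeEstimator()).run(None)
```

This mirrors how the thesis's three applications reuse the same pipeline skeleton while swapping the sensing component.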
