21 |
Exploration of Mandibular Inputs for Human-Machine Interfaces
Yaslam, Abdulaziz 05 1900 (has links)
The direct connection of the jaw to the brain allows it to retain its motor and sensory capabilities even after severe spinal cord injuries. As such, it can be an accessible means of providing inputs for people with paralysis to manipulate their environment. This paper explores the potential for using the jaw, specifically the mandible, as an alternative input to human-machine interface systems.
Two tests were developed to assess the mandible's ability to respond to visual stimuli: first, a visual response-time test measuring the precision and accuracy of user input through a mandible-actuated button; second, a choice-response test observing coordination between the mandible and a finger.
Study results show that the mean response time of mandible inputs is 8.3% slower than the corresponding mean response time for performing the same task with a thumb. The difference in response delay after making a decision is not statistically significant between the mandible- and finger-actuated inputs, with the mandible being 2.67% slower.
Based on these results, the increase in response time while using the mandibular input is minimal for new users. Coordination is feasible in tasks involving both the mandible and thumb. Extensive training with a made-to-fit device has the potential to enable a visual response time equivalent to that of the fingers in more complex tasks. The mandible is a viable option for accessible HMI via discrete inputs. Further testing of continuous input is needed to explore the mandible's potential as an input for body augments.
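The mean-response-time comparison described above can be illustrated with a small sketch. All numbers below are hypothetical stand-ins, not the study's data; the comparison uses a standard Welch's t statistic computed with Python's standard library.

```python
# Hypothetical illustration of the kind of comparison described above:
# a Welch's t statistic for mandible vs. thumb response times.
# All sample values here are invented for demonstration, not study data.
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    na, nb = len(a), len(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va / na + vb / nb)

mandible_ms = [312, 298, 330, 305, 321, 315, 309, 326]   # hypothetical
thumb_ms    = [288, 275, 301, 283, 296, 290, 284, 299]   # hypothetical
t = welch_t(mandible_ms, thumb_ms)
print(f"Welch's t = {t:.2f}")
```

A statistic near zero (relative to the critical value for the samples' degrees of freedom) would correspond to the "not statistically significant" finding reported for the post-decision delay.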
|
22 |
Human Machine Interface for Low Speed Semi-autonomous Maneuvering
Makhtoumi, Golnaz January 2013 (has links)
For the drivers of heavy trucks, performing some maneuvers with high precision can be challenging, even for experienced drivers. Volvo has a system that helps drivers reverse the truck. Developing a highly usable human-machine interface for this system on a mobile platform could help drivers decrease both their stress level and the time spent maneuvering, making the task easier to perform. This thesis introduces a new area in safety-critical systems by combining automation with a mobile platform. An iterative, user-centered design process was utilized, and three main iterations were performed. In the first iteration, a low-fidelity prototype was created and evaluated through user tests. The output of the usability test was used to implement the software prototype for the second iteration. Evaluation of the software prototype was done by desktop testing. In the third iteration, the second version of the software prototype was evaluated through field testing. Android and Google Maps were used to implement three tasks: Destination, Rewind, and Saved point. Throughout these iterations, usability and safety were the two main concerns, addressed by consulting guidelines and performing evaluations. In the final test, the prototype was evaluated against four usability factors: satisfaction, learnability, safety, and achievement. After analyzing these factors, the prototype showed strong potential for a future product.
|
23 |
Novel Auto-Calibrating Neural Motor Decoder for Robust Prosthetic Control
Montgomery, Andrew Earl 30 August 2018 (has links)
No description available.
|
24 |
Human-Robot Collaborative Design (HRCoD): Real-Time Collaborative Cyber-Physical HMI Platform for Robotic Design and Assembly through Augmented Reality
Hashemi, Mona 29 April 2021 (has links)
No description available.
|
25 |
Hand Gesture Recognition Using Ultrasonic Waves
AlSharif, Mohammed H. 04 1900 (has links)
Gesturing is a natural way of communication between people and is used in our everyday conversations. Hand gesture recognition systems are used in many applications across a wide variety of fields, such as mobile phone applications, smart TVs, and video gaming. With advances in human-computer interaction technology, gesture recognition has become an active research area. There are two types of devices for detecting gestures: contact-based devices and contactless devices. Using ultrasonic waves to determine gestures is one approach employed in contactless devices, and hand gesture recognition utilizing ultrasonic waves is the focus of this thesis. This thesis presents a new method for detecting and classifying a predefined set of hand gestures using a single ultrasonic transmitter and a single ultrasonic receiver. The method uses a linear frequency-modulated ultrasonic signal. The ultrasonic signal is designed to meet project requirements such as the update rate and the range of detection, and to overcome hardware limitations such as limited output power and transmitter and receiver bandwidth. The method can be adapted to other hardware setups. Gestures are identified based on two main features: the range estimate of the moving hand and the received signal strength (RSS). These two features are estimated using two simple methods: the channel impulse response (CIR) and the cross-correlation (CC) of the ultrasonic signal reflected from the gesturing hand. A customized, simple hardware setup was used to classify a set of hand gestures with high accuracy. Detection and classification were done using methods of low computational cost, giving the proposed method great potential for implementation in many devices, including laptops and mobile phones. The predefined set of gestures can be used for many control applications.
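As a rough illustration of the cross-correlation idea above, the sketch below correlates a transmitted linear chirp with a simulated echo to recover the range of a reflecting hand. The sample rate, chirp parameters, and echo model are invented for this example and are not the thesis's actual design.

```python
# Sketch (not the thesis's actual code): estimating hand range by
# cross-correlating a transmitted linear FM chirp with the received echo.
# Sampling rate, chirp band, and speed of sound are assumed values.
import numpy as np

FS = 192_000          # sample rate (Hz), assumed
C = 343.0             # speed of sound in air (m/s)

def chirp(f0, f1, dur, fs=FS):
    """Linear frequency-modulated sinusoid from f0 to f1 over dur seconds."""
    t = np.arange(int(dur * fs)) / fs
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * dur) * t ** 2))

def estimate_range(tx, rx, fs=FS):
    """Lag of peak cross-correlation -> round-trip delay -> one-way range (m)."""
    corr = np.correlate(rx, tx, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(tx) - 1)
    return max(lag, 0) / fs * C / 2

tx = chirp(20_000, 40_000, 0.002)            # 2 ms ultrasonic chirp
delay = int(0.5 / C * 2 * FS)                # simulated echo from 0.5 m away
rx = np.concatenate([np.zeros(delay), 0.3 * tx, np.zeros(100)])
print(f"estimated range: {estimate_range(tx, rx):.2f} m")
```

The sharp autocorrelation peak of a linear chirp is what makes the lag estimate robust; RSS could similarly be read off the magnitude of the correlation peak.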
|
26 |
A real time 3D surface measurement system using projected line patterns.
Shen, Anqi January 2010 (has links)
This thesis is based on a research project to evaluate a quality control system for car component stamping lines. The quality control system measures the abrasion of the stamping tools by measuring the surface of the products. A 3D vision system is developed for real-time online measurement of the product surface. This thesis pursues three main research themes. The first is to produce an industrial application: all components of the vision system are selected from industrial products, and user application software is developed. A rich human-machine interface for interaction with the vision system is developed, along with a link between the vision system and a control unit for interaction with a production line. The second research theme is to enhance the robustness of the 3D measurement. As an industrial product, this system will be deployed in different factories and should be robust against environmental uncertainties. For this purpose, a high signal-to-noise ratio is required, with the light pattern produced by a laser projector. Additionally, multiple height calculation methods and a spatial Kalman filter are proposed for optimal height estimation. The final research theme is to achieve real-time 3D measurement. The vision system is expected to be installed on production lines for online quality inspection. A new 3D measurement method is developed that combines the spatial binary-coded method with phase-shift methods, requiring only a single image to be captured. / SHRIS (Shanghai Ro-Intelligent System,co.,Ltd.)
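The "multiple height calculation methods plus a Kalman filter" idea lends itself to a simple scalar sketch: fuse several noisy height estimates, weighting each by its variance. The readings and variances below are invented for illustration and are not values from the thesis.

```python
# Illustrative sketch of fusing several noisy height estimates with a
# scalar Kalman-style update, in the spirit of the "multiple height
# calculation methods + spatial Kalman filter" idea above.
# All noise variances and readings are assumed, not from the thesis.
def fuse(estimate, var, meas, meas_var):
    """One scalar Kalman update: blend a prior estimate with a measurement."""
    k = var / (var + meas_var)            # Kalman gain
    new_est = estimate + k * (meas - estimate)
    new_var = (1 - k) * var               # fused variance always shrinks
    return new_est, new_var

# three height readings (mm) from different calculation methods: (value, variance)
readings = [(10.2, 0.5), (9.8, 0.3), (10.1, 0.4)]
est, var = readings[0]
for meas, mvar in readings[1:]:
    est, var = fuse(est, var, meas, mvar)
print(f"fused height: {est:.2f} mm (variance {var:.3f})")
```

The fused variance is smaller than any individual method's variance, which is the statistical payoff of combining the height calculation methods.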
|
27 |
Vision-Based Force Planning and Voice-Based Human-Machine Interface of an Assistive Robotic Exoskeleton Glove for Brachial Plexus Injuries
Guo, Yunfei 18 October 2023 (has links)
This dissertation focuses on improving the capabilities of an assistive robotic exoskeleton glove designed for patients with Brachial Plexus Injuries (BPI). The aim of this research is to develop a force control method, an automatic force planning method, and a Human-Machine Interface (HMI) to refine the grasping functionalities of the exoskeleton glove, thus supporting rehabilitation and independent living for individuals with BPI. The exoskeleton glove is a useful tool in post-surgery therapy for patients with BPI, as it helps counteract hand muscle atrophy by allowing controlled and assisted hand movements. This study introduces an assistive exoskeleton glove with rigid side-mounted linkages driven by Series Elastic Actuators (SEAs) to perform five different types of grasps. For force control, data-driven SEA fingertip force prediction methods were developed to assist force control with the Linear Series Elastic Actuators (LSEAs). This data-driven method provides precise prediction of SEA fingertip force, taking into account the deformation and friction forces on the exoskeleton glove. For force planning, a slip-grasp force planning method with hybrid slip detection is implemented. This method incorporates a vision-based approach to estimate object properties and refine grasp force predictions, mimicking human grasping processes and reducing the trial-and-error iterations required by the slip-grasp method, increasing the grasp success rate from 71.9% to 87.5%. For the HMI, the Configurable Voice Activation and Speaker Verification (CVASV) system was developed to control the proposed exoskeleton glove; it was later complemented by a one-shot learning-based alternative that proved more effective than CVASV in terms of training time and connectivity requirements. Clinical trials were conducted successfully with patients with BPI, demonstrating the effectiveness of the exoskeleton glove.
/ Doctor of Philosophy / This dissertation focuses on improving the capabilities of a robotic exoskeleton glove designed to assist individuals with Brachial Plexus Injuries (BPI). The goal is to enhance the glove's ability to grasp and manipulate objects, which can aid the recovery process and enable patients with BPI to live more independently. The exoskeleton glove is a tool for patients with BPI to use after surgery to prevent the muscles of the hand from weakening due to lack of use. This research introduces an exoskeleton glove that utilizes special mechanisms to perform various types of grasp. The study has three main components. First, it focuses on ensuring that the glove can accurately control its grip strength. This is achieved through a special method that takes into account factors such as how the materials in the glove change when it moves and the amount of friction present. Second, the study develops a method for planning how much force the glove should use to hold objects without letting them slip. This method combines camera-based object and material detection to estimate the weight and size of the target object, making the glove better at holding things without dropping them. The third part involves designing how people can instruct the glove. Commands can be sent to the robot by voice, and this study proposes a new method that quickly learns how you talk and recognizes your voice. The exoskeleton glove was tested on patients with BPI, and the results showed that it successfully helps them. This study enhances assistive technology, especially in the field of assistive exoskeleton gloves, making them more effective and beneficial for individuals with hand disabilities.
|
28 |
Review of substitutive assistive tools and technologies for people with visual impairments: recent advancements and prospects
Muhsin, Z.J., Qahwaji, Rami S.R., Ghanchi, Faruque, Al-Taee, M. 19 December 2023 (has links)
The development of many tools and technologies for people with visual impairment has become a major priority in the field of assistive technology research. However, many of these technological advancements have limitations in terms of the human aspects of the user experience (e.g., usability, learnability, and time to user adaptation), as well as difficulties in translating research prototypes into production. There has also been no clear distinction between assistive aids for adults and for children, or between "partial impairment" and "total blindness". As a result of these limitations, the produced aids have not gained much popularity, and the intended users remain hesitant to utilise them. This paper presents a comprehensive review of substitutive interventions that aid in adapting to vision loss, centred on laboratory research studies that assess user-system interaction and system validation. Depending on the primary cueing feedback signal offered to the user, these technology aids are categorized as visual, haptic, or auditory-based aids. The context of use, cueing feedback signals, and participation of visually impaired people in the evaluation are all considered in discussing these aids. Based on the findings, a set of recommendations is suggested to assist the scientific community in addressing the persisting challenges and restrictions faced by both totally blind and partially sighted people.
|
29 |
Designing Explainable In-vehicle Agents for Conditionally Automated Driving: A Holistic Examination with Mixed Method Approaches
Wang, Manhua 16 August 2024 (has links)
Automated vehicles (AVs) are promising applications of artificial intelligence (AI). While human drivers benefit from AVs through features such as long-distance driving support and collision prevention, they do not always understand how AV systems function and make decisions. Consequently, drivers might develop inaccurate mental models and form unrealistic expectations of these systems, leading to unwanted incidents. Although efforts have been made to support drivers' understanding of AVs through in-vehicle visual and auditory interfaces and warnings, these may not be sufficient or effective in addressing user confusion and overtrust in in-vehicle technologies, and can sometimes even create negative experiences. To address this challenge, this dissertation conducts a series of studies exploring the use of an in-vehicle intelligent agent (IVIA), in the form of a speech user interface, to support drivers, aiming to enhance safety, performance, and satisfaction in conditionally automated vehicles.
First, two expert workshops were conducted to identify design considerations for general IVIAs in the driving context. Next, to better understand the effectiveness of different IVIA designs in conditionally automated driving, a driving simulator study (n=24) was conducted to evaluate four types of IVIA designs varying by embodiment conditions and speech styles. The findings indicated that conversational agents were preferred and yielded better driving performance, while robot agents caused greater visual distraction. Then, contextual inquiries with 10 drivers owning vehicles with advanced driver assistance systems (ADAS) were conducted to identify user needs and the learning process when interacting with in-vehicle technologies, focusing on interface feedback and warnings. Subsequently, through expert interviews with seven experts from AI, social science, and human-computer interaction domains, design considerations were synthesized for improving the explainability of AVs and preventing associated risks. With information gathered from the first four studies, three types of adaptive IVIAs were developed based on human-automation function allocation and investigated in terms of their effectiveness on drivers' response time, driving performance, and subjective evaluations through a driving simulator study (n=39). The findings indicated that although drivers preferred more information provided to them, their response time to road hazards might be degraded when receiving more information, indicating the importance of the balance between safety and satisfaction.
Taken together, this dissertation indicates the potential of adopting IVIAs to enhance the explainability of future AVs. It also provides key design guidelines for developing IVIAs and constructing explanations critical for safer and more satisfying AVs. / Doctor of Philosophy / Automated vehicles (AVs) are an exciting application of artificial intelligence (AI). While these vehicles offer benefits like helping with long-distance driving and preventing accidents, people often do not understand how they work or make decisions. This lack of understanding can lead to unrealistic expectations and potentially dangerous situations. Even though there are visual and sound alerts in these cars to help drivers, they are not always sufficient to prevent confusion and over-reliance on technology, sometimes making the driving experience worse. To address this challenge, this dissertation explores the use of in-vehicle intelligent agents (IVIAs), in the form of speech assistant, to help drivers better understand and interact with AVs, aiming to improve safety, performance, and overall satisfaction in semi-automated vehicles.
First, two expert workshops helped identify key design features for IVIAs. Then, a driving simulator study with 24 participants tested four different designs of IVIAs varying in appearance and how they spoke. The results showed that people preferred conversational agents, which led to better driving behaviors, while robot-like agents caused more visual distractions. Then, through contextual inquiries with 10 drivers who own vehicles with advanced driver assistance systems (ADAS), I identified user needs and how they learn to interact with in-car technologies, focusing on feedback and warnings. Subsequently, I conducted expert interviews with seven professionals from AI, social science, and human-computer interaction fields, which provided further insights into facilitating the explainability of AVs and preventing associated risks. With the information gathered, three types of adaptive IVIAs were developed based on whether the driver was actively in control of the vehicle, or the driving automation system was in control. The effectiveness of these agents was evaluated through drivers' brake and steer response time, driving performance, and user satisfaction through another driving simulator study with 39 participants. The findings indicate that although drivers appreciated more detailed explanations, their response time to road hazards slowed down, highlighting the need to balance safety and satisfaction.
Overall, this research shows the potential of using IVIAs to make AVs easier to understand and safer to use. It also offers important design guidelines for creating these IVIAs and their speech contents to improve the driving experience.
|
30 |
A Novel Asynchronous Access Method for Minimal Interface Users
Silva, Jorge 01 August 2008 (has links)
Current access strategies for minimal interface (e.g., binary switch) users employ time-coded (i.e., synchronous) protocols that map unique sequences of user-generated binary digits (i.e., bits) to each of the available outcomes of a device under control.
With such strategies, the user must learn and/or reproduce the timing of the protocol with a certain degree of accuracy. As a result, the number, κ, of device outcomes made accessible to the user is typically bounded by the user's memorization capacity and by the time required to generate the appropriate bit sequences. Furthermore, synchronous access strategies introduce a minimum time delay that grows with κ, precluding access to control applications requiring fast user response.
By turning control on its head, this thesis presents an access method that completely eliminates reliance on time-coded protocols. Instead, the proposed asynchronous access method requires users to employ their interfaces only when the behavior of the device under control does not match their intentions. In response to such event, the proposed method may then be used to select, and automatically transmit, a new outcome to the device. Such outcome is informed by historical and contextual assumptions incorporated into a recursive algorithm that provides increasingly accurate estimates of user intention.
This novel approach provides significant advantages over traditional synchronous strategies: i) the user is not required to learn any protocol, ii) there is no limit on the number of outcomes that may be made available to the user, iii) there is no delay in the response of the device, iv) the expected amount of information required to achieve a particular task may be minimized, and, most importantly, v) the control of previously inaccessible devices may be enabled with minimal interfaces.
This thesis presents the full mathematical development of the novel method for asynchronous control summarized above. Rigorous performance evaluations demonstrating the potential of this method in the control of complex devices, by means of minimal interfaces, are also reported.
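A toy sketch of the asynchronous principle described above: the device maintains a belief over candidate outcomes and acts on the most likely one, and the user's switch press serves only as a veto when the device's behavior mismatches their intention. The outcomes, prior probabilities, and veto rule below are invented for illustration; the thesis's actual recursive estimator is more elaborate.

```python
# Hedged sketch of the asynchronous access idea above. The device keeps a
# belief over possible outcomes and only revises it when the user signals
# a mismatch (a single switch press acting as a veto). All outcome names
# and prior probabilities are invented for this example.
def update_belief(belief, vetoed):
    """User vetoed the current top choice: zero it out and renormalize."""
    belief = {o: (0.0 if o == vetoed else p) for o, p in belief.items()}
    total = sum(belief.values())
    return {o: p / total for o, p in belief.items()}

def current_choice(belief):
    """The outcome the device will act on next: the most probable one."""
    return max(belief, key=belief.get)

belief = {"tv": 0.5, "lights": 0.3, "thermostat": 0.2}  # contextual prior
while True:
    choice = current_choice(belief)
    user_vetoes = (choice != "lights")   # stand-in for a real switch press
    if not user_vetoes:
        break
    belief = update_belief(belief, choice)
print(f"selected outcome: {choice}")     # prints "selected outcome: lights"
```

Note how the user never encodes an outcome directly: each veto simply redistributes probability mass, so the number of reachable outcomes is not bounded by any memorized bit sequence.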
|