41 |
Interfaces and control systems for intuitive crane control. Peng, Chen Chih. January 2009 (has links)
Thesis (M. S.)--Mechanical Engineering, Georgia Institute of Technology, 2010. / Committee Chair: Singhose, William; Committee Member: Sadegh, Nader; Committee Member: Ueda, Jun. Part of the SMARTech Electronic Thesis and Dissertation Collection.
|
42 |
Automation, work content and work requirements: a study based on international data from car industry, steel industry and power production / Heiskanen, Tuula. January 1984 (has links)
Thesis (doctoral)--University of Tampere, 1984. / Includes bibliographical references (p. 221-230).
|
43 |
Using augmented virtuality to improve human-robot interactions / Nielsen, Curtis W. January 2006 (has links) (PDF)
Thesis (Ph. D.)--Brigham Young University. Dept. of Computer Science, 2006. / Includes bibliographical references (p. 149-164).
|
44 |
Moderators of Trust and Reliance Across Multiple Decision Aids. Ross, Jennifer. 01 January 2008 (has links)
The present work examines whether users' trust in and reliance on automation were affected by manipulations of users' perception of the responding agent. These manipulations included agent reliability, agent type, and failure salience. Previous work has shown that automation is not uniformly beneficial; problems can occur because operators fail to rely upon automation appropriately, through either misuse (overreliance) or disuse (underreliance). This is because operators often face difficulties in understanding how to combine their judgment with that of an automated aid. This difficulty is especially prevalent in complex tasks in which users rely heavily on automation to reduce their workload and improve task performance. When users rely heavily on automation, however, they often fail to monitor the system effectively (i.e., they lose situation awareness, a form of misuse). Conversely, if an operator realizes that a system is imperfect when it fails, they may subsequently lose trust in the system, leading to underreliance. In the present studies, it was hypothesized that in a dual-aid environment poor reliability in one aid would impact trust and reliance levels in a better-performing companion aid, but that this relationship is dependent upon the perceived aid type and the noticeability of the errors made. Simulations of a computer-based search-and-rescue scenario, employing uninhabited/unmanned ground vehicles (UGVs) searching a commercial office building for critical signals, were used to investigate these hypotheses. Results demonstrated that participants were able to adjust their reliance on and trust in automated teammates depending on the teammates' actual reliability levels. However, as hypothesized, there was a biasing effect on trust and reliance among mixed-reliability aids. That is, when operators worked with two agents of mixed reliability, their perception of how reliable an aid was, and the degree to which they relied on it, was affected by the reliability of the companion aid. Additionally, the magnitude and direction of this bias in trust and reliance were contingent upon agent type (i.e., 'what' the agents were: two humans, two similar robotic agents, or two dissimilar robotic agents). Finally, the type of agent an operator believed they were working with significantly impacted their temporal reliance (i.e., reliance following an automation failure): operators were less likely to agree with a recommendation from a human teammate after that teammate had made an obvious error than with a robotic agent that had made the same obvious error. These results demonstrate that people are able to distinguish when an agent is performing well, but that there are genuine differences in how operators respond to agents of mixed or equal abilities and to errors made by fellow human observers versus robotic teammates. The overall goal of this research was to develop a better understanding of how the aforementioned factors affect users' trust in automation, so that system interfaces can be designed to facilitate users' calibration of their trust in automated aids, thus leading to improved coordination of human-automation performance. These findings have significant implications for many real-world systems in which human operators monitor the recommendations of multiple other human and/or machine systems.
|
45 |
Designing Computer Agents with Personality to Improve Human-Machine Collaboration in Complex Systems. Prabhala, Sasanka V. 18 April 2007 (has links)
No description available.
|
46 |
Terms and axioms for a theory of human-machine systems / Funk, Kenneth Harding. January 1980 (has links)
No description available.
|
47 |
Some considerations in the design of computer languages for interactive problem solving / Dennis, John D. January 1971 (has links)
No description available.
|
48 |
Rhythms of dialogue in human-computer conversation / Penniman, William David. January 1975 (has links)
No description available.
|
49 |
Matching feedback with operator intent for efficient human-machine interface. Elton, Mark David. 09 November 2012 (has links)
Various roles for operators in human-machine systems have been proposed. This thesis shows that these views share a common implication: operators perform best when given feedback that matches their intent. Past studies have shown that position control is superior to rate control except when operating large-workspace and/or dynamically slow manipulators and for exact tracking tasks. Operators of large-workspace and/or dynamically slow manipulators do not receive immediate position feedback. To remedy this lack of position feedback, a ghost arm overlay was displayed to operators of a dynamically slow manipulator, giving feedback that matched their intent. Operators performed several simple one- and two-dimensional tasks (point-to-point motion, tracking, path following) with three different controllers (position control with and without a ghost, rate control) to show how task conditions influence operator intent. Giving the operator position feedback via the ghost significantly increased performance with the position controller and made it comparable to performance with rate control. These results were further validated by testing coordinated position control, with and without a ghost arm, and coordinated rate control on an excavator simulator. The results show that position control with the ghost arm is comparable, but not superior, to rate control for the dynamics of the excavator example. Unlike previous work, this research compared the fuel efficiencies of different human-machine interfaces (HMIs) as well as their time efficiencies. This work not only provides the design law of matching feedback to operator intent, but also gives a guideline for when to choose position or rate control based on the speed of the system.
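The position-versus-rate distinction at the core of this record can be illustrated with a small simulation. The sketch below is not from the thesis; the function names, parameter values, and the first-order lag used to stand in for a "dynamically slow" manipulator are assumptions made purely for illustration. It shows why a ghost overlay helps under position control: the operator's commanded position is known immediately and can be drawn at once, while the slow plant only catches up later.

```python
# Illustrative sketch (not from the thesis): position control vs. rate control of a
# single-joint, dynamically slow manipulator. Under position control the commanded
# target is available immediately and could be rendered as a "ghost" arm, while the
# actual joint lags behind. All names and constants here are assumed values.

DT = 0.02             # control-loop period [s]
TIME_CONSTANT = 1.5   # assumed first-order lag of the slow manipulator [s]
MAX_RATE = 0.5        # joint speed limit used by the rate controller [rad/s]

def position_command(joystick, workspace_limit=1.0):
    """Position control: joystick deflection (-1..1) maps directly to a target angle."""
    return joystick * workspace_limit

def rate_command(joystick, previous_target):
    """Rate control: joystick deflection commands a velocity, integrated into a target."""
    return previous_target + joystick * MAX_RATE * DT

def slow_plant(actual, target):
    """First-order lag: the real joint creeps toward the target, delaying visual feedback."""
    return actual + (target - actual) * (DT / TIME_CONSTANT)

# Simulate 3 s with the stick held fully forward under both control modes.
pos_target = rate_target = pos_actual = rate_actual = 0.0
for _ in range(int(3.0 / DT)):
    joystick = 1.0
    pos_target = position_command(joystick)            # the ghost would display this instantly
    rate_target = rate_command(joystick, rate_target)
    pos_actual = slow_plant(pos_actual, pos_target)
    rate_actual = slow_plant(rate_actual, rate_target)

print(f"position control: ghost at {pos_target:.2f} rad, actual arm at {pos_actual:.2f} rad")
print(f"rate control:     target at {rate_target:.2f} rad, actual arm at {rate_actual:.2f} rad")
```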
|
50 |
A Study of Human-Machine Interface (HMI) Learnability for Unmanned Aircraft Systems Command and Control. Haritos, Tom. 01 January 2017 (has links)
The operation of sophisticated unmanned aircraft systems (UAS) involves complex interactions between human and machine. Unlike other areas of aviation where technological advancement has flourished to accommodate the modernization of the National Airspace System (NAS), the scientific paradigm of UAS and UAS user interface design has received little research attention, and minimal effort has been made to aggregate accurate data to assess the effectiveness of current UAS human-machine interface (HMI) representations for command and control. UAS HMI usability is a primary human factors concern as the Federal Aviation Administration (FAA) moves forward with the full-scale integration of UAS in the NAS by 2025. This study examined system learnability of an industry-standard UAS HMI, as minimal usability data exist to support the state of the art for new and innovative command and control user interface designs. The study collected data pertaining to the three classes of objective usability measures prescribed by ISO 9241-11: (1) effectiveness, (2) efficiency, and (3) satisfaction. Data for the dependent variables were collected through video and audio recordings, a time-stamped simulator data log, and the System Usability Scale (SUS) survey instrument, administered to forty-five participants with no to varying levels of conventional flight experience (i.e., private pilot and commercial pilot). The results suggested that individuals with a high level of conventional flight experience (i.e., a commercial pilot certificate) performed most effectively when compared to participants with low or no pilot experience. The one-way analysis of variance (ANOVA) computations for completion rates revealed statistical significance for trial three between subjects [F(2, 42) = 3.98, p = 0.02]. A post hoc t-test using a Bonferroni correction revealed statistical significance in completion rates [t(28) = -2.92, p < 0.01] between the low pilot experience group (M = 40%, SD = .50) and the high experience group (M = 86%, SD = .39). An evaluation of error rates in parallel with the completion rates for trial three also indicated that the high pilot experience group committed fewer errors (M = 2.44, SD = 3.9) during their third iteration than the low pilot experience group (M = 9.53, SD = 12.63) for the same trial iteration. Overall, the high pilot experience group (M = 86%, SD = .39) performed better than both the no pilot experience group (M = 66%, SD = .48) and the low pilot experience group (M = 40%, SD = .50) with regard to task success and the number of errors committed. Data collected using the SUS yielded an overall composite score (M = 67.3, SD = 21.0) for the representative HMI; the subscale scores for usability and learnability were 69.0 and 60.8, respectively. This study addressed a critical need for future research in the domain of UAS user interface designs and operator requirements as the industry experiences revolutionary growth at a very rapid rate. The deficiency in legislation to guide the scientific paradigm of UAS has generated significant discord within the industry, leaving many facets associated with the teleoperation of these systems in dire need of research attention.
Recommendations for future work included a need to: (1) establish comprehensive guidelines and standards for airworthiness certification for the design and development of UAS and UAS HMIs for command and control, (2) establish comprehensive guidelines to classify the complexity associated with UAS systems design, (3) investigate mechanisms to develop comprehensive guidelines and regulations to guide UAS operator training, (4) develop methods to optimize UAS interface design through automation integration and adaptive display technologies, and (5) adopt methods and metrics to evaluate human-machine interfaces related to UAS applications for system usability and system learnability.
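As a rough illustration of the analysis reported in this record, the sketch below shows how a one-way ANOVA on completion rates followed by a Bonferroni-corrected post hoc t-test might be computed with SciPy. The group labels, sample sizes, and randomly generated completion data are placeholders assumed for the example; they are not the study's data and will not reproduce the reported statistics.

```python
# Illustrative sketch only: one-way ANOVA on task-completion rates across three
# experience groups, followed by a Bonferroni-corrected post hoc t-test, using SciPy.
# The data below are randomly generated stand-ins, not the study's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Three assumed experience groups of 15 participants each; 1 = task completed, 0 = not.
groups = {
    "no_pilot":   rng.binomial(1, 0.65, size=15),
    "low_pilot":  rng.binomial(1, 0.40, size=15),
    "high_pilot": rng.binomial(1, 0.85, size=15),
}

# One-way ANOVA across the three groups (degrees of freedom would be 2 and 42 here).
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

# Post hoc pairwise comparison of interest, with a Bonferroni correction
# for the three possible pairwise tests.
t_stat, p_raw = stats.ttest_ind(groups["low_pilot"], groups["high_pilot"])
p_bonferroni = min(p_raw * 3, 1.0)
print(f"low vs. high experience: t = {t_stat:.2f}, corrected p = {p_bonferroni:.3f}")
```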
|