1

Visual guidance of landing behaviour when stepping down to a new level

Buckley, John, MacLellan, M.J., Tucker, M.W., Scally, Andy J., Bennett, S.J. January 2008
When stepping down from one level to another, the leading limb has to arrest the downward momentum of the body and subsequently receive and safely support bodyweight before level walking can begin. Such step downs are performed over a wide range of heights, and predicting when and where contact between the landing limb and the lower level will be made is likely a critical factor. To determine whether visual feedback obtained after movement initiation is habitually used to guide landing behaviour, the present study examined whether pre-landing kinematics and the mechanics of landing are modulated according to the type of visual feedback available during the stepping-down phase. Ten healthy participants (32.3 ± 7.9 years) stepped, from a standing position, down from three different heights onto a force platform, either coming immediately to rest or proceeding directly to walking across the laboratory. Repeated trials were undertaken under habitual vision conditions or with vision blurred or occluded 2–3 s prior to movement initiation. Pre-landing kinematics were assessed by determining, for the instant of landing, lead-limb knee and ankle angle, stepping distance, forwards positioning of the body CM within the base of support, and the forwards and downwards body CM velocity. Landing mechanics for the initial contact period were characterized using lead-limb vertical loading and stiffness, and trail-limb un-weighting. When vision was occluded, movement time, ankle plantarflexion and knee flexion were significantly increased compared with habitual vision, whereas forwards body CM positioning and velocity, vertical loading and stiffness, and trail-limb un-weighting were significantly reduced (p < 0.05). Similar adaptations were observed under blurred conditions, although to a lesser extent. Most variables were significantly affected by stepping task and step height.
Subjects likely reduced forwards CM position and velocity at the instant of landing in order to keep the CM well away from the anterior border of the base of support, presumably to ensure that boundary margins of safety were high should landing occur sooner or later than expected. The accompanying increase in ankle plantarflexion at the instant of landing, and the increase in single-limb support time, suggest that subjects tended to probe for the ground with their lead limb under modified vision conditions. They also had more bodyweight on the trail limb at the end of the initial contact period and, as a consequence, had a prolonged weight-transfer time. These findings indicate that under blurred or occluded vision conditions subjects adopted a cautious strategy whereby they 'sat back' on their trail limb and used their lead limb to probe for the ground. Hence, they did not fully commit to weight transfer until somatosensory feedback from the lead limb confirmed they had safely made contact. The effect of blurring vision was not identical to occluding it, and the differences between these conditions were consistent with the use of impoverished visual information on depth. These findings indicate that online vision is customarily used to regulate landing behaviour when stepping down.
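The lead-limb vertical stiffness mentioned above is commonly estimated as peak vertical ground-reaction force divided by limb compression during initial contact. A minimal sketch of that ratio, using made-up force-platform and limb-length samples (the function name and data are illustrative, not from the study):

```python
# Hypothetical sketch: estimating lead-limb vertical stiffness from
# force-platform and kinematic samples (all data below are fabricated).

def vertical_stiffness(forces_n, limb_lengths_m):
    """Peak vertical ground-reaction force divided by peak limb
    compression (shortening from initial length) over initial contact."""
    peak_force = max(forces_n)
    compression = limb_lengths_m[0] - min(limb_lengths_m)
    if compression <= 0:
        raise ValueError("limb never compressed during contact")
    return peak_force / compression

# Fabricated samples: force in newtons, limb length in metres.
forces = [50.0, 400.0, 820.0, 760.0, 600.0]
lengths = [0.95, 0.94, 0.92, 0.92, 0.93]
k = vertical_stiffness(forces, lengths)  # ≈ 2.7e4 N/m
```

A lower stiffness under occluded vision, as reported above, would correspond to the same bodyweight being absorbed over a larger limb compression.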
2

Augmented Reality for Spatial Perception in the Computer Assisted Surgical Trainer

Wagner, Adam January 2017
Traditional laparoscopic surgery continues to require significant training on the part of the surgeon before entering the operating room. Augmented Reality (AR) has been investigated for visual guidance in training and during surgery, but little work has investigated the effectiveness of AR techniques in giving the user better awareness of depth and space. In this work we propose several 2D AR overlays for visual guidance in training for laparoscopic surgery, with the goal of aiding the user's perception of depth and space in that limiting environment. A pilot study of 30 subjects (22 male and 8 female) was performed, with results showing the effect of the various overlays on subject performance of a path-following task in the Computer Assisted Surgical Trainer (CAST-III) system developed in the Model Based Design Lab. Deviation, economy of movement, and completion time are considered as metrics. Providing a reference indicator for the nearest point on the optimal path is found to result in a significant reduction (p < 0.05) in subject deviation from the path. The data also indicate a reduction in subject deviation along the depth axis and in total path length with overlays designed to provide depth information. Avenues for further investigation are presented.
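The deviation metric described above amounts to the distance from each tracked tool-tip sample to the nearest point on a piecewise-linear optimal path. A small sketch of that computation (the path, sample point, and function names are illustrative, not CAST-III internals):

```python
# Illustrative path-deviation metric: distance from a tool-tip sample
# to the nearest point on a polyline "optimal path". Data are made up.
import math

def nearest_point_on_segment(p, a, b):
    """Closest point to p on segment ab, in 3D."""
    ax, ay, az = a; bx, by, bz = b; px, py, pz = p
    abx, aby, abz = bx - ax, by - ay, bz - az
    denom = abx * abx + aby * aby + abz * abz
    t = 0.0 if denom == 0 else max(0.0, min(1.0,
        ((px - ax) * abx + (py - ay) * aby + (pz - az) * abz) / denom))
    return (ax + t * abx, ay + t * aby, az + t * abz)

def deviation(sample, path):
    """Distance from one sample to the nearest point on the path."""
    return min(
        math.dist(sample, nearest_point_on_segment(sample, a, b))
        for a, b in zip(path, path[1:])
    )

path = [(0, 0, 0), (1, 0, 0), (1, 1, 0)]   # illustrative optimal path
print(deviation((0.5, 0.2, 0.0), path))    # ≈ 0.2
```

The reference-indicator overlay in the study would display this nearest path point to the trainee; averaging `deviation` over all samples gives a per-trial score.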
3

Picking Up an Object from a Pile of Objects

Ikeuchi, Katsushi, Horn, Berthold K.P., Nagata, Shigemi, Callahan, Tom, Fein, Oded 01 May 1983
This paper describes a hand-eye system we developed to perform the bin-picking task. Two basic tools are employed: the photometric stereo method and the extended Gaussian image. The photometric stereo method generates the surface-normal distribution of a scene. The extended Gaussian image allows us to determine the attitude of an object from that normal distribution. Visual analysis of an image consists of two stages. The first stage segments the image into regions and determines the target region: the photometric stereo system provides the surface-normal distribution of the scene, and the system segments the scene into isolated regions using this distribution rather than the brightness distribution. The second stage determines object attitude and position by comparing the surface-normal distribution with the extended Gaussian image. Fingers with an LED sensor, mounted on the PUMA arm, can then successfully pick an object from the pile based on the information from the vision stage.
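An extended Gaussian image is, in essence, a histogram of surface-normal directions over orientation bins on the sphere. The sketch below bins normals coarsely by dominant axis; this six-bin scheme and the sample normals are invented for illustration and are much cruder than the tessellation a real EGI system would use:

```python
# Hedged sketch of an extended Gaussian image (EGI): a histogram of
# surface-normal directions, here binned by dominant coordinate axis.
# The binning scheme and the sample data are illustrative only.

def egi_histogram(normals):
    """Accumulate unit normals into six axis-aligned orientation bins."""
    bins = {"+x": 0, "-x": 0, "+y": 0, "-y": 0, "+z": 0, "-z": 0}
    for nx, ny, nz in normals:
        axis = max(("x", abs(nx)), ("y", abs(ny)), ("z", abs(nz)),
                   key=lambda t: t[1])[0]
        sign = {"x": nx, "y": ny, "z": nz}[axis]
        bins[("+" if sign >= 0 else "-") + axis] += 1
    return bins

# An upward-facing patch (two near-vertical normals) plus one side-facing
# normal: the histogram peaks in the +z bin.
normals = [(0.0, 0.0, 1.0), (0.0, 0.1, 0.99), (0.9, 0.0, 0.43)]
hist = egi_histogram(normals)  # '+z': 2, '+x': 1, others 0
```

Matching such a histogram, rotated through candidate attitudes, against a stored model EGI is what lets the second stage above recover object attitude.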
4

Physical Guidance in Motor Learning

Howard III, James Thomas January 2003
Previous studies of physical guidance (PG: physically constraining error during practice of a motor task) have found it to be ineffective in enhancing motor learning. However, most studies have used a highly constraining form of physical guidance that may have encouraged undue dependency. In addition, previous research has not fully considered the interaction between visual feedback and PG, and many of the studies have failed to use standard delayed retention tests with knowledge of results unavailable (no-KR). The current experiment examined the effects of varying levels of constraint in PG, as well as the interaction of PG and visual guidance (VG), using no-KR retention tests. This study involved 99 subjects divided into nine acquisition-condition groups, formed from a 3 × 3 factorial design with factors of PG × VG, each presented at levels designated tight, bandwidth, or none. Subjects undertook a two-dimensional pattern-drawing task with no KR, PG, or VG as a pre-test, before completing 100 practice trials under one of the nine conditions. The same test was given as a retention test (immediately after practice) and as a delayed retention test (two days later). A transfer test, using a different pattern, was also administered on the second day. Almost all groups performed better on the immediate retention test than they had on the pre-test. However, after two days only three groups (PG bandwidth-VG tight, PG none-VG bandwidth, and PG none-VG none) retained this improvement, and only two groups (PG bandwidth-VG bandwidth and PG none-VG none) performed significantly better on the transfer task than on their pre-test. It is proposed that bandwidth guidance generally promotes learning and that bandwidth physical guidance may enhance proprioceptive cues. Independent of PG and VG effects, KR (an overall error score) also facilitated learning.
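The "bandwidth" level of guidance described above constrains the movement only once its error exceeds a tolerance, leaving it free inside that band (a "tight" level would correspond to a zero-width band). A minimal one-dimensional sketch of that rule, with invented names and values:

```python
# Illustrative sketch of "bandwidth" physical guidance: the trainee's
# position is constrained only when it strays more than a tolerance
# (the bandwidth) from the target trajectory. Values are made up.

def bandwidth_guide(position, target, bandwidth):
    """Return the (possibly corrected) position after guidance.

    Inside the band the movement is untouched; outside it, the
    position is clamped back to the edge of the allowed band.
    """
    error = position - target
    if abs(error) <= bandwidth:
        return position                      # no guidance applied
    limit = bandwidth if error > 0 else -bandwidth
    return target + limit                    # clamp to band edge

print(bandwidth_guide(10.4, 10.0, 0.5))  # 10.4 (within band)
print(bandwidth_guide(11.2, 10.0, 0.5))  # 10.5 (clamped)
```

Setting `bandwidth=0` reproduces tight guidance (always clamped to the target), which is the condition the abstract suggests encourages dependency.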
5

Brand and usability in content-intensive websites

Yang, Tao 11 July 2014
Indiana University-Purdue University Indianapolis (IUPUI)

Our connections to the digital world are invoked by brands, but the intersection of branding and interaction design is still an under-investigated area. In particular, current websites are designed not only to support essential user tasks, but also to communicate an institution's intended brand values and traits. What we do not yet know, however, is which design factors affect which aspects of a brand. To demystify this issue, three sub-projects were conducted. The first project developed a systematic approach for evaluating the branding effectiveness of content-intensive websites (BREW). BREW gauges users' brand perceptions on four well-known branding constructs: brand as product, brand as organization, user image, and brand as person. It also provides rich guidelines for eBranding researchers on planning and executing a user study and on making improvement recommendations based on the study results. The second project offered a standardized perceived-usability questionnaire entitled DEEP (design-oriented evaluation of perceived web usability). DEEP captures perceived website usability on five design-oriented dimensions: content, information architecture, navigation, layout consistency, and visual guidance. While existing questionnaires assess more holistic concepts, such as ease of use and learnability, DEEP can more transparently reveal where a problem actually lies. Moreover, DEEP suggests that the two most critical and reliable usability dimensions are interface consistency and visual guidance. Capitalizing on the BREW approach and the findings from DEEP, a controlled experiment (N=261) was conducted by manipulating the interface consistency and visual guidance of an anonymized university website to see how these variables may affect the university's image.
Unexpectedly, consistency did not significantly predict brand image, while the effect of visual guidance on brand perception showed a marked gender difference. When visual guidance was significantly worsened, females became much less satisfied with the university in terms of brand as product (e.g., teaching and research quality) and user image (e.g., students' characteristics). In contrast, males' perceptions of the university's brand image stayed the same in most circumstances. The reason for this gender difference was revealed through a further path analysis and a follow-up interview, which inspired new research directions to further unpack the nexus between branding and interaction design.
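Scoring a multi-dimension questionnaire like DEEP typically means averaging each participant's item responses within each dimension. The sketch below shows that per-dimension averaging; the item-to-dimension mapping and the Likert responses are invented for illustration, not DEEP's actual item set:

```python
# Hypothetical sketch of scoring a DEEP-style questionnaire: average
# the Likert responses of the items mapped to each of the five
# dimensions named above. The item mapping here is invented.

DIMENSIONS = {
    "content": ["q1", "q2"],
    "information architecture": ["q3", "q4"],
    "navigation": ["q5"],
    "layout consistency": ["q6", "q7"],
    "visual guidance": ["q8"],
}

def subscale_scores(responses):
    """Mean response per dimension for one participant."""
    return {
        dim: sum(responses[i] for i in items) / len(items)
        for dim, items in DIMENSIONS.items()
    }

# One fabricated participant on a 1-5 Likert scale:
answers = {"q1": 5, "q2": 4, "q3": 3, "q4": 4,
           "q5": 2, "q6": 5, "q7": 5, "q8": 1}
scores = subscale_scores(answers)  # e.g. 'visual guidance': 1.0
```

Comparing such subscale means between manipulated site versions is the kind of analysis the experiment above would run before the path analysis.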
