1. The Ability of Four-Month-Olds to Discriminate Changes in Vocal Information in Multimodal Displays
McCartney, Jason (22 May 1999)
Recent investigations into infants' intersensory perception suggest a specific developmental pattern for infants' attention to visible and auditory attributes of dynamic human faces. This work has proposed that infants' perception progresses along a sensory continuum: beginning with multimodal sensory cues (e.g., auditory and visual), then visual cues alone, and finally auditory cues alone. Additionally, research has proposed that amodal, or invariant, sensory information directs infants' attention to specific redundant aspects of the surrounding environment (e.g., temporal synchronicity). The current research attempted to resolve potential methodological confounds in previous investigations of infant intersensory development by contrasting infant behavior in fixed-trial and infant-controlled habituation procedures. Moreover, it examined infants' attention to auditory manipulations within multimodal displays when redundant sensory information (synchronicity) was or was not available.
In Experiment 1, 4-month-old infants were habituated to complex audiovisual displays of a male or female face within an infant-controlled habituation procedure, and then tested for response recovery to a change in voice. For half the infants, the change in voice maintained synchronicity with the face; for the other half, it did not. The results showed significant response recovery (i.e., dishabituation) to the change in voice regardless of synchronicity condition. In Experiment 2, 4-month-old infants received the same face+voice test recordings used in Experiment 1, but within a fixed-trial habituation procedure. Again, synchronicity was manipulated across groups of infants. In contrast to Experiment 1, infants in the fixed-trial experiment failed to show evidence of voice discrimination.
These results suggest that infant-controlled procedures may be more sensitive measures of infant attention, especially for complex social displays. In addition, synchronicity appeared to be unnecessary for infants' ability to detect vocal differences across multimodal displays. In sum, these results highlight the importance of research methodology (e.g., infant control) and overall stimulus complexity (e.g., discrete vs. complex) in studies of infants' intersensory development.
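To make the procedural contrast concrete, the sketch below implements the standard decrement rule behind infant-controlled habituation: trials continue until looking time over a sliding window falls below a criterion fraction of initial looking, whereas a fixed-trial procedure presents a predetermined number of trials regardless of looking behavior. The window size, 50% criterion, and looking times here are common illustrative values, not parameters reported in the dissertation.

```python
# Minimal sketch of an infant-controlled habituation criterion.
# Window size, 50% decrement, and looking times are illustrative assumptions.

def habituation_trial(look_times, window=3, decrement=0.5):
    """Return the 1-indexed trial on which the habituation criterion is met,
    or None if the infant never habituates.

    Criterion: mean looking time over the last `window` trials falls below
    `decrement` times the mean of the first `window` trials.
    """
    if len(look_times) < 2 * window:
        return None
    baseline = sum(look_times[:window]) / window
    for end in range(2 * window, len(look_times) + 1):
        recent = look_times[end - window:end]
        if sum(recent) / window < decrement * baseline:
            return end  # habituation reached; test trials would begin here
    return None  # criterion never met (a fixed-trial design stops regardless)

# Example: one infant's looking times (s) across habituation trials
print(habituation_trial([22.0, 18.5, 20.0, 14.0, 9.5, 8.0, 6.5]))  # -> 7
```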
2. Implications of differences of echoic and iconic memory for the design of multimodal displays
(January 2012)
It has been well documented that dual-task performance is more accurate when each task is based on a different sensory modality. It is also well documented that sensory memory duration differs across the senses, particularly between visual (iconic) and auditory (echoic) sensory memory. In this dissertation I address whether differences in sensory memory duration (e.g., iconic vs. echoic) have implications for the design of a multimodal display. Since echoic memory persists for seconds, in contrast to iconic memory, which persists for only milliseconds, one of my hypotheses was that in a visual-auditory dual-task condition, performance would be better if the visual task were completed before the auditory task than vice versa. In Experiment 1, I investigated whether the ability to recall multimodal stimuli is affected by recall order, with each mode being responded to separately. In Experiment 2, I investigated the effects of stimulus order and recall order on the ability to recall information from a multimodal presentation. In Experiment 3, I investigated the effect of presentation order using a more realistic task. In Experiment 4, I investigated whether manipulating the presentation order of stimuli of different modalities improves humans' ability to combine the information from the two modalities to make decisions based on pre-learned rules.

As hypothesized, accuracy was greater when visual stimuli were responded to first and auditory stimuli second. Also as hypothesized, performance was improved by not presenting both sequences at the same time, limiting the perceptual load. Contrary to my expectations, overall performance was better when the visual sequence was presented before the audio sequence. Though presenting a visual sequence prior to an auditory sequence lengthens the visual retention interval, it also provides time for visual information to be recoded into a more robust form without disruption. Experiment 4 demonstrated that decision making requiring the integration of visual and auditory information is enhanced by reducing workload and promoting strategic use of echoic memory. A framework for predicting the results of Experiments 1-4 is proposed and evaluated.
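The timing logic behind the recall-order hypothesis can be made concrete with a back-of-the-envelope model. The sketch below assumes simple exponential decay of the two sensory traces, with round-number time constants (iconic around 0.3 s, echoic around 3 s) and an assumed per-item report time; none of these values come from the dissertation, but the asymmetry they produce is the point of the hypothesis.

```python
# Illustrative decay model: why report the visual item first?
# Time constants and report time are assumptions, not dissertation values.
import math

TAU = {"iconic": 0.3, "echoic": 3.0}  # assumed decay constants (s)
REPORT_TIME = 1.5                     # assumed time to report one item (s)

def trace(modality, t):
    """Remaining strength of a sensory trace t seconds after offset."""
    return math.exp(-t / TAU[modality])

def total_strength(first, second):
    # The first item is reported immediately; the second waits REPORT_TIME.
    return trace(first, 0.0) + trace(second, REPORT_TIME)

print("visual first:  ", total_strength("iconic", "echoic"))  # ~1.61
print("auditory first:", total_strength("echoic", "iconic"))  # ~1.01
```

Under these assumptions the echoic trace survives the wait almost intact while the iconic trace is essentially gone, which is the rationale for responding to the visual task first.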
3. Aging and Automation: Non-chronological Age Factors and Takeover Request Modality Predict Transition to Manual Control Performance during Automated Driving
Huang, Gaojian (30 June 2021)
Adults aged 65 years and older have become the fastest-growing age group worldwide and are known to face perceptual, cognitive, and physical challenges in later stages of life. Automation may help to support these various age-related declines. However, many current automated systems suffer from design limitations and occasionally require human intervention. To date, there is little guidance on how to design human-machine interfaces (HMIs) that help a wide range of users, especially older adults, transition to manual control. Multimodal interfaces, which present information in the visual, auditory, and/or tactile sensory channels, may be one viable option for communicating roles in human-automation systems, but insufficient empirical evidence is available for this approach. Also, the aging process is not homogeneous across individuals, and physical and cognitive factors may better indicate one's aging trajectory; yet the benefits that such individual differences have on task performance in human-automation systems are not well understood. Thus, the purpose of this dissertation was to examine the effects of 1) multimodal interfaces and 2) one particular non-chronological age factor, engagement in physical exercise, on transitioning from automated to manual control in dynamic automated environments. Automated driving was used as the testbed. The work was completed in three phases.
The vehicle takeover process involves 1) the perception of takeover requests (TORs), 2) action selection from possible maneuvers that can be performed in response to the TOR, and 3) the execution of selected actions. The first phase focused on differences in the detection of multimodal TORs between younger and older drivers during the initial phase of the vehicle takeover process. Participants were asked to notice and respond to uni-, bi-, and trimodal combinations of visual, auditory, and tactile TORs. Dependent measures were brake response time and maximum brake force. Overall, bi- and trimodal warnings were associated with faster responses for both age groups across driving conditions, but the benefit was more pronounced for older adults. Also, engaging in physical exercise was found to be correlated with smaller maximum brake force.
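As a point of reference, the sketch below shows one plausible way to derive these two dependent measures from a logged brake-pedal signal; the variable names, 10 Hz sampling, and response threshold are illustrative assumptions, not details from the dissertation.

```python
# Illustrative extraction of brake response time and maximum brake force.
# Signal names, sampling rate, and threshold are assumptions.
import numpy as np

def brake_metrics(t, pedal_force, tor_onset, threshold=1.0):
    """t: timestamps (s); pedal_force: brake force samples (N);
    tor_onset: time the takeover request was issued (s);
    threshold: minimum force (N) counted as a brake response (assumed)."""
    pressed = (t >= tor_onset) & (pedal_force >= threshold)
    if not pressed.any():
        return None, 0.0                        # no brake response recorded
    response_time = float(t[pressed][0] - tor_onset)
    max_force = float(pedal_force[t >= tor_onset].max())
    return response_time, max_force

t = np.arange(0.0, 3.0, 0.1)                      # 10 Hz timestamps (s)
force = np.where(t > 1.2, 80.0 * (t - 1.2), 0.0)  # pedal pressed ~1.2 s in
print(brake_metrics(t, force, tor_onset=1.0))     # ~0.3 s response, ~136 N
```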
The second phase aimed to quantify the effects of age and physical exercise on takeover task performance as a function of modality type and lead time (i.e., the amount of time given to make decisions about which action to employ). However, due to COVID-19 restrictions, the study could not be completed, so only pilot data were collected. Dependent measures included decision-making time and maximum resulting jerk. Preliminary results indicated that older adults had a higher maximum resulting jerk than younger adults. However, the differences between the two age groups in decision-making time and maximum resulting jerk were narrower for the exercise group than for the non-exercise group.
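For readers unfamiliar with the metric, the sketch below shows a common way to compute maximum resulting jerk: the peak magnitude of the time derivative of the combined longitudinal and lateral acceleration. The sampling rate and example values are assumptions for illustration, and this is not the dissertation's analysis code.

```python
# Illustrative computation of maximum resulting jerk from logged accelerations.
# Assumes a fixed sampling rate; values below are invented for the example.
import numpy as np

def max_resulting_jerk(ax, ay, hz=10.0):
    """Peak jerk magnitude (m/s^3) from longitudinal (ax) and lateral (ay)
    acceleration samples: differentiate each component, then take the
    magnitude of the resulting jerk vector."""
    jx = np.diff(ax) * hz        # d(ax)/dt via finite differences
    jy = np.diff(ay) * hz        # d(ay)/dt
    return float(np.max(np.hypot(jx, jy)))

# Example: a driver brakes sharply after a takeover request
ax = np.array([0.0, -0.5, -2.0, -4.5, -5.0, -4.0])  # m/s^2, longitudinal
ay = np.array([0.0,  0.1,  0.6,  1.2,  0.8,  0.3])  # m/s^2, lateral
print(max_resulting_jerk(ax, ay))  # higher values = more abrupt takeover
```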
Given COVID-19 restrictions, the objective of phase two shifted to focus on other (non-age-related) gaps in the multimodal literature. Specifically, the new phase examined the effects of signal direction, lead time, and modality on takeover performance. Dependent measures included pre-takeover metrics (e.g., takeover and information processing times) as well as post-takeover variables (e.g., maximum resulting acceleration). Takeover requests with a tactile component were associated with faster takeover and information processing times. Shorter lead times were correlated with poorer takeover quality.
The third and final phase used knowledge from phases one and two to investigate the effectiveness of meaningful tactile signal patterns for improving takeover performance. Structured and graded tactile signal patterns were embedded into the vehicle's seat pan and seat back. Dependent measures were response and information processing times, and maximum resulting acceleration. Overall, only in the instructional signal group, meaningful tactile patterns (whether in the seat back or seat pan) led to worse takeover performance, in terms of response time and maximum resulting acceleration, than signals without patterns. Additionally, tactile information presented in the seat back was perceived as the most useful and satisfying.
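To give a sense of what a graded pattern might look like in an interface prototype, the following encoding is purely hypothetical: the motor layout, intensities, and timings are invented for illustration and are not the patterns evaluated in the dissertation.

```python
# Hypothetical encoding of a "graded" tactile takeover-request pattern:
# vibration intensity ramps up across three seat-pan motors to convey urgency.
from dataclasses import dataclass

@dataclass
class Pulse:
    motor: str        # assumed motor IDs: "pan_left", "pan_center", "pan_right"
    intensity: float  # normalized drive level, 0.0-1.0
    duration_ms: int

GRADED_TOR = [
    Pulse("pan_left",   0.4, 150),
    Pulse("pan_center", 0.7, 150),
    Pulse("pan_right",  1.0, 150),  # strongest pulse last: urgency ramps up
]
```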
Findings from this research can inform the development of next-generation HMIs that account for differences in various demographic factors, as well as advance our knowledge of the aging process. In addition, this work may contribute to improved safety across many complex domains that contain different types and forms of automation, such as aviation, manufacturing, and healthcare.