  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Improving the detectability of oxygen saturation level targets for preterm neonates: A laboratory test of tremolo and beacon sonifications

Deschamps, Marie-Lys, Sanderson, Penelope, Hinckfuss, Kelly, Browning, Caitlin, Loeb, Robert G., Liley, Helen, Liu, David
Recent guidelines recommend oxygen saturation (SpO₂) levels of 90%-95% for preterm neonates on supplemental oxygen, but it is difficult to discern such levels with current pulse oximetry sonifications. We tested (1) whether adding levels of tremolo to a conventional log-linear pulse oximetry sonification would improve identification of SpO₂ ranges, and (2) whether adding a beacon reference tone to conventional pulse oximetry confuses listeners about the direction of change. Participants using the Tremolo (94%) or Beacon (81%) sonifications identified SpO₂ range significantly more accurately than participants using the LogLinear sonification (52%). The Beacon sonification did not confuse participants about direction of change. The Tremolo sonification may have advantages over the Beacon sonification for monitoring the SpO₂ of preterm neonates, but both must be further tested with clinicians in clinically representative scenarios, and with different levels of ambient noise and distraction. Crown Copyright © 2016 Published by Elsevier Ltd. All rights reserved.
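To make the mapping concrete, here is a minimal synthesis sketch in Python. It is not the study's implementation: the log-linear pitch constants, the 90%-95% band boundaries, and the tremolo rates are all illustrative assumptions.

```python
import numpy as np

SR = 44100  # sample rate (Hz)

def pulse_tone(spo2, duration=0.2):
    """One pulse beep: pitch encodes SpO2 via a log-linear map, and the
    presence/rate of tremolo marks which clinical range band SpO2 is in."""
    # Log-linear pitch map: higher saturation -> higher pitch (illustrative constants).
    freq = 440.0 * 2 ** ((spo2 - 90) / 12.0)
    t = np.arange(int(SR * duration)) / SR
    tone = np.sin(2 * np.pi * freq * t)
    # Tremolo layer: the amplitude-modulation rate marks the range band.
    if spo2 > 95:        # above target band: fast tremolo
        trem_rate = 16.0
    elif spo2 >= 90:     # inside the 90%-95% target band: no tremolo
        trem_rate = 0.0
    else:                # below target band: slow tremolo
        trem_rate = 8.0
    if trem_rate:
        tone *= 0.5 * (1 + np.cos(2 * np.pi * trem_rate * t))
    return 0.8 * tone

beep = pulse_tone(87.0)  # a below-target saturation: low pitch, slow tremolo
```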
2

Increase Driving Situation Awareness and In-vehicle Gesture-based Menu Navigation Accuracy with Heads-Up Display

Cao, Yusheng
More and more novel functions are being integrated into vehicle infotainment systems to allow drivers to perform secondary tasks with high accuracy and low accident risk. Mid-air gesture interaction is one of them. This thesis designed and tested a novel interface to address a specific problem caused by this method of interaction: visual distraction inside the car. In this study, a Heads-Up Display (HUD) was integrated with a gesture-based menu navigation system to allow drivers to see menu selections without looking away from the road. An experiment was conducted to investigate the potential of this system to improve drivers' driving performance, situation awareness, and gesture interactions. Twenty-four participants tested the system, providing subjective feedback and objective performance data. This thesis found that the HUD significantly outperformed the Heads-Down Display (HDD) in participants' preference, perceived workload, level 1 situation awareness, and secondary-task performance. However, these gains came at the cost of poorer driving performance and relatively longer visual distraction. This thesis provides directions for future research on improving the overall user experience while the driver interacts with an in-vehicle gesture interaction system. / M.S. / Driving is an essential daily activity, and until fully autonomous vehicles arrive it will remain the primary task when operating a vehicle. To improve the overall travel experience, however, drivers also perform secondary tasks such as adjusting the air conditioning, switching music, and navigating a map. Car accidents may happen while drivers perform secondary tasks because those tasks distract from the primary task: driving safely. Many novel interaction methods have been implemented in modern cars, such as touch-screen and voice interaction. This thesis introduces a gesture interaction system that lets the user navigate secondary-task menus with mid-air gestures. To further reduce the visual distraction caused by the system, the gesture interaction system was integrated with a head-up display (HUD) that shows visual feedback on the front windshield, letting the driver use the system without looking away and keeping peripheral vision on the road. The experiment recruited 24 participants to test the system. Each participant provided subjective feedback about workload, experience, and preference. A driving simulator was used to collect driving performance, eye-tracking glasses collected eye gaze data, and the gesture menu system recorded gesture system performance. The experiment manipulated two factors expected to affect the user experience, visual feedback type (HUD vs. heads-down display) and sound feedback (with vs. without), yielding four conditions. Results showed that the HUD helped drivers perform secondary tasks faster, understand the current situation better, and experience lower workload. Most participants preferred the HUD over the HDD. However, drivers made trade-offs when using the HUD: they spent more time focusing on the HUD while performing secondary tasks, and their driving performance suffered.
By analyzing the resulting data, this thesis provides a direction for conducting HUD and in-vehicle gesture interaction research and for improving users' performance and overall experience.
3

Tillämpning av ljud i IT-system för att öka användarupplevelsen: en litteraturstudie [Application of sound in IT systems to enhance the user experience: a literature review]

Åstholm, Carl January 2017
There is today a certain lack of knowledge among systems developers about how sound can be used in systems to enhance the user experience, and many developers are skeptical about the usability of sound. Auditory display is an umbrella term for an array of techniques that use sound as a medium to communicate different kinds of information and data from the system to the user. As much of the research on auditory display focuses solely on developing accessibility tools for the visually impaired, rather than on more general systems intended for users without specific needs, there is a need for a literature review focused on the latter group of systems. We asked the question "how can auditory display be utilized in the development of traditional IT systems?" and carried out a literature review in which 23 articles were analyzed to identify different use cases for auditory display, with the purpose of presenting these use cases in a format useful to developers who are interested in implementing sound in their systems but are unsure where to start. The results indicate that auditory display can be used to good effect in, among other areas, network traffic monitoring, user interfaces and widgets, and in-vehicle interfaces. Lastly, we propose promising potential use cases that are in need of further research.
4

Interactive sonification of a physics engine

Perkins, Rhys John January 2013
Physics engines have become increasingly prevalent in everyday technology. In the context of this thesis they are regarded as a readily available data set with the potential to present the process of sonification intuitively to a wide audience. Unfortunately, this process is not the focus of attention when formative decisions are made concerning the continued development of these engines. This may be a missed opportunity, considering that the field of interactive sonification upholds the importance of physical causality for the analysis of data through sound. The following investigation examines the contextual framework of this field and argues that the physics engine, as part of typical game engine architecture, is an appropriate foundation on which to design and implement a dynamic toolset for interactive sonification. The basis for this design is supported by a number of significant theories which suggest that the underlying data of a rigid body dynamics system can sustain an inherent audiovisual metaphor for interaction, interpretation and analysis. Furthermore, this metaphor can be enhanced by the potential of the computer to construct unique abstractions which build upon the many pertinent ideas and practices in the surrounding literature. These abstractions result in a mental model for the transformation of data to sound that has a number of advantages over a physical modelling approach while maintaining the same creative potential for instrument building, composition and live performance. Ambitions for both sonification and its creative potential are realised by several components which present the user with a range of options for interacting with this model. The implementation of these components yields a design that offers a unique interpretation of existing strategies as well as overcoming certain limitations of comparable work.
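As one illustration of the kind of mapping such a toolset could expose, the sketch below converts a rigid-body collision event into synthesis parameters. The event structure and the mapping constants are hypothetical, not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class Collision:
    impact_speed: float      # relative speed at contact (m/s)
    mass: float              # mass of the struck body (kg)
    material_damping: float  # 0..1, how quickly the body's ring decays

def collision_to_sound(c: Collision) -> dict:
    """Map rigid-body collision data to synthesis parameters: harder impacts
    sound louder, heavier bodies ring lower, damped materials decay faster."""
    return {
        "amplitude": min(1.0, c.impact_speed / 10.0),  # clip to full scale
        "pitch_hz": 2000.0 / (1.0 + c.mass),           # more mass -> lower pitch
        "decay_s": 1.5 * (1.0 - c.material_damping),   # more damping -> shorter ring
    }

print(collision_to_sound(Collision(impact_speed=3.0, mass=2.0, material_damping=0.4)))
```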
5

Auditory display design: an investigation of a design pattern approach

Frauenberger, Chris January 2009
This thesis investigates the design of audio for feedback in human-technology interaction: auditory displays. Despite promising progress in research and the potential benefits, we currently see little impact of audio in everyday interfaces. Changing interaction paradigms, new contexts of use and inclusive design principles, however, increase the need for an efficient, non-visual means of conveying information. Motivated by these needs, this work describes the development and evaluation of a methodological design framework, aiming to enhance knowledge and skill transfer in auditory display design and to enable designers to build more efficient and compelling auditory solutions. The work starts by investigating the current practice of designing audio in the user interface. A survey amongst practitioners and researchers in the field and a study of the research literature highlighted the need for a structured design approach. Building on these results, paco (pattern design in the context space) was developed, a framework providing methods to capture, apply and refine design knowledge through design patterns. A key element of paco, the context space, serves as the organising principle for patterns, artefacts and design problems and supports designers in conceptualising the design space. The evaluation of paco is the first comparative study of a design methodology in this area. Experts in auditory display design and novice designers participated in a series of experiments to determine the usefulness of the framework. The evaluation demonstrated that paco facilitates the transfer of design knowledge and skill between experts and novices as well as promoting reflection on and recording of design rationale. Alongside these principal achievements, important insights were gained about the design process which lay the foundations for future research in this subject area. This work contributes to the field of auditory display as it reflects on current practice and proposes a means of supporting designers to communicate, reason about and build on each other's work more efficiently. The broader field of human-computer interaction may also benefit from the availability of design guidance for exploiting the auditory modality to answer the challenges of future interaction design. Finally, with paco, a generic design-pattern methodology was proposed that is potentially similarly beneficial to other design disciplines.
6

Spatial Auditory Maps for Blind Travellers

Talbot, Martin 07 April 2011
Empirical research shows that blind persons who have the ability and opportunity to access geographic map information tactually benefit in their mobility. Unfortunately, tangible maps are not available in large numbers. Economics is the leading explanation: tangible maps are expensive to build, duplicate and distribute. SAM, short for Spatial Auditory Map, is a prototype created to address the unavailability of tangible maps. SAM presents geographic information to a blind person encoded in sound. A blind person receives maps electronically and accesses them using a small, inexpensive digitizing tablet connected to a PC. The interface provides location-dependent sound as the user manipulates a stylus, plus a schematic visual representation for users with residual vision. The assessment of SAM with a group of blind participants suggests that blind users can learn unknown environments as complex as the ones represented by tactile maps, in the same amount of reading time. This research opens new avenues in visualization techniques, promotes alternative communication methods, and proposes a human-computer interaction framework for conveying map information to a blind person.
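A minimal sketch of the core idea, location-dependent sound driven by a stylus position, might look like the following. The mapping (x to stereo panning, y to pitch) and all constants are illustrative assumptions, not SAM's actual encoding.

```python
import numpy as np

SR = 44100  # sample rate (Hz)

def position_to_sound(x, y, duration=0.1):
    """Map a normalized stylus position (x, y in [0, 1]) to a short stereo
    tone: x controls left/right panning, y controls pitch."""
    freq = 220.0 + y * 660.0  # low pitch at the bottom edge, high at the top
    t = np.arange(int(SR * duration)) / SR
    tone = np.sin(2 * np.pi * freq * t)
    # Equal-power panning: x = 0 is hard left, x = 1 is hard right.
    left = np.cos(x * np.pi / 2) * tone
    right = np.sin(x * np.pi / 2) * tone
    return np.stack([left, right], axis=1)

frame = position_to_sound(0.25, 0.8)  # upper-left region of the map
```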
7

Measuring the accuracy of four attributes of sound for conveying changes in a large data set.

Holmes, Jason
Human auditory perception is suited to receiving and interpreting information from the environment, but this knowledge has not been used extensively in designing computer-based information exploration tools. It is not known which aspects of sound are useful for accurately conveying information in an auditory display. An auditory display was created using Pure Data (PD), a graphical programming language used primarily to manipulate digital sound. The interface for the auditory display was a blank window. When the cursor was moved around in this window, the generated sound changed based on the underlying data value at that point. An experiment was conducted to determine which attribute of sound most accurately represents data values in an auditory display. The four attributes of sound tested were frequency (sine waveform), frequency (sawtooth waveform), loudness, and tempo. Twenty-four subjects were given the task of finding the highest data point using sound alone, under each of the four sound treatments. Three dependent variables were measured: distance accuracy, numeric accuracy, and time on task. Repeated-measures ANOVA procedures on these variables did not reach statistical significance (α = .05): none of the sound treatments was more accurate than the others at representing the underlying data values. 52% of the trials were accurate within 50 pixels of the highest data point (target). An interesting finding was the tendency for the frequency-sine waveform to be used in the least accurate trial attempts (38%). Loudness, on the other hand, accounted for very few (12.5%) of the least accurate trial attempts. In completing the experimental task, subjects employed four different search techniques: perimeter, parallel sweep, sector, and quadrant. The perimeter technique was the most commonly used.
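The study built its display in Pure Data; the sketch below is a rough NumPy analogue showing how a single normalized data value could drive each of the four tested attributes. All parameter ranges are illustrative assumptions.

```python
import numpy as np

SR = 44100  # sample rate (Hz)

def render(value, attribute, duration=0.5):
    """Render a tone in which `value` (0..1, the data under the cursor) drives
    one of the four tested attributes: sine pitch, sawtooth pitch, loudness,
    or tempo (beep rate)."""
    t = np.arange(int(SR * duration)) / SR
    if attribute == "sine":
        return np.sin(2 * np.pi * (200 + 800 * value) * t)
    if attribute == "sawtooth":
        f = 200 + 800 * value
        return 2 * ((f * t) % 1.0) - 1.0  # naive (non-band-limited) sawtooth
    if attribute == "loudness":
        return value * np.sin(2 * np.pi * 440 * t)
    if attribute == "tempo":
        rate = 2 + 14 * value  # beeps per second: higher values beep faster
        gate = (np.sin(2 * np.pi * rate * t) > 0).astype(float)
        return gate * np.sin(2 * np.pi * 440 * t)
    raise ValueError(attribute)

chunk = render(0.7, "tempo")  # a high data value under the tempo treatment
```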
8

Layer Based Auditory Displays Of Robots’ Actions And Intentions

Orthmann, Bastian January 2021
Unintentional encounters between robots and humans will increase in the future and require concepts for communicating the robots' internal states. Auditory displays can be used to convey the relevant information to people who share public spaces with social robots. Based on data gathered in a participatory design workshop with robot experts, a layer-based approach to real-time generated audio feedback is introduced, in which the information to be displayed is mapped to certain audio parameters. Initial exploratory sound designs were created and evaluated in an online study. The results show which audio parameter mappings should be examined further for displaying certain internal states, such as mapping amplitude modulation to the robot's speed or emphasizing alarm frequencies to indicate urgent tasks. Features such as speed, urgency and large size were correctly identified in more than 50% of evaluations, while information about the robot's interactivity or its small size was not comprehensible to the participants.
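A hedged sketch of such a layer-based mapping, with each state variable normalized to 0..1 and all constants invented for illustration rather than taken from the thesis:

```python
import numpy as np

SR = 44100  # sample rate (Hz)

def robot_sound(speed, urgency, size, duration=1.0):
    """Layered state display: size sets the base pitch, speed sets the
    amplitude-modulation rate, and urgency mixes in a high 'alarm' partial."""
    t = np.arange(int(SR * duration)) / SR
    base = np.sin(2 * np.pi * (120 + 300 * (1 - size)) * t)   # bigger robot -> lower pitch
    am = 0.5 * (1 + np.sin(2 * np.pi * (1 + 7 * speed) * t))  # faster robot -> quicker pulsing
    alarm = urgency * 0.3 * np.sin(2 * np.pi * 2500 * t)      # urgent task -> alarm band
    return am * base + alarm

mix = robot_sound(speed=0.8, urgency=0.2, size=0.5)  # fast, calm, medium-sized robot
```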
9

Novel In-Vehicle Gesture Interactions: Design and Evaluation of Auditory Displays and Menu Generation Interfaces

Tabbarah, Moustafa 30 January 2023
Driver distraction is a major contributor to car crashes, and visual distraction caused by using in-vehicle infotainment systems (IVIS) degrades driving performance and increases crash risk. Air gesture interfaces were developed to mitigate driver distraction, and auditory displays have been shown to decrease off-road glances and improve perceived workload. However, the design of auditory displays has not been fully investigated. This thesis investigates the design of auditory displays for air-gesture IVIS through two dual-task experiments combining simulated driving with air-gesture menu navigation. Experiment 1, with 32 participants, employed a 2x4 mixed-model design and explored the effect of four auditory display conditions (auditory icon, earcon, spearcon, and no sound) and two menu-generation interfaces (fixed and adaptive) on driving performance, eye glance behavior, secondary-task performance, and subjective perception. Each auditory display (within subjects) was tested with both a fixed and an adaptive menu-generation interface (between subjects). Results from Experiment 1 demonstrated that spearcons produced the least visual distraction and workload and the best system usability, and were favored by participants; fixed menu generation outperformed adaptive menu generation in driving safety and secondary-task performance. Experiment 2, with 24 participants, used the best interface from Experiment 1 to further explore the auditory display with the most potential: the spearcon. 70% spearcons and 40% spearcons were compared to text-to-speech (TTS) and no-audio conditions. Results from Experiment 2 showed that 70% spearcons induced less visual distraction than 40% spearcons, and that 70% spearcons produced the most accurate but slowest secondary-task selections. Experimental results are discussed in the context of multiple resource theory and the working memory model, design guidelines are proposed, and future work is discussed. / Master of Science / Driver distraction is a major cause of car accidents, and using in-vehicle infotainment systems (IVIS) while driving can distract drivers and increase the risk of crashes. Air gesture interfaces and auditory displays were created to help reduce driver distraction, and using auditory displays has been shown to decrease the number of times a driver looks away from the road and to improve the driver's perceived workload. However, the design of auditory displays has not been thoroughly studied. This study examined the design of auditory displays for air-gesture IVIS through two experiments in which participants drove a simulator and navigated menus with air gestures. The first experiment, with 32 participants, looked at the effect of four types of auditory display (auditory icon, earcon, spearcon, and no sound) and two types of menu-generation interface (fixed and adaptive) on driving performance, eye glance behavior, secondary-task performance, and subjective perception. The second experiment, with 24 participants, compared 70% and 40% spearcon displays to text-to-speech and no-audio conditions. The results showed that spearcon displays produced the least visual distraction and workload, the best system usability, and the most accurate but slowest secondary-task selections.
These findings are discussed in relation to existing theories of how the brain processes multiple tasks, and design guidelines for auditory displays are proposed for future research.
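For context, a spearcon is a menu label's text-to-speech rendering sped up until it is no longer heard as speech; a "70% spearcon" keeps 70% of the original duration. The sketch below is a hedged illustration, not the thesis's implementation: the input file name is hypothetical, and it assumes the librosa and soundfile packages, using librosa's phase-vocoder time stretching as one way to compress speech while roughly preserving pitch.

```python
import librosa
import soundfile as sf

# Hypothetical TTS rendering of a menu label, prepared in advance.
y, sr = librosa.load("menu_item_tts.wav", sr=None)

def spearcon(y, percent):
    """Time-compress speech to `percent` of its original duration
    (a 70% spearcon keeps 70% of the duration, i.e. rate = 1/0.7)."""
    return librosa.effects.time_stretch(y, rate=1.0 / (percent / 100.0))

sf.write("spearcon_70.wav", spearcon(y, 70), sr)
sf.write("spearcon_40.wav", spearcon(y, 40), sr)
```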
