Current analysis techniques for two-dimensional numerical data in space physics rely on visual inspection. Data sets acquired from the natural laboratory of the interstellar medium may contain events that are masked by noise, making them difficult to identify. This thesis presents research on the use of sound as an adjunct to current data visualisation techniques to explore, analyse and augment signatures in space physics data. It presents a new sonification technique that decomposes a space physics data set into components of interest (frequency, oscillatory modes, etc.), and its use as an adjunct to data visualisation to explore and analyse space science data sets characterised by non-linearity (a system that does not satisfy the superposition principle, or whose output is not proportional to its input). Integrating aspects of multisensory perceptualisation and human attention mechanisms, the question addressed by this dissertation is: does sound, used as an adjunct to current data visualisation, augment the perception of signatures in space physics data that are masked by noise? To answer this question, the following additional questions had to be answered: a) Is sound used as an adjunct to visualisation effective in increasing sensitivity to signals occurring at attended, unattended or unexpected locations, extended in space, when the signal occurs in the presence of a dynamically changing competing cognitive load (noise) that makes the signal visually ambiguous? b) How can multimodal perceptualisation (sound as an adjunct to visualisation) and attention control mechanisms be combined to help allocate attention to identify visually ambiguous signals? One aim of these questions is to investigate whether the use of sound together with a visual display increases sensitivity to signal detection in the presence of visual noise in the data, compared with a visual display alone. Radio, particle, wave and high-energy data are explored using a sonification technique developed as part of this research; the technique, its application and its results are numerically validated and presented. This thesis presents the results of three experiments and of a training experiment. In all four experiments, volunteers used sound as an adjunct to data visualisation to identify changes in graphical visual and audio representations, and these results are compared with those obtained using audio rendering only and visual rendering only. In the first experiment, audio rendering did not result in significant benefits when used alone or with a visual display. In the second and third experiments, audio as an adjunct to visual rendering became significant when a fourth cue was added to the spectra: a red line sweeping across the visual display at the rate the sound was played, synchronising the audio and visual presentations. The results show that a third congruent multimodal stimulus in synchrony with the sound helps space scientists identify events masked by noise in 2D data. Results of the training experiments are also reported.
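As a hedged illustration of the kind of parameter-mapping sonification the abstract describes (not the thesis's actual pipeline), a minimal sketch in Python might map each value of a 1D slice of the data to a pitch and write the result to a WAV file; the function name `sonify`, the frequency range and the normalisation scheme are illustrative assumptions.

```python
# Minimal parameter-mapping sonification sketch (illustrative only).
# Assumes a 1D series, e.g. one row of a 2D space physics data set.
import numpy as np
import wave

def sonify(series, out_path="sonified.wav", sample_rate=44100,
           seconds_per_sample=0.05, f_min=220.0, f_max=880.0):
    """Map each data value to a pitch in [f_min, f_max] and concatenate
    short sine tones into a mono 16-bit WAV file."""
    series = np.asarray(series, dtype=float)
    # Normalise to [0, 1]; a constant series maps to the middle of the range.
    span = series.max() - series.min()
    norm = (series - series.min()) / span if span > 0 else np.full_like(series, 0.5)
    freqs = f_min + norm * (f_max - f_min)

    n = int(sample_rate * seconds_per_sample)
    t = np.arange(n) / sample_rate
    # Short linear fade in/out on each tone to avoid audible clicks.
    envelope = np.minimum(1.0, np.minimum(np.arange(n), np.arange(n)[::-1]) / (0.1 * n))
    audio = np.concatenate([np.sin(2 * np.pi * f * t) * envelope for f in freqs])

    pcm = (audio * 32767).astype(np.int16)
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(sample_rate)
        w.writeframes(pcm.tobytes())

if __name__ == "__main__":
    # Example: a sine burst buried in noise, analogous to a signal masked by noise.
    data = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.3 * np.random.randn(200)
    sonify(data)
```

In such a mapping, the playback position advances at a fixed rate, which is what a synchronised visual cue (such as the sweeping red line described above) would track.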
Identifier | oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:630967 |
Date | January 2013 |
Creators | Diaz Merced, Wanda Liz |
Publisher | University of Glasgow |
Source Sets | Ethos UK |
Detected Language | English |
Type | Electronic Thesis or Dissertation |
Source | http://theses.gla.ac.uk/5804/ |