About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
21

Down Stream [Appalachia]

Franusich, David J. 06 May 2020 (has links)
Down Stream [Appalachia] is an immersive, interactive art installation that addresses themes of ecological preservation, conservation, and connectedness, illuminating the precarity of imperiled freshwater species in the Appalachian region. The exhibition is composed of reflective, refractive sculptures and underwater video footage, surrounded by fully immersive spatial audio. Both the audio and visual elements react to audience presence and proximity. The species highlighted are the Candy Darter (Etheostoma osburni); the Cumberlandian Combshell (Epioblasma brevidens) and other freshwater mussels; and the Eastern Hellbender Salamander (Cryptobranchus alleganiensis alleganiensis). This paper examines the context and content of the installation, its progression and influences, and themes of ecology and the environment in the southeastern United States. / Master of Fine Arts / There are endangered species right here in the mountains of Virginia, and hardly anyone knows about them. Down Stream [Appalachia] is an immersive, interactive art installation that attempts to raise awareness and to let people connect with these animals that otherwise go unseen. This paper examines the context, content, and themes of the installation.
22

Modelagem de um sistema para auralização musical utilizando Wave Field Synthesis / Modeling a system for musical auralization using Wave Field Synthesis

Silva, Marcio José da 31 October 2014 (has links)
Seeking a practical application of Wave Field Synthesis (WFS) theory in music, this research modeled a sound system capable of creating spatial sound images with this technique. Unlike most other sound-projection techniques, which work with a small, localized listening area, WFS can project the sound of each source, such as a musical instrument or a voice, to a different point within the listening space, in a region that can cover almost the entire area of that space, depending on the number of installed loudspeakers.
The development of modular, structured code for WFS was based on the patch-oriented platform Pure Data (Pd) and on the AUDIENCE auralization system developed at USP, and it can be integrated as a tool for interactive sound spatialization. The solution employs dynamic patches and a modular architecture, giving the code flexibility and maintainability, with advantages over other existing software, particularly in installation, operation, and handling a large number of sound sources and loudspeakers. Special loudspeakers with features that facilitate their use in musical applications were also developed for this system.
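The core idea the abstract describes, driving each loudspeaker with a delayed, attenuated copy of the source signal so the array synthesizes the source's wavefront, can be sketched as follows. This is a minimal, simplified 2D illustration of a WFS-style driving function, not the thesis's Pure Data implementation; the 1/sqrt(r) amplitude term and the array geometry are illustrative assumptions.

```python
import numpy as np

def wfs_delays_and_gains(source_xy, speaker_xy, c=343.0):
    """Per-speaker delay (s) and amplitude gain for a point source
    behind a linear loudspeaker array (simplified 2D sketch: delay
    from propagation time, gain falling off as 1/sqrt(distance))."""
    d = np.linalg.norm(speaker_xy - source_xy, axis=1)  # source-to-speaker distances
    delays = d / c                                      # propagation delay per speaker
    gains = 1.0 / np.sqrt(np.maximum(d, 1e-6))          # crude amplitude weighting
    gains /= gains.max()                                # normalize loudest speaker to 1
    return delays, gains

# 8-speaker linear array along the x-axis, virtual source 1 m behind it
speakers = np.column_stack([np.linspace(-1.75, 1.75, 8), np.zeros(8)])
delays, gains = wfs_delays_and_gains(np.array([0.0, -1.0]), speakers)
```

Each output channel then plays the source signal delayed by `delays[i]` and scaled by `gains[i]`; the number of loudspeakers directly bounds how large the usable listening region becomes, which is the trade-off the abstract mentions.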
24

MODELS AND ALGORITHMS FOR INTERACTIVE AUDIO RENDERING

Tsingos, Nicolas 14 April 2008 (has links) (PDF)
Interactive virtual reality systems combine visual, auditory, and haptic representations to simulate, immersively, the exploration of a three-dimensional world seen from the viewpoint of an observer controlled in real time by the user. Historically, most work in this field has focused on the visual aspects (for example, interactive display of complex 3D models, or realistic and efficient lighting simulation), and comparatively little work has been devoted to the simulation of virtual sound sources, also called auralization. Yet sound simulation is clearly a key factor in producing synthetic environments: auditory perception complements visual perception to produce more natural interaction. In particular, spatialized sound effects, whose direction of arrival is faithfully reproduced at the listener's ears, are especially important for localizing objects, separating multiple simultaneous sound signals, and providing cues about the spatial characteristics of the environment (size, materials, etc.). Most immersive virtual reality systems, from the most complex simulators to consumer video games, now implement sound synthesis and spatialization algorithms that improve navigation and increase realism and the user's sense of presence in the synthetic environment. Like image synthesis, of which it is the auditory counterpart, auralization, also called sound rendering, is a vast subject at the crossroads of multiple disciplines: computer science, acoustics and electroacoustics, signal processing, music, and geometric computation, as well as psychoacoustics and audio-visual perception.
It covers three main problems: synthesis and interactive control of sounds; simulation of sound propagation effects in the environment; and spatial perception and reproduction at the listener's ears. Historically, these three problems emerged from work in architectural acoustics, musical acoustics, and psychoacoustics. A fundamental difference between sound rendering for virtual reality and classical acoustics, however, lies in multimodal interaction and in the efficiency required of algorithms intended for interactive applications. These aspects make it a field in its own right, of growing importance both in acoustics and in image synthesis and virtual reality.
25

Toward adapting spatial audio displays for use with bone conduction: the cancellation of bone-conducted and air-conducted sound waves.

Stanley, Raymond M. 03 November 2006 (has links)
Virtual three-dimensional (3D) auditory displays use signal-processing techniques to alter sounds presented through headphones so that they seem to originate from specific spatial locations around the listener. In some circumstances, bone-conduction headsets (bonephones) can provide an alternative sound presentation mechanism. However, existing 3D audio rendering algorithms need to be adjusted for bonephones rather than headphones. This study provided anchor points for a function of shift values that could be used to adapt virtual 3D auditory displays for use with bonephones. The shift values were established by having participants adjust the phase and amplitude of two waves in order to cancel out the signal and thus produce silence. These adjustments occurred in a listening environment consisting of air-conducted and bone-conducted tones, as well as air-conducted masking. Performance in the calibration condition suggested that participants understood the task and could perform it with reasonable accuracy. In the bone-to-air listening conditions, the data produced a clear set of anchor points for an amplitude shift function. The data did not, however, reveal anchor points for a phase shift function; the data for phase were highly variable and inconsistent. Application of the shifts is discussed, along with validation and follow-up studies and future research to establish full functions and better understand phase.
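The cancellation task at the heart of the study relies on a basic signal-processing fact: a tone summed with an equal-amplitude copy shifted by half a cycle yields silence. A minimal numerical sketch of that principle (the frequency and amplitude here are arbitrary choices, not the study's stimuli):

```python
import numpy as np

fs, f = 44100, 500.0                      # sample rate (Hz), tone frequency (Hz)
t = np.arange(fs) / fs                    # one second of time samples
reference = 0.3 * np.sin(2 * np.pi * f * t)

# The cancelling tone: same amplitude, phase shifted by pi. In the study a
# participant tunes these two knobs by ear; here the exact settings that
# produce cancellation are plugged in directly.
amplitude, phase = 0.3, np.pi
canceller = amplitude * np.sin(2 * np.pi * f * t + phase)

residual = reference + canceller          # sums to (numerical) silence
```

Any mismatch in either knob leaves an audible residual, which is what makes the participants' adjusted amplitude values usable as anchor points for a shift function.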
26

3D Surround Sound Application for Game Environments

Tång, Alfred January 2014 (has links)
This report covers the creation and implementation of a 3D audio application using the FMOD Ex API. It also walks through the basic principles of 3D and surround audio, gives examples of other uses of 3D audio, compares the software and hardware technologies available today, and finally presents the results of implementing the 3D sound environment software, both server and client. The application was created to explore the use of 3D audio in immersive environments; no comparable application was available when this project was conducted. An inductive approach, along with a form of rapid application development and scenario creation, was used to achieve the results presented in this report. The implementation resulted in working client and server software that can create a 3D sound environment. Based on a user evaluation, the software proved quite successful. With the help of the implementation, a user (the operator) can now create a sound environment for another user (a listener). The environment is created and designed by the operator using the client side of the implementation and then played through the server side, which is connected to a 4.1 speaker system. The operator can steer and add sounds from the client to an active environment, and the listener experiences the changes in real time. This project was conducted as a bachelor thesis in computer science at Mälardalens University in Västerås, Sweden.
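Placing a sound in a multi-speaker layout like the 4.1 system mentioned above comes down to computing a gain per speaker from the source's direction. The toy sketch below uses simple cosine panning normalized to constant power over four speakers; it is a stand-in for what an engine such as FMOD does internally, not code from the thesis, and the speaker angles are assumed.

```python
import math

def quad_gains(azimuth_deg):
    """Gains for a mono source panned around a 4-speaker square layout
    (FL, FR, RL, RR at 45, -45, 135, -135 degrees). Speakers facing
    away from the source get zero gain; the rest share the signal with
    constant total power."""
    speakers = [45.0, -45.0, 135.0, -135.0]
    raw = []
    for s in speakers:
        diff = math.radians((azimuth_deg - s + 180) % 360 - 180)  # wrapped angle
        raw.append(max(0.0, math.cos(diff)))                      # cosine taper
    norm = math.sqrt(sum(g * g for g in raw)) or 1.0
    return [g / norm for g in raw]                                # unit power

front_left = quad_gains(45.0)   # all energy in the FL speaker
front_mid = quad_gains(0.0)     # split equally between FL and FR
```

Moving the azimuth continuously, as the operator does from the client, sweeps the gains smoothly across the speakers, which is what produces the real-time movement the listener hears.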
27

Measurement and validation of bone-conduction adjustment functions in virtual 3D audio displays

Stanley, Raymond M. 06 July 2009 (has links)
Virtual three-dimensional auditory displays (V3DADs) use digital signal processing to deliver sounds (typically through headphones) that seem to originate from specific external spatial locations. This set of studies investigates the delivery of V3DADs through bone-conduction transducers (BCTs) in addition to conventional headphones. Although previous research has shown that spatial separation can be induced through BCTs, some additional signal adjustments are required for optimization of V3DADs, due to the difference in hearing pathways. The present studies tested a bone-conduction adjustment function (BAF) derived from equal-loudness judgments on pure tones whose frequencies were spaced one critical band apart. Localization performance was assessed through conventional air-conduction headphones, BCTs with only transducer correction, and BCTs with a BAF. The results showed that in the elevation plane, the BAF was effective in restoring the spectral cues altered by the bone-conduction pathway. No evidence for increased percept variability or decreased lateralization in the bone-conduction conditions was found. These findings indicate that a V3DAD can be implemented on a BCT and that a BAF will improve performance, but that there is an apparent performance cost that cannot be addressed with BAFs measured using the methodology in the present studies.
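A bone-conduction adjustment function as described above is, operationally, a set of per-critical-band gain offsets applied to the signal before it reaches the transducer. The sketch below applies such offsets in the frequency domain; the band centre frequencies and the dB values are purely illustrative placeholders, not the measured values from the thesis.

```python
import numpy as np

# Hypothetical BAF: per-band gain offsets (dB) from equal-loudness matches,
# one value per critical-band centre frequency (Hz). Illustrative numbers only.
baf_freqs_hz = np.array([250, 500, 1000, 2000, 4000, 8000])
baf_gains_db = np.array([3.0, 1.5, 0.0, -2.0, -4.5, -6.0])

def apply_baf(signal, fs):
    """Shape a signal's spectrum by the BAF: interpolate the per-band
    offsets across all FFT bins, apply them as linear gains, invert."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    gains_db = np.interp(freqs, baf_freqs_hz, baf_gains_db)  # clamped outside range
    spectrum *= 10.0 ** (gains_db / 20.0)                    # dB -> linear gain
    return np.fft.irfft(spectrum, n=len(signal))

fs = 44100
tone = np.sin(2 * np.pi * 250 * np.arange(fs) / fs)
boosted = apply_baf(tone, fs)   # the 250 Hz band gets about +3 dB here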
28

Contributions à la mise au point de méthodes adaptatives de reproduction de champs sonores multi-zone pour les auditeurs en mouvement : Sound zones pour auditeurs en mouvement / Contributions to the development of adaptive methods for the reproduction of multizone sound fields for moving listeners : Sound zones for moving listeners

Roussel, Georges 03 July 2019 (has links)
The growing number of audio playback devices raises the problem of sharing the same physical space without sharing the same sound space. Sound Zones make it possible to play independent, spatially separated audio programs using a loudspeaker array in combination with sound field reproduction methods.
The problem is split into two zones: the Bright zone, where the audio content must be reproduced, and the Dark zone, where it must be cancelled. Many methods exist to solve this problem, but most only handle listeners in a static position. They rely on the direct resolution of optimization methods such as Pressure Matching (PM). For moving listeners, however, these methods have too high a computational cost, making them impossible to apply to a dynamic problem. The aim of this thesis is to develop a solution whose complexity is compatible with dynamic control of Sound Zones while maintaining the performance of conventional methods. Under the assumption that listener movements are slow, an iterative resolution of the PM problem is proposed and assessed. The LMS, NLMS, and APA algorithms are compared on the basis of free-field simulations. The LMS method is the most advantageous in terms of complexity, but it suffers from a reproduction error. A memory effect limiting the reactivity of the algorithms is also highlighted; it is corrected by implementing a leaky variant (Variable Leaky LMS, or VLLMS) that introduces a forgetting factor.
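The iterative PM idea above can be sketched at a single frequency: stack the bright-zone and dark-zone transfer matrices, set the desired pressure to the target in the bright zone and zero in the dark zone, and run LMS gradient steps on the loudspeaker weights. The transfer matrices below are random stand-ins for acoustic transfer functions, and the step size is an assumption; this illustrates the plain LMS variant only, not the VLLMS correction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-frequency setup: transfer matrices from 8 loudspeakers to
# 16 control microphones in each zone (random stand-ins for acoustics).
G_bright = rng.standard_normal((16, 8)) + 1j * rng.standard_normal((16, 8))
G_dark = rng.standard_normal((16, 8)) + 1j * rng.standard_normal((16, 8))
p_target = np.ones(16, dtype=complex)              # desired bright-zone pressure

G = np.vstack([G_bright, G_dark])                  # stacked PM system
p_des = np.concatenate([p_target, np.zeros(16)])   # silence in the dark zone

w = np.zeros(8, dtype=complex)                     # loudspeaker filter weights
mu = 0.005                                         # LMS step size (assumed)
for _ in range(2000):                              # iterative PM solution
    error = p_des - G @ w                          # pressure mismatch in both zones
    w = w + mu * G.conj().T @ error                # complex LMS gradient step
```

Because each step costs only a matrix-vector product, the weights can track slowly moving zone positions by simply updating `G` between steps, which is the property that makes the iterative approach viable for moving listeners.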
29

Implementation and Comparative Analysis of Head-Related and Binaural Room Impulse Response in a Mid-Side Decomposition / Implementation och Jämförande Analys av Huvud-Relaterade och Binaurala Rumsimpuls-Responser i en Mid-Sid Uppdelning

Ling, Jonathan January 2023 (has links)
This thesis aimed to clarify the essential factors involved in externalising audio over headphones. Extensive research was conducted, examining binaural cues and the latest advancements in the field. A novel approach was proposed that applies head-related impulse responses (HRIRs) and binaural room impulse responses (BRIRs) in a Mid-Side decomposition. The objective was to enhance frontal externalization while increasing control over centre-panned and side-panned elements. The proposed method underwent rigorous testing in various setups, accompanied by objective and subjective evaluations, and the objective measures were correlated with the findings from the subjective evaluations. Interaural coherence (IC) analysis revealed that the BRIR exhibited lower overall coherence values than the HRIR. This was anticipated, since BRIRs capture room acoustics that affect sound perception relative to anechoic conditions. Introducing simple room acoustics, such as early reflections and reverberation tails, significantly reduces the coherence at higher frequencies for the HRIR. Connecting these findings to the listening test, lower IC generally corresponded to a wider audio configuration; assessing frontal externalization, however, proved challenging. Among the tested configurations, the two BRIR models achieved the most width, with the unsmoothed version performing slightly better. This suggests a tradeoff between externalization and colouration, as the smoothed BRIR model excelled in spectral colouration and preference. For the HRIR, adding room acoustics slightly increased the width, but it received lower ratings for spectral colouration and was not preferred over the HRIR model without room acoustics. This reinforces the significance of preserving the original spectral characteristics.
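The Mid-Side decomposition underpinning the approach is a simple, exactly invertible transform: the mid channel carries the centre-panned content and the side channel the stereo difference, so each can be convolved with a different impulse response (e.g. an HRIR pair for mid, a BRIR pair for side, per the thesis's idea). The sketch below shows the transform itself with identity processing, demonstrating only that the decomposition is lossless; it is not the thesis's processing chain.

```python
import numpy as np

def ms_encode(left, right):
    """Stereo -> Mid/Side: mid holds centre-panned content,
    side holds the stereo difference."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

def ms_decode(mid, side):
    """Mid/Side -> stereo (exact inverse of ms_encode)."""
    return mid + side, mid - side

# With identity processing in between, the round trip is exact; the
# thesis's method would instead convolve mid and side with different
# impulse responses before decoding.
left = np.array([1.0, 0.5, -0.25])
right = np.array([0.2, -0.5, 0.75])
mid, side = ms_encode(left, right)
l2, r2 = ms_decode(mid, side)   # recovers the original stereo pair
```

The invertibility is what gives the method its selling point: any externalization processing applied to mid or side alone changes exactly the centre-panned or side-panned elements, and nothing else.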
30

Spatialized Sonification for the Learning of Surgical Navigation / Spatialiserad Sonifikation för inlärning av Kirurgisk Navigation

Danielsson, Alexander January 2023 (has links)
Historically, education in surgical navigation for minimally invasive neurosurgery has been constrained by several factors. Medical students have been required to be physically present in the operating room to observe a teacher perform the different procedures, which restricts their opportunities to gain valuable hands-on experience. An extended reality simulation system that employs auditory feedback in the form of sonification could provide an inexpensive alternative to this traditional approach. Such a system would allow medical students to gain practical experience, with valuable insights, during their initial years of training without requiring access to the operating room. To perform a first evaluation of the impact of sonification on neurosurgical learning using extended reality simulations, a prototype of a surgical simulation tool with six possible sonifications was implemented for the task of aligning a catheter against a target angle. The sonification types studied were spatial, psychoacoustic, and direct parameter mapping, each of which encoded the component angles either in parallel or sequentially. The sonifications were evaluated against each other and against a baseline condition in a comparative mixed-design user study that measured the participants' efficacy as accuracy, precision, time to completion, and perceived workload for an assisted neurosurgical simulation task.
Participants were significantly slower when using the psychoacoustic sonification than when using no aid. Both the spatial and direct sonifications showed non-significant tendencies to be slower than the baseline condition. While no significant difference was found between the sonifications, participants tended to have higher efficacy with the spatial and direct sonifications than with the psychoacoustic sonification. These two sonifications therefore show the most promise as candidates for an auditory feedback system in an extended reality simulator for surgical navigation. However, further evaluation is needed to establish the full effect of the direct and spatial sonifications on students' efficacy.
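A direct parameter-mapping sonification of the kind tested above can be reduced to a single function from alignment error to an audio parameter, here pitch: the further the catheter is from the target angle, the lower the tone. The mapping shape and constants below are illustrative assumptions, not values from the thesis prototype.

```python
def error_to_pitch(angle_error_deg, f_target=440.0, max_error_deg=90.0,
                   octaves=1.0):
    """Direct parameter mapping: zero error plays the target pitch;
    increasing error glides the tone down, up to one octave at the
    maximum error. All constants are illustrative."""
    err = min(abs(angle_error_deg), max_error_deg) / max_error_deg  # 0..1
    return f_target * 2.0 ** (-octaves * err)   # exponential pitch glide

on_target = error_to_pitch(0.0)     # 440.0 Hz when aligned
worst_case = error_to_pitch(90.0)   # 220.0 Hz at maximum error
```

Encoding the two component angles "in parallel" would map each to a separate parameter (e.g. pitch and pan) of one sound, while "sequentially" would alternate between two such tones, which is the design axis the study varied.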
