11

A design framework for user interfaces of 3D audio production tools

Mathew, Justin D. 02 October 2017
There has been significant interest in providing immersive listening experiences for a variety of applications, and recent improvements in audio reproduction give 3D audio practitioners the capability to produce realistic and imaginative immersive auditory scenes. Even though technologies to reproduce 3D audio content are becoming readily available to consumers, producing and authoring this type of content remains difficult due to the variety of rendering techniques, perceptual considerations, and limitations of available user interfaces. This thesis examines these issues through the development of a framework of design spaces that classifies how 3D audio objects can be created and manipulated, from two viewpoints: morphological analysis of 3D audio methods and practices, and interaction design. By gathering ethnographic data on the tools, methods, and practices of 3D audio practitioners, reviewing spatial perception related to 3D audio, and conducting a morphological analysis of the related objects of interest (3D audio objects, interactive parameters, and rendering techniques), we identified the tasks required to produce 3D audio content and how 3D audio objects can be created and manipulated. This work provided the dimensions of two design spaces that identify the interactive spatial parameters of audio objects by their recording and rendering methods, describing how user interfaces provide visual feedback on and control of those interactive parameters. We then designed several interaction techniques for 3D audio authoring and studied their performance and usability according to different characteristics of the input and mapping methods (multiplexing, integrality, directness). We observed performance differences when creating and editing audio trajectories, suggesting that increasing the directness of the mapping technique improves performance and that a balance between separability and integrality of input methods can result in a satisfactory trade-off between user performance and cost of equipment. These results inform designers on what they might expect in terms of usability when designing input and mapping methods for 3D audio trajectory authoring tasks. From these viewpoints, we proposed the design criteria required for 3D audio production user interfaces, refining and completing the framework of design spaces. We believe this framework and the results of our studies can help designers better account for important dimensions in the design process, analyze the functionality of current tools, and improve the usability of user interfaces for 3D audio production tools.
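As a toy illustration of two of the mapping characteristics named above (separable versus integral control), the sketch below maps a 3-DOF input to an audio object's spherical coordinates in two different ways. The axis assignments and scalings are assumptions made for illustration; they are not the interaction techniques evaluated in the thesis.

```python
import numpy as np

def separable_mapping(x, y, z):
    """Each device axis controls exactly one spherical coordinate
    (a separable, multiplexed mapping)."""
    azimuth = x * 180.0          # degrees
    elevation = y * 90.0         # degrees
    distance = 1.0 + z * 9.0     # metres
    return azimuth, elevation, distance

def integral_mapping(position):
    """A single 3D gesture (e.g. a tracked hand position) controls all
    three coordinates at once (an integral mapping)."""
    p = np.asarray(position, float)
    distance = float(np.linalg.norm(p))
    azimuth = float(np.degrees(np.arctan2(p[1], p[0])))
    elevation = float(np.degrees(np.arcsin(p[2] / distance))) if distance > 0 else 0.0
    return azimuth, elevation, distance

print(separable_mapping(0.5, 0.2, 0.1))     # three independent knobs
print(integral_mapping((1.0, 1.0, 0.5)))    # one spatial gesture
```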
12

Intercorrelation between sound design, binaural and non-binaural audio systems: Effects on general vertical localization precision and reaction time in a non-visual directional choice task 3D game

Baker, David January 2022
Spatialization of audio in the vertical plane has historically been limited. Instead, sound designers have used basic DSP to create pseudo-height effects that convey the positions of the corresponding objects. In recent years, binaural synthesis has become more widespread following an increase in the use of software rendering methods. With these advancements, uncertainty remains about best practices for combining sound design with binaural synthesis for the vertical placement of audio cues in games. This thesis compares vertical localization performance between head-related transfer functions (HRTFs) and stereo interaural level difference (ILD) when the sounds have been designed with basic DSP to carry an auditory spatial schema (ASC), a form of embedded positional information. No significant reaction-time difference was found between the conditions, while hit count (the number of correct directions selected) differed significantly in some of the comparisons.
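To make the two rendering conditions concrete, here is a minimal Python sketch of a stereo ILD panner and a crude DSP "pseudo-height" cue (an elevation-dependent low-pass filter). The parameter values and the filtering choice are illustrative assumptions, not the sound design used in the study.

```python
import numpy as np
from scipy.signal import butter, lfilter

def ild_pan(mono, azimuth_deg, max_ild_db=12.0):
    """Stereo panning by interaural level difference only (no time delay)."""
    ild = max_ild_db * np.sin(np.radians(azimuth_deg))   # +right / -left
    g_r = 10 ** (+ild / 40)    # split the level difference between channels
    g_l = 10 ** (-ild / 40)
    return np.stack([g_l * mono, g_r * mono])             # (2, N) stereo signal

def pseudo_height(mono, elevation_deg, fs=44100):
    """Crude height cue: darker timbre below the horizon, brighter above."""
    cutoff = np.interp(elevation_deg, [-45, 45], [2000, 12000])  # Hz, assumed range
    b, a = butter(2, cutoff / (fs / 2), btype="low")
    return lfilter(b, a, mono)

fs = 44100
source = np.random.randn(fs)                       # 1 s of noise as a stand-in cue
stereo = ild_pan(pseudo_height(source, 30.0, fs), azimuth_deg=45.0)
```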
13

Indoor Navigation For The Blind And Visually Impaired: Validation And Training Methodology Using Virtual Reality

Wang, Sili 24 March 2017
In this thesis we propose a navigation instruction validation tool and a user-training tool for the PERCEPT system. The validation tool evaluates the navigation instructions in a virtual reality environment by ensuring that each path in the virtual environment can be traversed by following the instructions. This tool serves as a first, automatic validation of navigation instructions prior to testing them with blind and visually impaired users. The user-training tool enables the blind user to explore and become familiar with the real environment by using a virtual environment generated in a Unity3D-based game. The user interacts with the game through the PERCEPT smartphone client, just as they would interact in the real environment. Motion in the game is emulated with the keyboard, and motion directions follow the navigation instructions obtained through the smartphone. This user-training tool improves users' experience in the real environment by enabling them to explore and learn the environment before they arrive in the physical space.
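The validation idea, checking that every path can be traversed by following the stored instructions, can be sketched as a simple simulation over a graph model of the environment. The node names and instruction format below are hypothetical, not PERCEPT's actual data model.

```python
# Hypothetical floor-plan graph: node -> {action: next node}
ENVIRONMENT = {
    "entrance": {"forward": "hallway"},
    "hallway":  {"forward": "elevator", "turn_right": "restroom"},
    "restroom": {},
    "elevator": {},
}

def validate(instructions, start, destination):
    """Return True if executing the instructions from start ends at destination."""
    node = start
    for action in instructions:
        if action not in ENVIRONMENT[node]:
            return False          # instruction cannot be executed at this node
        node = ENVIRONMENT[node][action]
    return node == destination

# The automatic check would run this over every (start, destination) pair.
assert validate(["forward", "turn_right"], "entrance", "restroom")
assert not validate(["forward"], "entrance", "restroom")
```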
14

HRTF-Based 3D Audio Spatialization

Marcelo Politzer Couto 05 February 2021
Virtual Reality (VR) applications with low-latency head tracking require high-quality spatial audio effects. However, classic VR/game sound approaches cannot properly simulate the acoustics of the real world, so current audio research is moving towards 3D spatial audio for more realistic simulation. In 3D spatial audio, the listener has the sensation that sound comes from a particular direction in 3D space; in other words, the listener can localize a source based on audio alone and has a more coherent and immersive experience when this is paired with visual simulation. In this new context, game engines should provide sound designers with a set of 3D spatial audio tools. The following common effects are desirable in such a toolbox: reverberation and reflections, which can be employed in the creation of caverns or churches (places with lots of echo); modulation, which can increase the perceived variety of a recorded sound by slightly varying its pitch (as in the sounds of footsteps); and volume mixing and fading, which can create dramatic moments in storytelling and music reproduction. In this work, we propose a real-time audio engine that spatializes point sound sources in virtual environments. The engine is an open-source architecture that provides a basic set of audio effects and an efficient way to mix and match them. We implement 3D audio spatialization by leveraging recorded head-related impulse responses (HRIRs), and we produce special sound effects with digital signal processing (DSP) techniques. Although some powerful commercial audio SDKs for Virtual Reality are currently available (e.g. Oculus), our audio engine prototype may be a flexible option when adaptation, simplification, testing, and parameter tuning are necessary.
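The core of HRIR-based spatialization is convolving a mono source with the left and right head-related impulse responses measured for the desired direction. A minimal Python/NumPy sketch of that step is shown below; the dummy impulse responses stand in for measured HRIRs (e.g. from the MIT KEMAR or CIPIC databases), and this is not the engine's actual code.

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono source at the direction encoded by the HRIR pair."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=0)   # (2, N) binaural signal

fs = 44100
mono = np.random.randn(fs)                   # 1 s of noise as a stand-in source
hrir_l = np.zeros(256); hrir_l[0] = 1.0      # dummy impulse response, left ear
hrir_r = np.zeros(256); hrir_r[40] = 0.5     # delayed, attenuated: source lateralized left
binaural = spatialize(mono, hrir_l, hrir_r)
```

A real-time engine would additionally select (or interpolate between) HRIR pairs as the tracked head orientation changes and process audio in short blocks rather than whole signals.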
15

The influence of stereoscopy on sound perception: a case study on the sound mixing of stereoscopic-3D movies

Hendrickx, Etienne 04 December 2015
Few psychoacoustic studies have examined the influence of stereoscopy on the sound mixing of movies, yet very different opinions can be found in the cinema industry and in scientific papers: some argue that sound needs to be mixed differently for stereoscopic movies, while others claim that this influence is negligible. A first set of experiments focused on the perception of ambience. Eight sequences, in their stereoscopic (s-3D) and non-stereoscopic (2D) versions and with several different sound mixes, were presented to subjects in a movie theater. For each presentation, subjects had to judge to what extent the mix sounded too frontal or too "surround." The goal was to determine whether stereoscopy influences the perception of the front/surround balance of ambience. Results showed that this influence was weak and appeared only for a few sequences, which was consistent with a preliminary experiment conducted in a mixing auditorium where subjects had to set the front/surround balance of several sequences themselves. Studies were then conducted on the perception of sound objects such as dialog and on-screen effects. A fourth experiment focused on ventriloquism in elevation: when presented with temporally coincident but spatially discordant auditory and visual stimuli, subjects sometimes perceive the sound as coming from the same location as the visual stimulus. This phenomenon is often referred to as ventriloquism because it evokes the illusion created by a ventriloquist when his voice seems to emanate from his puppet rather than from his own mouth. While this effect has been extensively examined in the horizontal plane, and to a lesser extent in distance, few psychoacoustic studies have focused on elevation. In this experiment, sequences of a man talking were presented to subjects. His voice could be reproduced on different loudspeakers, which created disparities in both azimuth and elevation between the sound and the visual stimulus. For each presentation, subjects had to indicate whether or not the voice seemed to emanate from the mouth of the actor. Ventriloquism was found to be highly effective in elevation, which suggests that audiovisual coherence in elevation might be unnecessary in theaters.
16

Studies about personalization of the head-related transfer function in binaural virtual auditory displays

Sergio Gilberto Rodriguez Soria 18 January 2006
This work presents several proposals associated with the optimal use of head-related transfer functions (HRTFs) in virtual auditory spaces presented via headphones. These proposals lead to personalization of the HRTF for particular individuals, using a combination of structural and morphological modeling techniques. In the context of structural modeling, this work focuses on modeling the contribution of the pinna to the HRTF; the pinna is the anatomical structure responsible for the perception of elevation. Thus, the first step was to extract a set of pinna-related transfer functions (PRTFs) from the HRTFs of a published database. This was accomplished using several techniques: linear prediction analysis to track the resonances, windowing to eliminate the influence of the torso, autocorrelation and group-delay functions to emphasize the notches, and other algorithms to combine resonances and notches into a single magnitude response. Using this new database of PRTFs together with a set of proposed anthropometric parameters and others registered in the database, a vector space corresponding to pinna anthropometry is linearly mapped onto a vector space corresponding to the spectral features of the PRTF, yielding several linear transformations for the estimation of new PRTFs outside the database. The estimation attains 66% reconstruction in the training group. The work focuses on the spectral characteristics important for elevation perception and is therefore limited to the median plane of the frontal hemisphere, where there are no meaningful interaural differences or head diffraction effects. Finally, a sound-source localization test system is proposed to validate the model.
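A small sketch of the anthropometry-to-spectrum mapping step: a linear transformation fitted by least squares from pinna measurements to PRTF spectral features, then applied to a subject outside the training set. The dimensions and random data below are placeholders, not the actual database or feature set used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 8))    # 30 training subjects x 8 anthropometric parameters
S = rng.normal(size=(30, 64))   # 30 training subjects x 64 spectral features (e.g. log-magnitude bins)

# Least-squares fit of a single linear transformation W such that S ~ A @ W.
W, *_ = np.linalg.lstsq(A, S, rcond=None)

new_subject = rng.normal(size=(1, 8))       # measurements of an unseen subject
estimated_prtf_features = new_subject @ W   # personalized spectral estimate
print(estimated_prtf_features.shape)        # (1, 64)
```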
17

Propagation of Sound in Two-Dimensional Virtual Acoustic Environments

Sergio Alvares R. Souza Maffra 16 July 2003
For a long time, computational simulation of acoustic phenomena has been used mainly in the design and study of the acoustic properties of concert and lecture halls. Recently, however, there has been growing interest in using such simulations in virtual environments to enhance the user's sense of immersion. Generally speaking, a virtual acoustic environment must be able to accomplish two tasks: simulating the propagation of sound in an environment, and reproducing audio with spatial content, that is, in a way that allows the listener to recognize the direction from which the sound arrives. These tasks are the topic of this dissertation. We begin by reviewing the most common algorithms for simulating sound propagation and, briefly, the techniques for reproducing audio with spatial content. We then present the implementation of a virtual acoustic environment, based on beam-tracing algorithms, that simulates the propagation of sound waves in two-dimensional environments. Because most of the propagation computation is performed in a pre-processing stage, the implemented virtual acoustic environment handles only sound sources that are fixed in space. The computed propagation paths consist of specular reflections and diffractions.
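As a pointer to how specular reflection paths are found geometrically, the sketch below computes a first-order reflection in 2D with the classical image-source (mirror) construction, a simpler relative of the beam-tracing approach used in the dissertation. The wall layout and positions are made up for illustration.

```python
import numpy as np

def mirror_source(source, wall_a, wall_b):
    """Reflect the source position across the line through the wall endpoints."""
    s = np.asarray(source, float)
    a = np.asarray(wall_a, float)
    b = np.asarray(wall_b, float)
    d = b - a
    d = d / np.linalg.norm(d)
    foot = a + np.dot(s - a, d) * d      # foot of the perpendicular on the wall line
    return 2 * foot - s                   # mirrored (image) source position

def reflection_path_length(source, listener, wall_a, wall_b):
    """Length of the specular path source -> wall -> listener."""
    image = mirror_source(source, wall_a, wall_b)
    return float(np.linalg.norm(np.asarray(listener, float) - image))

# Source and listener on the same side of a wall lying along the x-axis:
print(reflection_path_length((1.0, 1.0), (4.0, 1.0), (0.0, 0.0), (10.0, 0.0)))  # ~3.61
```

A full implementation would also verify that the reflection point lies within the wall segment and that the path is unoccluded, which is the kind of visibility work the beam-tracing pre-processing stage handles.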
