  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Undersökande av kli-ljud inom ASMR i en neutral ljudmiljö / A study of scratching sounds within ASMR in a neutral sound environment

Köhler, Anders January 2018
This study was conducted around the question of whether pink noise can be used to replicate the overall sound dynamics found in scratching-oriented ASMR works, with respect to envelope properties and overall frequency register, and of how the acoustic character of such a sound artifact may be perceived by a number of respondents. Previous ASMR research was compared with auditory-oriented literature to support the relevance of this kind of study, after which further future research directions were considered. The approach was based on a noise-gate technique focused on side-chain compression, together with a vocoder, to replicate the sound dynamics extracted from a scratching-oriented ASMR work. Audio-engineering data for the design of the artifact were obtained through spectrogram analyses, and qualitative interviews formed the basis of the experiment. The results mainly showed that pink noise likely has a good ability to approximate scratching sounds in the technical context studied, and thereby to induce pleasure in a listener. Transitions between distinctly different sound segments of this type also proved to have very good potential to induce pleasure.
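The side-chain-style dynamics transfer described above can be caricatured in a few lines. The sketch below (numpy; white noise stands in for pink noise, and the frame size and test signals are invented for illustration, not taken from the thesis) imposes the frame-wise RMS envelope of a reference recording onto a noise signal.

```python
import numpy as np

def rms_envelope(x, frame=256):
    """Frame-wise RMS envelope of a signal (simple envelope follower)."""
    n = len(x) // frame
    return np.sqrt(np.mean(x[:n * frame].reshape(n, frame) ** 2, axis=1))

def impose_envelope(noise, reference, frame=256):
    """Scale each frame of `noise` so its RMS matches the corresponding
    frame of `reference` -- a crude stand-in for side-chain gating."""
    env_ref = rms_envelope(reference, frame)
    env_noise = rms_envelope(noise, frame)
    n = min(len(env_ref), len(env_noise))
    out = np.empty(n * frame)
    for i in range(n):
        gain = env_ref[i] / (env_noise[i] + 1e-12)
        out[i * frame:(i + 1) * frame] = noise[i * frame:(i + 1) * frame] * gain
    return out

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)  # white noise stand-in for pink noise
t = np.arange(4096)
# reference with a burst in the middle, mimicking a scratching gesture
reference = np.exp(-((t - 2048) / 400.0) ** 2) * np.sin(2 * np.pi * t / 64)
shaped = impose_envelope(noise, reference)
```

By construction, each output frame's RMS matches the reference's, so the noise inherits the reference's overall dynamics while keeping its own spectrum.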
212

Modeling and predicting affect in audio signals : perspectives from acoustics and chaotic dynamics / Modelisation de l'affect dans le son : perspectives de l'acoustique et de la dynamique chaotique

Mouawad, Pauline 28 June 2017
The present thesis describes a multidisciplinary research project on emotion recognition in sounds, covering psychological theories, acoustic-based signal analysis, machine learning and chaotic dynamics. In our social interactions and relationships, we rely greatly on the communication of information and on our perception of the messages transmitted. Communication happens when signals transmit information between a source and a destination. The signal can be verbal, and the information is then carried by sound patterns, such as words. In nonverbal vocal communication, however, the information can be perceptual patterns that convey affective cues, which we sense and appraise in the form of intentions, attitudes, moods and emotions. The prevalence of the affective component can be seen in human-computer interaction (HCI), where the development of automated applications that understand and express emotions has become crucial. Such systems need to be meaningful and friendly to the end user, so that our interaction with them becomes a positive experience. Although the automatic recognition of emotions in sounds has received increased attention in recent years, it is still a young field of research. Not only does it contribute to Affective Computing in general, but it also provides insight into the significance of sounds in our daily life. In this thesis the problem of affect recognition is addressed from a dual perspective: we start with a standard approach of acoustic-based signal analysis, where we survey and experiment with existing features to determine their role in emotion communication. Then we turn to chaotic dynamics and time-series symbolization, to understand the role of the inherent dynamics of sounds in affective expressiveness. We conduct our studies in the context of nonverbal sounds, namely voice, music and environmental sounds. From a human-listening point of view, an annotation task is conducted to build a ground truth of nonverbal singing voices, labelled with categorical descriptions of the two-dimensional model of affect. Two types of sounds are included in the study: vocal and glottal. From a psychological perspective, the present research addresses a long-standing debate among scientists and psychologists concerning the common origins of music and voice. The question is addressed from an acoustic-based analysis as well as a nonlinear-dynamics approach. From a modeling viewpoint, this work proposes a novel nonlinear-dynamics approach for the recognition of affect in sound, based on chaotic dynamics and adaptive time-series symbolization. Throughout this thesis, key contrasts in the expressiveness of affect are illustrated among the different types of sounds, through the analysis of acoustic properties, nonlinear-dynamics metrics and prediction performance. Finally, from a forward-looking perspective, we suggest that future work investigate features motivated by cognitive studies and examine to what extent our features reflect cognitive processes. Additionally, we recommend that our dynamic features be tested in large-scale emotion-recognition (ER) studies through participation in ER challenges, with an aim to verify whether they gain consensus.
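As a toy illustration of time-series symbolization, the snippet below (plain Python; equal-width binning is a deliberate simplification of the adaptive symbolization the thesis proposes) maps a numeric series to discrete symbols and computes the Shannon entropy of the symbol distribution, one of the simplest dynamics features derivable from such a symbolization.

```python
import math
from collections import Counter

def symbolize(series, n_symbols=4):
    """Map a numeric series to discrete symbols by equal-width binning
    (a much simpler stand-in for adaptive symbolization)."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_symbols or 1.0  # avoid zero width for flat series
    return [min(int((x - lo) / width), n_symbols - 1) for x in series]

def shannon_entropy(symbols):
    """Entropy (bits) of the symbol distribution."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum(c / total * math.log2(c / total) for c in counts.values())
```

A flat series yields entropy 0, while a series visiting all four bins equally often yields the maximum of 2 bits, so the measure separates static from varied dynamics.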
213

The Pai language of Eastern Mpumalanga and its relationship to Swati

Taljaard, Petrus Cornelius 01 1900
This thesis is a comparative study of Pai and Swati. The Pai language is spoken in the eastern parts of the Mpumalanga Province of the Republic of South Africa. The study concentrates on the correspondences and differences between the speech sounds of these two languages, and reference is also made to the morphology. The previous comprehensive work on Pai was that of Ziervogel (1956), who classified Pai as one of the three dialects of Eastern Sotho and considered the Swati elements present in Pai to be merely borrowings. The present investigation into the history of the Pai people indicates that Pai may have had links with languages other than those belonging to the Sotho group and, from the evidence, an Nguni connection has become a distinct possibility. The speech sounds of Pai are described in detail in chapter two, together with the corresponding speech sounds of Swati. The vowels of both languages receive special attention because Pai apparently has a seven-vowel system and Swati a five-vowel system. The correspondence of consonants in these two languages soon points towards a relationship that is based on more than just borrowed items. In chapter three the Ur-Bantu sounds of Meinhof and their reflexes in Swati and Pai are described and compared. The wide variety of attestations in Pai and the instability of some phonemes are indicative of a language that has been subjected to many outside influences and is at the moment in a state of flux. In chapter four some aspects of the morphology are described in order to highlight the peculiar characteristics of Pai as an individual language. The relationship with Swati is again emphasized by the findings in this chapter. A statistical analysis of the speech sounds of Pai and Swati in chapter five indicates that an Nguni core of sounds exists that is shared by both languages. A re-classification of Pai within the language context of that area may therefore be necessary. / African Languages / D. Litt. et Phil. (African Languages)
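The idea of a shared sound core can be illustrated with simple set arithmetic; the phoneme inventories below are invented for illustration and are not the thesis data.

```python
# Hypothetical phoneme inventories (illustrative only, not from the thesis)
pai = {"p", "b", "t", "d", "k", "g", "tl", "hl", "dz"}
swati = {"p", "b", "t", "d", "k", "g", "hl", "dl", "ts"}

def shared_core(a, b):
    """Return the shared sounds of two inventories and their Jaccard
    similarity (shared sounds / all sounds in either inventory)."""
    core = a & b
    return core, len(core) / len(a | b)

core, jaccard = shared_core(pai, swati)
```

A statistical comparison of the kind chapter five performs would weight sounds by frequency and correspondence regularity; this sketch only shows the raw inventory overlap.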
214

A percepção e a produção dos fonemas /æ, ɛ, ɑ, ɔ, ə/ de estudantes brasileiros de inglês como língua estrangeira / The perception and the production of the phonemes /æ, ɛ, ɑ, ɔ, ə/ of Brazilian students of English as a foreign language

Bertho, Mariana Centanin [UNESP] 23 April 2018
This research aims to describe the acoustic characteristics of the phonemes /æ, ɛ, ɑ, ɔ, ə/ in the English production of Brazilian students of English as a foreign language (EFL). These phonemes sometimes lose their contrast in the production of Brazilian students and are produced within the perceptual space of the Brazilian Portuguese phonemes /a/, /ɛ/ and /ɔ/. The selected participants are students of two language schools who took a course on the sounds of English, designed as part of the experiment developed for data collection. The experiment consists of recording the oral reading of a corpus containing words with the selected phonemes, before and after participation in the course, along with responses to two questionnaires applied at the beginning and at the end of the course. Subsequently, the recordings are analyzed and compared to each other and to a recording of the same corpus by an American informant, using the PRAAT software, version 5.3 (BOERSMA & WEENINK, 2011). As theoretical support for this analysis, we understand that the students' oral production reflects their Interlanguage (SELINKER, 1972), in which the strategies used by students in producing foreign-language sounds can be found. Therefore, theories that specifically address the acquisition/learning of the phonic aspect of a foreign language are fundamental to our analysis, starting with Trubetzkoy's phonological sieve (1939) and Polivanov's phonological deafness (1931). Complementing these concepts, the analysis is guided by the processes explained by models such as the Speech Learning Model (FLEGE, 1981), the Perceptual Magnet Effect (KUHL & IVERSON, 1995), and the Perceptual Assimilation Model (BEST, 1994).
Results show the occurrence of certain phenomena in the Interlanguage of Brazilian students: acoustic proximity of the pair of phonemes /æ/ and /ɛ/; acoustic proximity of the pair of phonemes /ɑ/ and /ɔ/; the production of /ɑ/ close to /a/ motivated by the grapheme <a>; the production of /ɑ/ close to /ɔ/ motivated by the grapheme <o>; the production of /ə/ as /a/ motivated by the grapheme <a>. / CAPES 1601804
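The notion of producing one phoneme "within the perceptual space" of another can be sketched as nearest-target classification in F1/F2 formant space. The target values below are assumed for illustration and are not taken from the dissertation's measurements.

```python
import math

# Illustrative F1/F2 targets in Hz (assumed values, not the thesis data)
TARGETS = {"æ": (660, 1720), "ɛ": (550, 1770)}

def nearest_vowel(f1, f2, targets=TARGETS):
    """Assign a produced vowel to the closest target in F1/F2 space --
    a toy version of the acoustic-proximity comparison described above."""
    return min(targets, key=lambda v: math.dist((f1, f2), targets[v]))
```

A production whose formants sit near the /ɛ/ target would be classified as /ɛ/ even when /æ/ was intended, mirroring the loss of contrast the study documents.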
215

Sistema Modular para Detecção e Reconhecimento de Disparos de Armas de Fogo / A modular system for the detection and recognition of gunshots

Reis, Clovis Ferreira dos 04 December 2015
Urban violence has been increasing in almost every Brazilian state. To face this threat, police authorities require new technological tools to support their decisions on how and when the few available resources should be deployed against crime. In this context, this work presents an embedded computational tool for detecting gunshots automatically. To provide the background needed to understand the work, a brief description of impulsive sounds, firearms and gunshot characteristics is presented first. A modular system is then proposed to detect and recognize impulsive sounds that are characteristic of gunshots. Since the system contains several modules, this work focuses on two of them: the module for detecting impulsive sounds and the module for distinguishing a gunshot from any other impulsive sound. For the impulse detection module, three well-known algorithms were analyzed under the same conditions: the fourth derivative of the Root Mean Square (RMS), the Conditional Median Filter (CMF) and the Variance Method (VM). The algorithms were tested on four performance measures: accuracy, precision, sensitivity and specificity. To determine the most efficient algorithm for detecting impulsive sounds, a cadence test with impulsive sounds, with and without additional noise (constant or increasing), was performed. After this analysis, the parameters employed in the CMF and VM methods were tested over a wide range of configurations to check for possible optimization. Once the best method was determined, the classification module for recognizing gunshots was implemented. For this, two methods were compared: one based on the signal envelope over time and the other based on the most relevant frequencies obtained from the Fourier transform. The comparison showed that the envelope method achieved 54% accuracy in classifying impulsive sounds, while the frequency-analysis method achieved 72%.
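A minimal sketch of a variance-based impulse detector in the spirit of the VM algorithm mentioned above (numpy; the frame size, threshold ratio and test signal are invented for illustration, not the thesis settings):

```python
import numpy as np

def detect_impulses(x, frame=128, ratio=8.0):
    """Flag frames whose short-time variance exceeds `ratio` times the
    median frame variance -- a simplified variance-method detector."""
    n = len(x) // frame
    var = x[:n * frame].reshape(n, frame).var(axis=1)
    threshold = ratio * (np.median(var) + 1e-12)
    return np.where(var > threshold)[0]

rng = np.random.default_rng(1)
signal = 0.01 * rng.standard_normal(128 * 20)              # low-level background
signal[128 * 10:128 * 10 + 40] += rng.standard_normal(40)  # impulsive burst
hits = detect_impulses(signal)
```

The median baseline makes the threshold robust to a few loud frames, which is the usual reason to prefer it over the mean in this kind of detector.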
216

Algoritmo para estimar gravidade de DPOC através de sinais acústicos. / Algorithm to estimate the severity of COPD by acoustic signals.

Rosemeire Cardozo Vidal 11 April 2017
The present study aims to determine whether the severity of COPD can be estimated from the area of the graph of the sound intensities of the respiratory sounds of COPD patients. The study included 51 patients with mild, moderate, severe or very severe COPD and 7 healthy non-smokers. The respiratory sounds of each participant were collected with a stethoscope fitted with a mini microphone. The method compares the areas under the sound-intensity curves, as a function of frequency, between COPD patients and healthy individuals. To this end, a method was proposed and tested based on the combination of filtering techniques and the TFTC, followed by statistical analysis: calculation of the mean, standard deviation and interpolation. The results suggest that the area of the graph of sound-intensity variance as a function of frequency decreases as the severity of COPD increases, except for cases in which chronic bronchitis is predominant.
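The graph-area idea can be approximated as the area under the curve of across-frame spectral variance versus frequency. The sketch below (numpy; frame size, sampling rate and test signals are invented for illustration) is a rough analogue, not the thesis's TFTC-based method.

```python
import numpy as np

def spectral_variance_area(x, fs, frame=256):
    """Area under the across-frame spectral-variance curve, integrated
    over frequency (trapezoidal rule)."""
    n = len(x) // frame
    frames = x[:n * frame].reshape(n, frame) * np.hanning(frame)
    mags = np.abs(np.fft.rfft(frames, axis=1))
    var_per_bin = mags.var(axis=0)
    freqs = np.fft.rfftfreq(frame, 1.0 / fs)
    return float(np.sum((var_per_bin[1:] + var_per_bin[:-1]) / 2 * np.diff(freqs)))

fs = 8000
t = np.arange(fs) / fs
steady = np.sin(2 * np.pi * 1000 * t)               # stable spectrum -> tiny area
varying = (0.2 + t) * np.sin(2 * np.pi * 1000 * t)  # drifting amplitude -> large area
```

A signal whose spectrum is stable across frames yields a near-zero area, while one whose intensity varies over time yields a large one, which is the contrast the measure is meant to capture.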
217

Vocalização de suínos em grupo sob diferentes condições térmicas / Pig vocalization in group under different thermal conditions

Giselle Borges de Moura 15 February 2013
Quantifying and qualifying the well-being of farm animals is still a challenge. Any well-being assessment must consider, above all, the absence of strong negative feelings, such as suffering, and the presence of positive feelings, such as pleasure. The main objective of this research was to quantify the vocalization of pigs in groups under different thermal conditions. The specific objectives were to assess the existence of vocal communication patterns among group-housed animals and to extract the acoustic characteristics of the sound spectra of the vocalizations, relating them to the different microclimate conditions of the facility. The trial was carried out in a controlled-environment experimental unit for pigs at the University of Illinois (USA). Four groups of six piglets were used for data collection. Dataloggers were installed to record the environmental variables (T, °C and RH, %), from which two thermal comfort indices were calculated (THI and air enthalpy). Cardioid microphones were installed at the geometric center of each pen housing the piglets to record the vocalizations. The microphones were connected to a signal amplifier, which in turn was connected to an audio/video capture card installed in a computer. The Goldwave® software was used to separate the audio files containing the piglets' vocalizations and to apply noise-removal filters. The audio was then analyzed with the Sound Analysis Pro 2011 software, from which the acoustic characteristics were extracted. Amplitude (dB), fundamental frequency (Hz), mean frequency (Hz), peak frequency (Hz) and entropy were used to characterize the sound spectrum of the group's vocalizations under the different thermal conditions. The experiment followed a randomized block design with two treatments and three repetitions per week, executed over two weeks. The data were sampled to analyze the behavior of the vocalization database in relation to the applied treatments and were submitted to analysis of variance using proc GLM in SAS. Among the acoustic parameters analyzed, amplitude (dB), fundamental frequency and entropy showed significant differences between the treatments (comfort and heat-stress conditions) by Tukey's test (p < 0.05). The analysis of variance also showed differences in waveform between thermal conditions in the different periods of the day. It is thus possible to quantify the vocalization of groups of pigs under different thermal conditions by extracting the acoustic characteristics of the sound samples. The sound spectrum was extracted, indicating possible variations in piglet behavior under the different thermal conditions within the periods of the day. However, the pattern-recognition stage still requires a larger and more consistent database for recognizing the spectrum under each thermal condition, whether by image analysis or by extraction of acoustic characteristics. Among the acoustic characteristics analyzed, the amplitude (dB), fundamental frequency (Hz) and entropy of group vocalizations were significant in expressing the condition of the animals under different thermal conditions.
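Two of the named features, amplitude (dB) and entropy, can be computed per analysis frame as follows (numpy; a sketch with invented parameters, not the Sound Analysis Pro implementation):

```python
import numpy as np

def frame_features(x, frame=512):
    """Per-frame amplitude (dB re full scale) and spectral entropy (bits)."""
    n = len(x) // frame
    frames = x[:n * frame].reshape(n, frame)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    amp_db = 20 * np.log10(rms + 1e-12)
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    p = power / (power.sum(axis=1, keepdims=True) + 1e-12)  # normalized spectrum
    entropy = -np.sum(p * np.log2(p + 1e-12), axis=1)
    return amp_db, entropy

rng = np.random.default_rng(2)
noise = rng.standard_normal(512 * 8)                       # broadband: high entropy
tone = np.sin(2 * np.pi * 40 * np.arange(512 * 8) / 512)   # tonal: low entropy
_, ent_noise = frame_features(noise)
_, ent_tone = frame_features(tone)
```

Spectral entropy separates tonal calls (energy in few bins) from broadband grunts and background noise (energy spread over many bins), which is why it is useful for contrasting vocalization types.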
218

Caracterização das emissões sonoras do boto-cinza Sotalia guianensis (Van Benédén, 1864) (Cetacea: Delphinidae) e a investigação do ambiente acústico na Baía de Benevente, ES / Characterization of the sound emissions of the estuarine dolphin Sotalia guianensis (Van Benédén, 1864) (Cetacea: Delphinidae) and investigation of the acoustic environment in Benevente Bay, ES

Reis, Sarah Stutz 01 February 2013
Funded by CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Delphinids exhibit great plasticity in their acoustic signals and can adapt their sound emissions to different circumstances. Ocean noise pollution is currently a threat to cetaceans, and this issue has been little studied in the estuarine dolphin (Sotalia guianensis); its acoustic signals therefore represent an important biological aspect to be understood. In this context, this study aimed to characterize the sound repertoire and investigate the acoustic environment of the estuarine dolphins that use Benevente Bay, ES, Brazil. Recordings were made with a Cetacean Research C54XRS hydrophone coupled to a Fostex FR-2 LE digital recorder at 96 kHz/24 bits. Data collected between December 2011 and July 2012 totaled 27 hours and 55 minutes of recording effort. A total of 69 whistles, 42 burst-pulse sounds and 33 click trains were analyzed. Among the whistles, the most common contour type was ascending (N = 37; 53%), followed by ascending-descending (N = 15; 22%) and multiple (N = 13; 19%). The fundamental frequency of the whistles ranged from 3.51 kHz to 37.56 kHz. The parameters analyzed were duration, number of inflection points, start, end, minimum and maximum frequencies, frequency range, and the frequencies at 1/4, 1/2 and 3/4 of the duration. Mean whistle duration was 0.298 s (SD = 0.147). Among the burst-pulse sounds, the "bray call" (N = 36) was more common and longer than the "buzz sound" (N = 6); both types occurred immediately after or close to echolocation clicks. The 33 click trains analyzed contained on average 36.45 clicks (SD = 43.47) and lasted 11.404 s (SD = 21.226) on average. Inter-click intervals (ICIs) averaged 0.308 s (SD = 0.301) and fell into two distinct temporal patterns: 81% (N = 946) of the intervals lasted between 0.001 and 0.400 s, while the remaining 19% (N = 224) lasted between 0.401 and 1.246 s. Most of the mean whistle frequency parameters were higher than those reported for S. guianensis populations south of the study area and lower than those of populations to the north, which may reflect the different frequency limits used in those studies and/or the hypothesis that whistle frequency increases from south to north. The burst-pulse sounds observed have previously been reported for S. guianensis and other odontocetes; besides a social function, they may also be related to prey capture. The division of ICI values into distinct temporal patterns has likewise been observed for S. guianensis and other dolphin species, and may reflect the different functions of echolocation clicks. The estuarine dolphin occurred in areas with low-frequency anthropogenic noise; in its presence the dolphins altered their acoustic behavior, possibly in an attempt to compensate for masking and maintain effective communication in an acoustically polluted environment.
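The split of ICI values into two temporal patterns described above amounts to thresholding the interval series at 0.400 s. A minimal sketch of that classification step (the threshold comes from the abstract; the function name and the sample ICI values are illustrative, not from the thesis):

```python
# Partition inter-click intervals (ICIs, in seconds) into the two
# temporal patterns reported in the abstract: short ICIs
# (0.001-0.400 s) and long ICIs (0.401 s and above).

def split_ici_patterns(icis, threshold=0.400):
    """Return (short, long) ICI lists split at `threshold` seconds."""
    short = [i for i in icis if i <= threshold]
    long_ = [i for i in icis if i > threshold]
    return short, long_

def mean(values):
    """Arithmetic mean; 0.0 for an empty list."""
    return sum(values) / len(values) if values else 0.0

# Invented example ICI series for illustration only.
icis = [0.05, 0.12, 0.30, 0.38, 0.45, 0.90, 1.10, 0.02, 0.25]
short, long_ = split_ici_patterns(icis)
print(f"short: {len(short)} intervals, mean {mean(short):.3f} s")
print(f"long:  {len(long_)} intervals, mean {mean(long_):.3f} s")
```

On a real click-train dataset the two resulting groups could then be summarized separately, as the study does when it reports the 81%/19% split.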
219

Osynliga Processer : En audiell utforskning av det osynliga / Invisible Processes: An auditory exploration of the invisible

Forsse, Viktor, Anderberg, Tobias January 2017 (has links)
With this bachelor thesis our aim is to establish a connection between our listening and the invisible processes that surround us inside the digital technology we use on a daily basis. Taking Salomé Voegelin's unconventional view of how we listen to our surroundings, together with the theory of AlgoRHYTHMS, as a starting point, we have undertaken an in-depth exploration of this relation. By applying Critical Making in an experimental creative process with the ambition to find a materiality in the subliminal, we instead found an interaction between two invisible mediums which together form a digital materiality.
220

Förbättrat hörande, förbättrat övande, förbättrat stridande : En analys av hur artificiellt ljud kan användas för att öka realismen vid stridsträning. / Improved hearing, improved training, improved warfighting

Eriksson, Lars January 2011 (has links)
Saab Training Systems, a company in the Saab group, develops, manufactures and markets products for military training. One of the central systems in its product portfolio is the instrumented combat training concept GAMER, in which soldiers can conduct two-sided force-on-force exercises while the exercise control staff monitors and interacts with the exercise through a central control system. With the transition in many of the world's armed forces from a heavy territorial defence to a well-trained expeditionary force, the demands on the individual soldier's abilities have risen considerably. At the same time the general threat picture has shifted, from another military power to paramilitary groups that often wage guerrilla-style warfare. Together, these two factors mean that the conditions for soldier training have also changed. As part of the adaptation to these changes, this thesis investigates the possibilities of extending the soldiers' auditory experience in GAMER, in order to create better conditions for complete experience-based learning: training that not only exercises the soldier's ability to use the tactical information conveyed by the soundscape, but also prepares soldiers as well as possible for the unfamiliar environments that today's military missions often involve. First, a sound-centered use case analysis of a set of basic urban combat training scenarios is conducted to establish which sounds it is desirable to reproduce in a training system. The result is then compared with the audio functionality already present in GAMER, in order to determine which functions need to be improved or added. To create optimal conditions for such further development, an actor-network analysis is also carried out on the socio-technical network that GAMER is part of. In this way, not only the technical aspects are accounted for, but also factors such as development and production costs, internal know-how, customer requirements and the competitive situation, all of which are certain to affect the development process and thereby the success of the outcome. Finally, a proposal for the continued development is presented, compiled from the explicit and implicit requirements identified in the analysis process. The recommended solution is a dynamic audio control system that uses input from already existing functions in GAMER to project artificial sound sources at arbitrary positions within the training area. The projections are produced from audio channels at fixed positions on the training area, using well-established theory for spatial sound reproduction. The proposal also includes a custom extension of the stereo panning technique that enables sound reproduction for multiple mobile exercise participants.
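The stereo panning that the proposed extension builds on is conventionally implemented as a constant-power pan law between a pair of channels. A minimal sketch of that standard technique (the function name and speaker setup are illustrative; this is the textbook pan law, not the thesis's custom multi-listener extension):

```python
import math

def constant_power_pan(position):
    """Constant-power stereo pan law.

    position: -1.0 (fully left) .. 1.0 (fully right).
    Returns (left_gain, right_gain). The gains satisfy
    left**2 + right**2 == 1, so the perceived loudness stays
    constant as a source moves between the two channels.
    """
    angle = (position + 1.0) * math.pi / 4.0  # map [-1, 1] to [0, pi/2]
    return math.cos(angle), math.sin(angle)

# A source exactly between two fixed channels gets equal gain
# (1/sqrt(2), about -3 dB) in each.
left, right = constant_power_pan(0.0)
print(f"center: L={left:.3f} R={right:.3f}")
```

A dynamic system like the one proposed would recompute such gains continuously as each tracked participant moves relative to the fixed audio channels on the training area.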
