About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Optimizing text-independent speaker recognition using an LSTM neural network

Larsson, Joel January 2014 (has links)
In this paper a novel speaker recognition system is introduced. With advances in computer science, automated speaker recognition has become increasingly popular as an aid in crime investigations and authorization processes. Here, a recurrent neural network approach is used to learn to identify ten speakers within a set of 21 audio books. Audio signals are processed via spectral analysis into Mel Frequency Cepstral Coefficients (MFCCs), which serve as speaker-specific features and form the input to the neural network. The Long Short-Term Memory algorithm is examined for the first time in this area, with interesting results. Experiments are conducted to find the optimal network model for the problem. These show that the network learns to identify the speakers well, text-independently, when the recording conditions are the same. However, the system has difficulty recognizing speakers across different recordings, which is probably due to the noise sensitivity of the speech processing algorithm in use.
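The MFCC front end described above (spectral analysis, mel filtering, then a cosine transform) can be sketched in NumPy. This is a minimal illustration, not the thesis's actual implementation; the frame size, hop, and filter counts below are common defaults chosen for the sketch.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr, n_fft=512, hop=160, n_mels=26, n_ceps=13):
    # 1) short-time spectral analysis: windowed power spectrum per frame
    frames = [signal[i:i + n_fft] * np.hamming(n_fft)
              for i in range(0, len(signal) - n_fft, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # 2) triangular mel filterbank, filters evenly spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l:
            fb[i, l:c] = np.linspace(0.0, 1.0, c - l, endpoint=False)
        if r > c:
            fb[i, c:r] = np.linspace(1.0, 0.0, r - c)
    logmel = np.log(power @ fb.T + 1e-10)
    # 3) DCT decorrelates log filterbank energies into cepstral coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return logmel @ dct.T  # one row of n_ceps coefficients per frame
```

Each row of the returned matrix is one feature vector of the kind fed to the LSTM.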
22

Modelování dynamiky prosodie pro rozpoznávání řečníka / Modelling Prosodic Dynamics for Speaker Recognition

Jančík, Zdeněk January 2008 (has links)
Most current automatic speaker recognition systems extract speaker-dependent features from short-term spectral information, an approach that ignores long-term information. I explored an approach that uses the fundamental frequency and energy trajectories of each speaker, modelling prosody dynamics on individual phonemes or syllables. It is known from the literature that prosodic systems do not work as well as acoustic ones, but that they improve the overall system when fused with it. I verified this assumption by fusing my results with a state-of-the-art acoustic system from BUT. Data from the standard evaluation campaigns organized by the National Institute of Standards and Technology (NIST) are used for all experiments.
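The score-level fusion mentioned above is typically a weighted linear combination of the subsystems' scores. A minimal sketch follows; the weights here are illustrative placeholders, whereas in practice they are trained (e.g. by logistic regression) on held-out development data.

```python
import numpy as np

def fuse_scores(acoustic, prosodic, w_a=0.8, w_p=0.2, bias=0.0):
    """Linear score-level fusion of an acoustic and a prosodic
    speaker recognition subsystem. A trial's fused score is a
    weighted sum of the per-subsystem scores plus a bias; the
    weights reflect the acoustic system's higher accuracy."""
    return w_a * np.asarray(acoustic) + w_p * np.asarray(prosodic) + bias
```

Because the prosodic scores carry complementary long-term information, even a small weight on them can lower the fused error rate relative to the acoustic system alone.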
23

Analysis of speaking time and content of the various debates of the presidential campaign : Automated AI analysis of speech time and content of presidential debates based on the audio using speaker detection and topic detection / Analys av talartid och innehåll i de olika debatterna under presidentvalskampanjen. : Automatiserad AI-analys av taltid och innehåll i presidentdebatter baserat på ljudet med hjälp av talardetektering och ämnesdetektering.

Valentin Maza, Axel January 2023 (has links)
The field of artificial intelligence (AI) has grown rapidly in recent years and its applications are becoming more widespread in various fields, including politics. In particular, presidential debates have become a crucial aspect of election campaigns, and it is important to analyze the information exchanged in these debates objectively, letting voters choose without being influenced by biased data. The objective of this project was to create an automatic analysis tool for presidential debates using AI. The main challenge of the final system was to determine the speaking time of each candidate, to analyze what each candidate said, to detect the topics discussed, and to calculate the time spent on each topic. This thesis focuses mainly on the speaker detection part of this system. In addition, the high overlap rate in the debates, where candidates cut each other off, posed a significant challenge for speaker diarization, which aims to determine who speaks when. This problem was considered appropriate for a Master's thesis project, as it involves a combination of advanced techniques in AI and speech processing, making it an important and difficult task. The application to political debates, and the overlapping speech that comes with them, makes this task both challenging and innovative. There are several ways to approach speaker detection. We implemented classical approaches that involve segmentation techniques, speaker representation using embeddings such as i-vectors or x-vectors, and clustering. However, due to speech overlaps, an end-to-end solution was implemented using pyannote-audio (an open-source toolkit written in Python for speaker diarization), and the diarization error rate was significantly reduced after fine-tuning the model on our own labeled data. The results of this project showed that it is possible to create an automated presidential debate analysis tool using AI. 
Specifically, this thesis establishes the state of the art in speaker detection, taking into account the particularities of political debates such as the high speaker overlap rate. / AI-området (artificiell intelligens) har vuxit snabbt de senaste åren och dess tillämpningar blir alltmer utbredda inom olika områden, inklusive politik. Särskilt presidentdebatter har blivit en viktig aspekt av valkampanjerna och det är viktigt att analysera den information som utbyts i dessa debatter på ett objektivt sätt så att väljarna kan välja utan att påverkas av partiska uppgifter. Målet med detta projekt var att skapa ett automatiskt analysverktyg för presidentdebatter med hjälp av AI. Den största utmaningen för det slutliga systemet var att bestämma taltid för varje kandidat och att analysera vad varje kandidat sa, att upptäcka diskuterade ämnen och att beräkna den tid som spenderades på varje ämne. Denna avhandling fokuserar huvudsakligen på detektering av talare i detta system. Dessutom innebar den höga överlappningsgraden i debatterna, där kandidaterna avbröt varandra, en stor utmaning för talardiarisering, som syftar till att fastställa vem som talar när. Detta problem ansågs lämpligt för ett examensarbete, eftersom det omfattar en kombination av avancerade tekniker inom AI och talbehandling, vilket gör det till en viktig och svår uppgift. Tillämpningen på politiska debatter och det åtföljande överlappande talet gör denna uppgift både utmanande och innovativ. Det finns flera sätt att lösa problemet med att upptäcka talare. Vi har genomfört klassiska metoder som innefattar segmenteringstekniker, representation av talare med hjälp av inbäddningar som i-vektorer eller x-vektorer och klustring. På grund av talöverlappningar implementerades dock End-to-end-lösningen med pyannote-audio (en verktygslåda med öppen källkod skriven i Python för diarisering av talare) och diariseringsfelprocenten reducerades avsevärt efter att modellen förfinats med hjälp av våra egna märkta data. 
Resultaten av detta projekt visade att det var möjligt att skapa ett automatiserat verktyg för analys av presidentdebatten med hjälp av AI. Specifikt har denna avhandling etablerat en state of the art av talardetektion med hänsyn till politikens särdrag såsom den höga överlappningsfrekvensen av talare.
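The diarization error rate mentioned above is the fraction of speech time attributed to the wrong speaker, missed, or falsely detected. Production scorers (such as the one in pyannote.metrics) work on timed segments with forgiveness collars and an optimal label mapping; the sketch below is a simplified frame-based version with the same structure, using hypothetical labels.

```python
from itertools import permutations

def frame_der(reference, hypothesis):
    """Frame-level diarization error rate. Inputs are equal-length
    label sequences (None = silence). Hypothesis labels are arbitrary
    cluster IDs, so we search over label mappings and keep the best,
    as standard DER scoring does (here by brute force, fine for a
    handful of speakers)."""
    ref_ids = sorted({r for r in reference if r is not None})
    hyp_ids = sorted({h for h in hypothesis if h is not None})
    best_errors = len(reference)
    # pad the shorter ID list so every permutation is a full mapping
    padded = hyp_ids + [None] * max(0, len(ref_ids) - len(hyp_ids))
    for perm in permutations(padded, len(ref_ids)):
        mapping = dict(zip(perm, ref_ids))
        errors = sum(
            1 for r, h in zip(reference, hypothesis)
            if (r is None) != (h is None)                # miss / false alarm
            or (r is not None and h is not None and mapping.get(h) != r)
        )
        best_errors = min(best_errors, errors)
    speech = sum(r is not None for r in reference)
    return best_errors / max(speech, 1)
```

Overlapping speech, the core difficulty in the debates, is exactly what this single-label-per-frame view cannot represent, which is why an end-to-end model handling overlap directly was needed.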
24

Use of Coherent Point Drift in computer vision applications

Saravi, Sara January 2013 (has links)
This thesis presents the novel use of Coherent Point Drift (CPD) in improving the robustness of a number of computer vision applications. The CPD approach includes two methods for registering two images, rigid and non-rigid point-set approaches, distinguished by the transformation model used. The key characteristic of a rigid transformation is that the distance between points is preserved, which means it can be used in the presence of translation, rotation, and scaling. Non-rigid transformations, such as affine transforms, provide the opportunity to register under non-uniform scaling and skew. The idea is to move one point set coherently to align with the second point set. The CPD method finds both the non-rigid transformation and the correspondence between the two point sets at the same time, without requiring an a priori declaration of the transformation model. The first part of this thesis is focused on speaker identification in video conferencing. A real-time, audio-coupled, video-based approach is presented, which focuses more on the video analysis side than on the audio analysis that is known to be prone to errors. CPD is effectively utilised for lip movement detection, and a temporal face detection approach is used to minimise false positives when the face detection algorithm fails. The second part of the thesis is focused on multi-exposure and multi-focus image fusion with compensation for camera shake. Scale Invariant Feature Transform (SIFT) keypoints are first detected in the images being fused. This point set is then reduced to remove outliers, using RANSAC (RANdom SAmple Consensus), and finally the point sets are registered using CPD with non-rigid transformations. The registered images are then fused with a Contourlet-based image fusion algorithm that makes use of a novel alpha blending and filtering technique to minimise artefacts. 
The thesis evaluates the performance of the algorithm in comparison to a number of state-of-the-art approaches, including the key commercial products currently on the market, showing significantly improved subjective quality in the fused images. The final part of the thesis presents a novel approach to Vehicle Make & Model Recognition (VMMR) in CCTV video footage. CPD is used to effectively remove the skew of detected vehicles, as CCTV cameras are not specifically configured for the VMMR task and may capture vehicles at different approach angles. A LESH (Local Energy Shape Histogram) feature based approach is used for vehicle make and model recognition, with the novelty that temporal processing is used to improve reliability. A number of further algorithms are used to maximise the reliability of the final outcome. Experimental results are provided showing that the proposed system achieves an accuracy in excess of 95% when tested on real CCTV footage with no prior camera calibration.
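CPD itself is probabilistic (an EM algorithm over a Gaussian mixture whose centroids are the moving point set), which is more than a short sketch can carry. The rigid-transform estimation at its core, once correspondences are fixed, reduces to the classic least-squares (Kabsch/Procrustes) problem sketched below; this is an illustration of rigid point-set registration, not CPD's full algorithm.

```python
import numpy as np

def rigid_align(X, Y):
    """Least-squares rigid registration: find rotation R and
    translation t minimizing ||Y - (X R^T + t)|| for corresponding
    point rows of X and Y (the step CPD solves in closed form inside
    each EM iteration once soft correspondences are known)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    U, _, Vt = np.linalg.svd(Xc.T @ Yc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0] * (X.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = Y.mean(0) - X.mean(0) @ R.T
    return R, t
```

What CPD adds on top of this is estimating the correspondences themselves, jointly with the transform, which is why it needs no a priori matching of points.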
25

Intersession Variability Compensation in Language and Speaker Identification / Intersession Variability Compensation in Language and Speaker Identification

Hubeika, Valiantsina January 2008 (has links)
Channel and session variability is a very important problem in the speaker recognition task. Several techniques for channel compensation have been presented in many recent papers. Channel compensation can be implemented in the model domain as well as in the feature and score domains. A relatively new and powerful technique is so-called eigenchannel adaptation for GMMs (Gaussian Mixture Models). A disadvantage of this method is that it cannot be applied to other classifiers, such as SVMs (Support Vector Machines), GMMs with a different number of Gaussian components, or speech recognition using hidden Markov models (HMMs). A solution is an approximation of this method: eigenchannel adaptation in the feature domain. Both techniques, eigenchannel adaptation in the model domain and in the feature domain, are presented for speaker recognition systems in this work. After good results were achieved in speaker recognition, the benefit of these techniques was examined for an acoustic language recognition system covering 14 languages. In this task, not only channel variability but also speaker variability has an undesirable effect. Results are presented on the data defined for the 2006 speaker recognition evaluation and the 2007 language recognition evaluation, both organized by the National Institute of Standards and Technology (NIST).
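Eigenchannel compensation amounts to removing the component of a supervector that lies in a low-rank "channel" subspace spanned by the columns of a matrix U. A minimal sketch, assuming U has already been trained (e.g. from within-speaker supervector differences); a real system estimates the channel factors as a MAP point estimate weighted by frame occupation counts, which plain least squares stands in for here.

```python
import numpy as np

def compensate(supervector, U):
    """Eigenchannel-style compensation: estimate channel factors x
    for the low-rank channel basis U by least squares, then subtract
    the estimated channel component U @ x from the supervector."""
    x, *_ = np.linalg.lstsq(U, supervector, rcond=None)
    return supervector - U @ x
```

The feature-domain approximation mentioned in the abstract distributes this same subtraction back onto individual feature frames, which is what makes it usable with SVMs and HMM-based recognizers.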
26

L’effet de la familiarité sur l’identification des locuteurs : pour un perfectionnement de la parade vocale

Plante-Hébert, Julien 08 1900 (has links)
La présente étude porte sur les effets de la familiarité dans l’identification d’individus en situation de parade vocale. La parade vocale est une technique inspirée d’une procédure paralégale d’identification visuelle d’individus. Elle consiste en la présentation de plusieurs voix avec des aspects acoustiques similaires définis selon des critères reconnus dans la littérature. L’objectif principal de la présente étude était de déterminer si la familiarité d’une voix dans une parade vocale peut donner un haut taux d’identification correcte (> 99 %) de locuteurs. Cette étude est la première à quantifier le critère de familiarité entre l’identificateur et une personne associée à « une voix-cible » selon quatre paramètres liés aux contacts (communications) entre les individus, soit la récence du contact (à quand remonte la dernière rencontre avec l’individu), la durée et la fréquence moyenne du contact et la période pendant laquelle avaient lieu les contacts. Trois différentes parades vocales ont été élaborées, chacune contenant 10 voix d’hommes incluant une voix-cible pouvant être très familière; ce degré de familiarité a été établi selon un questionnaire. Les participants (identificateurs, n = 44) ont été sélectionnés selon leur niveau de familiarité avec la voix-cible. Toutes les voix étaient celles de locuteurs natifs du franco-québécois et toutes avaient des fréquences fondamentales moyennes similaires à la voix-cible (à un semi-ton près). Aussi, chaque parade vocale contenait des énoncés variant en longueur selon un nombre donné de syllabes (1, 4, 10, 18 syll.). Les résultats démontrent qu’en contrôlant le degré de familiarité et avec un énoncé de 4 syllabes ou plus, on obtient un taux d’identification avec une probabilité exacte d’erreur de p < 1 × 10^-12. Ces taux d’identification dépassent ceux obtenus actuellement avec des systèmes automatisés. / The present study deals with the effects of familiarity on speaker identification in the context of voice line-ups. 
The voice line-up is a paralegal technique inspired by the visual identification procedure. It consists in presenting a number of voices sharing similar acoustic parameters, as specified in established procedures. The main objective was to determine whether the familiarity of a voice could lead to a high rate of correct identification (> 99 %). Our study is the first to quantify the familiarity criterion linking an identifier and a "target voice". The quantification was based on four parameters bearing on the degree of contact between individuals: recency, frequency, duration, and the period during which the contact occurred. Three different voice line-ups were elaborated, each containing 10 voices, including one target voice that was well known to the identifier according to a questionnaire that served to quantify familiarity. Participants (identifiers, n = 44) were selected on the basis of their familiarity with the target voice. The speakers used in the voice line-ups were native speakers of Quebec French, and all presented voices had similar fundamental frequencies (to within one semitone). In each line-up we used utterances of 4 different lengths (1, 4, 10, and 18 syll.). The results show that by controlling the familiarity criterion, a correct identification rate of 100 % is obtained with an exact error probability of p < 1 × 10^-12. These rates are superior to those of current automatic voice identification systems.
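The one-semitone matching criterion used to build the line-ups can be computed directly from the mean fundamental frequencies, since there are 12 semitones per octave (i.e. per doubling of frequency). A small sketch of that screening check:

```python
import math

def semitone_distance(f0_a, f0_b):
    """Distance in semitones between two fundamental frequencies;
    a frequency ratio of 2 (one octave) equals 12 semitones."""
    return abs(12.0 * math.log2(f0_a / f0_b))

def within_one_semitone(f0_target, f0_candidate):
    """Line-up criterion: a foil voice qualifies if its mean F0 is
    within one semitone of the target voice's mean F0."""
    return semitone_distance(f0_target, f0_candidate) <= 1.0
```

Matching on F0 this tightly removes the most salient acoustic cue, so identifications must rest on finer speaker-specific properties of the voice.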
27

La conception d'un système ultrasonore passif couche mince pour l'évaluation de l'état vibratoire des cordes vocales / A speaker recognition system based on vocal cords’ vibrations

Ishak, Dany 19 December 2017 (has links)
Dans ce travail, une approche de reconnaissance de l’orateur en utilisant un microphone de contact est développée et présentée. L'élément passif de contact est construit à partir d'un matériau piézoélectrique. La position du transducteur piézoélectrique sur le cou de l'individu peut affecter grandement la qualité du signal recueilli et par conséquent les informations qui en sont extraites. Ainsi, le milieu multicouche dans lequel les vibrations des cordes vocales se propagent avant d'être détectées par le transducteur est modélisé. Le meilleur emplacement sur le cou de l’individu pour attacher un élément transducteur particulier est déterminé en mettant en œuvre des techniques de simulation Monte Carlo et, par conséquent, les résultats de la simulation sont vérifiés en utilisant des expériences réelles. La reconnaissance est basée sur le signal généré par les vibrations des cordes vocales lorsqu'un individu parle et non sur le signal vocal à la sortie des lèvres qui est influencé par les résonances dans le conduit vocal. Par conséquent, en raison de la nature variable du signal recueilli, l'analyse a été effectuée en appliquant la technique de transformation de Fourier à court terme pour décomposer le signal en ses composantes de fréquence. Ces fréquences représentent les vibrations des cordes vocales (50-1000 Hz). Les caractéristiques en termes d'intervalle de fréquences sont extraites du spectrogramme résultant. Ensuite, un vecteur 1-D est formé à des fins d'identification. L'identification de l’orateur est effectuée en utilisant deux critères d'évaluation qui sont la mesure de la similarité de corrélation et l'analyse en composantes principales (ACP) en conjonction avec la distance euclidienne. Les résultats montrent qu'un pourcentage élevé de reconnaissance est atteint et que la performance est bien meilleure que de nombreuses techniques existantes dans la littérature. 
/ In this work, a speaker recognition approach using a contact microphone is developed and presented. The passive contact element is constructed from a piezoelectric material. In this context, the position of the piezoelectric transducer on the individual's neck may greatly affect the quality of the collected signal and consequently the information extracted from it. Thus, the multilayered medium in which the sound propagates before being detected by the transducer is modeled. The best location on the individual's neck to place a particular transducer element is determined by implementing Monte Carlo simulation techniques, and the simulation results are then verified in real experiments. The recognition is based on the signal generated by the vocal cords' vibrations when an individual is speaking, not on the vocal signal at the output of the lips, which is influenced by the resonances of the vocal tract. Because of the varying nature of the collected signal, the analysis was performed by applying the short-term Fourier transform to decompose the signal into its frequency components. These frequencies represent the vocal folds' vibrations (50-1000 Hz). Features, in the form of frequency intervals, are extracted from the resulting spectrogram, and a 1-D vector is then formed for identification purposes. The identification of the speaker is performed using two evaluation criteria, namely the correlation similarity measure and Principal Component Analysis (PCA) in conjunction with the Euclidean distance. The results show that a high recognition rate is achieved and that the performance is much better than that of many existing techniques in the literature.
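The identification pipeline outlined above (short-term Fourier analysis, restriction to the 50-1000 Hz vocal-fold band, a 1-D feature vector, then correlation similarity) can be sketched in NumPy. FFT and hop sizes are illustrative choices, not the thesis's settings.

```python
import numpy as np

def band_feature(signal, sr, n_fft=1024, hop=256, fmin=50.0, fmax=1000.0):
    """Average magnitude spectrum restricted to the vocal-fold
    vibration range (50-1000 Hz), flattened into a 1-D vector."""
    frames = [signal[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(signal) - n_fft, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1))        # spectrogram
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    band = (freqs >= fmin) & (freqs <= fmax)
    return spec[:, band].mean(axis=0)

def correlation_similarity(u, v):
    """Pearson correlation between two feature vectors."""
    u, v = u - u.mean(), v - v.mean()
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

A test utterance would be scored against stored reference vectors, with the highest correlation (or smallest Euclidean distance in PCA space) deciding the speaker.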
28

Traitement neuronal des voix et familiarité : entre reconnaissance et identification du locuteur

Plante-Hébert, Julien 12 1900 (has links)
La capacité humaine de reconnaitre et d’identifier de nombreux individus uniquement grâce à leur voix est unique et peut s’avérer cruciale pour certaines enquêtes. La méconnaissance de cette capacité jette cependant de l’ombre sur les applications dites « légales » de la phonétique. Le travail de thèse présenté ici a comme objectif principal de mieux définir les différents processus liés au traitement des voix dans le cerveau et les paramètres affectant ce traitement. Dans une première expérience, les potentiels évoqués (PÉs) ont été utilisés pour démontrer que les voix intimement familières sont traitées différemment des voix inconnues, même si ces dernières sont fréquemment répétées. Cette expérience a également permis de mieux définir les notions de reconnaissance et d’identification de la voix et les processus qui leur sont associés (respectivement les composantes P2 et LPC). Aussi, une distinction importante entre la reconnaissance de voix intimement familières (P2) et inconnues, mais répétées (N250) a été observée. En plus d’apporter des clarifications terminologiques plus-que-nécessaires, cette première étude est la première à distinguer clairement la reconnaissance et l’identification de locuteurs en termes de PÉs. Cette contribution est majeure, tout particulièrement en ce qui a trait aux applications légales qu’elle recèle. Une seconde expérience s’est concentrée sur l’effet des modalités d’apprentissage sur l’identification de voix apprises. Plus spécifiquement, les PÉs ont été analysés suite à la présentation de voix apprises à l’aide des modalités auditive, audiovisuelle et audiovisuelle interactive. Si les mêmes composantes (P2 et LPC) ont été observées pour les trois conditions d’apprentissage, l’étendue de ces réponses variait. 
L’analyse des composantes impliquées a révélé un « effet d’ombrage du visage » (face overshadowing effect, FOE) tel qu’illustré par une réponse atténuée suite à la présentation de voix apprises à l’aide d’information audiovisuelle par rapport à celles apprises dans la condition audio seulement. La simulation d’interaction à l’apprentissage a quant à elle provoqué une réponse plus importante sur la LPC en comparaison avec la condition audiovisuelle passive. De manière générale, les données rapportées dans les expériences 1 et 2 sont congruentes et indiquent que la P2 et la LPC sont des marqueurs fiables des processus de reconnaissance et d’identification de locuteurs. Les implications fondamentales et en phonétique légale seront discutées. / The human ability to recognize and identify speakers by their voices is unique and can be critical in criminal investigations. However, limited knowledge of how this capacity works overshadows its application in the field of "forensic phonetics". The main objective of this thesis is to characterize the processing of voices in the human brain and the parameters that influence it. In a first experiment, event-related potentials (ERPs) were used to establish that intimately familiar voices are processed differently from unknown voices, even when the latter are repeated. This experiment also served to establish a clear distinction between speaker recognition and speaker identification, supported by corresponding ERP components (respectively the P2 and the LPC). An essential contrast between the processes underlying the recognition of intimately familiar voices (P2) and that of unknown but previously heard voices (N250) was also observed. In addition to clarifying the terminology of voice processing, the first study in this thesis is the first to unambiguously distinguish between speaker recognition and identification in terms of ERPs. 
This contribution is major, especially when it comes to applications of voice processing in forensic phonetics. A second experiment focused more specifically on the effects of learning modalities on later speaker identification. ERPs to trained voices were analysed along with behavioral speaker-identification responses following a learning phase in which participants were trained on voices in three modalities: audio only, audiovisual, and audiovisual interactive. Although the ERP responses for the trained voices showed effects on the same components (P2 and LPC) across the three training conditions, the range of these responses varied. The analysis of these components first revealed a face overshadowing effect (FOE), resulting in impaired encoding of voice information. This well-documented effect resulted in a smaller LPC for the audiovisual condition compared to the audio-only condition. However, the audiovisual interactive condition appeared to minimize this FOE when compared to the passive audiovisual condition. Overall, the data presented in the two experiments are congruent and indicate that the P2 and the LPC are reliable electrophysiological markers of speaker recognition and identification. The implications of these findings for current voice processing models and for the field of forensic phonetics are discussed.
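ERP components such as the P2 and LPC only emerge after averaging many stimulus-locked EEG epochs, which cancels brain activity not time-locked to voice onset. A minimal single-channel sketch of that averaging step (sample counts are arbitrary for illustration):

```python
import numpy as np

def erp_average(eeg, onsets, pre=50, post=300):
    """Average stimulus-locked epochs from a single-channel EEG
    trace. `onsets` are sample indices of stimulus (voice) onsets;
    each epoch spans [onset - pre, onset + post). Averaging across
    trials suppresses activity not phase-locked to the stimulus,
    leaving the event-related potential."""
    epochs = [eeg[i - pre:i + post] for i in onsets
              if i - pre >= 0 and i + post <= len(eeg)]
    return np.mean(epochs, axis=0)
```

Component amplitudes (e.g. a P2 peak or the later LPC) are then measured on this averaged waveform and compared across familiarity or training conditions.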
29

Evaluation of Methods for Sound Source Separation in Audio Recordings Using Machine Learning

Gidlöf, Amanda January 2023 (has links)
Sound source separation is a popular and active research area, especially with modern machine learning techniques. In this thesis, the focus is on single-channel separation of two speakers into individual streams, specifically considering the case where the two speakers are also accompanied by background noise. There are different methods to separate speakers, and in this thesis three of them are evaluated: Conv-TasNet, DPTNet, and FaSNetTAC. These methods were used to train models to perform the sound source separation. The models were evaluated and validated through three experiments. Firstly, previous results for the chosen separation methods were reproduced. Secondly, models applicable to NFC's datasets and applications were created, fulfilling the aim of this thesis. Lastly, all models were evaluated on an independent dataset similar to NFC's. The results were evaluated using the metrics SI-SNRi and SDRi. This thesis provides recommended models and methods suitable for NFC applications, concluding in particular that Conv-TasNet and DPTNet are reasonable choices.
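SI-SNRi and SDRi measure how much a separated stream improves over the unprocessed mixture. SI-SNR itself (the scale-invariant signal-to-noise ratio used to train and score models like Conv-TasNet) can be sketched as follows; this is a standard textbook formulation, not the thesis's exact evaluation code.

```python
import numpy as np

def si_snr(estimate, target, eps=1e-8):
    """Scale-invariant SNR in dB: project the (mean-removed) estimate
    onto the target to get the 'signal' part, treat the remainder as
    noise, and take their energy ratio. Rescaling the estimate does
    not change the result."""
    target = target - target.mean()
    estimate = estimate - estimate.mean()
    s_target = (estimate @ target) / (target @ target + eps) * target
    e_noise = estimate - s_target
    return 10.0 * np.log10(
        (s_target @ s_target) / (e_noise @ e_noise + eps) + eps)

def si_snr_improvement(estimate, mixture, target):
    """SI-SNRi: gain of the separated estimate over the raw mixture."""
    return si_snr(estimate, target) - si_snr(mixture, target)
```

For two-speaker separation the metric is computed under the best permutation of estimated streams to reference speakers, and SDRi is the analogous improvement in signal-to-distortion ratio.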
