1 |
Direct Speech Translation Toward High-Quality, Inclusive, and Augmented Systems. Gaido, Marco, 28 April 2023
When this PhD started, the translation of speech into text in a different language was mainly tackled with a cascade of automatic speech recognition (ASR) and machine translation (MT) models, as the emerging direct speech translation (ST) models were not yet competitive. To close this gap, part of the PhD was devoted to improving the quality of direct models, both in the simplified condition of test sets where the audio is split into well-formed sentences, and in the realistic condition in which the audio is automatically segmented. First, we investigated how to transfer knowledge from MT models trained on large corpora. Then, we defined encoder architectures that give different weights to the vectors in the input sequence, reflecting how the amount of information in speech varies over time. Finally, we reduced the adverse effects caused by suboptimal automatic audio segmentation in two ways: on one side, we created models robust to this condition; on the other, we enhanced the audio segmentation itself. The good results achieved in terms of overall translation quality allowed us to investigate specific behaviors of direct ST systems that are crucial to satisfying real users' needs. On one side, driven by the ethical goal of inclusive systems, we showed that established technical choices geared toward high overall performance (statistical word segmentation of the target text, knowledge distillation from MT) exacerbate the gender representation disparities present in the training data. Along this line of work, we proposed mitigation techniques that reduce the gender bias of ST models, and showed how gender-specific systems can be used to control the translation of gendered words referring to the speakers, regardless of their vocal traits.
On the other side, motivated by the practical needs of interpreters and translators, we evaluated the potential of direct ST systems in the “augmented translation” scenario, focusing on the translation and recognition of named entities (NEs). Along this line of work, we proposed solutions to cope with a major weakness of ST models (the handling of person names), and introduced direct models that jointly perform ST and NE recognition, showing their superiority over a pipeline of dedicated tools for the two tasks. Overall, we believe that this thesis takes a step toward the adoption of direct ST systems in real applications, increasing awareness of their strengths and weaknesses compared to the traditional cascade paradigm.
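The knowledge transfer from MT mentioned above is commonly realized through word-level knowledge distillation, where the MT teacher's softened output distribution supervises the ST student at each decoding step. The NumPy sketch below illustrates the idea; the function names, shapes, and temperature value are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the vocabulary axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student), averaged over target positions.

    The teacher's softened distribution over the target vocabulary acts
    as a soft target for the student at every decoding step.
    """
    p = softmax(teacher_logits, temperature)               # teacher soft targets
    log_q = np.log(softmax(student_logits, temperature) + 1e-12)
    log_p = np.log(p + 1e-12)
    kl = (p * (log_p - log_q)).sum(axis=-1)                # per-position KL divergence
    return kl.mean()
```

When student and teacher agree exactly the loss is zero; any mismatch in the distributions yields a positive penalty that gradient descent can minimize.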
|
2 |
Audio Event Detection on TV Broadcast. Ozan, Ezgi Can, 1 September 2011
The availability of digital media has grown tremendously with fast-paced, ever-growing
storage and communication technologies. As a result, today we face a problem in
indexing and browsing huge amounts of multimedia data. Such volumes are
impossible to index or browse by hand, so automatic indexing and browsing systems
have been proposed. Audio Event Detection is a research area that analyses audio data
in a semantic and perceptual manner to bring a conceptual solution to this problem. In this
thesis, a method for detecting several audio events in TV broadcast is proposed. The
proposed method includes an audio segmentation stage to detect event boundaries.
Broadcast audio is classified into 17 classes. The feature set for each event is obtained by
using a feature selection algorithm to select suitable features among a large set of popular
descriptors. Support Vector Machines and Gaussian Mixture Models are used as classifiers
and the proposed system achieved an average recall rate of 88% for 17 different audio
events. Compared with results reported in the literature, the proposed method is promising.
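The two classifier families named above can be contrasted on toy data: a discriminative SVM separates the event classes directly, while a generative approach fits one Gaussian Mixture Model per class and classifies by maximum likelihood. The scikit-learn sketch below uses synthetic two-class features; the feature dimensionality, class layout, and model settings are illustrative assumptions, not the thesis setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for per-segment audio descriptors (e.g. MFCC statistics):
# two well-separated event classes, 2-D features for brevity.
X0 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
X1 = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Discriminative route: an SVM separating the event classes directly.
svm = SVC(kernel="rbf").fit(X, y)

# Generative route: one GMM per class, classify by maximum log-likelihood.
gmms = [GaussianMixture(n_components=2, random_state=0).fit(X[y == c]) for c in (0, 1)]

def gmm_predict(samples):
    scores = np.stack([g.score_samples(samples) for g in gmms], axis=1)
    return scores.argmax(axis=1)

test_point = np.array([[3.1, 2.9]])
svm_label = int(svm.predict(test_point)[0])   # expected: class 1
gmm_label = int(gmm_predict(test_point)[0])   # expected: class 1
```

In practice the two routes trade off differently: SVMs tend to excel with limited, well-labelled data, while per-class GMMs extend naturally to scoring and rejection.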
|
3 |
Feature selection for multimodal acoustic event detection. Butko, Taras, 8 July 2011
The detection of the Acoustic Events (AEs) naturally produced in a meeting room may help to describe human and social activity. The automatic description of interactions between humans and the environment can be useful for providing implicit assistance to the people inside the room, context-aware and content-aware information requiring a minimum of human attention or interruptions, support for high-level analysis of the underlying acoustic scene, etc. On the other hand, the recent fast growth of available audio and audiovisual content strongly demands tools for analyzing, indexing, searching and retrieving the available documents. Given an audio document, the first processing step usually is audio segmentation (AS), i.e. the partitioning of the input audio stream into acoustically homogeneous regions which are labelled according to a predefined broad set of classes such as speech, music, noise, etc. Acoustic event detection (AED) is the objective of this thesis work. A variety of features coming not only from the audio but also from the video modality is proposed to deal with the detection problem in meeting-room and broadcast-news domains. Two basic detection approaches are investigated in this work: joint segmentation and classification using Hidden Markov Models (HMMs) with Gaussian Mixture Densities (GMMs), and a detection-by-classification approach using discriminative Support Vector Machines (SVMs). For the first case, a fast one-pass-training feature selection algorithm is developed in this thesis to select, for each AE class, the subset of multimodal features that shows the best detection rate. AED in meeting-room environments aims at processing the signals collected by distant microphones and video cameras in order to obtain the temporal sequence of (possibly overlapped) AEs that have been produced in the room.
When applied to interactive seminars with a certain degree of spontaneity, the detection of acoustic events from the audio modality alone shows a large number of errors, mostly due to the temporal overlap of sounds. This thesis includes several novelties regarding the task of multimodal AED. Firstly, the use of video features: since acoustic sources do not overlap in the video modality (except for occlusions), the proposed features improve AED in such rather spontaneous recordings. Secondly, the inclusion of acoustic localization features, which, in combination with the usual spectro-temporal audio features, yield a further improvement in recognition rate. Thirdly, the comparison of feature-level and decision-level fusion strategies for combining the audio and video modalities. In the latter case, the system output scores are combined using two statistical approaches: the weighted arithmetic mean and the fuzzy integral. On the other hand, due to the scarcity of annotated multimodal data, and in particular of data with temporal sound overlaps, a new multimodal database with a rich variety of meeting-room AEs has been recorded, manually annotated, and made publicly available for research purposes.
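Of the two decision-level fusion approaches mentioned, the weighted arithmetic mean is straightforward to sketch: each modality's classifier emits per-class scores, and the fused score is their convex combination. The weights and class counts below are illustrative assumptions (the fuzzy integral is not shown).

```python
import numpy as np

def fuse_scores(audio_scores, video_scores, w_audio=0.7):
    """Decision-level fusion by weighted arithmetic mean.

    Each input is an array of per-class detection scores in [0, 1]
    produced by one modality's classifier; the weight reflects how much
    the (typically stronger) audio modality is trusted.
    """
    w_video = 1.0 - w_audio
    return w_audio * np.asarray(audio_scores) + w_video * np.asarray(video_scores)

# Illustrative: three AE classes scored independently by each modality.
fused = fuse_scores([0.9, 0.2, 0.4], [0.6, 0.3, 0.8], w_audio=0.7)
detected = int(fused.argmax())  # class with the highest fused score
```

A practical benefit of fusing at the decision level is that the two classifiers stay independent, so either modality can be retrained or replaced without touching the other.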
|
4 |
Video Segmentation Based on Audio Feature Extraction. Atar, Neriman, 1 February 2009
In this study, an automatic video segmentation and classification system based on audio features is presented. Video segments are classified as “speech”, “music”, “crowd” or “silence”; segments that do not belong to these classes are left as “unclassified”. For silence segment detection, a simple threshold comparison is applied to the short-time energy feature of the embedded audio sequence. For the “speech”, “music” and “crowd” segments, a multiclass classification scheme is applied. For this purpose, three audio feature sets have been formed: the first consists purely of MPEG-7 audio features, the second of the audio features used in [31], and the third of the combination of these two sets. To choose the best feature set, a histogram comparison method has been used. The audio segmentation system was trained and tested with these feature sets. The evaluation results show that Feature Set 3, the combination of the other two, gives the best performance for the audio classification system. The output of the classification system is an XML file containing MPEG-7 audio segment descriptors for the video sequence.
An application scenario is given by combining the audio segmentation results with visual analysis results to obtain audio-visual video segments.
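Silence detection by thresholding short-time energy, as described above, can be sketched as follows; the frame length, hop size, threshold, and sample rate are illustrative choices, not those of the thesis.

```python
import numpy as np

def short_time_energy(signal, frame_len=512, hop=256):
    """Per-frame short-time energy (mean squared amplitude) of a mono signal."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([float(np.mean(f ** 2)) for f in frames])

def silence_mask(signal, threshold=1e-3, frame_len=512, hop=256):
    """Frames whose energy falls below the threshold are marked as silent."""
    return short_time_energy(signal, frame_len, hop) < threshold

# Toy example: half a second of a 440 Hz tone followed by digital silence (8 kHz).
sr = 8000
loud = 0.5 * np.sin(2 * np.pi * 440 * np.arange(sr // 2) / sr)
quiet = np.zeros(sr // 2)
mask = silence_mask(np.concatenate([loud, quiet]))
```

Runs of consecutive `True` frames in the mask can then be merged into silence segments with start and end times derived from the hop size and sample rate.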
|
5 |
Structural analysis and segmentation of music signals. Ong, Bee Suan, 21 February 2007
With the recent explosion in the quantity of digital audio libraries and databases, content descriptions play an important role in efficiently managing and retrieving audio files. This doctoral research aims to discover and extract structural descriptions from polyphonic music signals. As the repetition and transformation of music structure creates a unique identity for a piece of music, extracting such information can link low-level and higher-level descriptions of the music signal and provide better-quality access plus a more powerful way of interacting with audio content. Finding appropriate segment boundaries is indispensable in certain content-based applications. Thus, temporal audio segmentation at the semantic level and the identification of representative excerpts from music audio signals are also investigated. We make use of a higher-level analysis technique for better segment boundary detection. From both theoretical and practical points of view, this research not only helps increase our knowledge of music structure but also facilitates time-saving browsing and assessment of music.
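A common way to expose the repetitions that this kind of structure analysis relies on is a self-similarity matrix computed over frame-level features (e.g. chroma vectors), where repeated sections appear as high-similarity stripes off the main diagonal. The sketch below is a generic illustration of the technique, not the thesis method.

```python
import numpy as np

def self_similarity(features):
    """Cosine self-similarity matrix of a (frames x dims) feature sequence.

    Repeated sections of a piece show up as high-similarity diagonal
    stripes away from the main diagonal, which a structure-analysis
    stage can trace to recover the musical form.
    """
    F = np.asarray(features, dtype=float)
    norms = np.linalg.norm(F, axis=1, keepdims=True)
    F = F / np.maximum(norms, 1e-12)   # unit-normalize each frame
    return F @ F.T                     # pairwise cosine similarities

# Toy feature sequence with an A-B-A-B pattern: frames 0 and 2 match,
# as do frames 1 and 3.
feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
S = self_similarity(feats)
```

On real audio the matrix is computed from hundreds of frames, and segment boundaries correspond to abrupt changes in its block structure.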
|
6 |
Different Contributions to Cost-Effective Transcription and Translation of Video Lectures. Silvestre Cerdà, Joan Albert, 5 April 2016
In recent years, on-line multimedia repositories have experienced strong
growth that has consolidated them as essential knowledge assets, especially
in the area of education, where large repositories of video lectures have been
built in order to complement or even replace traditional teaching methods.
However, most of these video lectures are neither transcribed nor translated
due to a lack of cost-effective solutions to do so in a way that gives accurate
enough results. Solutions of this kind are clearly necessary in order to make
these lectures accessible to speakers of different languages and to people with
hearing disabilities. They would also facilitate lecture searchability and
analysis functions, such as classification, recommendation or plagiarism
detection, as well as the development of advanced educational functionalities
like content summarisation to assist student note-taking.
For this reason, the main aim of this thesis is to develop a cost-effective
solution capable of transcribing and translating video lectures to a reasonable
degree of accuracy. More specifically, we address the integration of
state-of-the-art techniques in Automatic Speech Recognition and Machine
Translation into large video lecture repositories to generate high-quality
multilingual video subtitles without human intervention and at a reduced
computational cost. Also, we explore the potential benefits of the exploitation
of the information that we know a priori about these repositories, that is,
lecture-specific knowledge such as speaker, topic or slides, to create
specialised, in-domain transcription and translation systems by means of
massive adaptation techniques.
The proposed solutions have been tested in real-life scenarios by carrying out
several objective and subjective evaluations, obtaining very positive results.
The main outcome of this thesis, The transLectures-UPV
Platform, has been publicly released as open-source software and, at the
time of writing, it is serving automatic transcriptions and translations for
several thousand video lectures at many Spanish and European
universities and institutions.

Silvestre Cerdà, JA. (2016). Different Contributions to Cost-Effective Transcription and Translation of Video Lectures [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/62194
|