1 |
Military Confidence Building Measures Across the Strait: constitution, cognition and condition of the analysis, taking military personnel as an example. Yu, Yeou-ruey, 29 August 2012
No abstract available.
|
2 |
Advanced Quality Measures for Speech Translation / Mesures de qualité avancées pour la traduction de la parole. Le, Ngoc Tien, 29 January 2018
Le principal objectif de cette thèse est d'estimer de manière automatique la qualité de la traduction de la langue parlée (Spoken Language Translation ou SLT), tâche appelée estimation de confiance (Confidence Estimation ou CE). Le système de SLT génère, pour l'audio, des hypothèses représentées par des séquences de mots, qui contiennent parfois des erreurs. En raison de multiples facteurs, une sortie de SLT de qualité insatisfaisante peut causer différents problèmes pour les utilisateurs finaux. Par conséquent, il est utile de savoir avec quel niveau de confiance les tokens corrects peuvent être identifiés au sein de l'hypothèse. L'objectif de l'estimation de confiance est d'obtenir des scores qui quantifient le niveau de confiance, ou d'annoter les tokens cibles en appliquant un seuil de décision (par exemple, le seuil par défaut de 0,5). Dans le cadre de cette thèse, nous avons proposé une boîte à outils, consistant en un framework personnalisable et flexible et en une plate-forme portable, pour l'estimation de confiance au niveau des mots (Word-level Confidence Estimation ou WCE) de la SLT.
En premier lieu, les erreurs dans la SLT ont tendance à se produire sur les hypothèses de la reconnaissance automatique de la parole (Automatic Speech Recognition ou ASR) et sur celles de la traduction automatique (Machine Translation ou MT), qui sont représentées par des séquences de mots. Ce phénomène est étudié par l'estimation de confiance (CE) au niveau des mots en utilisant des modèles de champs aléatoires conditionnels (Conditional Random Fields ou CRF). Cette tâche, relativement nouvelle, est définie et formalisée comme un problème d'étiquetage séquentiel dans lequel chaque mot, dans l'hypothèse de SLT, est annoté comme bon ou mauvais selon un ensemble de traits importants. Nous proposons plusieurs outils servant à estimer la confiance des mots (WCE) en fonction de notre évaluation automatique de la qualité de la transcription (ASR), de la qualité de la traduction (MT), ou des deux (ASR et MT combinés). Ce travail de recherche a été rendu possible parce que nous avons construit un corpus spécifique, qui contient 6,7k énoncés pour lesquels un quintuplet est normalisé comme suit : (1) sortie d'ASR, (2) transcription verbatim, (3) traduction textuelle, (4) traduction vocale et (5) post-édition de la traduction. La conclusion de nos multiples expérimentations, utilisant les traits conjoints entre ASR et MT pour la WCE, est que les traits de MT demeurent les plus influents, tandis que les traits d'ASR peuvent apporter des informations complémentaires intéressantes.
En deuxième lieu, nous proposons deux méthodes pour distinguer les erreurs provenant de l'ASR de celles provenant de la MT, dans lesquelles chaque mot, dans l'hypothèse de SLT, est annoté comme good (bon), asr_error (erreur d'ASR) ou mt_error (erreur de MT). Nous contribuons ainsi à l'estimation de confiance au niveau des mots (WCE) pour la SLT en identifiant la source des erreurs au sein des systèmes de SLT.
En troisième lieu, nous proposons une nouvelle métrique, intitulée Word Error Rate with Embeddings (WER-E), qui est exploitée afin de rendre cette tâche possible. Cette approche permet d'obtenir de meilleures hypothèses de SLT lors de la sélection, parmi les N meilleures hypothèses, de celle qui optimise WER-E.
En somme, nos stratégies proposées pour l'estimation de la confiance révèlent un impact positif sur plusieurs applications pour la SLT.
Les outils robustes d'estimation de la qualité pour la SLT peuvent être utilisés dans le but de réévaluer des graphes de traduction de la parole, ou de fournir des retours d'information aux utilisateurs dans des scénarios de traduction vocale interactive ou de transcription de la parole assistée par ordinateur. Mots-clés : estimation de la qualité, estimation de confiance au niveau des mots (WCE), traduction de la langue parlée (SLT), traits conjoints, sélection des traits. / The main aim of this thesis is to investigate the automatic quality assessment of spoken language translation (SLT), called Confidence Estimation (CE) for SLT. Due to several factors, SLT output of unsatisfactory quality might cause various issues for the target users. Therefore, it is useful to know how confident we can be in the tokens of the hypothesis. The first contribution of this thesis is LIG-WCE, a customizable, flexible and portable toolkit for Word-level Confidence Estimation (WCE) of SLT.
WCE for SLT is a relatively new task, defined and formalized as a sequence labelling problem where each word in the SLT hypothesis is tagged as good or bad according to a large feature set. We propose several word confidence estimators (WCE) based on our automatic evaluation of transcription (ASR) quality, translation (MT) quality, or both (combined/joint ASR+MT). This research work is possible because we built a specific corpus, which contains 6.7k utterances for which a quintuplet is built, containing: ASR output, verbatim transcript, text translation, speech translation and post-edition of the translation. The conclusion of our multiple experiments using joint ASR and MT features for WCE is that MT features remain the most influential, while ASR features can bring interesting complementary information.
As another contribution, we propose two methods to disentangle ASR errors and MT errors, where each word in the SLT hypothesis is tagged as good, asr_error or mt_error. We thus explore the contributions of WCE for SLT in finding out the source of SLT errors.
Furthermore, we propose a simple extension of the WER metric in order to penalize substitution errors differently according to their context, using word embeddings. For instance, the proposed metric should catch near matches (mainly morphological variants) and penalize less this kind of error, which has a more limited impact on translation performance. Our experiments show that the correlation of the newly proposed metric with SLT performance is better than that of WER. Oracle experiments are also conducted and show the ability of our metric to find better hypotheses (to be translated) in the ASR N-best list. Finally, a preliminary experiment where ASR tuning is based on our new metric shows encouraging results.
To conclude, we have proposed several prominent strategies for CE of SLT that could have a positive impact on several applications for SLT. Robust quality estimators for SLT can be used for re-scoring speech translation graphs or for providing feedback to the user in interactive speech translation or computer-assisted speech-to-text scenarios.
Keywords: Quality estimation, Word confidence estimation (WCE), Spoken Language Translation (SLT), Joint Features, Feature Selection.
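The WER-E metric is defined in the thesis itself; the following is only a minimal sketch of the underlying idea, in Python, assuming a toy two-dimensional embedding table (EMB and all data are illustrative, not the thesis's resources): in the edit-distance recursion behind WER, the substitution cost is lowered when the two words are close in embedding space, so near matches such as morphological variants are penalized less than outright errors.

    import numpy as np

    # Hypothetical toy embeddings; a real system would load pretrained vectors.
    EMB = {
        "cat": np.array([0.9, 0.1]), "cats": np.array([0.88, 0.15]),
        "dog": np.array([0.1, 0.9]), "the": np.array([0.5, 0.5]),
    }

    def sub_cost(w1, w2):
        """Substitution cost in [0, 1]: cheap for embedding-near words."""
        if w1 == w2:
            return 0.0
        e1, e2 = EMB.get(w1), EMB.get(w2)
        if e1 is None or e2 is None:
            return 1.0  # unknown words get the full penalty, as in plain WER
        cos = float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2)))
        return 1.0 - max(cos, 0.0)

    def wer_e(ref, hyp):
        """Edit distance with embedding-weighted substitutions, normalized by |ref|."""
        n, m = len(ref), len(hyp)
        d = np.zeros((n + 1, m + 1))
        d[:, 0] = np.arange(n + 1)  # deletions
        d[0, :] = np.arange(m + 1)  # insertions
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d[i, j] = min(d[i - 1, j] + 1,   # deletion
                              d[i, j - 1] + 1,   # insertion
                              d[i - 1, j - 1] + sub_cost(ref[i - 1], hyp[j - 1]))
        return d[n, m] / max(n, 1)

    print(wer_e("the cat".split(), "the cats".split()))  # near match: far below 0.5
    print(wer_e("the cat".split(), "the dog".split()))   # real error: close to plain WER

Selecting, among the N-best ASR hypotheses, the one with the lowest such score against the reference is then what the oracle experiments described above amount to.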
|
3 |
Interactive Transcription of Old Text Documents. Serrano Martínez-Santos, Nicolás, 09 June 2014
Nowadays, there are huge collections of handwritten text documents in libraries all over the world. The high demand for these resources has led to the creation of digital libraries in order to facilitate their preservation and provide electronic access to these documents. However, text transcriptions of these document images are not always available to allow users to quickly search for information, or computers to process the information, search for patterns or draw out statistics. The problem is that manual transcription of these documents is an expensive task from both economical and time viewpoints. This thesis presents a novel approach for efficient Computer Assisted Transcription (CAT) of handwritten text documents using state-of-the-art Handwriting Text Recognition (HTR) systems.
The objective of CAT approaches is to efficiently complete a transcription task through human-machine collaboration, as the effort required to generate a manual transcription is high, and automatically generated transcriptions from state-of-the-art systems still do not reach the accuracy required. This thesis is centered on a special application of CAT, that is, the transcription of old text documents when the quantity of user effort available is limited, and thus the entire document cannot be revised. In this approach, the objective is to generate the best possible transcription by means of the user effort available. This thesis provides a comprehensive view of the CAT process, from feature extraction to user interaction.
First, a statistical approach to generalise interactive transcription is proposed. As its direct application is unfeasible, some assumptions are made to apply it to two different tasks: first, the interactive transcription of handwritten text documents, and next, the interactive detection of the document layout.
Next, the digitisation and annotation process of two real old text documents is described. This process was carried out because of the scarcity of similar resources and the need for annotated data to thoroughly test all the tools and techniques developed in this thesis. These two documents were carefully selected to represent the general difficulties that are encountered when dealing with HTR. Baseline results are presented on these two documents to establish a benchmark with a standard HTR system. Finally, these annotated documents were made freely available to the community. It must be noted that all the techniques and methods developed in this thesis have been assessed on these two real old text documents.
Then, a CAT approach for HTR when user effort is limited is studied and extensively tested. The ultimate goal of applying CAT is achieved by putting together three processes. Given a recognised transcription from an HTR system, the first process consists in locating (possibly) incorrect words and employing the available user effort to supervise them (if necessary). As most words are not expected to be supervised due to the limited user effort available, only a few are selected to be revised. The system presents to the user a small subset of these words according to an estimation of their correctness or, to be more precise, according to their confidence level. Next, the second process starts once these low-confidence words have been supervised. This process updates the recognition of the document taking user corrections into consideration, which improves the quality of those words that were not revised by the user. Finally, the last process adapts the system from the partially revised (and possibly not perfect) transcription obtained so far. In this adaptation, the system intelligently selects the correct words of the transcription. As a result, the adapted system will better recognise future transcriptions. Transcription experiments using this CAT approach show that it is most effective when user effort is low.
The last contribution of this thesis is a method for balancing the final transcription quality and the supervision effort applied using our previously described CAT approach. In other words, this method allows the user to control the amount of errors in the transcriptions obtained from a CAT approach. The motivation of this method is to let users decide on the final quality of the desired documents, as partially erroneous transcriptions can be sufficient to convey the meaning, and the user effort required to transcribe them might be significantly lower when compared to obtaining a totally manual transcription. Consequently, the system estimates the minimum user effort required to reach the amount of error defined by the user. Error estimation is performed by computing separately the error produced by each recognised word, and thus asking the user to only revise the ones in which most errors occur.
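A minimal sketch of this error-budget idea, under the simplifying assumption that each word's error probability is one minus its confidence score (function names and numbers are illustrative, not the thesis's implementation): words are revised in increasing order of confidence until the estimated residual error of the transcription falls below the user-defined target.

    def words_to_supervise(words, confidences, target_error):
        """Select the fewest words to revise so that the estimated residual
        word error rate drops below target_error; a revised word is assumed
        to be corrected, removing its expected error from the total."""
        order = sorted(range(len(words)), key=lambda i: confidences[i])
        residual = sum(1.0 - c for c in confidences)  # expected errors, no revision
        selected = []
        for i in order:
            if residual / len(words) <= target_error:
                break
            selected.append(words[i])
            residual -= 1.0 - confidences[i]
        return selected

    hyp = ["qui", "the", "brwn", "fox"]
    conf = [0.35, 0.98, 0.20, 0.90]
    print(words_to_supervise(hyp, conf, target_error=0.10))  # ['brwn', 'qui']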
Additionally, an interactive prototype is presented, which integrates most of the interactive techniques presented in this thesis. This prototype has been developed to be used by palaeographic experts, who do not have any background in HTR technologies. After a slight fine-tuning by an HTR expert, the prototype lets the transcribers manually annotate the document or employ the CAT approach presented. All automatic operations, such as recognition, are performed in the background, detaching the transcriber from the details of the system. The prototype was assessed by an expert transcriber and proved to be adequate and efficient for its purpose. The prototype is freely available under a GNU General Public License (GPL). / Serrano Martínez-Santos, N. (2014). Interactive Transcription of Old Text Documents [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37979
|
4 |
CONTRIBUTIONS TO EFFICIENT AUTOMATIC TRANSCRIPTION OF VIDEO LECTURES. Agua Teba, Miguel Ángel del, 04 November 2019
Tesis por compendio / [ES] Durante los últimos años, los repositorios multimedia en línea se han convertido
en fuentes clave de conocimiento gracias al auge de Internet, especialmente en
el área de la educación. Instituciones educativas de todo el mundo han dedicado
muchos recursos en la búsqueda de nuevos métodos de enseñanza, tanto para
mejorar la asimilación de nuevos conocimientos, como para poder llegar a una
audiencia más amplia. Como resultado, hoy en día disponemos de diferentes
repositorios con clases grabadas que sirven como herramientas complementarias en
la enseñanza, o incluso pueden asentar una nueva base en la enseñanza a
distancia. Sin embargo, deben cumplir con una serie de requisitos para que la
experiencia sea totalmente satisfactoria y es aquí donde la transcripción de los
materiales juega un papel fundamental. La transcripción posibilita una búsqueda
precisa de los materiales en los que el alumno está interesado, se abre la
puerta a la traducción automática, a funciones de recomendación, a la
generación de resúmenes de las charlas y, además, el poder hacer
llegar el contenido a personas con discapacidades auditivas. No obstante, la
generación de estas transcripciones puede resultar muy costosa.
Con todo esto en mente, la presente tesis tiene como objetivo proporcionar
nuevas herramientas y técnicas que faciliten la transcripción de estos
repositorios. En particular, abordamos el desarrollo de un conjunto de herramientas
de reconocimiento automático del habla, con énfasis en las técnicas de aprendizaje
profundo que contribuyen a proporcionar transcripciones precisas en casos de
estudio reales. Además, se presentan diferentes participaciones en competiciones
internacionales donde se demuestra la competitividad del software comparada con
otras soluciones. Por otra parte, en aras de mejorar los sistemas de
reconocimiento, se propone una nueva técnica de adaptación de estos sistemas al
interlocutor basada en el uso de Medidas de Confianza. Esto además motivó el
desarrollo de técnicas para la mejora en la estimación de este tipo de medidas
por medio de Redes Neuronales Recurrentes.
Todas las contribuciones presentadas se han probado en diferentes repositorios
educativos. De hecho, el toolkit transLectures-UPV es parte de un conjunto de
herramientas que sirve para generar transcripciones de clases en diferentes
universidades e instituciones españolas y europeas. / [CA] Durant els últims anys, els repositoris multimèdia en línia s'han convertit
en fonts clau de coneixement gràcies a l'expansió d'Internet, especialment en
l'àrea de l'educació. Institucions educatives de tot el món han dedicat
molts recursos en la recerca de nous mètodes d'ensenyament, tant per
millorar l'assimilació de nous coneixements, com per poder arribar a una
audiència més àmplia. Com a resultat, avui dia disposem de diferents
repositoris amb classes gravades que serveixen com a eines complementàries en
l'ensenyament, o fins i tot poden assentar una nova base a l'ensenyament a
distància. No obstant això, han de complir amb una sèrie de requisits perquè
l'experiència siga totalment satisfactòria, i és ací on la transcripció dels
materials juga un paper fonamental. La transcripció possibilita una recerca
precisa dels materials en els quals l'alumne està interessat, s'obri la
porta a la traducció automàtica, a funcions de recomanació, a la
generació de resums de les xerrades i el poder fer
arribar el contingut a persones amb discapacitats auditives. No obstant això, la
generació d'aquestes transcripcions pot resultar molt costosa.
Amb això en ment, la present tesi té com a objectiu proporcionar noves
eines i tècniques que faciliten la transcripció d'aquests repositoris. En
particular, abordem el desenvolupament d'un conjunt d'eines de reconeixement
automàtic de la parla, amb èmfasi en les tècniques d'aprenentatge profund que
contribueixen a proporcionar transcripcions precises en casos d'estudi reals. A
més, es presenten diferents participacions en competicions internacionals on es
demostra la competitivitat del programari comparada amb altres solucions.
D'altra banda, per tal de millorar els sistemes de reconeixement, es proposa una
nova tècnica d'adaptació d'aquests sistemes a l'interlocutor basada en l'ús de
Mesures de Confiança. A més, això va motivar el desenvolupament de tècniques per
a la millora en l'estimació d'aquest tipus de mesures per mitjà de Xarxes
Neuronals Recurrents.
Totes les contribucions presentades s'han provat en diferents repositoris
educatius. De fet, el toolkit transLectures-UPV és part d'un conjunt d'eines
que serveix per generar transcripcions de classes en diferents universitats i
institucions espanyoles i europees. / [EN] In recent years, on-line multimedia repositories have become key
knowledge assets thanks to the rise of the Internet, especially in the area of
education. Educational institutions around the world have devoted great efforts
to explore different teaching methods, to improve the transmission of knowledge
and to reach a wider audience. As a result, online video lecture repositories
are now available and serve as complementary tools that can boost the learning
experience to better assimilate new concepts. In order to guarantee the success
of these repositories the transcription of each lecture plays a very important
role because it constitutes the first step towards the availability of many other
features. This transcription allows the searchability of learning materials,
enables translation into other languages, provides recommendation
functions, makes it possible to generate content summaries, guarantees
access for people with hearing disabilities, etc. However, the
transcription of these videos is expensive in terms of time and human cost.
To this end, this thesis aims at providing new tools and techniques that
ease the transcription of these repositories. In particular, we address the
development of a complete Automatic Speech Recognition toolkit with a special
focus on the Deep Learning techniques that contribute to providing accurate
transcriptions in real-world scenarios. This toolkit is tested against many
others in different international competitions, showing comparable transcription
quality. Moreover, a new technique to improve the recognition accuracy has been
proposed which makes use of Confidence Measures, and constitutes the spark that
motivated the proposal of new Confidence Measures techniques that helped to
further improve the transcription quality. To this end, a new speaker-adapted
confidence measure approach was proposed for models based on Recurrent Neural
Networks.
The contributions proposed herein have been tested in real-life scenarios in
different educational repositories. In fact, the transLectures-UPV toolkit is
part of a set of tools for providing video lecture transcriptions in many
different Spanish and European universities and institutions. / Agua Teba, MÁD. (2019). CONTRIBUTIONS TO EFFICIENT AUTOMATIC TRANSCRIPTION OF VIDEO LECTURES [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/130198 / Compendio
|
5 |
Guerra das Malvinas: o impacto geopolítico do conflito no relacionamento entre a Armada da República Argentina (ARA) e a Marinha do Brasil (MB) / Malvinas war: the geopolitical impact of the conflict on the ARA and MB relationship. Artur Luiz Santana Moreira, 20 March 2008
Por meio de tradicionais e novos conceitos da Geopolítica, são analisadas as especiais circunstâncias que cercaram as relações entre a Argentina e os principais países do mundo desenvolvido, os EUA e parte da Europa, durante a Guerra das Malvinas, para, a partir desse mesmo ferramental conceitual, verificar como esse episódio teve seu impacto nas relações de Brasil e Argentina na América do Sul. Os principais acontecimentos políticos, táticos e logísticos desse conflito são descritos para auxiliar nessa análise feita. A partir desse ponto de inflexão na história sul-americana, utiliza-se o conceito de Medidas de Confiança Mútua
(MCM) para se verificar como as Marinhas de Brasil e Argentina intensificaram suas relações dentro do novo marco geopolítico acordado entre os dois países. São descritos os sucessos dessa política de aproximação em cinco fases históricas distintas, didaticamente elaboradas: duas anteriores à própria Guerra das Malvinas, e três posteriores. Destacam-se, neste estudo, justamente, as três últimas fases. Ou seja, a terceira fase, após a Guerra das Malvinas, onde são descritos, dentre outros aspectos, os encontros estratégicos organizados pelo EMFA
(Brasil) e pelo EMCFA (Argentina) no final da década de 80 do século passado; a quarta fase, ao longo da década de 90, por ter sido o período em que as principais MCM de sucesso ocorreram; e a quinta fase, já na virada do milênio, onde são discutidos os limites atuais das MCM que vêm sendo adotadas e as possíveis perspectivas futuras. A primeira e a segunda fases situam-se ainda nos períodos iniciais e intermediários da Guerra Fria, mas, por já existirem ali algumas MCM embrionárias entre as Marinhas de Brasil e Argentina, importantes para desdobramentos futuros, estas fases têm também discutidas as suas importâncias históricas. Enfatiza-se que, na primeira fase, os principais episódios ocorreram sob grande influência dos EUA, enquanto, na segunda fase, já se constatava uma ligeira autonomia regional nas medidas adotadas. / By means of traditional and new concepts of geopolitics, this study analyses the special circumstances that surrounded the relations between Argentina and the main developed countries, the United States and part of Europe, during the Malvinas War, in order to examine, with the same conceptual toolset, how that episode impacted Brazil-Argentina relations in South America. The main political, tactical and logistic events of the conflict are described to support this analysis. From this turning point in South American history onwards, the concept of Mutual Confidence Measures (MCM) is used to verify how the Brazilian and Argentinean Navies intensified their relations within the new geopolitical framework agreed upon between the two countries. The successful events of this rapprochement policy are described in five distinct historical phases, didactically worked out: two prior to the Malvinas War and three afterwards. This study highlights precisely the last three phases, i.e., the phases after the Malvinas War. In the third phase, there are described, among other aspects, the strategic meetings organized by the EMFA (Brazil) and the EMCFA (Argentina) at the end of the 1980s; in the fourth phase, the 1990s are strongly emphasized, since this is the period during which the main successful MCM occurred; and in the fifth phase, at the turn of the millennium, the current limits of the MCM being adopted are discussed, as well as possible perspectives for the future. The first and second phases fall within the early and intermediate periods of the Cold War. These phases are also discussed because they already presented some embryonic MCM policies between the Brazilian and Argentinean Navies, important for the way events unfolded later on. In the first phase, the main episodes took place under strong US influence, while in the second one can already notice a slight regional autonomy in the measures adopted.
|
6 |
On the effective deployment of current machine translation technology. González Rubio, Jesús, 03 June 2014
Machine translation is a fundamental technology that is gaining more importance
each day in our multilingual society. Companies and individuals are
turning their attention to machine translation since it dramatically cuts down
their expenses on translation and interpreting. However, the output of current
machine translation systems is still far from the quality of translations generated
by human experts. The overall goal of this thesis is to narrow this
quality gap by developing new methodologies and tools that enable a
broader and more efficient deployment of machine translation technology.
We start by proposing a new technique to improve the quality of the
translations generated by fully-automatic machine translation systems. The
key insight of our approach is that different translation systems, implementing
different approaches and technologies, can exhibit different strengths and
limitations. Therefore, a proper combination of the outputs of such different
systems has the potential to produce translations of improved quality.
We present minimum Bayes' risk system combination, an automatic approach
that detects the best parts of the candidate translations and combines them
to generate a consensus translation that is optimal with respect to a particular
performance metric. We thoroughly describe the formalization of our
approach as a weighted ensemble of probability distributions and provide efficient
algorithms to obtain the optimal consensus translation according to the
widespread BLEU score. Empirical results show that the proposed approach
is indeed able to generate statistically better translations than the provided
candidates. Compared to other state-of-the-art systems combination methods,
our approach reports similar performance not requiring any additional data
but the candidate translations.
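A minimal sketch of minimum Bayes' risk combination as described above, using a smoothed sentence-level BLEU as the gain function; the uniform weights and toy candidates are illustrative assumptions, whereas the thesis formalizes the combination as a weighted ensemble of probability distributions. The consensus output is the candidate whose expected loss (one minus BLEU) against the other candidates is lowest.

    import math
    from collections import Counter

    def sent_bleu(hyp, ref, max_n=4):
        """Add-one-smoothed sentence-level BLEU of hyp against one reference."""
        if not hyp or not ref:
            return 0.0
        log_p = 0.0
        for n in range(1, max_n + 1):
            h = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
            r = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
            match = sum(min(c, r[g]) for g, c in h.items())
            log_p += math.log((match + 1.0) / (max(sum(h.values()), 1) + 1.0)) / max_n
        bp = min(1.0, math.exp(1.0 - len(ref) / len(hyp)))  # brevity penalty
        return bp * math.exp(log_p)

    def mbr_consensus(candidates, weights=None):
        """Return the candidate with minimum expected risk under the
        distribution defined by the system weights (uniform by default)."""
        weights = weights or [1.0 / len(candidates)] * len(candidates)
        def risk(cand):
            return sum(w * (1.0 - sent_bleu(cand, other))
                       for w, other in zip(weights, candidates))
        return min(candidates, key=risk)

    systems_out = [s.split() for s in [
        "the cat sat on the mat",
        "a cat sat on the mat",
        "the cat sits on a mat"]]
    print(" ".join(mbr_consensus(systems_out)))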
Then, we focus our attention on how to improve the utility of automatic
translations for the end-user of the system. Since automatic translations are
not perfect, a desirable feature of machine translation systems is the ability
to predict at run-time the quality of the generated translations. Quality estimation
is usually addressed as a regression problem where a quality score
is predicted from a set of features that represents the translation. However,
although the concept of translation quality is intuitively clear, there is no
consensus on which features actually account for it. As a consequence,
quality estimation systems for machine translation have to utilize
a large number of weak features to predict translation quality. This involves
several learning problems related to feature collinearity and ambiguity, and
to the 'curse' of dimensionality. We address these challenges by adopting
a two-step training methodology. First, a dimensionality reduction method
computes, from the original features, the reduced set of features that better
explains translation quality. Then, a prediction model is built from this
reduced set to finally predict the quality score. We study various reduction
methods previously used in the literature and propose two new ones based on
statistical multivariate analysis techniques. More specifically, the proposed dimensionality
reduction methods are based on partial least squares regression.
The results of a thorough experimentation show that the quality estimation
systems estimated following the proposed two-step methodology obtain better
prediction accuracy than systems estimated using all the original features.
Moreover, one of the proposed dimensionality reduction methods obtained the
best prediction accuracy with only a fraction of the original features. This
feature reduction ratio is important because it implies a dramatic reduction
of the operating times of the quality estimation system.
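A minimal sketch of the two-step methodology just described, assuming scikit-learn and synthetic data in place of real quality estimation features (the shapes, seed and scores are illustrative): partial least squares first projects the many weak, collinear features onto a few latent components that covary with the quality score, and the prediction model is then fit on that reduced representation only.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 80))                            # 80 weak features
    X[:, 40:] = X[:, :40] + 0.1 * rng.normal(size=(500, 40))  # induce collinearity
    y = X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=500)     # synthetic quality score

    # Step 1: dimensionality reduction with partial least squares.
    pls = PLSRegression(n_components=5).fit(X, y)
    X_reduced = pls.transform(X)          # latent components explaining quality

    # Step 2: fit the prediction model on the reduced feature set.
    model = Ridge().fit(X_reduced, y)
    print("R^2 with 5 latent components:", model.score(X_reduced, y))

Operating on five components instead of eighty is what yields the reduction in the quality estimation system's operating time mentioned above.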
An alternative use of current machine translation systems is to embed them
within an interactive editing environment where the system and a human expert
collaborate to generate error-free translations. This interactive machine
translation approach have shown to reduce supervision effort of the user in
comparison to the conventional decoupled post-edition approach. However,
interactive machine translation considers the translation system as a passive
agent in the interaction process. In other words, the system only suggests translations
to the user, who then makes the necessary supervision decisions. As
a result, the user is bound to exhaustively supervise every suggested translation.
This passive approach ensures error-free translations but it also demands
a large amount of supervision effort from the user.
Finally, we study different techniques to improve the productivity of current
interactive machine translation systems. Specifically, we focus on the development
of alternative approaches where the system becomes an active agent
in the interaction process. We propose two different active approaches. On the
one hand, we describe an active interaction approach where the system informs
the user about the reliability of the suggested translations. The hope is that
this information may help the user to locate translation errors, thus improving
the overall translation productivity. We propose different scores to measure translation reliability at the word and sentence levels, and study the influence
of such information on the productivity of an interactive machine translation
system. Empirical results show that the proposed active interaction protocol
is able to achieve a large reduction in supervision effort while still generating
translations of very high quality. On the other hand, we study an active learning
framework for interactive machine translation. In this case, the system is
not only able to inform the user of which suggested translations should be
supervised, but it is also able to learn from the user-supervised translations to
improve its future suggestions. We develop a value-of-information criterion to
select which automatic translations undergo user supervision. However, given
its high computational complexity, in practice we study different selection
strategies that approximate this optimal criterion. Results of a large-scale experimentation
show that the proposed active learning framework is able to
obtain better compromises between the quality of the generated translations
and the human effort required to obtain them. Moreover, in comparison to
a conventional interactive machine translation system, our proposal obtained
translations of twice the quality with the same supervision effort. / González Rubio, J. (2014). On the effective deployment of current machine translation technology [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37888
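A minimal sketch of the selection step of such an active learning framework, using the expected number of wrong words (sum of one minus word confidence) as a cheap stand-in for the value-of-information criterion, which the thesis itself only approximates in practice; the scoring rule, data and budget are illustrative assumptions.

    def select_for_supervision(sentences, word_confidences, budget):
        """Rank sentences by expected number of wrong words and route the
        top `budget` ones to the user; the rest stay fully automatic."""
        def expected_errors(confs):
            return sum(1.0 - c for c in confs)
        ranked = sorted(range(len(sentences)),
                        key=lambda i: expected_errors(word_confidences[i]),
                        reverse=True)
        picked = set(ranked[:budget])
        supervised = [s for i, s in enumerate(sentences) if i in picked]
        automatic = [s for i, s in enumerate(sentences) if i not in picked]
        return supervised, automatic

    sents = ["la casa azul", "el perro corre", "una frase dudosa aqui"]
    confs = [[0.9, 0.95, 0.9], [0.8, 0.7, 0.9], [0.4, 0.5, 0.3, 0.6]]
    to_user, auto = select_for_supervision(sents, confs, budget=1)
    print(to_user)  # ['una frase dudosa aqui']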
|
7 |
Confidence Measures for Automatic and Interactive Speech Recognition. Sánchez Cortina, Isaías, 07 March 2016
[EN] This thesis contributes to the field of Automatic Speech Recognition (ASR), and particularly to Interactive Speech Transcription (IST) and Confidence Measures (CM) for ASR.
The main goals of this thesis can be summarised as follows:
1. To design IST methods and tools to tackle the problem of improving automatically generated transcripts.
2. To assess the designed IST methods and tools on real-life transcription tasks in large educational repositories of video lectures.
3. To improve the reliability of the IST by improving the underlying CM.
Abstracts:
Automatic Speech Recognition (ASR) is a crucial task in a broad range of important applications which could not be accomplished by means of manual transcription. ASR can provide cost-effective transcripts in scenarios of increasing social impact such as Massive Open Online Courses (MOOC), for which the availability of accurate enough transcripts is crucial even if they are not flawless. The transcripts enable searchability, summarisation, recommendation, translation; they make the contents accessible to non-native speakers and users with impairments, etc. Their usefulness is such that students improve their academic performance when learning from subtitled video lectures, even when the transcript is not perfect.
Unfortunately, the current ASR technology is still far from the necessary accuracy.
The imperfect transcripts resulting from ASR can be manually supervised and corrected, but the effort can be even higher than that of manual transcription.
For the purpose of alleviating this issue, a novel Interactive Transcription of Speech (IST) system is presented in this thesis. This IST succeeded in reducing the effort when a small quantity of errors can be allowed, and also in improving the underlying ASR models in a cost-effective way.
In order to adapt the proposed framework to real-life MOOCs, other intelligent interaction methods involving limited user effort were investigated. In addition, a new method was introduced which benefits from the user interactions to automatically improve the unsupervised parts (Constrained Search for ASR).
The conducted research was deployed in a web-based IST platform with which it was possible to produce a massive number of semi-supervised transcriptions of lectures from two well-known repositories, videoLectures.net and poliMedia.
Finally, the performance of the IST and ASR systems can be further increased by improving the computation of the Confidence Measure (CM) of transcribed words. To this end, two contributions were developed:
a new Logistic Regression (LR) model;
and the speaker adaptation of the CM for cases in which it is possible, such as MOOCs. / [ES] Este trabajo contribuye al campo del reconocimiento automático del habla (RAH), y en especial, al de la transcripción interactiva del habla (TIH) y al de las medidas de confianza (MC) para RAH. Los objetivos principales son los siguientes:
1. Diseño de métodos y herramientas TIH para mejorar las transcripciones automáticas.
2. Evaluar los métodos y herramientas TIH empleando tareas de transcripción realistas extraídas de grandes repositorios de vídeos educacionales.
3. Mejorar la fiabilidad del TIH mediante la mejora de las MC.
Resumen:
El reconocimiento automático del habla (RAH) es una tarea crucial en una amplia gama de aplicaciones importantes que no podrían realizarse mediante transcripción manual. El RAH puede proporcionar transcripciones rentables en escenarios de creciente impacto social como el de los cursos abiertos en línea masivos (MOOC), para los que la disponibilidad de transcripciones es crucial, incluso cuando no son completamente perfectas. Las transcripciones permiten la automatización de procesos como buscar, resumir, recomendar, traducir; hacen que los contenidos sean más accesibles para hablantes no nativos y usuarios con discapacidades, etc. Incluso se ha comprobado que mejoran el rendimiento de los estudiantes que aprenden de vídeos con subtítulos, aun cuando estos no son completamente perfectos.
Desafortunadamente, la tecnología RAH actual aún está lejos de la precisión necesaria.
Las transcripciones imperfectas resultantes del RAH pueden ser supervisadas y corregidas manualmente, pero el esfuerzo puede ser incluso superior al de la transcripción manual. Con el fin de aliviar este problema, esta tesis presenta un novedoso sistema de transcripción interactiva del habla (TIH).
Este método TIH consigue reducir el esfuerzo de semi-supervisión siempre que sea aceptable una pequeña cantidad de errores; además, mejora a la par los modelos RAH subyacentes.
Con objeto de adaptar el marco propuesto a los MOOCs, también se investigaron otros métodos de interacción inteligentes que involucran un esfuerzo limitado por parte del usuario. Además, se introdujo un nuevo método que aprovecha las interacciones para mejorar aún más las partes no supervisadas (RAH con búsqueda restringida).
La investigación en TIH llevada a cabo se desplegó en una plataforma web con la que fue posible producir un número masivo de transcripciones de vídeos de dos conocidos repositorios, videoLectures.net y poliMedia.
Por último, el rendimiento de la TIH y los sistemas de RAH se puede aumentar directamente mediante la mejora de la estimación de la medida de confianza (MC) de las palabras transcritas. Por este motivo se desarrollaron dos contribuciones: un nuevo modelo discriminativo logístico (LR);
y la adaptación al locutor de la MC para los casos en que es posible, como por ejemplo en MOOCs. / [CA] Aquest treball contribueix al camp del reconeixement automàtic de la parla (RAP), i en especial, al de la transcripció interactiva de la parla (TIP) i al de les mesures de confiança (MC) per a RAP.
Els objectius principals són els següents:
1. Dissenyar mètodes i eines per a TIP per tal de millorar les transcripcions automàtiques.
2. Avaluar els mètodes i eines TIP per a tasques de transcripció realistes extretes de grans repositoris de vídeos educacionals.
3. Millorar la fiabilitat del TIP, mitjançant la millora de les MC.
Resum:
El reconeixement automàtic de la parla (RAP) és una tasca crucial per a una àmplia gamma d'aplicacions importants que no es poden dur a terme per mitjà de la transcripció manual. El RAP pot proporcionar transcripcions en escenaris de creixent impacte social com els cursos online oberts massius (MOOC). Les transcripcions permeten automatitzar tasques com ara cercar, resumir, recomanar, traduir; a més a més, fan accessibles els continguts als parlants no nadius i als usuaris amb discapacitat, etc. Fins i tot, pot millorar el rendiment acadèmic d'estudiants que aprenen de xerrades amb subtítols, encara que aquests subtítols no siguen perfectes. Malauradament, la tecnologia RAP actual encara està lluny de la precisió necessària.
Les transcripcions imperfectes resultants del RAP poden ser supervisades i corregides manualment, però aquest esforç pot acabar sent superior al de la transcripció manual. Per tal de resoldre aquest problema, en aquest treball es presenta un sistema nou per a la transcripció interactiva de la parla (TIP). Aquest sistema TIP va reeixir a reduir l'esforç quan es pot permetre una certa quantitat d'errors, així com també a millorar els models RAP subjacents.
Per tal d'adequar el marc proposat per a MOOCs, també es van investigar altres mètodes d'interacció intel·ligents amb esforç d'usuari limitat. A més a més, es va introduir un nou mètode que aprofita les interaccions per tal de millorar encara més les parts no supervisades (RAP amb cerca restringida).
La investigació en TIP duta a terme es va desplegar en una plataforma web amb la qual va ser possible produir un nombre massiu de transcripcions semi-supervisades de xerrades de repositoris ben coneguts, videoLectures.net i poliMedia.
Finalment, el rendiment de la TIP i els sistemes de RAP es pot augmentar directament mitjançant la millora de l'estimació de la mesura de confiança (MC) de les paraules transcrites. Per tant, es van desenvolupar dues contribucions: un nou model discriminatiu logístic (LR); i l'adaptació al locutor de la MC per als casos en què és possible, per exemple amb MOOCs. / Sánchez Cortina, I. (2016). Confidence Measures for Automatic and Interactive Speech Recognition [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/61473
|