  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Metodika měření kvality otisků prstu / Methodology of Fingerprint Image Quality Measurement

Oravec, Tomáš January 2018 (has links)
This thesis deals with the methodology of fingerprint image quality measurement. The first task was to analyze the existing software used for fingerprint quality measurement, NFIQ (NIST Fingerprint Image Quality), evaluate its performance and identify its weaknesses. To eliminate the discovered weaknesses, a different fingerprint quality estimation methodology was introduced and its results were compared with other methodologies.
2

Automatic Post-editing and Quality Estimation in Machine Translation of Product Descriptions

Kukk, Kätriin January 2022 (has links)
As a result of drastically improved machine translation quality in recent years, machine translation followed by manual post-editing is a growing trend in the language industry that is slowly but surely replacing manual translation from scratch. This thesis studies the applicability of machine translation to product descriptions of clothing items. The focus lies on determining whether automatic post-editing is a viable approach for improving baseline translations when new training data becomes available, and on finding out whether an existing quality estimation system can reliably assign quality scores to machine-translated texts. Machine translation is shown to be a promising approach for the target domain: according to the human evaluation carried out, most of the systems tested generate translations that on average are of almost publishable quality, meaning that only light post-editing is needed before the translations can be published. Automatic post-editing is able to improve the worst baseline translations but struggles to improve overall translation quality because of its tendency to overcorrect good translations. Nevertheless, one of the trained post-editing systems is still rated higher than the baseline by human evaluators. A new finding is that training a post-editing model on more data with worse translations leads to better performance than training on less but higher-quality data. None of the quality estimation systems examined shows a strong correlation with the human evaluation results, which is why it is suggested not to provide the confidence scores of the baseline model to the human evaluators responsible for correcting and approving translations. The main contributions of this work are showing that the target domain of product descriptions is suitable for integrating machine translation into the translation workflow, proposing a more automated version of that workflow, and the finding that more data with poorer translations is preferable to less data with higher-quality translations when training an automatic post-editing system.
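As a rough illustration of the automatic post-editing setup described in this abstract, the sketch below fine-tunes a generic sequence-to-sequence model on pairs of machine translations and their human post-edits. The base model, the toy data and the hyperparameters are assumptions for illustration only (and real APE systems usually also condition on the source sentence); they are not details taken from the thesis.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical (machine translation, human post-edit) pairs; in the thesis these
# would come from translated product descriptions and their corrections.
pairs = [
    ("Soft cotton t-shirt with round neck", "Soft cotton T-shirt with a round neckline"),
    ("Jeans in slim fit with stretch", "Slim-fit jeans with stretch"),
]

model_name = "t5-small"  # assumed base model, not the one used in the thesis
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for epoch in range(3):
    for mt, pe in pairs:
        inputs = tokenizer(mt, return_tensors="pt")
        labels = tokenizer(pe, return_tensors="pt").input_ids
        loss = model(**inputs, labels=labels).loss  # cross-entropy against the post-edit
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# At inference time the trained model rewrites a new machine translation.
new_mt = "Dress in viscose with long sleeve"
out = model.generate(**tokenizer(new_mt, return_tensors="pt"), max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```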
3

Automatic Recognition and Classification of Translation Errors in Human Translation / Automatisk igenkänning och klassificering av fel i mänsklig översättning

Dürlich, Luise January 2020 (has links)
Grading assignments is a time-consuming part of teaching translation. Automatic tools that facilitate this task would allow teachers of professional translation to focus more on other aspects of their job. Within Natural Language Processing, error recognition has not been studied for human translation in particular. This thesis is a first attempt at both error recognition and classification with both mono- and bilingual models. BERT – a pre-trained monolingual language model – and NuQE – a model adapted from the field of Quality Estimation for Machine Translation – are trained on a relatively small hand-annotated corpus of student translations. Due to the nature of the task, errors are quite rare in relation to correctly translated tokens in the corpus. To account for this, we train the models with both under- and oversampled data. While both models detect errors with moderate success, the NuQE model adapts very poorly to the classification setting. Overall, scores are quite low, which can be attributed to class imbalance and the small amount of training data, as well as some general concerns about the corpus annotations. However, we show that powerful monolingual language models can detect formal, lexical and translational errors with some success and that, depending on the model, simple under- and oversampling approaches can already help a great deal to avoid pure majority class prediction.
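The under- and oversampling mentioned above can be illustrated with a small, self-contained sketch. The toy data and helper functions below are assumptions for illustration; the thesis applies the same idea to token-level error labels rather than these placeholder examples.

```python
import random

# Toy token-level dataset: (token_feature, label) with labels "OK" or "ERROR".
# In the thesis the representations would come from BERT/NuQE; these are placeholders.
data = [("word%d" % i, "OK") for i in range(95)] + [("word%d" % i, "ERROR") for i in range(95, 100)]

def oversample(examples, minority="ERROR", seed=0):
    """Duplicate minority-class examples until both classes are the same size."""
    rng = random.Random(seed)
    majority = [x for x in examples if x[1] != minority]
    minority_ex = [x for x in examples if x[1] == minority]
    extra = [rng.choice(minority_ex) for _ in range(len(majority) - len(minority_ex))]
    balanced = examples + extra
    rng.shuffle(balanced)
    return balanced

def undersample(examples, minority="ERROR", seed=0):
    """Drop majority-class examples until both classes are the same size."""
    rng = random.Random(seed)
    majority = [x for x in examples if x[1] != minority]
    minority_ex = [x for x in examples if x[1] == minority]
    kept = rng.sample(majority, len(minority_ex))
    balanced = kept + minority_ex
    rng.shuffle(balanced)
    return balanced

print(len(oversample(data)), len(undersample(data)))  # 190 and 10 balanced examples
```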
4

Blind Estimation of Perceptual Quality for Modern Speech Communications

Falk, Tiago 05 January 2009 (has links)
Modern speech communication technologies expose users to perceptual quality degradations that were not experienced earlier with conventional telephone systems. Since perceived speech quality is a major contributor to the end user's perception of quality of service, speech quality estimation has become an important research field. In this dissertation, perceptual quality estimators are proposed for several emerging speech communication applications, in particular for i) wireless communications with noise suppression capabilities, ii) wireless-VoIP communications, iii) far-field hands-free speech communications, and iv) text-to-speech systems. First, a general-purpose speech quality estimator is proposed based on statistical models of normative speech behaviour and on innovative techniques to detect multiple signal distortions. The estimators do not depend on a clean reference signal and are hence termed "blind." Quality meters are then distributed along the network chain to allow for both quality degradations and quality enhancements to be handled. In order to improve estimation performance for wireless communications, statistical models of noise-suppressed speech are also incorporated. Next, a hybrid signal-and-link-parametric quality estimation paradigm is proposed for emerging wireless-VoIP communications. The algorithm uses VoIP connection parameters to estimate a base quality representative of the packet switching network. Signal-based distortions are then detected and quantified in order to adjust the base quality accordingly. The proposed hybrid methodology is shown to overcome the limitations of existing pure signal-based and pure link-parametric algorithms. Temporal dynamics information is then investigated for quality diagnosis for hands-free speech communications. A spectro-temporal signal representation, where speech and reverberation tail components are shown to be separable, is used for blind characterization of room acoustics. In particular, estimators of reverberation time, direct-to-reverberation energy ratio, and reverberant speech quality are developed. Lastly, perceptual quality estimation for text-to-speech systems is addressed. Text- and speaker-independent hidden Markov models, trained on naturally produced speech, are used to capture normative spectral-temporal information. Deviations from the models, computed by means of a log-likelihood measure, are shown to be reliable indicators of multiple quality attributes including naturalness, fluency, and intelligibility. / Thesis (Ph.D., Electrical & Computer Engineering) -- Queen's University, 2008-12-22 14:54:49.28
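To make the hybrid signal-and-link-parametric idea concrete, here is a heavily simplified sketch: a base quality score is derived from VoIP connection parameters and then lowered according to signal-based distortions detected in the decoded speech. All coefficients, penalties and distortion names are illustrative assumptions, not values from the dissertation.

```python
def base_quality_from_link(packet_loss_pct, one_way_delay_ms):
    """Very rough base MOS estimate from VoIP connection parameters.
    The coefficients are illustrative only, not those derived in the dissertation."""
    mos = 4.4
    mos -= 0.10 * packet_loss_pct                        # assumed ~0.1 MOS per percent of loss
    mos -= 0.004 * max(0.0, one_way_delay_ms - 150.0)    # assumed penalty beyond 150 ms delay
    return max(1.0, min(4.5, mos))

def adjust_for_signal_distortions(base_mos, detected):
    """Lower the base quality according to signal-based distortions detected in the
    decoded speech (e.g. by blind estimators such as those described above)."""
    penalties = {"temporal_clipping": 0.6, "background_noise": 0.4, "level_variation": 0.2}
    for distortion in detected:
        base_mos -= penalties.get(distortion, 0.0)
    return max(1.0, base_mos)

base = base_quality_from_link(packet_loss_pct=2.0, one_way_delay_ms=180.0)
final = adjust_for_signal_distortions(base, ["background_noise"])
print(round(base, 2), round(final, 2))  # link-only quality vs. adjusted hybrid quality
```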
5

Système embarqué autonome en énergie pour objets mobiles communicants / Energy self-sufficient embedded system for mobile communicating objects

Chaabane, Chiraz 30 June 2014 (has links)
Le nombre et la complexité croissante des applications qui sont intégrées dans des objets mobiles communicants sans fil (téléphone mobile, PDA, etc.) implique une augmentation de la consommation d'énergie. Afin de limiter l'impact de la pollution due aux déchets des batteries et des émissions de CO2, il est important de procéder à une optimisation de la consommation d'énergie de ces appareils communicants. Cette thèse porte sur l'efficacité énergétique dans les réseaux de capteurs. Dans cette étude, nous proposons de nouvelles approches pour gérer efficacement les objets communicants mobiles. Tout d’abord, nous proposons une architecture globale de réseau de capteurs et une nouvelle approche de gestion de la mobilité économe en énergie pour les appareils terminaux de type IEEE 802.15.4/ZigBee. Cette approche est basée sur l'indicateur de la qualité de lien (LQI) et met en œuvre un algorithme spéculatif pour déterminer le prochain coordinateur. Nous avons ainsi proposé et évalué deux algorithmes spéculatifs différents. Ensuite, nous étudions et évaluons l'efficacité énergétique lors de l'utilisation d'un algorithme d'adaptation de débit prenant en compte les conditions du canal de communication. Nous proposons d'abord une approche mixte combinant un nouvel algorithme d'adaptation de débit et notre approche de gestion de la mobilité. Ensuite, nous proposons et évaluons un algorithme d'adaptation de débit hybride qui repose sur une estimation plus précise du canal de liaison. Les différentes simulations effectuées tout au long de ce travail montrent l’efficacité énergétique des approches proposées ainsi que l’amélioration de la connectivité des nœuds. / The increasing number and complexity of applications that are embedded into wireless mobile communicating devices (mobile phone, PDA, etc.) implies an increase in energy consumption. In order to limit the impact of pollution due to battery waste and CO2 emissions, it is important to optimize the energy consumption of these communicating end devices. This thesis focuses on energy efficiency in sensor networks. It proposes new approaches to handle mobile communicating objects. First, we propose a global sensor network architecture and a new energy-efficient mobility management approach for IEEE 802.15.4/ZigBee end devices. This new approach is based on the link quality indicator (LQI) and uses a speculative algorithm to determine the next coordinator. We propose and evaluate two different speculative algorithms. Then, we study and evaluate the energy efficiency of using a rate adaptation algorithm that takes the communication channel conditions into account. We first propose a mobility-aware rate adaptation algorithm and evaluate its efficiency in our network architecture. Then, we propose and evaluate a hybrid rate adaptation algorithm that relies on more accurate link channel estimation. The simulations conducted throughout this study show the energy efficiency of the proposed approaches and the improvement in the nodes' connectivity.
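A minimal sketch of the LQI-driven hand-over decision described above might look as follows; the "highest average of recent LQI readings" rule and the data are assumptions for illustration, whereas the thesis proposes and evaluates two more elaborate speculative algorithms.

```python
import random

def pick_next_coordinator(lqi_history):
    """Speculatively pick the coordinator a mobile node should associate with next.
    lqi_history maps coordinator id -> list of recent LQI readings (0-255 in 802.15.4).
    This simple 'highest average recent LQI' rule is only an illustration."""
    def score(readings):
        recent = readings[-3:]                 # weight the most recent readings
        return sum(recent) / len(recent)
    return max(lqi_history, key=lambda coord: score(lqi_history[coord]))

random.seed(1)
history = {
    "coord_A": [random.randint(60, 120) for _ in range(5)],
    "coord_B": [random.randint(100, 200) for _ in range(5)],
    "coord_C": [random.randint(20, 80) for _ in range(5)],
}
print(pick_next_coordinator(history))  # typically coord_B, the link with the best recent LQI
```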
6

Documents Usability Estimation

Yaghmaei, Ayoub January 2018 (has links)
Improving the quality of technical documents influences the popularity of the related product: customers do not like to waste time in the help desk's queue, and they are more satisfied when they can solve their problems independently, in an acceptable time, through the technical manuals. Moreover, the cost of support issues decreases for the product provider, and the help desk team has more time to handle the remaining unresolved issues in a better-qualified way. To obtain these benefits, this thesis estimates the usability of documents before they are published. With such a prediction, technical documentation writers can take a goal-driven approach to improving the quality of their products' or services' manuals. Furthermore, as different structural metrics are examined in this research, the results of the thesis also create an opportunity for multi-disciplinary improvement in Information Quality (IQ) process management.
7

Word Confidence Estimation and Its Applications in Statistical Machine Translation / Les mesures de confiance au niveau des mots et leurs applications pour la traduction automatique statistique

Luong, Ngoc Quang 12 November 2014 (has links)
Les systèmes de traduction automatique (TA), qui génèrent automatiquement la phrase de la langue cible pour chaque entrée de la langue source, ont obtenu plusieurs réalisations convaincantes pendant les dernières décennies et deviennent les aides linguistiques efficaces pour la communauté entière dans un monde globalisé. Néanmoins, en raison de différents facteurs, sa qualité en général est encore loin de la perfection, constituant le désir des utilisateurs de savoir le niveau de confiance qu'ils peuvent mettre sur une traduction spécifique. La construction d'une méthode qui est capable d'indiquer des bonnes parties ainsi que d'identifier des erreurs de la traduction est absolument une bénéfice pour non seulement les utilisateurs, mais aussi les traducteurs, post-éditeurs, et les systèmes de TA eux-mêmes. Nous appelons cette méthode les mesures de confiance (MC). Cette thèse se porte principalement sur les méthodes des MC au niveau des mots (MCM). Le système de MCM assigne à chaque mot de la phrase cible un étiquette de qualité. Aujourd'hui, les MCM jouent un rôle croissant dans nombreux aspects de TA. Tout d'abord, elles aident les post-éditeurs d'identifier rapidement les erreurs dans la traduction et donc d'améliorer leur productivité de travail. De plus, elles informent les lecteurs des portions qui ne sont pas fiables pour éviter leur malentendu sur le contenu de la phrase. Troisièmement, elles sélectionnent la meilleure traduction parmi les sorties de plusieurs systèmes de TA. Finalement, et ce qui n'est pas le moins important, les scores MCM peuvent aider à perfectionner la qualité de TA via certains scénarios: ré-ordonnance des listes N-best, ré-décodage du graphique de la recherche, etc. Dans cette thèse, nous visons à renforcer et optimiser notre système de MCM, puis à l'exploiter pour améliorer TA ainsi que les mesures de confiance au niveau des phrases (MCP). Comparer avec les approches précédentes, nos nouvelles contributions étalent sur les points principaux comme suivants. Tout d'abord, nous intégrons différents types des paramètres: ceux qui sont extraits du système TA, avec des caractéristiques lexicales, syntaxiques et sémantiques pour construire le système MCM de base. L'application de différents méthodes d'apprentissage nous permet d'identifier la meilleure (méthode: "Champs conditionnels aléatoires") qui convient le plus nos donnés. En suite, l'efficacité de touts les paramètres est plus profond examinée en utilisant un algorithme heuristique de sélection des paramètres. Troisièmement, nous exploitons l'algorithme Boosting comme notre méthode d'apprentissage afin de renforcer la contribution des sous-ensembles des paramètres dominants du système MCM, et en conséquence d'améliorer la capacité de prédiction du système MCM. En outre, nous enquérons les contributions des MCM vers l'amélioration de la qualité de TA via différents scénarios. Dans le re-ordonnance des liste N-best, nous synthétisons les scores à partir des sorties du système MCM et puis les intégrons avec les autres scores du décodeur afin de recalculer la valeur de la fonction objective, qui nous permet d'obtenir un mieux candidat. D'ailleurs, dans le ré-décodage du graphique de la recherche, nous appliquons des scores de MCM directement aux noeuds contenant chaque mot pour mettre à jour leurs coûts. Une fois la mise à jour se termine, la recherche pour meilleur chemin sur le nouveau graphique nous donne la nouvelle hypothèse de TA. 
Finalement, les scores de MCM sont aussi utilisés pour renforcer les performances des systèmes de MCP. Au total, notre travail apporte une image perspicace et multidimensionnelle sur des MCM et leurs impacts positifs sur différents secteurs de la TA. Les résultats très prometteurs ouvrent une grande avenue où MCM peuvent exprimer leur rôle, comme: MCM pour la reconnaissance automatique de la parole (RAP), pour la sélection parmi plusieurs systèmes de TA, et pour les systèmes de TA auto-apprentissage. / Machine Translation (MT) systems, which automatically generate a target-language translation for each source sentence, have achieved impressive gains during recent decades and are now becoming effective language assistants for the entire community in a globalized world. Nonetheless, due to various factors, MT quality is in general still not perfect, and end users therefore expect to know how much they can trust a specific translation. Building a method that is capable of pointing out the correct parts, detecting the translation errors and judging the overall quality of each MT hypothesis is definitely beneficial not only for the end users, but also for translators, post-editors, and the MT systems themselves. Such a method is widely known under the name Confidence Estimation (CE) or Quality Estimation (QE). The motivation for building such automatic estimation methods originates from the drawbacks of assessing MT quality manually: the task is time-consuming, costly in effort, and sometimes impossible when the readers have little or no knowledge of the source language. This thesis mostly focuses on CE methods at the word level (WCE). The WCE classifier tags each word in the MT output with a quality label. The WCE working mechanism is straightforward: a classifier trained beforehand on a number of features using machine learning methods computes the confidence score of each label for each MT output word, then tags the word with the highest-scoring label. Nowadays, WCE is of increasing importance in many aspects of MT. Firstly, it assists post-editors in quickly identifying translation errors, hence improving their productivity. Secondly, it informs readers of portions of the sentence that are not reliable, to avoid misunderstanding of the sentence's content. Thirdly, it selects the best translation among options from multiple MT systems. Last but not least, WCE scores can help to improve MT quality via several scenarios: N-best list re-ranking, search graph re-decoding, etc. In this thesis, we aim at building and optimizing our baseline WCE system, then exploiting it to improve MT and Sentence Confidence Estimation (SCE). Compared to previous approaches, our novel contributions cover the following main points. Firstly, we integrate various types of prediction indicators: system-based features extracted from the MT system, together with lexical, syntactic and semantic features, to build the baseline WCE systems. We also apply multiple machine learning (ML) models to the entire feature set and then compare their performances to select the optimal one to optimize. Secondly, the usefulness of all features is investigated more deeply using a greedy feature selection algorithm. Thirdly, we propose a solution that exploits the Boosting algorithm as a learning method in order to strengthen the contribution of dominant feature subsets to the system, and thus improve the system's prediction capability. 
Lastly, we explore the contribution of WCE to improving MT quality in several scenarios. In N-best list re-ranking, we synthesize scores from the WCE outputs and integrate them with the decoder scores to recompute the objective function value and re-order the N-best list so as to choose a better candidate. In search graph re-decoding, the proposal is to apply the WCE score directly to the nodes containing each word to update their cost according to the word quality. Furthermore, WCE scores are used to build useful features that enhance the performance of the Sentence Confidence Estimation system. In total, our work gives an insightful and multidimensional picture of word-level quality prediction and its positive impact on various sectors of Machine Translation. The promising results open up a broad avenue where WCE can play a role, such as WCE for Automatic Speech Recognition (ASR) systems, WCE for selection among multiple MT systems, and WCE for re-trainable and self-learning MT systems.
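As an illustration of the N-best list re-ranking scenario mentioned above, the sketch below combines a decoder score with a sentence-level score synthesized from word-level confidence estimates; the linear combination, its weight and the toy candidates are assumptions, not the exact formulation used in the thesis.

```python
import math

def rerank_nbest(nbest, weight_wce=0.3):
    """Re-rank an N-best list by combining the decoder score with a sentence-level
    score synthesized from word-level confidence estimates (WCE).
    Each candidate is (decoder_log_score, [per-word probability of being a good word])."""
    def combined(candidate):
        decoder_score, word_confidences = candidate
        # Synthesize a sentence score as the mean log-confidence of its words.
        wce_score = sum(math.log(max(p, 1e-6)) for p in word_confidences) / len(word_confidences)
        return (1.0 - weight_wce) * decoder_score + weight_wce * wce_score
    return sorted(nbest, key=combined, reverse=True)

nbest = [
    (-4.1, [0.9, 0.2, 0.7, 0.3]),   # best decoder score, but several doubtful words
    (-4.2, [0.9, 0.9, 0.8, 0.9]),   # slightly worse decoder score, confident words
]
# The second candidate overtakes the decoder's first choice after re-ranking.
print(rerank_nbest(nbest)[0])
```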
8

Automatické rozpoznání kvality signálů EKG / Automatic ECG signal quality assessment

Malý, Tomáš January 2020 (has links)
This thesis deals with the automatic quality estimation of ECG signals. The main aim is to implement an original algorithm for classifying ECG signals into three quality classes. The theoretical part of the thesis mainly describes the recording of the electrical activity of the heart, the anatomy and physiology of the heart, electrocardiography, different types of ECG signal interference, and the two chosen methods for quality estimation. The implementation of the chosen methods is presented in the practical part. The result of this thesis is two implemented algorithms based on the methods described in the theoretical part. The first is based on the detection of R-waves, validation of physiological assumptions and the subsequent calculation of the correlation coefficient between an adaptive template and the interfered signal. The second is based on the calculation of a continuous SNR value over time, which is then thresholded. The robustness of the methods was verified on three specified real ECG signals, all available at UBMI together with annotations of specific signal parts. These 24-hour signals were recorded by a Holter monitor, which is described in the theoretical part of the thesis. The results achieved by the individual methods, including their comparison with the annotations and a statistical evaluation, are presented in the conclusion of this thesis.
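A minimal sketch of the second approach (a continuous SNR value computed over time and then thresholded into three quality classes) is given below; the SNR proxy, window length and thresholds are illustrative assumptions rather than the values used in the thesis.

```python
import numpy as np

def classify_ecg_quality(signal, fs, window_s=5.0, snr_thresholds=(15.0, 5.0)):
    """Classify windows of an ECG signal into three quality classes from a crude
    running SNR estimate. The 'smoothed signal vs. high-frequency residual' SNR
    proxy and the thresholds (in dB) are illustrative assumptions."""
    win = int(window_s * fs)
    labels = []
    for start in range(0, len(signal) - win + 1, win):
        seg = signal[start:start + win]
        smooth = np.convolve(seg, np.ones(20) / 20, mode="same")  # rough "clean" estimate
        noise = seg - smooth                                      # residual treated as noise
        snr_db = 10 * np.log10(np.var(smooth) / (np.var(noise) + 1e-12))
        if snr_db >= snr_thresholds[0]:
            labels.append("good")
        elif snr_db >= snr_thresholds[1]:
            labels.append("usable")
        else:
            labels.append("unusable")
    return labels

fs = 250
t = np.arange(0, 20, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t)                           # toy quasi-periodic signal
ecg_like[2500:] += np.random.normal(0, 1.0, len(t) - 2500)       # heavily corrupted second half
print(classify_ecg_quality(ecg_like, fs))                        # e.g. good, good, unusable, unusable
```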
9

On the Impact of Channel and Channel Quality Estimation on Adaptive Modulation

Jain, Payal 20 December 2002 (has links)
The rapid growth in wireless communications has given rise to an increasing demand for channel capacity using limited bandwidth. Wireless channels vary over time due to fading and changing interference conditions. Typical wireless systems are designed by choosing a modulation scheme to meet worst-case conditions and thus rely on power control to adapt to changing channel conditions. Adaptive modulation, however, exploits these channel variations to improve the spectral efficiency of wireless communications by intelligently changing the modulation scheme based on channel conditions. Necessarily, among the modulation schemes used are spectrally efficient schemes such as quadrature amplitude modulation (QAM) techniques. QAM yields high spectral efficiency due to its use of amplitude as well as phase modulation and is therefore an effective technique for achieving high channel capacity. The main drawbacks of QAM are its reduced energy efficiency (as compared to standard QPSK) and its sensitivity to channel amplitude variations. Adaptive modulation attempts to address the first drawback by using more energy-efficient schemes in low-SNR conditions and reserving the use of QAM for high-SNR conditions. The second drawback leads to a requirement for high-quality channel estimation. Many researchers have studied pilot symbol assisted modulation for compensating the effects of fading at the receiver. A main contribution of this thesis is the investigation of different channel estimation techniques (along with the effect of pilot symbol spacing and Doppler spread) on the performance of adaptive modulation. Another important parameter affecting adaptive modulation is the signal-to-noise ratio. In order to adapt modulation efficiently, it is essential to have accurate knowledge of the channel signal-to-noise ratio. The performance of adaptive modulation depends directly on how well the channel SNR is estimated: the more accurate the estimate of the channel SNR, the better the choice of modulation scheme and the better the ability to exploit the variations in the wireless channel. The second main contribution of this thesis is the investigation of the impact of SNR estimation techniques on the performance and spectral efficiency of adaptive modulation. Further, we investigate the impact of various channel conditions on SNR estimation and the resulting impact on the performance of adaptive modulation. Finally, we investigate long-term SNR estimation and its use in adaptive modulation, and present a comparison between the two approaches. / Master of Science
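The core mechanism of adaptive modulation, and why SNR estimation accuracy matters for it, can be sketched in a few lines; the switching thresholds below are illustrative assumptions, not those analyzed in the thesis.

```python
def choose_modulation(estimated_snr_db):
    """Pick a modulation scheme from an estimated channel SNR.
    The switching thresholds (in dB) are illustrative; in practice they are chosen
    so that each scheme keeps the bit error rate below a target."""
    table = [
        (22.0, "64-QAM", 6),   # (minimum SNR in dB, scheme, bits per symbol)
        (16.0, "16-QAM", 4),
        (9.0, "QPSK", 2),
        (0.0, "BPSK", 1),
    ]
    for min_snr, scheme, bits in table:
        if estimated_snr_db >= min_snr:
            return scheme, bits
    return "no transmission", 0

# An error in the SNR estimate translates directly into a wrong scheme choice:
true_snr, biased_estimate = 17.0, 14.0
print(choose_modulation(true_snr))          # ('16-QAM', 4)
print(choose_modulation(biased_estimate))   # ('QPSK', 2) -> spectral efficiency lost
```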
10

On the effective deployment of current machine translation technology

González Rubio, Jesús 03 June 2014 (has links)
Machine translation is a fundamental technology that is gaining more importance each day in our multilingual society. Companies and individuals are turning their attention to machine translation since it dramatically cuts down their expenses on translation and interpreting. However, the output of current machine translation systems is still far from the quality of translations generated by human experts. The overall goal of this thesis is to narrow down this quality gap by developing new methodologies and tools that improve the broader and more efficient deployment of machine translation technology. We start by proposing a new technique to improve the quality of the translations generated by fully-automatic machine translation systems. The key insight of our approach is that different translation systems, implementing different approaches and technologies, can exhibit different strengths and limitations. Therefore, a proper combination of the outputs of such different systems has the potential to produce translations of improved quality. We present minimum Bayes' risk system combination, an automatic approach that detects the best parts of the candidate translations and combines them to generate a consensus translation that is optimal with respect to a particular performance metric. We thoroughly describe the formalization of our approach as a weighted ensemble of probability distributions and provide efficient algorithms to obtain the optimal consensus translation according to the widespread BLEU score. Empirical results show that the proposed approach is indeed able to generate statistically better translations than the provided candidates. Compared to other state-of-the-art system combination methods, our approach achieves similar performance while not requiring any additional data beyond the candidate translations. Then, we focus our attention on how to improve the utility of automatic translations for the end-user of the system. Since automatic translations are not perfect, a desirable feature of machine translation systems is the ability to predict at run-time the quality of the generated translations. Quality estimation is usually addressed as a regression problem where a quality score is predicted from a set of features that represents the translation. However, although the concept of translation quality is intuitively clear, there is no consensus on which features actually account for it. As a consequence, quality estimation systems for machine translation have to utilize a large number of weak features to predict translation quality. This involves several learning problems related to feature collinearity and ambiguity, and to the "curse" of dimensionality. We address these challenges by adopting a two-step training methodology. First, a dimensionality reduction method computes, from the original features, the reduced set of features that best explains translation quality. Then, a prediction model is built from this reduced set to finally predict the quality score. We study various reduction methods previously used in the literature and propose two new ones based on statistical multivariate analysis techniques. More specifically, the proposed dimensionality reduction methods are based on partial least squares regression. The results of thorough experimentation show that the quality estimation systems estimated following the proposed two-step methodology obtain better prediction accuracy than systems estimated using all the original features. 
Moreover, one of the proposed dimensionality reduction methods obtained the best prediction accuracy with only a fraction of the original features. This feature reduction ratio is important because it implies a dramatic reduction of the operating times of the quality estimation system. An alternative use of current machine translation systems is to embed them within an interactive editing environment where the system and a human expert collaborate to generate error-free translations. This interactive machine translation approach has been shown to reduce the supervision effort of the user in comparison to the conventional decoupled post-edition approach. However, interactive machine translation considers the translation system as a passive agent in the interaction process. In other words, the system only suggests translations to the user, who then makes the necessary supervision decisions. As a result, the user is bound to exhaustively supervise every suggested translation. This passive approach ensures error-free translations but it also demands a large amount of supervision effort from the user. Finally, we study different techniques to improve the productivity of current interactive machine translation systems. Specifically, we focus on the development of alternative approaches where the system becomes an active agent in the interaction process. We propose two different active approaches. On the one hand, we describe an active interaction approach where the system informs the user about the reliability of the suggested translations. The hope is that this information may help the user to locate translation errors, thus improving the overall translation productivity. We propose different scores to measure translation reliability at the word and sentence levels and study the influence of such information on the productivity of an interactive machine translation system. Empirical results show that the proposed active interaction protocol is able to achieve a large reduction in supervision effort while still generating translations of very high quality. On the other hand, we study an active learning framework for interactive machine translation. In this case, the system is not only able to inform the user of which suggested translations should be supervised, but it is also able to learn from the user-supervised translations to improve its future suggestions. We develop a value-of-information criterion to select which automatic translations undergo user supervision. However, given its high computational complexity, in practice we study different selection strategies that approximate this optimal criterion. Results of a large-scale experimentation show that the proposed active learning framework is able to obtain better compromises between the quality of the generated translations and the human effort required to obtain them. Moreover, in comparison to a conventional interactive machine translation system, our proposal obtained translations of twice the quality with the same supervision effort. / González Rubio, J. (2014). On the effective deployment of current machine translation technology [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37888 / TESIS
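As a hedged illustration of the two-step quality estimation methodology described above (dimensionality reduction with partial least squares followed by a prediction model), the sketch below uses scikit-learn on synthetic data; the data, the number of components and the linear predictor are assumptions for illustration and are not the systems evaluated in the thesis.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for quality estimation data: 300 sentences described by 50 weak,
# partly collinear features and a quality score to predict. Real systems would extract
# the features from the source sentence and its machine translation.
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 4))                        # a few underlying quality factors
X = latent @ rng.normal(size=(4, 50)) + 0.1 * rng.normal(size=(300, 50))
y = latent @ rng.normal(size=4) + 0.1 * rng.normal(size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: project the collinear features onto a handful of PLS components.
pls = PLSRegression(n_components=4).fit(X_tr, y_tr)
Z_tr, Z_te = pls.transform(X_tr), pls.transform(X_te)

# Step 2: fit the quality predictor on the reduced representation.
reduced_model = LinearRegression().fit(Z_tr, y_tr)
full_model = LinearRegression().fit(X_tr, y_tr)           # baseline on all 50 features

print("two-step R^2:", r2_score(y_te, reduced_model.predict(Z_te)))
print("all-features R^2:", r2_score(y_te, full_model.predict(X_te)))
```

On this well-behaved synthetic data both models score highly; the point of the sketch is the structure of the two-step procedure, while the accuracy and speed gains reported in the thesis come from real, noisier feature sets.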
