1 |
Metodika měření kvality otisků prstu / Methodology of Fingerprint Image Quality Measurement. Oravec, Tomáš, January 2018 (has links)
This thesis deals with the methodology of fingerprint image quality measurement. The first task was to analyze the existing software used for fingerprint quality measurement, NFIQ (NIST Fingerprint Image Quality), evaluate its performance and identify its weaknesses. In order to eliminate the discovered weaknesses, a different fingerprint quality estimation methodology was introduced, and its results were compared to other methodologies.
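One common way to evaluate a quality metric such as NFIQ is to check whether its scores actually predict matcher performance. The sketch below is a hypothetical illustration of that idea, not code from the thesis: it rank-correlates per-image quality scores with genuine match scores; the score arrays and the NFIQ-1-style scale (1 = best, 5 = worst) are assumptions.

```python
# Hypothetical sketch: correlating per-image quality scores (e.g., NFIQ output)
# with genuine matcher scores to gauge how predictive a quality metric is.
import numpy as np
from scipy.stats import spearmanr

def quality_vs_match_correlation(quality_scores, genuine_match_scores):
    """Spearman rank correlation between quality scores and genuine match scores.

    A strong correlation (negative here, since NFIQ 1 uses 1 = best, 5 = worst)
    suggests the quality metric predicts matcher performance well.
    """
    rho, p_value = spearmanr(quality_scores, genuine_match_scores)
    return rho, p_value

# Toy example with made-up numbers.
quality = np.array([1, 2, 5, 3, 4, 1, 2])
match = np.array([0.92, 0.85, 0.40, 0.70, 0.55, 0.95, 0.88])
print(quality_vs_match_correlation(quality, match))
```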
|
2 |
Automatic Post-editing and Quality Estimation in Machine Translation of Product Descriptions. Kukk, Kätriin, January 2022 (has links)
As a result of drastically improved machine translation quality in recent years, machine translation followed by manual post-editing is a current trend in the language industry that is slowly but surely replacing manual translation from scratch. In this thesis, the applicability of machine translation to product descriptions of clothing items is studied. The focus lies on determining whether automatic post-editing is a viable approach for improving baseline translations when new training data becomes available, and on finding out whether an existing quality estimation system could reliably assign quality scores to machine-translated texts. Machine translation is shown to be a promising approach for the target domain: according to the human evaluation carried out, the majority of the systems experimented with generate translations that are on average of almost publishable quality, meaning that only light post-editing is needed before the translations can be published. Automatic post-editing is able to improve the worst baseline translations but struggles to improve the overall translation quality due to its tendency to overcorrect good translations. Nevertheless, one of the trained post-editing systems is still rated higher than the baseline by human evaluators. A new finding is that training a post-editing model on more data with worse translations leads to better performance than training on less but higher-quality data. None of the quality estimation systems experimented with shows a strong correlation with human evaluation results, which is why it is suggested not to provide the confidence scores of the baseline model to the human evaluators responsible for correcting and approving translations. The main contributions of this work are showing that the target domain of product descriptions is suitable for integrating machine translation into the translation workflow, proposing a translation workflow that is more automated than the current one, and the finding that more data with poorer translations is preferable to less data with higher-quality translations when training an automatic post-editing system.
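The finding that no quality estimation system correlates strongly with human judgments rests on measuring agreement between the two. Below is a minimal, hypothetical sketch (not the thesis code) of how such agreement is commonly computed, assuming segment-level QE scores and human ratings are already available as lists.

```python
# Illustrative sketch: measuring how well a quality estimation system's
# segment-level scores agree with human evaluation scores.
from scipy.stats import pearsonr, spearmanr

qe_scores    = [0.81, 0.64, 0.92, 0.55, 0.77]   # hypothetical QE system outputs
human_scores = [4.0, 3.5, 4.5, 2.0, 4.0]        # hypothetical human ratings (1-5)

pearson_r, _ = pearsonr(qe_scores, human_scores)
spearman_r, _ = spearmanr(qe_scores, human_scores)
print(f"Pearson r = {pearson_r:.2f}, Spearman rho = {spearman_r:.2f}")
```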
|
3 |
Automatic Recognition and Classification of Translation Errors in Human Translation / Automatisk igenkänning och klassificering av fel i mänsklig översättning. Dürlich, Luise, January 2020 (has links)
Grading assignments is a time-consuming part of teaching translation. Automatic tools that facilitate this task would allow teachers of professional translation to focus more on other aspects of their job. Within Natural Language Processing, error recognition has not been studied for human translation in particular. This thesis is a first attempt at both error recognition and classification with both mono- and bilingual models. BERT – a pre-trained monolingual language model – and NuQE – a model adapted from the field of Quality Estimation for Machine Translation – are trained on a relatively small hand-annotated corpus of student translations. Due to the nature of the task, errors are quite rare in relation to correctly translated tokens in the corpus. To account for this, we train the models with both under- and oversampled data. While both models detect errors with moderate success, the NuQE model adapts very poorly to the classification setting. Overall, scores are quite low, which can be attributed to class imbalance and the small amount of training data, as well as some general concerns about the corpus annotations. However, we show that powerful monolingual language models can detect formal, lexical and translational errors with some success and that, depending on the model, simple under- and oversampling approaches can already help a great deal to avoid pure majority class prediction.
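Because error tokens are rare relative to correct tokens, the training data is rebalanced by under- or oversampling. The following is a hedged sketch of what such random resampling could look like at the token level; the feature extraction (e.g., BERT embeddings) and the exact sampling scheme used in the thesis are not reproduced here.

```python
# Hedged sketch of random over-/undersampling for token-level error labels
# (0 = correct token, 1 = error token) before training a classifier.
import numpy as np
from sklearn.utils import resample

def balance(X, y, strategy="oversample", random_state=0):
    """Return a class-balanced version of the token feature matrix X and labels y."""
    X, y = np.asarray(X), np.asarray(y)
    X_maj, y_maj = X[y == 0], y[y == 0]
    X_min, y_min = X[y == 1], y[y == 1]
    if strategy == "oversample":      # duplicate minority (error) examples
        X_min, y_min = resample(X_min, y_min, replace=True,
                                n_samples=len(y_maj), random_state=random_state)
    else:                             # drop majority (correct) examples
        X_maj, y_maj = resample(X_maj, y_maj, replace=False,
                                n_samples=len(y_min), random_state=random_state)
    return np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
```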
|
4 |
Blind Estimation of Perceptual Quality for Modern Speech Communications. Falk, Tiago, 05 January 2009 (links)
Modern speech communication technologies expose users to perceptual quality degradations that were not experienced earlier with conventional telephone systems. Since perceived speech quality is a major contributor to the end user's perception of quality of service, speech quality estimation has become an important research field. In this dissertation, perceptual quality estimators are proposed for several emerging speech communication applications, in particular for i) wireless communications with noise suppression capabilities, ii) wireless-VoIP communications, iii) far-field hands-free speech communications, and iv) text-to-speech systems.
First, a general-purpose speech quality estimator is proposed based on statistical models of normative speech behaviour and on innovative techniques to detect multiple signal distortions. The estimators do not depend on a clean reference signal and are hence termed "blind." Quality meters are then distributed along the network chain so that both quality degradations and quality enhancements can be handled. In order to improve estimation performance for wireless communications, statistical models of noise-suppressed speech are also incorporated.
Next, a hybrid signal-and-link-parametric quality estimation paradigm is proposed for emerging wireless-VoIP communications. The algorithm uses VoIP connection parameters to estimate a base quality representative of the packet switching network. Signal-based distortions are then detected and quantified in order to adjust the base quality accordingly. The proposed hybrid methodology is shown to overcome the limitations of existing pure signal-based and pure link parametric algorithms.
Temporal dynamics information is then investigated for quality diagnosis of hands-free speech communications. A spectro-temporal signal representation, in which speech and reverberation tail components are shown to be separable, is used for blind characterization of room acoustics. In particular, estimators of reverberation time, direct-to-reverberation energy ratio, and reverberant speech quality are developed.
Lastly, perceptual quality estimation for text-to-speech systems is addressed. Text- and speaker-independent hidden Markov models, trained on naturally produced speech, are used to capture normative spectral-temporal information. Deviations from the models, computed by means of a log-likelihood measure, are shown to be reliable indicators of multiple quality attributes including naturalness, fluency, and intelligibility. / Thesis (Ph.D., Electrical & Computer Engineering), Queen's University, 2008-12-22.
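To make the last idea above concrete, here is a minimal sketch, under assumptions, of scoring synthesized speech against a model of natural speech: a Gaussian HMM (via the hmmlearn library, which the thesis does not name) is fit on frame-level features of naturally produced speech, and the average per-frame log-likelihood of a synthesized utterance serves as a quality indicator. Feature extraction (e.g., MFCCs) is assumed to happen elsewhere.

```python
# Hedged sketch (not the thesis's implementation) of an HMM log-likelihood
# quality indicator for synthesized speech.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_reference_model(natural_feature_seqs, n_states=8):
    """Fit a Gaussian HMM on concatenated frame-level features of natural speech."""
    X = np.vstack(natural_feature_seqs)
    lengths = [len(seq) for seq in natural_feature_seqs]
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def quality_indicator(model, synth_features):
    """Average per-frame log-likelihood; lower values suggest larger deviation
    from normative spectral-temporal behaviour."""
    return model.score(synth_features) / len(synth_features)
```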
|
5 |
Système embarqué autonome en énergie pour objets mobiles communicants / Energy self-sufficient embedded system for mobile communicating objects. Chaabane, Chiraz, 30 June 2014 (has links)
The increasing number and complexity of the applications embedded in wireless mobile communicating devices (mobile phones, PDAs, etc.) imply an increase in energy consumption. In order to limit the impact of pollution due to battery waste and CO2 emissions, it is important to optimize the energy consumption of these communicating end devices. This thesis focuses on energy efficiency in sensor networks and proposes new approaches to efficiently manage mobile communicating objects. First, we propose a global sensor network architecture and a new energy-efficient mobility management approach for IEEE 802.15.4/ZigBee end devices. This approach is based on the link quality indicator (LQI) and uses a speculative algorithm to determine the next coordinator; two different speculative algorithms are proposed and evaluated. Then, we study and evaluate the energy efficiency obtained when using a rate adaptation algorithm that takes the communication channel conditions into account. We first propose a mobility-aware rate adaptation algorithm and evaluate its efficiency in our network architecture. We then propose and evaluate a hybrid rate adaptation algorithm that relies on a more accurate estimation of the link channel. The simulations conducted throughout this work show the energy efficiency of the proposed approaches as well as the improvement of the nodes' connectivity.
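As an illustration only, the sketch below shows one possible shape of an LQI-driven speculative handover decision for an IEEE 802.15.4/ZigBee end device: keep a smoothed LQI estimate per candidate coordinator and switch only when another candidate's predicted link quality clearly exceeds the current one. The smoothing rule and hysteresis margin are assumptions; the thesis's two actual speculative algorithms are not reproduced here.

```python
# Illustrative sketch of a speculative next-coordinator selection based on LQI.
class SpeculativeCoordinatorSelector:
    def __init__(self, alpha=0.3, hysteresis=10):
        self.alpha = alpha            # smoothing factor for the LQI estimate
        self.hysteresis = hysteresis  # margin to avoid ping-pong handovers
        self.lqi = {}                 # coordinator id -> smoothed LQI (0-255)

    def update(self, coordinator_id, lqi_sample):
        prev = self.lqi.get(coordinator_id, lqi_sample)
        self.lqi[coordinator_id] = (1 - self.alpha) * prev + self.alpha * lqi_sample

    def next_coordinator(self, current_id):
        """Speculatively pick the coordinator expected to offer the best link."""
        if not self.lqi:
            return current_id
        best_id = max(self.lqi, key=self.lqi.get)
        if best_id != current_id and \
           self.lqi[best_id] > self.lqi.get(current_id, 0) + self.hysteresis:
            return best_id
        return current_id
```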
|
6 |
Documents Usability Estimation. Yaghmaei, Ayoub, January 2018 (has links)
Improving the quality of technical documents influences the popularity of the related product: customers do not like to waste their time in the help desk's queue and are more satisfied if they can solve their problems independently, through the technical manuals, in an acceptable time. Moreover, the cost of support issues decreases for the product provider, and the help desk team has more time to handle the remaining unresolved issues in a better-qualified way. To obtain these benefits, this thesis estimates the usability of documents before they are published. With such a prediction, technical documentation writers can take a goal-driven approach to improving the quality of their product or service manuals. Furthermore, as different structural metrics are studied in this research, the results of the thesis create an opportunity for multi-discipline improvement in Information Quality (IQ) process management.
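A minimal sketch of the general idea of estimating usability from structural metrics follows. The metric names, the target scores and the regression model are all illustrative assumptions, not the thesis's actual feature set or method.

```python
# Hedged sketch: predicting a document usability score from structural metrics.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row: [n_headings, n_steps, avg_sentence_length, n_images, n_cross_refs]
X_train = np.array([
    [12, 30, 14.2,  8, 5],
    [ 3, 10, 25.7,  1, 0],
    [ 8, 22, 17.9,  4, 3],
    [15, 40, 12.5, 10, 7],
])
y_train = np.array([0.82, 0.35, 0.60, 0.90])   # hypothetical usability scores

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
print(model.predict([[10, 25, 15.0, 6, 4]]))   # estimate before publishing
```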
|
7 |
Word Confidence Estimation and Its Applications in Statistical Machine Translation / Les mesures de confiance au niveau des mots et leurs applications pour la traduction automatique statistique. Luong, Ngoc Quang, 12 November 2014 (has links)
Machine Translation (MT) systems, which automatically generate a target-language translation for each source sentence, have achieved impressive gains during the recent decades and are becoming effective language assistants for the entire community in a globalized world. Nonetheless, due to various factors, MT quality is still not perfect in general, and end users therefore expect to know how much they should trust a specific translation. Building a method that is capable of pointing out the correct parts, detecting the translation errors and estimating the overall quality of each MT hypothesis is definitely beneficial not only for the end users, but also for translators, post-editors, and MT systems themselves. Such a method is widely known under the name Confidence Estimation (CE) or Quality Estimation (QE). The motivation for building such automatic estimation methods originates from the drawbacks of assessing MT quality manually: the task is time-consuming, costly in effort, and sometimes impossible when the readers have little or no knowledge of the source language. This thesis mostly focuses on CE methods at the word level (WCE). The WCE classifier tags each word in the MT output with a quality label. The working mechanism is straightforward: a classifier, trained beforehand on a number of features using machine learning methods, computes the confidence score of each label for each MT output word and then tags the word with the highest-scoring label. Nowadays, WCE plays an increasingly important role in many aspects of MT. Firstly, it assists post-editors in quickly identifying translation errors, improving their productivity. Secondly, it informs readers of the portions of a sentence that are not reliable, avoiding misunderstandings about the sentence's content. Thirdly, it selects the best translation among the outputs of multiple MT systems. Last but not least, WCE scores can help improve MT quality via several scenarios: N-best list re-ranking, search graph re-decoding, etc. In this thesis, we aim at building and optimizing our baseline WCE system and then exploiting it to improve MT as well as Sentence Confidence Estimation (SCE). Compared to previous approaches, our novel contributions cover the following main points. Firstly, we integrate various types of prediction indicators: system-based features extracted from the MT system together with lexical, syntactic and semantic features to build the baseline WCE systems. We also apply multiple machine learning models on the entire feature set and compare their performance to select the one (conditional random fields) that best fits our data. Secondly, the usefulness of all features is investigated more deeply using a greedy feature selection algorithm. Thirdly, we propose a solution that exploits a Boosting algorithm as the learning method in order to strengthen the contribution of dominant feature subsets to the system, and thus improve its prediction capability.
Lastly, we explore the contributions of WCE to improving MT quality via several scenarios. In N-best list re-ranking, we synthesize scores from the WCE outputs and integrate them with the decoder scores to recompute the objective function value, then re-order the N-best list to choose a better candidate. In the decoder's search graph re-decoding, WCE scores are applied directly to the nodes containing each word to update their cost according to the word quality; once the update is complete, the search for the best path on the new graph yields the new MT hypothesis. Furthermore, WCE scores are used to build features that enhance the performance of the Sentence Confidence Estimation system. In total, our work provides an insightful and multidimensional picture of word quality prediction and its positive impact on various sectors of Machine Translation. The promising results open up a large avenue where WCE can play its role, such as WCE for Automatic Speech Recognition (ASR) systems, WCE for the selection among multiple MT systems, and WCE for re-trainable and self-learning MT systems.
|
8 |
Automatické rozpoznání kvality signálů EKG / Automatic ECG signal quality assessment. Malý, Tomáš, January 2020 (has links)
This thesis deals with the automatic quality estimation of ECG signals. The main aim is to implement a custom algorithm for classifying ECG signals into three quality classes. The theoretical part of the thesis describes the recording of the electrical activity of the heart, the anatomy and physiology of the heart, electrocardiography, different types of ECG signal interference and two chosen methods for quality estimation. The implementation of the chosen methods is presented in the practical part. The result of this thesis is two implemented algorithms based on the methods described in the theoretical part. The first is based on the detection of R-waves, validation of physiological assumptions and the subsequent calculation of the correlation coefficient between an adaptive template and the interfered signal. The second is based on the calculation of a continuous SNR value over time, which is then thresholded. The robustness of the methods was verified on three specified real ECG signals, all available at UBMI including annotations of specific signal parts. These 24-hour signals were recorded by a Holter monitor, which is described in the theoretical part of the thesis. The achieved results of the individual methods, including their comparison with the annotations and a statistical evaluation, are presented in the conclusion of the thesis.
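A minimal sketch of the final step of the second method follows: a time-varying SNR estimate is thresholded into three quality classes. The threshold values below are placeholders, not those used in the thesis, and the SNR estimation itself is assumed to be done beforehand.

```python
# Hedged sketch: thresholding a continuous SNR estimate (in dB) into three
# ECG quality classes.
import numpy as np

def classify_quality(snr_db, high_thr=15.0, low_thr=5.0):
    """Return per-sample quality class: 1 = good, 2 = medium, 3 = poor."""
    snr_db = np.asarray(snr_db)
    classes = np.full(snr_db.shape, 2, dtype=int)       # default: medium
    classes[snr_db >= high_thr] = 1                     # good quality
    classes[snr_db < low_thr] = 3                       # poor quality
    return classes

print(classify_quality([22.0, 11.3, 3.8, 17.5]))        # -> [1 2 3 1]
```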
|
9 |
On the Impact of Channel and Channel Quality Estimation on Adaptive Modulation. Jain, Payal, 20 December 2002 (has links)
The rapid growth in wireless communications has given rise to an increasing demand for channel capacity using limited bandwidth. Wireless channels vary over time due to fading and changing interference conditions. Typical wireless systems are designed by choosing a modulation scheme to meet worst case conditions and thus rely on power control to adapt to changing channel conditions. Adaptive modulation, however, exploits these channel variations to improve the spectral efficiency of wireless communications by intelligently changing the modulation scheme based on channel conditions. Necessarily, among the modulation schemes used are spectrally efficient modulation schemes such as quadrature amplitude modulation (QAM) techniques.
QAM yields high spectral efficiency due to its use of amplitude as well as phase modulation and is therefore an effective technique for achieving high channel capacity. The main drawbacks of QAM are its reduced energy efficiency (as compared to standard QPSK) and its sensitivity to channel amplitude variations. Adaptive modulation attempts to address the first drawback by using more energy-efficient schemes in low-SNR conditions and reserving the use of QAM for high-SNR conditions. The second drawback leads to a requirement for high-quality channel estimation. Many researchers have studied pilot symbol assisted modulation for compensating the effects of fading at the receiver. A main contribution of this thesis is the investigation of different channel estimation techniques (along with the effect of pilot symbol spacing and Doppler spread) on the performance of adaptive modulation.
Another important parameter affecting adaptive modulation is the signal-to-noise ratio. In order to adapt the modulation efficiently, it is essential to have accurate knowledge of the channel signal-to-noise ratio. The performance of adaptive modulation depends directly on how well the channel SNR is estimated: the more accurate the SNR estimate, the better the choice of modulation scheme and the better the ability to exploit the variations in the wireless channel. The second main contribution of this thesis is the investigation of the impact of SNR estimation techniques on the performance and spectral efficiency of adaptive modulation. Further, we investigate the impact of various channel conditions on SNR estimation and the resulting impact on the performance of adaptive modulation. Finally, we investigate long-term SNR estimation and its use in adaptive modulation, and present a comparison between the two approaches. / Master of Science
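To make the adaptation mechanism concrete, here is a minimal, hedged sketch of threshold-based adaptive modulation driven by an estimated SNR: the transmitter picks the most spectrally efficient scheme whose SNR requirement is met. The switching thresholds are placeholders and not taken from the thesis; in practice they are derived from the target bit error rate of each scheme.

```python
# Illustrative sketch of threshold-based adaptive modulation selection.
SCHEMES = [           # (name, bits per symbol, minimum estimated SNR in dB)
    ("BPSK",    1,  4.0),
    ("QPSK",    2,  7.0),
    ("16-QAM",  4, 14.0),
    ("64-QAM",  6, 20.0),
]

def select_modulation(estimated_snr_db):
    """Return the most spectrally efficient scheme supported by the SNR estimate."""
    chosen = SCHEMES[0]
    for scheme in SCHEMES:
        if estimated_snr_db >= scheme[2]:
            chosen = scheme
    return chosen

print(select_modulation(15.2))   # -> ('16-QAM', 4, 14.0)
```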
|
10 |
Towards a Better Human-Machine Collaboration in Statistical Translation: Example of Systematic Medical Reviews / Vers une meilleure collaboration humain-machine en traduction statistique : l'exemple des revues systématiques en médecine. Ive, Julia, 01 September 2017 (has links)
Machine Translation (MT) has made significant progress in recent years and continues to improve. Today, MT is successfully used in many contexts, including professional translation environments and production scenarios. However, the translation process requires knowledge larger in scope than what can be captured by machines, even from a large quantity of translated texts. Since injecting human knowledge into MT is required, one of the potential ways to improve MT is to ensure an optimized human-machine collaboration. To this end, many questions are asked by modern research in MT: How to detect where human assistance should be proposed? How to make machines exploit the obtained human knowledge so that they can improve their output? And, not less importantly, how to optimize the exchange so as to minimize the human effort involved and maximize the quality of the MT output? Various solutions have been proposed depending on concrete implementations of the MT process. In this thesis we have chosen to focus on pre-edition (PRE), a type of human intervention in MT that takes place ex ante, as opposed to post-edition (PE), where human intervention takes place ex post. In particular, we study targeted PRE scenarios where the human provides translations for carefully chosen, difficult-to-translate source segments. Targeted PRE scenarios involving pre-translation remain surprisingly understudied in the MT community. However, such PRE scenarios can offer a series of advantages compared, for instance, to non-targeted PE scenarios: the reduction of the cognitive load required to analyze poorly translated sentences; more control over the translation process; the possibility that the machine will exploit the new knowledge to improve the automatic translation of neighboring words; etc. Moreover, in a multilingual setting, common difficulties can be resolved at one time and for many languages. Such scenarios thus perfectly fit standard production contexts, where one of the main goals is to reduce the cost of post-editing and where translations are commonly performed simultaneously from one language into many languages. A representative production context, the automatic translation of systematic medical reviews, is the focus of this work. Given this context, we propose a system-independent methodology for translation difficulty detection. We define the notion of translation difficulty as related to translation quality: difficult-to-translate segments are segments for which an MT system makes erroneous predictions. We cast difficulty detection as a binary classification problem and demonstrate that, using this methodology, difficulties can be reliably detected without access to system-specific information. We show that in a multilingual setting common difficulties are rare, and that a better perspective for quality improvement lies in approaches where translations into different languages help each other resolve difficulties.
We integrate the results of our difficulty detection procedure into a PRE protocol that enables the resolution of those difficulties by pre-translation. We assess the protocol in a simulated setting and show that pre-translation as a type of PRE can be both useful for improving MT quality and realistic in terms of the human effort involved. Moreover, the indirect effects are found to be genuine. We also assess the protocol in a preliminary real-life setting. The results of those pilot experiments confirm the results obtained in the simulated setting and suggest an encouraging beginning for the test phase.
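The casting of difficulty detection as binary classification over source segments can be sketched as follows. The features, classifier and labels below are illustrative assumptions using only system-independent information; the thesis's actual feature set is richer.

```python
# Hedged sketch: system-independent detection of difficult-to-translate segments
# as a binary classification problem.
import numpy as np
from sklearn.linear_model import LogisticRegression

def featurize(segment):
    tokens = segment.split()
    return [len(tokens),                                       # segment length
            np.mean([len(t) for t in tokens]),                 # average token length
            sum(any(c.isdigit() for c in t) for t in tokens)]  # numeric tokens

train_segments = ["randomised controlled trial of beta blockers",
                  "we searched medline and embase",
                  "odds ratio 0.84 95% ci 0.71 to 0.99",
                  "two review authors extracted data"]
train_labels = [1, 0, 1, 0]   # 1 = difficult to translate (hypothetical labels)

clf = LogisticRegression().fit([featurize(s) for s in train_segments], train_labels)
print(clf.predict([featurize("risk ratio 1.12 95% ci 0.95 to 1.31")]))
```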
|