1

Evaluation of Compilers for MATLAB- to C-Code Translation

Muellegger, Markus January 2008 (has links)
MATLAB-to-C code translation is of increasing interest to science and industry. Two MATLAB-to-C compilers, Matlab to C Synthesis (MCS) and Embedded MATLAB C (EMLC), have been studied in detail, covering three aspects of automatic code generation: 1) generation of reference code; 2) target code generation; 3) floating-to-fixed-point conversion. The benchmark code aimed to cover simple through more complex code, viewed from both a theoretical and a practical perspective. A fixed-point filter implementation is demonstrated. EMLC and MCS offer several fixed-point design tools. MCS provides better support for generating C reference algorithms, as it covers a larger subset of the MATLAB language. Code generated by EMLC is more suitable for direct target implementation. Because EMLC must guarantee that the generated C code allocates memory only statically, it constrains MATLAB more strictly. Functional correctness was generally achieved for each automatic translation.
3

Semantinis teksto transformavimas ir jo taikymas kompiuterinio vertimo sistemose / Semantic text conversion and using it in computerized automatic translation systems

Pavlovas, Andrijanas 04 June 2006 (has links)
Today Lithuania has a real need for an automatic translation system that can simplify the process of translating English into Lithuanian. But how can this be realized? First of all, a semantic text transformation system is needed, and creating one is the main purpose of this work. Semantic transformation is a process that simplifies sentence structure while preserving the connections between the different parts of the sentence, so that its main meaning is not lost. In this project I selected several transformation strategies (realized as functions), for example shortening sentence length or removing modal verbs from a sentence, since the described rules do not need this type of verb. The project attempts to realize these transformations.
4

Who is afraid of MT?

Schmitt, Peter A. 30 May 2018 (has links)
Machine translation (MT) is experiencing a renaissance. On the one hand, machine translation is becoming more common and is used on an ever larger scale; on the other hand, many translators have an almost hostile attitude towards machine translation programs and towards colleagues who use MT as a tool. Either it is assumed that MT can never be as good as a human translation, or machine translation is viewed as the ultimate enemy of the translator and as a job killer. The article discusses the limits and possibilities of machine translation with various examples. It demonstrates that machine translation can be better than human translations, even ones made by experienced professional translators. The paper also reports the results of a test showing that translation customers must expect even well-known and expensive translation service providers to deliver a quality on par with poor MT. Overall, it is argued that machine translation programs are no more and no less than an additional tool with which the translation industry can satisfy certain requirements. This abstract was, like the entire article, automatically translated into English.
6

Google Traduction et le texte idéologique : dans quelle mesure une traduction automatique transmet-elle le contenu idéologique d'un texte? / Google Translation and the ideological text : to what extent does an automatic translation convey the ideological content of an ideological text?

Fränne, Ellen January 2017 (has links)
Automatic translations, or machine translations, are becoming more and more advanced and common. This paper examines how well Google Traduction works for translating an ideological text. To what extent can a computer program interpret such a text and render the meaning of complex thoughts and ideas in another language? To study this, UNESCO's World Report Investing in Cultural Diversity and Intercultural Dialogue has been translated from French to Swedish, first automatically and then manually. Focusing on denotations, connotations, grammar and style, the two versions have been analysed and compared. The conclusion drawn is that while Google Traduction impresses by its speed and possibilities, editing the automatically translated text so that it correctly transmits the meaning and the message of the text to the target-language reader would probably be more time-consuming than writing a direct translation manually.
7

Translation as Linear Transduction : Models and Algorithms for Efficient Learning in Statistical Machine Translation

Saers, Markus January 2011 (has links)
Automatic translation has seen tremendous progress in recent years, mainly thanks to statistical methods applied to large parallel corpora. Transductions represent a principled approach to modeling translation, but existing transduction classes are either not expressive enough to capture structural regularities between natural languages or too complex to support efficient statistical induction on a large scale. A common approach is to severely prune search over a relatively unrestricted space of transduction grammars. These restrictions are often applied at different stages in a pipeline, with the obvious drawback of committing to irrevocable decisions that should not have been made. In this thesis we will instead restrict the space of transduction grammars to a space that is less expressive, but can be efficiently searched. First, the class of linear transductions is defined and characterized. They are generated by linear transduction grammars, which represent the natural bilingual case of linear grammars, as well as the natural linear case of inversion transduction grammars (and higher order syntax-directed transduction grammars). They are recognized by zipper finite-state transducers, which are equivalent to finite-state automata with four tapes. By allowing this extra dimensionality, linear transductions can represent alignments that finite-state transductions cannot, and by keeping the mechanism free of auxiliary storage, they become much more efficient than inversion transductions. Secondly, we present an algorithm for parsing with linear transduction grammars that allows pruning. The pruning scheme imposes no restrictions a priori, but guides the search to potentially interesting parts of the search space in an informed and dynamic way. Being able to parse efficiently allows learning of stochastic linear transduction grammars through expectation maximization. 
All the above work would be for naught if linear transductions were too poor a reflection of the actual transduction between natural languages. We test this empirically by building systems based on the alignments imposed by the learned grammars. The conclusion is that stochastic linear inversion transduction grammars learned from observed data stand up well to the state of the art.
8

Vers l'intégration de post-éditions d'utilisateurs pour améliorer les systèmes de traduction automatiques probabilistes / Towards the integration of users' post-editions to improve phrase-based machine translation systems

Potet, Marion 09 April 2013 (has links)
Les technologies de traduction automatique existantes sont à présent vues comme une approche prometteuse pour aider à produire des traductions de façon efficace et à coût réduit. Cependant, l'état de l'art actuel ne permet pas encore une automatisation complète du processus et la coopération homme/machine reste indispensable pour produire des résultats de qualité. Une pratique usuelle consiste à post-éditer les résultats fournis par le système, c'est-à-dire effectuer une vérification manuelle et, si nécessaire, une correction des sorties erronées du système. Ce travail de post-édition effectué par les utilisateurs sur les résultats de traduction automatique constitue une source de données précieuses pour l'analyse et l'adaptation des systèmes. La problématique abordée dans nos travaux s'intéresse à développer une approche capable de tirer avantage de ces retro-actions (ou post-éditions) d'utilisateurs pour améliorer, en retour, les systèmes de traduction automatique. Les expérimentations menées visent à exploiter un corpus d'environ 10 000 hypothèses de traduction d'un système probabiliste de référence, post-éditées par des volontaires, par le biais d'une plateforme en ligne. Les résultats des premières expériences intégrant les post-éditions, dans le modèle de traduction d'une part, et par post-édition automatique statistique d'autre part, nous ont permis d'évaluer la complexité de la tâche. Une étude plus approfondie des systèmes de post-éditions statistique nous a permis d'évaluer l'utilisabilité de tels systèmes ainsi que les apports et limites de l'approche. Nous montrons aussi que les post-éditions collectées peuvent être utilisées avec succès pour estimer la confiance à accorder à un résultat de traduction automatique. Les résultats de nos travaux montrent la difficulté mais aussi le potentiel de l'utilisation de post-éditions d'hypothèses de traduction automatiques comme source d'information pour améliorer la qualité des systèmes probabilistes actuels. 
/ Nowadays, machine translation technologies are seen as a promising approach to help produce low-cost translations. However, the current state of the art does not allow full automation of the process, and human intervention remains essential to produce high-quality results. To ensure translation quality, a system's results are commonly post-edited: the outputs are manually checked and, if necessary, corrected by the user. This post-editing work performed by users is a valuable source of data for system analysis and improvement. Our work focuses on developing an approach able to take advantage of this user feedback to improve and update a statistical machine translation (SMT) system. The experiments exploit a corpus of about 10,000 SMT translation hypotheses post-edited by volunteers through a crowdsourcing platform. The first experiments, which integrated the post-editions into the translation model on the one hand and applied statistical automatic post-editing to the system outputs on the other, allowed us to evaluate the complexity of the task. A more detailed study of statistical automatic post-editing systems evaluates the usability, benefits and limitations of the approach. We also show that the collected post-editions can be successfully used to estimate the confidence of a given machine translation result. The results obtained show that using post-editions of machine translation hypotheses as a source of information is a difficult but promising way to improve the quality of current probabilistic systems.
9

Comparaison de systèmes de traduction automatique pour la post-édition des alertes météorologiques d'Environnement Canada / Comparison of machine translation systems for post-editing Environment Canada's weather warnings

van Beurden, Louis 08 1900 (has links)
Ce mémoire a pour but de déterminer la stratégie de traduction automatique des alertes météorologiques produites par Environnement Canada, qui nécessite le moins d’efforts de postédition de la part des correcteurs du bureau de la traduction. Nous commencerons par constituer un corpus bilingue d’alertes météorologiques représentatives de la tâche de traduction. Ensuite, ces données nous serviront à comparer les performances de différentes approches de traduction automatique, de configurations de mémoires de traduction et de systèmes hybrides. Nous comparerons les résultats de ces différents modèles avec le système WATT, développé par le RALI pour Environnement Canada, ainsi qu’avec les systèmes de l’industrie GoogleTranslate et DeepL. Nous étudierons enfin une approche de postédition automatique. / The purpose of this thesis is to determine which machine translation strategy for the weather warnings produced by Environment Canada requires the least post-editing effort from the proofreaders of the Translation Bureau. We will begin by building a bilingual corpus of weather warnings representative of this translation task. This data will then be used to compare the performance of different machine translation approaches, translation memory configurations and hybrid systems. We will compare the results of these models with WATT, the latest system developed by the RALI for Environment Canada, as well as with the industry systems GoogleTranslate and DeepL. Finally, we will study an automatic post-editing approach.
