31

Bimorphism Machine Translation

Quernheim, Daniel 27 April 2017 (has links) (PDF)
The field of machine translation has made tremendous progress due to the rise of statistical methods, making it possible to obtain a translation system automatically from a bilingual collection of text. Some approaches do not even need any kind of linguistic annotation, and can infer translation rules from raw, unannotated data. However, most state-of-the-art systems do linguistic structure little justice, and moreover many approaches that have been put forward use ad-hoc formalisms and algorithms. This inevitably leads to duplication of effort, and to a separation between theoretical researchers and practitioners. In order to remedy the lack of motivation and rigor, the contributions of this dissertation are threefold: 1. After laying out the historical background and context, as well as the mathematical and linguistic foundations, a rigorous algebraic model of machine translation is put forward. We use regular tree grammars and bimorphisms as the backbone, introducing a modular architecture that allows different input and output formalisms. 2. The challenges of implementing this bimorphism-based model in a machine translation toolkit are then described, explaining in detail the algorithms used for the core components. 3. Finally, experiments where the toolkit is applied to real-world data and used for diagnostic purposes are described. We discuss how we use exact decoding to reason about search errors and model errors in a popular machine translation toolkit, and we compare output formalisms of different generative capacity.
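For readers unfamiliar with the central formalism: one common formulation of a tree bimorphism (the exact variant used in the dissertation may differ in detail) is a triple B = (φ, L, ψ), where L is a regular tree language, e.g. given by a regular tree grammar, and φ and ψ are tree homomorphisms. It defines the translation relation

\[
\tau_B \;=\; \{\, (\varphi(t),\, \psi(t)) \mid t \in L \,\},
\]

so each tree t in L acts as a shared derivation from which φ reads off the source side and ψ the target side; restricting the classes of homomorphisms (linear, non-deleting, and so on) yields translation classes of different expressive power.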
32

The control system in formal language theory and the model monitoring approach for reliability and safety / Systèmes de contrôle dans la théorie des langages et approche par monitoring des modèles pour la sécurité

Chen, Zhe 09 July 2010 (has links)
This thesis contributes to the study of the reliability and safety of computer and software systems modeled as discrete event systems. The major contributions are the theory of Control Systems (C Systems) and the model monitoring approach. In the first part of the thesis, we study the theory of control systems, which combines and significantly extends regulated rewriting from formal language theory and supervisory control. A control system is a generic framework with two components: the controlled component and the controlling component, which restricts the behavior of the controlled component. Both components are expressed in the same formalism, e.g., automata or grammars. We consider various classes of control systems based on different formalisms, for example automaton control systems and grammar control systems, as well as their infinite versions and concurrent variants. An application of the theory is then presented: Büchi-automaton-based control systems are used to model-check correctness properties on execution traces specified by nevertrace claims. In the second part of the thesis, we investigate the model monitoring approach, whose formal foundation is the theory of control systems. The key principle of the approach is “property specifications as controllers”: the functional requirements and the property specification of a system are modeled and implemented separately, and the latter controls the behavior of the former. The approach comprises two alternative techniques, model monitoring and model generation, and can be applied in several ways to improve the reliability and safety of various classes of systems. We present some typical applications that demonstrate its practical value. First, the approach provides better support for the change and evolution of property specifications. Second, it provides a theoretical foundation for the safety-related systems of the IEC 61508 standard, supporting functional safety. Third, it can be used to formalize and check modeling guidelines and consistency rules for UML models. These results lay the foundations for further study of more advanced control mechanisms and provide a new way of ensuring system reliability and safety.
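As an illustration of the general shape of such systems (not the thesis's exact definitions, which generalize this picture considerably), a grammar control system in the spirit of regulated rewriting can be given as a pair (G, C), where G is a grammar whose rules carry labels and the controlling component C is a language over those labels:

\[
L(G, C) \;=\; \{\, w \mid S \Rightarrow_{r_1} \cdots \Rightarrow_{r_n} w \text{ in } G \text{ and } r_1 r_2 \cdots r_n \in C \,\}.
\]

Only derivations whose label sequence belongs to C are admitted, so the controlling component restricts the behavior of the controlled one; varying the formalism of the two components (automata, grammars, infinite or concurrent versions) gives the classes mentioned above.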
33

Translation as Linear Transduction : Models and Algorithms for Efficient Learning in Statistical Machine Translation

Saers, Markus January 2011 (has links)
Automatic translation has seen tremendous progress in recent years, mainly thanks to statistical methods applied to large parallel corpora. Transductions represent a principled approach to modeling translation, but existing transduction classes are either not expressive enough to capture structural regularities between natural languages or too complex to support efficient statistical induction on a large scale. A common approach is to severely prune search over a relatively unrestricted space of transduction grammars. These restrictions are often applied at different stages in a pipeline, with the obvious drawback of committing to irrevocable decisions that should not have been made. In this thesis we will instead restrict the space of transduction grammars to a space that is less expressive, but can be efficiently searched. First, the class of linear transductions is defined and characterized. They are generated by linear transduction grammars, which represent the natural bilingual case of linear grammars, as well as the natural linear case of inversion transduction grammars (and higher order syntax-directed transduction grammars). They are recognized by zipper finite-state transducers, which are equivalent to finite-state automata with four tapes. By allowing this extra dimensionality, linear transductions can represent alignments that finite-state transductions cannot, and by keeping the mechanism free of auxiliary storage, they become much more efficient than inversion transductions. Secondly, we present an algorithm for parsing with linear transduction grammars that allows pruning. The pruning scheme imposes no restrictions a priori, but guides the search to potentially interesting parts of the search space in an informed and dynamic way. Being able to parse efficiently allows learning of stochastic linear transduction grammars through expectation maximization. All the above work would be for naught if linear transductions were too poor a reflection of the actual transduction between natural languages. We test this empirically by building systems based on the alignments imposed by the learned grammars. The conclusion is that stochastic linear inversion transduction grammars learned from observed data stand up well to the state of the art.
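To give a rough idea of the restriction involved (this sketches only the flavour of the rule shapes; the thesis gives the precise definitions), a linear grammar allows at most one nonterminal per right-hand side, and a linear transduction grammar imposes that restriction on both languages at once, with rules along the lines of

\[
A \;\to\; \langle u,\; u' \rangle \, B \, \langle v,\; v' \rangle
\qquad \text{or} \qquad
A \;\to\; \langle u,\; u' \rangle,
\]

where u and v are terminal strings of the source language, u' and v' terminal strings of the target language, and at most one nonterminal B occurs. Generating both sides in lockstep under this restriction corresponds to the four-tape (zipper) finite-state characterization mentioned in the abstract.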
34

Modelagem e Construção de uma ferramenta de autoria para um Sistema Tutorial Inteligente / Modeling and Construction of an Authoring Tool for an Intelligent Tutoring System

Costa, Nilson Santos 01 March 2002 (has links)
One of the greatest difficulties in building an Intelligent Tutoring System (ITS) is constructing the domain model, which must result from knowledge acquisition from experts, usually subject-matter teachers and pedagogy specialists. This thesis presents the definition, modeling and implementation of an authoring tool. The tool provides cognitive measures used to order a body of knowledge for an intelligent tutoring system. Through it, an expert's knowledge (the domain) is made available to the Mathnet system by creating or editing an authoring session in a specific domain. Based on these measures, the tool can automatically select a suitable curricular step to work on next with the learner, simulating the teacher's experience, in order to reduce learning time and enable the use of several pedagogical strategies. The modeling and implementation of the authoring tool thus serve as a mechanism for creating and testing domain knowledge.
35

Bimorphism Machine Translation

Quernheim, Daniel 10 April 2017 (has links)
The field of machine translation has made tremendous progress due to the rise of statistical methods, making it possible to obtain a translation system automatically from a bilingual collection of text. Some approaches do not even need any kind of linguistic annotation, and can infer translation rules from raw, unannotated data. However, most state-of-the-art systems do linguistic structure little justice, and moreover many approaches that have been put forward use ad-hoc formalisms and algorithms. This inevitably leads to duplication of effort, and to a separation between theoretical researchers and practitioners. In order to remedy the lack of motivation and rigor, the contributions of this dissertation are threefold: 1. After laying out the historical background and context, as well as the mathematical and linguistic foundations, a rigorous algebraic model of machine translation is put forward. We use regular tree grammars and bimorphisms as the backbone, introducing a modular architecture that allows different input and output formalisms. 2. The challenges of implementing this bimorphism-based model in a machine translation toolkit are then described, explaining in detail the algorithms used for the core components. 3. Finally, experiments where the toolkit is applied to real-world data and used for diagnostic purposes are described. We discuss how we use exact decoding to reason about search errors and model errors in a popular machine translation toolkit, and we compare output formalisms of different generative capacity.
36

Algebraic decoder specification: coupling formal-language theory and statistical machine translation

Büchse, Matthias 18 December 2014 (has links)
The specification of a decoder, i.e., a program that translates sentences from one natural language into another, is an intricate process, driven by the application and lacking a canonical methodology. The practical nature of decoder development inhibits the transfer of knowledge between theory and application, which is unfortunate because many contemporary decoders are in fact related to formal-language theory. This thesis proposes an algebraic framework where a decoder is specified by an expression built from a fixed set of operations. As yet, this framework accommodates contemporary syntax-based decoders, it spans two levels of abstraction, and, primarily, it encourages mutual stimulation between the theory of weighted tree automata and the application.
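To make the idea of specifying a decoder as an expression over a fixed set of operations concrete, a purely schematic example (not the operation set actually defined in the thesis) might read

\[
\mathrm{decode}(s) \;=\; \mathrm{best}\Bigl(\pi_{\mathrm{output}}\bigl(\mathcal{M} \cap \mathrm{input}(s)\bigr)\Bigr),
\]

that is: restrict the translation model to derivations compatible with the input sentence s, project to the output side, and extract the best-scoring candidate. The intent of such a framework is that each operation is defined once, for instance on weighted tree automata, and can be recombined to specify different decoders.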
37

Vers un langage de haut niveau pour une ingénierie des exigences agile dans le domaine des systèmes embarqués avioniques / Toward a high level language for agile requirements engineering in an aeronautical context

Lebeaupin, Benoit 18 December 2017 (has links)
Systems are becoming more and more complex because, to stay competitive, the companies that design them keep adding functionality. This competition also demands reactivity during design, so that a system can evolve during its development and follow the needs of the market. This ability to design complex systems flexibly is hindered, or even prevented, by several factors, one of them being system specifications, and in particular the use of natural language to specify systems. Natural language is inherently ambiguous, which can lead to non-conformity if the customer and the supplier of a system disagree on the meaning of its specification. It is also hard to process automatically: for example, it is difficult to determine, using a computer program alone, that two natural-language requirements contradict each other. Natural language nevertheless remains indispensable in the specifications we studied, because it is a practical and very widespread means of communication. We aim to complement these natural-language requirements with elements that both make them less ambiguous and facilitate automatic processing. These elements can be part of models (architectural models, for example) and define the vocabulary and syntax used in the requirements. We tested the proposed principles on real industrial specifications and developed a software prototype for checking a specification enhanced with these vocabulary and syntax elements.
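As a toy illustration of pairing natural-language requirements with a defined lexicon and syntax so that they can be checked automatically, the sketch below is hypothetical and is not the prototype developed in the thesis; the component and signal names stand in for a lexicon that, in the approach described above, would come from an architecture model.

```python
import re

# Hypothetical lexicon, assumed to be extracted from an architecture model;
# these names are illustrative, not taken from the thesis.
COMPONENTS = {"FlightControlComputer", "AirDataUnit"}
SIGNALS = {"airspeed", "altitude"}

# One deliberately simple sentence template:
#   "<Component> shall send <signal> to <Component> every <N> ms"
TEMPLATE = re.compile(
    r"^(?P<src>\w+) shall send (?P<sig>\w+) to (?P<dst>\w+) every (?P<period>\d+) ms$"
)

def check_requirement(text: str) -> list[str]:
    """Return the list of problems found in one requirement sentence."""
    m = TEMPLATE.match(text)
    if not m:
        return [f"does not match the agreed sentence template: {text!r}"]
    problems = []
    if m.group("src") not in COMPONENTS:
        problems.append(f"unknown source component {m.group('src')!r}")
    if m.group("dst") not in COMPONENTS:
        problems.append(f"unknown destination component {m.group('dst')!r}")
    if m.group("sig") not in SIGNALS:
        problems.append(f"signal {m.group('sig')!r} not in the model lexicon")
    return problems

if __name__ == "__main__":
    requirements = [
        "AirDataUnit shall send airspeed to FlightControlComputer every 40 ms",
        "AirDataUnit shall send attitude to FlightComputer every 40 ms",
    ]
    for r in requirements:
        issues = check_requirement(r)
        print("OK" if not issues else "KO", r, issues)
```

The second requirement is flagged because "attitude" and "FlightComputer" are not in the lexicon; this is the kind of automatic consistency check that unconstrained natural language alone does not allow.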
38

Syntaktická analýza založená na multigenerování / Parsing Based on Multigeneration

Kyjovská, Linda January 2008 (has links)
This work deals with syntax analysis based on multi-generation. The basic idea is to create a computer program that transforms one input string into n-1 output strings. The input of the program is a plain text file, created by the user, which contains n grammars. Exactly one grammar in the input file is marked as the input grammar; the other n-1 grammars are output grammars. The program builds the list of input-grammar rules used to derive the input string and applies the corresponding output-grammar rules to create the n-1 output strings. The program is written in C++ with Bison.
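The mechanism described above, recording which input-grammar rules derive the input string and then replaying the corresponding rules of each output grammar, can be sketched as follows. This toy Python version assumes rule-synchronized context-free grammars without epsilon rules and is not the C++/Bison program the abstract refers to.

```python
from typing import Optional, Sequence

# Toy rule-synchronized grammars: rule i of the input grammar corresponds to
# rule i of every output grammar. A symbol is a nonterminal iff it occurs on
# some left-hand side.
INPUT_GRAMMAR = [
    ("S",  ["NP", "VP"]),   # 0
    ("NP", ["the", "N"]),   # 1
    ("N",  ["cat"]),        # 2
    ("N",  ["dog"]),        # 3
    ("VP", ["sleeps"]),     # 4
    ("VP", ["eats"]),       # 5
]
OUTPUT_GRAMMARS = {
    "es": [("S", ["NP", "VP"]), ("NP", ["el", "N"]), ("N", ["gato"]),
           ("N", ["perro"]), ("VP", ["duerme"]), ("VP", ["come"])],
    "de": [("S", ["NP", "VP"]), ("NP", ["der", "N"]), ("N", ["Kater"]),
           ("N", ["Hund"]), ("VP", ["schläft"]), ("VP", ["frisst"])],
}

def parse(tokens: Sequence[str], grammar, form=("S",), used=()) -> Optional[tuple]:
    """Search a leftmost derivation of `tokens`; return the rule indices used."""
    nts = {lhs for lhs, _ in grammar}
    if len(form) > len(tokens):        # no epsilon rules: prune long forms
        return None
    for i, sym in enumerate(form):
        if sym in nts:                 # leftmost nonterminal found
            if list(form[:i]) != list(tokens[:i]):
                return None            # terminal prefix no longer matches
            for r, (lhs, rhs) in enumerate(grammar):
                if lhs == sym:
                    result = parse(tokens, grammar,
                                   form[:i] + tuple(rhs) + form[i + 1:],
                                   used + (r,))
                    if result is not None:
                        return result
            return None
    return used if list(form) == list(tokens) else None

def generate(rule_sequence, grammar) -> list:
    """Replay the recorded rule indices as a leftmost derivation in `grammar`."""
    nts = {lhs for lhs, _ in grammar}
    form = ["S"]
    for r in rule_sequence:
        lhs, rhs = grammar[r]
        i = next(k for k, s in enumerate(form) if s in nts)
        assert form[i] == lhs, "output grammar is not rule-synchronized"
        form[i:i + 1] = rhs
    return form

if __name__ == "__main__":
    sentence = "the cat sleeps".split()
    rules = parse(sentence, INPUT_GRAMMAR)
    print("input rules used:", rules)
    for name, grammar in OUTPUT_GRAMMARS.items():
        print(name, "->", " ".join(generate(rules, grammar)))
```

For the sentence "the cat sleeps" the program records the rule sequence (0, 1, 2, 4) and produces "el gato duerme" and "der Kater schläft".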
39

Abstract Numeration Systems: Recognizability, Decidability, Multidimensional S-Automatic Words, and Real Numbers

Charlier, Emilie 07 December 2009 (has links)
In this doctoral dissertation, we studied and solved several questions regarding positional and abstract numeration systems; each problem is the focus of a chapter. The first concerns the preservation of recognizability under multiplication by a constant in abstract numeration systems built on polynomial regular languages, where we obtained several results generalizing those of P. Lecomte and M. Rigo. The second is a decidability problem, previously studied most notably by J. Honkala and A. Muchnik, which we addressed in two new settings: linear positional numeration systems and abstract numeration systems. Next, we extended to the multidimensional setting a result of A. Maes and M. Rigo on S-automatic infinite words, obtaining a characterization of multidimensional S-automatic words in terms of multidimensional (not necessarily uniform) morphisms; this result can be viewed as an analogue of O. Salon's extension of a theorem of A. Cobham. Finally, generalizing results of P. Lecomte and M. Rigo, we proposed a formalism for representing real numbers in the general framework of abstract numeration systems built on languages that are not necessarily regular. This formalism encompasses in particular the rational base numeration systems recently introduced by S. Akiyama, Ch. Frougny, and J. Sakarovitch. We end with a list of open questions in the continuation of this work.
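For readers unfamiliar with abstract numeration systems in the sense of Lecomte and Rigo: a natural number n is represented by the (n+1)-th word of a fixed language L, enumerated in genealogical (radix) order, that is, by length and then lexicographically. The minimal sketch below uses the textbook example L = a*b*; the dissertation works with far more general settings, including non-regular languages.

```python
from itertools import count, product

def genealogical(alphabet, accepts):
    """Enumerate the words of a language in genealogical (radix) order:
    by length first, then lexicographically within each length."""
    for n in count(0):
        for letters in product(sorted(alphabet), repeat=n):
            word = "".join(letters)
            if accepts(word):
                yield word

def rep(n, alphabet, accepts):
    """Representation of n: the (n+1)-th word of the language."""
    gen = genealogical(alphabet, accepts)
    for _ in range(n):
        next(gen)
    return next(gen)

if __name__ == "__main__":
    # Classic example: L = a*b*, with the order a < b.
    in_astar_bstar = lambda w: "ba" not in w
    for n in range(8):
        print(n, repr(rep(n, "ab", in_astar_bstar)))
    # 0 '', 1 'a', 2 'b', 3 'aa', 4 'ab', 5 'bb', 6 'aaa', 7 'aab'
```

With L = a*b* the first representations are the empty word, a, b, aa, ab, bb, aaa, and so on, so for instance 5 is represented by bb.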
40

Composition flexible par planification automatique / Flexible composition by automated planning

Martin, Cyrille 04 October 2012 (has links)
In a context of ambient intelligence, some of the user's needs may not have been anticipated, for instance when the user is in an exceptional situation. In that case there may be no pre-designed system that exactly meets those needs. By composing the systems available in the environment, the user can obtain a new system that satisfies them; to adapt the composition to the context, it must allow the user to make choices at run time. The composition therefore includes execution control structures intended for the user: it is said to be flexible. In this thesis, I address the problem of flexible composition by automated planning, using a planner that produces flexible plans. I first propose a model of flexible planning: the sequence and alternative operators are defined and used to characterize flexible plans, and two further operators, interleaving and iteration, are derived from them. I refer to this framework to delimit the flexibility handled by my planner, Lambda-Graphplan, which is based on the planning graph. The originality of Lambda-Graphplan is that it produces iterations; I show that it is particularly efficient on domains that lend themselves to the construction of iterative structures.
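To fix intuitions about the operators mentioned above (this is a schematic reading, not the thesis's exact syntax), a flexible plan can be seen as an expression over primitive actions a:

\[
P \;::=\; a \;\mid\; P_1 \cdot P_2 \;\mid\; P_1 + P_2 \;\mid\; P_1 \parallel P_2 \;\mid\; P^{*},
\]

where the dot denotes sequence, + an alternative resolved by the user at run time, the parallel bar interleaving and the star iteration; as stated above, interleaving and iteration are derived from sequence and alternative, and producing the iterative structures is the distinctive contribution of Lambda-Graphplan.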
