1

Distributed representations for compositional semantics

Hermann, Karl Moritz January 2014 (has links)
The mathematical representation of semantics is a key issue for Natural Language Processing (NLP). A lot of research has been devoted to finding ways of representing the semantics of individual words in vector spaces. Distributional approaches—meaning distributed representations that exploit co-occurrence statistics of large corpora—have proved popular and successful across a number of tasks. However, natural language usually comes in structures beyond the word level, with meaning arising not only from the individual words but also the structure they are contained in at the phrasal or sentential level. Modelling the compositional process by which the meaning of an utterance arises from the meaning of its parts is an equally fundamental task of NLP. This dissertation explores methods for learning distributed semantic representations and models for composing these into representations for larger linguistic units. Our underlying hypothesis is that neural models are a suitable vehicle for learning semantically rich representations and that such representations in turn are suitable vehicles for solving important tasks in natural language processing. The contribution of this thesis is a thorough evaluation of our hypothesis, as part of which we introduce several new approaches to representation learning and compositional semantics, as well as multiple state-of-the-art models which apply distributed semantic representations to various tasks in NLP. Part I focuses on distributed representations and their application. In particular, in Chapter 3 we explore the semantic usefulness of distributed representations by evaluating their use in the task of semantic frame identification. Part II describes the transition from semantic representations for words to compositional semantics. Chapter 4 covers the relevant literature in this field. Following this, Chapter 5 investigates the role of syntax in semantic composition. For this, we discuss a series of neural network-based models and learning mechanisms, and demonstrate how syntactic information can be incorporated into semantic composition. This study allows us to establish the effectiveness of syntactic information as a guiding parameter for semantic composition, and answer questions about the link between syntax and semantics. Following these discoveries regarding the role of syntax, Chapter 6 investigates whether it is possible to further reduce the impact of monolingual surface forms and syntax when attempting to capture semantics. Asking how machines can best approximate human signals of semantics, we propose multilingual information as one method for grounding semantics, and develop an extension to the distributional hypothesis for multilingual representations. Finally, Part III summarizes our findings and discusses future work.
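The two ingredients this abstract builds on, distributed word representations derived from co-occurrence statistics and a composition function that turns them into phrase representations, can be illustrated with a minimal sketch. Everything below (the toy corpus, the window size, and the simple additive composition rule) is a hypothetical example for illustration, not the neural models developed in the thesis.

```python
# Illustrative sketch only: toy distributional word vectors plus a simple
# additive composition function. Corpus, window size and composition rule
# are assumptions made for this example.
from collections import Counter, defaultdict
import numpy as np

corpus = [
    "the black cat chased the small mouse",
    "the black dog chased the red ball",
    "a small mouse ate the cheese",
]

# 1. Collect co-occurrence counts within a +/-2 word window.
window = 2
counts = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if i != j:
                counts[w][tokens[j]] += 1

vocab = sorted({w for s in corpus for w in s.split()})
index = {w: k for k, w in enumerate(vocab)}

# 2. Build a word-by-context matrix and length-normalise each row,
#    giving one distributed representation per word.
M = np.zeros((len(vocab), len(vocab)))
for w, ctx in counts.items():
    for c, n in ctx.items():
        M[index[w], index[c]] = n
M = M / (np.linalg.norm(M, axis=1, keepdims=True) + 1e-12)

def compose(phrase):
    """Additive composition: the phrase vector is the sum of its word vectors."""
    return sum(M[index[w]] for w in phrase.split())

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# Phrases sharing content words end up closer in the composed space.
print(cosine(compose("black cat"), compose("black dog")))
print(cosine(compose("black cat"), compose("small mouse")))
```

Even this crude additive rule places phrases that share content words closer together in the vector space; the thesis investigates far richer composition functions, including ones guided by syntactic structure and by multilingual signal.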
2

Modélisation de la sémantique lexicale dans le cadre de la théorie des types / Modelling lexical semantics in a type-theoretic framework

Mery, Bruno 05 July 2011 (has links)
Le présent manuscrit constitue la partie écrite du travail de thèse réalisé par Bruno Mery sous la direction de Christian Bassac et Christian Retoré entre 2006 et 2011, portant sur le sujet "Modélisation de la sémantique lexicale dans la théorie des types". Il s'agit d'une thèse d'informatique s'inscrivant dans le domaine du traitement automatique des langues, et visant à apporter un cadre formel pour la prise en compte, lors de l'analyse sémantique de la phrase, d'informations apportées par chacun des mots. Après avoir situé le sujet, cette thèse examine les nombreux travaux l'ayant précédée et s'inscrit dans la tradition du lexique génératif. Elle présente des exemples de phénomènes à traiter, et donne une proposition de système de calcul fondée sur la logique du second ordre. Elle examine ensuite la validité de cette proposition par rapport aux exemples et aux autres approches déjà formalisées, et relate une implémentation de ce système. Enfin, elle propose une brève discussion des sujets restant en suspens. / This manuscript is the written part of the doctoral work carried out by Bruno Mery under the supervision of Christian Bassac and Christian Retoré between 2006 and 2011, on the topic "Modelling lexical semantics in a type-theoretic framework". It is a doctoral thesis in computer science, in the area of natural language processing, aiming to provide a formal framework that takes into account, during the semantic analysis of a sentence, the information contributed by each of its words. After situating the topic, the thesis reviews the many works preceding it and adopts the tradition of the generative lexicon. It presents examples of the phenomena to be handled, and proposes a calculus based on second-order logic. It then examines the validity of this proposal against the examples and the other formal approaches already available, and describes an implementation of the system. Finally, it offers a short discussion of the questions that remain open.
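The kind of lexical phenomenon at stake, in the tradition of the generative lexicon, is that a single word can expose several typed facets of meaning and that predicates select among them: "the book is heavy" targets the physical object, while "the book is interesting" targets its informational content. The sketch below is a hypothetical, deliberately simplified rendering of such coercion; the types, lexicon and selection rule are illustrative assumptions, not the second-order calculus proposed in the thesis.

```python
# Illustrative sketch only: a toy rendering of lexical coercion in the spirit
# of the generative lexicon. The types, lexicon and selection rule are
# hypothetical examples, not the calculus developed in the thesis.
from dataclasses import dataclass

@dataclass(frozen=True)
class Facet:
    semantic_type: str   # e.g. "PhysicalObject" or "InformationalContent"
    denotation: str      # placeholder for the facet's meaning

# A polysemous noun such as "book" exposes several typed facets.
LEXICON = {
    "book": [
        Facet("PhysicalObject", "book-as-physical-volume"),
        Facet("InformationalContent", "book-as-text"),
    ],
}

# Predicates state the argument type they require.
PREDICATES = {
    "heavy": "PhysicalObject",
    "interesting": "InformationalContent",
}

def apply_predicate(pred: str, noun: str) -> str:
    """Coerce the noun to the facet whose type matches the predicate's requirement."""
    required = PREDICATES[pred]
    for facet in LEXICON[noun]:
        if facet.semantic_type == required:
            return f"{pred}({facet.denotation})"
    raise TypeError(f"No facet of '{noun}' has type {required}")

print(apply_predicate("heavy", "book"))        # heavy(book-as-physical-volume)
print(apply_predicate("interesting", "book"))  # interesting(book-as-text)
```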
3

What if? : an enquiry into the semantics of natural language conditionals

Hjálmarsson, Guðmundur Andri January 2010 (has links)
This thesis is essentially a portfolio of four disjoint yet thematically related articles that deal with some semantic aspect or another of natural language conditionals. The thesis opens with a brief introductory chapter that offers a short yet opinionated historical overview and a theoretical background of several important semantic issues of conditionals. The second chapter then deals with the issue of truth values and conditions of indicative conditionals. So-called Gibbard Phenomenon cases have been used to argue that indicative conditionals construed in terms of the Ramsey Test cannot have truth values. Since that conclusion is somewhat incredible, several alternative options are explored. Finally, a contextualised revision of the Ramsey Test is offered which successfully avoids the threats of the Gibbard Phenomenon. The third chapter deals with the question of where to draw the so-called indicative/subjunctive line. Natural language conditionals are commonly believed to be of two semantically distinct types: indicative and subjunctive. Although this distinction is central to many semantic analyses of natural language conditionals, there seems to be no consensus on the details of its nature. While trying to uncover the grounds for the distinction, we will argue our way through several plausible proposals found in the literature. Upon discovering that none of these proposals seems entirely suited, we will reconsider our position and make several helpful observations about the nature of conditional sentences. And finally, in light of our observations, we shall propose and argue for plausible grounds for the indicative/subjunctive distinction. The fourth chapter offers semantics for modal and amodal natural language conditionals based on the distinction proposed in the previous chapter. First, the nature of modal and amodal suppositions will be explored. Armed with an analysis of modal and amodal suppositions, the corresponding conditionals will be examined further. Consequently, the syntax of conditionals in English will be uncovered for the purpose of providing input for our semantics. And finally, a compositional semantics in generative grammar will be offered for modal and amodal conditionals. The fifth and final chapter defends Modus Ponens from alleged counterexamples. In particular, the chapter offers a solution to McGee's infamous counterexamples. First, the solutions hitherto offered to the counterexamples are all argued to be inadequate. After a couple of observations on the counterexamples' nature, a solution is offered and demonstrated. The solution suggests that the semantics of embedded natural language conditionals is more sophisticated than their surface syntax indicates. The heart of the solution is a translation function from the surface form of natural language conditionals to their logical form. Finally, the thesis ends with a conclusion that briefly summarises the main conclusions drawn in its preceding chapters.
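McGee's counterexamples, which the final chapter addresses, are built around the 1980 United States presidential election: "If a Republican wins, then if it is not Reagan who wins it will be Anderson" and "A Republican will win" both look acceptable, yet the Modus Ponens conclusion "If it is not Reagan who wins it will be Anderson" does not, since Carter was the clear runner-up. The sketch below displays the puzzle under an Adams-style probabilistic reading, on which the acceptability of an indicative conditional tracks the corresponding conditional probability; the poll figures are hypothetical, and this reading is only one way of exhibiting the problem, not the translation-based solution argued for in the thesis.

```python
# Illustrative sketch only: an Adams-style probabilistic reading of McGee's
# 1980-election counterexample, where the acceptability of an indicative
# conditional is measured by a conditional probability. The numbers are
# hypothetical poll figures.

# Assumed probabilities of each candidate winning (Reagan and Anderson
# are the Republicans, Carter the Democrat).
P = {"reagan": 0.60, "carter": 0.35, "anderson": 0.05}

def prob(event):
    """Probability of a set of outcomes."""
    return sum(P[w] for w in event)

def conditional(consequent, antecedent):
    """P(consequent | antecedent), the Adams-style measure of 'if A then C'."""
    return prob(consequent & antecedent) / prob(antecedent)

republican = {"reagan", "anderson"}
not_reagan = {"carter", "anderson"}
anderson = {"anderson"}

# Premise 2: "A Republican will win" is highly probable.
print(prob(republican))                                # 0.65
# Premise 1: "If a Republican wins, then if not Reagan, Anderson" -- within
# the supposition that a Republican wins, the inner conditional is certain.
print(conditional(anderson, republican & not_reagan))  # 1.0
# Modus Ponens conclusion: "If it is not Reagan, it will be Anderson" -- yet
# this has low probability, since Carter is the likely winner if Reagan loses.
print(conditional(anderson, not_reagan))               # 0.125
```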
