  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

An efficient treatment of quantification in underspecified semantic representations

Willis, Alistair, January 2000
No description available.
2

The Algorithmic Expansion of Stories

Thomas, Craig Michael, 12 October 2010
This research examines how the contents and structure of a story may be enriched by computational means. A review of pertinent semantic theory and previous work on the structural analysis of folktales is presented, and the merits and limitations of several content-generation systems are discussed. The research develops three mechanisms (elaboration, interpolation, and continuity fixes) to enhance story content, address issues of rigid structure, and fix problems with the logical progression of a story. Elaboration adds or modifies information contained within a story to provide detailed descriptions of an event. Interpolation adds detail between high-level story elements dictated by a story grammar. Both methods search a lexicon for appropriate semantic functions, and rules are developed to ensure that the selected functions are consistent with the context of the story. Control strategies for both mechanisms are proposed that restrict the quantity and content of candidate functions. Finally, a method of checking and correcting inconsistencies in story continuity is proposed. Continuity checks are performed using semantic threads that connect an object or character to a sequence of events; unexplained changes in state or location are fixed with interpolation. The mechanisms are demonstrated with simple examples drawn from folktales, and the effectiveness of each is discussed. While the thesis focuses on folktales, it forms the basis for further work on the generation of more complex stories in the greater realm of fiction. / Thesis (Ph.D., Computing) -- Queen's University, 2010.
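The continuity-checking idea described above — following a character through a sequence of events via a semantic thread and flagging unexplained changes — can be illustrated with a minimal sketch. All names, the event format, and the set of movement actions here are illustrative assumptions, not drawn from the thesis itself.

```python
# Hedged sketch: continuity checking over a "semantic thread".
# An event records an actor, an action, and a location; a change of
# location with no intervening movement action is a continuity gap
# that the thesis would repair via interpolation.
from dataclasses import dataclass

@dataclass
class Event:
    actor: str
    action: str
    location: str

MOVEMENT_ACTIONS = {"go", "travel", "ride"}  # illustrative

def continuity_gaps(thread):
    """Return (index, prev_event, cur_event) triples where the actor's
    location changes without a preceding movement action."""
    gaps = []
    for i in range(1, len(thread)):
        prev, cur = thread[i - 1], thread[i]
        moved = prev.action in MOVEMENT_ACTIONS
        if cur.location != prev.location and not moved:
            gaps.append((i, prev, cur))
    return gaps

story = [
    Event("Ivan", "talk", "hut"),
    Event("Ivan", "fight", "forest"),  # gap: Ivan never travelled
]
print(continuity_gaps(story))
```

A repair step would then insert a movement event (e.g. Ivan travels to the forest) at each reported index, which is the role interpolation plays in the thesis.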
3

Dynamics of semantic change: Detecting, analyzing and modeling semantic change in corpus in short diachrony

Boussidan, Armelle, 27 June 2013
This doctoral thesis aims at elucidating the mechanisms of semantic change in short diachrony (or micro-diachrony) in corpora. To understand, analyze and model the dynamics of these changes and lay the groundwork for dynamic language processing, the corpus is divided into a series of one-month time periods. This work uses H. Ji's ACOM model, an extension of the Semantic Atlas; both are geometrical models of meaning representation based on correspondence factor analysis and the notion of cliques. Statistical issues in processing language and meaning, together with questions of modeling and representation, are dealt with in conjunction with linguistic, psychological and sociological aspects from a holistic multidisciplinary perspective, as conceived by the cognitive sciences. An approach to the detection and analysis of semantic change is proposed, along with case studies that address both large-scale and finely detailed phenomena, thereby offering several levels of granularity. Semantic change is treated on the one hand as the deployment of polysemy in time, and on the other as a consequence of communication methods tied to current media and their diffusion. Linguistics, sociology and information science all contribute to the study of the making of new meanings and new words. The analysis of the semantic networks of the studied items shows the constant reorganization of meanings over time and captures a few fundamental aspects of this process. The case studies focus primarily on the French term malbouffe ("junk food"), on the semantic change of the combining form bio-, and on the connotational drift of the French term mondialisation relative to its near-synonym globalisation ("globalization"). A software prototype was developed to support these case studies as well as future ones.
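The time-slicing strategy above can be illustrated with a crude proxy that is much simpler than the ACOM model: build a target word's co-occurrence neighbourhood in each monthly slice and measure the Jaccard overlap between consecutive months, so that a low overlap flags a candidate semantic shift. The window size, the toy corpus, and the overlap measure are all illustrative assumptions.

```python
# Hedged sketch of time-sliced change detection (not the ACOM model):
# a drop in neighbour-set overlap between consecutive monthly slices
# flags a possible semantic shift for the target word.
def neighbours(docs, target, window=2):
    """Collect words co-occurring with `target` within +/- `window` tokens."""
    seen = set()
    for doc in docs:
        toks = doc.split()
        for i, tok in enumerate(toks):
            if tok == target:
                lo, hi = max(0, i - window), i + window + 1
                seen.update(t for t in toks[lo:hi] if t != target)
    return seen

def drift(slices, target):
    """Jaccard overlap of the target's neighbour sets, slice to slice."""
    sets = [neighbours(docs, target) for docs in slices]
    out = []
    for a, b in zip(sets, sets[1:]):
        union = a | b
        out.append(len(a & b) / len(union) if union else 1.0)
    return out

# Toy illustration loosely echoing the bio- case study:
jan = ["bio food is healthy food", "organic bio produce"]
feb = ["bio fuel powers engines", "bio fuel research"]
print(drift([jan, feb], "bio"))  # near-zero overlap: candidate shift
```

A real detector over monthly press corpora would of course need frequency thresholds and association measures rather than raw token windows; the point is only the slice-and-compare dynamic.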
4

Modeling Preferences for Ambiguous Utterance Interpretations

Mirzapour, Mehdi, 28 September 2018
The problem of automatic logical meaning representation for ambiguous natural language utterances has long interested researchers in computational and logical semantics. Ambiguity in natural language may arise at the lexical, syntactic or semantic level of meaning construction, or it may be caused by other factors such as ungrammaticality and lack of the context in which the sentence is actually uttered. The traditional Montagovian framework and the family of its modern extensions have tried to capture this phenomenon by providing models that enable the automatic generation of logical formulas as meaning representations. However, one line of research has not yet been investigated in depth: ranking the interpretations of ambiguous utterances according to the real preferences of language users. This gap suggests a new direction of study, which this dissertation partially carries out by modeling meaning preferences in alignment with some well-studied theories of human preferential performance from the linguistics and psycholinguistics literature. To fulfill this goal, we use and extend Categorial Grammars for syntactic analysis and Categorial Proof Nets as our syntactic parse structures. We also use the Montagovian Generative Lexicon to derive multi-sorted logical formulas as semantic meaning representations. This paves the way for our five-fold contributions, namely: (i) ranking multiple-quantifier scopings by means of the underspecified Hilbert epsilon operator and categorial proof nets; (ii) modeling semantic gradience in sentences whose meanings involve implicit coercions, introducing a procedure for incorporating types and coercions into the Montagovian Generative Lexicon using crowd-sourced lexical data gathered through a serious game called JeuxDeMots; (iii) introducing new locality-based, referent-sensitive metrics for measuring linguistic complexity by means of Categorial Proof Nets; (iv) introducing algorithms for sentence completion with different linguistically motivated metrics for selecting the best candidates; and (v) integrating these computational ranking metrics into a single model.
5

LUDI: A framework for lexical disambiguation based on enriched frame semantics

Matos, Ely Edison da Silva, 27 June 2014
While in the field of Syntax the techniques, algorithms and applications of Natural Language Processing are well studied and relatively well established, the same maturity has not yet been reached in the field of Semantics. Aiming to contribute to the study of Computational Semantics, this work implements ideas and insights offered by Cognitive Linguistics, which is itself an alternative to Generative Linguistics. We attempt to bring together contributions from the computational domain (databases, graph theory, ontologies, inference mechanisms, connectionist models), the linguistic domain (Frame Semantics and the Generative Lexicon), and the application domain (FrameNet and the SIMPLE ontology) in order to address semantic issues more flexibly. The object of study is the disambiguation of Lexical Units. The results of the research are embodied in a computer application, the LUDI framework (Lexical Unit Discovery through Inference), composed of algorithms and data structures used for Lexical Unit disambiguation. The framework is a Natural Language Understanding application that can be integrated into information retrieval and summarization tools, as well as into Semantic Role Labeling (SRL) processes.
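The frame-based disambiguation task described above can be sketched in a heavily simplified form: pick, for a lexical unit in context, the frame whose associated vocabulary best overlaps the surrounding words. The frames, vocabularies, and scoring here are toy assumptions; LUDI itself works over FrameNet data with graph-based inference, which this sketch does not attempt to reproduce.

```python
# Hedged sketch: choose the evoked frame by vocabulary overlap with
# the sentence context (a stand-in for LUDI's richer inference).
FRAMES = {
    "Commerce_buy": {"money", "price", "store", "pay", "paid"},
    "Grasp": {"hand", "hold", "reach", "fingers"},
}

def disambiguate(lexical_unit, context_words):
    """Score each candidate frame by context overlap; return the best."""
    scores = {name: len(vocab & context_words)
              for name, vocab in FRAMES.items()}
    return max(scores, key=scores.get)

sentence = {"she", "paid", "the", "price", "at", "the", "store"}
print(disambiguate("get", sentence))
```

A realistic system would restrict candidates to the frames that actually list the lexical unit, and would weight overlap by frame-element roles rather than raw word sets.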
6

Predicting the N400 Component in Manipulated and Unchanged Texts with a Semantic Probability Model

Bjerva, Johannes, January 2012
Within the field of computational linguistics, recent research has made successful advances in integrating word space models with n-gram models. This is of particular interest when a model that encapsulates both semantic and syntactic information is desirable. A potential application can be found in psycholinguistics, where the neural response N400 has been found to occur in contexts with semantic incongruities. Previous research has found correlations between cloze probabilities and N400, while more recent research has found correlations between cloze probabilities and language models. This essay attempts to uncover whether a more direct connection between integrated models and N400 can be found, hypothesizing that low probabilities elicit strong N400 responses and vice versa. In an EEG experiment, participants read a text manipulated using a language model and a text left unchanged. Analysis of the results shows that the manipulations to some extent yielded results supporting the hypothesis. Further results were found when analysing responses to the unchanged text. However, no significant correlations between N400 and the computational model were found. Future research should improve the experimental paradigm so that a larger-scale EEG recording can be used to construct a large EEG corpus.
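One common way to integrate an n-gram model with a word space model, as discussed above, is linear interpolation of the two scores for a word in context. The sketch below illustrates that idea only; the toy counts, vectors, and the interpolation weight are assumptions, not the thesis's actual model. Under the stated hypothesis, a low combined score would predict a stronger N400.

```python
# Hedged sketch: interpolating n-gram probability (syntactic evidence)
# with word-space cosine similarity (semantic evidence).
import math

# Toy counts and vectors, purely illustrative:
bigram_counts = {("the", "cat"): 3, ("the", "dog"): 1}
unigram_counts = {"the": 4}
vectors = {"cat": (1.0, 0.2), "dog": (0.9, 0.4), "pet": (1.0, 0.3)}

def bigram_prob(prev, word):
    return bigram_counts.get((prev, word), 0) / unigram_counts.get(prev, 1)

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def combined_score(prev, word, context_word, lam=0.5):
    """Linear interpolation of n-gram and word-space evidence.
    A low score would predict a stronger N400 under the hypothesis."""
    sem = cosine(vectors[word], vectors[context_word])
    return lam * bigram_prob(prev, word) + (1 - lam) * sem

print(combined_score("the", "cat", "pet"))
print(combined_score("the", "dog", "pet"))
```

A text-manipulation procedure like the one in the experiment could then replace words with low-scoring alternatives to induce incongruities.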
7

Capturing Domain Semantics with Representation Learning: Applications to Health and Function

Newman-Griffis, Denis R., 29 September 2020
No description available.
8

Advancing cyber security with a semantic path merger packet classification algorithm

Thames, John Lane, 30 October 2012
This dissertation investigates and introduces novel algorithms, theories, and supporting frameworks to address the growing problem of Internet security. A distributed firewall and active-response architecture is introduced that enables any device within a cyber environment to participate in the active discovery of and response to cyber attacks. A theory of semantic association systems is developed for the general problem of knowledge discovery in data, and it forms the basis of a novel semantic path merger packet classification algorithm. The theoretical aspects of this algorithm are investigated, and its hardware-based implementation is evaluated in a comparative analysis against content-addressable memory. Experimental results show that the hardware implementation of the semantic path merger algorithm significantly outperforms content-addressable memory in terms of energy consumption and operational timing.
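The dissertation's actual path-merger construction is not given in the abstract, but the general idea of packet classification with merged shared paths can be loosely illustrated: rules over an ordered sequence of packet fields form paths in a trie, and rules sharing a field prefix share the corresponding trie nodes, so common tests are represented once. The rule format, field order, and default policy below are illustrative assumptions, not the dissertation's algorithm.

```python
# Loose illustration of prefix-merged packet classification rules
# (not the semantic path merger algorithm itself).
def build_trie(rules):
    """Insert each (field_sequence, action) rule as a path in a trie;
    shared field prefixes are merged into shared nodes."""
    root = {}
    for fields, action in rules:
        node = root
        for f in fields:
            node = node.setdefault(f, {})
        node["_action"] = action
    return root

def classify(trie, packet_fields, default="allow"):
    """Walk the trie along the packet's field values."""
    node = trie
    for f in packet_fields:
        if f not in node:
            return node.get("_action", default)
        node = node[f]
    return node.get("_action", default)

rules = [
    (("tcp", "10.0.0.1", "80"), "allow"),
    (("tcp", "10.0.0.1", "22"), "deny"),  # shares the tcp/10.0.0.1 prefix
]
trie = build_trie(rules)
print(classify(trie, ("tcp", "10.0.0.1", "22")))
```

A hardware realization, as evaluated in the dissertation, would map such shared-path lookups onto parallel logic rather than pointer chasing, which is where the comparison with content-addressable memory arises.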
