  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Désambiguïsation de corpus monolingues par des approches de type Lesk (Word sense disambiguation of monolingual corpora using Lesk-type approaches)

Vasilescu, Florentina January 2003 (has links)
Master's thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
12

Quelques modèles de langage statistiques et graphiques lissés avec WordNet (Some statistical and graphical language models smoothed with WordNet)

Jauvin, Christian January 2003 (has links)
Master's thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
13

Réduction de dimension pour modèles graphiques probabilistes appliqués à la désambiguïsation sémantique (Dimensionality reduction for probabilistic graphical models applied to word sense disambiguation)

Boisvert, Maryse January 2004 (has links)
Master's thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
14

Semantic Distance in WordNet: A Simplified and Improved Measure of Semantic Relatedness

Scriver, Aaron January 2006 (has links)
Measures of semantic distance have received a great deal of attention recently in the field of computational lexical semantics. Although techniques for approximating the semantic distance of two concepts have existed for several decades, the introduction of the WordNet lexical database and improvements in corpus analysis have enabled significant improvements in semantic distance measures.

In this study we investigate a special kind of semantic distance, called semantic relatedness. Lexical semantic relatedness measures have proved useful for a number of applications, such as word sense disambiguation and real-word spelling error correction. Most relatedness measures rely on the observation that the shortest path between nodes in a semantic network represents the relationship between two concepts; the strength of relatedness is computed in terms of this path.

This dissertation makes several significant contributions to the study of semantic relatedness. We describe a new measure that calculates semantic relatedness as a function of the shortest path in a semantic network. The proposed measure achieves better results than other standard measures, yet is much simpler than previous models: it reaches a correlation of r = 0.897 with the judgments of human test subjects on a standard benchmark data set, the best performance reported in the literature. We also provide a general formal description for a class of semantic distance measures, namely those that compute semantic distance from the shortest path in a semantic network. Lastly, we suggest a new methodology for developing path-based semantic distance measures that limits the possibility of unnecessary complexity in future measures.
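The path-based idea described in this abstract can be sketched in a few lines. The following is a minimal illustration, not the dissertation's measure: the node names and the 1/(1 + distance) scoring are invented for the example, standing in for a WordNet-style semantic network.

```python
from collections import deque

# Toy is-a hierarchy standing in for a WordNet-style semantic network.
# These node names are illustrative only, not actual WordNet synsets.
EDGES = {
    "entity": ["animal", "vehicle"],
    "animal": ["dog", "cat"],
    "vehicle": ["car", "bicycle"],
}

def build_graph(edges):
    """Make the directed is-a links bidirectional for path finding."""
    graph = {}
    for parent, children in edges.items():
        for child in children:
            graph.setdefault(parent, set()).add(child)
            graph.setdefault(child, set()).add(parent)
    return graph

def shortest_path_length(graph, start, goal):
    """Breadth-first search for the shortest path (in edges) between nodes."""
    if start == goal:
        return 0
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        for nbr in graph.get(node, ()):
            if nbr == goal:
                return dist + 1
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return None  # no path between the two nodes

def relatedness(graph, a, b):
    """Map path length to a (0, 1] score: shorter path = more related."""
    d = shortest_path_length(graph, a, b)
    return None if d is None else 1.0 / (1 + d)

graph = build_graph(EDGES)
print(relatedness(graph, "dog", "cat"))  # siblings: path length 2 -> 1/3
print(relatedness(graph, "dog", "car"))  # path length 4 -> 1/5
```

A real measure of the kind the abstract describes would operate on the full WordNet graph and weight the path more carefully; the sketch only shows the shortest-path core that such measures share.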
16

Word Sense Disambiguation Using WordNet and Conceptual Expansion

Guo, Jian-Yi 24 January 2006 (has links)
A single English word can have several different meanings, and a single meaning can be expressed by several different English words; the meaning of a word depends on the sense intended. Selecting the most appropriate meaning for an ambiguous word within a context is therefore a critical problem for applications that use natural language processing technologies. At present, however, most word sense disambiguation methods either handle only restricted parts of speech, such as nouns, or achieve unsatisfactory accuracy. In this study, a new word sense disambiguation method using the WordNet lexical database, SemCor text files, and the Web is presented. In addition to nouns, the proposed method also attempts to disambiguate verbs, adjectives, and adverbs in sentences. The text files and sentences investigated in the experiments were randomly selected from SemCor. The semantic similarity between the senses of the two semantically ambiguous words in a word pair is measured to select the candidate senses of a target word in that pair. A synonym weighting method accounts for the sense diversity within the synonym sets that WordNet provides, from which the corresponding synonym sets of the candidate senses are determined. The candidate senses, expanded with the senses in the corresponding synonym sets and enhanced by a context window technique, form new queries. After the new queries are submitted to a search engine to find matching documents on the Web, the candidate senses are ranked by the number of matching documents found. The first sense in the ranked list is taken as the most appropriate sense of the target word. The proposed method, along with Stetina et al.'s and Mihalcea et al.'s methods, is evaluated on the SemCor text files.
The experimental results show that, for the top sense selected, the method's average accuracy of 81.3% across nouns, verbs, adjectives, and adverbs is slightly better than Stetina et al.'s method at 80% and Mihalcea et al.'s method at 80.1%. Furthermore, the proposed method is the only one whose accuracy for verbs reaches 70% for the top sense selected. Moreover, for the top three senses selected, this method is superior to the other two, with an average accuracy across the four parts of speech exceeding 96%. The proposed method is expected to improve the performance of word sense disambiguation applications in machine translation, document classification, and information retrieval.
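Several records on this page build on Lesk-style disambiguation: pick the sense whose definition overlaps the context the most. The following is a minimal sketch of that core idea only, not the Web-query method of this thesis; the senses and glosses are invented for the example rather than taken from WordNet.

```python
# Hand-written sense inventory for one ambiguous word (illustrative only).
SENSES = {
    "bank": {
        "bank.n.01": "a financial institution that accepts deposits and lends money",
        "bank.n.02": "sloping land beside a body of water such as a river",
    },
}

def lesk_score(context_words, gloss):
    """Count the words shared between the context and a sense gloss."""
    return len(set(context_words) & set(gloss.split()))

def disambiguate(word, context):
    """Return the sense whose gloss overlaps the context the most."""
    context_words = context.lower().split()
    return max(SENSES[word],
               key=lambda s: lesk_score(context_words, SENSES[word][s]))

print(disambiguate("bank", "he sat on the bank of the river to fish"))
# -> bank.n.02 (the gloss shares "of" and "river" with the context)
```

The thesis described above replaces the gloss-overlap score with synonym-expanded Web queries and ranks candidate senses by document counts, but the select-the-best-overlapping-sense skeleton is the same.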
17

MERGE MAPS: um mecanismo computacional para mesclagem de mapas conceituais (a computational mechanism for merging concept maps)

Vassoler, Geraldo Angelo 04 September 2014 (has links)
Concept maps are graphical representations of knowledge about a given domain, often used in pedagogical approaches to promote meaningful learning and to represent and organize a set of meanings in a propositional structure. In this learning process, concept maps can be considered a means of identifying concepts and their meanings, making knowledge explicit. However, monitoring this learning is slow and individual: in a collective setting, the knowledge of a whole class about a given topic cannot easily be measured. This dissertation proposes an approach focused on the merging of concept maps and its implications for monitoring and evaluating class performance in general contexts. A computational solution was developed to perform the merging automatically, in order to support better collective assessment by teachers.
Keywords: concept maps, map merging, assessment of learning, WordNet.
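If a concept map is treated as a set of (concept, link, concept) propositions, merging reduces to combining those sets. The following is a toy sketch under that assumption, not the dissertation's mechanism; the student maps are invented, and a real merger would also need to reconcile differently worded but equivalent propositions (e.g. via WordNet synonymy).

```python
# Merge concept maps represented as sets of (concept, link, concept) triples.
def merge_maps(*maps):
    """Union the propositions, counting how many input maps assert each one."""
    merged = {}
    for cmap in maps:
        for prop in cmap:
            merged[prop] = merged.get(prop, 0) + 1
    return merged

student_a = {("plant", "needs", "water"), ("plant", "performs", "photosynthesis")}
student_b = {("plant", "needs", "water"), ("plant", "needs", "sunlight")}

merged = merge_maps(student_a, student_b)
for prop, count in sorted(merged.items()):
    print(prop, count)  # count = number of students asserting the proposition
```

The per-proposition counts give a teacher a frequency view of what the class as a whole has grasped, which is the collective-assessment use the abstract describes.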
18

Propriedades semânticas e alternâncias sintáticas do verbo: um exercício exploratório de delimitação do significado (Semantic properties and syntactic alternations of the verb: an exploratory exercise in delimiting meaning)

Ávila, Maria Carolina [UNESP] 20 March 2006 (has links) (PDF)
This thesis presents an inquiry into the lexical-syntactic and lexical-semantic representation of verbs, from the perspective that aspects of a verb's argument structure reflect its conceptual structure. In the linguistic domain, the thesis investigates both the lexical-semantic and lexical-syntactic properties of the synset of Brazilian Portuguese verbs extracted from the WordNet.Br lexical database that corresponds to Levin's (1993) class of verbs of Possessional Deprivation (Steal/Rob verbs).
The lexical-semantic and lexical-syntactic representations are grounded in Jackendoff's (1990, 2002) Semantic Structures theory and Hale and Keyser's (2002) Argument Structure theory, respectively. In the computational-linguistic domain, the thesis presents both a strategy for constructing and refining the WordNet.Br verb synsets and a formal representation for describing the syntactic and conceptual dimensions of verbs.
19

Polyset: modelo linguístico-computacional para a estruturação de redes de polissemia de nominais (a linguistic-computational model for structuring polysemy networks of nominals)

Alves, Isa Mara da Rosa. January 2009 (has links)
This research aims at representing noun polysemy so that it can be useful to computational systems; more specifically, the subject of this work is the inclusion of specifications of polysemy relations in wordnet bases, particularly in WordNet.Br (Dias-da-Silva, 1996, 1998, 2003). The methodology is based on Dias-da-Silva (1996, 1998, 2003, 2006), comprising three mutually complementary domains: the linguistic, the computational-linguistic, and the computational. The computational-linguistic domain both provided the subject for this research and articulated the relationship between the linguistic and computational domains. From the investigations carried out in the computational-linguistic domain, we highlight the relevance of introducing distinct levels of generality among meanings in a database, so as to reduce the amount of lexical processing carried out by the system; at the same time, that multiple representation provides the necessary information for a system that needs a higher degree of meaning detail. This kind of task is still a challenge for wordnets. From the linguistic domain, we highlight that Cognitive Lexical Semantics proved to be the most suitable theory for the purposes of this thesis. Viewing the phenomenon of multiple meaning from the cognitive perspective made it possible to describe meanings as complex entities structured in terms of nets. Nets of synchronic polysemy, in their free, multidimensional configuration, as proposed by Blank (2003) and Geeraerts (2006), proved to be the most suitable descriptive strategy for representing meaning flexibility for the purposes of this thesis. Answering to the applied phase of both the linguistic and computational-linguistic domains, we propose a representation model called polyset. Polysets are constructs structured in terms of polysemy nets, allowing...
(Complete abstract: click electronic access below) / Advisor: Bento Carlos Dias da Silva / Co-advisor: Rove Luiza de Oliveira Chishman / Committee: Beatriz Nunes de Oliveira Longo, Gladis Maria de Barcellos Almeida, Thiago A. S. Prado, Heronides M. M. Moura / Doctorate
20

Information Retrieval Using Lucene and WordNet

Whissel, Jhon F. 23 December 2009 (has links)
No description available.
