  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Statistical identification of text genre and author in unrestricted Modern Greek texts

Σταματάτος, Ευστάθιος 22 September 2009 (has links)
No description available.
72

Underspecified quantification

Herbelot, Aurelie January 2010 (has links)
No description available.
73

The Algorithmic Expansion of Stories

Thomas, Craig Michael 12 October 2010 (has links)
This research examines how the contents and structure of a story may be enriched by computational means. A review of pertinent semantic theory and previous work on the structural analysis of folktales is presented. Merits and limitations of several content-generation systems are discussed. The research develops three mechanisms - elaboration, interpolation, and continuity fixes - to enhance story content, address issues of rigid structure, and fix problems with the logical progression of a story. Elaboration works by adding or modifying information contained within a story to provide detailed descriptions of an event. Interpolation works by adding detail between high-level story elements dictated by a story grammar. Both methods search for appropriate semantic functions contained in a lexicon. Rules are developed to ensure that the selection of functions is consistent with the context of the story. Control strategies for both mechanisms are proposed that restrict the quantity and content of candidate functions. Finally, a method of checking and correcting inconsistencies in story continuity is proposed. Continuity checks are performed using semantic threads that connect an object or character to a sequence of events. Unexplained changes in state or location are fixed with interpolation. The mechanisms are demonstrated with simple examples drawn from folktales, and the effectiveness of each is discussed. While the thesis focuses on folktales, it forms the basis for further work on the generation of more complex stories in the greater realm of fiction. / Thesis (Ph.D., Computing)--Queen's University, 2010.
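The continuity-fix idea in the abstract above can be illustrated with a short sketch. This is not the thesis's system; the event representation, the function name `continuity_fix`, and the sample story are all invented for illustration. A semantic thread tracks each character's last known location, and an unexplained jump is repaired by interpolating an explicit movement event.

```python
# Illustrative sketch (hypothetical, not the thesis's implementation):
# a semantic thread records each character's location across a sequence
# of story events; an unexplained change of location is repaired by
# interpolating a movement event, in the spirit of continuity fixes.

def continuity_fix(events):
    """Insert a 'moves' event wherever a character's location changes
    without an intervening movement. Events are (actor, action, place)."""
    fixed = []
    location = {}  # actor -> last known location
    for actor, action, place in events:
        last = location.get(actor)
        if last is not None and place != last and action != "moves":
            # Unexplained jump: interpolate an explicit movement event.
            fixed.append((actor, "moves", place))
        fixed.append((actor, action, place))
        location[actor] = place
    return fixed

story = [
    ("hero", "wakes", "cottage"),
    ("hero", "fights dragon", "forest"),  # jump: cottage -> forest
]
print(continuity_fix(story))
```

Running this on the two-event story inserts a `("hero", "moves", "forest")` event between waking and the fight, restoring a consistent thread.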
74

Automatic encoding of natural language medical problems

Hansard, Martha Snyder 12 1900 (has links)
No description available.
75

Parallel parsing of context-free languages on an array of processors

Langlois, Laurent Chevalier January 1988 (has links)
Kosaraju [Kosaraju 69] and, independently ten years later, Guibas, Kung and Thompson [Guibas 79] devised an algorithm (K-GKT) for solving on an array of processors a class of dynamic programming problems of which general context-free language (CFL) recognition is a member. I introduce an extension to K-GKT which allows parsing as well as recognition. The basic idea of the extension is to add counters to the processors. These act as pointers to other processors. The extended algorithm consists of three phases which I call the recognition phase, the marking phase and the parse output phase. I first consider the case of unambiguous grammars. I show that in that case, the algorithm has O(n² log n) space complexity and a linear time complexity. To obtain these results I rely on a counter implementation that allows the execution in constant time of each of the operations: set to zero, test if zero, increment by 1 and decrement by 1. I provide a proof of correctness of this implementation. I introduce the concept of efficient grammars. One factor in the multiplicative constant hidden behind the O(n² log n) space complexity measure for the algorithm is related to the number of non-terminals in the (unambiguous) grammar used. I say that a grammar is k-efficient if it allows the processors to store not more than k pointer pairs. I call a 1-efficient grammar an efficient grammar. I show that two properties that I call nt-disjunction and rhs-disjunction, together with unambiguity, are sufficient but not necessary conditions for grammar efficiency. I also show that unambiguity itself is not a necessary condition for efficiency. I then consider the case of ambiguous grammars. I present two methods for outputting multiple parses. Both output each parse in linear time. One method has O(n³ log n) space complexity while the other has O(n² log n) space complexity. I then address the issue of problem decomposition.
I show how part of my extension can be adapted, using a standard technique, to process inputs that would be too large for an array of some fixed size. I then discuss briefly some issues related to implementation. I report on an actual implementation on the I.C.L. DAP. Finally, I show how another systolic CFL parsing algorithm, by Chang, Ibarra and Palis [Chang 87], can be generalized to output parses in preorder and inorder.
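The dynamic programming problem class the abstract refers to includes general CFL recognition, whose standard sequential analogue is the CYK algorithm. The sketch below is a plain sequential recogniser for a grammar in Chomsky normal form, not the systolic K-GKT extension itself (which distributes the table cells over an array of processors); the toy grammar is invented for illustration.

```python
# Sequential CYK recognition for a grammar in Chomsky normal form.
# This is the uniprocessor analogue of the dynamic programming table
# that K-GKT distributes over an array of processors; it is a sketch,
# not the thesis's algorithm.

def cyk_recognise(word, unary, binary, start="S"):
    """unary: {terminal: set of nonterminals}
    binary: {(B, C): set of nonterminals A with rule A -> B C}."""
    n = len(word)
    # table[i][j] holds the nonterminals deriving word[i : i+j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][0] = set(unary.get(ch, ()))
    for length in range(2, n + 1):          # span length
        for i in range(n - length + 1):     # span start
            for k in range(1, length):      # split point
                for b in table[i][k - 1]:
                    for c in table[i + k][length - k - 1]:
                        table[i][length - 1] |= binary.get((b, c), set())
    return start in table[0][n - 1]

unary = {"a": {"A"}, "b": {"B"}}   # A -> a, B -> b
binary = {("A", "B"): {"S"}}       # S -> A B
print(cyk_recognise("ab", unary, binary))  # → True
print(cyk_recognise("ba", unary, binary))  # → False
```

The table has O(n²) cells and each cell does O(n) work over split points, giving the familiar cubic sequential time that the systolic array trades for linear time on O(n²) processors.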
76

Modelling the acquisition of natural language categories

Fountain, Trevor Michael January 2013 (has links)
The ability to reason about categories and category membership is fundamental to human cognition, and as a result a considerable amount of research has explored the acquisition and modelling of categorical structure from a variety of perspectives. These range from feature norming studies involving adult participants (McRae et al. 2005) to long-term infant behavioural studies (Bornstein and Mash 2010) to modelling experiments involving artificial stimuli (Quinn 1987). In this thesis we focus on the task of natural language categorisation, modelling the cognitively plausible acquisition of semantic categories for nouns based on purely linguistic input. Focusing on natural language categories and linguistic input allows us to make use of the tools of distributional semantics to create high-quality representations of meaning in a fully unsupervised fashion, a property not commonly seen in traditional studies of categorisation. We explore how natural language categories can be represented using distributional models of semantics; we construct concept representations from corpora and evaluate their performance against psychological representations based on human-produced features, and show that distributional models can provide a high-quality substitute for equivalent feature representations. Having shown that corpus-based concept representations can be used to model category structure, we turn our focus to the task of modelling category acquisition and exploring how category structure evolves over time. We identify two key properties necessary for cognitive plausibility in a model of category acquisition, incrementality and non-parametricity, and construct a pair of models designed around these constraints. Both models are based on a graphical representation of semantics in which a category represents a densely connected subgraph.
The first model identifies such subgraphs and uses these to extract a flat organisation of concepts into categories; the second uses a generative approach to identify implicit hierarchical structure and extract a hierarchical category organisation. We compare both models against existing methods of identifying category structure in corpora, and find that they outperform their counterparts on a variety of tasks. Furthermore, the incremental nature of our models allows us to predict the structure of categories during formation and thus to more accurately model category acquisition, a task to which batch-trained exemplar and prototype models are poorly suited.
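A toy rendering of the graph-based view described above, not either of the thesis's models: words become nodes, an edge joins two words whose distributional similarity exceeds a threshold, and each connected component is read off as a flat category. The miniature co-occurrence vectors and the threshold are invented for illustration.

```python
# Sketch of flat category extraction from a similarity graph.
# Vectors are toy co-occurrence counts; the real models work over
# corpus-derived distributional representations.
from math import sqrt

def cosine(u, v):
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def categories(vectors, threshold=0.5):
    words = list(vectors)
    # Build the similarity graph as an adjacency list.
    adj = {w: set() for w in words}
    for i, a in enumerate(words):
        for b in words[i + 1:]:
            if cosine(vectors[a], vectors[b]) >= threshold:
                adj[a].add(b)
                adj[b].add(a)
    # Each connected component (found by depth-first search) is a category.
    seen, cats = set(), []
    for w in words:
        if w not in seen:
            stack, comp = [w], set()
            while stack:
                x = stack.pop()
                if x not in comp:
                    comp.add(x)
                    stack.extend(adj[x] - comp)
            seen |= comp
            cats.append(comp)
    return cats

vectors = {
    "cat": {"fur": 3, "purr": 1},
    "dog": {"fur": 3, "bark": 1},
    "car": {"wheel": 3, "road": 1},
    "bus": {"wheel": 2, "road": 1},
}
print(categories(vectors))  # animals and vehicles separate
```

Connected components stand in here for the "densely connected subgraph" criterion; the thesis models use a more refined notion of density, and the second model additionally infers hierarchy.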
77

Some PL/1 subroutines for natural language analysis

Fink, John William January 1973 (has links)
The purpose of this dissertation was to write and make available a small set of PL/1 computer subroutines that can be used in other computer programs attempting to do any kind of analysis of natural language data. The subroutines presented in the dissertation handle some of the housekeeping, that is, the jobs that must be done before analysis can begin. Four subroutines were written and tested: a subroutine called FINDONE (find one) that isolates words in an input string of characters, and three subroutines, called the LAGADOs, that find words or word parts on lists of words or word parts. The reliability of the subroutines was tested in small testing programs and in a larger lexical diversity program that was modified to use the subroutines. FINDONE finds graphemic words and punctuation marks in an input character string. In addition, it truncates the input string from the left so that repeated calls of the subroutine find the words in the input string in sequence. FINDONE takes as parameters the name of the input string and a name to be associated with the word found. The three LAGADO functions search for words on lists of words. Each of the functions is designed to search a list of a certain structure. LAGADO1 searches an alphabetized list where the length of the list is known. It uses the economical binary search technique. LAGADO1 takes as parameters the name of the word searched for, the name of the list to be searched, and the length of the list to be searched. LAGADO2 searches a list in any order that is alphabetically indexed by an indexing array. LAGADO2 takes as parameters the name of the word being searched for, the name of the list being searched, the name of the indexing array, and the length of the list being searched. LAGADO3 searches any list that has an end-of-list symbol. LAGADO3 uses a linear search technique and looks at each element of the list being searched in order until it either finds the word being searched for or the final boundary symbol.
LAGADO3 takes as parameters the name of the word searched for, the name of the list being searched, and the name of the end-of-list symbol. Each of the LAGADO functions returns a positive value equal to the subscript of the list element that matches the input word if the input word is matched, or a negative number whose absolute value is the subscript of the cell where the input word would have to be inserted into the list if the input word is not matched. Two of the subroutines, FINDONE and LAGADO2, were tested by being incorporated into SUPRFRQ, a lexical diversity program developed from an earlier program written by Robert Wachal. An appendix includes the documented texts of the subroutines and of the lexical diversity program. In addition, the appendix includes the results of a run of SUPRFRQ on four short dialect texts collected by Charles Houck in Leeds, England.
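The return convention described above (a positive subscript on a match, a negated insertion point otherwise) can be sketched in a few lines. This is a hypothetical Python re-creation of LAGADO1's contract, not the PL/1 source, using 1-based subscripts as PL/1 arrays conventionally do.

```python
# Hypothetical re-creation of LAGADO1's contract: binary search over an
# alphabetized list, returning the 1-based subscript of a match, or a
# negative number whose absolute value is the 1-based subscript of the
# cell where the word would be inserted.

def lagado1(word, wordlist):
    lo, hi = 0, len(wordlist) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if wordlist[mid] == word:
            return mid + 1          # 1-based subscript, as in PL/1 arrays
        if wordlist[mid] < word:
            lo = mid + 1
        else:
            hi = mid - 1
    return -(lo + 1)                # negated 1-based insertion point

words = ["ant", "bee", "cat", "dog"]
print(lagado1("cat", words))   # → 3
print(lagado1("cow", words))   # → -4
```

The negative return lets a caller insert a missing word at `abs(result)` without a second search, the same idea later popularized by `java.util.Arrays.binarySearch` and Python's `bisect`.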
78

Using corpus linguistics to address some questions of Phoenician grammar and syntax found in the Kulamuwa inscription : identifying the presence and function of the infinitive absolute, the suffixed conjugation, and the WAW /

Booth, Scott W. January 2007 (has links) (PDF)
Thesis (M. A.)--Trinity International University, 2007. / Includes bibliographical references (leaves 217-228).
79

Learning bilingual semantic frames /

Wu, Zhaojun. January 2008 (has links)
Thesis (M.Phil.)--Hong Kong University of Science and Technology, 2008. / Includes bibliographical references (leaves 70-75). Also available in electronic version.
80

Using corpus linguistics to address some questions of Phoenician grammar and syntax found in the Kulamuwa inscription identifying the presence and function of the infinitive absolute, the suffixed conjugation, and the WAW /

Booth, Scott W. January 2007 (has links)
Thesis (M.A.)--Trinity International University, 2007. / Includes bibliographical references (leaves 217-228).
