21. Traitement automatique d'un dictionnaire des synonymes: étude de sa structure, méthode de contrôle et de perfectionnement [Automatic processing of a dictionary of synonyms: a study of its structure, and a method for its verification and improvement] / Kahlmann, André, January 1975
Thesis--Stockholm. / Includes bibliographical references (p. 126-128).
23. Tree templates and subtree transformational grammars / Kron, Hans Hermann. January 1975
Thesis (Ph. D.)--University of California, Santa Cruz, 1975. / Typescript. Includes bibliographical references (leaves 155-159).
24. Stochastic transduction for English grapheme-to-phoneme conversion / Luk, Robert Wing Pong. January 1992
No description available.
25. Generating referring expressions in a domain of objects and processes / Dale, Robert. January 1989
No description available.
26. Constraint-based phonology / Bird, Steven. January 1991
No description available.
27. Systematic parameterized complexity analysis in computational phonology / Wareham, Harold. 20 November 2017
Many computational problems are NP-hard and hence probably do not have fast, i.e., polynomial-time, algorithms. Such problems may yet have non-polynomial-time algorithms whose complexities are functions of particular aspects of the problem, i.e., algorithms whose running time is upper-bounded by f(k)·|x|ᶜ, where f is an arbitrary function, |x| is the size of the input x to the algorithm, k is an aspect of the problem, and c is a constant independent of |x| and k. Given such algorithms, it may still be possible to obtain optimal solutions for large instances of NP-hard problems for which the appropriate aspects are of small size or value. Questions about the existence of such algorithms are most naturally addressed within the theory of parameterized computational complexity developed by Downey and Fellows.
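To make the f(k)·|x|ᶜ bound concrete, here is a minimal sketch of the classic bounded-search-tree algorithm for Vertex Cover, a textbook fixed-parameter tractable problem; it illustrates the general running-time form (f(k) = 2ᵏ, c = 1) and is not an algorithm from the thesis:

```python
# Illustrative sketch only (not from the thesis): bounded search tree for
# Vertex Cover, the standard example of an FPT algorithm. Running time is
# O(2^k * |E|), i.e., of the form f(k) * |x|^c with f(k) = 2^k and c = 1.

def vertex_cover(edges, k):
    """Return a vertex cover of size <= k, or None if none exists."""
    if not edges:
        return set()          # nothing left to cover
    if k == 0:
        return None           # edges remain but the budget is spent
    u, v = edges[0]
    # Any cover must contain u or v; branch on both choices. Each branch
    # spends one unit of the budget k, so the tree has at most 2^k leaves.
    for w in (u, v):
        rest = [(a, b) for (a, b) in edges if w not in (a, b)]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {w}
    return None

# Example: the path 1-2-3-4 has a cover of size 2, e.g. {1, 3}.
print(vertex_cover([(1, 2), (2, 3), (3, 4)], 2))
```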
This thesis considers the merits of a systematic parameterized complexity analysis in which results are derived relative to all subsets of a specified set of aspects of a given NP-hard problem. This set of results defines an “intractability map” showing, for each set of aspects, whether the problem has an algorithm whose non-polynomial time complexity is purely a function of those aspects. Such maps are useful not only for delimiting the set of possible algorithms for an NP-hard problem but also for highlighting those aspects that are responsible for this NP-hardness.
These points will be illustrated by systematic parameterized complexity analyses of problems associated with five theories of phonological processing in natural languages—namely, Simplified Segmental Grammars, finite-state transducer based rule systems, the KIMMO system, Declarative Phonology, and Optimality Theory. The aspects studied in these analyses broadly characterize the representations and mechanisms used by these theories. These analyses suggest that the computational complexity of phonological processing depends not on such details as whether a theory uses rules or constraints or has one, two, or many levels of representation but rather on the structure of the representation-relations encoded in individual mechanisms and the internal structure of the representations.
28. An incremental parser for government-binding theory / Macias, Benjamin. January 1991
No description available.
29. A default logic approach to the derivation of natural language presuppositions / Mercer, Robert Ernest. January 1987
A hearer's interpretation of the meaning of an utterance consists of more than what is conveyed by just the sentence itself. Other parts of the meaning are produced as inferences from three knowledge sources: the sentence itself, knowledge about the world, and knowledge about language use. One inference of this type is the natural language presupposition. This category of inference is distinguished by a number of features: the inferences are generated only, but not necessarily, if certain lexical or syntactic environments are present in the uttered sentence; normal interpretations of these presuppositional environments in the scope of a negation in a simple sentence produce the same inferences as the unnegated environment; and the inference can be cancelled by information in the conversational context.
We propose a method for deriving presuppositions of natural language sentences that has its foundations in an inference-based concept of meaning. Whereas standard (monotonic) forms of reasoning are able to capture portions of a sentence's meaning, such as its entailments, non-monotonic forms of reasoning are required to derive its presuppositions. Gazdar's idea that presuppositions must be consistent with the context, together with the usual connection of presuppositions with lexical and syntactic environments, motivates the use of Default Logic as the formal nonmonotonic reasoning system. Not only does the default logic approach provide a natural means to represent presuppositions, but a single (slightly restricted) default proof procedure is all that is required to generate them. The naturalness and simplicity of this method contrast with the traditional projection methods. The logical approach also permits a proper treatment of 'or' and 'if ... then ...', which is not available to any of the projection methods.
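For context, a Reiter default rule has the form shown below: from prerequisite α, conclude consequent γ provided justification β is consistent with what is known. A presuppositional default for a factive verb such as 'regret' might be written as follows (an illustrative reconstruction of the general technique, not Mercer's exact rule):

```latex
\frac{\alpha : \beta}{\gamma}
\qquad\text{e.g.}\qquad
\frac{\mathit{regret}(x,\varphi) \;:\; \varphi}{\varphi}
```

The consistency check in the justification is what makes the inference cancellable: if the conversational context already entails ¬φ, the default cannot fire, matching the cancellability feature noted above.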
The default logic approach is compared with four others: three projection methods and one non-projection method. As well as demonstrating empirical and methodological difficulties with the other methods, this detailed investigation motivates the topics discussed in connection with the default logic approach. Some of the difficulties have been solved using the default logic method, while possible solutions for others have only been sketched.
A brief discussion of a new method for providing corrective answers to questions is presented. The novelty of this method is that the corrective answers are viewed as correcting presuppositions of the answer rather than of the question.
30. Multi-dynamic Bayesian networks for machine translation and NLP / Filali, Karim. January 2007
Thesis (Ph. D.)--University of Washington, 2007. / Vita. Includes bibliographical references (p. 174-191).