71 |
Mixed-initiative natural language dialogue with variable communicative modes. Ishizaki, Masato. January 1997 (has links)
As speech and natural language processing technologies advance, they have reached a stage where dialogue control, or initiative, can be studied with the aim of building usable and friendly human-computer interfaces such as computer dialogue systems. One of the major problems concerning dialogue initiative is who should take the initiative, and when. This thesis tackles the dialogue initiative problem with three approaches: 1. human dialogue data are examined for their local dialogue structures; 2. a dialogue manager that handles the variation found in human dialogue data with respect to initiative is proposed and implemented, and experimental results are obtained by having the implemented dialogue managers, each working with a parser and a generator, exchange natural language messages with each other; and 3. a mathematical model is constructed and used to analyse who should take the dialogue initiative and when. The first study shows that human dialogue data vary in the number of utterance units per turn and in utterance types, independently of which speaker holds the initiative. The second study shows that dialogues in which the initiative constantly alternates (mixed-initiative dialogues) are not always more efficient than those in which the initiative does not change (non-mixed-initiative dialogues). The third study concludes that, under the assumption that both speakers solve a problem in similar situations, mixed-initiative dialogues are more efficient than non-mixed-initiative dialogues when initiating utterances reduce the problem search space more efficiently than responding utterances. This conclusion can be simplified to the condition that an agent should take the dialogue initiative when he or she can make an effective utterance, as in situations where he or she has more knowledge than the partner with respect to the current goal.
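As a purely illustrative companion to the third study's conclusion, the toy simulation below assumes that each utterance multiplies the remaining problem search space by a speaker-dependent reduction factor; the space size and the factors are invented, and this is a sketch of the intuition only, not the thesis's mathematical model.

```python
import itertools

def utterances_to_solve(space, factor_seq, goal=1.0, cap=10_000):
    """Count utterances until the search space shrinks to `goal` or below."""
    count = 0
    for factor in factor_seq:
        if space <= goal or count >= cap:
            break
        space *= factor
        count += 1
    return count

SPACE = 1e6                 # hypothetical size of the shared problem search space
R_INIT, R_RESP = 0.3, 0.7   # hypothetical reduction factors: initiating vs. responding

# Non-mixed-initiative: one speaker always initiates and the other only
# responds, so initiating and responding utterances alternate.
fixed = utterances_to_solve(SPACE, itertools.cycle([R_INIT, R_RESP]))

# Mixed-initiative: whichever speaker can currently make the more effective
# (initiating) utterance speaks, so every utterance is an initiating one.
mixed = utterances_to_solve(SPACE, itertools.cycle([R_INIT]))

print(f"non-mixed-initiative: {fixed} utterances, mixed-initiative: {mixed} utterances")
```

With the assumed factors (initiating utterances shrinking the space faster than responding ones), the mixed-initiative dialogue finishes in fewer utterances, mirroring the stated condition.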
|
72 |
Probabilistic grammar induction from sentences and structured meanings. Kwiatkowski, Thomas Mieczyslaw. January 2012 (has links)
The meanings of natural language sentences may be represented as compositional logical-forms. Each word or lexicalised multiword element has an associated logical-form representing its meaning. Full sentential logical-forms are then composed from these word logical-forms via a syntactic parse of the sentence. This thesis develops two computational systems that learn both the word-meanings and the parsing model required to map sentences onto logical-forms from an example corpus of (sentence, logical-form) pairs. One of these systems is designed to provide a general-purpose method of inducing semantic parsers for multiple languages and logical meaning representations. Semantic parsers map sentences onto logical representations of their meanings and may form an important part of any computational task that needs to interpret the meanings of sentences. The other system is designed to model the way in which a child learns the semantics and syntax of their first language. Here, logical-forms are used to represent the potentially ambiguous context in which child-directed utterances are spoken, and a psycholinguistically plausible training algorithm learns a probabilistic grammar that describes the target language. This computational modelling task is important as it can provide evidence for or against competing theories of how children learn their first language. Both of the systems presented here are based upon two working hypotheses. First, that the correct parse of any sentence in any language is contained in a set of possible parses defined in terms of the sentence itself, the sentence's logical-form and a small set of combinatory rule schemata. The second working hypothesis is that, given a corpus of (sentence, logical-form) pairs that each support a large number of possible parses according to the schemata mentioned above, it is possible to learn a probabilistic parsing model that accurately describes the target language. The algorithm for semantic parser induction learns Combinatory Categorial Grammar (CCG) lexicons and discriminative probabilistic parsing models from corpora of (sentence, logical-form) pairs. This system is shown to achieve at or near state-of-the-art performance across multiple languages, logical meaning representations and domains. As the approach is not tied to any single natural or logical language, this system represents an important step towards widely applicable black-box methods for semantic parser induction. This thesis also develops an efficient representation of the CCG lexicon that separately stores language-specific syntactic regularities and domain-specific semantic knowledge. This factorised lexical representation improves the performance of CCG-based semantic parsers in sparse domains and also provides a potential basis for lexical expansion and domain adaptation for semantic parsers. The algorithm for modelling child language acquisition learns a generative probabilistic model of CCG parses from sentences paired with a context set of potential logical-forms containing one correct entry and a number of distractors. The online learning algorithm used is intended to be psycholinguistically plausible and to assume as little information specific to the task of language learning as possible. It is shown that this algorithm learns an accurate parsing model despite making very few initial assumptions.
It is also shown that the manner in which both word-meanings and syntactic rules are learnt is in accordance with observations of both of these learning tasks in children, supporting a theory of language acquisition that builds upon the two working hypotheses stated above.
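To make the supervision signal concrete, the toy sketch below counts word/symbol co-occurrences over a handful of invented (sentence, logical-form) pairs and associates each word with the symbol it predicts beyond that symbol's baseline frequency. It illustrates only the learning setting of pairing sentences with logical forms, not the CCG-based induction algorithms developed in the thesis; the sentences, predicate names and scoring heuristic are all invented.

```python
from collections import Counter, defaultdict
import re

# Invented toy (sentence, logical-form) pairs, purely for illustration.
PAIRS = [
    ("john walks", "walk(john)"),
    ("mary walks", "walk(mary)"),
    ("john sees mary", "see(john, mary)"),
    ("mary sees john", "see(mary, john)"),
    ("john sleeps", "sleep(john)"),
    ("mary sleeps", "sleep(mary)"),
]

def symbols(lf):
    """Predicate and constant symbols occurring in a toy logical form."""
    return set(re.findall(r"[a-z_]+", lf))

n_pairs = len(PAIRS)
word_freq, sym_freq = Counter(), Counter()
cooc = defaultdict(Counter)

for sentence, lf in PAIRS:
    syms = symbols(lf)
    sym_freq.update(syms)
    for word in set(sentence.split()):
        word_freq[word] += 1
        cooc[word].update(syms)

# Associate each word with the symbol it predicts beyond that symbol's
# baseline frequency: score = p(symbol | word) - p(symbol).
for word in sorted(word_freq):
    score = lambda s: cooc[word][s] / word_freq[word] - sym_freq[s] / n_pairs
    best = max(cooc[word], key=score)
    print(f"{word:>7} -> {best}")
```

Even this crude association measure recovers the intended word-to-symbol pairings on the toy data, which is the kind of ambiguity-resolution problem the thesis's probabilistic parsing models address at full scale.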
|
73 |
Extrapolating Subjectivity Research to Other Languages. Banea, Carmen. 05 1900 (has links)
Socrates articulated it best, "Speak, so I may see you." Indeed, language represents an invisible probe into the mind. It is the medium through which we express our deepest thoughts, our aspirations, our views, our feelings, our inner reality. From the beginning of artificial intelligence, researchers have sought to impart human-like understanding to machines. As much of our language represents a form of self-expression, capturing thoughts, beliefs, evaluations, opinions, and emotions which are not available for scrutiny by an outside observer, research involving these aspects in the field of natural language processing has crystallized under the name of subjectivity and sentiment analysis. While subjectivity classification labels text as either subjective or objective, sentiment classification further divides subjective text into positive, negative, or neutral. In this thesis, I investigate techniques for generating tools and resources for subjectivity analysis that do not rely on an existing natural language processing infrastructure in a given language. This constraint is motivated by the fact that the vast majority of human languages are resource-scarce from an electronic point of view: they lack basic tools such as part-of-speech taggers and parsers, and basic resources such as electronic text, annotated corpora, or lexica. This severely limits the implementation of techniques on par with those developed for English; by applying methods that make lighter use of text-processing infrastructure, we are able to conduct multilingual subjectivity research in these languages as well. Since my aim is also to minimize the amount of manual work required to develop lexica or corpora in these languages, the proposed techniques employ a lever approach, in which English often acts as the donor language (the fulcrum of the lever) and allows, with a relatively minimal amount of effort, the establishment of preliminary subjectivity research in a target language.
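A minimal sketch of the "lever" idea is shown below: entries from an English subjectivity lexicon are projected into a resource-scarce target language through a bilingual dictionary. The lexicon entries, the dictionary, and the choice of Romanian as the target are illustrative stand-ins, not the thesis's actual data or full method.

```python
# Hypothetical English subjectivity lexicon; None marks an objective entry.
ENGLISH_SUBJECTIVITY_LEXICON = {
    "beautiful":  "strong_subjective",
    "terrible":   "strong_subjective",
    "apparently": "weak_subjective",
    "table":      None,               # objective entry: not projected
}

# Hypothetical English-Romanian translation dictionary.
EN_RO_DICTIONARY = {
    "beautiful":  ["frumos", "minunat"],
    "terrible":   ["teribil", "groaznic"],
    "apparently": ["aparent"],
    "table":      ["masa"],
}

def project_lexicon(lexicon, dictionary):
    """Carry subjectivity labels across translation links into the target language."""
    projected = {}
    for word, label in lexicon.items():
        if label is None:
            continue
        for translation in dictionary.get(word, []):
            projected[translation] = label
    return projected

print(project_lexicon(ENGLISH_SUBJECTIVITY_LEXICON, EN_RO_DICTIONARY))
```

The appeal of this design is that the only target-language resource required is a translation dictionary, which is far more widely available than annotated corpora or taggers.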
|
74 |
Iterative parameter mixing for distributed large-margin training of structured predictors for natural language processing. Coppola, Gregory Francis. January 2015 (has links)
The development of distributed training strategies for statistical prediction functions is important for applications of machine learning generally, and the development of distributed structured prediction training strategies is important for natural language processing (NLP) in particular. With ever-growing data sets this is, first, because it is easier to increase computational capacity by adding more processor nodes than it is to increase the power of individual processor nodes, and, second, because data sets are often collected and stored in different locations. Iterative parameter mixing (IPM) is a distributed training strategy in which each node in a network of processors optimizes a regularized average loss objective on its own subset of the total available training data, making stochastic (per-example) updates to its own estimate of the optimal weight vector and communicating with the other nodes by periodically averaging estimates of the optimal vector across the network. This algorithm has been contrasted with a close relative, called here the single-mixture optimization algorithm, in which each node stochastically optimizes an average loss objective on its own subset of the training data, operating in isolation until convergence, at which point the average of the independently created estimates is returned. Recent empirical results have suggested that the IPM strategy produces better models than the single-mixture algorithm, and the results of this thesis add to this picture. The contributions of this thesis are as follows. The first contribution is to produce and analyze an algorithm for decentralized stochastic optimization of regularized average loss objective functions. This algorithm, which we call the distributed regularized dual averaging algorithm, improves over prior work on distributed dual averaging by providing a simpler algorithm (used in the rest of the thesis), better convergence bounds for the case of regularized average loss functions, and certain technical results that are used in the sequel. The central contribution of this thesis is to give an optimization-theoretic justification for the IPM algorithm. While past work has focused primarily on its empirical test-time performance, we give a novel perspective on this algorithm by showing that, in the context of the distributed dual averaging algorithm, IPM constitutes a convergent optimization algorithm for arbitrary convex functions, while the single-mixture algorithm does not. Experiments indeed confirm that the superior test-time performance of models trained using IPM, compared to single-mixture training, correlates with better optimization of the objective value on the training set, a fact not previously reported. Furthermore, our analysis of general non-smooth functions justifies the use of distributed large-margin (support vector machine [SVM]) training of structured predictors, which we show yields better test performance than the IPM perceptron algorithm, the only version of IPM to have previously been given a theoretical justification. Our results confirm that IPM training can reach the same level of test performance as a sequentially trained model, and can reach better accuracies when one has a fixed budget of training time. Finally, we use the reduction in training time that distributed training allows to experiment with adding higher-order dependency features to a state-of-the-art phrase-structure parsing model.
We demonstrate that adding these features improves out-of-domain parsing results of even the strongest phrase-structure parsing models, yielding a new state-of-the-art for the popular train-test pairs considered. In addition, we show that a feature-bagging strategy, in which component models are trained separately and later combined, is sometimes necessary to avoid feature under-training and get the best performance out of large feature sets.
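To make the contrast between the two training strategies concrete, here is a minimal numpy sketch on an invented linear classification task. It uses a smooth logistic-loss stand-in rather than the thesis's structured large-margin objective or its distributed dual averaging algorithm, and the shard count, learning rate and number of epochs are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, n_shards, epochs = 20, 4000, 4, 10
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_true + 0.5 * rng.normal(size=n))

def sgd_epoch(w, X_shard, y_shard, lr=0.05, lam=0.01):
    """One pass of per-example stochastic updates on a shard (logistic loss + L2)."""
    for x, t in zip(X_shard, y_shard):
        p = 1.0 / (1.0 + np.exp(np.clip(t * (x @ w), -30, 30)))
        w = w - lr * (-t * x * p + lam * w)
    return w

def objective(w, lam=0.01):
    """Regularised average loss on the full training set."""
    return np.mean(np.logaddexp(0.0, -y * (X @ w))) + 0.5 * lam * w @ w

X_shards, y_shards = np.array_split(X, n_shards), np.array_split(y, n_shards)

# Iterative parameter mixing: after every epoch, average the shard estimates
# and restart each shard from the mixed weight vector.
w_ipm = np.zeros(d)
for _ in range(epochs):
    estimates = [sgd_epoch(w_ipm.copy(), Xs, ys) for Xs, ys in zip(X_shards, y_shards)]
    w_ipm = np.mean(estimates, axis=0)

# Single mixture: each shard trains in isolation for all epochs; the
# independently trained estimates are averaged once at the end.
finals = []
for Xs, ys in zip(X_shards, y_shards):
    w = np.zeros(d)
    for _ in range(epochs):
        w = sgd_epoch(w, Xs, ys)
    finals.append(w)
w_single = np.mean(finals, axis=0)

print(f"training objective, IPM:            {objective(w_ipm):.4f}")
print(f"training objective, single mixture: {objective(w_single):.4f}")
```

The structural difference is only where the averaging happens: inside the training loop for IPM, after it for the single mixture. Comparing the final training objectives of the two mixed weight vectors is the kind of optimization-level comparison the thesis reports alongside test-time accuracy.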
|
75 |
Aspectos do processamento de interfaces em linguagem natural [Aspects of natural language interface processing]. Camargo Júnior, João Batista. 05 September 1989 (has links)
This dissertation presents some formalisms used in the computational treatment of natural languages, together with a proposed processing method for them that involves three stages: translation, planning and execution. The translation stage consists of parsing, interpreting and determining the scope of interrogative sentences; it maps natural language sentences into a logical form that represents their semantics. In the planning stage, the logical form obtained during translation is converted into a Prolog rule to be interpreted during the execution stage. The most important stage in natural language processing is translation. Some formalisms, such as Definite Clause Grammar (DCG) and Extraposition Grammar (XG), are discussed in detail to illustrate the methods used during translation. A prototype is then presented that implements a natural language interface to a database, using a restricted subset of the Portuguese language. Finally, some comments are made on the prospects for using natural language in several fields of computing, such as text understanding, automatic programming and software engineering.
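The three-stage pipeline described above can be sketched schematically as follows. The original system was built in Prolog using DCG/XG grammars; the pattern-based toy "grammar", the relation names, and the miniature database below are invented purely to illustrate the translate, plan and execute flow.

```python
import re

FACTS = {  # toy database: (relation, subject, object) triples, invented
    ("works_in", "ana", "sales"),
    ("works_in", "bruno", "engineering"),
    ("works_in", "carla", "sales"),
}

def translate(question):
    """Translation stage: map a question to a logical form (relation, var, value)."""
    m = re.match(r"who works in (\w+)\?", question.lower())
    if not m:
        raise ValueError("question outside the toy grammar")
    return ("works_in", "X", m.group(1))

def plan(logical_form):
    """Planning stage: turn the logical form into an executable query function."""
    relation, _, value = logical_form
    return lambda facts: sorted(s for (r, s, o) in facts if r == relation and o == value)

def execute(query):
    """Execution stage: run the planned query against the database."""
    return query(FACTS)

print(execute(plan(translate("Who works in sales?"))))   # ['ana', 'carla']
```

In the dissertation's setting the planned query is a Prolog rule rather than a Python closure, but the division of labour between the three stages is the same.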
|
76 |
Evaluating distributional models of compositional semantics. Batchkarov, Miroslav Manov. January 2016 (has links)
Distributional models (DMs) are a family of unsupervised algorithms that represent the meaning of words as vectors. They have been shown to capture interesting aspects of semantics. Recent work has sought to compose word vectors in order to model phrases and sentences. The most commonly used measure of a compositional DM's performance to date has been the degree to which it agrees with human-provided phrase similarity scores. The contributions of this thesis are three-fold. First, I argue that existing intrinsic evaluations are unreliable as they make use of small and subjective gold-standard data sets and assume a notion of similarity that is independent of a particular application. Therefore, they do not necessarily measure how well a model performs in practice. I study four commonly used intrinsic datasets and demonstrate that all of them exhibit undesirable properties. Second, I propose a novel framework within which to compare word- or phrase-level DMs in terms of their ability to support document classification. My approach couples a classifier to a DM and provides a setting where classification performance is sensitive to the quality of the DM. Third, I present an empirical evaluation of several methods for building word representations and composing them within my framework. I find that the determining factor in building word representations is data quality rather than quantity; in some cases only a small amount of unlabelled data is required to reach peak performance. Neural algorithms for building single-word representations perform better than counting-based ones regardless of what composition is used, but simple composition algorithms can outperform more sophisticated competitors. Finally, I introduce a new algorithm for improving the quality of distributional thesauri using information from repeated runs of the same non-deterministic algorithm.
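A minimal sketch of the extrinsic evaluation idea follows: each document is represented by composing (here, simply averaging) its word vectors, and a classifier trained on those representations is scored on document classification. The random "distributional model" and the tiny labelled corpus are placeholders; in the thesis's framework, a better DM should yield higher classification performance than a worse one.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
VOCAB = ["good", "great", "fine", "bad", "awful", "poor", "film", "plot", "acting"]
DM = {w: rng.normal(size=16) for w in VOCAB}     # stand-in word vectors

DOCS = [  # invented labelled documents (1 = positive, 0 = negative)
    ("good film great acting", 1), ("great plot fine film", 1),
    ("good acting fine plot", 1),  ("bad film awful plot", 0),
    ("poor acting bad plot", 0),   ("awful film poor acting", 0),
]

def compose(document):
    """Additive composition: average the vectors of the document's known words."""
    vectors = [DM[w] for w in document.split() if w in DM]
    return np.mean(vectors, axis=0)

X = np.stack([compose(doc) for doc, _ in DOCS])
y = np.array([label for _, label in DOCS])

scores = cross_val_score(LogisticRegression(), X, y, cv=3)
print("classification accuracy per fold:", scores)
```

Swapping in different word representations or composition functions while keeping the classifier fixed is what turns this coupling into a comparison of DMs rather than of classifiers.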
|
77 |
Graph-based approaches to word sense induction. Hope, David Richard. January 2015 (has links)
This thesis is a study of Word Sense Induction (WSI), the Natural Language Processing (NLP) task of automatically discovering word meanings from text. WSI is an open problem in NLP whose solution would be of considerable benefit to many other NLP tasks. It has, however, been studied by relatively few NLP researchers, and often in fixed ways. Scope therefore exists to apply novel methods to the problem, methods that may improve upon those previously applied. This thesis applies a graph-theoretic approach to WSI. In this approach, word senses are identified by finding particular types of subgraphs in word co-occurrence graphs. A number of original methods for constructing, analysing, and partitioning graphs are introduced, and these methods are then incorporated into graph-based WSI systems. These systems are shown, in a variety of evaluation scenarios, to return results that are comparable to those of the current best-performing WSI systems. The main contributions of the thesis are a novel parameter-free soft clustering algorithm that runs in time linear in the number of edges in the input graph, and novel generalisations of the clustering coefficient (a measure of vertex cohesion in graphs) to the weighted case. Further contributions of the thesis include: a review of graph-based WSI systems that have been proposed in the literature; analysis of the methodologies applied in these systems; analysis of the metrics used to evaluate WSI systems; and empirical evidence to verify the usefulness of each novel method introduced in the thesis for inducing word senses.
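The general shape of graph-based WSI can be illustrated with a minimal sketch: build a co-occurrence graph over the words that appear near a target word, drop the target itself, and read off the connected components as induced senses. This is a generic illustration of the approach, not the soft clustering algorithm or the weighted clustering coefficients developed in the thesis; the context snippets are invented.

```python
import networkx as nx
from itertools import combinations

TARGET = "bank"
SNIPPETS = [  # invented contexts of the target word
    "bank account interest loan",
    "loan account bank money",
    "bank river water fishing",
    "river water bank shore",
]

graph = nx.Graph()
for snippet in SNIPPETS:
    words = [w for w in snippet.split() if w != TARGET]
    # connect every pair of context words that co-occur in the same snippet
    graph.add_edges_from(combinations(words, 2))

senses = list(nx.connected_components(graph))
for i, sense in enumerate(senses, 1):
    print(f"sense {i}: {sorted(sense)}")
```

On the toy contexts this separates a financial cluster from a river cluster; real systems replace the crude connected-components step with the kind of clustering and cohesion measures the abstract describes.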
|
78 |
Paraphrase identification using knowledge-lean techniques. Eyecioglu Ozmutlu, Asli. January 2016 (has links)
This research addresses the problem of identifying sentential paraphrases; that is, the ability of an estimator to predict well whether two sentential text fragments are paraphrases. The paraphrase identification task has practical importance in the Natural Language Processing (NLP) community because of the need to deal with the pervasive problem of linguistic variation. Accurate methods for identifying paraphrases should help to improve the performance of NLP systems that require language understanding, including key applications such as machine translation, information retrieval and question answering, amongst others. Over the course of the last decade, a growing body of research has been conducted on paraphrase identification and it has become an individual working area of NLP. Our objective is to investigate whether techniques that concentrate on automated understanding of text while requiring fewer resources can achieve results comparable to methods employing more sophisticated NLP processing tools and other resources. These techniques, which we call “knowledge-lean”, range from simple, shallow overlap methods based on lexical items or n-grams through to more sophisticated methods that employ automatically generated distributional thesauri. The work begins by focusing on techniques that exploit lexical overlap and text-based statistical techniques that make little use of NLP tools. We investigate the question “To what extent can these methods be used for the purpose of a paraphrase identification task?” On two gold-standard data sets, we obtained competitive results on the Microsoft Research Paraphrase Corpus (MSRPC) and reached state-of-the-art results on the Twitter Paraphrase Corpus, using only n-gram overlap features in conjunction with support vector machines (SVMs). These techniques do not require any language-specific tools or external resources and appear to perform well without the need to normalise colloquial language such as that found on Twitter. It was natural to extend the scope of the research and to consider experimenting on another language, one that is poor in resources. The scarcity of available paraphrase data led us to construct our own corpus: a paraphrase corpus in Turkish. This corpus is relatively small but provides a representative collection, including a variety of texts. While there is still debate as to whether binary or fine-grained judgements best suit a paraphrase corpus, we chose to provide data for a sentential textual similarity task by agreeing on fine-grained scoring, knowing that this could be converted to binary scoring, but not the other way around. The correlation between the results from different corpora is promising. Therefore, it can be surmised that languages poor in resources can benefit from knowledge-lean techniques. Discovering the strengths of knowledge-lean techniques led us to extend them with a new perspective on techniques that use distributional statistical features of text by representing each word as a vector (word2vec). While recent research focuses on larger fragments of text with word2vec, such as phrases, sentences and even paragraphs, a new approach is presented here that introduces vectors of character n-grams carrying the same attributes as word vectors. The proposed method has the ability to capture syntactic relations as well as semantic relations without semantic knowledge, and is shown to be competitive on Twitter compared to more sophisticated methods.
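A knowledge-lean sketch in the spirit described above follows: each sentence pair is represented by simple word- and character-n-gram overlap statistics, and an SVM is trained on those features. The handful of training pairs and the exact feature set are invented for illustration; this is not the MSRPC or Twitter experimental setup.

```python
import numpy as np
from sklearn.svm import SVC

def ngrams(text, n):
    """Character n-grams of a string."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def overlap(a, b):
    """Jaccard overlap of two sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def features(s1, s2):
    w1, w2 = set(s1.split()), set(s2.split())
    return [
        overlap(w1, w2),                                  # word overlap
        overlap(ngrams(s1, 3), ngrams(s2, 3)),            # character trigram overlap
        abs(len(w1) - len(w2)) / max(len(w1), len(w2)),   # relative length difference
    ]

TRAIN = [  # invented (sentence1, sentence2, is_paraphrase) examples
    ("the cat sat on the mat", "a cat was sitting on the mat", 1),
    ("he bought a new car yesterday", "yesterday he purchased a new car", 1),
    ("the weather is lovely today", "stock prices fell sharply overnight", 0),
    ("she plays the violin beautifully", "the committee rejected the proposal", 0),
]

X = np.array([features(a, b) for a, b, _ in TRAIN])
y = np.array([label for _, _, label in TRAIN])

clf = SVC(kernel="linear").fit(X, y)
test_pair = ("a cat sat on a mat", "the cat was sitting on the mat")
print("paraphrase" if clf.predict([features(*test_pair)])[0] else "not a paraphrase")
```

None of the features require a tagger, parser or lexicon, which is what makes the approach portable to languages without NLP infrastructure.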
|
80 |
Abstracting over semantic theories. Holt, Alexander G. B. January 1993 (has links)
The topic of this thesis is abstraction over theories of formal semantics for natural language. It is motivated by the belief that a metatheoretical perspective can contribute both to a better theoretical understanding of semantic theories and to improved practical mechanisms for developing theories of semantics and combining them with theories of syntax. The argument for a new way to understand semantic theories rests partly on the present difficulty of accurately comparing and classifying theories, as well as on the desire to easily combine theories that concentrate on different areas of semantics. There is a strong case for encouraging more modularity in the structure of semantic theories, to promote a division of labour and potentially the development of reusable semantic modules. A more abstract approach to the syntax-semantics interface holds out the hope of further benefits, notably a degree of guaranteed semantic coherence via types or constraints. Two case studies of semantic abstraction are presented. First, alternative characterizations of intensional abstraction and predication are developed with respect to three different semantic theories, but in a theory-independent fashion. Second, an approach to semantic abstraction recently proposed by Johnson and Kay is analyzed in detail, and the nature of its abstraction described with formal specifications. Finally, a programme for modular semantic specifications is described and applied to the area of quantification and anaphora, demonstrating successfully that theory-independent devices can be used to simultaneously abstract across both semantic theories and syntax-semantics interfaces.
|