41 |
Development of a hybrid symbolic/connectionist system for word sense disambiguation. Wu, Xinyu, January 1995 (has links)
No description available.
|
42 |
Extrapolating Subjectivity Research to Other Languages. Banea, Carmen, 05 1900 (has links)
Socrates articulated it best: "Speak, so I may see you." Indeed, language represents an invisible probe into the mind. It is the medium through which we express our deepest thoughts, our aspirations, our views, our feelings, our inner reality. From the beginnings of artificial intelligence, researchers have sought to impart human-like understanding to machines. Much of our language is a form of self-expression, capturing thoughts, beliefs, evaluations, opinions, and emotions that are not available for scrutiny by an outside observer; in the field of natural language processing, research involving these aspects has crystallized under the name of subjectivity and sentiment analysis. While subjectivity classification labels text as either subjective or objective, sentiment classification further divides subjective text into positive, negative, or neutral. In this thesis, I investigate techniques for generating tools and resources for subjectivity analysis that do not rely on an existing natural language processing infrastructure in a given language. This constraint is motivated by the fact that the vast majority of human languages are resource-scarce from an electronic point of view: they lack basic tools such as part-of-speech taggers and parsers, and basic resources such as electronic text, annotated corpora, or lexica. This severely limits the implementation of techniques on par with those developed for English; by applying methods that are lighter in their use of text-processing infrastructure, we are able to conduct multilingual subjectivity research in these languages as well. Since my aim is also to minimize the amount of manual work required to develop lexica or corpora in these languages, the proposed techniques employ a lever approach, in which English often acts as the donor language (the fulcrum of the lever) and allows, with a relatively small amount of effort, preliminary subjectivity research to be established in a target language.
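To make the lever approach concrete, the following minimal sketch projects a toy English subjectivity lexicon into a target language through a bilingual dictionary and labels sentences with a simple rule-based classifier. The lexicon entries, translations, and clue threshold are illustrative assumptions, not the resources or method details of the thesis.

```python
# Minimal sketch of lexicon projection for bootstrapping subjectivity analysis in a
# resource-poor target language. All entries and the threshold are illustrative.

# A few entries standing in for an English subjectivity lexicon (strong clues).
english_subjective = {"terrible", "wonderful", "believe", "hate", "beautiful"}

# A toy bilingual dictionary mapping English entries to target-language translations.
en_to_target = {
    "terrible": ["teribil", "groaznic"],
    "wonderful": ["minunat"],
    "believe": ["crede"],
    "hate": ["uraste"],
    "beautiful": ["frumos", "frumoasa"],
}

# Project the lexicon: every available translation of a subjective English word
# becomes a candidate subjective clue in the target language.
target_subjective = {t for w in english_subjective for t in en_to_target.get(w, [])}

def is_subjective(sentence, min_clues=1):
    """Rule-based sentence classifier: subjective if it contains enough clues."""
    tokens = sentence.lower().split()
    hits = sum(1 for tok in tokens if tok in target_subjective)
    return hits >= min_clues

print(is_subjective("Filmul a fost minunat"))      # True  - contains a projected clue
print(is_subjective("Trenul pleaca la ora noua"))  # False - no subjective clues
```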
|
43 |
Iterative parameter mixing for distributed large-margin training of structured predictors for natural language processing. Coppola, Gregory Francis, January 2015 (has links)
The development of distributed training strategies for statistical prediction functions is important for applications of machine learning generally, and the development of distributed structured prediction training strategies is important for natural language processing (NLP) in particular. With ever-growing data sets this is true, first, because it is easier to increase computational capacity by adding more processor nodes than by increasing the power of individual nodes, and, second, because data sets are often collected and stored in different locations. Iterative parameter mixing (IPM) is a distributed training strategy in which each node in a network of processors optimizes a regularized average loss objective on its own subset of the total available training data, making stochastic (per-example) updates to its own estimate of the optimal weight vector and communicating with the other nodes by periodically averaging estimates of the optimal vector across the network. This algorithm has been contrasted with a close relative, called here the single-mixture optimization algorithm, in which each node stochastically optimizes an average loss objective on its own subset of the training data, operating in isolation until convergence, at which point the average of the independently created estimates is returned. Recent empirical results have suggested that the IPM strategy produces better models than the single-mixture algorithm, and the results of this thesis add to this picture. The contributions of this thesis are as follows. The first contribution is to produce and analyze an algorithm for decentralized stochastic optimization of regularized average loss objective functions. This algorithm, which we call the distributed regularized dual averaging algorithm, improves over prior work on distributed dual averaging by providing a simpler algorithm (used in the rest of the thesis), better convergence bounds for the case of regularized average loss functions, and certain technical results that are used in the sequel. The central contribution of this thesis is to give an optimization-theoretic justification for the IPM algorithm. While past work has focused primarily on its empirical test-time performance, we give a novel perspective on this algorithm by showing that, in the context of the distributed dual averaging algorithm, IPM constitutes a convergent optimization algorithm for arbitrary convex functions, while the single-mixture algorithm does not. Experiments indeed confirm that the superior test-time performance of models trained using IPM, compared to single-mixture training, correlates with better optimization of the objective value on the training set, a fact not previously reported. Furthermore, our analysis of general non-smooth functions justifies the use of distributed large-margin (support vector machine [SVM]) training of structured predictors, which we show yields better test performance than the IPM perceptron algorithm, the only version of IPM to have previously been given a theoretical justification. Our results confirm that IPM training can reach the same level of test performance as a sequentially trained model and can reach better accuracies when one has a fixed budget of training time. Finally, we use the reduction in training time that distributed training allows to experiment with adding higher-order dependency features to a state-of-the-art phrase-structure parsing model.
We demonstrate that adding these features improves out-of-domain parsing results of even the strongest phrase-structure parsing models, yielding a new state-of-the-art for the popular train-test pairs considered. In addition, we show that a feature-bagging strategy, in which component models are trained separately and later combined, is sometimes necessary to avoid feature under-training and get the best performance out of large feature sets.
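As a rough illustration of the IPM training scheme described above, the sketch below trains a toy linear classifier across simulated shards: each node makes stochastic per-example updates on its own data, and the estimates are averaged after each round. The data, update rule, and learning rate are illustrative assumptions; the thesis works with regularized dual averaging and structured large-margin losses rather than this perceptron-style update.

```python
# Minimal sketch of iterative parameter mixing (IPM) for a toy linear classifier.
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data, split across "nodes" (shards).
X = rng.normal(size=(400, 5))
y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]))
shards = np.array_split(np.arange(len(X)), 4)   # 4 processor nodes

w = np.zeros(X.shape[1])           # shared parameter estimate
for epoch in range(10):            # each epoch = local passes followed by mixing
    local_ws = []
    for shard in shards:           # in a real system these loops run in parallel
        w_local = w.copy()
        for i in shard:            # stochastic (per-example) updates on local data
            if y[i] * (w_local @ X[i]) <= 0:
                w_local += 0.1 * y[i] * X[i]
        local_ws.append(w_local)
    w = np.mean(local_ws, axis=0)  # parameter mixing: average across the network

accuracy = np.mean(np.sign(X @ w) == y)
print(f"training accuracy after mixing: {accuracy:.3f}")
```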
|
44 |
Aspectos do processamento de interfaces em linguagem natural [Aspects of natural language interface processing; no English title provided]. Camargo Júnior, João Batista, 05 September 1989 (has links)
This work presents a methodology and some formalisms to be used in natural language processing. The proposal manipulates natural language by applying three processing steps: translation, planning and execution. The translation step consists of parsing, interpreting, and determining the scope of interrogative sentences. This step maps natural language sentences into a logical form that represents their semantics. In the planning step, the logical form obtained in the translation step is converted into a Prolog rule to be interpreted during the execution step. The most important phase of natural language processing is the translation step. Some formalisms, such as Definite Clause Grammar (DCG) and Extraposition Grammar (XG), are discussed in detail to illustrate the methods used by the translation step. Next, a prototype is presented that implements a natural language interface to a database, using a restricted subset of the Portuguese language. Finally, some comments are made about the prospects of using natural language in several fields of computation, such as text understanding, automatic programming and software engineering.
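The three-step pipeline can be sketched at toy scale as follows. The sketch is written in Python for illustration (the original system is Prolog-based and grammar-driven); the grammar pattern, logical form, and database are illustrative assumptions.

```python
# Minimal sketch of the translation / planning / execution pipeline described above.
import re

database = {("capital", "brasil"): "brasilia", ("capital", "portugal"): "lisboa"}

def translate(question):
    """Translation step: parse an interrogative sentence into a logical form."""
    match = re.match(r"qual (?:e|é) a (\w+) d[eo] (\w+)\??", question.lower())
    if not match:
        raise ValueError("sentence not covered by the toy grammar")
    relation, entity = match.groups()
    return ("query", relation, entity)           # logical form

def plan(logical_form):
    """Planning step: turn the logical form into an executable query (a closure,
    playing the role of the Prolog rule in the original design)."""
    _, relation, entity = logical_form
    return lambda db: db.get((relation, entity))

def execute(query, db=database):
    """Execution step: run the planned query against the database."""
    return query(db)

answer = execute(plan(translate("Qual é a capital do Brasil?")))
print(answer)  # brasilia
```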
|
45 |
Evaluating distributional models of compositional semantics. Batchkarov, Miroslav Manov, January 2016 (has links)
Distributional models (DMs) are a family of unsupervised algorithms that represent the meaning of words as vectors. They have been shown to capture interesting aspects of semantics. Recent work has sought to compose word vectors in order to model phrases and sentences. The most commonly used measure of a compositional DM's performance to date has been the degree to which it agrees with human-provided phrase similarity scores. The contributions of this thesis are three-fold. First, I argue that existing intrinsic evaluations are unreliable as they make use of small and subjective gold-standard data sets and assume a notion of similarity that is independent of a particular application. Therefore, they do not necessarily measure how well a model performs in practice. I study four commonly used intrinsic datasets and demonstrate that all of them exhibit undesirable properties. Second, I propose a novel framework within which to compare word- or phrase-level DMs in terms of their ability to support document classification. My approach couples a classifier to a DM and provides a setting where classification performance is sensitive to the quality of the DM. Third, I present an empirical evaluation of several methods for building word representations and composing them within my framework. I find that the determining factor in building word representations is data quality rather than quantity; in some cases only a small amount of unlabelled data is required to reach peak performance. Neural algorithms for building single-word representations perform better than counting-based ones regardless of what composition is used, but simple composition algorithms can outperform more sophisticated competitors. Finally, I introduce a new algorithm for improving the quality of distributional thesauri using information from repeated runs of the same non-deterministic algorithm.
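The idea of coupling a distributional model to a document classifier can be illustrated with a minimal sketch: word vectors are composed additively into document vectors, and classification accuracy then serves as an extrinsic signal of the model's quality. The vectors, documents, and nearest-centroid classifier below are illustrative assumptions, not the thesis's actual framework components.

```python
# Minimal sketch: a distributional model feeds a document classifier, so that
# classification performance reflects the quality of the word vectors.
import numpy as np

# Toy word vectors, standing in for a trained distributional model.
dm = {
    "excellent": np.array([1.0, 0.2]), "awful": np.array([-1.0, 0.1]),
    "film":      np.array([0.1, 1.0]), "plot":  np.array([0.0, 0.9]),
}

def compose(document):
    """Additive composition: sum the vectors of known words."""
    vecs = [dm[w] for w in document.split() if w in dm]
    return np.sum(vecs, axis=0) if vecs else np.zeros(2)

train = [("excellent film", 1), ("awful plot", 0)]
X = np.stack([compose(doc) for doc, _ in train])
y = np.array([label for _, label in train])

# A nearest-centroid classifier keeps the example dependency-free; in practice a
# stronger classifier (e.g. logistic regression or an SVM) would be coupled here.
centroids = {c: X[y == c].mean(axis=0) for c in (0, 1)}

def classify(document):
    v = compose(document)
    return min(centroids, key=lambda c: np.linalg.norm(v - centroids[c]))

print(classify("excellent plot"))  # expected: 1 (positive)
```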
|
46 |
Graph-based approaches to word sense induction. Hope, David Richard, January 2015 (has links)
This thesis is a study of Word Sense Induction (WSI), the Natural Language Processing (NLP) task of automatically discovering word meanings from text. WSI is an open problem in NLP whose solution would be of considerable benefit to many other NLP tasks. It has, however, been studied by relatively few NLP researchers, and often in set ways. Scope therefore exists to apply novel methods to the problem, methods that may improve upon those previously applied. This thesis applies a graph-theoretic approach to WSI. In this approach, word senses are identified by finding particular types of subgraphs in word co-occurrence graphs. A number of original methods for constructing, analysing, and partitioning graphs are introduced, and these methods are then incorporated into graph-based WSI systems. These systems are then shown, in a variety of evaluation scenarios, to return results that are comparable to those of the current best performing WSI systems. The main contributions of the thesis are a novel parameter-free soft clustering algorithm that runs in time linear in the number of edges in the input graph, and novel generalisations of the clustering coefficient (a measure of vertex cohesion in graphs) to the weighted case. Further contributions of the thesis include: a review of graph-based WSI systems that have been proposed in the literature; analysis of the methodologies applied in these systems; analysis of the metrics used to evaluate WSI systems; and empirical evidence to verify the usefulness of each novel method introduced in the thesis for inducing word senses.
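As an illustration of vertex cohesion in a weighted word co-occurrence graph, the sketch below computes a weighted clustering coefficient using one standard generalisation from the literature (geometric means of normalised triangle weights, after Onnela et al.). The thesis's own novel generalisations differ, and the toy graph is an illustrative assumption.

```python
# Minimal sketch of a weighted clustering coefficient on a word co-occurrence graph.
from itertools import combinations

# Weighted co-occurrence graph around a target word: edge -> co-occurrence weight.
edges = {
    ("bank", "money"): 4.0, ("bank", "loan"): 3.0, ("money", "loan"): 5.0,
    ("bank", "river"): 2.0, ("bank", "water"): 1.0, ("river", "water"): 6.0,
}

def weight(u, v, w_max):
    w = edges.get((u, v)) or edges.get((v, u)) or 0.0
    return w / w_max                       # normalise weights into [0, 1]

def weighted_clustering(node):
    """Average geometric mean of triangle edge weights around `node`."""
    w_max = max(edges.values())
    neighbours = {v for e in edges for v in e if node in e and v != node}
    k = len(neighbours)
    if k < 2:
        return 0.0
    total = sum((weight(node, u, w_max) * weight(node, v, w_max)
                 * weight(u, v, w_max)) ** (1 / 3)
                for u, v in combinations(sorted(neighbours), 2))
    return total / (k * (k - 1) / 2)

# High cohesion among {money, loan} vs {river, water} hints at two senses of "bank".
print(round(weighted_clustering("bank"), 3))
```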
|
47 |
Paraphrase identification using knowledge-lean techniques. Eyecioglu Ozmutlu, Asli, January 2016 (has links)
This research addresses the problem of identification of sentential paraphrases; that is, the ability of an estimator to predict well whether two sentential text fragments are paraphrases. The paraphrase identification task has practical importance in the Natural Language Processing (NLP) community because of the need to deal with the pervasive problem of linguistic variation. Accurate methods for identifying paraphrases should help to improve the performance of NLP systems that require language understanding. This includes key applications such as machine translation, information retrieval and question answering, amongst others. Over the course of the last decade, a growing body of research has been conducted on paraphrase identification and it has become an individual working area of NLP. Our objective is to investigate whether techniques for automated understanding of text that require fewer resources can achieve results comparable to methods employing more sophisticated NLP processing tools and other resources. These techniques, which we call "knowledge-lean", range from simple, shallow overlap methods based on lexical items or n-grams through to more sophisticated methods that employ automatically generated distributional thesauri. The work begins by focusing on techniques that exploit lexical overlap and on text-based statistical techniques that have much less need of NLP tools. We investigate the question "To what extent can these methods be used for the purpose of a paraphrase identification task?" On two gold-standard data sets, we obtained competitive results on the Microsoft Research Paraphrase Corpus (MSRPC) and reached state-of-the-art results on the Twitter Paraphrase Corpus, using only n-gram overlap features in conjunction with support vector machines (SVMs). These techniques do not require any language-specific tools or external resources and appear to perform well without the need to normalise colloquial language such as that found on Twitter. It was natural to extend the scope of the research and to consider experimenting on another, resource-poor language. The scarcity of available paraphrase data led us to construct our own corpus: a paraphrase corpus in Turkish. This corpus is relatively small but provides a representative collection, including a variety of texts. While there is still debate as to whether binary or fine-grained judgements best suit a paraphrase corpus, we chose to provide data for a sentential textual-similarity task using fine-grained scoring, knowing that this can be converted to binary scoring but not the other way around. The correlation between the results from different corpora is promising. Therefore, it can be surmised that languages poor in resources can benefit from knowledge-lean techniques. Discovering the strengths of knowledge-lean techniques led us to extend the work with a new perspective on techniques that use distributional statistical features of text, representing each word as a vector (word2vec). While recent research applies word2vec to larger fragments of text, such as phrases, sentences and even paragraphs, a new approach is presented that introduces vectors of character n-grams carrying the same attributes as word vectors. The proposed method can capture syntactic as well as semantic relations without semantic knowledge, and it proves competitive on Twitter data compared with more sophisticated methods.
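A minimal sketch of the knowledge-lean setup follows, assuming a scikit-learn SVM and toy sentence pairs rather than the corpora and exact feature set used in this work.

```python
# Minimal sketch of knowledge-lean paraphrase identification: word n-gram overlap
# features fed to a linear SVM. The features, pairs, and labels are illustrative.
from sklearn.svm import SVC

def ngrams(sentence, n):
    tokens = sentence.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_features(s1, s2, max_n=2):
    """Sizes of the n-gram sets and of their intersection, for n = 1..max_n."""
    feats = []
    for n in range(1, max_n + 1):
        a, b = ngrams(s1, n), ngrams(s2, n)
        feats += [len(a), len(b), len(a & b)]
    return feats

pairs = [
    ("the firm posted strong profits", "the company reported strong profits", 1),
    ("the firm posted strong profits", "the match was cancelled yesterday", 0),
    ("he bought a new car", "he purchased a new car", 1),
    ("he bought a new car", "she finished the marathon", 0),
]
X = [overlap_features(s1, s2) for s1, s2, _ in pairs]
y = [label for _, _, label in pairs]

clf = SVC(kernel="linear").fit(X, y)
# A near-duplicate pair, expected to be labelled as a paraphrase (1).
print(clf.predict([overlap_features("prices rose sharply yesterday",
                                    "prices rose sharply today")]))
```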
|
48 |
Aspectos do processamento de interfaces em linguagem natural [Aspects of natural language interface processing; no English title provided]. João Batista Camargo Júnior, 05 September 1989 (has links)
This work presents a methodology and some formalisms to be used in natural language processing. The proposal manipulates natural language by applying three processing steps: translation, planning and execution. The translation step consists of parsing, interpreting, and determining the scope of interrogative sentences. This step maps natural language sentences into a logical form that represents their semantics. In the planning step, the logical form obtained in the translation step is converted into a Prolog rule to be interpreted during the execution step. The most important phase of natural language processing is the translation step. Some formalisms, such as Definite Clause Grammar (DCG) and Extraposition Grammar (XG), are discussed in detail to illustrate the methods used by the translation step. Next, a prototype is presented that implements a natural language interface to a database, using a restricted subset of the Portuguese language. Finally, some comments are made about the prospects of using natural language in several fields of computation, such as text understanding, automatic programming and software engineering.
|
49 |
Lewisian Properties and Natural Language Processing: Computational Linguistics from a Philosophical Perspective. Berman, Lucy, 01 January 2019 (has links)
Nothing seems more obvious than that our words have meaning. When people speak to each other, they exchange information through the use of a particular set of words. The words they say to each other, moreover, are about something. Yet this relation of “aboutness,” known as “reference,” is not quite as simple as it appears. In this thesis I will present two opposing arguments about the nature of our words and how they relate to the things around us. First, I will present Hilary Putnam’s argument, in which he examines the indeterminacy of reference, forcing us to conclude that we must abandon metaphysical realism. While Putnam considers his argument to be a refutation of non-epistemicism, David Lewis takes it to be a reductio, claiming Putnam’s conclusion is incredible. I will present Lewis’s response to Putnam, in which he accepts the challenge of demonstrating how Putnam’s argument fails and rescuing us from the abandonment of realism. In order to explain the determinacy of reference, Lewis introduces the concept of “natural properties.” In the final chapter of this thesis, I will propose another use for Lewisian properties. Namely, that of helping to minimize the gap between natural language processing and human communication.
|
50 |
A framework and evaluation of conversation agents. Ong Sing Goh (os.goh@murdoch.edu.au), January 2008 (has links)
This project details the development of a novel and practical framework for the development of conversation agents (CAs), or conversation robots. CAs are software programs that provide a natural interface between humans and computers. In this study, conversation refers to real-time dialogue exchange between human and machine, which may range from web chatting to on-the-go conversation through mobile devices. In essence, the project proposes a smart and effective communication technology in which an autonomous agent is able to carry out simulated human conversation via multiple channels. The CA developed in this project is termed Artificial Intelligence Natural-language Identity (AINI), and AINI is used to illustrate the implementation and testing carried out in this project. Up to now, most CAs have been developed with a short-term objective: to serve as tools that convince users they are talking with real humans, as in the case of the Turing Test. Traditional designs have mainly relied on ad hoc approaches and hand-crafted domain knowledge. Such approaches make it difficult for a fully integrated system to be developed and modified for other domain applications and tasks. The framework proposed in this thesis addresses such limitations. Overcoming the weaknesses of previous systems has been the key challenge in this study. The research has provided a better understanding of the system requirements and a systematic approach for the construction of intelligent CAs based on an agent architecture using a modular N-tiered approach. This study demonstrates an effective implementation and exploration of the new paradigm of Computer-Mediated Conversation (CMC) through CAs. The most significant aspect of the proposed framework is its ability to re-use and encapsulate expertise such as domain knowledge, natural language querying and the human-computer interface through plug-in components. As a result, the developer does not need to change the framework implementation for different applications. The proposed system provides interoperability among heterogeneous systems and the flexibility to be adapted to other languages, interface designs and domain applications. A modular design of knowledge representation facilitates the creation of the CA knowledge bases, enabling easier integration of open-domain and domain-specific knowledge and the ability to provide answers to broader queries. In order to build the knowledge base for the CAs, this study also proposes a mechanism to gather information from commonsense collaborative knowledge and online web documents. The proposed Automated Knowledge Extraction Agent (AKEA) has been used for the extraction of unstructured knowledge from the Web. At the same time, it is important to establish the trustworthiness of the sources of information; this thesis therefore introduces a Web Knowledge Trust Model (WKTM) for that purpose.
In order to assess the proposed framework, relevant tools and application modules have been developed, and an evaluation of their effectiveness has been carried out to validate the performance and accuracy of the system. Both laboratory and public experiments with online users in real time have been carried out. The results have shown that the proposed system is effective. In addition, it has been demonstrated that the CA can be deployed on the Web, mobile services and Instant Messaging (IM). In the real-time human-machine conversation experiment, it was shown that AINI is able to carry out conversations with human users, providing spontaneous interaction in an unconstrained setting. The study observed that AINI and humans share common properties in linguistic features and paralinguistic cues. These human-computer interactions have been analysed and contribute to an understanding of how users interact with CAs. Such knowledge is also useful for the development of conversation systems that exploit the commonalities found in these interactions. While AINI was found to have difficulty responding to some forms of paralinguistic cues, this suggests directions for further work to improve CA performance.
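The plug-in idea behind such a framework can be illustrated with a minimal sketch in which knowledge sources are swappable components behind a common interface. The class and method names are hypothetical and do not reflect the actual AINI implementation.

```python
# Minimal sketch of a plug-in based conversation-agent framework: domain knowledge
# is encapsulated in swappable components, so the core agent never changes.
from dataclasses import dataclass, field

class KnowledgePlugin:
    """A pluggable knowledge source: returns an answer string or None."""
    def answer(self, query: str):
        raise NotImplementedError

class FAQPlugin(KnowledgePlugin):
    def __init__(self, faq):
        self.faq = faq
    def answer(self, query):
        return self.faq.get(query.lower())

class FallbackPlugin(KnowledgePlugin):
    def answer(self, query):
        return "I'm not sure; could you rephrase that?"

@dataclass
class ConversationAgent:
    plugins: list = field(default_factory=list)   # ordered: specific before generic
    def reply(self, query: str) -> str:
        for plugin in self.plugins:               # first plugin with an answer wins
            response = plugin.answer(query)
            if response is not None:
                return response
        return ""

agent = ConversationAgent(plugins=[
    FAQPlugin({"what are your opening hours?": "We are open 9am to 5pm."}),
    FallbackPlugin(),
])
print(agent.reply("What are your opening hours?"))
print(agent.reply("Tell me a joke"))
```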
|