401 |
The use of prosodic features in Chinese speech recognition and spoken language processing / Wong, Jimmy Pui Fung. January 2003 (has links)
Thesis (M.Phil.)--Hong Kong University of Science and Technology, 2003. / Includes bibliographical references (leaves 97-101). Also available in electronic version. Access restricted to campus users.
|
402 |
Learning language from ambiguous perceptual context / Chen, David Lieh-Chiang 05 July 2012 (has links)
Building a computer system that can understand human languages has been one of the long-standing goals of artificial intelligence. Currently, most state-of-the-art natural language processing (NLP) systems use statistical machine learning methods to extract linguistic knowledge from large, annotated corpora. However, constructing such corpora can be expensive and time-consuming due to the expertise required to annotate such data. In this thesis, we explore alternative ways of learning which do not rely on direct human supervision. In particular, we draw our inspiration from the fact that humans are able to learn language through exposure to linguistic inputs in the context of a rich, relevant, perceptual environment.
We first present a system that learned to sportscast for RoboCup simulation games by observing how humans commentate a game. Using the simple assumption that people generally talk about events that have just occurred, we pair each textual comment with a set of events that it could be referring to. By applying an EM-like algorithm, the system simultaneously learns a grounded language model and aligns each description to the corresponding event. The system does not use any prior language knowledge and was able to learn to sportscast in both English and Korean. Human evaluations of the generated commentaries indicate they are of reasonable quality and in some cases even on par with those produced by humans.
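The alignment idea described above can be sketched as a toy EM-style loop: each comment is paired with its candidate events, an E-step scores each candidate under the current word-given-event model, and an M-step re-estimates the model from the fractional alignments. All comments, event names, and probabilities below are invented for illustration; this is a minimal sketch of the general EM-alignment idea, not the system's actual model.

```python
from collections import defaultdict

# Toy data: each comment (a bag of words) paired with the set of
# candidate event types that occurred around the time it was uttered.
# All words and event names here are invented for illustration.
data = [
    ({"purple", "team", "passes"}, {"pass", "kick"}),
    ({"purple", "passes", "again"}, {"pass"}),
    ({"big", "kick", "downfield"}, {"kick", "pass"}),
    ({"another", "kick"}, {"kick"}),
]

events = {e for _, cands in data for e in cands}
vocab = {w for words, _ in data for w in words}

# Uniform initialization of P(word | event).
p = {e: {w: 1.0 / len(vocab) for w in vocab} for e in events}

for _ in range(20):
    counts = {e: defaultdict(float) for e in events}
    for words, cands in data:
        # E-step: posterior over this comment's candidate events.
        scores = {}
        for e in cands:
            s = 1.0
            for w in words:
                s *= p[e][w]
            scores[e] = s
        z = sum(scores.values()) or 1.0
        # Accumulate fractional word-event co-occurrence counts.
        for e in cands:
            for w in words:
                counts[e][w] += scores[e] / z
    # M-step: renormalize into a new P(word | event), with smoothing.
    for e in events:
        total = sum(counts[e].values()) + 0.01 * len(vocab)
        for w in vocab:
            p[e][w] = (counts[e][w] + 0.01) / total

# The word "kick" ends up most strongly associated with kick events.
best = max(events, key=lambda e: p[e]["kick"])
print(best)
```

Repeating the E- and M-steps jointly resolves the ambiguous pairings and trains the grounded language model, which is the essence of the approach described above.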
For the sportscasting task, while each comment could be aligned to one of several events, the level of ambiguity was low enough that we could enumerate all the possible alignments. However, it is not always possible to restrict the set of possible alignments to such a small number. Thus, we present another system that allows each sentence to be aligned to one of exponentially many connected subgraphs without explicitly enumerating them. The system first learns a lexicon and uses it to prune the nodes in the graph that are unrelated to the words in the sentence. By only observing how humans follow navigation instructions, the system was able to infer the corresponding hidden navigation plans and parse previously unseen instructions in new environments for both English and Chinese data. With the rise in popularity of crowdsourcing, we also present results on collecting additional training data using Amazon's Mechanical Turk. Since our system only needs supervision in the form of language being used in relevant contexts, it is easy for virtually anyone to contribute to the training data. / text
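The lexicon-based pruning step can be illustrated with a small sketch: a learned lexicon licenses a set of plan labels for the words of an instruction, and nodes of the candidate meaning graph whose labels are not licensed are removed before searching over connected subgraphs. The graph, labels, and lexicon below are invented for illustration and do not reproduce the system's actual representation.

```python
# Hypothetical navigation-plan graph: node id -> semantic label.
# All labels, words, and the toy lexicon are invented assumptions.
nodes = {
    1: "TURN-LEFT", 2: "WALK", 3: "VERIFY-SOFA",
    4: "TURN-RIGHT", 5: "VERIFY-LAMP",
}
edges = {(1, 2), (2, 3), (3, 4), (4, 5)}

# Learned lexicon: word -> set of plan labels it can ground to.
lexicon = {
    "turn": {"TURN-LEFT", "TURN-RIGHT"},
    "left": {"TURN-LEFT"},
    "sofa": {"VERIFY-SOFA"},
    "walk": {"WALK"},
    "go": {"WALK"},
}

def prune(sentence):
    """Keep only plan nodes licensed by some word in the sentence."""
    licensed = set()
    for w in sentence.lower().split():
        licensed |= lexicon.get(w, set())
    kept = {n for n, label in nodes.items() if label in licensed}
    # Restrict the edge set to surviving nodes: candidate meanings
    # are now connected subgraphs of this much smaller subgraph.
    kept_edges = {(a, b) for a, b in edges if a in kept and b in kept}
    return kept, kept_edges

kept, kept_edges = prune("turn left and walk to the sofa")
print(sorted(kept))        # → [1, 2, 3, 4]
print(sorted(kept_edges))  # → [(1, 2), (2, 3), (3, 4)]
```

Pruning node 5 (VERIFY-LAMP) shrinks the space of connected subgraphs that must be considered, which is what lets the system avoid enumerating exponentially many alignments.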
|
403 |
Grounded language learning models for ambiguous supervision / Kim, Joo Hyun, active 2013 30 January 2014 (has links)
Communicating with natural language interfaces is a long-standing goal for artificial intelligence (AI) agents. One core issue toward this goal is "grounded" language learning, the process of learning the semantics of natural language with respect to relevant perceptual inputs. In order to ground the meanings of language in real-world situations, computational systems are trained with data in the form of natural language sentences paired with relevant but ambiguous perceptual contexts. With such ambiguous supervision, the system must resolve the ambiguity between a natural language (NL) sentence and a corresponding set of possible logical meaning representations (MRs).
In this thesis, we focus on devising effective models for simultaneously disambiguating such supervision and learning the underlying semantics of language to map NL sentences into proper logical MRs. We present probabilistic generative models for learning such correspondences along with a reranking model to improve the performance further.
First, we present a probabilistic generative model that learns the mappings from NL sentences into logical forms where the true meaning of each NL sentence is one of a handful of candidate logical MRs. It simultaneously disambiguates the meaning of each sentence in the training data and learns to probabilistically map an NL sentence to its corresponding MR form, depicted in a single tree structure. We perform evaluations on the RoboCup sportscasting corpus, showing that our model is more effective than those proposed by previous researchers.
Next, we describe two PCFG induction models for grounded language learning that extend the previous grounded language learning model of Börschinger, Jones, and Johnson (2011). Börschinger et al.’s approach works well in situations of limited ambiguity, such as in the sportscasting task. However, it does not scale well to highly ambiguous situations when there are large sets of potential meaning possibilities for each sentence, such as in the navigation instruction following task first studied by Chen and Mooney (2011). The two models we present overcome such limitations by employing a learned semantic lexicon as a basic correspondence unit between NL and MR for PCFG rule generation.
Finally, we present a method of adapting discriminative reranking to grounded language learning in order to improve the performance of our proposed generative models. Although such generative models are easy to implement and intuitive, they do not always perform best, since they maximize the joint probability of the data and the model rather than directly maximizing the conditional probability of interest. Because we do not have gold-standard references for training a secondary conditional reranker, we incorporate weak supervision in the form of evaluations against the perceptual world during the process of improving model performance.
All these approaches are evaluated on two publicly available domains that have been used extensively in other grounded language learning studies. Our methods demonstrate consistently improved performance over previous work on these domains across different languages, indicating that they are language-independent and can be applied to other grounded learning problems as well. Further possible applications of the presented approaches include summarized machine translation tasks and learning from real perceptual data assisted by computer vision and robotics. / text
|
404 |
Retrieving information from heterogeneous freight data sources to answer natural language queries / Seedah, Dan Paapanyin Kofi 09 February 2015 (has links)
The ability to retrieve accurate information from databases without an extensive knowledge of the contents and organization of each database is extremely beneficial to the dissemination and utilization of freight data. The challenges, however, are: 1) correctly identifying only the relevant information and keywords from questions when dealing with multiple sentence structures, and 2) automatically retrieving, preprocessing, and understanding multiple data sources to determine the best answer to a user's query. Current named entity recognition systems have the ability to identify entities but require an annotated corpus for training, which does not currently exist in the field of transportation planning. A hybrid approach which combines multiple models to classify specific named entities was therefore proposed as an alternative. The retrieval and classification of freight-related keywords facilitated the process of finding which databases are capable of answering a question. Values in data dictionaries can be queried by mapping keywords to data element fields in various freight databases using ontologies. A number of challenges still arise as a result of different entities sharing the same names, the same entity having multiple names, and differences in classification systems. Dealing with these ambiguities is required to accurately determine which database provides the best answer from the list of applicable sources. This dissertation 1) develops an approach to identifying and classifying keywords from freight-related natural language queries, 2) develops a standardized knowledge representation of freight data sources using an ontology that both computer systems and domain experts can utilize to identify relevant freight data sources, and 3) provides recommendations for addressing ambiguities in freight-related named entities.
Finally, the use of knowledge base expert systems to intelligently sift through data sources to determine which ones provide the best answer to a user’s question is proposed. / text
|
405 |
Flexible semantic matching of rich knowledge structures / Yeh, Peter Zei-Chan 28 August 2008 (has links)
Not available / text
|
406 |
Following natural language route instructions / MacMahon, Matthew Tierney 28 August 2008 (has links)
Following natural language instructions requires transforming language into situated conditional procedures; robustly following instructions, despite the director's natural mistakes and omissions, requires the pragmatic combination of language, action, and domain knowledge. This dissertation demonstrates a software agent that parses, models, and executes human-written natural language instructions to accomplish complex navigation tasks. We compare its performance against that of people following the same instructions. By selectively removing various syntactic, semantic, and pragmatic abilities, this work empirically measures how often these abilities are necessary to correctly navigate along extended routes through unknown, large-scale environments to novel destinations. To study how route instructions are written and followed, this work presents a new corpus of 1520 free-form instructions from 30 directors for 252 routes in three virtual environments. 101 other people followed these instructions and rated them for quality, successfully reaching and identifying the destination on only approximately two-thirds of the trials. Our software agent, MARCO, followed the same instructions in the same environments with a success rate approaching human levels. Overall, instructions subjectively rated 4 or better out of 6 comprise just over half of the corpus; MARCO performs at 88% of human performance on these instructions. MARCO's performance was a strong predictor of human performance and ratings of individual instructions. Ablation experiments demonstrate that implicit procedures are crucial for following verbal instructions using an approach integrating language, knowledge, and action. Other experiments measure the performance impact of linguistic, execution, and spatial abilities in successfully following natural language route instructions.
|
407 |
Statistical Text Analysis for Social Science / O'Connor, Brendan T. 01 August 2014 (has links)
What can text corpora tell us about society? How can automatic text analysis algorithms efficiently and reliably analyze the social processes revealed in language production? This work develops statistical text analyses of dynamic social and news media datasets to extract indicators of underlying social phenomena, and to reveal how social factors guide linguistic production. This is illustrated through three case studies: first, examining whether sentiment expressed in social media can track opinion polls on economic and political topics (Chapter 3); second, analyzing how novel online slang terms can be very specific to geographic and demographic communities, and how these social factors affect their transmission over time (Chapters 4 and 5); and third, automatically extracting political events from news articles, to assist analyses of the interactions of international actors over time (Chapter 6). We demonstrate a variety of computational, linguistic, and statistical tools that are employed for these analyses, and also contribute MiTextExplorer, an interactive system for exploratory analysis of text data against document covariates, whose design was informed by the experience of researching these and other similar works (Chapter 2). These case studies illustrate recurring themes toward developing text analysis as a social science methodology: computational and statistical complexity, and domain knowledge and linguistic assumptions.
|
408 |
Word meaning in context as a paraphrase distribution : evidence, learning, and inference / Moon, Taesun, Ph. D. 25 October 2011 (has links)
In this dissertation, we introduce a graph-based model of instance-based usage meaning that is cast as a problem of probabilistic inference. The main aim of this model is to provide a flexible platform that can be used to explore multiple hypotheses about usage meaning computation. Our model takes up and extends the proposals of Erk and Pado [2007] and McCarthy and Navigli [2009] by representing usage meaning as a probability distribution over potential paraphrases. We use undirected graphical models to infer this probability distribution for every content word in a given sentence. Graphical models represent complex probability distributions through a graph. In the graph, nodes stand for random variables, and edges stand for direct probabilistic interactions between them; the lack of an edge between two variables reflects an independence assumption. In our model, we represent each content word of the sentence through two adjacent nodes: the observed node represents the surface form of the word itself, and the hidden node represents its usage meaning. The distribution over values that we infer for the hidden node is a paraphrase distribution for the observed word. To encode the fact that lexical semantic information is exchanged between syntactic neighbors, the graph contains edges that mirror the dependency graph for the sentence. Further knowledge sources that influence the hidden nodes are represented through additional edges that, for example, connect to the document topic. The integration of adjacent knowledge sources is accomplished in a standard way by multiplying factors and marginalizing over variables.
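The inference setup described above (hidden paraphrase nodes linked by dependency edges, with factors multiplied together and other variables marginalized out) can be illustrated on a two-word toy example. The candidate paraphrases and factor values below are invented for illustration; a real model would learn them and would use standard graphical-model inference rather than brute-force enumeration.

```python
import itertools

# Toy factor graph for "shed tears": each observed word has a hidden
# paraphrase node, and one edge mirrors the dependency link between
# them. All candidate sets and factor values are invented.
candidates = {
    "shed": ["drop", "get rid of"],
    "tears": ["teardrops", "rips"],
}

# Unary factors: compatibility of a paraphrase with its observed word.
unary = {
    ("shed", "drop"): 0.6, ("shed", "get rid of"): 0.4,
    ("tears", "teardrops"): 0.5, ("tears", "rips"): 0.5,
}

# Pairwise factor along the dependency edge: paraphrase pairs that
# make sense together score higher.
pairwise = {
    ("drop", "teardrops"): 0.9, ("drop", "rips"): 0.1,
    ("get rid of", "teardrops"): 0.2, ("get rid of", "rips"): 0.8,
}

# Multiply factors over every joint assignment, then marginalize out
# the neighbor to get a paraphrase distribution for "tears".
marg = {p: 0.0 for p in candidates["tears"]}
z = 0.0
for h1, h2 in itertools.product(candidates["shed"], candidates["tears"]):
    score = unary[("shed", h1)] * unary[("tears", h2)] * pairwise[(h1, h2)]
    marg[h2] += score
    z += score
for p in marg:
    marg[p] /= z

# The dependency neighbor pulls "tears" toward "teardrops".
print(max(marg, key=marg.get))
```

Even though both paraphrases of "tears" are equally likely in isolation (unary factors of 0.5 each), the pairwise factor along the dependency edge shifts the inferred paraphrase distribution, which is exactly the effect the model exploits.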
Evaluating on a paraphrasing task, we find that our model outperforms the current state-of-the-art usage vector model [Thater et al., 2010] on all parts of speech except verbs, where the previous model wins by a small margin. But our main focus is not on the numbers but on the fact that our model is flexible enough to encode different hypotheses about usage meaning computation. In particular, we concentrate on five questions (with minor variants):
- Nonlocal syntactic context: Existing usage vector models only use a word's direct syntactic neighbors for disambiguation or for inferring some other meaning representation. Would it help to instead have contextual information "flow" along the entire dependency graph, with each word's inferred meaning relying on the paraphrase distributions of its neighbors?
- Influence of collocational information: In some cases, it is intuitively plausible to use the selectional preference of a neighboring word towards the target to determine its meaning in context. How does building selectional preferences into the model affect performance?
- Non-syntactic bag-of-words context: To what extent can non-syntactic information in the form of bag-of-words context help in inferring meaning?
- Effects of parametrization: We experiment with two transformations of the maximum likelihood estimate (MLE): one interpolates various MLEs, and the other transforms the MLE by exponentiating pointwise mutual information. Which performs better?
- Type of hidden nodes: Our model posits a tier of hidden nodes immediately adjacent to the surface tier of observed words to capture dynamic usage meaning. We examine two variants of the model: in one, the hidden nodes take actual words as values; in the other, they take nameless indices as values. The former has the benefit of interpretability, while the latter allows more standard parameter estimation.
Portions of this dissertation are derived from joint work between the author and Katrin Erk [submitted]. / text
|
409 |
Design, construction and evaluation of a Greek grammar checker / Γάκης, Παναγιώτης 07 May 2015 (has links)
The aim of this doctoral thesis is the design and implementation of an easy-to-use electronic tool (a grammar checker) that performs morphological and syntactic analysis of phrases, sentences and words in order to correct grammatical and stylistic errors. The basis for addressing all of these issues is the settings of the Grammar (an adaptation of Manolis Triantafyllidis' Little Modern Greek Grammar), which since 1976 has been the official grammatical codification of Modern Greek. (The -minimal- differences of the new school grammar for the fifth and sixth grades of primary school were not taken into account in preparing this thesis.)
Given the absence of such a tool for Greek, development of the product is based first on the detailed recording, analysis and standardization of the errors of written language, and then on selecting software that formally describes the grammatical errors. The thesis presents statistics relating the errors to gender and to the textual genre of the texts in which they occur, as well as to their recognition by students.
This study presents the implementation formalism used (Mnemosyne) and the particularities of the Greek language that hinder its computational processing. This formalism has already been used for recognizing multi-word terms and for implementing electronic tools (grammars) aimed at automatic information extraction. In this way, all users of the language (not only native speakers of Greek) can better understand not only the function of the various parts of the language system but also the way the mechanisms of the linguistic system operate during linguistic analysis.
The main areas of grammatical errors in which the grammar checker will intervene are:
1) accentuation and punctuation issues,
2) final -ν,
3) stylistic issues (verb forms in cases of double forms, inflectional forms),
4) issues of the established spelling of Modern Greek words or phrases (stereotyped phrases, learned forms),
5) inflection issues (incorrect inflected forms of nouns or verbs, through either ignorance or confusion),
6) vocabulary issues (cases of conceptual confusion, Greek renderings of foreign words, redundancy, use of an incorrect word or phrase),
7) orthographic-confusion issues (homophonous words),
8) agreement issues (disagreement between elements of the noun phrase or the verb phrase),
9) syntax issues (verb syntax), and
10) error cases that require more specialized handling of spelling correction.
The basis for implementing the lexicon is the electronic morphological lexicon Neurolingo Lexicon, a lexicon built on a five-level model with at least 90,000 lemmas producing 1,200,000 inflected forms. These forms carry information that is: a) orthographic (the correct spelling of the inflected form), b) morphemic (the kinds of morphemes that make up the inflected form: prefix, stem, suffix, ending), c) morphosyntactic (part of speech, gender, case, person, etc.), d) stylistic (the stylistic characteristics of the form: colloquial, learned, etc.), and e) terminological (additional information on whether the form belongs to a specialized vocabulary). This lexicon is also the cornerstone supporting the grammar checker. The value and role of the morphological lexicon in supporting a grammar checker are self-evident, since morphology is the first level of language examined and the syntactic level rests on and depends on the morphology of words.
A major problem was lexical ambiguity, a product of the rich morphology of the Greek language. Given this problem, the tagger was designed on purely linguistic criteria for those cases where lexical ambiguity was an obstacle to capturing errors in the use of Greek.
Texts that had already been corrected by a human were given to the grammar checker for correction. To a very large extent the checker approximates the human correction, differing only on errors that concern the cohesion of the text and, by extension, meaning errors in general. / The aim of this thesis is to design and then implement a useful and user-friendly electronic tool (grammar checker) which carries out morphological and syntactic analysis of sentences, phrases and words in order to correct syntactic, grammatical and stylistic errors. Our foundation for dealing with all these issues is the settings of the Grammar (an adaptation of Manolis Triantafyllidis' Little Modern Greek Grammar), which has been the formally constituted, codified grammar of Modern Greek since 1976. (The -minimal- differences that appear in the new Greek grammar book for the fifth and sixth grades of elementary school have not been taken into account.)
Bearing in mind that no such tool exists for the Greek language, the development of the product is based on the detailed recording, analysis and formalization of the errors of written language. Additionally, the appropriate software was chosen in order to describe the grammatical errors. The statistics in this thesis demonstrate the link between the errors and the students' gender, between the errors and the textual type in which they appear, and, finally, between the errors and their recognition by the students.
This research presents the formalism used (Mnemosyne) and the particularities of the Greek language that hinder its computational processing. The formalism has already been used to identify multi-word terms and to build phrase grammars aimed at automatic information extraction. In this way, all speakers (native or not) will be able to better understand not only the function of the various parts of the system of the Greek language but also the way the mechanisms of linguistic analysis operate during language processing.
The main areas of grammatical errors in which the grammar checker will intervene are:
1) Accentuation and punctuation issues,
2) Final -n,
3) Stylistic issues (verb forms in cases of double forms, inflectional forms),
4) Standardization issues (stereotyped phrases, learned forms),
5) Inflection issues (incorrect inflected forms of nouns or verbs, either through ignorance or because of confusion),
6) Vocabulary issues (cases of conceptual confusion, Greek translation of foreign words, redundancy and use of incorrect word or phrase),
7) Orthographic confusion issues (homophonous words),
8) Agreement issues (cases of elements of nominal or verbal phrase disagreement),
9) Syntax issues (verbs) and
10) Cases of errors that require more specialized management of the spelling correction.
The basis for the implementation is the electronic morphological lexicon (Neurolingo Lexicon), a five-level lexicon consisting of at least 90,000 lemmas that produce ~1,200,000 inflected forms. These forms carry information: a) orthographic (the correct spelling of the inflected form), b) morphemic (the type of morphemes: prefix, stem, suffix, ending), c) morphosyntactic (part of speech, gender, case, person, etc.), d) stylistic (the stylistic characteristics of the form: oral, archaic, etc.) and e) terminological (additional information about whether the word form is part of a special vocabulary). This electronic lexicon is the foundation that supports the grammar checker. The value and key role of the morphological lexicon in supporting the Greek grammar checker are obvious, since morphology is the first level at which the language is examined, and the syntactic level is based on and depends on the morphology of words.
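As a rough illustration of the five information levels such a lexicon entry carries, one inflected form might be represented as follows. The field names and the sample entry are assumptions for illustration only, not the Neurolingo Lexicon's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical shape of one inflected-form entry, mirroring the five
# information levels described above. All names here are assumptions.
@dataclass
class InflectedForm:
    spelling: str          # a) orthographic: correct spelling of the form
    morphemes: dict        # b) morphemic: prefix / stem / suffix / ending
    morphosyntax: dict     # c) part of speech, gender, case, person, ...
    style: list = field(default_factory=list)        # d) oral, archaic, ...
    terminology: list = field(default_factory=list)  # e) special vocabulary

# Sample entry: the genitive singular of the Greek noun for "person".
entry = InflectedForm(
    spelling="ανθρώπου",
    morphemes={"stem": "ανθρωπ", "ending": "ου"},
    morphosyntax={"pos": "noun", "gender": "masc",
                  "case": "gen", "number": "sg"},
)

# A grammar checker can key agreement rules off the morphosyntax level:
print(entry.morphosyntax["case"])  # → gen
```

This shows why the morphological lexicon can support syntactic checking: agreement and government rules operate directly over the morphosyntactic features each inflected form carries.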
A major problem in processing the natural language was lexical ambiguity, a product of the rich morphology of the Greek language. Given this problem, we designed the Greek tagger on purely linguistic criteria for those cases where lexical ambiguity impeded the detection of errors in the use of the Greek language.
The texts that were given to the grammar checker for correction had also been corrected by a person. In a very large percentage of cases the grammar checker approximates the accuracy of the human corrector; only for mistakes concerning the coherence of the text, or for meaning errors, was the human the only accurate corrector.
|
410 |
Automated Discovery and Analysis of Social Networks from Threaded Discussions / Gruzd, Anatoliy A, Haythornthwaite, Caroline January 2008 (has links)
To gain greater insight into the operation of online social networks, we applied Natural Language Processing (NLP) techniques to text-based communication to identify and describe underlying social structures in online communities. This paper presents our approach and preliminary evaluation for content-based, automated discovery of social networks. Our research question is: What syntactic and semantic features of postings in threaded discussions help uncover explicit and implicit ties between network members, and which features provide a reliable estimate of the strength of interpersonal ties among the network members? To evaluate our automated procedures, we compare the results from the NLP processes with social networks built from basic who-to-whom data, and a sample of hand-coded data derived from a close reading of the text.
For our test case, and as part of ongoing research on networked learning, we used the archive of threaded discussions collected over eight iterations of an online graduate class. We first associate personal names and nicknames mentioned in the postings with class participants. Next we analyze the context in which each name occurs in the postings to determine whether or not there is an interpersonal tie between a sender of the posting and a person mentioned in it. Because information exchange is a key factor in the operation and success of a learning community, we estimate and assign weights to the ties by measuring the amount of information exchanged between each pair of the nodes; information in this case is operationalized as counts of important concept terms in the postings as derived through the NLP analyses. Finally, we compare the resulting network(s) against those derived from other means, including basic who-to-whom data derived from posting sequences (e.g., whose postings follow whose). In this comparison we evaluate what is gained in understanding network processes by our more elaborate analyses.
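The tie-weighting scheme described above (counting important concept terms exchanged between each pair of participants) can be sketched as follows. The postings, names, and concept-term list are invented for illustration; in the study, addressees come from name mentions found by NLP and concept terms from the content analyses.

```python
from collections import Counter

# Toy postings: (sender, addressee, text). All names, texts, and the
# concept-term list below are invented for illustration.
postings = [
    ("alice", "bob", "the regression model needs more training data"),
    ("bob", "alice", "agreed, the model overfits the training data"),
    ("carol", "bob", "see you at the meeting"),
    ("alice", "bob", "cross-validation would help the model"),
]
concept_terms = {"regression", "model", "training", "data",
                 "cross-validation"}

# Weight each undirected tie by the number of concept-term tokens
# exchanged between the pair, as a proxy for information exchange.
weights = Counter()
for sender, addressee, text in postings:
    pair = tuple(sorted((sender, addressee)))
    n_concepts = sum(1 for tok in text.lower().split()
                     if tok in concept_terms)
    weights[pair] += n_concepts

for pair, w in sorted(weights.items()):
    print(pair, w)
```

Here the alice-bob tie accumulates weight from substantive exchanges while the carol-bob tie, consisting only of small talk, gets a weight of zero; this is how content-based weights can differ from raw who-to-whom posting counts.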
|