301 |
A study of linguistic pattern recognition and sensor fusion /Auephanwiriyakul, Sansanee, January 2000 (has links)
Thesis (Ph. D.)--University of Missouri-Columbia, 2000. / Typescript. Vita. Includes bibliographical references (leaves 210-216). Also available on the Internet.
|
303 |
Processing information structure : evidence from comprehension and production /Cowles, Heidi Wind. January 2003 (has links)
Thesis (Ph. D.)--University of California, San Diego, 2003. / Vita. Includes bibliographical references (leaves 319-329).
|
304 |
Augmenting environments with multimodal interaction /McGee, David R. January 2003 (has links)
Thesis (Ph. D.)--OGI School of Science & Engineering at OHSU, 2003. / Includes bibliographical references (leaves 112-119).
|
305 |
Semi-automated annotation and active learning for language documentation /Palmer, Alexis Mary 03 April 2013 (has links)
By the end of this century, half of the approximately 6000 extant languages will cease to be transmitted from one generation to the next. The field of language documentation seeks to make a record of endangered languages before they reach the point of extinction, while they are still in use. The work of documenting and describing a language is difficult and extremely time-consuming, and resources are severely limited. Developing efficient methods for making lasting records of languages may increase the amount of documentation achieved within budget restrictions. This thesis approaches the problem from the perspective of computational linguistics, asking whether and how automated language processing can reduce human annotation effort when very little labeled data is available for model training. The task addressed is morpheme labeling for the Mayan language Uspanteko, and we test the effectiveness of two complementary types of machine support: (a) learner-guided selection of examples for annotation (active learning); and (b) annotator access to the predictions of the learned model (semi-automated annotation). Active learning (AL) has been shown to increase the efficacy of annotation effort for many different tasks. Most of the reported results, however, come from studies that simulate annotation, often assuming a single, infallible oracle. In our studies, crucially, annotation is not simulated but performed by human annotators. We measure and record the time spent on each annotation, which in turn allows us to evaluate the effectiveness of machine support in terms of actual annotation effort. We report three main findings with respect to active learning. First, for efficiency gains reported from active learning to be meaningful in realistic annotation scenarios, the cost measurement used to gauge those gains must faithfully reflect the actual annotation cost. Second, the relative effectiveness of different selection strategies in AL seems to depend in part on the characteristics of the annotator, so it is important to model the individual oracle or annotator when choosing a selection strategy. Third, the cost of labeling a given instance from a sample is not a static value but depends on the context in which it is labeled. We report two main findings with respect to semi-automated annotation. First, machine label suggestions have the potential to increase annotator efficacy, but the degree of their impact varies by annotator, with annotator expertise a likely contributing factor. At the same time, we find that implementation and interface must be handled very carefully if we are to accurately measure gains from semi-automated annotation. Together these findings suggest that simulated annotation studies fail to model crucial human factors inherent in applying machine learning strategies in real annotation settings. / text
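A minimal sketch of the kind of active-learning loop described above, not taken from the thesis: pool-based uncertainty sampling with a generic classifier, where each annotation is timed so that efficiency can be reported in actual annotation cost rather than number of instances. The classifier choice, the least-confidence criterion, and the ask_human callback are illustrative assumptions.

import time
import numpy as np
from sklearn.linear_model import LogisticRegression

def least_confident_index(model, candidates):
    # Pick the candidate whose top predicted label has the lowest probability.
    confidence = model.predict_proba(candidates).max(axis=1)
    return int(np.argmin(confidence))

def active_learning_loop(seed_X, seed_y, pool_X, ask_human, rounds=50):
    # seed_X/seed_y: small labeled seed set; pool_X: unlabeled candidates.
    # ask_human: callback that shows an instance to a real annotator and
    # returns its label -- no simulated oracle.
    pool_X = np.asarray(pool_X)
    X, y = list(seed_X), list(seed_y)
    pool = list(range(len(pool_X)))
    seconds_per_label = []                  # actual annotation cost, per instance
    model = LogisticRegression(max_iter=1000).fit(X, y)
    for _ in range(rounds):
        if not pool:
            break
        chosen = pool[least_confident_index(model, pool_X[pool])]
        start = time.time()
        label = ask_human(pool_X[chosen])   # human annotation, timed
        seconds_per_label.append(time.time() - start)
        X.append(pool_X[chosen])
        y.append(label)
        pool.remove(chosen)
        model.fit(X, y)                     # retrain on the growing labeled set
    return model, seconds_per_label

Swapping least_confident_index for a different selection function (random, margin-based, query-by-committee) is the kind of comparison the second finding above concerns, and the per-instance timings are what make such comparisons meaningful in cost terms.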
|
306 |
Following natural language route instructions /MacMahon, Matthew Tierney 28 August 2008 (has links)
Following natural language instructions requires transforming language into situated conditional procedures; robustly following instructions, despite the director's natural mistakes and omissions, requires the pragmatic combination of language, action, and domain knowledge. This dissertation demonstrates a software agent that parses, models, and executes human-written natural language instructions to accomplish complex navigation tasks, and compares its performance against that of people following the same instructions. By selectively removing various syntactic, semantic, and pragmatic abilities, this work empirically measures how often these abilities are necessary to correctly navigate along extended routes through unknown, large-scale environments to novel destinations. To study how route instructions are written and followed, this work presents a new corpus of 1520 free-form instructions from 30 directors for 252 routes in three virtual environments. Another 101 people followed these instructions and rated them for quality, successfully reaching and identifying the destination on only approximately two-thirds of the trials. Our software agent, MARCO, followed the same instructions in the same environments with a success rate approaching human levels. Overall, instructions subjectively rated 4 or better out of 6 comprise just over half of the corpus; on these instructions, MARCO performs at 88% of human performance. MARCO's performance was a strong predictor of human performance on, and ratings of, individual instructions. Ablation experiments demonstrate that implicit procedures are crucial for following verbal instructions using an approach that integrates language, knowledge, and action. Other experiments measure the performance impact of linguistic, execution, and spatial abilities in successfully following natural language route instructions.
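As a hedged illustration of what a "situated conditional procedure" can look like in code, and not a description of MARCO itself, the sketch below executes a parsed route against a minimal perception/action interface; the World protocol, the action names, and the instruction tuples are all hypothetical.

from typing import List, Protocol, Tuple

class World(Protocol):
    # Minimal perception/action interface the instruction follower is situated in.
    def turn(self, direction: str) -> None: ...
    def move_forward(self) -> None: ...
    def at(self, landmark: str) -> bool: ...
    def can_move_forward(self) -> bool: ...

def follow(route: List[Tuple[str, str]], world: World, max_steps: int = 100) -> bool:
    # Execute steps such as ("turn", "left"), ("travel_until", "sofa"),
    # ("verify", "easel"); each step is conditioned on what is currently perceived.
    for action, arg in route:
        if action == "turn":
            world.turn(arg)
        elif action == "travel_until":
            steps = 0
            while not world.at(arg) and world.can_move_forward() and steps < max_steps:
                world.move_forward()
                steps += 1
            if not world.at(arg):
                return False            # the instruction could not be grounded
        elif action == "verify" and not world.at(arg):
            return False                # destination check failed
    return True

The conditional structure is the point: a step like "go until you see the sofa" cannot be executed as a fixed action sequence, which is why the implicit procedures and perception discussed above matter for instruction following.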
|
307 |
Automatic conversion from phonetic to orthographic transcription /Ρεντζεπόπουλος, Παναγιώτης 11 September 2009 (has links)
- / -
|
308 |
Computational syntactic processing of Modern Greek texts based on empirical methods /Μίχος, Στέφανος 22 September 2009 (has links)
- / -
|
309 |
Towards an intelligent fuzzy based multimodal two stage speech enhancement system /Abel, Andrew January 2013 (has links)
This thesis presents a novel two-stage multimodal speech enhancement system that makes use of both visual and audio information to filter speech, and explores the extension of this system with fuzzy logic to demonstrate proof of concept for an envisaged autonomous, adaptive, and context-aware multimodal system. The design of the proposed cognitively inspired framework is scalable, meaning that the techniques used in individual parts of the system can be upgraded and there is scope for the initial framework presented here to be expanded. In the proposed system, the concept of single-modality two-stage filtering is extended to include the visual modality. Noisy speech received by a microphone array is first pre-processed by visually derived Wiener filtering, which makes novel use of the Gaussian Mixture Regression (GMR) technique on associated visual speech information extracted with a state-of-the-art Semi Adaptive Appearance Models (SAAM) based lip-tracking approach. This pre-processed speech is then enhanced further by audio-only beamforming using a state-of-the-art Transfer Function Generalised Sidelobe Canceller (TFGSC) approach. The result is a system designed to function in challenging noisy speech environments, evaluated using speech sentences from different speakers in the GRID corpus mixed with a range of noise recordings. Both objective and subjective test results (employing the widely used Perceptual Evaluation of Speech Quality (PESQ) measure, a composite objective measure, and subjective listening tests) show that this initial system delivers very encouraging results when filtering speech mixtures in difficult reverberant speech environments. Some limitations of this initial framework are identified, and the extension of the multimodal system is explored through the development of a fuzzy-logic-based framework and a proof-of-concept demonstration. Results show that the proposed autonomous, adaptive, and context-aware multimodal framework is capable of delivering very positive results in difficult noisy speech environments, making cognitively inspired use of audio and visual information depending on environmental conditions. Finally, some concluding remarks are made along with proposals for future work.
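A hedged sketch of the two-stage idea, not the thesis implementation: stage one applies a Wiener-style spectral gain whose clean-speech power estimate would, in the full system, be predicted from lip features via GMR; stage two is a plain delay-and-sum beamformer standing in for the TF-GSC. The function names, sampling rate, STFT settings, and the assumption that the clean-power estimate matches the STFT shape are all illustrative.

import numpy as np
from scipy.signal import stft, istft

def visually_informed_wiener(noisy, clean_power_est, fs=16000, nperseg=512):
    # Stage 1: spectral Wiener gain. clean_power_est is an external estimate of
    # the clean-speech power spectrum (assumed to match the STFT shape), which
    # in the thesis framework would come from visual speech features via GMR.
    _, _, Y = stft(noisy, fs=fs, nperseg=nperseg)
    noise_power = np.maximum(np.abs(Y) ** 2 - clean_power_est, 1e-10)
    gain = clean_power_est / (clean_power_est + noise_power)
    _, enhanced = istft(gain * Y, fs=fs, nperseg=nperseg)
    return enhanced

def delay_and_sum(channels, delays_samples):
    # Stage 2: a crude audio-only beamformer -- align each microphone channel by
    # an integer sample delay and average (a simple stand-in for the TF-GSC).
    aligned = [np.roll(x, -d) for x, d in zip(channels, delays_samples)]
    n = min(len(a) for a in aligned)
    return np.mean([a[:n] for a in aligned], axis=0)

In an arrangement like the one described above, the Wiener stage would be applied to each microphone channel before the channels are combined by the beamformer.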
|
310 |
Dependency based CCG derivation and application /Brewster, Joshua Blake 21 February 2011 (has links)
This paper presents and evaluates an algorithm to translate a dependency treebank into a Combinatory Categorial Grammar (CCG) lexicon. The dependency relations between a head and a child in a dependency tree are exploited to determine how CCG categories should be derived by making a functional distinction between adjunct and argument relations. Derivations for an English (CoNLL08 shared task treebank) and for an Italian (Turin University Treebank) dependency treebank are performed, each requiring a number of preprocessing steps.
To determine the adequacy of the resulting lexicons, dubbed DepEngCCG and DepItCCG, they are compared via two methods to preexisting CCG lexicons derived from similar or equivalent sources (CCGbank and TutCCG). First, a number of metrics are used to compare the state of the lexicons, including category complexity and category growth. Second, to measure the potential applicability of the lexicons in NLP tasks, the derived English CCG lexicon and CCGbank are compared in a sentiment analysis task. While the numeric measurements show promising results for the quality of the lexicons, the sentiment analysis task fails to generate a usable comparison. / text
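A hedged sketch of the core idea, not the paper's actual algorithm: walk a dependency tree and assign CCG categories, letting argument relations add argument slots to the head's category while adjunct relations map the dependent to a modifier category (X/X or X\X). The relation labels, the NP default for arguments, and the flat treatment of adjunct subtrees are simplifications.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

ARGUMENT_RELS = {"SBJ", "OBJ"}   # assumed argument relation labels;
                                 # everything else is treated as an adjunct

@dataclass
class Token:
    idx: int                     # surface position, used to pick slash direction
    word: str
    rel: str                     # dependency relation to the governing head
    children: List["Token"] = field(default_factory=list)

def derive(head: Token, base: str = "S",
           lexicon: Optional[Dict[str, str]] = None) -> Dict[str, str]:
    # Assign `head` a category built over `base`, then recurse into arguments.
    if lexicon is None:
        lexicon = {}
    cat = base
    for child in head.children:
        if child.rel in ARGUMENT_RELS:
            slash = "\\" if child.idx < head.idx else "/"   # seek arg to left/right
            cat = f"({cat}){slash}NP"
            derive(child, base="NP", lexicon=lexicon)
        else:
            slash = "/" if child.idx < head.idx else "\\"   # pre- vs post-modifier
            lexicon[child.word] = f"{base}{slash}{base}"
            # adjunct subtrees are skipped in this simplified sketch
    lexicon[head.word] = cat
    return lexicon

For a tree rooted at "ate" with dependents "John" (SBJ), "apples" (OBJ), and "quickly" (adjunct), this sketch produces roughly NP for the nouns, S\S for the post-verbal adverb, and ((S)\NP)/NP for the verb, i.e. the transitive-verb category modulo redundant brackets.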
|