  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

A translator for languages generated by context-free grammars

Gillespie, William Gordon January 1974 (has links)
No description available.
42

電腦輔助翻譯的多元進路研究: 以香港譯員訓練為例. / Variational approach to computer-aided translation studies: translators' training in Hong Kong as a case study / CUHK electronic theses & dissertations collection / Dian nao fu zhu fan yi de duo yuan jin lu yan jiu: yi Xianggang yi yuan xun lian wei li.

January 2013 (has links)
Yuan Xiaolei (袁晓蕾). / Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 129-144) / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in Chinese and English.
43

Automatic construction of English/Chinese parallel corpus.

January 2001 (has links)
Li Kar Wing. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 88-96). / Abstracts in English and Chinese. / ABSTRACT --- p.i / ACKNOWLEDGEMENTS --- p.v / LIST OF TABLES --- p.viii / LIST OF FIGURES --- p.ix / CHAPTERS / Chapter 1. --- INTRODUCTION --- p.1 / Chapter 1.1 --- Application of corpus-based techniques --- p.2 / Chapter 1.1.1 --- Machine Translation (MT) --- p.2 / Chapter 1.1.1.1 --- Linguistic --- p.3 / Chapter 1.1.1.2 --- Statistical --- p.4 / Chapter 1.1.1.3 --- Lexicon construction --- p.4 / Chapter 1.1.2 --- Cross-lingual Information Retrieval (CLIR) --- p.6 / Chapter 1.1.2.1 --- Controlled vocabulary --- p.6 / Chapter 1.1.2.2 --- Free text --- p.7 / Chapter 1.1.2.3 --- Application corpus-based approach in CLIR --- p.9 / Chapter 1.2 --- Overview of linguistic resources --- p.10 / Chapter 1.3 --- Written language corpora --- p.12 / Chapter 1.3.1 --- Types of corpora --- p.13 / Chapter 1.3.2 --- Limitation of comparable corpora --- p.16 / Chapter 1.4 --- Outline of the dissertation --- p.17 / Chapter 2. --- LITERATURE REVIEW --- p.19 / Chapter 2.1 --- Research in automatic corpus construction --- p.20 / Chapter 2.2 --- Research in translation alignment --- p.25 / Chapter 2.2.1 --- Sentence alignment --- p.27 / Chapter 2.2.2 --- Word alignment --- p.28 / Chapter 2.3 --- Research in alignment of sequences --- p.33 / Chapter 3. --- ALIGNMENT AT WORD LEVEL AND CHARACTER LEVEL --- p.35 / Chapter 3.1 --- Title alignment --- p.35 / Chapter 3.1.1 --- Lexical features --- p.37 / Chapter 3.1.2 --- Grammatical features --- p.40 / Chapter 3.1.3 --- The English/Chinese alignment model --- p.41 / Chapter 3.2 --- Alignment at word level and character level --- p.42 / Chapter 3.2.1 --- Alignment at word level --- p.42 / Chapter 3.2.2 --- Alignment at character level: Longest matching --- p.44 / Chapter 3.2.3 --- Longest common subsequence(LCS) --- p.46 / Chapter 3.2.4 --- Applying LCS in the English/Chinese alignment model --- p.48 / Chapter 3.3 --- Reduce overlapping ambiguity --- p.52 / Chapter 3.3.1 --- Edit distance --- p.52 / Chapter 3.3.2 --- Overlapping in the algorithm model --- p.54 / Chapter 4. --- ALIGNMENT AT TITLE LEVEL --- p.59 / Chapter 4.1 --- Review of score functions --- p.59 / Chapter 4.2 --- The Score function --- p.60 / Chapter 4.2.1 --- (C matches E) and (E matches C) --- p.60 / Chapter 4.2.2 --- Length similarity --- p.63 / Chapter 5. --- EXPERIMENTAL RESULTS --- p.69 / Chapter 5.1 --- Hong Kong government press release articles --- p.69 / Chapter 5.2 --- Hang Seng Bank economic monthly reports --- p.76 / Chapter 5.3 --- Hang Seng Bank press release articles --- p.78 / Chapter 5.4 --- Hang Seng Bank speech articles --- p.81 / Chapter 5.5 --- Quality of the collections and future work --- p.84 / Chapter 6. --- CONCLUSION --- p.87 / Bibliography
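
As the table of contents indicates, the character-level alignment in this work is built on the longest common subsequence (LCS). The following is a minimal, illustrative LCS sketch in Python, not the thesis's implementation; the similarity function, its normalisation and the example titles are assumptions added here.

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence of two strings,
    computed with the standard O(len(a) * len(b)) dynamic program."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def title_similarity(candidate: str, reference: str) -> float:
    """Hypothetical character-level score: LCS length normalised by the
    shorter title, so 1.0 means one title is a subsequence of the other."""
    if not candidate or not reference:
        return 0.0
    return lcs_length(candidate, reference) / min(len(candidate), len(reference))

# Toy usage: comparing two candidate titles for alignment.
print(title_similarity("Hang Seng Bank economic monthly report",
                       "Hang Seng Bank monthly economic report"))
```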
44

Investigating the practicability of using CAT system and TM: a case study of C-E translation of informative text by SDL Trados 2015

Kuan, Nga Iam, Joanna January 2018 (has links)
University of Macau / Faculty of Arts and Humanities. / Department of English
45

The word segmentation & part-of-speech tagging system for the modern Chinese. / Word segmentation and part-of-speech tagging system for the modern Chinese

January 1994 (has links)
Liu Hon-lung. / Title also in Chinese characters. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1994. / Includes bibliographical references (leaves [58-59]). / Chapter 1. --- Introduction --- p.1 / Chapter 2. --- "Word Segmentation and Part-of-Speech Tagging: Techniques, Current Researches and The Embraced Problems" --- p.6 / Chapter 2.1. --- Various Methods on Word Segmentation and Part-of-Speech Tagging --- p.6 / Chapter 2.2. --- Current Researches on Word Segmentation and Part-of-Speech Tagging --- p.9 / Chapter 2.3. --- Embraced Problems in Word Segmentation and Part-of-Speech Tagging --- p.9 / Chapter 3. --- Branch-and-Bound Algorithm for Combinational Optimization of the Probabilistic Scoring Function --- p.15 / Chapter 3.1. --- Definition of Word Segmentation and Part-of-Speech Tagging --- p.15 / Chapter 3.2. --- Framework --- p.17 / Chapter 3.3. --- "Weight Assignment, Intermediate Score Computation & Optimization" --- p.20 / Chapter 4. --- Implementation Issues of the Proposed Word Segmentation and Part-of-Speech Tagging System --- p.26 / Chapter 4.1. --- Design of System Dictionary and Data Structure --- p.30 / Chapter 4.2. --- Training Process --- p.33 / Chapter 4.3. --- Tagging Process --- p.35 / Chapter 4.4. --- Tagging Samples of the Word Segmentation & Part-of-Speech Tagging System --- p.39 / Chapter 5. --- Experiments on the Proposed Word Segmentation and Part-Of-Speech Tagging System --- p.41 / Chapter 5.1. --- Closed Test --- p.41 / Chapter 5.2. --- Open Test --- p.42 / Chapter 6. --- Testing and Statistics --- p.43 / Chapter 7. --- Conclusions and Discussions --- p.47 / References / Appendices / Appendix A: sysdict.tag Sample / Appendix B: econ.tag Sample / Appendix C: open.tag Sample / Appendix D: 漢語分詞及詞性標注系統 (Chinese Word Segmentation and Part-of-Speech Tagging System) for Windows / Appendix E: Neural Network
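
The outline above casts word segmentation and part-of-speech tagging as optimisation of a probabilistic scoring function solved with branch-and-bound. The sketch below illustrates only the segmentation half, and substitutes a simpler dynamic-programming maximisation for the branch-and-bound search; the toy lexicon, its probabilities and the function name are invented for illustration.

```python
import math

# Toy unigram lexicon with made-up probabilities (not from the thesis).
LEXICON = {"中国": 0.02, "人民": 0.015, "中": 0.005, "国": 0.004,
           "人": 0.006, "民": 0.003, "银行": 0.01}

def segment(sentence: str):
    """Return the segmentation maximising the sum of log word probabilities,
    via dynamic programming over character positions."""
    n = len(sentence)
    best = [(-math.inf, None)] * (n + 1)   # (score, back-pointer) per position
    best[0] = (0.0, None)
    for end in range(1, n + 1):
        for start in range(max(0, end - 4), end):      # candidate words up to 4 chars
            word = sentence[start:end]
            if word in LEXICON and best[start][0] > -math.inf:
                score = best[start][0] + math.log(LEXICON[word])
                if score > best[end][0]:
                    best[end] = (score, start)
    # Follow back-pointers to recover the best segmentation.
    words, pos = [], n
    while pos > 0 and best[pos][1] is not None:
        start = best[pos][1]
        words.append(sentence[start:pos])
        pos = start
    return list(reversed(words))

print(segment("中国人民银行"))   # ['中国', '人民', '银行']
```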
46

Semi-automatic acquisition of domain-specific semantic structures.

January 2000 (has links)
Siu, Kai-Chung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (leaves 99-106). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Thesis Outline --- p.5 / Chapter 2 --- Background --- p.6 / Chapter 2.1 --- Natural Language Understanding --- p.6 / Chapter 2.1.1 --- Rule-based Approaches --- p.7 / Chapter 2.1.2 --- Stochastic Approaches --- p.8 / Chapter 2.1.3 --- Phrase-Spotting Approaches --- p.9 / Chapter 2.2 --- Grammar Induction --- p.10 / Chapter 2.2.1 --- Semantic Classification Trees --- p.11 / Chapter 2.2.2 --- Simulated Annealing --- p.12 / Chapter 2.2.3 --- Bayesian Grammar Induction --- p.12 / Chapter 2.2.4 --- Statistical Grammar Induction --- p.13 / Chapter 2.3 --- Machine Translation --- p.14 / Chapter 2.3.1 --- Rule-based Approach --- p.15 / Chapter 2.3.2 --- Statistical Approach --- p.15 / Chapter 2.3.3 --- Example-based Approach --- p.16 / Chapter 2.3.4 --- Knowledge-based Approach --- p.16 / Chapter 2.3.5 --- Evaluation Method --- p.19 / Chapter 3 --- Semi-Automatic Grammar Induction --- p.20 / Chapter 3.1 --- Agglomerative Clustering --- p.20 / Chapter 3.1.1 --- Spatial Clustering --- p.21 / Chapter 3.1.2 --- Temporal Clustering --- p.24 / Chapter 3.1.3 --- Free Parameters --- p.26 / Chapter 3.2 --- Post-processing --- p.27 / Chapter 3.3 --- Chapter Summary --- p.29 / Chapter 4 --- Application to the ATIS Domain --- p.30 / Chapter 4.1 --- The ATIS Domain --- p.30 / Chapter 4.2 --- Parameters Selection --- p.32 / Chapter 4.3 --- Unsupervised Grammar Induction --- p.35 / Chapter 4.4 --- Prior Knowledge Injection --- p.40 / Chapter 4.5 --- Evaluation --- p.43 / Chapter 4.5.1 --- Parse Coverage in Understanding --- p.45 / Chapter 4.5.2 --- Parse Errors --- p.46 / Chapter 4.5.3 --- Analysis --- p.47 / Chapter 4.6 --- Chapter Summary --- p.49 / Chapter 5 --- Portability to Chinese --- p.50 / Chapter 5.1 --- Corpus Preparation --- p.50 / Chapter 5.1.1 --- Tokenization --- p.51 / Chapter 5.2 --- Experiments --- p.52 / Chapter 5.2.1 --- Unsupervised Grammar Induction --- p.52 / Chapter 5.2.2 --- Prior Knowledge Injection --- p.56 / Chapter 5.3 --- Evaluation --- p.58 / Chapter 5.3.1 --- Parse Coverage in Understanding --- p.59 / Chapter 5.3.2 --- Parse Errors --- p.60 / Chapter 5.4 --- Grammar Comparison Across Languages --- p.60 / Chapter 5.5 --- Chapter Summary --- p.64 / Chapter 6 --- Bi-directional Machine Translation --- p.65 / Chapter 6.1 --- Bilingual Dictionary --- p.67 / Chapter 6.2 --- Concept Alignments --- p.68 / Chapter 6.3 --- Translation Procedures --- p.73 / Chapter 6.3.1 --- The Matching Process --- p.74 / Chapter 6.3.2 --- The Searching Process --- p.76 / Chapter 6.3.3 --- Heuristics to Aid Translation --- p.81 / Chapter 6.4 --- Evaluation --- p.82 / Chapter 6.4.1 --- Coverage --- p.83 / Chapter 6.4.2 --- Performance --- p.86 / Chapter 6.5 --- Chapter Summary --- p.89 / Chapter 7 --- Conclusions --- p.90 / Chapter 7.1 --- Summary --- p.90 / Chapter 7.2 --- Future Work --- p.92 / Chapter 7.2.1 --- Suggested Improvements on Grammar Induction Process --- p.92 / Chapter 7.2.2 --- Suggested Improvements on Bi-directional Machine Translation --- p.96 / Chapter 7.2.3 --- Domain Portability --- p.97 / Chapter 7.3 --- Contributions --- p.97 / Bibliography --- p.99 / Chapter A --- Original SQL Queries --- p.107 / Chapter B --- Induced Grammar --- p.109 / Chapter C --- Seeded Categories --- p.111
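
The grammar-induction chapters above revolve around agglomerative clustering, with spatial clustering grouping words that occur in similar contexts and temporal clustering grouping words that frequently occur adjacently. The sketch below shows only a generic agglomerative merge step over crude left/right-context vectors; the toy corpus, the similarity measure and all names are assumptions, not the thesis's actual procedure.

```python
from collections import Counter, defaultdict
import math

# Toy ATIS-like utterances (invented, not from the thesis corpus).
corpus = [
    "show flights from boston to denver".split(),
    "list flights from dallas to boston".split(),
    "show fares from denver to dallas".split(),
]

def context_vectors(sents):
    """Left/right neighbour counts per word -- a crude 'spatial' signature."""
    ctx = defaultdict(Counter)
    for s in sents:
        for i, w in enumerate(s):
            if i > 0:
                ctx[w]["L:" + s[i - 1]] += 1
            if i + 1 < len(s):
                ctx[w]["R:" + s[i + 1]] += 1
    return ctx

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def agglomerate(sents, n_clusters=4):
    """Greedily merge the two most similar clusters until n_clusters remain."""
    clusters = {w: ([w], vec) for w, vec in context_vectors(sents).items()}
    while len(clusters) > n_clusters:
        (a, b), _ = max((((x, y), cosine(clusters[x][1], clusters[y][1]))
                         for x in clusters for y in clusters if x < y),
                        key=lambda pair_score: pair_score[1])
        words = clusters[a][0] + clusters[b][0]
        merged_ctx = clusters[a][1] + clusters[b][1]   # Counter addition sums counts
        del clusters[a], clusters[b]
        clusters[words[0]] = (words, merged_ctx)
    return [members for members, _ in clusters.values()]

print(agglomerate(corpus))
```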
47

Semi-automatic grammar induction for bidirectional machine translation.

January 2002 (has links)
Wong, Chin Chung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (leaves 137-143). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Objectives --- p.3 / Chapter 1.2 --- Thesis Outline --- p.5 / Chapter 2 --- Background in Natural Language Understanding --- p.6 / Chapter 2.1 --- Rule-based Approaches --- p.7 / Chapter 2.2 --- Corpus-based Approaches --- p.8 / Chapter 2.2.1 --- Stochastic Approaches --- p.8 / Chapter 2.2.2 --- Phrase-spotting Approaches --- p.9 / Chapter 2.3 --- The ATIS Domain --- p.10 / Chapter 2.3.1 --- Chinese Corpus Preparation --- p.11 / Chapter 3 --- Semi-automatic Grammar Induction - Baseline Approach --- p.13 / Chapter 3.1 --- Background in Grammar Induction --- p.13 / Chapter 3.1.1 --- Simulated Annealing --- p.14 / Chapter 3.1.2 --- Bayesian Grammar Induction --- p.14 / Chapter 3.1.3 --- Probabilistic Grammar Acquisition --- p.15 / Chapter 3.2 --- Semi-automatic Grammar Induction - Baseline Approach --- p.16 / Chapter 3.2.1 --- Spatial Clustering --- p.16 / Chapter 3.2.2 --- Temporal Clustering --- p.18 / Chapter 3.2.3 --- Post-processing --- p.19 / Chapter 3.2.4 --- Four Aspects for Enhancements --- p.20 / Chapter 3.3 --- Chapter Summary --- p.22 / Chapter 4 --- Semi-automatic Grammar Induction - Enhanced Approach --- p.23 / Chapter 4.1 --- Evaluating Induced Grammars --- p.24 / Chapter 4.2 --- Stopping Criterion --- p.26 / Chapter 4.2.1 --- Cross-checking with Recall Values --- p.29 / Chapter 4.3 --- Improvements on Temporal Clustering --- p.32 / Chapter 4.3.1 --- Evaluation --- p.39 / Chapter 4.4 --- Improvements on Spatial Clustering --- p.46 / Chapter 4.4.1 --- Distance Measures --- p.48 / Chapter 4.4.2 --- Evaluation --- p.57 / Chapter 4.5 --- Enhancements based on Intelligent Selection --- p.62 / Chapter 4.5.1 --- Informed Selection between Spatial Clustering and Temporal Clustering --- p.62 / Chapter 4.5.2 --- Selecting the Number of Clusters Per Iteration --- p.64 / Chapter 4.5.3 --- An Example for Intelligent Selection --- p.64 / Chapter 4.5.4 --- Evaluation --- p.68 / Chapter 4.6 --- Chapter Summary --- p.71 / Chapter 5 --- Bidirectional Machine Translation using Induced Grammars - Baseline Approach --- p.73 / Chapter 5.1 --- Background in Machine Translation --- p.75 / Chapter 5.1.1 --- Rule-based Machine Translation --- p.75 / Chapter 5.1.2 --- Statistical Machine Translation --- p.76 / Chapter 5.1.3 --- Knowledge-based Machine Translation --- p.77 / Chapter 5.1.4 --- Example-based Machine Translation --- p.78 / Chapter 5.1.5 --- Evaluation --- p.79 / Chapter 5.2 --- Baseline Configuration on Bidirectional Machine Translation System --- p.84 / Chapter 5.2.1 --- Bilingual Dictionary --- p.84 / Chapter 5.2.2 --- Concept Alignments --- p.85 / Chapter 5.2.3 --- Translation Process --- p.89 / Chapter 5.2.4 --- Two Aspects for Enhancements --- p.90 / Chapter 5.3 --- Chapter Summary --- p.91 / Chapter 6 --- Bidirectional Machine Translation - Enhanced Approach --- p.92 / Chapter 6.1 --- Concept Alignments --- p.93 / Chapter 6.1.1 --- Enhanced Alignment Scheme --- p.95 / Chapter 6.1.2 --- Experiment --- p.97 / Chapter 6.2 --- Grammar Checker --- p.100 / Chapter 6.2.1 --- Components for Grammar Checking --- p.101 / Chapter 6.3 --- Evaluation --- p.117 / Chapter 6.3.1 --- Bleu Score Performance --- p.118 / Chapter 6.3.2 --- Modified Bleu Score --- p.122 / Chapter 6.4 --- Chapter Summary --- p.130 / Chapter 7 --- Conclusions --- p.131 / Chapter 7.1 --- Summary --- p.131 / Chapter 7.2 --- Contributions --- p.134 / Chapter 7.3 --- Future work --- p.136 / Bibliography --- p.137 / Chapter A --- Original SQL Queries --- p.144 / Chapter B --- Seeded Categories --- p.146 / Chapter C --- 3 Alignment Categories --- p.147 / Chapter D --- Labels of Syntactic Structures in Grammar Checker --- p.148
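
Chapter 6.3 of the outline evaluates translations with the BLEU score and a modified variant. For reference, a plain sentence-level BLEU sketch (standard clipped n-gram precision with a brevity penalty, not the thesis's modified score) might look like this; the add-one smoothing and the example sentences are assumptions added here.

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: clipped n-gram precisions combined geometrically,
    times a brevity penalty; add-one smoothing keeps the logs defined."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngram_counts(candidate, n), ngram_counts(reference, n)
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append((overlap + 1) / (total + 1))
    bp = 1.0 if len(candidate) > len(reference) else math.exp(
        1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

hyp = "list all flights from boston to denver".split()
ref = "show all flights from boston to denver".split()
print(round(bleu(hyp, ref), 3))
```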
48

Pivot-based Statistical Machine Translation for Morphologically Rich Languages

Kholy, Ahmed El January 2016 (has links)
This thesis describes the research efforts on pivot-based statistical machine translation (SMT) for morphologically rich languages (MRL). We provide a framework to translate to and from morphologically rich languages especially in the context of having little or no parallel corpora between the source and the target languages. We basically address three main challenges. The first one is the sparsity of data as a result of morphological richness. The second one is maximizing the precision and recall of the pivoting process itself. And the last one is making use of any parallel data between the source and the target languages. To address the challenge of data sparsity, we explored a space of tokenization schemes and normalization options. We also examined a set of six detokenization techniques to evaluate detokenized and orthographically corrected (enriched) output. We provide a recipe of the best settings to translate to one of the most challenging languages, namely Arabic. Our best model improves the translation quality over the baseline by 1.3 BLEU points. We also investigated the idea of separation between translation and morphology generation. We compared three methods of modeling morphological features. Features can be modeled as part of the core translation. Alternatively these features can be generated using target monolingual context. Finally, the features can be predicted using both source and target information. In our experimental results, we outperform the vanilla factored translation model. In order to decide on which features to translate, generate or predict, a detailed error analysis should be provided on the system output. As a result, we present AMEANA, an open-source tool for error analysis of natural language processing tasks, targeting morphologically rich languages. The second challenge we are concerned with is the pivoting process itself. We discuss several techniques to improve the precision and recall of the pivot matching. One technique to improve the recall works on the level of the word alignment as an optimization process for pivoting driven by generating phrase pairs between source and target languages. Despite the fact that improving the recall of the pivot matching improves the overall translation quality, we also need to increase the precision of the pivot quality. To achieve this, we introduce quality constraints scores to determine the quality of the pivot phrase pairs between source and target languages. We show positive results for different language pairs which shows the consistency of our approaches. In one of our best models we reach an improvement of 1.2 BLEU points. The third challenge we are concerned with is how to make use of any parallel data between the source and the target languages. We build on the approach of improving the precision of the pivoting process and the methods of combination between the pivot system and the direct system built from the parallel data. In one of the approaches, we introduce morphology constraint scores which are added to the log linear space of features in order to determine the quality of the pivot phrase pairs. We compare two methods of generating the morphology constraints. One method is based on hand-crafted rules relying on our knowledge of the source and target languages; while in the other method, the morphology constraints are induced from available parallel data between the source and target languages which we also use to build a direct translation model. 
We then combine both the pivot and direct models to achieve better coverage and overall translation quality. Using induced morphology constraints outperformed the handcrafted rules and improved over our best model from all previous approaches by 0.6 BLEU points (7.2/6.7 BLEU points from the direct and pivot baselines respectively). Finally, we introduce applying smart techniques to combine pivot and direct models. We show that smart selective combination can lead to a large reduction of the pivot model without affecting the performance and in some cases improving it.
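
The pivoting step this abstract centres on is often realised as phrase-table triangulation: source-pivot and pivot-target phrase pairs that share a pivot phrase are joined, and their probabilities multiplied and summed over the pivot. The sketch below shows that core step only, under invented toy phrase tables; it omits the quality and morphology constraint scores the thesis adds on top.

```python
from collections import defaultdict

# Toy phrase tables with invented probabilities:
#   src2piv[s][p] = p(pivot phrase p | source phrase s)
#   piv2tgt[p][t] = p(target phrase t | pivot phrase p)
src2piv = {"kitab": {"book": 0.7, "the book": 0.3}}
piv2tgt = {"book": {"libro": 0.8, "el libro": 0.2},
           "the book": {"el libro": 0.9, "libro": 0.1}}

def triangulate(src2piv, piv2tgt):
    """Induce a source-target phrase table through the pivot language:
    p(t|s) = sum over pivot phrases p of p(t|p) * p(p|s)."""
    src2tgt = defaultdict(lambda: defaultdict(float))
    for s, pivots in src2piv.items():
        for p, p_ps in pivots.items():
            for t, p_tp in piv2tgt.get(p, {}).items():
                src2tgt[s][t] += p_tp * p_ps
    return {s: dict(ts) for s, ts in src2tgt.items()}

print(triangulate(src2piv, piv2tgt))
# {'kitab': {'libro': 0.59, 'el libro': 0.41}}
```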
49

Machine Translation of Arabic Dialects

Salloum, Wael Sameer January 2018 (has links)
This thesis discusses different approaches to machine translation (MT) from Dialectal Arabic (DA) to English. These approaches handle the varying situations of Arabic dialects in terms of the types of available resources and the amount of training data. The overall theme of this work revolves around building dialectal resources and MT systems, or enriching existing ones, using the currently available resources (dialectal or standard) in order to quickly and cheaply scale to more dialects without spending years and millions of dollars to create such resources for every dialect. Unlike Modern Standard Arabic (MSA), DA-English parallel corpora are scarce and available for only a few dialects. Dialects differ from each other and from MSA in orthography, morphology, phonology, and, to a lesser degree, syntax. This means that combining all available parallel data, from dialects and MSA, to train DA-to-English statistical machine translation (SMT) systems might not provide the desired results. Similarly, translating dialectal sentences with an SMT system trained on that dialect only is also challenging, due to factors that shift the sentence's word choices away from those of the SMT training data. Such factors include the level of dialectness (e.g., code switching to MSA versus dialectal training data), topic (sports versus politics), genre (tweets versus newspaper), script (Arabizi versus Arabic), and the timespan of the test data relative to the training data. The work we present utilizes any available Arabic resource, such as a preprocessing tool or a parallel corpus, whether MSA or DA, to improve DA-to-English translation and expand to more dialects and sub-dialects. The majority of Arabic dialects have no parallel data to English or to any other foreign language. They also have no preprocessing tools such as normalizers, morphological analyzers, or tokenizers. For such dialects, we present an MSA-pivoting approach where DA sentences are translated to MSA first, then the MSA output is translated to English using the wealth of MSA-English parallel data. Since there is virtually no DA-MSA parallel data to train an SMT system, we build a rule-based DA-to-MSA MT system, ELISSA, that uses morpho-syntactic translation rules along with dialect identification and language modeling components. We also present a rule-based approach to quickly and cheaply build a dialectal morphological analyzer, ADAM, which provides ELISSA with dialectal word analyses. Other Arabic dialects have relatively small DA-English parallel corpora, amounting to a few million words on the DA side. Some of these dialects have dialect-dependent preprocessing tools that can be used to prepare the DA data for SMT systems. We present techniques to generate synthetic parallel data from the available DA-English and MSA-English data. We use this synthetic data to build statistical and hybrid versions of ELISSA, as well as to improve our rule-based ELISSA-based MSA-pivoting approach. We evaluate our best MSA-pivoting MT pipeline against three direct SMT baselines trained on three parallel corpora: DA-English data only, MSA-English data only, and the combination of DA-English and MSA-English data. Furthermore, we leverage these four MT systems (the three baselines along with our MSA-pivoting system) in two system combination approaches that benefit from their strengths while avoiding their weaknesses.
Finally, we propose an approach to model dialects from monolingual data and limited DA-English parallel data, without the need for any language-dependent preprocessing tools. We learn DA preprocessing rules using word embeddings and expectation maximization. We test this approach by building a morphological segmentation system, and we evaluate its performance on MT against the state-of-the-art dialectal tokenization tool.
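
The MSA-pivoting idea described above, normalising dialectal input towards MSA before handing it to an MSA-English system, can be pictured as a small preprocessing pass in front of an existing translator. The sketch below is a deliberately tiny stand-in, not ELISSA itself: the lexical mappings (in rough Buckwalter-style transliteration, not linguistically vetted), the example input and the stub translator are all invented.

```python
# Invented dialect-to-MSA lexical mappings (toy, not linguistically accurate).
DA_TO_MSA = {
    "mA": "lA",        # negation particle
    "bdy": "AryD",     # "I want"
    "hyk": "hkdA",     # "like this"
}

def da_to_msa(tokens):
    """Map dialectal tokens to MSA where a rule exists; pass the rest through."""
    return [DA_TO_MSA.get(tok, tok) for tok in tokens]

def msa_pivot_translate(da_sentence, msa_to_en_system):
    """Pivot through MSA: normalise the dialectal input, then call an existing
    MSA-to-English translator (here just a stub function passed in)."""
    msa_tokens = da_to_msa(da_sentence.split())
    return msa_to_en_system(" ".join(msa_tokens))

# Stub MSA-to-English system standing in for a full SMT pipeline.
fake_mt = lambda msa: f"<EN translation of: {msa}>"
print(msa_pivot_translate("bdy hyk ktAb", fake_mt))
```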
50

Cross-modality semantic integration and robust interpretation of multimodal user interactions. / CUHK electronic theses & dissertations collection

January 2010 (has links)
Multimodal systems can represent and manipulate semantics from different human communication modalities at different levels of abstraction, in which multimodal integration is required to integrate the semantics from two or more modalities and generate an interpretable output for further processing. In this work, we develop a framework pertaining to automatic cross-modality semantic integration of multimodal user interactions using speech and pen gestures. It begins by generating partial interpretations for each input event as a ranked list of hypothesized semantics. We devise a cross-modality semantic integration procedure to align the pair of hypothesis lists between every speech input event and every pen input event in a multimodal expression. This is achieved by the Viterbi alignment that enforces the temporal ordering and semantic compatibility constraints of aligned events. The alignment enables generation of a unimodal paraphrase that is semantically equivalent to the original multimodal expression. Our experiments are based on a multimodal corpus in the navigation domain. Application of the integration procedure to manual transcripts shows that correct unimodal paraphrases are generated for around 96% of the multimodal inquiries in the test set. However, if we replace this with automatic speech and pen recognition transcripts, the performance drops to around 53% of the test set. In order to address this issue, we devised the hypothesis rescoring procedure that evaluates all candidates of cross-modality integration derived from multiple recognition hypotheses from each modality. The rescoring function incorporates the integration score, N-best purity of recognized spoken locative references (SLRs), as well as distances between coordinates of recognized pen gestures and their interpreted icons on the map. Application of cross-modality hypothesis rescoring improved the performance to generate correct unimodal paraphrases for over 72% of the multimodal inquiries of the test set. / We have also performed a latent semantic modeling (LSM) for interpreting multimodal user input consisting of speech and pen gestures. Each modality of a multimodal input carries semantics related to a domain-specific task goal (TG). Each input is annotated manually with a TG based on the semantics. Multimodal input usually has a simpler syntactic structure and different order of semantic constituents from unimodal input. Therefore, we proposed to use LSM to derive the latent semantics from the multimodal inputs. In order to achieve this, we characterized the cross-modal integration pattern as 3-tuple multimodal terms taking into account SLR, pen gesture type and their temporal relation. The correlation term matrix is then decomposed using singular value decomposition (SVD) to derive the latent semantics automatically. TG inference on disjoint test set based on the latent semantics achieves accurate performance for 99% of the multimodal inquiries. / Hui, Pui Yu. / Adviser: Helen Meng. / Source: Dissertation Abstracts International, Volume: 73-02, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 294-306). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
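
The latent semantic modelling step described above builds a matrix of 3-tuple multimodal terms against training inquiries, decomposes it with singular value decomposition, and infers the task goal in the reduced space. A generic sketch of that pipeline is given below; the terms, counts, goal labels and the nearest-centroid inference rule are invented stand-ins for the thesis's actual setup.

```python
import numpy as np

# Rows: invented 3-tuple multimodal terms; columns: training inquiries.
terms = ["SLR=this+POINT+overlap", "SLR=here+CIRCLE+follow", "SLR=route+STROKE+overlap"]
X = np.array([[2, 0, 1, 0],
              [0, 3, 0, 1],
              [1, 0, 2, 0]], dtype=float)
goals = ["FIND_PLACE", "FIND_ROUTE", "FIND_PLACE", "FIND_ROUTE"]  # one task goal per column

# Truncated SVD: keep k latent dimensions.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T          # each training inquiry in latent space

# Centroid per task goal in the latent space.
centroids = {g: doc_vecs[[i for i, gi in enumerate(goals) if gi == g]].mean(axis=0)
             for g in set(goals)}

def infer_goal(term_counts):
    """Fold a new inquiry (term-count vector) into the latent space and
    return the task goal with the nearest centroid (cosine similarity)."""
    q = np.asarray(term_counts, dtype=float) @ U[:, :k]      # fold-in: q^T U_k
    best, best_sim = None, -np.inf
    for g, c in centroids.items():
        sim = q @ c / (np.linalg.norm(q) * np.linalg.norm(c) + 1e-12)
        if sim > best_sim:
            best, best_sim = g, sim
    return best

print(infer_goal([1, 0, 2]))   # expected to land near FIND_PLACE
```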
