41

Realization of automatic concept extraction for Chinese conceptual information retrieval = 中文槪念訊息檢索中自動槪念抽取的實踐 (Zhong wen gai nian xun xi jian suo zhong zi dong gai nian chou qu de shi jian).

January 1998 (has links)
Wai Ip Lam. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 84-87). / Text in English; abstract also in Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Background --- p.5 / Chapter 2.1 --- Information Retrieval --- p.5 / Chapter 2.1.1 --- Index Extraction --- p.6 / Chapter 2.1.2 --- Other Approaches to Extracting Indexes --- p.7 / Chapter 2.1.3 --- Conceptual Information Retrieval --- p.8 / Chapter 2.1.4 --- Information Extraction --- p.9 / Chapter 2.2 --- Natural Language Parsing --- p.9 / Chapter 2.2.1 --- Linguistics-based --- p.10 / Chapter 2.2.2 --- Corpus-based --- p.11 / Chapter 3 --- Concept Extraction --- p.13 / Chapter 3.1 --- Concepts in Sentences --- p.13 / Chapter 3.1.1 --- Semantic Structures and Thematic Roles --- p.13 / Chapter 3.1.2 --- Syntactic Functions --- p.14 / Chapter 3.2 --- Representing Concepts --- p.15 / Chapter 3.3 --- Application to Conceptual Information Retrieval --- p.18 / Chapter 3.4 --- Overview of Our Concept Extraction Model --- p.20 / Chapter 3.4.1 --- Corpus Training --- p.21 / Chapter 3.4.2 --- Sentence Analyzing --- p.22 / Chapter 4 --- Noun Phrase Detection --- p.23 / Chapter 4.1 --- Significance of Noun Phrase Detection --- p.23 / Chapter 4.1.1 --- Noun Phrases versus Terminals in Parse Trees --- p.23 / Chapter 4.1.2 --- Quantitative Analysis of Applying Noun Phrase Detection --- p.26 / Chapter 4.2 --- An Algorithm for Chinese Noun Phrase Partial Parsing --- p.28 / Chapter 4.2.1 --- The Hybrid Approach --- p.28 / Chapter 4.2.2 --- CNP3 - The Chinese NP Partial Parser --- p.30 / Chapter 5 --- Rule Extraction and SVO Parsing --- p.35 / Chapter 5.1 --- Annotation of Corpora --- p.36 / Chapter 5.1.1 --- Components of Chinese Sentence Patterns --- p.36 / Chapter 5.1.2 --- Annotating Sentence Structures --- p.37 / Chapter 5.1.3 --- Illustrative Examples --- p.38 / Chapter 5.2 --- Parsing with Rules Obtained Directly from Corpora --- p.43 / Chapter 5.2.1 --- Extracting Rules --- p.43 / Chapter 5.2.2 --- Parsing --- p.44 / Chapter 5.3 --- Using Word Specific Information --- p.45 / Chapter 6 --- Generalization of Rules --- p.48 / Chapter 6.1 --- Essence of Chinese Linguistics on Generalization --- p.49 / Chapter 6.1.1 --- Classification of Chinese Sentence Patterns --- p.50 / Chapter 6.1.2 --- Revision of Chinese Verb Phrase Classification --- p.52 / Chapter 6.2 --- Initial Generalization --- p.53 / Chapter 6.2.1 --- Generalizing Rules --- p.55 / Chapter 6.2.2 --- Dealing with Alternative Results --- p.58 / Chapter 6.2.3 --- Parsing --- p.58 / Chapter 6.2.4 --- An Illustrative Example --- p.59 / Chapter 6.3 --- Further Generalization --- p.60 / Chapter 7 --- Experiments on SVO Parsing --- p.62 / Chapter 7.1 --- Experimental Setup --- p.63 / Chapter 7.2 --- Effect of Adopting Noun Phrase Detection --- p.65 / Chapter 7.3 --- Results of Generalization --- p.68 / Chapter 7.4 --- Reliability Evaluation --- p.69 / Chapter 7.4.1 --- Convergence Sequence Tests --- p.69 / Chapter 7.4.2 --- Cross Evaluation Tests --- p.72 / Chapter 7.5 --- Overall Performance --- p.75 / Chapter 8 --- Conclusions --- p.79 / Chapter 8.1 --- Summary --- p.79 / Chapter 8.2 --- Contribution --- p.81 / Chapter 8.3 --- Future Directions --- p.81 / Chapter 8.3.1 --- Improvements in Parsing --- p.81 / Chapter 8.3.2 --- Concept Representations --- p.82 / Chapter 8.3.3 --- Non-IR Applications --- p.83 / Bibliography --- p.84 / Appendix --- p.88 / Chapter A --- The Extended Part of Speech Tag Set --- p.88
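The chapter listing above centres on extracting subject-verb-object (SVO) structures from Chinese sentences with rules learned from an annotated corpus. The following is an illustrative sketch only, not the thesis's rule-learning parser: it pulls a naive SVO triple from a word-segmented, POS-tagged sentence, where the tag set (n/v prefixes) and the example sentence are assumptions made for demonstration.

```python
# Illustrative sketch only: naive rule-based SVO extraction from a
# POS-tagged, word-segmented Chinese sentence. The tag set and example
# are assumptions, not the thesis's own annotation scheme or parser.

def extract_svo(tagged):
    """Return (subject, verb, object) from a list of (word, tag) pairs
    using a first-noun / first-verb / first-post-verbal-noun rule."""
    subject = verb = obj = None
    for word, tag in tagged:
        if tag.startswith("v") and verb is None and subject is not None:
            verb = word
        elif tag.startswith("n"):
            if subject is None and verb is None:
                subject = word
            elif verb is not None and obj is None:
                obj = word
    return subject, verb, obj

if __name__ == "__main__":
    # Hypothetical segmentation and tags for "the student reads an English book"
    sentence = [("學生", "n"), ("閱讀", "v"), ("英文", "n"), ("書", "n")]
    print(extract_svo(sentence))  # ('學生', '閱讀', '英文') under this naive rule
```

The deliberately crude rule also shows why the thesis pairs SVO parsing with noun phrase detection: without grouping "英文 書" into one NP, the first post-verbal noun is picked as the object.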
42

Chinese readability analysis and its applications on the internet.

January 2007 (has links)
Lau Tak Pang. / Thesis submitted in: October 2006. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (leaves 110-122). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.v / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Motivation and Major Contributions --- p.1 / Chapter 1.1.1 --- Chinese Readability Analysis --- p.1 / Chapter 1.1.2 --- Web Readability Analysis --- p.3 / Chapter 1.2 --- Thesis Chapter Organization --- p.6 / Chapter 2 --- Related Work --- p.7 / Chapter 2.1 --- Readability Assessment --- p.7 / Chapter 2.1.1 --- Assessment for Text Document --- p.8 / Chapter 2.1.2 --- Assessment for Web Page --- p.13 / Chapter 2.2 --- Support Vector Machine --- p.14 / Chapter 2.2.1 --- Characteristics and Advantages --- p.14 / Chapter 2.2.2 --- Applications --- p.16 / Chapter 2.3 --- Chinese Word Segmentation --- p.16 / Chapter 2.3.1 --- Difficulty in Chinese Word Segmentation --- p.16 / Chapter 2.3.2 --- Approaches for Chinese Word Segmentation --- p.17 / Chapter 3 --- Chinese Readability Analysis --- p.20 / Chapter 3.1 --- Chinese Readability Factor Analysis --- p.20 / Chapter 3.1.1 --- Systematic Analysis --- p.20 / Chapter 3.1.2 --- Feature Extraction --- p.30 / Chapter 3.1.3 --- Limitation of Our Analysis and Possible Extension --- p.32 / Chapter 3.2 --- Research Methodology --- p.33 / Chapter 3.2.1 --- Definition of Readability --- p.33 / Chapter 3.2.2 --- Data Acquisition and Sampling --- p.34 / Chapter 3.2.3 --- Text Processing and Feature Extraction --- p.35 / Chapter 3.2.4 --- Regression Analysis using Support Vector Regression --- p.36 / Chapter 3.2.5 --- Evaluation --- p.36 / Chapter 3.3 --- Introduction to Support Vector Regression --- p.38 / Chapter 3.3.1 --- Basic Concept --- p.38 / Chapter 3.3.2 --- Non-Linear Extension using Kernel Technique --- p.41 / Chapter 3.4 --- Implementation Details --- p.42 / Chapter 3.4.1 --- Chinese Word Segmentation --- p.42 / Chapter 3.4.2 --- Building Basic Chinese Character / Word Lists --- p.47 / Chapter 3.4.3 --- Pull Sentence Detection --- p.49 / Chapter 3.4.4 --- Feature Selection Using Genetic Algorithm --- p.50 / Chapter 3.5 --- Experiments --- p.55 / Chapter 3.5.1 --- Experiment 1: Evaluation on Chinese Word Segmentation using the LMR-RC Tagging Scheme --- p.56 / Chapter 3.5.2 --- Experiment 2: Initial SVR Parameters Searching with Different Kernel Functions --- p.61 / Chapter 3.5.3 --- Experiment 3: Feature Selection Using Genetic Algorithm --- p.63 / Chapter 3.5.4 --- Experiment 4: Training and Cross-validation Performance using the Selected Feature Subset --- p.67 / Chapter 3.5.5 --- Experiment 5: Comparison with Linear Regression --- p.74 / Chapter 3.6 --- Summary and Future Work --- p.76 / Chapter 4 --- Web Readability Analysis --- p.78 / Chapter 4.1 --- Web Page Readability --- p.79 / Chapter 4.1.1 --- Readability as Comprehension Difficulty --- p.79 / Chapter 4.1.2 --- Readability as Grade Level --- p.81 / Chapter 4.2 --- Web Site Readability --- p.83 / Chapter 4.3 --- Experiments --- p.85 / Chapter 4.3.1 --- Experiment 1: Web Page Readability Analysis - Comprehension Difficulty --- p.87 / Chapter 4.3.2 --- Experiment 2: Web Page Readability Analysis - Grade Level --- p.92 / Chapter 4.3.3 --- Experiment 3: Web Site Readability Analysis --- p.98 / Chapter 4.4 --- Summary and Future Work --- p.101 / Chapter 5 --- Conclusion --- p.104 / Chapter A --- List of Symbols and Notations --- p.107 / Chapter B --- List of Publications --- p.110 / Bibliography --- p.113
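The listing above describes regressing a readability score from shallow text features with support vector regression (SVR). The snippet below is an illustrative sketch only: scikit-learn's SVR stands in for the thesis's own SVR setup, and the feature choices, feature values, and grade labels are assumptions invented for demonstration, not the thesis's feature set or corpus.

```python
# Illustrative sketch only: grade-level regression from shallow text features
# with support vector regression. Features, data, and parameters are assumed.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical features per document:
# [avg sentence length, avg strokes per character, fraction of basic characters]
X_train = np.array([
    [8.0, 7.1, 0.95],
    [12.5, 8.4, 0.88],
    [18.0, 9.6, 0.80],
    [25.0, 10.9, 0.70],
])
y_train = np.array([1.0, 3.0, 6.0, 9.0])  # assumed grade levels

# Scale features, then fit an RBF-kernel SVR (parameters chosen arbitrarily).
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.2))
model.fit(X_train, y_train)

print(model.predict([[15.0, 9.0, 0.84]]))  # estimated grade level for a new text
```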
43

Associative information network and applications to an intelligent search engine. / CUHK electronic theses & dissertations collection

January 1998 (has links)
Qin An. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (p. 135-142). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
44

An empirical study on Chinese text compression: from character-based to word-based approach.

January 1997 (has links)
by Kwok-Shing Cheng. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (leaves 114-120). / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Importance of Text Compression --- p.1 / Chapter 1.2 --- Motivation of this Research --- p.2 / Chapter 1.3 --- Characteristics of Chinese --- p.2 / Chapter 1.3.1 --- Huge size of character set --- p.3 / Chapter 1.3.2 --- Lack of word segmentation --- p.3 / Chapter 1.3.3 --- Rich semantics --- p.3 / Chapter 1.4 --- Different Coding Schemes for Chinese --- p.4 / Chapter 1.4.1 --- Big5 Code --- p.4 / Chapter 1.4.2 --- GB (Guo Biao) Code --- p.4 / Chapter 1.4.3 --- HZ (Hanzi) Code --- p.5 / Chapter 1.4.4 --- Unicode Code --- p.5 / Chapter 1.5 --- Modeling and Coding for Chinese Text --- p.6 / Chapter 1.6 --- Static and Adaptive Modeling --- p.6 / Chapter 1.7 --- One-Pass and Two-Pass Modeling --- p.8 / Chapter 1.8 --- Ordering of models --- p.9 / Chapter 1.9 --- Two Sets of Benchmark Files and the Platform --- p.9 / Chapter 1.10 --- Outline of the Thesis --- p.11 / Chapter 2 --- A Survey of Chinese Text Compression --- p.13 / Chapter 2.1 --- Entropy for Chinese Text --- p.14 / Chapter 2.2 --- Weakness of Traditional Compression Algorithms on Chinese Text --- p.15 / Chapter 2.3 --- Statistical Class Algorithms for Compressing Chinese --- p.16 / Chapter 2.3.1 --- Huffman coding scheme --- p.17 / Chapter 2.3.2 --- Arithmetic Coding Scheme --- p.22 / Chapter 2.3.3 --- Restricted Variable Length Coding Scheme --- p.26 / Chapter 2.4 --- Dictionary-based Class Algorithms for Compressing Chinese --- p.27 / Chapter 2.5 --- Experiments and Results --- p.32 / Chapter 2.6 --- Chapter Summary --- p.35 / Chapter 3 --- Indicator Dependent Huffman Coding Scheme --- p.37 / Chapter 3.1 --- Chinese Character Identification Routine --- p.37 / Chapter 3.2 --- Reduction of Header Size --- p.39 / Chapter 3.3 --- Semi-adaptive IDC for Chinese Text --- p.44 / Chapter 3.3.1 --- Theoretical Analysis of Partition Technique for Compression --- p.48 / Chapter 3.3.2 --- Experiments and Results of the Semi-adaptive IDC --- p.50 / Chapter 3.4 --- Adaptive IDC for Chinese Text --- p.54 / Chapter 3.4.1 --- Experiments and Results of the Adaptive IDC --- p.57 / Chapter 3.5 --- Chapter Summary --- p.58 / Chapter 4 --- Cascading LZ Algorithms with Huffman Coding Schemes --- p.59 / Chapter 4.1 --- Variations of Huffman Coding Scheme --- p.60 / Chapter 4.1.1 --- Analysis of EPDC and PDC --- p.60 / Chapter 4.1.2 --- Analysis of PDC, 16Huff and IDC --- p.65 / Chapter 4.1.3 --- Time and Memory Consumption --- p.71 / Chapter 4.2 --- Cascading LZSS with PDC, 16Huff and IDC --- p.73 / Chapter 4.2.1 --- Experimental Results --- p.76 / Chapter 4.3 --- Cascading LZW with PDC, 16Huff and IDC --- p.79 / Chapter 4.3.1 --- Experimental Results --- p.82 / Chapter 4.4 --- Chapter Summary --- p.84 / Chapter 5 --- Applying Compression Algorithms to Word-segmented Chinese Text --- p.85 / Chapter 5.1 --- Background of word-based compression algorithms --- p.86 / Chapter 5.2 --- Terminology and Benchmark Files for Word Segmentation Model --- p.88 / Chapter 5.3 --- Word Segmentation Model --- p.88 / Chapter 5.4 --- Chinese Entropy from Byte to Word --- p.91 / Chapter 5.5 --- The Generalized Compression and Decompression Model for Word-segmented Chinese text --- p.92 / Chapter 5.6 --- Applying Huffman Coding Scheme to Word-segmented Chinese text --- p.94 / Chapter 5.7 --- Applying WLZSSHUF to Word-segmented Chinese text --- p.97 / Chapter 5.8 --- Applying WLZWHUF to Word-segmented Chinese text --- p.102 / Chapter 5.9 --- Match Ratio and Compression Ratio --- p.105 / Chapter 5.10 --- Chapter Summary --- p.108 / Chapter 6 --- Concluding Remarks --- p.110 / Chapter 6.1 --- Conclusions --- p.110 / Chapter 6.2 --- Contributions --- p.111 / Chapter 6.3 --- Future Directions --- p.112 / Chapter 6.3.1 --- Integrate Decremental Coding Scheme with IDC --- p.112 / Chapter 6.3.2 --- Re-order the Character Sequences in the Sliding Window of LZSS --- p.113 / Chapter 6.3.3 --- Multiple Huffman Trees for Word-based Compression --- p.113 / Bibliography --- p.114
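The survey chapters above compare statistical coders (Huffman, arithmetic) and dictionary coders (LZSS, LZW) on character-based versus word-based Chinese text. As an illustrative sketch only, the following builds a plain character-based Huffman code table; a word-based variant would first run word segmentation and count words instead of characters. This is the generic textbook construction, not the thesis's IDC, PDC, or 16Huff schemes, and the sample string is an assumption.

```python
# Illustrative sketch only: minimal character-based Huffman code construction.
# Generic textbook algorithm; sample text is assumed for demonstration.
import heapq
from collections import Counter
from itertools import count

def huffman_code(text):
    """Return a {symbol: bitstring} table built from symbol frequencies."""
    freq = Counter(text)
    tie = count()  # unique tie-breaker so heapq never compares the dict payloads
    heap = [(f, next(tie), {sym: ""}) for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}   # prepend branch bits
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

if __name__ == "__main__":
    sample = "中文文字壓縮中文文字"          # assumed sample text
    table = huffman_code(sample)
    encoded = "".join(table[ch] for ch in sample)
    print(table, len(encoded), "bits")
```

Cascading, as in Chapter 4 of the record, would feed the output of an LZ stage into a Huffman-style coder rather than coding the raw text directly.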
45

Automatic topic detection from news stories.

January 2001 (has links)
Hui Kin. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 115-120). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Topic Detection Problem --- p.2 / Chapter 1.1.1 --- What is a Topic? --- p.2 / Chapter 1.1.2 --- Topic Detection --- p.3 / Chapter 1.2 --- Our Contributions --- p.5 / Chapter 1.2.1 --- Thesis Organization --- p.6 / Chapter 2 --- Literature Review --- p.7 / Chapter 2.1 --- Dragon Systems --- p.7 / Chapter 2.2 --- University of Massachusetts (UMass) --- p.9 / Chapter 2.3 --- Carnegie Mellon University (CMU) --- p.10 / Chapter 2.4 --- BBN Technologies --- p.11 / Chapter 2.5 --- IBM T. J. Watson Research Center --- p.12 / Chapter 2.6 --- National Taiwan University (NTU) --- p.13 / Chapter 2.7 --- Drawbacks of Existing Approaches --- p.14 / Chapter 3 --- System Overview --- p.16 / Chapter 3.1 --- News Sources --- p.17 / Chapter 3.2 --- Story Preprocessing --- p.21 / Chapter 3.3 --- Named Entity Extraction --- p.22 / Chapter 3.4 --- Gross Translation --- p.22 / Chapter 3.5 --- Unsupervised Learning Module --- p.24 / Chapter 4 --- Term Extraction and Story Representation --- p.27 / Chapter 4.1 --- IBM Intelligent Miner For Text --- p.28 / Chapter 4.2 --- Transformation-based Error-driven Learning --- p.31 / Chapter 4.2.1 --- Learning Stage --- p.32 / Chapter 4.2.2 --- Design of New Tags --- p.33 / Chapter 4.2.3 --- Lexical Rules Learning --- p.35 / Chapter 4.2.4 --- Contextual Rules Learning --- p.39 / Chapter 4.3 --- Extracting Named Entities Using Learned Rules --- p.42 / Chapter 4.4 --- Story Representation --- p.46 / Chapter 4.4.1 --- Basic Representation --- p.46 / Chapter 4.4.2 --- Enhanced Representation --- p.47 / Chapter 5 --- Gross Translation --- p.52 / Chapter 5.1 --- Basic Translation --- p.52 / Chapter 5.2 --- Enhanced Translation --- p.60 / Chapter 5.2.1 --- Parallel Corpus Alignment Approach --- p.60 / Chapter 5.2.2 --- Enhanced Translation Approach --- p.62 / Chapter 6 --- Unsupervised Learning Module --- p.68 / Chapter 6.1 --- Overview of the Discovery Algorithm --- p.68 / Chapter 6.2 --- Topic Representation --- p.70 / Chapter 6.3 --- Similarity Calculation --- p.72 / Chapter 6.3.1 --- Similarity Score Calculation --- p.72 / Chapter 6.3.2 --- Time Adjustment Scheme --- p.74 / Chapter 6.3.3 --- Language Normalization Scheme --- p.75 / Chapter 6.4 --- Related Elements Combination --- p.78 / Chapter 7 --- Experimental Results and Analysis --- p.84 / Chapter 7.1 --- TDT corpora --- p.84 / Chapter 7.2 --- Evaluation Methodology --- p.85 / Chapter 7.3 --- Experimental Results on Various Parameter Settings --- p.88 / Chapter 7.4 --- Experiments Results on Various Named Entity Extraction Approaches --- p.89 / Chapter 7.5 --- Experiments Results on Various Story Representation Approaches --- p.100 / Chapter 7.6 --- Experiments Results on Various Translation Approaches --- p.104 / Chapter 7.7 --- Experiments Results on the Effect of Language Normalization Scheme on Detection Approaches --- p.106 / Chapter 7.8 --- TDT2000 Topic Detection Result --- p.110 / Chapter 8 --- Conclusions and Future Works --- p.112 / Chapter 8.1 --- Conclusions --- p.112 / Chapter 8.2 --- Future Work --- p.114 / Bibliography --- p.115 / Chapter A --- List of Topics annotated for TDT2 Corpus --- p.121 / Chapter B --- Significant Test Results --- p.124
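The listing above groups news stories into topics by scoring each story against existing topic representations, with a time adjustment and language normalization applied to the raw similarity. The snippet below is an illustrative sketch only: cosine similarity over sparse term-weight vectors, damped exponentially by the time gap. The vector weights, decay form, and half-life constant are assumptions, not the thesis's actual formulation.

```python
# Illustrative sketch only: time-adjusted story-topic similarity.
# Term weights, decay form, and half_life are assumed values.
import math

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dictionaries."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def time_adjusted_similarity(story_vec, topic_vec, days_apart, half_life=7.0):
    """Damp the raw similarity as the story drifts away from the topic in time."""
    decay = 0.5 ** (days_apart / half_life)
    return cosine(story_vec, topic_vec) * decay

story = {"earthquake": 0.8, "taiwan": 0.6, "rescue": 0.4}      # assumed tf-idf weights
topic = {"earthquake": 0.7, "taiwan": 0.5, "aftershock": 0.3}
print(time_adjusted_similarity(story, topic, days_apart=3))
```

In a detection loop, a story whose best adjusted score falls below a threshold would seed a new topic instead of joining an existing one.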
46

A robust unification-based parser for Chinese natural language processing.

January 2001 (has links)
Chan Shuen-ti Roy. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 168-175). / Abstracts in English and Chinese. / Chapter 1. --- Introduction --- p.12 / Chapter 1.1. --- The nature of natural language processing --- p.12 / Chapter 1.2. --- Applications of natural language processing --- p.14 / Chapter 1.3. --- Purpose of study --- p.17 / Chapter 1.4. --- Organization of this thesis --- p.18 / Chapter 2. --- Organization and methods in natural language processing --- p.20 / Chapter 2.1. --- Organization of natural language processing system --- p.20 / Chapter 2.2. --- Methods employed --- p.22 / Chapter 2.3. --- Unification-based grammar processing --- p.22 / Chapter 2.3.1. --- Generalized Phrase Structure Grammar (GPSG) --- p.27 / Chapter 2.3.2. --- Head-driven Phrase Structure Grammar (HPSG) --- p.31 / Chapter 2.3.3. --- Common drawbacks of UBGs --- p.33 / Chapter 2.4. --- Corpus-based processing --- p.34 / Chapter 2.4.1. --- Drawback of corpus-based processing --- p.35 / Chapter 3. --- Difficulties in Chinese language processing and its related works --- p.37 / Chapter 3.1. --- A glance at the history --- p.37 / Chapter 3.2. --- Difficulties in syntactic analysis of Chinese --- p.37 / Chapter 3.2.1. --- Writing system of Chinese causes segmentation problem --- p.38 / Chapter 3.2.2. --- Words serving multiple grammatical functions without inflection --- p.40 / Chapter 3.2.3. --- Word order of Chinese --- p.42 / Chapter 3.2.4. --- The Chinese grammatical word --- p.43 / Chapter 3.3. --- Related works --- p.45 / Chapter 3.3.1. --- Unification grammar processing approach --- p.45 / Chapter 3.3.2. --- Corpus-based processing approach --- p.48 / Chapter 3.4. --- Restatement of goal --- p.50 / Chapter 4. --- SERUP: Statistical-Enhanced Robust Unification Parser --- p.54 / Chapter 5. --- Step One: automatic preprocessing --- p.57 / Chapter 5.1. --- Segmentation of lexical tokens --- p.57 / Chapter 5.2. --- Conversion of date, time and numerals --- p.61 / Chapter 5.3. --- Identification of new words --- p.62 / Chapter 5.3.1. --- Proper nouns - Chinese names --- p.63 / Chapter 5.3.2. --- Other proper nouns and multi-syllabic words --- p.67 / Chapter 5.4. --- Defining smallest parsing unit --- p.82 / Chapter 5.4.1. --- The Chinese sentence --- p.82 / Chapter 5.4.2. --- Breaking down the paragraphs --- p.84 / Chapter 5.4.3. --- Implementation --- p.87 / Chapter 6. --- Step Two: grammar construction --- p.91 / Chapter 6.1. --- Criteria in choosing a UBG model --- p.91 / Chapter 6.2. --- The grammar in details --- p.92 / Chapter 6.2.1. --- The PHON feature --- p.93 / Chapter 6.2.2. --- The SYN feature --- p.94 / Chapter 6.2.3. --- The SEM feature --- p.98 / Chapter 6.2.4. --- Grammar rules and features principles --- p.99 / Chapter 6.2.5. --- Verb phrases --- p.101 / Chapter 6.2.6. --- Noun phrases --- p.104 / Chapter 6.2.7. --- Prepositional phrases --- p.113 / Chapter 6.2.8. --- "Ba2" and "Bei4" constructions --- p.115 / Chapter 6.2.9. --- The terminal node S --- p.119 / Chapter 6.2.10. --- Summary of phrasal rules --- p.121 / Chapter 6.2.11. --- Morphological rules --- p.122 / Chapter 7. --- Step Three: resolving structural ambiguities --- p.128 / Chapter 7.1. --- Sources of ambiguities --- p.128 / Chapter 7.2. --- The traditional practices: an illustration --- p.132 / Chapter 7.3. --- Deficiency of current practices --- p.134 / Chapter 7.4. --- A new point of view: Wu (1999) --- p.140 / Chapter 7.5. --- Improvement over Wu (1999) --- p.142 / Chapter 7.6. --- Conclusion on semantic features --- p.146 / Chapter 8. --- Implementation, performance and evaluation --- p.148 / Chapter 8.1. --- Implementation --- p.148 / Chapter 8.2. --- Performance and evaluation --- p.150 / Chapter 8.2.1. --- The test set --- p.150 / Chapter 8.2.2. --- Segmentation of lexical tokens --- p.150 / Chapter 8.2.3. --- New word identification --- p.152 / Chapter 8.2.4. --- Parsing unit segmentation --- p.156 / Chapter 8.2.5. --- The grammar --- p.158 / Chapter 8.3. --- Overall performance of SERUP --- p.162 / Chapter 9. --- Conclusion --- p.164 / Chapter 9.1. --- Summary of this thesis --- p.164 / Chapter 9.2. --- Contribution of this thesis --- p.165 / Chapter 9.3. --- Future work --- p.166 / References --- p.168 / Appendix I --- p.176 / Appendix II --- p.181 / Appendix III --- p.183
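The record above builds SERUP on unification-based grammars (GPSG/HPSG-style), whose core operation is unifying feature structures such as the PHON, SYN, and SEM features listed in Chapter 6. The snippet below is an illustrative sketch only: a textbook unification of feature structures represented as nested dictionaries, not SERUP's actual implementation (which also handles robustness and statistical disambiguation), and the example feature names are assumptions.

```python
# Illustrative sketch only: unification of nested-dictionary feature structures,
# the core operation of unification-based grammars. Example features are assumed.

FAIL = object()  # sentinel returned when two structures conflict

def unify(a, b):
    """Return the most general structure subsuming both a and b, or FAIL
    when they carry conflicting atomic values."""
    if a == b:
        return a
    if isinstance(a, dict) and isinstance(b, dict):
        result = dict(a)
        for key, b_val in b.items():
            if key in result:
                merged = unify(result[key], b_val)
                if merged is FAIL:
                    return FAIL
                result[key] = merged
            else:
                result[key] = b_val
        return result
    return FAIL  # conflicting atoms, e.g. num: 'sg' vs num: 'pl'

if __name__ == "__main__":
    np_features = {"SYN": {"cat": "NP", "num": "sg"}}
    vp_expects  = {"SYN": {"cat": "NP"}, "SEM": {"role": "agent"}}
    print(unify(np_features, vp_expects))
    # {'SYN': {'cat': 'NP', 'num': 'sg'}, 'SEM': {'role': 'agent'}}
```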
47

Fuzzy set theoretic approach to handwritten Chinese character recognition

陳國評, Chan, Kwok-ping. January 1989 (has links)
Electrical Engineering / Doctoral / Doctor of Philosophy
48

Machine recognition of multi-font printed Chinese Characters

葉賜權, Yip, Chee-kuen. January 1990 (has links)
Electrical and Electronic Engineering / Master / Master of Philosophy
49

Computer recognition of printed Chinese characters

林依民, Lin, Yi-min. January 1990 (has links)
Electrical and Electronic Engineering / Master / Master of Philosophy
50

Computer recognition of handprinted Chinese characters

梁祥海, Leung, Cheung-hoi. January 1986 (has links)
Electrical Engineering / Doctoral / Doctor of Philosophy
