  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Text compression for Chinese documents.

January 1995 (has links)
by Chi-kwun Kan. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 133-137). / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Importance of Text Compression --- p.1 / Chapter 1.2 --- Historical Background of Data Compression --- p.2 / Chapter 1.3 --- The Essences of Data Compression --- p.4 / Chapter 1.4 --- Motivation and Objectives of the Project --- p.5 / Chapter 1.5 --- Definition of Important Terms --- p.6 / Chapter 1.5.1 --- Data Models --- p.6 / Chapter 1.5.2 --- Entropy --- p.10 / Chapter 1.5.3 --- Statistical and Dictionary-based Compression --- p.12 / Chapter 1.5.4 --- Static and Adaptive Modelling --- p.12 / Chapter 1.5.5 --- One-Pass and Two-Pass Modelling --- p.13 / Chapter 1.6 --- Benchmarks and Measurements of Results --- p.15 / Chapter 1.7 --- Sources of Testing Data --- p.16 / Chapter 1.8 --- Outline of the Thesis --- p.16 / Chapter 2 --- Literature Survey --- p.18 / Chapter 2.1 --- Data Compression Algorithms --- p.18 / Chapter 2.1.1 --- Statistical Compression Methods --- p.18 / Chapter 2.1.2 --- Dictionary-based Compression Methods (Ziv-Lempel Family) --- p.23 / Chapter 2.2 --- Cascading of Algorithms --- p.33 / Chapter 2.3 --- Problems of Current Compression Programs on Chinese --- p.34 / Chapter 2.4 --- Previous Chinese Data Compression Literatures --- p.37 / Chapter 3 --- Chinese-related Issues --- p.38 / Chapter 3.1 --- Characteristics in Chinese Data Compression --- p.38 / Chapter 3.1.1 --- Large and Not Fixed Size Character Set --- p.38 / Chapter 3.1.2 --- Lack of Word Segmentation --- p.40 / Chapter 3.1.3 --- Rich Semantic Meaning of Chinese Characters --- p.40 / Chapter 3.1.4 --- Grammatical Variance of Chinese Language --- p.41 / Chapter 3.2 --- Definition of Different Coding Schemes --- p.41 / Chapter 3.2.1 --- Big5 Code --- p.42 / Chapter 3.2.2 --- GB (Guo Biao) Code --- p.43 / Chapter 3.2.3 --- Unicode --- p.44 / Chapter 3.2.4 --- HZ (Hanzi) Code --- p.45 / Chapter 3.3 --- Entropy of Chinese and Other Languages --- p.45 / Chapter 4 --- Huffman Coding on Chinese Text --- p.49 / Chapter 4.1 --- The use of the Chinese Character Identification Routine --- p.50 / Chapter 4.2 --- Result --- p.51 / Chapter 4.3 --- Justification of the Result --- p.53 / Chapter 4.4 --- Time and Memory Resources Analysis --- p.58 / Chapter 4.5 --- The Heuristic Order-n Huffman Coding for Chinese Text Compression --- p.61 / Chapter 4.5.1 --- The Algorithm --- p.62 / Chapter 4.5.2 --- Result --- p.63 / Chapter 4.5.3 --- Justification of the Result --- p.64 / Chapter 4.6 --- Chapter Conclusion --- p.66 / Chapter 5 --- The Ziv-Lempel Compression on Chinese Text --- p.67 / Chapter 5.1 --- The Chinese LZSS Compression --- p.68 / Chapter 5.1.1 --- The Algorithm --- p.69 / Chapter 5.1.2 --- Result --- p.73 / Chapter 5.1.3 --- Justification of the Result --- p.74 / Chapter 5.1.4 --- Time and Memory Resources Analysis --- p.80 / Chapter 5.1.5 --- Effects in Controlling the Parameters --- p.81 / Chapter 5.2 --- The Chinese LZW Compression --- p.92 / Chapter 5.2.1 --- The Algorithm --- p.92 / Chapter 5.2.2 --- Result --- p.94 / Chapter 5.2.3 --- Justification of the Result --- p.95 / Chapter 5.2.4 --- Time and Memory Resources Analysis --- p.97 / Chapter 5.2.5 --- Effects in Controlling the Parameters --- p.98 / Chapter 5.3 --- A Comparison of the Performance of the LZSS and the LZW --- p.100 / Chapter 5.4 --- Chapter Conclusion --- p.101 / Chapter 6 --- Chinese Dictionary-based Huffman Coding --- p.103 / Chapter 6.1 --- The Algorithm --- p.104 / Chapter 6.2 --- Result --- p.107 / Chapter 6.3 --- Justification of the Result --- p.108 / Chapter 6.4 --- Effects of Changing the Size of the Dictionary --- p.111 / Chapter 6.5 --- Chapter Conclusion --- p.114 / Chapter 7 --- Cascading of Huffman Coding and LZW Compression --- p.116 / Chapter 7.1 --- Static Cascading Model --- p.117 / Chapter 7.1.1 --- The Algorithm ---
p.117 / Chapter 7.1.2 --- Result --- p.120 / Chapter 7.1.3 --- Explanation and Analysis of the Result --- p.121 / Chapter 7.2 --- Adaptive (Dynamic) Cascading Model --- p.125 / Chapter 7.2.1 --- The Algorithm --- p.125 / Chapter 7.2.2 --- Result --- p.126 / Chapter 7.2.3 --- Explanation and Analysis of the Result --- p.127 / Chapter 7.3 --- Chapter Conclusion --- p.128 / Chapter 8 --- Concluding Remarks --- p.129 / Chapter 8.1 --- Conclusion --- p.129 / Chapter 8.2 --- Future Work Direction --- p.130 / Chapter 8.2.1 --- Improvement in Efficiency and Resources Consumption --- p.130 / Chapter 8.2.2 --- The Compressibility of Chinese and Other Languages --- p.131 / Chapter 8.2.3 --- Use of Grammar Model --- p.131 / Chapter 8.2.4 --- Lossy Compression --- p.131 / Chapter 8.3 --- Epilogue --- p.132 / Bibliography --- p.133
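The central idea behind Chapters 4 and 5 of the record above is that Chinese text compresses poorly when a coder sees only single bytes, since each character spans two bytes; treating each two-byte code (e.g. Big5) as one symbol lets Huffman coding model the true character distribution. The sketch below illustrates that idea in miniature; the `tokenize_big5` splitter and its 0xA1-0xF9 lead-byte range are illustrative assumptions, not the thesis's actual character-identification routine.

```python
import heapq
from collections import Counter
from itertools import count

def tokenize_big5(data: bytes):
    """Split bytes into symbols, pairing a Big5-style lead byte
    (0xA1-0xF9, an illustrative range) with its trailing byte so each
    Chinese character becomes one Huffman symbol."""
    i, symbols = 0, []
    while i < len(data):
        if 0xA1 <= data[i] <= 0xF9 and i + 1 < len(data):
            symbols.append(data[i:i + 2])
            i += 2
        else:
            symbols.append(data[i:i + 1])
            i += 1
    return symbols

def huffman_codes(symbols):
    """Build a prefix-free code {symbol: bitstring} over the symbols."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate one-symbol alphabet
        return {next(iter(freq)): "0"}
    tick = count()  # tie-breaker so the heap never compares subtrees
    heap = [(f, next(tick), sym) for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tick), (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):  # internal node
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:  # leaf symbol
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes
```

Frequent symbols receive shorter bitstrings, which is exactly why the enlarged Chinese symbol alphabet, despite its size, can still pay off.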
12

On-line learning for adaptive text filtering.

January 1999 (has links)
Yu Kwok Leung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (leaves 91-96). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- The Problem --- p.1 / Chapter 1.2 --- Information Filtering --- p.2 / Chapter 1.3 --- Contributions --- p.7 / Chapter 1.4 --- Organization Of The Thesis --- p.10 / Chapter 2 --- Related Work --- p.12 / Chapter 3 --- Adaptive Text Filtering --- p.22 / Chapter 3.1 --- Representation --- p.22 / Chapter 3.1.1 --- Textual Document --- p.23 / Chapter 3.1.2 --- Filtering Profile --- p.28 / Chapter 3.2 --- On-line Learning Algorithms For Adaptive Text Filtering --- p.29 / Chapter 3.2.1 --- The Sleeping Experts Algorithm --- p.29 / Chapter 3.2.2 --- The EG-based Algorithms --- p.32 / Chapter 4 --- The REPGER Algorithm --- p.37 / Chapter 4.1 --- A New Approach --- p.37 / Chapter 4.2 --- Relevance Prediction By RElevant feature Pool --- p.42 / Chapter 4.3 --- Retrieving Good Training Examples --- p.45 / Chapter 4.4 --- Learning Dissemination Threshold Dynamically --- p.49 / Chapter 5 --- The Threshold Learning Algorithm --- p.50 / Chapter 5.1 --- Learning Dissemination Threshold Dynamically --- p.50 / Chapter 5.2 --- Existing Threshold Learning Techniques --- p.51 / Chapter 5.3 --- A New Threshold Learning Algorithm --- p.53 / Chapter 6 --- Empirical Evaluations --- p.55 / Chapter 6.1 --- Experimental Methodology --- p.55 / Chapter 6.2 --- Experimental Settings --- p.59 / Chapter 6.3 --- Experimental Results --- p.62 / Chapter 7 --- Integrating With Feature Clustering --- p.76 / Chapter 7.1 --- Distributional Clustering Algorithm --- p.79 / Chapter 7.2 --- Integrating With Our REPGER Algorithm --- p.82 / Chapter 7.3 --- Empirical Evaluation --- p.84 / Chapter 8 --- Conclusions --- p.87 / Chapter 8.1 --- Summary --- p.87 / Chapter 8.2 --- Future Work --- p.88 / Bibliography --- p.91 / Chapter A --- Experimental Results On The AP Corpus --- p.97 / Chapter A.1 --- 
The EG Algorithm --- p.97 / Chapter A.2 --- The EG-C Algorithm --- p.98 / Chapter A.3 --- The REPGER Algorithm --- p.100 / Chapter B --- Experimental Results On The FBIS Corpus --- p.102 / Chapter B.1 --- The EG Algorithm --- p.102 / Chapter B.2 --- The EG-C Algorithm --- p.103 / Chapter B.3 --- The REPGER Algorithm --- p.105 / Chapter C --- Experimental Results On The WSJ Corpus --- p.107 / Chapter C.1 --- The EG Algorithm --- p.107 / Chapter C.2 --- The EG-C Algorithm --- p.108 / Chapter C.3 --- The REPGER Algorithm --- p.110
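The EG algorithm benchmarked throughout the appendices above is the exponentiated-gradient online learner: weights live on the probability simplex and are updated multiplicatively from the prediction error. A hedged sketch of one EG step for squared loss follows the standard Kivinen-Warmuth formulation, not necessarily the thesis's exact variant:

```python
import math

def eg_update(w, x, y, lr=0.1):
    """One exponentiated-gradient step for on-line linear regression:
    multiply each weight by exp(-2*lr*(y_hat - y)*x_i), then renormalise
    so the weights stay on the probability simplex."""
    y_hat = sum(wi * xi for wi, xi in zip(w, x))
    scaled = [wi * math.exp(-2.0 * lr * (y_hat - y) * xi) for wi, xi in zip(w, x)]
    z = sum(scaled)
    return [wi / z for wi in scaled]
```

The multiplicative update concentrates weight on informative features quickly, which is what makes EG attractive for text filtering with many irrelevant terms.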
13

A probabilistic approach for automatic text filtering.

January 1998 (has links)
Low Kon Fan. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 165-168). / Abstract also in Chinese. / Abstract --- p.i / Acknowledgment --- p.iv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Overview of Information Filtering --- p.1 / Chapter 1.2 --- Contributions --- p.4 / Chapter 1.3 --- Organization of this thesis --- p.6 / Chapter 2 --- Existing Approaches --- p.7 / Chapter 2.1 --- Representational issues --- p.7 / Chapter 2.1.1 --- Document Representation --- p.7 / Chapter 2.1.2 --- Feature Selection --- p.11 / Chapter 2.2 --- Traditional Approaches --- p.15 / Chapter 2.2.1 --- NewsWeeder --- p.15 / Chapter 2.2.2 --- NewT --- p.17 / Chapter 2.2.3 --- SIFT --- p.19 / Chapter 2.2.4 --- InRoute --- p.20 / Chapter 2.2.5 --- Motivation of Our Approach --- p.21 / Chapter 2.3 --- Probabilistic Approaches --- p.23 / Chapter 2.3.1 --- The Naive Bayesian Approach --- p.25 / Chapter 2.3.2 --- The Bayesian Independence Classifier Approach --- p.28 / Chapter 2.4 --- Comparison --- p.31 / Chapter 3 --- Our Bayesian Network Approach --- p.33 / Chapter 3.1 --- Backgrounds of Bayesian Networks --- p.34 / Chapter 3.2 --- Bayesian Network Induction Approach --- p.36 / Chapter 3.3 --- Automatic Construction of Bayesian Networks --- p.38 / Chapter 4 --- Automatic Feature Discretization --- p.50 / Chapter 4.1 --- Predefined Level Discretization --- p.52 / Chapter 4.2 --- Lloyd's Algorithm --- p.53 / Chapter 4.3 --- Class Dependence Discretization --- p.55 / Chapter 5 --- Experiments and Results --- p.59 / Chapter 5.1 --- Document Collections --- p.60 / Chapter 5.2 --- Batch Filtering Experiments --- p.63 / Chapter 5.3 --- Batch Filtering Results --- p.65 / Chapter 5.4 --- Incremental Session Filtering Experiments --- p.87 / Chapter 5.5 --- Incremental Session Filtering Results --- p.88 / Chapter 6 --- Conclusions and Future Work --- p.105 / Appendix A --- p.107 / Appendix B --- p.116 / Appendix C --- p.126 / Appendix D --- p.131 / Appendix E --- p.145
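Chapter 2.3.1 of the record above names the naive Bayesian approach as the baseline probabilistic filter against which the Bayesian-network model is motivated. For concreteness, here is a minimal multinomial naive Bayes relevance filter with Laplace smoothing; this is a generic textbook sketch, not the thesis's own model:

```python
import math
from collections import Counter

def train_nb(docs, labels, vocab):
    """Multinomial naive Bayes with Laplace smoothing: returns log-priors
    and per-class log term probabilities over the given vocabulary."""
    classes = set(labels)
    prior = {c: math.log(labels.count(c) / len(labels)) for c in classes}
    logp = {}
    for c in classes:
        counts = Counter()
        for doc, y in zip(docs, labels):
            if y == c:
                counts.update(doc)
        total = sum(counts.values()) + len(vocab)  # +1 per vocab term
        logp[c] = {w: math.log((counts[w] + 1) / total) for w in vocab}
    return prior, logp

def classify(doc, prior, logp):
    """Pick the class maximising log-prior plus summed term log-likelihoods."""
    return max(prior, key=lambda c: prior[c] + sum(logp[c][w] for w in doc if w in logp[c]))
```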
14

Multi-lingual text retrieval and mining.

January 2003 (has links)
Law Yin Yee. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 130-134). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Cross-Lingual Information Retrieval (CLIR) --- p.2 / Chapter 1.2 --- Bilingual Term Association Mining --- p.5 / Chapter 1.3 --- Our Contributions --- p.6 / Chapter 1.3.1 --- CLIR --- p.6 / Chapter 1.3.2 --- Bilingual Term Association Mining --- p.7 / Chapter 1.4 --- Thesis Organization --- p.8 / Chapter 2 --- Related Work --- p.9 / Chapter 2.1 --- CLIR Techniques --- p.9 / Chapter 2.1.1 --- Existing Approaches --- p.9 / Chapter 2.1.2 --- Difference Between Our Model and Existing Approaches --- p.13 / Chapter 2.2 --- Bilingual Term Association Mining Techniques --- p.13 / Chapter 2.2.1 --- Existing Approaches --- p.13 / Chapter 2.2.2 --- Difference Between Our Model and Existing Approaches --- p.17 / Chapter 3 --- Cross-Lingual Information Retrieval (CLIR) --- p.18 / Chapter 3.1 --- Cross-Lingual Query Processing and Translation --- p.18 / Chapter 3.1.1 --- Query Context and Document Context Generation --- p.20 / Chapter 3.1.2 --- Context-Based Query Translation --- p.23 / Chapter 3.1.3 --- Query Term Weighting --- p.28 / Chapter 3.1.4 --- Final Weight Calculation --- p.30 / Chapter 3.2 --- Retrieval on Documents and Automated Summaries --- p.32 / Chapter 4 --- Experiments on Cross-Lingual Information Retrieval --- p.38 / Chapter 4.1 --- Experimental Setup --- p.38 / Chapter 4.2 --- Results of English-to-Chinese Retrieval --- p.45 / Chapter 4.2.1 --- Using Mono-Lingual Retrieval as the Gold Standard --- p.45 / Chapter 4.2.2 --- Using Human Relevance Judgments as the Gold Standard --- p.49 / Chapter 4.3 --- Results of Chinese-to-English Retrieval --- p.53 / Chapter 4.3.1 --- Using Mono-lingual Retrieval as the Gold Standard --- p.53 / Chapter 4.3.2 --- Using Human Relevance Judgments as the Gold Standard --- p.57 / Chapter 5 --- Discovering Comparable Multi-lingual Online News for Text Mining --- p.61 / Chapter 5.1 --- Story Representation --- p.62 / Chapter 5.2 --- Gloss Translation --- p.64 / Chapter 5.3 --- Comparable News Discovery --- p.67 / Chapter 6 --- Mining Bilingual Term Association Based on Co-occurrence --- p.75 / Chapter 6.1 --- Bilingual Term Cognate Generation --- p.75 / Chapter 6.2 --- Term Mining Algorithm --- p.77 / Chapter 7 --- Phonetic Matching --- p.87 / Chapter 7.1 --- Algorithm Design --- p.87 / Chapter 7.2 --- Discovering Associations of English Terms and Chinese Terms --- p.93 / Chapter 7.2.1 --- Converting English Terms into Phonetic Representation --- p.93 / Chapter 7.2.2 --- Discovering Associations of English Terms and Mandarin Chinese Terms --- p.100 / Chapter 7.2.3 --- Discovering Associations of English Terms and Cantonese Chinese Terms --- p.104 / Chapter 8 --- Experiments on Bilingual Term Association Mining --- p.111 / Chapter 8.1 --- Experimental Setup --- p.111 / Chapter 8.2 --- Result and Discussion of Bilingual Term Association Mining Based on Co-occurrence --- p.114 / Chapter 8.3 --- Result and Discussion of Phonetic Matching --- p.121 / Chapter 9 --- Conclusions and Future Work --- p.126 / Chapter 9.1 --- Conclusions --- p.126 / Chapter 9.1.1 --- CLIR --- p.126 / Chapter 9.1.2 --- Bilingual Term Association Mining --- p.127 / Chapter 9.2 --- Future Work --- p.128 / Bibliography --- p.134 / Chapter A --- Original English Queries --- p.135 / Chapter B --- Manual translated Chinese Queries --- p.137 / Chapter C --- Pronunciation symbols used by the PRONLEX Lexicon --- p.139 / Chapter D --- Initial Letter-to-Phoneme Tags --- p.141 / Chapter E --- English Sounds with their Chinese Equivalents --- p.143
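Chapter 6 of the record above mines bilingual term associations from co-occurrence across comparable news story pairs. One standard way to score such associations is the Dice coefficient over document-pair co-occurrence counts; the sketch below illustrates the general idea (Dice scoring is a common textbook choice, not necessarily the thesis's exact statistic):

```python
from collections import defaultdict

def dice_associations(pairs):
    """Score candidate bilingual term pairs from comparable document
    pairs: Dice(e, c) = 2 * |pairs containing both e and c|
                      / (|pairs with e| + |pairs with c|)."""
    e_count = defaultdict(int)
    c_count = defaultdict(int)
    both = defaultdict(int)
    for e_doc, c_doc in pairs:
        e_terms, c_terms = set(e_doc), set(c_doc)
        for e in e_terms:
            e_count[e] += 1
        for c in c_terms:
            c_count[c] += 1
        for e in e_terms:
            for c in c_terms:
                both[(e, c)] += 1
    return {(e, c): 2 * n / (e_count[e] + c_count[c])
            for (e, c), n in both.items()}
```

Terms that consistently appear in the same comparable pairs score near 1, while incidental co-occurrences are diluted by their marginal counts.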
15

Semi-supervised document clustering with active learning. / CUHK electronic theses & dissertations collection

January 2008 (has links)
Most existing semi-supervised document clustering approaches are model-based and can be treated as parametric models, assuming that the underlying clusters follow a certain pre-defined distribution. In our semi-supervised document clustering, each cluster is represented by a non-parametric probability distribution. Two approaches are designed for incorporating pairwise constraints into the document clustering approach. The first, the term-to-term relationship approach (TR), uses pairwise constraints to capture term-to-term dependence relationships. The second, the linear combination approach (LC), combines the clustering objective function linearly with the user-provided constraints. Extensive experimental results show that our proposed framework is effective. / This thesis presents a new framework for automatically partitioning text documents while taking into consideration constraints given by users. Semi-supervised document clustering is developed based on pairwise constraints. Unlike traditional semi-supervised document clustering approaches, which assume pairwise constraints to be prepared by the user beforehand, we develop a novel framework for automatically discovering pairwise constraints revealing the user's grouping preference. An active learning approach for choosing informative document pairs is designed by measuring the amount of information that can be obtained by revealing judgments of document pairs. For this purpose, three models, namely the uncertainty model, the generation error model, and the term-to-term relationship model, are designed for measuring the informativeness of document pairs from different perspectives. A dependent active learning approach is developed by extending the active learning approach to avoid redundant document pair selection. Two models are investigated for estimating the likelihood that a document pair is redundant to previously selected document pairs, namely the KL divergence model and the symmetric model.
/ Huang, Ruizhang. / Adviser: Wai Lam. / Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3600. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 117-123). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
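The linear combination (LC) approach described in the abstract above combines the clustering objective with user-provided must-link/cannot-link constraints. In miniature, such an objective can be written as data cost plus a weighted constraint-violation penalty; the sum-of-squares data term and unit penalty below are a hypothetical simplification, since the thesis's clusters are non-parametric distributions rather than centroids:

```python
def lc_objective(assign, points, centers, must_link, cannot_link, alpha=1.0):
    """Sum-of-squares clustering cost plus alpha times the number of
    violated pairwise constraints -- a linear combination of the data
    term and the user-supplied supervision."""
    cost = sum((points[i][d] - centers[assign[i]][d]) ** 2
               for i in range(len(points))
               for d in range(len(points[i])))
    # must-link pairs should share a cluster; cannot-link pairs should not
    violations = sum(1 for i, j in must_link if assign[i] != assign[j])
    violations += sum(1 for i, j in cannot_link if assign[i] == assign[j])
    return cost + alpha * violations
```

A search procedure (e.g. constrained k-means-style reassignment) would then minimise this combined score, with `alpha` trading off geometry against user preference.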
16

Geometric and topological approaches to semantic text retrieval. / CUHK electronic theses & dissertations collection

January 2007 (has links)
In the first part of this thesis, we present a new understanding of the latent semantic space of a dataset from the dual perspective, which relaxes the assumed conditions above and leads naturally to a unified kernel function for a class of vector space models. New semantic analysis methods based on the unified kernel function are developed, which combine the advantages of LSI and GVSM. We also show that the new methods are stable with respect to the choice of rank: even if the selected rank is quite far from the optimal one, retrieval performance does not degrade much. The experimental results of our methods on standard test sets are promising. / In the second part of this thesis, we propose that the mathematical structure of simplexes can be attached to a term-document matrix in the vector-space model (VSM) for information retrieval. The Q-analysis devised by R. H. Atkin may then be applied to analyse the topological structure of the simplexes and their corresponding dataset. Experimental results of this analysis reveal a correlation between the effectiveness of LSI and the topological structure of the dataset. Using the information obtained from the topological analysis, we develop a new query expansion method. Experimental results show that our method can enhance the performance of the VSM on datasets for which LSI is not effective. Finally, the notion of homology is introduced into the topological analysis of datasets, and its possible relation to word sense disambiguation is studied through a simple example. / With the vast amount of textual information available today, the task of designing effective and efficient retrieval methods becomes more important and complex. The Basic Vector Space Model (BVSM) is well known in information retrieval. Unfortunately, it cannot retrieve all relevant documents since it is based on literal term matching.
The Generalized Vector Space Model (GVSM) and Latent Semantic Indexing (LSI) are two well-known semantic retrieval methods, in which some underlying latent semantic structure in the dataset is assumed. However, their assumptions about where the semantic structure resides are rather strong. Moreover, the performance of LSI can vary considerably across datasets, and the questions of which characteristics of a dataset contribute to this difference, and why, have not been fully understood. The present thesis focuses on providing answers to these two questions. / Li, Dandan. / "August 2007." / Adviser: Chung-Ping Kwong. / Source: Dissertation Abstracts International, Volume: 69-02, Section: B, page: 1108. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 118-120). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract in English and Chinese. / School code: 1307.
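The abstract above contrasts the BVSM's literal term matching with the GVSM, which assumes semantic structure in how terms co-occur across the corpus. A tiny sketch of the classical GVSM kernel k(q, d) = (A^T q) . (A^T d), where the rows of A's transpose are corpus documents, shows how a query and document sharing no literal term can still match; the toy vocabulary below is purely illustrative:

```python
def gvsm_kernel(q, d, corpus):
    """GVSM similarity: re-express q and d by their inner products with
    every corpus document, then compare in that dual space, so terms
    that co-occur in some document count as partial matches."""
    q_dual = [sum(qi * ti for qi, ti in zip(q, doc)) for doc in corpus]
    d_dual = [sum(di * ti for di, ti in zip(d, doc)) for doc in corpus]
    return sum(a * b for a, b in zip(q_dual, d_dual))
```

With vocabulary ("car", "auto", "road"), a query on "car" and a document on "auto" have literal dot product zero, yet any corpus document containing both terms gives them a positive GVSM score.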
17

New learning strategies for automatic text categorization.

January 2001 (has links)
Lai Kwok-yin. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 125-130). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Automatic Textual Document Categorization --- p.1 / Chapter 1.2 --- Meta-Learning Approach For Text Categorization --- p.3 / Chapter 1.3 --- Contributions --- p.6 / Chapter 1.4 --- Organization of the Thesis --- p.7 / Chapter 2 --- Related Work --- p.9 / Chapter 2.1 --- Existing Automatic Document Categorization Approaches --- p.9 / Chapter 2.2 --- Existing Meta-Learning Approaches For Information Retrieval --- p.14 / Chapter 2.3 --- Our Meta-Learning Approaches --- p.20 / Chapter 3 --- Document Pre-Processing --- p.22 / Chapter 3.1 --- Document Representation --- p.22 / Chapter 3.2 --- Classification Scheme Learning Strategy --- p.25 / Chapter 4 --- Linear Combination Approach --- p.30 / Chapter 4.1 --- Overview --- p.30 / Chapter 4.2 --- Linear Combination Approach - The Algorithm --- p.33 / Chapter 4.2.1 --- Equal Weighting Strategy --- p.34 / Chapter 4.2.2 --- Weighting Strategy Based On Utility Measure --- p.34 / Chapter 4.2.3 --- Weighting Strategy Based On Document Rank --- p.35 / Chapter 4.3 --- Comparisons of Linear Combination Approach and Existing Meta-Learning Methods --- p.36 / Chapter 4.3.1 --- LC versus Simple Majority Voting --- p.36 / Chapter 4.3.2 --- LC versus BORG --- p.38 / Chapter 4.3.3 --- LC versus Restricted Linear Combination Method --- p.38 / Chapter 5 --- The New Meta-Learning Model - MUDOF --- p.40 / Chapter 5.1 --- Overview --- p.41 / Chapter 5.2 --- Document Feature Characteristics --- p.42 / Chapter 5.3 --- Classification Errors --- p.44 / Chapter 5.4 --- Linear Regression Model --- p.45 / Chapter 5.5 --- The MUDOF Algorithm --- p.47 / Chapter 6 --- Incorporating MUDOF into Linear Combination approach --- p.52 / Chapter 6.1 --- Background --- p.52 / Chapter 6.2 --- Overview of MUDOF2 --- p.54 / Chapter 6.3 --- Major 
Components of the MUDOF2 --- p.57 / Chapter 6.4 --- The MUDOF2 Algorithm --- p.59 / Chapter 7 --- Experimental Setup --- p.66 / Chapter 7.1 --- Document Collection --- p.66 / Chapter 7.2 --- Evaluation Metric --- p.68 / Chapter 7.3 --- Component Classification Algorithms --- p.71 / Chapter 7.4 --- Categorical Document Feature Characteristics for MUDOF and MUDOF2 --- p.72 / Chapter 8 --- Experimental Results and Analysis --- p.74 / Chapter 8.1 --- Performance of Linear Combination Approach --- p.74 / Chapter 8.2 --- Performance of the MUDOF Approach --- p.78 / Chapter 8.3 --- Performance of MUDOF2 Approach --- p.87 / Chapter 9 --- Conclusions and Future Work --- p.96 / Chapter 9.1 --- Conclusions --- p.96 / Chapter 9.2 --- Future Work --- p.98 / Chapter A --- Details of Experimental Results for Reuters-21578 corpus --- p.99 / Chapter B --- Details of Experimental Results for OHSUMED corpus --- p.114 / Bibliography --- p.125
18

Language and representation : the recontextualisation of participants, activities and reactions

Van Leeuwen, Theo January 1993 (has links)
Doctor of Philosophy / This thesis proposes a model for the description of social practice which analyses social practices into the following elements: (1) the participants of the practice; (2) the activities which constitute the practice; (3) the performance indicators which stipulate how the activities are to be performed; (4) the dress and body grooming for the participants; (5) the times when, and (6) the locations where, the activities take place; (7) the objects, tools, and materials required for performing the activities; and (8) the eligibility conditions for the participants and their dress, the objects, and the locations, that is, the characteristics these elements must have to be eligible to participate in, or be used in, the social practice.
19

A method for finding common attributes in heterogenous DoD databases /

Zobair, Hamza A. January 2004 (has links) (PDF)
Thesis (M.S. in Software Engineering)--Naval Postgraduate School, June 2004. / Thesis advisor(s): Valdis Berzins. Includes bibliographical references (p. 179). Also available online.
20

Latent semantic sentence clustering for multi-document summarization

Geiss, Johanna January 2011 (has links)
No description available.
