About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

1. Modelling the IR task : supporting the user
Ennis, Mark, January 1998
No description available.

2. Distributed inverted files and performance : a study of parallelism and data distribution methods in IR
Macfarlane, Andrew, January 2000
No description available.

3. Criteria for user-friendliness in retrieval software design
Trenner, Lesley, January 1988
No description available.

4. The entity relationship model as a basis for information retrieval
Pitkin, W. J., January 1984
No description available.

5. Cooperative working in an open hypermedia environment
Melley, Mylene, January 1995
No description available.

6. The implementation and use of a logic based approach to assist retrieval from a relational database
Jones, P., January 1988
No description available.

7. A concept-space based multi-document text summarizer
January 2001
by Tang Ting Kap. Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 88-94). Abstracts in English and Chinese. List of Figures (p.vi); List of Tables (p.vii).
Contents:
1. Introduction (p.1): Information Overloading and Low Utilization; Problem Needs To Solve; Research Contributions (Using Concept Space in Summarization; New Extraction Method; Experiments on New System); Organization of This Thesis
2. Literature Review (p.8): Classical Approach (Luhn's Algorithm; Edmundson's Algorithm); Statistical Approach; Natural Language Processing Approach
3. Proposed Summarization Approach (p.18): Direction of Summarization; Overview of Summarization Algorithm (Document Pre-processing; Vector Space Model; Sentence Extraction); Evaluation Method (Recall, Precision and F-measure); Advantage of Concept Space Approach
4. System Architecture (p.27): Converge Process; Diverge Process; Backward Search
5. Converge Process (p.32): Document Merging; Word Phrase Extraction; Automatic Indexing; Cluster Analysis; Hopfield Net Classification
6. Diverge Process (p.42): Concept Terms Refinement; Sentence Selection; Backward Searching
7. Experiment and Research Findings (p.48): System-generated Summary vs. Source Documents (Compression Ratio; Information Loss); System-generated Summary vs. Human-generated Summary (Background of EXTRACTOR; Evaluation Method); Evaluation of Different System-generated Summaries by Human Experts
8. Conclusions and Future Research (p.68): Conclusions; Future Work
Appendices: A. EXTRACTOR System Flow and Ten-step Procedure (p.71); B. Summary Generated by MS Word 2000 (p.75); C. Summary Generated by EXTRACTOR Software (p.76); D. Summary Generated by Our System (p.77); E. System-generated Word Phrases from Test Sample (p.78); F. Word Phrases Identified by Subjects (p.79); G. Sample of Questionnaire (p.84); H. Result of Questionnaire (p.85); I. Evaluation for Diverge Process (p.86)
Bibliography (p.88)
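
This record gives only a table of contents, but the pipeline it names (document pre-processing, vector space model, sentence extraction) is a standard extractive setup. The Python sketch below illustrates that general idea only: TF-IDF sentence vectors ranked against a collection centroid. The function names and toy sentences are invented for the example, and the thesis's actual concept-space construction (word-phrase extraction and Hopfield-net clustering) is not reproduced here.

```python
# Illustrative sketch only: extractive sentence selection in a simple vector
# space model. The centroid below is a hypothetical stand-in for a set of
# concept terms; it is NOT the concept-space method of the thesis.
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens; a minimal stand-in for document pre-processing."""
    return re.findall(r"[a-z']+", text.lower())

def tfidf_vectors(sentences):
    """Build a TF-IDF vector (as a dict) for each sentence."""
    token_lists = [tokenize(s) for s in sentences]
    df = Counter(term for tokens in token_lists for term in set(tokens))
    n = len(sentences)
    vectors = []
    for tokens in token_lists:
        tf = Counter(tokens)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def extract(sentences, k=2):
    """Rank sentences by similarity to the collection centroid and keep the top k."""
    vectors = tfidf_vectors(sentences)
    centroid = Counter()
    for vec in vectors:
        centroid.update(vec)
    ranked = sorted(range(len(sentences)),
                    key=lambda i: cosine(vectors[i], centroid), reverse=True)
    return [sentences[i] for i in sorted(ranked[:k])]  # keep original order

if __name__ == "__main__":
    docs = [
        "Concept space methods group related terms across documents.",
        "A Hopfield network can cluster co-occurring word phrases.",
        "The summarizer extracts sentences that cover the main concepts.",
        "Evaluation uses recall, precision and the F-measure.",
    ]
    for sentence in extract(docs, k=2):
        print(sentence)
```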

8. A tightness continuum measure of Chinese semantic units, and its application to information retrieval
Xu, Ying, 06 1900
Chinese differs from alphabetic languages such as English in that there are no delimiters between words, so word segmentation is an important step for most Chinese natural language processing (NLP) tasks. We propose a tightness continuum for Chinese semantic units, constructed from statistical information. Based on this continuum, character sequences can be segmented dynamically, and the resulting information can be exploited in a number of information retrieval (IR) tasks. To show that the tightness continuum is useful for NLP tasks, we propose two methods for exploiting it within IR systems: the first refines the output of a general Chinese word segmenter, and the second embeds the tightness value directly into the IR scoring function. Experimental results show that our tightness measure is reasonable and improves the performance of IR systems.
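
The abstract does not spell out which statistics define the continuum or how the tightness value enters the scoring function, so the Python sketch below is only a hypothetical illustration: pointwise mutual information over adjacent characters as a stand-in tightness statistic, and a simple multiplicative boost on a term's base retrieval score. The function names, the alpha parameter, and the toy corpus are all invented for the example.

```python
# Illustrative sketch only: one way a "tightness" statistic for adjacent Chinese
# characters could be computed and folded into a retrieval score. The PMI
# statistic and the multiplicative weighting are hypothetical stand-ins; they
# are not the measure or score function defined in the thesis.
import math
from collections import Counter

def pmi_tightness(corpus):
    """Pointwise mutual information for each adjacent character pair (bigram)."""
    unigrams, bigrams = Counter(), Counter()
    total_chars = 0
    for text in corpus:
        unigrams.update(text)
        bigrams.update(text[i:i + 2] for i in range(len(text) - 1))
        total_chars += len(text)
    total_bigrams = sum(bigrams.values())
    tightness = {}
    for pair, count in bigrams.items():
        p_pair = count / total_bigrams
        p_a = unigrams[pair[0]] / total_chars
        p_b = unigrams[pair[1]] / total_chars
        tightness[pair] = math.log(p_pair / (p_a * p_b))
    return tightness

def weighted_term_score(term, base_score, tightness, alpha=0.1):
    """Scale a term's base retrieval score (e.g. from BM25) by its internal tightness."""
    pairs = [term[i:i + 2] for i in range(len(term) - 1)]
    if not pairs:
        return base_score
    avg = sum(tightness.get(p, 0.0) for p in pairs) / len(pairs)
    return base_score * (1.0 + alpha * max(avg, 0.0))

if __name__ == "__main__":
    corpus = ["信息检索系统", "中文信息处理", "检索系统评价"]
    t = pmi_tightness(corpus)
    # A tightly bound pair such as "信息" receives a larger boost than a
    # sequence that crosses a word boundary, such as "息检".
    print(weighted_term_score("信息", 2.0, t))
    print(weighted_term_score("息检", 2.0, t))
```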

9. Effectiveness of index size reduction techniques
Jacobson, Bryan L., 19 February 1992
Index size savings from three techniques are measured: 1) eliminating common, low-information words found in a "stop list" (such as of, the, and at); 2) stemming, i.e. truncating terms by removing suffixes (such as -s, -ed, and -ing); and 3) simple data compression. Savings are measured on two moderately large collections of text. The index size savings from using the techniques individually and in combination are reported, and the impact on query performance in terms of speed, recall, and precision is estimated.
Graduation date: 1992
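
As a rough illustration of how those three techniques shrink an index, the Python sketch below builds a toy inverted index with and without a stop list and crude suffix stripping, then prints serialized sizes with and without zlib compression. The stop list, the suffix rules, and the sample documents are invented for the example; they are not the collections or measurements from the thesis.

```python
# Illustrative sketch only: the three reduction techniques described above,
# applied to a toy inverted index, with the serialized size reported at each step.
import json
import re
import zlib
from collections import defaultdict

STOP_WORDS = {"of", "the", "at", "a", "an", "and", "in", "to", "is"}  # toy stop list

def strip_suffix(term):
    """Very crude suffix removal (-s, -ed, -ing); a real system would use a stemmer."""
    for suffix in ("ing", "ed", "s"):
        if term.endswith(suffix) and len(term) > len(suffix) + 2:
            return term[: -len(suffix)]
    return term

def build_index(docs, use_stop_list=False, use_stemming=False):
    """Map each term to the sorted list of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in re.findall(r"[a-z]+", text.lower()):
            if use_stop_list and term in STOP_WORDS:
                continue
            if use_stemming:
                term = strip_suffix(term)
            index[term].add(doc_id)
    return {t: sorted(ids) for t, ids in index.items()}

def serialized_size(index, compress=False):
    """Size in bytes of the JSON-serialized index, optionally zlib-compressed."""
    data = json.dumps(index, sort_keys=True).encode("utf-8")
    return len(zlib.compress(data)) if compress else len(data)

if __name__ == "__main__":
    docs = [
        "The effectiveness of index size reduction techniques is measured",
        "Stop lists remove common words at indexing time",
        "Stemming truncates terms by removing suffixes",
    ]
    baseline = build_index(docs)
    reduced = build_index(docs, use_stop_list=True, use_stemming=True)
    print("baseline bytes:       ", serialized_size(baseline))
    print("stop list + stemming: ", serialized_size(reduced))
    print("plus compression:     ", serialized_size(reduced, compress=True))
```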

10. Simultaneously searching with multiple algorithm settings : an alternative to parameter tuning for suboptimal single-agent search
Valenzano, Richard, January 2009
Thesis (M.Sc.)--University of Alberta, 2009. Title from PDF file main screen (viewed on Nov. 27, 2009). "A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Master of Science, Department of Computing Science, University of Alberta." Includes bibliographical references.
