71 |
An Evaluation of Existing Light Stemming Algorithms for Arabic Keyword Searches / Rogerson, Brittany E. 17 November 2008 (has links)
The field of Information Retrieval recognizes the importance of stemming in improving retrieval effectiveness. Applied to searches conducted in Arabic, stemming increases the relevance of the documents returned and expands a search to cover the general meaning of a word rather than the word itself. Because Arabic relies mainly on triconsonantal roots for verb forms and derives nouns by adding affixes, words that share consonants are closely related in meaning. Stemming therefore lets a query match on a term's meaning and its closely related forms rather than on exact character strings. This paper discusses the strengths of light stemming, the most effective techniques, and the components of algorithmic affix-based stemmers used in Arabic keyword searching.
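To make the affix-based approach concrete, here is a minimal Python sketch of light stemming. The prefix and suffix lists and the minimum stem length are illustrative assumptions, loosely modeled on published light stemmers, and are not the specific algorithms evaluated in the thesis.

```python
# Minimal sketch of an affix-based Arabic light stemmer (illustrative only).
# The affix lists and minimum stem length below are assumptions, not the
# algorithms evaluated in the thesis.

PREFIXES = ["وال", "فال", "بال", "كال", "لل", "ال", "و"]
SUFFIXES = ["ها", "ان", "ات", "ون", "ين", "ية", "ة", "ه", "ي"]

def light_stem(word: str, min_stem_len: int = 3) -> str:
    """Strip at most one common prefix and any trailing suffixes,
    keeping the stem at least `min_stem_len` characters long."""
    # Remove one leading prefix if enough characters remain.
    for prefix in PREFIXES:
        if word.startswith(prefix) and len(word) - len(prefix) >= min_stem_len:
            word = word[len(prefix):]
            break
    # Remove trailing suffixes while the stem stays long enough.
    stripped = True
    while stripped:
        stripped = False
        for suffix in SUFFIXES:
            if word.endswith(suffix) and len(word) - len(suffix) >= min_stem_len:
                word = word[:-len(suffix)]
                stripped = True
                break
    return word

# "المكتبات" (the libraries) and "مكتبة" (a library) reduce to the same stem,
# so a keyword query on either surface form can match documents with the other.
print(light_stem("المكتبات") == light_stem("مكتبة"))  # True
```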
|
72 |
Accessing and using multilanguage information by users searching in different information retrieval systems / Ha, Yoo Jin. January 2008 (has links)
Thesis (Ph. D.)--Rutgers University, 2008. / "Graduate Program in Communication, Information and Library Studies." Includes bibliographical references (p. 226-238).
|
73 |
Predicting information searching performance with measures of cognitive diversity / Kim, Chang Suk. January 1900 (has links)
Thesis (Ph. D.)--University of Wisconsin--Madison, 2002. / Typescript. Includes bibliographical references (p. 145-152). Available through UMI Dissertation Services (Ann Arbor, MI). Also available on the Internet. Photocopy version and microfiche:
|
74 |
Ontologiebasiertes Information-Retrieval für das Wissensmanagement [Ontology-based information retrieval for knowledge management] / Hermans, Jan. January 2008 (has links)
Also published as: Thesis (doctoral)--University of Münster (Westphalia), 2008.
|
75 |
Robust knowledge extraction over large text collections / Song, Min. Song, Il-Yeol. January 2005 (has links)
Thesis (Ph. D.)--Drexel University, 2005. / Includes abstract and vita. Includes bibliographical references (leaves 171-190).
|
76 |
Evaluierung des Text-Retrievalsystems "Intelligent Miner for Text" von IBM: eine Studie im Vergleich zur Evaluierung anderer Systeme [Evaluation of IBM's "Intelligent Miner for Text" retrieval system: a study in comparison with evaluations of other systems] / Käter, Thorsten. January 1999 (has links)
Thesis (Diplom)--University of Konstanz, 1999.
|
77 |
Information Retrieval in Portalen: Gestaltungselemente, Praxisbeispiele und Methodenvorschlag [Information retrieval in portals: design elements, practical examples, and a proposed method] / Kremer, Stefan. January 2004 (has links)
Thesis (doctoral)--University of St. Gallen, 2004.
|
78 |
Information retrieval on the World Wide Web / Lee, Kwok-wai, Joseph, 李國偉. January 2001 (has links)
Published or final version / Computer Science and Information Systems / Doctoral / Doctor of Philosophy
|
79 |
A collaborative approach to IR evaluation / Sheshadri, Aashish. 16 September 2014 (has links)
In this thesis we investigate two main problems: 1) inferring consensus from disparate inputs to improve quality of crowd contributed data; and 2) developing a reliable crowd-aided IR evaluation framework.
With regard to the first contribution, although many statistical label aggregation methods have been proposed, little comparative benchmarking has occurred in the community, making it difficult to determine the state of the art in consensus or to quantify novelty and progress, and leaving modern systems to adopt simple control strategies. To aid the progress of statistical consensus methods and make the state of the art accessible, we develop SQUARE, an open-source shared-task benchmarking framework that includes benchmark datasets, defined tasks, standard metrics, and reference implementations with empirical results for several popular methods. In developing SQUARE we also propose a crowd simulation model that emulates real crowd environments, enabling rapid and reliable experimentation with collaborative methods under different crowd contributions. We apply the findings of the benchmark to develop reliable crowd-contributed test collections for IR evaluation.
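As a rough illustration of the simulate-then-aggregate experiments this kind of benchmarking enables, the Python sketch below pairs a toy crowd simulator (workers answer correctly with a fixed probability) with a majority-vote consensus baseline. The worker model and the baseline are assumptions made for the sketch, not SQUARE's actual datasets or reference implementations.

```python
import random
from collections import Counter

def simulate_crowd(true_labels, n_workers=5, accuracy=0.7, labels=(0, 1), seed=0):
    """Toy crowd model: each worker labels each item correctly with probability
    `accuracy`, otherwise picks a wrong label uniformly at random."""
    rng = random.Random(seed)
    judgments = []  # one list of worker votes per item
    for truth in true_labels:
        votes = [truth if rng.random() < accuracy
                 else rng.choice([lab for lab in labels if lab != truth])
                 for _ in range(n_workers)]
        judgments.append(votes)
    return judgments

def majority_vote(judgments):
    """Simplest consensus baseline: the most common label per item wins."""
    return [Counter(votes).most_common(1)[0][0] for votes in judgments]

rng = random.Random(1)
truth = [rng.randint(0, 1) for _ in range(1000)]        # hidden gold labels
consensus = majority_vote(simulate_crowd(truth))
agreement = sum(c == t for c, t in zip(consensus, truth)) / len(truth)
print(f"majority-vote accuracy over simulated crowd: {agreement:.3f}")
```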
As our second contribution, we describe a collaborative model for distributing relevance judging tasks between trusted assessors and crowd judges. Building on prior work's hypothesis that assessors disagree most on borderline documents, we train a logistic regression model to predict assessor disagreement and prioritize judging tasks by expected disagreement. Judgments are generated from different crowd models and intelligently aggregated. Given a priority queue, a judging budget, and a ratio of expert to crowd judging costs, critical judging tasks are assigned to trusted assessors while the crowd supplies the remaining judgments. Results on two TREC datasets show that a significant share of the judging burden can be confidently shifted to the crowd, achieving high rank correlation, often at lower cost than exclusive use of trusted assessors.
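The allocation step might look like the following sketch. The greedy budget policy, the cost values, and the example disagreement scores are illustrative assumptions; the predicted-disagreement inputs stand in for the output of the trained logistic regression described in the abstract.

```python
import heapq

def allocate_judgments(disagreement, budget, expert_cost=1.0, crowd_cost=0.1):
    """Greedily assign the documents with the highest predicted assessor
    disagreement to trusted assessors until the budget is exhausted;
    the crowd supplies the remaining, cheaper judgments.

    `disagreement` maps doc_id -> predicted probability of assessor
    disagreement (e.g., a logistic regression score)."""
    # Max-heap ordered by predicted disagreement (negated for heapq's min-heap).
    queue = [(-p, doc) for doc, p in disagreement.items()]
    heapq.heapify(queue)

    expert_docs, crowd_docs, spent = [], [], 0.0
    while queue:
        _, doc = heapq.heappop(queue)
        if spent + expert_cost <= budget:
            expert_docs.append(doc)   # borderline document: trusted assessor
            spent += expert_cost
        else:
            crowd_docs.append(doc)    # clearer-cut document: crowd judgment
            spent += crowd_cost
    return expert_docs, crowd_docs, spent

# Hypothetical disagreement predictions for four documents.
preds = {"d1": 0.90, "d2": 0.20, "d3": 0.70, "d4": 0.05}
experts, crowd, cost = allocate_judgments(preds, budget=2.5)
print(experts, crowd, cost)  # ['d1', 'd3'] ['d2', 'd4'] 2.2
```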
|
80 |
Blog Searching for Competitive Intelligence, Brand Image, and Reputation Management / Pikas, Christina K. 07 1900 (has links)
Reviews why it is important to search blogs for competitive intelligence, reputation management, and brand image management. Describes the structure of blogs and how to format searches in several search engines to effectively retrieve this information.
|