11.
Third-order tensor decomposition for search in social tagging systems. Bi, Bin (闭彬). January 2010.
published_or_final_version / Computer Science / Master / Master of Philosophy
12.
Selecting keyword search terms in computer forensics examinations using domain analysis and modeling. Bogen, Alfred Christopher. January 2006.
Thesis (Ph.D.)--Mississippi State University, Department of Computer Science and Engineering. / Title from title screen. Includes bibliographical references.
13.
Xpareto: a text-centric XML search engine. Feng, Zhisheng. January 2007.
Thesis (M.Sc.)--York University, 2007. Graduate Programme in Computer Science and Engineering. / Typescript. Includes bibliographical references (leaves 187-189). Also available on the Internet via web browser at the following URL: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR38770
14.
Answer extraction for simple and complex questions. Joty, Shafiz Rayhan. University of Lethbridge, Faculty of Arts and Science. January 2008.
When a standard document search engine serves a user a ranked list of relevant documents, the search task is usually not over: the user must still read through the document contents to find the precise piece of information sought. Question answering, the retrieval of answers to natural language questions from a document collection, tries to remove this burden from the end user by providing direct access to the relevant information. This thesis is concerned with open-domain question answering and considers both simple and complex questions. Simple questions (i.e., factoid and list questions) are easier to answer than questions with complex information needs, which require inferencing and synthesizing information from multiple documents.
Our question answering system for simple questions is based on question classification and document tagging. Question classification extracts useful information (i.e., the expected answer type) about how to answer the question, and document tagging extracts useful information from the documents, which is then used to find the answer to the question.
For complex questions, we experimented with both empirical and machine learning approaches. We extracted several features of different types (i.e., lexical, lexical-semantic, syntactic, and semantic) for each sentence in the document collection in order to measure its relevance to the user query. A hill-climbing local search strategy is used to fine-tune the feature weights. We also experimented with two unsupervised machine learning techniques, the k-means and Expectation Maximization (EM) algorithms, and evaluated their performance. For all of these methods, we show the effects of the different kinds of features. / xi, 214 leaves : ill. (some col.) ; 29 cm.
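The weight-tuning step described in this abstract can be sketched as a minimal hill-climbing loop that perturbs one feature weight at a time and keeps any non-worsening change. This is only an illustrative sketch: the feature vectors, labels, threshold, and scoring rule below are hypothetical stand-ins, not the thesis's actual features or objective.

```python
import random

def hill_climb_weights(features, labels, n_iters=200, step=0.05, seed=0):
    """Greedy local search over per-feature weights (illustrative sketch).

    features: list of feature vectors, one per candidate sentence
    labels:   list of booleans marking which sentences are relevant
    Returns the tuned weight vector and the accuracy it achieves.
    """
    rng = random.Random(seed)
    n = len(features[0])
    weights = [1.0] * n  # start with all features weighted equally

    def accuracy(w):
        # Score each sentence as a weighted feature sum and threshold it;
        # the 0.5 cutoff is an arbitrary stand-in for a relevance decision.
        correct = 0
        for feats, label in zip(features, labels):
            score = sum(wi * fi for wi, fi in zip(w, feats))
            correct += int((score > 0.5) == label)
        return correct / len(labels)

    best = accuracy(weights)
    for _ in range(n_iters):
        # Perturb one randomly chosen weight up or down by a small step.
        i = rng.randrange(n)
        candidate = weights[:]
        candidate[i] += rng.choice([-step, step])
        cand_score = accuracy(candidate)
        if cand_score >= best:  # keep any non-worsening move
            weights, best = candidate, cand_score
    return weights, best
```

Because only non-worsening moves are accepted, the objective is monotonically non-decreasing, though (as with any local search) the result may be a local rather than global optimum.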
15.
Controlled Vocabularies in the Digital Age: Are They Still Relevant? Baker, William. 08 1900.
Keyword searching and controlled vocabularies such as Library of Congress Subject Headings (LCSH) proved to work well together in automated technologies, and the two systems have been considered complementary. When the Internet burst onto the information landscape, users embraced the simplicity of keyword searching of this resource, while researchers and scholars seemed unable to agree on how best to make use of controlled vocabularies in this huge database. This research examined a controlled vocabulary, LCSH, in the context of keyword searching of a full-text database. The Internet and probably its most-used search engine, Google, seem to have set a standard that users have embraced: a keyword-searchable single search box on an uncluttered web page. Libraries have even introduced federated single search boxes on their web pages, another testimony to the influence of Google.

UNT's thesis and dissertation digital database was used to compile quantitative data, with the results entered into an Excel spreadsheet. Both LCSH terms and author-assigned keywords were analyzed within selected dissertations, and the two systems were compared. When the LCSH terms from the dissertations were quantified, the results showed that of the 788 words contained in the 207 LCSH terms assigned to 70 dissertations, 246, or 31%, did not appear in the title or abstract, while only 8, or about 1% of the 788, did not appear in the full text. When the author-assigned keywords were quantified, the results showed that of the 552 words from 304 author-assigned keywords in 86 dissertations, 50, or 9%, did not appear in the title or abstract, while only one word of the 552, or 0.18%, did not appear in the full text. Qualitatively, the LCSH terms showed a hierarchical construction clearly designed for a print card catalog and seemingly unnecessary in a random-access digital environment.

While the author-assigned keywords were important words and phrases, they often appeared in the title, metadata, and full text of the dissertation, making them seemingly unnecessary in a keyword search environment, as they added no additional access points. Authors cited in this research have tended to agree that controlled vocabularies such as LCSH are complicated to develop and implement and expensive to maintain. Most researchers have also tended to agree that LCSH needs to be simplified for large, full-text databases such as the Internet. Some researchers have also called for a form of automation that seamlessly links LCSH to subject terms in a keyword search. This research tends to confirm that LCSH could benefit from simplification as well as automation, and it offers suggestions for improvements in both areas.
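The word-level tallies this abstract describes (how many subject-term words fail to appear in the title/abstract versus the full text) can be sketched with a short routine like the one below. Note the assumptions: this sketch counts distinct lowercased words rather than reproducing the study's exact counting rules, and the sample terms and texts in the usage example are invented for illustration, not drawn from the UNT corpus.

```python
def coverage_stats(terms, title_abstract, full_text):
    """Tally which distinct words from a list of subject terms are missing
    from (a) the title+abstract and (b) the full text (illustrative sketch;
    the study's own counting rules may differ)."""
    # Split each multi-word subject term into its component words.
    words = {w.lower() for term in terms for w in term.split()}
    ta_words = {w.lower() for w in title_abstract.split()}
    ft_words = {w.lower() for w in full_text.split()}
    missing_ta = words - ta_words
    missing_ft = words - ft_words
    return {
        "total_words": len(words),
        "missing_in_title_abstract": len(missing_ta),
        "pct_missing_title_abstract": round(100 * len(missing_ta) / len(words), 1),
        "missing_in_full_text": len(missing_ft),
    }
```

Run over a real corpus, percentages of this shape would correspond to the 31%-versus-1% (LCSH) and 9%-versus-0.18% (author keywords) contrasts reported above.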