Evaluating User Feedback Systems

Menard, Jr., Kevin Joseph, 04 May 2006
The increasing reliance of people on computers for daily tasks has produced a vast number of digital documents. Search engines were once luxury tools for quickly scanning a set of documents, but they are now becoming the only practical way to navigate this sea of information. Traditionally, search engine results are ranked by a mathematical formula that estimates document relevance to a search phrase. Often, however, what a user deems relevant and what a search engine computes as relevant are not the same. User feedback regarding the utility of a search result can be collected in order to refine query results. Additionally, user feedback can be used to identify queries that lack high-quality search results; a content author can then further develop existing content or create new content to improve those results. The most straightforward way of collecting user feedback is to add a graphical user interface component to the search interface that asks the user how much he or she liked a search result. However, if the feedback mechanism requires the user to provide feedback before he or she can continue searching, the user may become annoyed and provide incorrect feedback values out of spite. Conversely, if the feedback mechanism does not require the user to provide feedback at all, the overall amount of collected feedback is diminished, as many users will not expend the effort required to give it. This research focused on the collection of explicit user feedback in both mandatory (a user must give feedback) and voluntary (a user may give feedback) scenarios. The collected data were used to train a set of decision tree classifiers that provide user satisfaction values as a function of implicit user behavior and a set of search terms. The results of our study indicate that a more accurate classifier can be built from explicit data collected in a voluntary scenario. Given a limited search domain, the classification accuracy can be further improved.
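
As a rough illustration of the classification step described in the abstract, the sketch below trains a single decision tree on invented implicit-behavior signals (dwell time, click rank, scroll count) combined with bag-of-words features from the search terms, and uses it to predict a satisfaction label. The feature set, example data, and scikit-learn tooling are assumptions made for illustration only; the thesis's actual features, classifiers, and evaluation are not reproduced here.

```python
# Minimal sketch, assuming hypothetical implicit-behavior features and
# scikit-learn tooling; the thesis's actual feature set and classifier
# construction are not specified here and may differ.
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

# Invented implicit-behavior features per search-result interaction:
# [dwell time in seconds, rank of the clicked result, number of scroll events]
behavior = np.array([
    [45.0, 1, 3],
    [2.5, 4, 0],
    [120.0, 2, 7],
    [8.0, 3, 1],
])

# The search terms that produced each interaction.
queries = [
    "course registration deadline",
    "parking permit application",
    "thesis submission format",
    "parking permit application",
]

# Explicit satisfaction labels collected from the feedback widget (1 = satisfied).
labels = np.array([1, 0, 1, 0])

# Encode the search terms as bag-of-words counts and append the behavior features.
vectorizer = CountVectorizer()
term_features = vectorizer.fit_transform(queries)
X = hstack([term_features, behavior]).tocsr()

# Fit a shallow decision tree mapping (search terms, behavior) -> satisfaction.
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X, labels)

# Predict satisfaction for a new, unlabeled interaction.
new_terms = vectorizer.transform(["thesis submission deadline"])
new_behavior = np.array([[30.0, 2, 2]])
print(clf.predict(hstack([new_terms, new_behavior]).tocsr()))
```

A shallow tree is used here because its learned rules stay readable, which suits a setting where one wants to inspect why an interaction was classified as unsatisfying.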
