Bibliomining for Automated Collection Development in a Digital Library Setting: Using Data Mining to Discover Web-Based Scholarly Research Works

Based on Nicholson's 2000 University of North Texas dissertation, "Creating a Criterion-Based Information Agent Through Data Mining for Automated Identification of Scholarly Research on the World Wide Web," available at http://scottnicholson.com/scholastic/finaldiss.doc

This research creates an intelligent agent for automated collection development in a digital library setting. It uses a predictive model based on facets of each Web page to select scholarly works. The criteria came from the academic library selection literature, and a Delphi study was used to refine the list to 41 criteria. A Perl program was designed to analyze a Web page for each criterion and was applied to a large collection of scholarly and non-scholarly Web pages. Bibliomining, or data mining for libraries, was then used to create different classification models. Four techniques were used: logistic regression, non-parametric discriminant analysis, classification trees, and neural networks. Accuracy and return were used to judge the effectiveness of each model on test datasets. In addition, a set of problematic pages that were difficult to classify because of their similarity to scholarly research was gathered and classified using the models.
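The pipeline described above (score a page against a criterion list, then evaluate a classifier by accuracy and return) can be sketched as follows. This is a minimal illustration, not Nicholson's Perl program: the three criteria shown are hypothetical stand-ins for the dissertation's 41, and "return" is interpreted as recall over the scholarly class.

```python
import re

# Hypothetical criteria, illustrative stand-ins for the 41 selection
# criteria refined by the Delphi study (not taken from the dissertation).
CRITERIA = {
    "has_abstract": lambda text: "abstract" in text.lower(),
    "has_references": lambda text: "references" in text.lower()
                                   or "bibliography" in text.lower(),
    "has_citation_years": lambda text: bool(re.search(r"\((?:19|20)\d{2}\)", text)),
}

def extract_features(page_text):
    """Score a page against each criterion, yielding a 0/1 feature vector
    suitable as input to any of the four modeling techniques."""
    return {name: int(check(page_text)) for name, check in CRITERIA.items()}

def accuracy_and_return(y_true, y_pred):
    """Accuracy = fraction of all pages classified correctly.
    Return (recall) = fraction of truly scholarly pages (label 1)
    that the model actually retrieved."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    scholarly_preds = [p for t, p in zip(y_true, y_pred) if t == 1]
    accuracy = correct / len(y_true)
    ret = sum(scholarly_preds) / len(scholarly_preds) if scholarly_preds else 0.0
    return accuracy, ret
```

For example, a page containing an abstract, a references section, and parenthetical citation years would score 1 on all three criteria, and a model that retrieves one of two truly scholarly pages while rejecting both non-scholarly ones scores 0.75 accuracy and 0.5 return.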

The resulting models could be used in the selection process to automatically create a digital library of Web-based scholarly research works. In addition, the technique can be extended to create a digital library of any type of structured electronic information.

Identifier: oai:union.ndltd.org:arizona.edu/oai:arizona.openrepository.com:10150/106521
Date: 12 1900
Creators: Nicholson, Scott
Source Sets: University of Arizona
Language: English
Detected Language: English
Type: Journal (On-line/Unpaginated)
