Have you ever searched for something on the web and been overloaded with irrelevant results? Many search engines cast a very wide net and rely on ranking to show you the relevant results first, but this doesn't always work. Perhaps irrelevant results could be reduced if we eliminated the unimportant content from each webpage while indexing. Instead of casting a wide net, maybe we can make the net smarter. Here, I investigate the feasibility of using automated document summarization and clustering to do just that. The results indicate that such methods can make search engines more precise, more efficient, and faster, but not without costs.
McAnulty College and Graduate School of Liberal Arts / Computational Mathematics / MS / Thesis
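As a rough illustration of the idea described in the abstract (not the thesis's actual pipeline), the Python sketch below summarizes each page with a naive average-term-frequency heuristic and builds an inverted index over the summary sentences only, so low-value content such as footers and widget text never enters the index. The page texts, URLs, and the `summarize`/`build_index` helpers are all hypothetical.

```python
# Toy sketch: index summaries instead of full pages.
# The summarizer is a naive average-term-frequency heuristic used purely
# for illustration; it is not the method evaluated in the thesis.

import re
from collections import Counter, defaultdict


def split_sentences(text):
    """Very naive sentence splitter (real systems use a proper tokenizer)."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def tokenize(sentence):
    return re.findall(r"[a-z']+", sentence.lower())


def summarize(text, keep=2):
    """Return the `keep` sentences with the highest average term frequency."""
    sentences = split_sentences(text)
    freqs = Counter(tok for s in sentences for tok in tokenize(s))

    def score(sentence):
        toks = tokenize(sentence)
        return sum(freqs[t] for t in toks) / max(len(toks), 1)

    return sorted(sentences, key=score, reverse=True)[:keep]


def build_index(pages):
    """Inverted index over summary sentences only, not the full page text."""
    index = defaultdict(set)
    for url, text in pages.items():
        for sentence in summarize(text):
            for term in tokenize(sentence):
                index[term].add(url)
    return index


if __name__ == "__main__":
    # Hypothetical pages: each ends with low-value boilerplate that the
    # summarization step is meant to keep out of the index.
    pages = {
        "example.com/a": ("Search engines rank pages. "
                          "Good ranking shows relevant pages first. "
                          "This page also has a tiny footer note."),
        "example.com/b": ("Cats need regular food. "
                          "Healthy cats sleep a lot. "
                          "Login and share buttons appear here."),
    }
    index = build_index(pages)
    print(sorted(index.get("cats", set())))    # ['example.com/b']
    print(sorted(index.get("footer", set())))  # []  (pruned before indexing)
```

The trade-off the abstract points to shows up even in this toy version: the index is smaller and queries skip boilerplate, but anything the summarizer discards becomes unfindable.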
Identifier | oai:union.ndltd.org:DUQUESNE/oai:digital.library.duq.edu:etd/154101 |
Date | 23 March 2012 |
Creators | Cotter, Steven |
Contributors | Patrick Juola, John Kern, Donald Simon |
Source Sets | Duquesne University |
Detected Language | English |
Rights | Worldwide Access |