
Expertise classification: Collaborative classification vs. automatic extraction

Social classification is the process in which a community of users categorizes the resources in that community for their own use. Given enough users and categorization, this leads to any given resource being represented by a set of labels or descriptors shared throughout the community (Mathes, 2004). Social classification has become an extremely popular way of structuring online communities in recent years. Well-known examples of such communities are the bookmarking websites Furl (http://www.furl.net/) and del.icio.us (http://del.icio.us/), and Flickr (http://www.flickr.com/), where users can post their own photos and tag them.

Social classification, however, is not limited to tagging resources: another possibility is to tag people, examples of which are Consumating (http://www.consumating.com/), a collaborative tag-based personals website, and Kevo (http://www.kevo.com/), a website that lets users tag and contribute media and information on celebrities.

Another application of people tagging is expertise classification, an emerging subfield of social classification. Here, members of a group or community are classified and ranked based on the expertise they possess on a particular topic. Expertise classification essentially comprises two components: expertise tagging and expert ranking. Expertise tagging focuses on describing one person at a time by assigning tags that capture that person's topical expertise, such as 'speech recognition' or 'small-world networks'. Expert ranking, by contrast, ranks community members with respect to a specific information request, such as a query submitted to a search engine. Methods are developed to combine the information about individual members' expertise (tags) into on-the-fly, query-driven rankings of community members.
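
The minimal sketch below, which is not the authors' implementation, illustrates how these two components might fit together under simple assumptions: each member is represented by a hand-assigned set of expertise tags, and the ranking is a plain tag-overlap count against the query terms. The member names, tags, and scoring are invented for illustration.

```python
# Hypothetical community: each member carries a set of expertise tags
# (expertise tagging); a query is then matched against those tags to
# produce an on-the-fly ranking (expert ranking).
expertise_tags = {
    "alice": {"speech recognition", "language modeling"},
    "bob": {"small-world networks", "graph theory"},
    "carol": {"speech recognition", "small-world networks"},
}

def rank_experts(query_terms, tags=expertise_tags):
    """Rank members by the number of expertise tags matching the query."""
    scores = {member: len(member_tags & query_terms)
              for member, member_tags in tags.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(rank_experts({"speech recognition"}))
# [('alice', 1), ('carol', 1), ('bob', 0)]
```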

Expertise classification can be done in two principal ways. The simplest option follows the principle of social bookmarking websites: members are asked to supply tags that describe their own expertise and to rank the other community members with regard to a specific request for information. Alternatively, automatic expertise classification extracts expertise terms from a user's documents and e-mails by looking for terms that are representative of that user. These terms are then matched against the information request to produce an expert ranking of all community members. In this paper we describe such an automatic method of expertise classification and evaluate it using human expertise classification judgments. In the next section we describe related work on expertise classification, after which we describe our automatic method of expertise classification and our evaluation of it in sections 3 and 4. Sections 5.1 and 5.2 describe our findings on expertise tagging and expert rankings, followed by discussion and our conclusions in section 6 and recommendations for future work in section 7.
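
As an illustration only, the sketch below approximates the automatic route with a TF-IDF-style scoring over each user's text to pick representative terms, followed by query matching to rank users; the extraction and ranking method actually evaluated is the one described in the paper, and the users, documents, and scoring here are assumptions.

```python
import math
from collections import Counter

# Hypothetical per-user text; in practice this would come from each
# user's documents and e-mails.
documents = {
    "alice": "speech recognition acoustic models speech decoding",
    "bob": "small-world networks graph clustering networks",
}

def representative_terms(docs, top_n=3):
    """Pick high TF-IDF terms as a crude proxy for 'representative' terms."""
    term_counts = {user: Counter(text.split()) for user, text in docs.items()}
    n_users = len(docs)
    df = Counter()
    for counts in term_counts.values():
        df.update(counts.keys())
    profiles = {}
    for user, counts in term_counts.items():
        scores = {t: tf * math.log(n_users / df[t]) for t, tf in counts.items()}
        profiles[user] = [t for t, _ in sorted(scores.items(), key=lambda x: -x[1])[:top_n]]
    return profiles

def rank_by_query(profiles, query):
    """Match query terms against each user's extracted terms and rank users."""
    query_terms = set(query.split())
    scores = {u: len(set(terms) & query_terms) for u, terms in profiles.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

profiles = representative_terms(documents)
print(rank_by_query(profiles, "speech recognition"))
```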

Identifier: oai:union.ndltd.org:arizona.edu/oai:arizona.openrepository.com:10150/105709
Date: January 2006
Creators: Bogers, Toine; Thoonen, Willem; van den Bosch, Antal
Contributors: Furner, Jonathan; Tennis, Joseph T.
Publisher: dLIST
Source Sets: University of Arizona
Language: English
Detected Language: English
Type: Conference Paper