
Explaining recommendations

Recommender systems, such as Amazon's, offer users recommendations: suggestions of items to try or buy. We propose a novel classification of reasons for including explanations in recommender systems. Our focus is on the aim of effectiveness, or decision support, and we contrast it with other metrics such as satisfaction and persuasion. In user studies, we found that people varied in the features they found important, and we composed a short list of features in two domains (movies and cameras). We then built a natural language explanation testbed system, considering these features as well as the limitations of using commercial data. This testbed was used in a series of experiments to test whether personalization of explanations affects effectiveness, persuasion and satisfaction. We chose a simple form of personalization which considers the likely constraints of a recommender system (e.g. limited meta-data related to the user) as well as brevity. In these experiments we found that:

1. Explanations help participants to make decisions compared to recommendations without explanations, as seen in a significant decrease in opt-outs in item ratings: participants were more likely to give an initial rating for an item if they were given an explanation, and the likelihood of receiving a rating increased for feature-based explanations compared to a baseline.
2. Contrary to our initial hypothesis, our method of personalization could damage effectiveness for both movies and cameras. These domains differ along two dimensions which we found affected perceived effectiveness: cost (low vs. high) and valuation type (subjective vs. objective).
3. Participants were more satisfied with feature-based than baseline explanations. If the personalization was perceived as relevant to them, then personalized feature-based explanations were preferred over non-personalized ones.
4. Satisfaction with explanations was also reflected in the proportion of opt-outs: the opt-out rate for explanations was highest in the baseline condition in all experiments. This was the case despite the different types of explanation baselines used in the two domains.
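The feature-based explanations described above can be thought of as a template filled with the features a given user cares about most. The following is a minimal sketch in Python under that assumption; the function name, user weights, and wording are illustrative inventions, not the testbed's actual design.

```python
# Hypothetical sketch: personalize an explanation by selecting the
# features a user weights most highly and filling a short template.
# Brevity is modeled by capping the number of features mentioned.

def explain(item_name, features, user_weights, top_n=2):
    """Return a short, feature-based natural language explanation.

    features:     candidate item features (e.g. from domain meta-data)
    user_weights: per-user importance scores; missing features count as 0,
                  reflecting the limited user meta-data a real system has
    top_n:        cap on mentioned features, to keep the explanation brief
    """
    ranked = sorted(features, key=lambda f: user_weights.get(f, 0.0),
                    reverse=True)
    picked = ranked[:top_n]
    return f"You might like {item_name} because of its {' and '.join(picked)}."

# Camera-domain example; for movies the features could be genre or cast.
print(explain("this camera",
              ["image quality", "battery life", "weight"],
              {"image quality": 0.9, "battery life": 0.4}))
```

A non-personalized baseline would simply drop `user_weights` and mention a fixed set of features for every user, which is the contrast the experiments above evaluate.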

Identifier: oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:509140
Date: January 2009
Creators: Tintarev, Nava
Publisher: University of Aberdeen
Source Sets: Ethos UK
Detected Language: English
Type: Electronic Thesis or Dissertation
Source: http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=59438
