1 |
A System Using Deep Learning and Fuzzy Logic to Detect Fake Yelp Reviews
Bai, Jun, 30 May 2019
No description available.
|
2 |
Detecting Fake Reviews with Machine Learning
Ferreira Uchoa, Marina, January 2018
Many individuals and businesses make decisions based on freely and easily accessible online reviews. This creates incentives for the dissemination of fake reviews, which aim to deceive readers into holding undeserved positive or negative opinions about an establishment or service. With that in mind, this work applies machine learning to detect fake online reviews in the hotel, restaurant and doctor domains. To filter these deceptive reviews, Neural Networks and Support Vector Machines are used. Both algorithms' parameters are optimized during training, and the parameters yielding the highest accuracy for each combination of data set and feature set are selected for testing. As input features for both models, unigrams, bigrams and the combination of the two are used. The advantage of the proposed approach is that the models are simple yet yield results comparable with those reported in the literature using more complex models. The highest accuracy was achieved with a Support Vector Machine using the Laplacian kernel: 82.92% for hotel, 80.83% for restaurant and 73.33% for doctor reviews.
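Below is a minimal sketch of this kind of pipeline, assuming scikit-learn: unigram+bigram count features fed to an SVM with a Laplacian kernel. The corpus, labels, and hyperparameters (`C`, the kernel's default `gamma`) are placeholders, not the thesis's actual data or tuned values.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score
from sklearn.metrics.pairwise import laplacian_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder corpus with illustrative labels: 0 = truthful, 1 = deceptive.
reviews = [
    "The room was spotless and the staff went out of their way to help.",
    "Best hotel I have ever stayed at, absolutely perfect in every way!!!",
    "Decent location but the bathroom was dated and breakfast was average.",
    "Amazing amazing amazing, everyone must book this hotel right now!!!",
]
labels = [0, 1, 0, 1]

# Unigram + bigram counts, i.e. the combined feature set described above.
vectorizer = CountVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(reviews).toarray()

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.5, stratify=labels, random_state=0
)

# SVC accepts a callable kernel; laplacian_kernel(x, y) = exp(-gamma * ||x - y||_1).
clf = SVC(kernel=laplacian_kernel, C=1.0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

On real data, `C` and the kernel's `gamma` would be tuned during training, mirroring the parameter optimization the abstract describes.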
|
3 |
DECEPTIVE REVIEW IDENTIFICATION VIA REVIEWER NETWORK REPRESENTATION LEARNING
Shih-Feng Yang, 19 December 2021
With the growth in popularity of e-commerce and mobile apps over the past decade, people rely on online reviews more than ever before when purchasing products, booking hotels, and choosing all kinds of services. Users share their opinions by posting product reviews on merchant sites or online review websites (e.g., Yelp, Amazon, TripAdvisor). Although online reviews are valuable information for people interested in products and services, many reviews are manipulated by spammers to provide untruthful information for business competition. Since deceptive reviews can damage the reputation of brands and mislead customers' buying behaviors, the identification of fake reviews has become an important topic for online merchants. Among the computational approaches proposed for fake review identification, network-based fake review analysis jointly considers information from review text, reviewer behaviors, and product information. Researchers have proposed network-based methods (e.g., metapath) on heterogeneous networks, which have shown promising results.

However, we identified two research gaps in this study: 1) We argue that previous network-based reviewer representations are not sufficient to preserve the relationships among reviewers in networks. Specifically, previous studies considered only first-order proximity, which indicates the observable connection between reviewers, but not second-order proximity, which captures the overlap between two vertices' neighborhood structures. Moreover, although previous network-based fake review studies (e.g., metapath) connect reviewers through feature nodes across heterogeneous networks, they ignored the multi-view nature of reviewers. A view is derived from a single type of proximity or relationship between nodes, which can be characterized by a set of edges; in other words, reviewers can form different networks with respect to different relationships. 2) The text embeddings of reviews in previous network-based fake review studies were not considered jointly with reviewer embeddings.

To address the first gap, we generated reviewer embeddings via MVE (Qu et al., 2017), a framework for multi-view network representation learning, and conducted spammer classification experiments to examine the effectiveness of the learned embeddings for distinguishing spammers from non-spammers. In addition, we performed unsupervised hierarchical clustering to observe the clusters of the reviewer embeddings. Our results show that clusters generated from reviewer embeddings capture the difference between spammers and non-spammers better than those generated from reviewers' features.

To fill the second gap, we proposed hybrid embeddings that combine review text embeddings with reviewer embeddings (i.e., the vector that represents a reviewer's characteristics, such as writing or behavioral patterns). We conducted fake review classification experiments comparing the performance of hybrid embeddings (i.e., text + reviewer) against text-only embeddings as features. Our results suggest that hybrid embeddings are more effective than text-only embeddings for fake review identification. Moreover, we compared the prediction performance of the hybrid embeddings with baseline methods and showed that our approach outperforms them in fake review identification experiments.

The contributions of this study are four-fold: 1) We adopted a multi-view representation learning approach for reviewer embedding learning and analyzed the efficacy of the embeddings for spammer classification and fake review classification. 2) We proposed a hybrid embedding that considers the characteristics of both the review text and the reviewer. Our results are promising and suggest that hybrid embeddings are highly effective for fake review identification. 3) We proposed a heuristic network construction approach that builds a user network based on user features. 4) We evaluated how different spammer thresholds impact the performance of fake review classification. Several studies have used the same datasets as this study, but most followed the spammer definition of Jindal and Liu (2008). We argue that the spammer definition should be configurable for different datasets. Our findings show that, by carefully choosing the spammer thresholds for the target datasets, hybrid embeddings achieve higher efficacy for fake review classification.
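A minimal sketch of the hybrid-embedding idea follows, assuming the reviewer embeddings (e.g., from a multi-view method such as MVE) and the review text embeddings have already been learned. All arrays, dimensions, and the logistic-regression stand-in classifier below are synthetic placeholders, not the study's data or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_reviews, d_text, d_reviewer = 200, 64, 32

text_emb = rng.normal(size=(n_reviews, d_text))          # one vector per review text
reviewer_emb = rng.normal(size=(n_reviews, d_reviewer))  # vector of the review's author
y = rng.integers(0, 2, size=n_reviews)                   # 1 = fake, 0 = genuine (synthetic)

# Hybrid embedding: concatenate the text view with the reviewer view.
X_hybrid = np.concatenate([text_emb, reviewer_emb], axis=1)

for name, X in [("text-only", text_emb), ("hybrid", X_hybrid)]:
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0
    )
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(name, "F1:", round(f1_score(y_te, clf.predict(X_te)), 3))
```

In the study itself, the reviewer vectors come from multi-view network representation learning and the text vectors from a review-text encoder; the sketch only illustrates how the concatenated (hybrid) features plug into a downstream fake-review classifier alongside a text-only baseline.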
|