Fake opinion detection: how similar are crowdsourced datasets to real data?


Fornaciari, T., Cagnina, L., Rosso, P. et al. Fake opinion detection: how similar are crowdsourced datasets to real data? Lang Resources & Evaluation 54, 1019–1058 (2020). https://doi.org/10.1007/s10579-020-09486-5

Abstract
Identifying deceptive online reviews is a challenging task for Natural Language Processing (NLP). Collecting corpora for the task is difficult, because it is normally impossible to know whether a review is genuine. A common workaround is to collect (supposedly) truthful reviews online and to pair them with deceptive reviews obtained through crowdsourcing services. Models trained this way are generally successful at discriminating between ‘genuine’ online reviews and the crowdsourced deceptive reviews. It has been argued that the deceptive reviews obtained via crowdsourcing are very different from real fake reviews, but the claim has never been properly tested. In this paper, we compare (false) crowdsourced reviews with a set of ‘real’ fake reviews published online. We evaluate their degree of similarity and their usefulness in training models for the detection of untrustworthy reviews. We find that the deceptive reviews collected via crowdsourcing are significantly different from the fake reviews published online. In the case of the artificially produced deceptive texts, their domain similarity with the targets affects the models’ performance much more than their untruthfulness. This suggests that models trained on crowdsourced datasets for opinion spam detection may not be applicable to the real task of detecting deceptive reviews. As an alternative way to create large datasets for fake review detection, we propose methods based on the probabilistic annotation of unlabeled texts, relying on meta-information generally available on e-commerce sites. These methods are independent of the content of the reviews and make it possible to train reliable models for the detection of fake reviews.
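To make the idea of probabilistic annotation from meta-information concrete, the following is a minimal sketch of how unlabeled reviews might be assigned pseudo-labels from reviewer metadata alone. The specific cues (verified purchase, reviewer review count, rating) and the numeric weights and threshold are illustrative assumptions for the example, not the scheme used in the paper:

```python
# Sketch: probabilistic labeling of unlabeled reviews from reviewer
# metadata, independent of review content. Cues and weights below are
# illustrative assumptions, not the authors' exact annotation scheme.

def fake_probability(meta):
    """Combine simple metadata cues into a pseudo-probability that a
    review is fake; each cue nudges an uninformative prior upward."""
    p = 0.5  # uninformative prior
    if not meta.get("verified_purchase", False):
        p += 0.2   # unverified purchase: weak deception cue
    if meta.get("reviewer_review_count", 0) <= 1:
        p += 0.15  # single-review account: weak deception cue
    if meta.get("rating", 3) in (1, 5):
        p += 0.1   # extreme rating: weak deception cue
    return min(max(p, 0.0), 1.0)

def probabilistic_labels(reviews, threshold=0.7):
    """Keep only reviews whose pseudo-probability is confidently high
    (label 1, fake) or low (label 0, genuine); drop the ambiguous rest."""
    labeled = []
    for text, meta in reviews:
        p = fake_probability(meta)
        if p >= threshold:
            labeled.append((text, 1, p))
        elif p <= 1.0 - threshold:
            labeled.append((text, 0, p))
    return labeled

reviews = [
    ("Best product ever!!!", {"verified_purchase": False,
                              "reviewer_review_count": 1, "rating": 5}),
    ("Sturdy, works as described.", {"verified_purchase": True,
                                     "reviewer_review_count": 42, "rating": 4}),
]
print(probabilistic_labels(reviews))
```

The confidently labeled texts can then be used to train an ordinary text classifier, while the discarded middle band limits label noise, at the cost of a smaller training set.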