
An Experimental Study to Understand User Experience and Perception Bias Occurred by Fact-checking Messages

Published: 03 June 2021

Abstract

Fact-checking has become the de facto solution for fighting fake news online. This research draws attention to the unexpected and diminished effects of fact-checking caused by cognitive biases. We conducted an experiment (66,870 decisions) comparing changes in users' stances toward unproven claims before and after they were presented with a hypothetical fact-checked condition. We found, first, that claims tagged with a 'Lack of Evidence' label are perceived similarly to false information, unlike claims with other borderline labels, indicating an uncertainty-aversion bias in response to insufficient information. Second, users who initially disapprove of a claim are less likely to correct their views after seeing an opposing fact-checking label than users who initially approve of the same claim, an indication of disapproval bias. Finally, user interviews revealed that, among borderline messages, users are more likely to share claims labeled with Divided Evidence than those labeled with Lack of Evidence, reaffirming the presence of the uncertainty-aversion bias. On average, we confirm that fact-checking helps users correct their views and reduces the circulation of falsehoods by leading users to abandon extreme views. At the same time, the presence of these two biases shows that fact-checking does not always elicit the desired user experience, and that the outcome varies with the design of fact-checking messages and with people's initial views. These observations have direct implications for multiple stakeholders, including platforms, policy-makers, and online users.


Published In

WWW '21: Proceedings of the Web Conference 2021
April 2021
4054 pages
ISBN: 9781450383127
DOI: 10.1145/3442381

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Social media
  2. disapproval bias
  3. fact-checking
  4. perception
  5. risk-aversion
  6. uncertainty-aversion

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

WWW '21
Sponsor:
WWW '21: The Web Conference 2021
April 19 - 23, 2021
Ljubljana, Slovenia

Acceptance Rates

Overall Acceptance Rate 1,899 of 8,196 submissions, 23%
