Abstract
During the COVID-19 pandemic, a great deal of fake news emerged in the medical field (Naeem et al., 2020: 1). Nowadays, computers can generate text that is considered more trustworthy than text written by a person (Zellers et al., 2019). This means that even laypeople are able to produce disinformation, although they may not understand the implications. This study identified the most reliable clues for spotting machine-generated writing. While natural-language processing (NLP) research tends to focus on L1 speakers, studies in second language acquisition demonstrate that L1 and L2 speakers attend to different aspects of English (Scarcella, 1984; Tsang, 2017). In this study, social media users completed a Turing-test-style quiz (Saygin et al., 2000), guessing whether news excerpts were machine generated or human written and identifying the errors that guided their decisions. Quantitative analysis revealed that although both L1 and L2 speakers were equally able to defend themselves against machine-generated fake news, L2 participants were more sceptical, labelling more human-written texts as machine generated. This may reflect concern about the stigma of being fooled by a machine because of lower language proficiency. However, factual errors and internal contradictions were the most reliable indicators of machine writing for both groups. This emphasises the importance of fact-checking at a time when news articles prioritise exaggerated headlines and NLP tools enable the production of popular content in areas such as medicine.