Lies in Disguise

Semester 2, academic year 2023/2024

By Paul Ballot

Advancements in generative Artificial Intelligence (AI) and the emergence of Large Language Models (LLMs) are fuelling fears over the rise of personalised mis- and disinformation on an industrial scale. However, this potential for the mass production of synthetic “fake news” may not be the only cause for concern: recent findings indicate that AI-generated disinformation may also be more difficult for human raters, and possibly even automated classifiers, to detect. This could result from the prevalence of news content in the training data, which allows LLMs to imitate linguistic patterns attributed to actual news while retaining the semantics of misinformation. Hence, a key contribution of this paper is its attempt to understand why AI-generated disinformation may possess greater credibility than its conventional counterpart and whether this, in turn, could reduce the effectiveness of media literacy interventions against disinformation. To test these hypotheses, we analyse to what degree synthetic disinformation resembles traditional news and human-authored disinformation across a range of linguistic features. In addition, we run an experiment to evaluate whether generic inoculation remains effective in increasing accuracy for synthetic disinformation. Synthesising insights from these methods, we hope to illuminate the risks associated with LLMs while showcasing the potential of combining computational content analysis and experimentation.
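
As an illustration of the kind of computational content analysis alluded to above, the following Python sketch computes a few simple surface-level linguistic features (average sentence length, type-token ratio, average word length) for two short example texts. The feature set, the helper name linguistic_features, and the example strings are illustrative assumptions, not the features or materials actually used in the study.

```python
import re
from statistics import mean

def linguistic_features(text: str) -> dict:
    """Compute a few simple surface-level features of a text (illustrative only)."""
    # Split into rough sentences and lowercase word tokens.
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "avg_sentence_length": mean(len(re.findall(r"[A-Za-z']+", s)) for s in sentences),
        "type_token_ratio": len(set(tokens)) / len(tokens),
        "avg_word_length": mean(len(t) for t in tokens),
    }

# Hypothetical usage: compare a synthetic item with a human-authored one.
synthetic = "Officials quietly confirmed the report. Independent experts remain unconvinced."
human_authored = "They said it was all confirmed but honestly nobody really believes that stuff anymore."

print(linguistic_features(synthetic))
print(linguistic_features(human_authored))
```

The sketch relies only on the standard library so it stays self-contained; in practice, a richer feature set (e.g. readability indices or part-of-speech distributions) would likely be extracted with dedicated NLP tooling.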