Semester 2, academic year 2024/2025
By Xinkangrui Gao
Cancer-related conspiracy beliefs can delay timely treatment and erode trust between patients and healthcare providers. This study investigates how artificial intelligence (AI) chatbots can be used to counter such beliefs. It addresses two key research questions: (1) Are chatbots more effective than traditional health websites in reducing individuals’ conspiracy beliefs about cancer? and (2) How does the expression of empathy by chatbots influence their persuasive effectiveness? A between-subjects experiment compares three conditions: browsing a cancer information website, interacting with an empathetic chatbot, and interacting with a non-empathetic chatbot. The chatbot is custom-built on Llama 3 and integrated with a retrieval-augmented generation (RAG) system: it generates real-time responses grounded in a locally stored knowledge base compiled by scraping cancer-related information from authoritative health websites. This design enables dynamic, content-grounded human-AI interaction and allows a controlled comparison of communication modes and emotional tone. Theoretically, the study is informed by the HAII-TIME framework and examines how chatbot empathy may function as an interface cue in human-AI interaction. Practically, it contributes to the development of scalable, evidence-based interventions that leverage AI to combat health misinformation in cancer care.
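The RAG pipeline described above can be illustrated with a minimal sketch. This is not the study's actual implementation: the knowledge-base snippets, the bag-of-words retriever, and the function names (`retrieve`, `build_prompt`) are all illustrative assumptions. A production system would use learned embeddings and pass the assembled prompt to a locally hosted Llama 3 model, but the retrieve-then-ground structure is the same.

```python
import math
from collections import Counter

# Hypothetical local knowledge base: placeholder snippets standing in
# for text scraped from authoritative cancer-information websites.
KNOWLEDGE_BASE = [
    "Chemotherapy is a standard, evidence-based cancer treatment.",
    "There is no evidence that cancer is caused by a hidden conspiracy.",
    "Early screening improves cancer survival rates significantly.",
]

def tokenize(text):
    # Lowercase and strip simple punctuation.
    return [w.strip(".,?!").lower() for w in text.split()]

def cosine_sim(a, b):
    # Bag-of-words cosine similarity between two token lists.
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank knowledge-base snippets by similarity to the user query.
    q = tokenize(query)
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: cosine_sim(q, tokenize(doc)),
                    reverse=True)
    return ranked[:k]

def build_prompt(query):
    # Ground the model's answer in the retrieved snippets; in the full
    # system this prompt would be sent to a local Llama 3 instance.
    context = "\n".join(retrieve(query, k=2))
    return f"Context:\n{context}\n\nUser question: {query}\nAnswer:"

prompt = build_prompt("Is cancer a conspiracy?")
```

Grounding each response in retrieved snippets is what keeps the chatbot's answers tied to the vetted knowledge base rather than the model's unconstrained generation, which matters when the goal is countering misinformation.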