Thesis Grants

Each semester, the Digicomlab funds several master's thesis projects by students of the (research) master's programme at the Department of Communication Science at the University of Amsterdam. Funded projects make use of or study digital methods in an innovative way. Find more information on theses funded by the lab below:

Algorithmic Ranking on YouTube

Semester 2, academic year 2023/2024

By Ellen Linnert

Despite its popularity, YouTube has gained a reputation for fostering toxic communication through its affordances (Alexander, 2018; Munn, 2020). This echoes general concerns that social media sites could hinder deliberative democratic communication processes (Pfetsch, 2020). However, the role of social media in facilitating certain discourse dynamics remains unclear. Algorithms that have the power to structure discourse on a platform and to encourage expressive behaviors by ranking user comments (e.g., YouTube’s “top comment” algorithm) are still greatly understudied.

Specifically, divisive expressions – outraged references to one's partisan out-group within a moral context (e.g., accusing the out-group of violating moral norms) – might fuel social divides. These highly emotional comments, which impose a moralization of political conflict, can obstruct productive deliberation while also being highly engaging to platform users (Brady et al., 2017, 2021; Rathje et al., 2021). This poses a potential dilemma for the algorithmic ranking of such comments. Therefore, this study investigates the role of divisive expressions in the algorithmic ranking of and engagement with user comments on YouTube news content.

To tackle the challenges of large social media datasets and natural language processing, the study employs state-of-the-art computational methods based on pre-trained large language models. This way, elements of divisive expressions can be detected automatically in large quantities of user comments on US-focused news content. By assessing the relationship between divisive expressions and the algorithmic ranking of user comments at scale, this study contributes to the understanding of algorithmic affordances and public discourse online.
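
As a rough illustration of how such detection could work, the sketch below scores comments against divisiveness-related labels with an off-the-shelf zero-shot classifier from the Hugging Face transformers library. The model choice, label set, and example comments are illustrative assumptions, not the study's actual pipeline.

```python
# Hedged sketch: zero-shot detection of divisive-expression elements in comments.
# Model and labels are placeholders, not the thesis's actual classifier.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

candidate_labels = [
    "outrage toward a political out-group",
    "accusation of violating moral norms",
    "neutral comment",
]

comments = [
    "These people have no shame, they are destroying everything we stand for.",
    "Thanks for the clear explanation of the new bill.",
]

for comment in comments:
    result = classifier(comment, candidate_labels, multi_label=True)
    print(comment)
    for label, score in zip(result["labels"], result["scores"]):
        print(f"  {label}: {score:.2f}")
```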

Behavior Matters

Semester 2, academic year 2023/2024

By Yuyao Lu

Climate researchers, positioned as key communicators on climate change, face the challenge of promoting climate action among the public. With billions of people engaging on social media every day, this study uses these platforms as a channel for climate researchers to shape public perceptions of, and actions towards, climate change. Leveraging the popularity of short-form videos, the study aims to create a novel online intervention illustrating climate researchers' personal behaviors (climate-friendly vs. climate-unfriendly) and to substantiate its impact through empirical evidence.

Our central research question probes the extent to which climate researchers' behavior influences the public's efficacy beliefs, perceptions of climate urgency, and subsequently their pro-environmental actions. Considering the role of algorithmic recommendation systems in shaping repetitive media consumption, we are particularly interested in the effects of repeated exposure rather than a one-time intervention. Taken together, we employ a longitudinal experiment with repeated exposure to climate researchers' behavior, in combination with a between-subjects design. This approach allows for an in-depth investigation into changes in public perceptions and behaviors over time.

Beyond empirical insights, this study seeks to extend behavior change theories, enhancing their applicability to the collective challenge of climate change. Additionally, it aims to develop a robust climate communication strategy with practical implications for science communication. The ultimate goal is to contribute both theoretically and practically, providing valuable insights for effective climate communication strategies.

Candidate Visual Portrayals in the 2020 US Election on YouTube

Semester 2, academic year 2023/2024

By Eoghan O’Neill

Constructing an image that resonates with voters is a major focus for any modern political campaign. A recent surge of visual communication research reflects this. The bulk of these studies have considered how candidates visually frame themselves on their personal social media accounts. The sustained importance of the broader news media, however, means that candidates can never have complete control over their image. The image production process consists of multiple stages within which editors, journalists, and other news producers attempt to influence how the public sees a candidate. With the expansion of digital media, the range of these news producers has diversified, but the dynamics of visual framing in this new, diversified context are not well understood. This paper therefore explores the visual framing of candidates during the 2020 US election on YouTube. As a major source of news for US adults, YouTube is a fitting place to investigate this question, since it hosts a large community of both independent and mainstream news producers.

Just as shifts in the campaign and media environment require us to pose new questions, so too do they require new methodologies. To adequately process the large amount of visual data posted to YouTube during an election, this paper takes an automated content analysis approach. First, representative frames from videos will be analysed for the presence of the two candidates using a Python facial recognition library. Then, the manifest elements of visual frames will be analysed using the Contrastive Language-Image Pre-training (CLIP) model.
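
A minimal sketch of this two-step pipeline is given below, assuming the face_recognition package for candidate detection and the Hugging Face implementation of CLIP for scoring manifest frame content; the file names, reference image, and label prompts are placeholders rather than the thesis's actual setup.

```python
# Hedged sketch of the two-step visual pipeline: (1) is a known candidate in the
# frame? (2) which manifest visual frame does the image best match?
import face_recognition
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Step 1: face recognition against a reference photo of a candidate (placeholder files)
reference = face_recognition.load_image_file("candidate_reference.jpg")
known_encoding = face_recognition.face_encodings(reference)[0]
frame = face_recognition.load_image_file("video_frame.jpg")
matches = [face_recognition.compare_faces([known_encoding], enc)[0]
           for enc in face_recognition.face_encodings(frame)]
candidate_present = any(matches)

# Step 2: score the frame against textual descriptions of visual frames with CLIP
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
labels = ["a candidate speaking at a rally",
          "a candidate in a formal debate setting",
          "a candidate greeting supporters"]
inputs = processor(text=labels, images=Image.open("video_frame.jpg"),
                   return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=1)

print(candidate_present, dict(zip(labels, probs[0].tolist())))
```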

Feeling Understood by AI

Semester 2, academic year 2023/2024

By Nele Pralat

As AI applications such as ChatGPT or MetaAI continue to advance and integrate into various online spaces, their touchpoints with daily life are growing, with some AI systems offering advice, purchase recommendations, and more. Yet, in proportion to AI's growing presence, there is relatively little research on the implications of our interactions with AI. How we engage with AI as a social actor, and particularly with AI as an empathetic entity, remains largely unexplored. This is especially relevant considering the potential of AI not just as a tool but as an entity capable of influencing trust and decision-making, one that is becoming increasingly difficult to distinguish from human interaction partners.

Displays of empathy have been found to significantly influence trust in interpersonal relationships, whilst trust has been found to positively influence individuals' patronage intentions. This project explores whether this effect translates into the realm of human-computer communication by examining the impact of an empathetic AI on patronage intentions toward it, mediated by trust in the AI's benevolence and competence.

The project uses two custom AI chatbot interactions, which are created by means of GPT-4 and trained to communicate in either a highly empathetic or non-empathetic manner. Previous studies on conversational agents primarily focused on simulating AI chatbot interactions using basic, pre-scripted dialogues. This research innovates by leveraging advanced machine learning to actively generate empathetic and rational responses, moving beyond mere simulations to actual AI-driven conversations.
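
As one illustrative way to set up the two conditions (the project's actual prompts and configuration are not described here, so the system instructions below are placeholders), the same GPT-4 model can be steered toward an empathetic or a non-empathetic style via its system prompt using the OpenAI Python SDK:

```python
# Hedged sketch: one GPT-4 backbone, two experimental conditions differing only
# in the system prompt. Prompt wording is a placeholder, not the study's text.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPTS = {
    "empathetic": ("You are a warm, understanding assistant. Acknowledge the "
                   "user's feelings before offering advice or recommendations."),
    "non_empathetic": ("You are a strictly factual assistant. Give concise, "
                       "neutral advice without any emotional language."),
}

def chatbot_reply(condition: str, user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPTS[condition]},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(chatbot_reply("empathetic", "I can't decide which laptop to buy."))
```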

Give An Option, or Give A Solution

Semester 2, academic year 2023/2024

By Yilan Wang

Conversational agents, built on advanced Artificial Intelligence and Natural Language Processing technologies, are becoming increasingly accessible and prevalent in everyday life. Over the past few years they have made remarkable progress, which has opened up promising prospects for online marketing practices because it satisfies the consumer need to be involved in conversational marketing activities.

Although the field of developing and deploying conversational agents is growing rapidly, there is little communication research examining the relevant media effects. This research aims to provide insights into the persuasive effects of conversational agents from the perspectives of consumer psychology and marketing practice. In particular, it tests how distinct types of decisional guidance provided by a conversational agent (suggestive guidance: providing direct advice about purchase decisions vs. informative guidance: providing only pertinent product information) influence consumers' purchase intentions. Additionally, the research investigates the moderating role of consumer scepticism in this persuasion process.

The research is conducted as a lab experiment with a 2x2 between-subjects factorial design in the university lab, with participants recruited from students at the University of Amsterdam. Participants are instructed to interact with one of two chatbots in a scenario of exploring restaurants worth visiting in Amsterdam; this is innovative because the interactivity adds realism compared with purely scenario-based designs. The chatbots are trained on the same predetermined datasets and share the same code; the experimental conditions differ only in the text output (i.e., whether an explicit recommendation is provided).

Lies in Disguise

Semester 2, academic year 2023/2024

By Paul Ballot

Advancements in generative Artificial Intelligence (AI) and the emergence of Large Language Models (LLMs) are fuelling fears over the rise of personalised mis- and disinformation on an industrial scale. However, this potential for the mass production of synthetic “fake news” may not be the only cause for concern: recent findings indicate that AI-generated disinformation could also be more difficult to detect for human raters and possibly even for automated classifiers. This could result from the prevalence of news content in the training data, allowing LLMs to imitate linguistic patterns attributed to actual news while maintaining the semantics of misinformation.

Hence, a key contribution of this paper is its attempt to understand why AI-generated disinformation possesses greater credibility than its conventional counterpart and whether this, in turn, could reduce the effectiveness of media literacy interventions against disinformation. To test these hypotheses, we analyse to what degree synthetic disinformation resembles traditional news and human-authored disinformation with respect to various linguistic features. Furthermore, in an experiment, we evaluate whether generic inoculation remains effective in increasing accuracy judgments for synthetic disinformation. Synthesising insights from these methods, we hope to illuminate the risks associated with LLMs while showcasing the potential of combining computational content analysis and experimentation.
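
To give a concrete, hedged flavour of the linguistic-feature comparison, the sketch below computes a few stylistic measures per document and contrasts their averages across the three text sources; the chosen features (readability, sentence length, lexical diversity), the textstat dependency, and the example snippets are illustrative assumptions, not the study's actual feature set or data.

```python
# Hedged sketch: compare simple linguistic features across text sources.
# Features and example texts are placeholders for illustration only.
import statistics
import textstat

def linguistic_features(text: str) -> dict:
    tokens = text.split()
    sentences = max(textstat.sentence_count(text), 1)
    return {
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        "avg_sentence_length": len(tokens) / sentences,
        "type_token_ratio": len({t.lower() for t in tokens}) / max(len(tokens), 1),
    }

corpora = {  # placeholder documents standing in for the three corpora
    "traditional_news": ["The committee approved the budget after a lengthy debate."],
    "human_disinformation": ["They are hiding the truth about the vaccine from you!"],
    "synthetic_disinformation": ["Officials quietly confirmed the report, sources say."],
}

for source, texts in corpora.items():
    per_text = [linguistic_features(t) for t in texts]
    means = {k: statistics.mean(d[k] for d in per_text) for k in per_text[0]}
    print(source, means)
```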

Reshaping Cultural Hierarchy

Semester 2, academic year 2023/2024

By Xinfeng Gu

This study aims to explore how two emerging types of platform influence cultural hierarchies in the digital age. First, streaming media platforms have become increasingly important in people's lives given their unprecedented capacity to popularize and globalize cultural products, yet debate remains about their role as a democratizing force within the cultural sphere. Second, the rise of social media platforms has challenged the elite discourse historically upheld by elite journalism in the field of culture, although the extent and direction of this shift are difficult to quantify. These developments prompt an inquiry into how the discourse of journalism and of social media platforms has changed over the last two decades, and how the widening or narrowing of this discursive gap, taken as an indicator of cultural hierarchy, relates to streaming media platforms.

In this study, an automated content analysis will be conducted on music reviews from arts journalism and an online forum. Utilizing Concept Mover's Distance (CMD) to gauge concept engagement, we will measure the discourse of each music review along two conceptual dimensions, legitimacy and gender, in a word-embedding space. To validate the accuracy of the approach in measuring these concepts, human coders will manually code a portion of the data. Upon validation, CMD will be computed for the entire dataset. Subsequently, the relationship between characteristics at the news agency and music streaming platform levels and the discourse gap in legitimacy and gender will be examined using linear regression and multilevel regression analyses.
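
As a hedged sketch of the CMD step, one way to approximate it in Python is to treat a concept as a short pseudo-document of anchor words and compute the Word Mover's Distance between each review and that pseudo-document in a pre-trained embedding space. The embedding model and anchor words below are illustrative, and the thesis may rely on a different implementation (for instance, the text2map approach in R).

```python
# Hedged sketch: approximate Concept Mover's Distance as the Word Mover's
# Distance between a review and a pseudo-document of concept anchor words.
# Embedding model and anchor words are placeholders. WMD requires the POT package.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pre-trained word embeddings

def concept_engagement(review_tokens, concept_words):
    # Smaller distance = the review engages more closely with the concept
    return vectors.wmdistance(review_tokens, concept_words)

review = "the album is a serious and ambitious work of genuine artistry".split()
legitimacy_anchors = ["artistic", "serious", "masterpiece"]
print(concept_engagement(review, legitimacy_anchors))
```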

Visual News Values

Semester 1, academic year 2023/2024

By Bruno Nadalic Sotic

This thesis explores how visual elements within news headlines influence audience engagement. It specifically investigates the presence of news values in images and their impact on user interactions with news articles, a topic previously understudied in communication science. The research utilizes a comprehensive dataset of user behavior logs from the Microsoft News platform, encompassing both click and non-click activities, to shed light on the dynamics of news consumption.

The study employs machine vision techniques via commercially available APIs. These are used to systematically convert visual elements of news headline imagery into textual representations. This approach allows for a detailed analysis of ‘image features’—such as the presence of notable figures, emotional expressions, or unusual scenes—and their alignment with traditional news values like prominence, novelty, and emotional impact. By doing so, this study attempts to validate to what extent we can measure news values using automated visual content analysis.
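
The abstract does not name the specific APIs, so the snippet below is just one possible illustration: using Google Cloud Vision to turn a headline image into textual labels and face-emotion annotations that could later be mapped onto news values such as prominence or emotional impact.

```python
# Hedged sketch: one commercially available vision API (Google Cloud Vision)
# turning a headline image into textual descriptors. The specific API and the
# mapping to news values are assumptions for illustration.
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # requires Google Cloud credentials

with open("headline_image.jpg", "rb") as f:  # placeholder image path
    image = vision.Image(content=f.read())

labels = client.label_detection(image=image).label_annotations
faces = client.face_detection(image=image).face_annotations

# e.g. scene labels ("crowd", "protest") as cues for novelty or conflict
print([(label.description, round(label.score, 2)) for label in labels])
# e.g. joy likelihood of detected faces as a cue for emotional impact
print([face.joy_likelihood for face in faces])
```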

The study applies a combination of supervised and unsupervised machine learning methods to correlate these image features with news value factors, and subsequently to quantify their influence on user engagement metrics. This approach not only provides a measurable framework for understanding how visuals affect news engagement but also extends news value theory to the visual domain.