Facts to you, opinions to me

2023

By Zilin Lin, Susan Vermeer, and Anne Kroon

Machine learning is thriving in communication science. Yet solid model performance is only achievable with an ample amount of correctly annotated training data. Such input is often difficult to obtain, given the trade-off between quality and quantity within a reasonable research budget and timeframe. In this study, we explore the potential of crowdsourcing as an approach to providing accurate model input. Specifically, we investigate whether annotation biases exist and, if so, whether they are associated with annotators' individual characteristics.
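
To make the notion of annotation bias concrete, below is a minimal sketch of the kind of analysis this question implies: comparing each crowdworker's labels against a majority reference and relating the resulting deviation rate to an annotator characteristic. The data, the fact/opinion coding, and the `news_use` variable are purely hypothetical illustrations, not the study's actual materials or method.

```python
import pandas as pd

# Hypothetical annotation data: one row per (item, annotator) judgement.
annotations = pd.DataFrame({
    "item_id":   [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "annotator": ["a", "b", "c", "a", "b", "c", "a", "b", "c"],
    "label":     [1, 1, 0, 0, 0, 0, 1, 0, 1],  # e.g. 1 = "opinion", 0 = "fact"
})

# Hypothetical annotator characteristics (e.g. a 1-7 news-use scale).
annotators = pd.DataFrame({
    "annotator": ["a", "b", "c"],
    "news_use":  [5, 2, 4],
})

# Majority label per item as a rough reference point.
majority = (
    annotations.groupby("item_id")["label"]
    .agg(lambda s: s.mode().iloc[0])
    .rename("majority_label")
    .reset_index()
)

# Per-annotator deviation from the majority: a crude proxy for annotation bias.
merged = annotations.merge(majority, on="item_id")
merged["deviates"] = (merged["label"] != merged["majority_label"]).astype(int)
bias = (
    merged.groupby("annotator")["deviates"].mean()
    .rename("deviation_rate")
    .reset_index()
)

# Relate the deviation rate to the annotator characteristic.
profile = annotators.merge(bias, on="annotator")
print(profile)
print(profile[["news_use", "deviation_rate"]].corr())
```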