Quantifying and Measuring Bias and Engagement in Automated Decision-Making

Abstract

Navigating online news, media, and information has never been more difficult, despite the curatorial work of algorithmic news feeds, search engines, and other automated decision systems. This project focuses on strategies for effective fact checking as part of a broader study of Quantifying and Measuring Bias and Engagement in Automated Decision-Making systems. The project explored the following research questions:

- How do users perceive fairness, bias, or trust, and how can these perceptions be measured effectively?
- Can bias be measured by observing users interacting with search engines or intelligent assistants?
- To what extent can sensors in wearable devices and interactions inform the measurement of bias and engagement?

Addressing these questions in the context of fact checking, we developed a three-phase mixed-methods approach comprising participatory research (Phase 1), online user studies via crowdsourcing (Phase 2), and controlled lab user studies (Phase 3). This report summarises the methodology, findings, and implications of all three phases, aiming to give readers a clear understanding of the key contributions of the Quantifying and Measuring Bias and Engagement project.
