
Does Feedback on the Accuracy of Human and Algorithmic Prediction Increase Algorithm Use and Reduce Better-Than-Average Effects Over Time?

  • Amsterdam Leadership Lab, Van der Boechorststraat 7, 1081 BT Amsterdam, Netherlands

Jacob Matić

The aim of this project is to investigate whether feedback on participants’ predictive validity in a prediction task can stimulate algorithm use over time and reduce the better-than-average effect. Participants will predict the job performance of 40 applicants on the basis of the applicants’ assessment information, either with or without algorithmic advice. Afterwards, participants will receive feedback on their own predictive accuracy, feedback on other people’s accuracy, or no feedback. Two weeks later, participants will predict the job performance of another set of 40 applicants and receive algorithmic advice. We hypothesize that participants who received accuracy feedback, especially feedback on their own accuracy, will deviate less from the advice, make more accurate predictions, and become less overconfident.

We ran a pilot study (N = 258) to investigate how different presentations of predictive validity estimates affect perceptions of hiring procedures, using a 2 (holistic vs. clinical synthesis) × 2 (visual and numeric display vs. numeric display only) × 2 (correlation vs. BESD) between-subjects design. The results revealed that participants are more likely to select an algorithm after seeing that prescribed algorithms yield more accurate judgments than holistic approaches. Participants also perceived larger accuracy differences between hiring procedures when accuracy was communicated both numerically and visually rather than numerically only.
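To make the correlation vs. BESD factor concrete: a binomial effect size display recasts a validity correlation r as a difference between two hypothetical success rates, so the same effect size can look quite different depending on the presentation format. A minimal sketch of the standard BESD conversion follows; the numbers are purely illustrative and not taken from the study:

\[
p_{\text{high}} = 0.50 + \frac{r}{2}, \qquad p_{\text{low}} = 0.50 - \frac{r}{2}
\]

\[
\text{e.g., } r = .30 \;\Rightarrow\; 65\% \text{ vs. } 35\% \text{ success.}
\]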