Automated Video Interview Personality Assessments: Reliability, Validity, and Generalizability Investigations

Contributors: Louis Hickman (Wharton People Analytics Postdoctoral Researcher), Nigel Bosch, Vincent Ng, Rachel Saef, Louis Tay, and Sang Eun Woo

Organizations are increasingly adopting automated video interviews (AVIs) to screen job applicants despite a paucity of research on their reliability, validity, and generalizability. In this study, we address this gap by developing AVIs that use verbal, paraverbal, and nonverbal behaviors extracted from video interviews to assess Big Five personality traits. We developed and validated machine learning models within (using nested cross-validation) and across three separate samples of mock video interviews (total N = 1,073), and we examined their test–retest reliability in a fourth sample (N = 99). In general, the AVI personality assessments exhibited stronger evidence of validity when trained on interviewer-reports rather than self-reports. When cross-validated in the other samples, AVI personality assessments trained on interviewer-reports showed mixed evidence of reliability, exhibited consistent convergent and discriminant relations, relied on predictors that appear conceptually relevant to the focal traits, and predicted academic outcomes. In contrast, there was little evidence of reliability or validity for the AVIs trained on self-reports. We discuss the implications for future work on AVIs and personality theory, and provide practical recommendations for vendors marketing such approaches and for organizations considering adopting them.