Bot Detection in Online Studies and Experiments

Abstract

Most experimental and online studies in the empirical social sciences rely on online panels from crowdsourcing platforms, such as Amazon Mechanical Turk (MTurk), Prolific, Qualtrics Online Panel, and their lesser-known competitors. The key benefit of all of these services is easy and affordable access to a large pool of diverse participants, a privilege previously reserved for globally leading, financially well-resourced universities. However, this newly leveled playing field comes at a cost: semi- or fully automated response tools, also called bots, decrease data quality and reliability. This case describes how two online studies were conducted on a crowdsourcing platform in anticipation of bot responses. Specifically, the case offers insights into the study design process, the selection of appropriate survey questions and bot traps, and the ex-post analysis and filtering of bot responses. Best practices are identified, and potential pitfalls are explained. The description should aid readers in designing online studies and experiments that anticipate bots, thereby increasing data quality, validity, and reliability.
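To illustrate the kind of ex-post filtering the case describes, the sketch below flags bot-like responses using two common heuristics: a honeypot question (a field hidden from human participants, so any answer suggests automation) and an implausibly short completion time. The field names (`honeypot`, `duration_seconds`) and the 60-second threshold are hypothetical assumptions for illustration, not values taken from the studies themselves.

```python
# Minimal sketch of ex-post bot filtering. Assumes each response is a dict
# with a hidden "honeypot" field and a recorded completion time; both names
# and the threshold are illustrative assumptions.

MIN_SECONDS = 60  # hypothetical lower bound for a plausible completion time


def is_likely_bot(response: dict) -> bool:
    """Flag a response that filled the hidden honeypot field
    or finished the survey implausibly fast."""
    filled_honeypot = bool(response.get("honeypot", "").strip())
    too_fast = response.get("duration_seconds", 0) < MIN_SECONDS
    return filled_honeypot or too_fast


responses = [
    {"id": 1, "honeypot": "", "duration_seconds": 342},      # plausible human
    {"id": 2, "honeypot": "lorem", "duration_seconds": 45},  # honeypot filled
    {"id": 3, "honeypot": "", "duration_seconds": 12},       # too fast
]

clean = [r for r in responses if not is_likely_bot(r)]
# Only response 1 survives the filter.
```

In practice, such heuristics are best combined with attention checks and open-ended questions rather than applied in isolation, since a single rule can misclassify fast but genuine respondents.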
