Amazon’s Mechanical Turk (MTurk) has emerged as a convenient tool for participant recruitment in social science research. Mechanical Turk’s web-based platform is often credited with providing a low-cost and efficient means of fielding surveys and survey-based experiments. This case study introduces MTurk as a participant recruitment and data collection tool for sociological experiments. It discusses sampling and data quality issues associated with crowdsourcing platforms and examines research design and implementation choices that can minimize potential methodological pitfalls. Examples are drawn from a recent survey-based experiment conducted on MTurk, which tested how exposure to news about retractions in science affects public perceptions of science and technology.