Over the course of two non-consecutive weeks, we collected 410 data points from 2-hr experimental sessions. This article recounts the planning, preparation, implementation, and aftermath of collecting 820 hr of empirical data, as both a learning experience and a framework for future large-scale data collection endeavors. The single most critical aspect of the project, or indeed of any large-scale data collection effort, is workflow: the recursive efforts of the researchers to ensure smooth delivery and back-end processing, with the ultimate goal of publication and presentation of the data. Workflow is optimized by loading as much of a project's time investment into the back end as possible, enabling hassle-free administration of the experimental paradigm and analysis of the data. Although the start-to-finish delivery of a large-scale experiment may differ from that of a small-scale (read: typical) experiment chiefly in sample size (i.e., N), this multiplication of N can cause each step of the process to incur a significant time debt. From coding counterbalancing conditions to printing consent forms to managing and processing the collected data, we offer our experience and post hoc reflections on workflow and back-end planning, in the hope that they may prove valuable to other researchers engaged in large-scale data collection.
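To make the counterbalancing step concrete, the sketch below shows one common way to pre-assign condition orders before data collection begins: a Latin-square rotation in which every condition occupies every serial position exactly once across consecutive participants. The function names, the four-condition design, and the use of a simple rotation (rather than a fully balanced square) are illustrative assumptions, not details taken from the project described here.

```python
from itertools import cycle, islice

def latin_square(conditions):
    """Latin-square rotation of condition orders: row i is the condition
    list rotated by i positions, so each condition appears in each serial
    position exactly once across the rows."""
    n = len(conditions)
    return [list(islice(cycle(conditions), i, i + n)) for i in range(n)]

def assign_orders(n_participants, conditions):
    """Cycle participants through the counterbalanced orders so the
    assignment can be fixed (and printed, if needed) before testing."""
    square = latin_square(conditions)
    return [square[i % len(square)] for i in range(n_participants)]

# Hypothetical four-condition design, pre-assigned for 410 participants.
orders = assign_orders(410, ["A", "B", "C", "D"])
print(orders[0])  # ['A', 'B', 'C', 'D']
print(orders[1])  # ['B', 'C', 'D', 'A']
```

Generating the full assignment in advance is one way of loading work into the planning phase: each session then only needs a participant number to look up its condition order.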