Crowdsourced online experiments are a promising complement to laboratory research in the social sciences. Conducting experiments over the Internet is not just a quick and inexpensive way of collecting data. By transporting a homogeneous decision situation into heterogeneous living conditions, online experiments can generate behavioral data from people with diverse social backgrounds. When paired with recruiting from a global crowdsourcing platform, the potential for participant heterogeneity increases further, extending to different nationalities and cultures. We walk through the ten steps necessary for the practical implementation of a crowdsourced online experiment. For illustration, we refer to our recent project on fairness behavior conducted on Amazon Mechanical Turk, where we recruited participants from the United States and India. We address general issues of experimental research, including randomization, instructions, incentives, and participants’ informed consent, as well as issues specific to online implementation, such as international recruiting, payment, field control, and the matching of asynchronously participating subjects. Our project focuses on altruism, fairness, and costly punishment as measured in standard experimental games. First, we randomly assigned participants to different levels of monetary incentives. We found that monetary incentives induce more selfish behavior, but that the exact size of the stake is irrelevant to observed rates of prosocial behavior. Second, we explored context effects on elicited behavior using variation in participants’ geographical location. We showed that context effects of regional prosperity and local social capital are comparable in size to stake effects. More importantly, we demonstrated that context effects are visible and quantifiable in a large-scale online experiment. We thus argue that studying the diverse backgrounds people bring into the experiment is the key potential of crowdsourced online designs.