
Randomization tests use a large number of random permutations of the given data to determine the probability that the observed empirical test result could have occurred by chance. In contrast to traditional parametric tests, they do not rely on the properties of a known distribution, such as the Gaussian, to determine error probabilities; instead, they construct the probability from the actual distribution of the data. Parametric statistics assumes that all potentially sampled scores follow a theoretical distribution that is mathematically well described. By transforming an empirical score and locating it on that distribution, one can read off its position within the corresponding density function, and this position can be interpreted as a p value. Most researchers know that such parametric tests carry restrictions, so they may seek refuge in nonparametric statistics. However, nonparametric tests are also normally approximated by a parametric distribution, which is then used to calculate probabilities. Strictly speaking, it is in such situations that randomization tests apply.

Moreover, there are several situations in which parametric tests cannot be used. For instance, if a situation is so special that no particular underlying distribution can reasonably be assumed, or if single-case research is being done, other approaches are needed. In these and many other cases, randomization tests are a good alternative.

General Procedure

The basic reasoning behind randomization tests is simple and straightforward. Randomization tests assume that a researcher has conducted a controlled experiment, obtaining values for the experimental condition and values for the control condition, with an element of chance involved in either when the intervention started or which treatment a person was assigned to. They also assume that there is a range of possible values that could have arisen simply by chance. The researcher calculates a test statistic, such as the difference between the experimental and control scores, and then has the computer calculate all other possible differences (or does so by hand, in simple cases), obtaining a little universe of all possible scores. The researcher then counts the number of cases in which a possible difference is as extreme as or more extreme than the empirically observed one. This count, divided by the number of all possible scores, gives the true probability that the observed score could have arisen just by chance.
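The procedure above can be sketched in a few lines of code. This is a minimal illustration, not any particular author's implementation: the scores and group sizes are invented, and the test statistic is the difference in group means, compared one-sidedly. Because the groups are tiny, every possible random assignment can be enumerated exactly.

```python
from itertools import combinations

# Hypothetical scores from a small controlled experiment (illustrative data).
treatment = [12, 15, 14]
control = [9, 10, 11]

pooled = treatment + control
n_treat = len(treatment)

# The empirically observed test statistic: difference in mean scores.
observed_diff = sum(treatment) / n_treat - sum(control) / len(control)

# Enumerate the "little universe": every way the pooled scores could have
# been split into a treatment and a control group by the random assignment.
count_extreme = 0
total = 0
for idx in combinations(range(len(pooled)), n_treat):
    t = [pooled[i] for i in idx]
    c = [pooled[i] for i in range(len(pooled)) if i not in idx]
    diff = sum(t) / len(t) - sum(c) / len(c)
    if diff >= observed_diff:  # as large as or larger than the observed one
        count_extreme += 1
    total += 1

# The exact probability of a difference this large arising just by chance.
p_value = count_extreme / total
print(p_value)  # here: 1 of 20 possible assignments, i.e., 0.05
```

With three scores per group there are 20 possible assignments, and only the observed one yields a mean difference this large, so the exact one-sided p value is 1/20 = .05. No distributional assumption is needed at any point.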

The benefits of this procedure are as follows: It can be related directly to an empirical situation. It makes no assumptions about distributions. It gives a true probability. It does not require sampling many cases to reach a valid estimate of a sample statistic, such as a mean or a variance (i.e., it can also be used with single cases). Most randomization tests are associated with particular designs, but the basic reasoning is always the same.

Practical Examples

Let us take an example of a single-case statistical approach, a randomized intervention design. The assumption in this case is that the intervention produces a change in general level, say of depression or anxiety. A researcher might consider such a design if he or she has only a few patients with a particular problem and cannot wait until there are enough for a group study, if interventions are tailored individually to patients, or if simply too little is known about an intervention.
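A hedged sketch of how such a design could be tested: the daily ratings, the eligible start points, and the actual start point below are all invented for illustration. The element of chance is the randomly chosen occasion at which the intervention begins; the test statistic is the change in mean level from the pre-intervention to the post-intervention phase, evaluated at every start point that the randomization could have produced.

```python
# Hypothetical daily anxiety ratings for one patient (10 occasions).
scores = [7, 8, 7, 9, 8, 4, 3, 4, 3, 4]

# The intervention actually began at occasion 6 (index 5), drawn at random
# from the eligible start points below (occasions 3 through 9).
actual_start = 5
possible_starts = range(2, 9)

def level_change(start):
    """Mean level after the intervention minus mean level before it."""
    pre, post = scores[:start], scores[start:]
    return sum(post) / len(post) - sum(pre) / len(pre)

observed = level_change(actual_start)

# Count the start points that yield a drop at least as large as the
# observed one; a drop is expected, so more negative is more extreme.
extreme = sum(1 for s in possible_starts if level_change(s) <= observed)
p_value = extreme / len(possible_starts)
print(p_value)  # here: 1 of 7 possible start points, i.e., about 0.143
```

In this invented series, only the actual start point produces a drop as large as the observed one, so the exact p value is 1/7 ≈ .14, which also shows a practical limitation: with few eligible start points, the smallest attainable p value is bounded by their number.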

...
