
Type I error is the incorrect rejection of a true null hypothesis. In reality, the two variables in the research hypothesis (alternative hypothesis) are independent of each other, so there is no association between them; however, the researcher mistakenly concludes that the variables are related. Simply put, Type I error can be understood as a false positive. This entry provides an explanation of Type I error, offers an example, and discusses how to reduce Type I error rates.

When Type I Error Occurs

Statistical testing is based on probability, using data from a sample rather than from the entire population. Thus, even when the selected sample represents the population well, errors can occur. That is, the decision researchers make from statistical testing could be mistaken. Specifically, Type I error occurs when researchers reject a null hypothesis that is true and should not be rejected. The following example offers further insight into Type I error.

A research hypothesis predicts a sex difference in self-disclosure between men and women. In reality, men and women do not differ in self-disclosure, but a researcher incorrectly concludes that there is a difference. That is, the null hypothesis is true and should not be rejected. However, by random chance, results from statistical testing indicate that the null hypothesis is not true. So, the researcher rejects the null hypothesis and argues that men and women differ in self-disclosure when there is no difference in reality. This is Type I error.
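The logic of this example can be illustrated with a short simulation. Because both groups below are drawn from the same population, the null hypothesis is true by construction, and every significant result is a Type I error; over many repetitions, such false positives occur at roughly the significance level. (The sample sizes, distributions, and number of repetitions are illustrative assumptions, not part of the entry.)

```python
import numpy as np
from scipy import stats

# Under a TRUE null hypothesis, "men" and "women" come from the same
# distribution, so any significant t-test result is a false positive.
rng = np.random.default_rng(42)
alpha = 0.05            # conventional significance level
n_simulations = 2000    # illustrative number of repeated studies
false_positives = 0

for _ in range(n_simulations):
    men = rng.normal(loc=50, scale=10, size=30)    # same distribution
    women = rng.normal(loc=50, scale=10, size=30)  # for both groups
    _, p_value = stats.ttest_ind(men, women)
    if p_value < alpha:   # "significant" despite a true null: Type I error
        false_positives += 1

print(false_positives / n_simulations)  # hovers near alpha, i.e., about .05
```

The long-run false-positive rate lands near .05 precisely because that is the threshold the researcher chose, which is the point developed in the next paragraph.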

The threshold for rejecting a null hypothesis is called the significance level. When conducting statistical testing, researchers choose the significance level, so the level of Type I error can be controlled by the researcher. Conventionally, the significance level is set at .05, which indicates a 5% chance of rejecting a null hypothesis that is actually true. Because the significance level is also called alpha (α), the probability of committing Type I error can be called the alpha (α) level.

In statistics, multiple comparisons increase the chance of committing Type I error. Suppose a researcher hypothesizes that teacher self-disclosure of personal information in an online class influences class satisfaction and wants to compare three conditions: (a) high self-disclosure, (b) low self-disclosure, and (c) no self-disclosure. To identify differences in class satisfaction, three comparisons must be performed: high versus low, high versus none, and low versus none. Because each comparison carries its own 5% chance of committing Type I error, the chance of making at least one Type I error grows with the number of comparisons. With these three comparisons, the chance of making Type I error is approximately 3 × .05 (significance level; alpha) = .15, so a roughly 15% chance of making Type I error would be expected. Thus, multiple comparisons can result in a high chance of committing Type I error.
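The 3 × .05 figure is an upper bound (the Bonferroni bound) on the familywise Type I error rate. If the three comparisons were fully independent, the exact probability of at least one Type I error would be 1 − (1 − .05)³ ≈ .143; a common remedy, the Bonferroni correction, divides alpha by the number of comparisons. A minimal sketch of these calculations (the independence assumption is mine, for illustration):

```python
alpha = 0.05   # per-comparison significance level
k = 3          # pairwise comparisons: high vs. low, high vs. none, low vs. none

# Bonferroni upper bound on the familywise Type I error rate
bonferroni_bound = k * alpha                # 0.15, the entry's estimate

# Exact familywise rate if the k tests were independent:
# P(at least one false positive) = 1 - P(no false positives)
exact_independent = 1 - (1 - alpha) ** k    # about 0.1426

# Bonferroni correction: shrink each comparison's alpha so the
# familywise rate stays at (or below) the original .05
adjusted_alpha = alpha / k                  # about 0.0167

print(bonferroni_bound, exact_independent, adjusted_alpha)
```

Either way, the familywise error rate is well above the nominal .05, which is why multiple-comparison corrections are routinely applied.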

...
