  • 00:08

    SPEAKER 1: Let's calculate a number of inferential statistics. First, state two statistical hypotheses regarding what is believed to exist in the population. The first is the null hypothesis. What it implies is that the difference or the change, or what we call the effect in the population, does not exist. The mutually exclusive alternative

  • 00:28

    SPEAKER 1 [continued]: to the null hypothesis is the alternative hypothesis, which implies that the difference, change, or effect does exist. In the second step, you're going to make one of two decisions, either reject the null hypothesis or not reject the null hypothesis. Because the decision is based on probability, the reality exists that the decision you make

  • 00:50

    SPEAKER 1 [continued]: could be correct or incorrect, or you can make two types of errors. The first error is known as a type I error. A type I error can be thought of as rejecting a true null hypothesis, which sounds pretty weird. So another way to think about it is rejecting the null hypothesis when you shouldn't. This means concluding that an effect exists when, in reality, it

  • 01:13

    SPEAKER 1 [continued]: does not. The probability of making a type I error is the same as the probability of rejecting a true null hypothesis. It's called alpha, the threshold that the probability of your statistic must fall below to reject the null hypothesis. The probability of making a type I error is equal to alpha, which is traditionally set to 0.05.
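
    A minimal sketch of this decision rule in Python (an editorial illustration, not part of the video), assuming two made-up samples compared with SciPy's independent-samples t-test; the sample sizes, group means, and random seed are arbitrary assumptions:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        group_a = rng.normal(loc=0.0, scale=1.0, size=30)    # e.g., control group
        group_b = rng.normal(loc=0.5, scale=1.0, size=30)    # e.g., treatment group

        alpha = 0.05                                          # conventional threshold for a type I error
        t_stat, p_value = stats.ttest_ind(group_a, group_b)   # two-sided independent-samples t-test

        if p_value < alpha:
            print(f"p = {p_value:.3f} < {alpha}: reject the null hypothesis")
        else:
            print(f"p = {p_value:.3f} >= {alpha}: do not reject the null hypothesis")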

  • 01:34

    SPEAKER 1 [continued]: Every time you make the decision to reject the null hypothesis, there is a 0.05 probability that you've made the incorrect decision. You've concluded that an effect exists when, in reality, it does not. You can reduce the probability of making a type I error by making it harder to reject the null hypothesis. For example, let's say you set alpha to something

  • 01:55

    SPEAKER 1 [continued]: smaller, from 0.05 to 0.01. To reject the null hypothesis, the probability of this statistic has to be less than 0.01, which is more stringent than 0.05. So why not always reduce the probability of making a type I error? When you reduce the probability of saying an effect exists when it does not, this increases the likelihood of not saying

  • 02:19

    SPEAKER 1 [continued]: an effect exists when it does. Reducing the probability of a type I error increases the probability of making the other type of error. A type II error occurs when you don't reject the null hypothesis even though you should have, that is, when an effect really exists. So lowering the probability of a type II error means increasing the likelihood of rejecting the null hypothesis. The reality is that the probability of committing

  • 02:40

    SPEAKER 1 [continued]: a type II error is difficult to calculate. Why? Think of it this way. You've concluded that an effect does not exist. It's possible that, in reality, it does not exist, or it could be that the effect does exist but you didn't find it. Imagine you go to Mars looking for life. You don't find it.

  • 03:00

    SPEAKER 1 [continued]: It could be because there is no life on Mars, or it could be because there is life on Mars and you just didn't find it. There's no way of telling. And that's why you can't exactly calculate the probability of a type II error. However, you can reduce the probability of a type II error. Remember, a type II error occurs when you don't reject the null hypothesis when you should have.

  • 03:22

    SPEAKER 1 [continued]: So lowering the probability of a type II error means increasing the likelihood of rejecting the null hypothesis. Increasing the probability of correctly rejecting the null is the concept known as statistical power. Statistical power is the probability of detecting an effect when, in fact, it exists. There are a number of ways of increasing statistical power.

  • 03:42

    SPEAKER 1 [continued]: One way is to increase your sample size, because the bigger your sample, the more likely you are to detect effects. Another way is to increase alpha. Let's say you set alpha at 0.1 rather than 0.05. This implies you're going to reject the null hypothesis if the probability of your statistic is less than 0.1.

  • 04:02

    SPEAKER 1 [continued]: That's a more lenient requirement than 0.05, and you've increased the likelihood of rejecting the null. The problem with this is that reducing the probability of a type II error increases the likelihood of a type I error. So reducing the probability of saying an effect does not exist when it does increases the likelihood of saying an effect exists when it does not.
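
    To make the trade-off concrete, here is a small Monte Carlo sketch (an editorial illustration, not part of the video; the effect size of 0.5, the group size of 30, and the alpha levels are assumptions chosen for demonstration). It estimates the type I error rate by simulating data in which the null hypothesis is true, and power (one minus the type II error rate) by simulating data in which a real effect exists:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        def rejection_rate(effect, n, alpha, n_sims=5000):
            """Fraction of simulated t-tests that reject the null at the given alpha."""
            rejections = 0
            for _ in range(n_sims):
                a = rng.normal(0.0, 1.0, n)        # group with mean 0
                b = rng.normal(effect, 1.0, n)     # group shifted by the true effect
                _, p = stats.ttest_ind(a, b)
                rejections += p < alpha
            return rejections / n_sims

        for alpha in (0.01, 0.05, 0.10):
            type_i = rejection_rate(effect=0.0, n=30, alpha=alpha)   # null is actually true
            power = rejection_rate(effect=0.5, n=30, alpha=alpha)    # an effect actually exists
            print(f"alpha={alpha:.2f}  type I ~ {type_i:.3f}  power ~ {power:.3f}  type II ~ {1 - power:.3f}")

    Lowering alpha from 0.05 to 0.01 shrinks the estimated type I error rate but raises the type II error rate; raising alpha to 0.10 does the opposite.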

  • 04:24

    SPEAKER 1 [continued]: As you can see, there are two types of errors you can make in hypothesis testing. And they are directly linked to each other. They're important to know, because they affect the ability of researchers to accurately and appropriately interpret the results of their statistical analyses.
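
    As a final illustration (not from the video), statsmodels' power calculations for an independent-samples t-test show how these pieces fit together: power grows with the per-group sample size at a fixed alpha, and you can solve for the sample size needed to reach a target power. The standardized effect size of 0.5 and the 80% power target are assumptions chosen for the example:

        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()

        # Power at alpha = 0.05 for increasing per-group sample sizes
        for n in (20, 50, 100):
            power = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05)
            print(f"n per group = {n:3d}: power ~ {power:.2f}")

        # Per-group sample size needed to reach 80% power at alpha = 0.05
        n_needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
        print(f"About {n_needed:.0f} participants per group are needed for 80% power")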

Video Info

Publisher: SAGE Publications, Inc.

Publication Year: 2019

Video Type: Tutorial

Methods: Hypothesis testing, Statistical power, Statistical inference, Type I errors, Type II errors, Null hypothesis

Keywords: effect size; hypothesis testing; null hypothesis; research methods; statistical inference; statistical power; type I errors; type II errors

Segment Info

Segment Num.: 1

Abstract

Errors in hypothesis testing, including type I and type II errors and their interrelationship, are summarized.

Errors in Hypothesis Testing, Statistical Power, and Effect Size
