  • 00:00

    [One-way ANOVA: Part III]

  • 00:00

    DANIEL LITTLE: In this video, we will examine how we use the sum of squares concept introduced in parts 1 and 2 to arrive at a p-value that we can use to make some inference about our sample. By the end of the video, you should understand the different components contained in the report of an ANOVA. [Example of Reporting an ANOVA] Namely, you should understand what the F-statistic is,

  • 00:23

    DANIEL LITTLE [continued]: or the F-ratio, the between and within subjects degrees of freedom, and the p-value. Here's an example of a report of an ANOVA that you might see in a published scientific paper. A one-way ANOVA showed a significant difference in the hit rates between the three groups--

  • 00:44

    DANIEL LITTLE [continued]: F(2,458) equals 3.57, p equals 0.024. What this report illustrates is the different components of an ANOVA. And we're going to go through these components one at a time. These components represent the F-value or F-ratio,

  • 01:08

    DANIEL LITTLE [continued]: the between subjects degrees of freedom, the within subjects degrees of freedom, and finally the p-value. [Components of the One-Way ANOVA] The F-ratio tells you what the size of the ANOVA statistic actually is. This is the aim of the process of going through the ANOVA: to arrive at this particular F-ratio.

  • 01:31

    DANIEL LITTLE [continued]: The degrees of freedom tell you something about the sample size and the number of independent groups in your analysis. And the p-value tells you the probability of observing your F-ratio, assuming that there's no variation between the groups. That is, it tells you the probability of your F-ratio under the assumption that the null hypothesis is true.

  • 01:52

    DANIEL LITTLE [continued]: [The F-ratio] So the F-ratio is the ratio of variance between groups to the variance within groups. [How to compute the F-ratio] Now how do we compute the ratio? Well, the sum of squares is a measure of variation. But we can't actually use the sum of squares directly to compute the F-ratio.

  • 02:13

    DANIEL LITTLE [continued]: The reason for this is that the sum of squares is actually sensitive to our sample size. So we can't just divide the sum of squares between by the sum of squares within. [Sum of Squares is sensitive to sample size] So here's an example of two data sets generated from the same population, but with different sample sizes.

  • 02:34

    DANIEL LITTLE [continued]: They each have the same mean and they each have the same standard deviation. But in one case, we have 300 observed data points. And in the other case, we have 3,000 observed data points. You can see that the sum of squares for the case on the left is much smaller than it is for the case on the right. In fact, because our sample size is 10 times the size

  • 03:01

    DANIEL LITTLE [continued]: for the case on the right, our sum of squares is about 10 times the sum of squares for the smaller data set. [How to compute the F-ratio] In order to account for this sensitivity to sample size, we need to correct the sum of squares by dividing by the degrees of freedom.
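
The sensitivity described here is easy to check numerically. The sketch below uses illustrative population values (mean 100, standard deviation 15; these are assumptions, not the video's data) to draw samples of 300 and 3,000 points from the same population, and shows that dividing each sum of squares by its degrees of freedom (n - 1) removes the dependence on sample size:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the illustration is reproducible

# Two samples from the same population (mean 100, sd 15) -- illustrative values
small = rng.normal(100, 15, size=300)
large = rng.normal(100, 15, size=3000)

def sum_of_squares(x):
    """Total sum of squared deviations from the sample mean."""
    return float(np.sum((x - x.mean()) ** 2))

ss_small = sum_of_squares(small)
ss_large = sum_of_squares(large)

# The larger sample's sum of squares is roughly 10 times the smaller one's,
# even though both samples come from the same population.
print(ss_large / ss_small)

# Dividing by degrees of freedom removes this sensitivity: both corrected
# values estimate the same population variance (15**2 = 225).
print(ss_small / (len(small) - 1), ss_large / (len(large) - 1))
```

Both corrected values hover near 225 regardless of sample size, which is exactly the correction the mean squares perform in the ANOVA.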

  • 03:25

    DANIEL LITTLE [continued]: The degrees of freedom that we need represent our number of independent groups and the number of subjects. [Degrees of freedom] There are three degrees of freedom that we can compute. First of all, we can compute the degrees of freedom for our total data set, which is simply the number of participants minus 1. So n represents the total number of participants in our data set.

  • 03:48

    DANIEL LITTLE [continued]: We can compute a degrees of freedom for our number of conditions. Our number of conditions here is represented by the letter k. So if you remember the anxiety example, we had three groups. So our number of conditions would be equal to 3. And our degrees of freedom would equal 3 minus 1, or 2.

  • 04:14

    DANIEL LITTLE [continued]: And finally, we can compute the degrees of freedom within groups, which is our number of participants minus our number of groups. So if we have 3,000 participants, we subtract off our three groups. We end up with 2,997 as our within groups

  • 04:36

    DANIEL LITTLE [continued]: degrees of freedom. Like the t-test, what these degrees of freedom allow us to do is determine what the F-distribution looks like. We also use these degrees of freedom to correct our sum of squares for the sample size of our number of participants and our number of groups. [How to compute the F-ratio]
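
The three degrees of freedom just described, using the n = 3,000 participants and k = 3 groups from the anxiety example, can be written out directly:

```python
# Degrees of freedom for the anxiety example in the video:
n = 3000  # total number of participants
k = 3     # number of conditions (groups)

df_total = n - 1      # total: 2999
df_between = k - 1    # between groups: 2
df_within = n - k     # within groups: 2997

# The between and within degrees of freedom always sum to the total.
print(df_total, df_between, df_within)
```

Note that the between and within groups degrees of freedom partition the total, which is one way to check the arithmetic.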

  • 04:56

    DANIEL LITTLE [continued]: So to correct the sum of squares for this sensitivity to sample size, all we have to do is divide by each of the degrees of freedom. And what this gives us is something called the mean square. So the mean square within groups is a measure of within groups variation that's

  • 05:16

    DANIEL LITTLE [continued]: been corrected for sample size. And to do this, we take our sum of squares within groups and divide by our degrees of freedom within groups. The mean square between is a measure of between groups variation. What we do here is take our sum of squares between groups and divide by our degrees of freedom between groups.
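
The two divisions just described can be sketched with hypothetical sums of squares (the numbers below are invented for illustration, not taken from the video's data set):

```python
# Hypothetical sums of squares, chosen only to illustrate the arithmetic
ss_between = 1608.0
ss_within = 674325.0

df_between = 2     # k - 1 with k = 3 groups
df_within = 2997   # n - k with n = 3000 participants

# Mean squares: sums of squares corrected for sample size
ms_between = ss_between / df_between   # between groups variation per df
ms_within = ss_within / df_within      # within groups variation per df

print(ms_between, ms_within)
```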

  • 05:37

    DANIEL LITTLE [continued]: Finally, having computed these mean square values, we're now in a position to compute the F-ratio. And we do this simply by dividing the mean square between groups by the mean square within groups. [The F-ratio] So this gives us then a value, termed F, which represents the between groups variation divided

  • 05:58

    DANIEL LITTLE [continued]: by within groups variation. If the F-ratio is sufficiently large, then we have evidence that the difference that we observe between our groups is real. But how large is sufficiently large? When F is very large, what this means is that the F-value that we observe is unlikely if the differences in our data

  • 06:21

    DANIEL LITTLE [continued]: are just due to chance alone. What we're really trying to do here is use that F-ratio to get at a p-value and determine whether or not that p-value is less than 0.05. Our F-ratio is assessed against the F-distribution in order to determine that probability.
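
Assessing the F-ratio against the F-distribution can be done with scipy. The mean squares below are hypothetical, chosen to give an F close to the 3.57 reported at the start of the video (with different degrees of freedom than the published example, so the p-value differs too):

```python
from scipy.stats import f  # the F-distribution

# Hypothetical mean squares for illustration
ms_between = 804.0
ms_within = 225.0
df_between, df_within = 2, 2997

F = ms_between / ms_within  # F-ratio: between-groups over within-groups variation

# p-value: the probability of observing an F at least this large under the
# null hypothesis, i.e. the area in the upper tail of the F-distribution
p = f.sf(F, df_between, df_within)

print(F, p)
```

`f.sf` is the survival function (1 minus the cumulative distribution), which is exactly the upper-tail area the video describes.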

  • 06:42

    DANIEL LITTLE [continued]: What we're trying to do is determine whether the probability of our observed F-ratio that we find from our F-distribution is less than 0.05. We require both the between and within groups degrees of freedom in order to tell us what the shape of the F-distribution actually is. [What does the F-ratio mean?]

  • 07:03

    DANIEL LITTLE [continued]: So here's an example of the F-distribution. Because the F-distribution is a ratio of sum of squares values divided by degrees of freedom values, it's always going to be a positive number. So it starts at 0 and gets larger as we go across.

  • 07:24

    DANIEL LITTLE [continued]: We compute the probability of observing an F-ratio with a specific value by finding the area under the curve. The p-value, then, tells us where our F-ratio is located in this particular distribution. So if we want to find the area under the curve, what we'll

  • 07:45

    DANIEL LITTLE [continued]: note is that 95% of our F-ratios are located in the beginning of this distribution. So the distribution starts off very steeply but then has a long skinny tail, which points out positively. 95% of our F-ratios are in the main part of this distribution.

  • 08:08

    DANIEL LITTLE [continued]: 5% of our F-ratios are in the tail of this distribution. So finding out where this 5% region begins gives us a cut-off against which we can compare our F-ratio. If our F-ratio is greater than the cut-off that marks off this 95% region, then we would conclude

  • 08:30

    DANIEL LITTLE [continued]: that the differences between the means of our groups are actually significant. [P-value] So the p-value tells us the probability of our F-ratio, assuming that the null hypothesis is true. The null hypothesis is that there is no difference between the groups.

  • 08:50

    DANIEL LITTLE [continued]: That is, all of the groups were generated from the same underlying population. If the p-value is less than 0.05, which will occur whenever F is larger than that cut-off, then we conclude that there is a significant difference between the groups and that the null hypothesis is unlikely to have generated the observed data. [What the ANOVA does and does not tell you]
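
The whole procedure can be sketched end to end and checked against `scipy.stats.f_oneway`, which runs a one-way ANOVA directly. The three groups below are simulated, hypothetical anxiety scores (the group names echo the video's example, but the numbers are invented):

```python
import numpy as np
from scipy.stats import f, f_oneway

rng = np.random.default_rng(1)

# Hypothetical anxiety scores for three groups (illustrative, not the video's data)
students = rng.normal(50, 10, size=100)
ambulance = rng.normal(55, 10, size=100)
firefighters = rng.normal(55, 10, size=100)
groups = [students, ambulance, firefighters]

# Manual one-way ANOVA, following the steps in the video
all_data = np.concatenate(groups)
grand_mean = all_data.mean()

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between = len(groups) - 1             # k - 1
df_within = len(all_data) - len(groups)  # n - k

F_manual = (ss_between / df_between) / (ss_within / df_within)
p_manual = f.sf(F_manual, df_between, df_within)

# scipy's built-in one-way ANOVA gives the same F-ratio and p-value
result = f_oneway(*groups)
print(F_manual, p_manual)
print(result.statistic, result.pvalue)
```

Agreement between the manual numbers and `f_oneway` confirms that the library is doing nothing more than the sum-of-squares arithmetic described in this video.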

  • 09:12

    DANIEL LITTLE [continued]: So ANOVA tells us whether there's a significant difference between the groups. But it doesn't tell us how big that difference is. In order for us to determine how big that difference is, we actually need to compute something called the effect size. And there will be a separate tutorial video demonstrating the effect size.

  • 09:34

    DANIEL LITTLE [continued]: [ANOVA does not tell you where that difference is] The ANOVA also doesn't tell you where that particular difference between our groups might be. So going back to our example, if we're testing three groups, there are six possible outcomes that we might observe. For instance, the student anxiety levels might be exactly the same as our ambulance anxiety levels, which

  • 09:56

    DANIEL LITTLE [continued]: might be exactly the same as our firefighter anxiety levels. This was what we observed in data set 1. On the other hand, we might have a case where the students are different from the ambulance drivers, but the ambulance drivers are the same as the firefighters.

  • 10:20

    DANIEL LITTLE [continued]: Or it could be the case that the students are different from the firefighters, and the firefighters are not different from the ambulance drivers, and vice versa. Or we could have a case, which is what we observed in data set 2, that all three of our groups are different from one another.

  • 10:42

    DANIEL LITTLE [continued]: A significant ANOVA only tells us that one of these cases must be true, but it doesn't tell us which one. In order to determine which of these cases is true, we have to rely on post hoc tests, which test the difference between each of our groups separately. [Beyond One-Way ANOVA] There are other types of situations

  • 11:03

    DANIEL LITTLE [continued]: for which we can use ANOVA, but we also use different types of ANOVA in these situations. For instance, if you have more than one factor which differs between the groups, then instead of using a one-way ANOVA, we would use a two-way ANOVA or a three-way ANOVA. We might have a case in which our groups are not actually

  • 11:25

    DANIEL LITTLE [continued]: independent from one another, analogous to the repeated measures t-test or the paired samples t-test, in which case we would use a repeated measures ANOVA. Or we might have both, in which case we would use a mixed ANOVA-- for instance, something like a two-way between-within ANOVA. So this concludes our tour of the one-way ANOVA.

  • 11:46

    DANIEL LITTLE [continued]: And by now you should understand the F-ratio, sum of squares, degrees of freedom, mean squares, and how to use that information to determine the p-value and what that p-value tells you.

Video Info

Series Name: Statistics for Psychology

Episode: 10

Publisher: University of Melbourne

Publication Year: 2014

Video Type: Tutorial

Methods: Analysis of variance, F-ratios, P-value, Degrees of freedom

Keywords: anxiety; mathematical concepts; Stress at work

Segment Info

Segment Num.: 1


Abstract

In chapter 10 of his series on statistics for psychology, Professor Daniel Little concludes his discussion of one-way ANOVAs. Little discusses F-ratios in detail and the information ANOVA does and does not give.
