If you collect a set of scores from people, they will not
all be the same. That is, there will be variance
in the scores. Imagine you have three groups of people and
each person provides a score on a test. There will be two
sources of variance: (1) Some groups, on average, perform
better than others and (2) the people within each group
won't all perform at exactly the same level.
ANOVA allows us to look at these two types of variance in
relation to each other. The first type is called EFFECT
VARIANCE (also known as between-groups variance). It is the
extent to which the scores differ from each other because
some of the people come from a different group to the
others, and this is what we are interested in. The second
type of variance is called ERROR VARIANCE (also known as
within-groups variance). It is not really interesting to
us, as we
would expect the people within a group to perform at
slightly different levels from each other. What ANOVA does
is to calculate the ratio (F-ratio) of effect
variance to error variance (F = effect / error).
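The ratio can be computed by hand. The sketch below, using made-up scores for three hypothetical groups, works out the effect (between-groups) variance and the error (within-groups) variance and then takes their ratio; the data and variable names are illustrative, not from the text.

```python
from statistics import mean

# Hypothetical scores for three groups (made-up data for illustration).
groups = [
    [4.0, 5.0, 6.0],
    [7.0, 8.0, 9.0],
    [1.0, 2.0, 3.0],
]

grand_mean = mean(score for g in groups for score in g)
k = len(groups)                        # number of groups
n_total = sum(len(g) for g in groups)  # total number of scores

# Effect sum of squares: how far each group mean sits from the grand mean.
ss_effect = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
# Error sum of squares: how scores vary around their own group's mean.
ss_error = sum((score - mean(g)) ** 2 for g in groups for score in g)

ms_effect = ss_effect / (k - 1)        # effect (between-groups) variance
ms_error = ss_error / (n_total - k)    # error (within-groups) variance
f_ratio = ms_effect / ms_error         # F = effect / error

print(f_ratio)  # → 27.0
```

Here the group means (5, 8, 2) are far apart while the scores within each group are tightly bunched, so the effect variance dwarfs the error variance and F comes out large.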
If F is large (well above 1), the scores we measured differ
from each other primarily because some groups are better
than others, and this is interesting. If F is small (around
1 or below), the scores we obtained differ from each other
primarily because there is a lot of random variation
between the people we tested; a lot of noise, in other
words.
In SPSS, the analysis is run via Analyse > Compare Means >
One-Way ANOVA.