After
you carry out statistical analyses, you usually want to
report your findings to other people. Your goal is to
communicate clearly the information readers need to
understand what you did and what you found. So your task is
to report as clearly as possible the relevant parts of the
SPSS output.

By far the best way to learn how to report statistics
results is to look at published papers. My guidelines below
notwithstanding, the rules on how you present findings are
not written in stone, and there are plenty of variations in
how professional researchers report statistics. Looking at
the Results sections of some published papers will give you
a feel for the most common ways.

That said, below is a rough guide that you might find
useful. Remember, results are normally reported in
passages of text with the relevant statistics included. In
almost all cases, it is not desirable to present tables
full of stats, *especially*
not
tables taken straight from SPSS! Your job is to show you
know which parts of the SPSS output are important and which
are not - copying the tables wholesale suggests you are not
able to do this. Trust me on this one: I mark your work.

A note on *p*-values:
These come from the part of the SPSS output labelled "Sig.".
Usually, people are just interested in whether this value
is above or below .05. Therefore, it is enough to write one
of two things in a report: "*p*
<
.05" or "n.s." (not significant). However, when
*p*
is very
low, this is interesting and so people usually give a more
accurate value. If *p*
is .009,
you might report "*p*
<
.01". If *p*
is .0004
you might report "*p* < .001". However, my preferred
approach is always to give the exact *p*-value, to 2 or 3
decimal places (as appropriate). This is a good system as
it provides the reader with as much information as
possible.
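If you ever automate your write-ups, the rounding rules above are easy to encode. Here is a minimal sketch in Python (the function name `format_p` and the exact cut-offs are my own reading of the rules in this section, not anything SPSS produces):

```python
def format_p(p: float) -> str:
    """Format a p-value following the conventions described above:
    very small values are reported as 'p < .001'; otherwise the exact
    value is given (here to 3 decimal places), with no leading zero."""
    if p < 0.001:
        return "p < .001"
    # Drop the leading zero: 'p = 0.020' becomes 'p = .020'
    return f"p = {p:.3f}".replace("0.", ".", 1)
```

For example, `format_p(0.009)` gives `"p = .009"`, while `format_p(0.0004)` gives `"p < .001"`.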

Remember to begin all your results sections with the
relevant descriptive statistics, either in a table or, if
it is better, a graph, to show the reader what the study
actually found. Don't present the same data in both a table
and a graph unless it's really necessary (aide-memoire:
it's *never*
really
necessary). All SPSS tests that you know have options to
get descriptive stats and/or graphs from the dialogue box
that you fill in to run the analysis. You can also get them
with Analyse > Descriptives... and the Graphs menu of
SPSS.

Note that in all statistical reporting, the cases are NOT
optional - where I've used lower-case or upper-case
letters, you MUST follow this example. Similarly,
where SPSS uses upper- or lower-case, you can usually
follow its lead (although SPSS does get it wrong in
places!). Getting this wrong *does*
matter,
as lower- and upper-case variants have different meanings -
for example, *p*
stands
for 'probability' whereas *P*
stands
for 'proportion'. Getting it wrong also immediately conveys
the impression that you either don't understand what you
are doing or don't care what you are doing, neither of
which looks good in assessed work...

### t-tests, Mann-Whitney tests, and Wilcoxon tests

These are all reported in a similar way. From the SPSS output you need the degrees of freedom (df), the *t*, *U*, or *W* value (depending on which test you've done) and the *p* value. These are reported as follows:

t-test: "*t*(df) = *t*-value, *p* value" e.g., "The two groups differed significantly from each other with *t*(14) = 9.56, *p* = .02"

Mann-Whitney: "*U*(df) = *U* value, *Z* = *z* value, *p* value" e.g., "The two groups did not differ significantly, *U*(18) = 3.16, *Z* = 1.12, n.s." OR "*W*(df) = *W* value, *Z* value, *p* value" if the stats package you're using gives *W* rather than *U* (like some versions of SPSS)

Wilcoxon: "*W*(df) = *W* value, *Z* value, *p* value" e.g., "The two sets of scores differed significantly with *W*(24) = 14.01, *Z* = 3.22, *p* = .007"
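Since the three templates differ only in the letter of the statistic, one small helper can generate any of them. A pure-Python sketch (the name `report_test`, the two-decimal rounding, and the n.s. cut-off at .05 are my own choices, following the conventions above):

```python
def report_test(statistic: str, df: int, value: float, p: float) -> str:
    """Build a report string like 't(14) = 9.56, p = .02'.
    `statistic` is 't', 'U', or 'W', matching the templates above."""
    if p < 0.05:
        p_text = f"p = {p:.2f}".replace("0.", ".", 1)
    else:
        p_text = "n.s."  # values at or above .05 are 'not significant'
    return f"{statistic}({df}) = {value:.2f}, {p_text}"
```

So `report_test("t", 14, 9.56, 0.02)` produces the t-test example string given above.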

### Correlations

Here you simply report the *r* value and, if appropriate, the corresponding *p*-value, e.g., "Smelliness and number of friends were correlated with *r* = -0.70, *p* = .02"
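If you want to sanity-check the *r* value SPSS gives you, it can be computed directly from the raw scores. A pure-Python sketch (the helper name `pearson_r` is mine; SPSS does this for you under its Correlate menu):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's correlation coefficient computed from raw scores:
    the covariance of x and y divided by the product of their
    standard deviations (constant factors of n cancel)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Perfectly linear data gives *r* = 1 (or -1 for a perfect negative relationship), matching the sign convention in the example above.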

### Regression

With simple linear regression the key things you need are the *R*-squared value and the equation. e.g., "Number of friends could be predicted from smelliness by the following formula: friends = -0.4 x smelliness + 0.6, *R*^2 = .49"

With multiple regression you again need the *R*-squared value, but you also need to report the influence of each predictor. This is often done by giving the standardised coefficient, beta (it's in the SPSS output table, labelled "Beta"), as well as the *p*-value for each predictor. If possible, use the Greek letter beta (β) in your report. Below, I've just written "Beta". e.g.,

"When number of friends was predicted it was found that smelliness (Beta = -0.59, p < .01), sociability (Beta = 0.41, p < .05) and wealth (Beta = 0.32, p < .05) were significant predictors. Shoe size was not a significant predictor (Beta = -0.02, n.s.). The overall model fit was R^2 = 0.47."

Depending on the question you are answering, you might also want to report the regression equation, either in normal equation form or in a table that gives the intercept and the unstandardised coefficients.
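The slope, intercept, and *R*-squared that go into a simple-regression report can likewise be checked against the raw scores. A pure-Python least-squares sketch (the function name and return convention are mine):

```python
def simple_regression(xs, ys):
    """Least-squares fit y = a*x + b, plus R-squared.
    Mirrors the 'formula plus R^2' style of report described above."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx            # slope (unstandardised coefficient)
    b = my - a * mx          # intercept
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r_squared = 1 - ss_res / ss_tot
    return a, b, r_squared
```

The slope and intercept are what you'd put in the equation-form report; *R*-squared is the proportion of variance in y explained by the model.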

### ANOVA

With one-way ANOVA you need to find the following in the SPSS output: the *F* value, the *p*-value, the error mean square, the degrees of freedom for the effect, and the degrees of freedom for the error term. They are reported as follows:

"*F*(df effect, df error) = *F*-value, *MSE* = mean-square error, *p*-value", e.g., "IQ scores differed significantly as a function of academic discipline, *F*(2,25) = 11.37, *MSE* = 236.43, *p* < .01".
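The *F* template above can also be generated mechanically. A pure-Python sketch (the helper name `report_f` is mine; the .05 and .01 cut-offs follow the *p*-value conventions described earlier in this guide):

```python
def report_f(df_effect: int, df_error: int,
             f_value: float, mse: float, p: float) -> str:
    """Build an ANOVA report string in the form described above,
    e.g. 'F(2,25) = 11.37, MSE = 236.43, p < .01'."""
    if p < 0.01:
        p_text = "p < .01"
    elif p < 0.05:
        p_text = "p < .05"
    else:
        p_text = "n.s."
    return f"F({df_effect},{df_error}) = {f_value:.2f}, MSE = {mse:.2f}, {p_text}"
```

The same string format serves for each main effect and interaction in the more complex designs below.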

If necessary, you also report the results of post-hoc tests. However, all you need do is say something like "post-hoc Tukey's HSD tests showed that psychologists had significantly higher IQ scores than the other two groups at the .05 level of significance. All other comparisons were not significant."

With more complex ANOVAs, you still report the same things. The only difference is that you need to report all the main effects and interactions. You also have to be careful to pull the right numbers from the SPSS output, especially with repeated-measures analyses.

A typical example of a split-plot analysis report might be: "The main effect of Gender was significant, *F*(1,19) = 7.91, *MSE* = 23.20, *p* < .01, as was the main effect of Time, *F*(3,19) = 12.70, *MSE* = 23.20, *p* < .01. The interaction of these two factors was not significant, *F*(3,19) = 2.71, *MSE* = 23.20, n.s."