Date: Mon, 25 Jun 2001 09:44:02 +0100
Reply-To: David Hitchin <D.H.Hitchin@SUSSEX.AC.UK>
Sender: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>
From: David Hitchin <D.H.Hitchin@SUSSEX.AC.UK>
Subject: Re: appropriate analyses for likert scale items
Content-Type: text/plain; charset=us-ascii
--On 23 June 2001 06:30 -0700 j driscoll <johnd2222@YAHOO.COM> wrote:
> A reviewer suggested that manovas were not correct
> tests for likert scale items. (The author of the scale
> constructed and validated the measure with parametric
> stats...1976). Indeed I found an spss article
> further documenting this.
> The problem is that I have 84 items that form
> 24 composites. yes, I could do chi-square on 84
> items, but want to look at the 24 composites. Does
> anyone have a reference for combining ordinal
> data into composites...is that possible...advisable...
When analysing single items there are very good reasons for using
non-parametric tests. Likert scales are not necessarily equal-interval
scales, and they are not normal variables because they have limited ranges,
can take only integer values, and may be J-, L- or U-shaped. Even if some
variables in some groups seem to have a nice symmetric bell-shaped
distribution, it is very untidy to treat these as normally distributed
(and thus appropriate for t-tests) while treating the others by
non-parametric methods.
The Mann-Whitney test has an efficiency of 95% compared with a t-test on a
truly normal variable, so little is lost by using it even on normal
variables; when the variables are not normal the comparison isn't worth
making, because the normal-based tests cannot be strictly justified -
although t-tests and Mann-Whitney tests may produce very similar p-values
in many cases.
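As a quick illustration of that last point (not from the original post - a
sketch using simulated 5-point Likert responses and scipy), the two tests
often land on much the same p-value:

```python
# Hypothetical example: compare t-test and Mann-Whitney p-values on
# simulated 5-point Likert items for two groups with slightly
# different response tendencies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.choice([1, 2, 3, 4, 5], size=200,
               p=[0.10, 0.20, 0.30, 0.25, 0.15])
b = rng.choice([1, 2, 3, 4, 5], size=200,
               p=[0.15, 0.25, 0.30, 0.20, 0.10])

t_p = stats.ttest_ind(a, b).pvalue
u_p = stats.mannwhitneyu(a, b, alternative="two-sided").pvalue
print(f"t-test p = {t_p:.4f}, Mann-Whitney p = {u_p:.4f}")
```

With integer-valued, possibly skewed data like this, only the Mann-Whitney
result is strictly defensible, but the two p-values will usually tell the
same story.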
You can still use non-parametric tests with two or three variables at a
time (rank correlations and partial correlations) and Kendall's coefficient
of concordance can be applied to any number of items, but non-parametric
between-group tests for multiple variables don't exist.
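For anyone who wants Kendall's W and doesn't have it to hand, here is a
minimal sketch (my own helper function, not an SPSS or scipy built-in; it
ignores the correction for ties):

```python
# Kendall's coefficient of concordance W for m raters ranking n items.
# W = 12*S / (m^2 * (n^3 - n)), where S is the sum of squared deviations
# of the item rank-sums from their mean.  No tie correction applied.
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings):
    """ratings: (m raters x n items) array of scores; higher = preferred."""
    m, n = ratings.shape
    ranks = np.apply_along_axis(rankdata, 1, ratings)  # rank within rater
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three raters who agree perfectly on the ordering of four items
agree = np.array([[1, 2, 3, 4],
                  [2, 3, 4, 5],
                  [1, 3, 4, 5]])
print(kendalls_w(agree))  # 1.0 (perfect concordance)
```

W runs from 0 (no agreement) to 1 (identical rankings from every rater).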
The only solution in these cases, for such techniques as factor analysis
and the multivariate analysis of variance, is to check that most of the
variables are reasonably symmetric in distribution, and if they are, to
treat them as if they were normal, using the usual normal-based tests.
Yes, the reviewer was right: manovas aren't strictly appropriate, but if
there are no alternatives you may have to use manova as the best method
available.
When you take linear combinations of independently distributed variables,
the combinations tend to be closer to the normal distribution than the
individual variables. This doesn't quite amount to the Central Limit
Theorem, which requires rather more precise conditions, but in practice the
normal-based tests work quite well and give useful results.
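You can see this effect in a small simulation (again my own sketch, not
part of the original post): a single J-shaped item is badly skewed, but an
8-item composite built from independent copies of it is much more
symmetric.

```python
# Demonstrate that summing independent Likert items reduces skewness,
# pulling the composite towards a bell shape.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
p = [0.50, 0.25, 0.12, 0.08, 0.05]  # J-shaped single-item distribution
single = rng.choice([1, 2, 3, 4, 5], size=5000, p=p)
composite = rng.choice([1, 2, 3, 4, 5], size=(5000, 8), p=p).sum(axis=1)

print(f"single-item skewness:     {stats.skew(single):.2f}")
print(f"8-item composite skewness: {stats.skew(composite):.2f}")
```

For sums of independent items the skewness shrinks roughly in proportion
to the square root of the number of items combined, which is why composites
are so much better behaved than the items themselves.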
If, for example, you get a calculated p-value of 0.00001 or 0.80, you KNOW
what is going on, even if the true values might have been a little
different; true p-values of 0.005 or 0.75 wouldn't tell a different story.