Date: Mon, 25 Jun 2001 09:17:42 -0300
Sender: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>
From: Hector Maletta <hmaletta@FIBERTEL.COM.AR>
Subject: Re: appropriate analyses for likert scale items
Content-Type: text/plain; charset=us-ascii
Quite apart from the problem of how to treat Likert scales:
Just a question about David Hitchin's comments on J.Driscoll's question:
I wonder whether he is duly making a clear distinction between the
distribution of the variable itself, which can have any shape, and the
sampling distribution of the variable (which should be normal or
near-normal for parametric tests to be applied). The sampling
distribution is the theoretical distribution of sample means around the
population mean, which tends to normality, by the central limit theorem,
as sample size increases. The distribution of the variable in the actual
sample is the distribution of individual values around the sample mean,
and it can have any shape.
Universidad del Salvador
Buenos Aires, Argentina
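The distinction drawn above can be illustrated with a short simulation (my own sketch in Python with NumPy, not part of the original post): a heavily skewed 5-point item whose sample means are nonetheless tightly and near-normally distributed around the population mean.

```python
# Sketch (not from the original post): the variable itself vs. the
# sampling distribution of its mean.
import numpy as np

rng = np.random.default_rng(0)
values = np.array([1, 2, 3, 4, 5])
probs = np.array([0.50, 0.25, 0.15, 0.07, 0.03])  # a J-shaped Likert item

# Distribution of the variable itself: discrete and strongly skewed.
population = rng.choice(values, size=100_000, p=probs)

# Sampling distribution of the mean: many samples of size n = 50.
n = 50
means = np.array([rng.choice(values, size=n, p=probs).mean()
                  for _ in range(5_000)])

mu = (values * probs).sum()
print("population mean:", round(mu, 3))
print("mean of sample means:", round(means.mean(), 3),
      " spread:", round(means.std(), 3))
```

The sample means cluster symmetrically around the population mean with a much smaller spread than the raw item, even though the item itself is nothing like normal.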
David Hitchin wrote:
> --On 23 June 2001 06:30 -0700 j driscoll <johnd2222@YAHOO.COM> wrote:
> > A reviewer suggested that manovas were not correct
> > tests for likert scale items. (The author of the scale
> > constructed and validated the measure with parametric
> > stats...1976). Indeed I found an spss article
> > further documenting this.
> > http://www.uni.edu/its/us/document/stats/spss2.html
> > The problem is that I have 84 items that form
> > 24 composites. yes, I could do chi-square on 84
> > items, but want to look at the 24 composites. Does
> > anyone have a reference for combining ordinal
> > data into composites...is that possible...advisable...
> > meaningful?
> > thanks!
> When analysing single items there are very good reasons for using
> non-parametric tests. Likert scales are not necessarily equal interval
> scales and they are not normal variables because they have limited ranges,
> can take only integer values, and may be J, L or U-shaped. Even if some
> variables in some groups seem to have a nice symmetric bell-shaped
> distribution, it is very untidy to treat these as normally distributed
> (thus appropriate for t-tests) and to treat the others by non-parametric
> tests.
> The Mann-Whitney test has an efficiency of 95% compared with a t-test on a
> truly normal variable, so little is lost by using it even on normal
> variables; when the variables are not normal the comparison isn't worth
> doing because the normal tests cannot be strictly justified - although
> t-tests and Mann-Whitney tests may produce very similar p-values in many
> cases.
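The point about similar p-values is easy to check with a quick simulation (my own sketch, not from the post), running both tests on the same genuinely normal data with SciPy:

```python
# Sketch, not from the original post: compare t-test and Mann-Whitney
# p-values on data that really is normal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 200)          # group 1
b = rng.normal(0.3, 1.0, 200)          # group 2, shifted by 0.3 SD

t_p = stats.ttest_ind(a, b).pvalue
u_p = stats.mannwhitneyu(a, b, alternative="two-sided").pvalue
print(f"t-test p = {t_p:.4g}, Mann-Whitney p = {u_p:.4g}")
```

On normal data like this, the two tests typically lead to the same conclusion, consistent with the ~95% efficiency figure quoted above.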
> You can still use non-parametric tests with two or three variables at a
> time (rank correlations and partial correlations) and Kendall's coefficient
> of concordance can be applied to any number of items, but non-parametric
> between-group tests for multiple variables don't exist.
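The rank-based tools mentioned here are straightforward to compute. A sketch (mine, with invented toy data) using SciPy for Spearman and Kendall correlations, with Kendall's W computed by hand since SciPy has no built-in for it (ties correction omitted):

```python
# Sketch with invented toy data: rank correlations plus Kendall's W.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.integers(1, 6, 100)                      # a 5-point item
y = np.clip(x + rng.integers(-1, 2, 100), 1, 5)  # a related item

rho, _ = stats.spearmanr(x, y)                   # rank correlation
tau, _ = stats.kendalltau(x, y)

# Kendall's coefficient of concordance W for m raters ranking n items.
base = np.arange(6, dtype=float)                 # a "true" ordering of 6 items
ranks = np.array([stats.rankdata(base + rng.normal(0, 1.0, 6))
                  for _ in range(4)])            # 4 noisy raters
m, n = ranks.shape
R = ranks.sum(axis=0)                            # rank sums per item
W = 12 * ((R - R.mean()) ** 2).sum() / (m ** 2 * (n ** 3 - n))
print(f"rho = {rho:.2f}, tau = {tau:.2f}, W = {W:.2f}")
```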
> The only solution in these cases, for such techniques as factor analysis
> and the multivariate analysis of variance, is to check that most of the
> variables are reasonably symmetric in distribution, and if they are, to
> treat them as if they were normal, using the usual normal-based tests.
> Yes, the reviewer was right, manovas aren't strictly appropriate, but if
> there aren't any alternatives you may have to use manova as the best method
> which is available.
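A screening step of the kind described, checking each composite for rough symmetry before falling back on MANOVA, might look like this (my own sketch; the cutoff of |skewness| < 1 is a common rule of thumb, not something from the post):

```python
# Sketch: screen 24 composite scores for approximate symmetry
# before treating them as normal in a MANOVA.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Stand-in data: 300 respondents x 24 composites, mildly non-normal.
composites = rng.normal(size=(300, 24)) + 0.1 * rng.gamma(2.0, size=(300, 24))

skews = stats.skew(composites, axis=0)
symmetric = np.abs(skews) < 1.0        # rough rule of thumb
print(f"{symmetric.sum()} of {symmetric.size} composites look symmetric")
```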
> When you take linear combinations of independently distributed variables,
> the combinations tend to be closer to the normal distribution than the
> individual variables. This doesn't quite amount to the Central Limit
> Theorem which requires rather more precise conditions, but in practice the
> normal-based tests work quite well, and give useful results.
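This tendency is easy to see numerically (again my own sketch): the skewness of a 24-item composite built from heavily J-shaped items is far smaller than that of any single item.

```python
# Sketch: summing 24 independent J-shaped items yields a composite
# whose distribution is much less skewed than the individual items.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
values = np.array([1, 2, 3, 4, 5])
probs = np.array([0.50, 0.25, 0.15, 0.07, 0.03])   # J-shaped item
items = rng.choice(values, size=(2000, 24), p=probs)

item_skew = stats.skew(items[:, 0])        # one raw item
comp_skew = stats.skew(items.sum(axis=1))  # 24-item composite
print(f"single item skew = {item_skew:.2f}, composite skew = {comp_skew:.2f}")
```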
> If, for example, you get a calculated p-value of 0.00001 or 0.80 you KNOW
> what is going on, even if the true values might have been a little
> different; true p-values of 0.005 or 0.75 don't tell a different story.
> David Hitchin