Date: Fri, 28 Feb 2003 08:46:19 -0500
Sender: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>
From: Stephen Eyres <Eyres.SAT@forces.gc.ca>
Subject: Hampshire question: testing for significant systematic effects of two survey methods
Content-Type: text/plain; charset="ISO-8859-1"
Stephen Hampshire (26 Feb 03) is "dealing with a survey that has been
administered by two different methods (telephone and post)" and "would
like to be able to test for any significant systematic effect caused by
this difference." His question concerns the appropriate statistical
analysis:

"My gut tells me that we would need to look for a consistent effect
across all of the relevant variables, rather than testing for a
significant difference on each one - is that right?"
There are two kinds of systematic effects in which you may be interested:
a. differences in response/return rate as a function of method of delivery;
b. differences in patterns of responses to individual questions as a
function of method of delivery.
The first of these is a straightforward chi-square problem: cross-tabulate
delivery method against whether a response was obtained and test for
independence.
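As a sketch in SPSS syntax (the variable names are hypothetical: METHOD
coded 1 = telephone, 2 = post; RETURNED coded 0 = no, 1 = yes):

    CROSSTABS
      /TABLES=returned BY method
      /STATISTICS=CHISQ .

This produces the two-by-two table along with the Pearson chi-square test
of independence between delivery method and return.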
The second of these requires a more complicated analysis--but to put it
simply, you are right: you have one independent variable with two levels
(telephone vs. postal delivery) and multiple, most likely correlated,
dependent variables--a multivariate problem. The appropriate statistic is
Hotelling's T-squared, the multivariate analogue of the univariate t-test.
In the same way that the t-test is mathematically equivalent to a
two-group one-way analysis of variance, T-squared is mathematically
equivalent to a two-group multivariate analysis of variance (MANOVA),
which you can run within SPSS.
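In SPSS, the GLM procedure with several dependent variables listed before
BY produces the multivariate tests, including Hotelling's trace (which,
with a two-level factor, is equivalent to Hotelling's T-squared). A
minimal sketch, assuming hypothetical question variables Q1 to Q4:

    GLM q1 q2 q3 q4 BY method
      /PRINT=DESCRIPTIVE .

If the multivariate test is significant, the univariate tests for each
question can then be inspected to see where the method effect lies.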
Lieutenant-Colonel Stephen A.T. Eyres, Ph.D.
Director Human Resources Research and Evaluation 4
(Operational Effectiveness and Leadership)
National Defence Headquarters
CANADA K1A 0K2