Date: Tue, 10 Dec 2002 14:46:51 -0500
Reply-To: "Burleson,Joseph A." <email@example.com>
Sender: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>
From: "Burleson,Joseph A." <firstname.lastname@example.org>
Subject: Re: Interaction terms
Content-Type: text/plain; charset="iso-8859-1"
No, you do not need to reverse the one variable. If you did, it would simply
reverse the sign of that main effect and, consequently, the sign of any
interaction involving that variable.
Note: Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and
interpreting interactions. Newbury Park, CA: Sage Publications. It really
gets down to details on all sorts of interactions.
From their advice:
1. Transform continuous variables so that they are not appreciably skewed.
Failing that, dichotomize them, or group some other way.
2. Center each main effect first (subtract the mean from the original
variable), such that the new mean is zero.
3. Multiply the two main effects, using COMPUTE, to get the interaction.
4. Consider entering the interaction sequentially, AFTER the antecedent main
effects have been entered into the equation, and note the additional variance
(R-square) accounted for.
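The steps above can be sketched in SPSS syntax. This is only an illustration: the variable names (x, y, depression) come from the question below, the means used for centering (3.20, 5.10) are placeholders you would replace with the values reported by DESCRIPTIVES, and the new variable names (xc, yc, xy) are arbitrary.

```spss
* Step 2: get the means, then center each predictor by hand.
DESCRIPTIVES VARIABLES=x y.
COMPUTE xc = x - 3.20.
COMPUTE yc = y - 5.10.
* Step 3: multiply the centered predictors to form the interaction.
COMPUTE xy = xc * yc.
EXECUTE.
* Step 4: enter the interaction after the main effects;
* CHA requests the R-square change statistics.
REGRESSION
  /STATISTICS COEFF R CHA
  /DEPENDENT depression
  /METHOD=ENTER xc yc
  /METHOD=ENTER xy.
```

The R-square change on the second ENTER block is the additional variance accounted for by the interaction.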
From: Beckman, Anthony [mailto:Anthony_Beckman@URMC.Rochester.edu]
Sent: Tuesday, December 10, 2002 9:54 AM
Subject: Interaction terms
I have a methodological question regarding the building of interaction terms
for multivariate linear regression. There are two variables I wish to
combine in order to examine their interaction. However, the way each
relates to the response variable differs: the first (x) has a positive
correlation with the response variable, depression, while the second (y) has a
negative correlation with it. Is there a specific way to account for this
when creating an interaction term, other than simply taking the product xy?
Ought I to transform y by reverse coding?
Any and all input will be helpful.