Date: Thu, 17 Feb 2005 13:49:16 -0800
Reply-To: Gregory Hildebrandt <firstname.lastname@example.org>
Sender: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>
From: Gregory Hildebrandt <email@example.com>
Subject: Re: corr vs regression
Content-Type: text/plain; charset=us-ascii
If the data are standardized, the partial multiple regression coefficients would equal the partial correlation coefficients. Both would be unit-free measures.
If the data are not standardized, then, for a particular x, the partial regression coefficient equals the partial correlation coefficient times the standard deviation of the part of y uncorrelated with the other independent variables, divided by the standard deviation of the part of that x uncorrelated with the remaining independent variables. As a result, with non-standardized data, the partial regression coefficient depends on the units while the partial correlation coefficient does not.
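The relationship above can be checked numerically. This is a minimal sketch with simulated data (the variable names x1, x2 and the data-generating numbers are illustrative, not from the original post): residualize y and one predictor on the remaining predictor, and confirm that the full-regression coefficient equals the partial correlation times the ratio of residual standard deviations.

```python
import numpy as np

# Illustrative simulated data: y regressed on x1 and x2.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(size=n)

# Full regression; b[1] is the partial regression coefficient on x1.
X = np.column_stack([np.ones(n), x1, x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Residualize y and x1 on the remaining predictor (x2, plus intercept).
Z = np.column_stack([np.ones(n), x2])
e_y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
e_x1 = x1 - Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]

# Partial correlation of y and x1, controlling for x2.
r_partial = np.corrcoef(e_y, e_x1)[0, 1]

# Partial regression coeff = partial corr * sd(residual y) / sd(residual x1).
b_from_r = r_partial * e_y.std() / e_x1.std()
print(np.isclose(b[1], b_from_r))  # True
```

This identity is exact (it is the Frisch-Waugh result), not an approximation that holds only in large samples.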
I believe that the statistical test that the partial correlation coefficient is not equal to zero is the same as the t test that the partial regression coefficient is not equal to zero.
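The equivalence of the two tests can also be verified directly. The sketch below (again with illustrative simulated data, not data from the original thread) computes the regression t statistic for one coefficient and the t statistic for the corresponding partial correlation, t = r * sqrt(df / (1 - r^2)) with df = n minus the number of fitted parameters, and shows they coincide.

```python
import numpy as np

# Illustrative simulated data: y regressed on x1 and x2.
rng = np.random.default_rng(1)
n = 100
x1 = rng.normal(size=n)
x2 = 0.4 * x1 + rng.normal(size=n)
y = 0.3 * x1 + 0.7 * x2 + rng.normal(size=n)

# OLS fit and the usual t statistic for the x1 coefficient.
X = np.column_stack([np.ones(n), x1, x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ b
df = n - X.shape[1]                      # n - (intercept + 2 predictors)
s2 = resid @ resid / df                  # residual variance estimate
cov_b = s2 * np.linalg.inv(X.T @ X)
t_reg = b[1] / np.sqrt(cov_b[1, 1])

# Partial correlation of y and x1, controlling for x2.
Z = np.column_stack([np.ones(n), x2])
e_y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
e_x1 = x1 - Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]
r = np.corrcoef(e_y, e_x1)[0, 1]

# t test that the partial correlation is zero, on the same df.
t_corr = r * np.sqrt(df / (1.0 - r**2))

print(np.isclose(t_reg, t_corr))  # True: the two t statistics coincide
```

Since both statistics and their degrees of freedom are identical, the p-values are identical as well, which is why the choice between reporting the two is one of interpretation rather than inference.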
A practical factor is that unless the audience receiving the information about partial correlation coefficients is statistically sophisticated, they may have a difficult time understanding how, with a fairly large sample, the partial correlation coefficient can be quite low while the partial regression coefficient is statistically significant.
Typically, I would recommend using the partial regression coefficient rather than the partial correlation coefficient. However, if the independent variables are highly correlated, the partial correlation coefficient may provide some insights about multicollinearity.
"Chelminski, Iwona" <IChelminski@lifespan.org> wrote:
Would it ever be appropriate to run 5 separate partial correlations (between
one variable and 5 others) instead of regression analysis with this one
variable as DV and 5 others as predictors?
Thanks in advance!