Date:   Thu, 4 Dec 1997 21:01:15 GMT
Reply-To:   Richard F Ulrich <wpilib+@PITT.EDU>
Sender:   "SPSSX(r) Discussion" <SPSSX-L@UGA.CC.UGA.EDU>
From:   Richard F Ulrich <wpilib+@PITT.EDU>
Organization:   University of Pittsburgh
Subject:   Re: Noncentrality & Power

I think it was about 6 months ago that this NetGroup had a discussion about the MANOVA power statements. I can make comments by assuming that GLM in 7.5 is doing what MANOVA did in 6.1. You might look in DejaNews for more detail. I hope David Nichols or someone will correct me if I don't repeat the earlier conclusions, or if GLM behaves differently from MANOVA.

Burton L. Alperson (balpers@calstatela.edu) wrote:
: Version 7.5 automatically includes "Noncent. Parameter" and "Observed
: Power" on GLM output.
:
: Why should I care about these values for data that have already been
: collected and analyzed?
:
: According to the SPSS Advanced Stat manual, "The power gives the
: probability that the F test will[sic!] detect the differences between
: groups equal to those implied by the sample differences." Since I
: already have the p value of F in the output, what do I gain by knowing
: "Noncent. Parameter" and "Observed Power?"
:
: What am I missing?

- "Observed Effect" is what is tested. It includes an underlying effect, and a contribution of bias with is bigger with smaller samples, or with bigger designs.

"Underlying Effect" is what you usually do a power analysis on, so what SPSS provides is unusual, and needs careful attention, to figure how it does make sense, since it is not obvious.

Since R-squared is always positive, and it is bigger (by chance) with more variables, consider the logical equation -

Observed = Underlying + Bias

- These can be regarded, approximately, as simply adding terms of variance, or a version of R-squared. For simple regression, the Bias (the expected R^2 when the Underlying effect is zero) is p/(n-1), where p is the number of variables and n is the sample size.
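As a quick check of that p/(n-1) figure, here is a minimal simulation sketch (mine, not part of the original exchange; plain Python with numpy, assuming predictors and outcome are pure noise):

  import numpy as np

  rng = np.random.default_rng(0)
  n, p, reps = 30, 5, 2000            # sample size, number of predictors, simulated datasets

  r2 = []
  for _ in range(reps):
      X = rng.standard_normal((n, p))
      y = rng.standard_normal(n)      # y is unrelated to X: the Underlying effect is 0
      X1 = np.column_stack([np.ones(n), X])
      beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
      resid = y - X1 @ beta
      r2.append(1 - resid @ resid / ((y - y.mean()) ** 2).sum())

  print("mean observed R^2:", np.mean(r2))   # comes out near p/(n-1) = 5/29, about .17
  print("p/(n-1):          ", p / (n - 1))

With nothing but noise, the average Observed R^2 still sits near .17 for this design; that is the Bias term.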

Let us say an *Observed* regression has an R^2 of .30, where the Bias is the whole Observed effect. Then the test statistic is not at all significant, because it is just chance. But for the *same* sample size and design, what would the POWER be if the *Underlying* effect were .30?

If the Underlying effect were that big, then the projected, hypothetical outcome would be the sum of the Underlying and the Bias - properly combined, an R^2 of .5 or .6 - and it would have notably better power than the experiment actually being reported on, where Underlying = 0.
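To make the mechanics concrete, here is a sketch of the sort of calculation that produces an "observed power" column: the observed F is converted to a noncentrality parameter, and the power of the F test is then evaluated at that noncentrality. The lambda = F * df1 convention and the scipy functions are my illustration of the idea, not a claim about SPSS's exact algorithm.

  from scipy import stats

  def observed_power(F, df1, df2, alpha=0.05):
      """Power of the F test if the Underlying effect equaled the Observed one."""
      lam = F * df1                                # noncentrality taken from the observed F
      f_crit = stats.f.ppf(1 - alpha, df1, df2)    # critical value under the null
      return stats.ncf.sf(f_crit, df1, df2, lam)   # P(noncentral F exceeds the critical value)

  # Example: a modest, nonsignificant F on a small design
  print(observed_power(F=2.0, df1=5, df2=24))      # power at the observed effect size

Because lambda is built from the Observed effect, the Bias is folded straight into the power figure, which is the circularity the rest of this post is warning about.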

- That may seem silly, but that is how it works. I suggest that you ignore the "power" section of the computer output, unless you are sure you understand everything about what I am calling "Bias", the capitalizing on chance owing to the degrees of freedom of the design. For simple designs and large n, you might not be enormously wrong if you try to guess (otherwise) what it is that the printout should be telling you.

- I was misled, and misled other people, in my early, occasional uses of MANOVA, because I made the mistake of assuming that the power statement should be something useful and intuitively meaningful; but it is not.

I recommend the very late chapters of the 1989 edition of Cohen's book on power analysis, for more information on estimating multivariate power. (Actually, even more strongly, I recommend that you reduce your problem to something simple enough that you do not need to read up on "multivariate" considerations.)

Rich Ulrich, biostatistician                wpilib+@pitt.edu
http://www.pitt.edu/~wpilib/index.html      Univ. of Pittsburgh

