Date:   Fri, 6 Aug 2010 16:54:52 -0400
Reply-To:   Jim Groeneveld <jim.1stat@YAHOO.COM>
Sender:   "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From:   Jim Groeneveld <jim.1stat@YAHOO.COM>
Subject:   Re: Significant vs Non-Significant Effects
Comments:   To: Billy Thompson <bill.thompson@BROOKS.AF.MIL>

Hi Bill,

What does significance mean, and how (or rather, when) do you investigate it?

First of all, the threshold for significance is usually set at the 95% confidence level (i.e. alpha = 0.05). What does that mean? It means that when you conclude, on the basis of your __a priori__ hypothesis, that the difference exists, you accept at most a 5% risk that this conclusion is wrong; that is a generally accepted error rate.

The 95% assurance does not mean that there certainly is a difference, or that the difference occurs in at least 95% of the cases. It only means that your conclusion that the difference exists in reality carries at most a 5% risk of being a false positive.

You do have an __a priori__ hypothesis, don't you? Without an __a priori__ hypothesis there is no significance testing. The hypotheses can be general and global (like the overall effect in a full ANOVA design), detailed (concerning a specific difference between certain groups), or both; both kinds can be written down explicitly, as in the sketch below.
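
For example, a minimal SAS sketch of stating both kinds of hypotheses before looking at the data, assuming a hypothetical dataset TRIAL with a three-level factor GROUP and an outcome RESPONSE (these names are mine, not from your study):

  /* Global hypothesis: any difference among the three groups.        */
  /* Detailed hypothesis: one specific pre-specified contrast, here   */
  /* the first versus the second level of GROUP (assumes 3 levels).   */
  proc glm data=trial;
     class group;
     model response = group;
     contrast 'group 1 vs group 2 (pre-specified)' group 1 -1 0;
  run;
  quit;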

Before testing you determine what to test and in what order. You __never__ deviate from that plan, agreed? If you do deviate, your conclusions are no longer valid; they are manipulated. Significance testing is not exploratory research; you cannot go searching for "significance".

Now, let's assume you had a hypothesis stating that the overall design would show a significant difference (in the sense above: a conclusion with at most a 5% risk of being false), and that you also had a hypothesis stating that a certain detail effect would be significant in the same way.

If you planned to test those hypotheses in that order, and agreed to stop testing once the overall effect turned out not to be significant, then you should indeed stop. If you nevertheless continued to test detail effects, and some of them apparently turned out "significant", that significance has no value. You would be fooling yourself by thinking that your conclusions on those detail effects still carry at most a 5% risk of being false.
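
To make that order concrete, here is a minimal SAS sketch of such a pre-planned sequence, again assuming the hypothetical TRIAL data with GROUP and RESPONSE; the pairwise comparisons are requested in the same run, but you only interpret them if the overall F test is significant:

  proc glm data=trial;
     class group;
     model response = group;               /* step 1: overall F test */
     /* Step 2, pre-planned: pairwise comparisons with a multiple-   */
     /* comparison adjustment; interpret these only if the overall   */
     /* F test above is significant.                                 */
     lsmeans group / pdiff adjust=tukey;
  run;
  quit;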

The only thing you could do, when you find such detail "significance", is to revise your hypotheses and test them with completely new data. Significance testing does not test differences or other effects (like equivalence) as such; it only tests hypotheses, i.e. expectations about those differences or effects stated in advance. And there is always the (generally accepted) small (<5%) chance that your conclusions are still wrong.

Statistics is not magic or hocus-pocus; it never 'proves' anything with 100% certainty, it only shows how likely something is. So I would stick to the rules of statistics and to the test plan laid out in advance, and I would not fiddle around until something "significant" turns up. If you search long enough you will always find something "significant" by chance (expect it in roughly 5% of your tests) without it being true: a purely coincidental finding without any real significance, without any meaning.
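
As an illustration (my own toy example, not your data), the following SAS sketch runs 100 two-sample t tests on pure random noise; on average about 5 of them will come out "significant" at the 0.05 level, even though there is no real effect anywhere:

  /* Simulate 100 two-group comparisons with NO true difference. */
  data noise;
     call streaminit(20100806);
     do test = 1 to 100;
        do subject = 1 to 20;
           group = (subject > 10);    /* two arbitrary groups  */
           y = rand('normal');        /* pure noise, no effect */
           output;
        end;
     end;
  run;

  proc ttest data=noise;
     by test;
     class group;
     var y;
     ods output ttests=results;       /* collect the p values  */
  run;

  /* Count how many of the 100 tests are "significant" purely by chance. */
  proc sql;
     select sum(probt < 0.05) as false_positives
     from results
     where method = 'Pooled';
  quit;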

Beware! I hope these warnings are clarifying.

Regards - Jim. -- Jim Groeneveld, Netherlands Statistician/SAS consultant http://jim.groeneveld.eu.tf

On Fri, 6 Aug 2010 13:15:31 -0400, Billy Thompson <bill.thompson@BROOKS.AF.MIL> wrote:

>If there is a non-significant overall ANOVA you would not perform post hoc
>analysis on individual factors. However, let's say in the results you get
>a non-significant overall p value, yet you get a significant effect on one
>or more factors in the Type IV source table, wouldn't you consider these
>significant effects and perform post hoc on those factors found
>significant?

