Date: Wed, 19 May 2010 12:36:31 -0400
From: Art Kendall <Art@DrKendall.org>
Organization: Social Research Consultants
To: "Ginkel, Joost van"
Subject: Re: Significant difference in "wrong" direction

To elaborate on this good advice:
The null hypothesis is that one should stay with the {prevailing, status quo, default} decision about a {theory, practice, policy}. In an Anglo-American justice analog, the null hypothesis plays the role of the defendant: it is presumed {true, useful} unless there is sufficient evidence otherwise. In some other justice systems, failure to reach the criterion would mean the charge is "not proven".

The number of tails determines what is sufficient evidence.

Remember, statistical "significance" only tells us that a {difference, relation} is statistically distinguishable from randomness. It is necessary but not sufficient for a decision to go with the alternative {theory, practice, policy}.

Art Kendall
Social Research Consultants

On 5/19/2010 10:32 AM, Ginkel, Joost van wrote:
Dear Allan,

Technically, the one-sided p-value you get represents the probability of finding this mean difference A - B or larger. Now if the mean of B is larger than A, you get a negative difference, and you want to know the probability of finding this negative difference or larger. If the results were in the direction you would expect and SPSS reported a two-sided p-value of, say, 0.04, the one-sided p-value would be 0.02. However, when the mean difference goes in the opposite direction, you have to look at the other side of the distribution. Thus, in that case your p-value would become 1 - 0.02 = 0.98. This is probably not new to most statisticians.
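[Editor's note: the conversion Joost describes can be sketched in a few lines of Python. This is a hypothetical helper, not anything from SPSS; the function name and arguments are illustrative only.]

```python
def one_sided_p(two_sided_p, mean_diff, predicted_positive=True):
    """Convert a two-sided p-value to a one-sided p-value.

    If the observed mean difference lies in the predicted direction,
    the one-sided p is half the two-sided p; otherwise it is the
    complement, 1 - (two_sided_p / 2), i.e. the other tail.
    """
    in_predicted_direction = (mean_diff > 0) == predicted_positive
    half = two_sided_p / 2
    return half if in_predicted_direction else 1 - half

# Joost's example: SPSS reports a two-sided p of 0.04.
print(round(one_sided_p(0.04, mean_diff=1.5), 2))   # as predicted -> 0.02
print(round(one_sided_p(0.04, mean_diff=-1.5), 2))  # opposite direction -> 0.98
```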
About the philosophical part: what you should do depends entirely on the context, I think. For example, if a medicine cures people more slowly than a placebo although you expected it to cure people faster, that would be a reason to say: well, this medicine does more harm than good, so the null hypothesis should be rejected, but not in the way you would have wanted. Thus, say in the discussion that if you had done a two-sided test, or a one-sided test in the opposite direction, the result would have been significant, and that this implies the medicine is actually harmful. On the other hand, if you suspect a soda manufacturer of fraud, for example by putting less soda in each bottle on average than what the label says, and it turns out that there is actually more in each bottle than the label says, the manufacturer doesn't have to be sued. So stick to your original one-sided alternative hypothesis that the manufacturer puts less soda in the bottle. To summarize: I would look at the context.

Best regards,

Joost van Ginkel

Joost R. van Ginkel, PhD
Leiden University
Faculty of Social and Behavioural Sciences
PO Box 9555
2300 RB Leiden
The Netherlands
Tel: +31-(0)71-527 3620
Fax: +31-(0)71-527 1721

From: SPSSX(r) Discussion [mailto:SPSSX-L@LISTSERV.UGA.EDU] On Behalf Of Allan Lundy, PhD
Sent: 19 May 2010 15:56
To: SPSSX-L@LISTSERV.UGA.EDU
Subject: Significant difference in "wrong" direction

Dear Listers,
Like most statisticians, if I am predicting a result in a particular direction (mean of Group A larger than that of Group B, for example), I use a 1-tailed significance level. For example, in SPSS t-test output, if it reports p = .080 in a 2-tailed test, I simply count that as p = .040. However, from time to time, and again with a current client, I get an embarrassing result: like the above but in the "wrong" direction; that is, with the Group B mean larger. What does one do in this case? Obviously the hypothesis was not supported, but it has always seemed to me that such a result is meaningful -- not only is there no effect as predicted, but there is evidence for an effect in the opposite direction. I have generally treated this conservatively and reported this as a kind of super-rejection of the hypothesis. I figure that if you are using a 1-tailed test, it should apply just as much to harm you as help you. Of course, in a sense, this violates the presumption behind the concept of the 1-tailed test -- that is, there is not supposed to be any chance of getting a result in the far-left tail of a 1-tailed distribution. How to handle this must be a profound philosophical question in statistics, but I don't think I have ever seen it addressed in writing. Anybody know of anything on this? Any thoughts of your own?
Thanks!
Allan

Allan Lundy, PhD
Research Consulting
Allan.Lundy@comcast.net

Business & Cell (any time): 215-820-8100
Home (8am-10pm, 7 days/week): 215-885-5313
Address:  108 Cliff Terrace, Wyncote, PA 19095
Visit my Web site at www.dissertationconsulting.net
