Date: Tue, 1 May 2007 09:20:08 -0700
Reply-To: "Ornelas, Fermin" <FOrnelas2@azdes.gov>
Sender: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>
From: "Ornelas, Fermin" <FOrnelas2@azdes.gov>
Subject: Re: shapiro-wilks
Content-Type: TEXT/plain; charset="iso-8859-1"
Well, it is human to make mistakes, and it is also human to admit it when that happens, especially when one replies too fast.
I had the Ho and Ha reversed. It should have been:
Ho: distribution is normal
Ha: distribution is non normal
But the rest of the argument should be OK.
Fermin Ornelas, Ph.D.
Management Analyst III, AZ DES
Tel: (602) 542-5639
From: SPSSX(r) Discussion [mailto:SPSSX-L@LISTSERV.UGA.EDU] On Behalf Of Marta García-Granero
Sent: Tuesday, May 01, 2007 8:35 AM
Subject: Re: shapiro-wilks
Monday, April 30, 2007, 6:17:07 PM, You wrote:
OF> I think there is room for confusion here.
OF> Ho: the distribution is non-normal,
OF> Ha: the distribution is normal.
NO, NO, definitely NO. The null hypothesis for a normality test is
that the variable IS normal (believe me, I'm a statistics
teacher...). Null hypotheses, as I said in my previous mail, say that
there are no differences, no effects... In this particular case, it
says that the observed distribution is NOT different from the one we
would expect had the sample been drawn from a normal population.
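To make that decision rule concrete, here is a minimal sketch (in Python with scipy, not SPSS; the data are invented for illustration) of how the Shapiro-Wilk p-value is read against H0:

```python
from scipy import stats

# Hypothetical sample (invented for illustration, not from this thread)
heights = [172.1, 168.4, 175.0, 181.2, 169.9, 177.3, 173.5,
           170.8, 179.1, 174.6, 166.7, 176.2, 171.4, 178.0]

w, p = stats.shapiro(heights)

# H0: the sample was drawn from a normal population
# Ha: it was not
if p < 0.05:
    print(f"W = {w:.3f}, p = {p:.3f}: reject H0 (evidence of non-normality)")
else:
    print(f"W = {w:.3f}, p = {p:.3f}: do not reject H0 (no evidence against normality)")
```

In SPSS itself the same test is produced by EXAMINE with a normality plot request; the sketch above only illustrates the direction of the decision rule being discussed.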
OF> If p-value > alpha then conclude Ha.

No: p-value > alpha means "accept H0".

OF> It may not be wise to give more weight to the graph, especially if
OF> one is unfamiliar with the shapes of the distributions (long
OF> tails, short tails).
In big samples, tails are of very little importance (the leptokurtosis
effect dilutes faster with sample size than skewness does; I even read
a mathematical demonstration of that effect some time ago). Skewness,
on the other hand, is important and is easily spotted with a histogram.
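The sample-size sensitivity described here can be seen numerically. The following is an illustrative sketch (simulated data with invented parameters, in Python rather than SPSS): the same mildly skewed population is sampled at n = 15 and n = 2000, and the Shapiro-Wilk p-values behave very differently:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Mildly skewed population: lognormal with sigma = 0.3 (illustrative choice)
small = rng.lognormal(mean=0.0, sigma=0.3, size=15)    # small sample
large = rng.lognormal(mean=0.0, sigma=0.3, size=2000)  # big sample

_, p_small = stats.shapiro(small)
_, p_large = stats.shapiro(large)

# With n = 15 the test usually lacks the power to flag the mild skew;
# with n = 2000 the same departure from normality is flagged strongly.
print(f"n = 15:   p = {p_small:.3f}")
print(f"n = 2000: p = {p_large:.2e}")
```

This is the practical point of the advice below: with very large n, a significant p-value may reflect a departure too small to matter, so the histogram should guide the judgement.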
OF> Hi Christian
OF> Saturday, April 28, 2007, 4:30:08 AM, You wrote:
CH>> What is the null hypothesis of the Shapiro-Wilk test of univariate
CH>> normality? That is, if p < .05, does this indicate normality or
CH>> non-normality?
OF> In general, the null hypothesis for any statistical test is "no
OF> effect", "no differences". This means that for a normality test, the
OF> null hypothesis is "No differences from a normal distribution". p<.05
OF> means NON-NORMALITY.
OF> Anyway, remember that the p-value is not really informative. Normality
OF> tests have low power if sample size is low (don't use them for sample
OF> sizes below 10-12 cases), and are over-sensitive for very big samples
OF> (if n is bigger than 100 then take a look at the histogram with a
OF> normality curve plotted over it and decide if the variable looks
OF> normal enough).
Dr. Marta García-Granero, PhD  mailto:firstname.lastname@example.org
"It is unwise to use a statistical procedure whose use one does
not understand. SPSS syntax guide cannot supply this knowledge, and it
is certainly no substitute for the basic understanding of statistics
and statistical thinking that is essential for the wise choice of
methods and the correct interpretation of their results".
(Adapted from the WinPepi manual - I'm sure Joe Abramson will not mind)