Date: Mon, 20 Mar 2006 14:18:31 -0500
Reply-To: Peter Flom <Flom@NDRI.ORG>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Peter Flom <Flom@NDRI.ORG>
Subject: Master Ian and Master David
Content-Type: text/plain; charset=US-ASCII
>>> David L Cassell <davidlcassell@MSN.COM> 3/20/2006 1:45:11 PM >>> wrote:
Master Ian sagely pondered:
>I know beans about statistics, but I read newspapers and have experience
>in education. I would ask:
> 1) How do you know the tests measure anything worthwhile?
> 2) How do you know the tests were scored correctly?
>Perhaps David can supply the correct procedures to answer these questions,
>or perhaps such questions are no longer relevant in the US.
Ian, next you need to hold a light saber and ask the questions with the
words inverted. "Worthwhile the questions are?" :-) :-)
Unfortunately perspicacious of you. As I pointed out, the meaning of the scores depends on lots of things which have little to do with the intent of the scoring. But there's a general assumption that tests actually test the intended goal. Anna may spend the next 40 years fighting that battle. It's a major issue in a lot of the sociological literature.

And questionnaire design is a fundamentally important part of survey sampling, precisely because it is so easy to mess up and not get the results you are after.
As for whether the tests were scored properly -- typically, we punt on that.
Bertrand Russell famously remarked that "mathematics is the only subject in which we never know what we are talking about, nor whether what we are saying is true".
Of course, he probably didn't read much educational research.....
nor sociology.....nor psychology......
oh well, we're social scientists, we can explain anything :-)
Peter L. Flom, PhD
Assistant Director, Statistics and Data Analysis Core
Center for Drug Use and HIV Research
National Development and Research Institutes
71 W. 23rd St
New York, NY 10010
(212) 845-4485 (voice)
(917) 438-0894 (fax)