Date: Thu, 31 Mar 2005 09:37:49 -0500
Reply-To: "Chelminski, Iwona" <IChelminski@lifespan.org>
Sender: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>
From: "Chelminski, Iwona" <IChelminski@lifespan.org>
Subject: Re: Measuring diagnostic accuracy
Content-Type: text/plain; charset="iso-8859-1"
If you have the so-called "gold standard," why not discuss your data in
terms of sensitivity, specificity, positive predictive value, negative
predictive value, and kappa?
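For a dichotomous diagnosis, all of these can be obtained from a crosstabulation of each rater's call against the gold standard. A minimal sketch, assuming the variable names in the quoted post below and 0/1 diagnosis codes; kappa is requested directly, while sensitivity, specificity, PPV, and NPV are read from the cell and row/column percentages:

```spss
* Each rater's diagnosis against the gold standard; /STATISTICS=KAPPA
  adds Cohen's kappa (sketch: variable names as in the quoted post).
CROSSTABS
  /TABLES=gp_diagnosis spec_diagnosis BY final_diagnosis
  /STATISTICS=KAPPA
  /CELLS=COUNT ROW COLUMN.
```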
From: SPSSX(r) Discussion [mailto:SPSSX-L@LISTSERV.UGA.EDU] On Behalf Of
Sent: Wednesday, March 30, 2005 3:37 PM
Subject: Measuring diagnostic accuracy
I'm hoping someone can suggest the most appropriate way to test whether
the proportion of accurate diagnoses made by one set of MDs is
statistically different from the proportion of accurate diagnoses made
by another set.
I have three variables that record the doctors' diagnoses. The first
records the diagnosis by a generalist, the second records the diagnosis
of the specialist, and the third variable is the final diagnosis, or the
"gold standard" against which I measure the diagnostic accuracy of both
the generalist and the specialist.
Next, I create two variables to record whether the generalist's
diagnosis matched the final; likewise for the specialist. I use logic
in a COMPUTE statement to create the binary variables, thus:
COMPUTE gp_accuracy = gp_diagnosis = final_diagnosis.
COMPUTE spec_accuracy = spec_diagnosis = final_diagnosis.
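A quick sanity check on the derived indicators (a sketch; it assumes valid diagnosis codes, and cases with a missing diagnosis will show as system-missing):

```spss
* Confirm the accuracy flags are 0/1 and inspect the proportions.
FREQUENCIES VARIABLES=gp_accuracy spec_accuracy.
```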
So, where the diagnosis (generalist or specialist) matches the final
diagnosis, the accuracy variable records a "1"; a mismatch records a
"0". The proportion of accurate diagnoses for the generalists is
approximately 25%; for the specialists it is approximately 78%. Would a
straightforward chi-square analysis here, using the two binary variables
above, be appropriate?
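For reference, one way such an analysis could be requested in SPSS (a sketch using the two binary variables above; note that because both indicators are measured on the same cases, CROSSTABS can also produce McNemar's test for paired proportions):

```spss
* Crosstabulate the two accuracy indicators; CHISQ requests Pearson's
  chi-square and MCNEMAR the paired-proportions test.
CROSSTABS
  /TABLES=gp_accuracy BY spec_accuracy
  /STATISTICS=CHISQ MCNEMAR.
```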
Many thanks in advance.