Date: Thu, 2 Jul 1998 16:40:54 -0500
Reply-To: "Nichols, David" <nichols@SPSS.COM>
Sender: "SPSSX(r) Discussion" <SPSSX-L@UGA.CC.UGA.EDU>
From: "Nichols, David" <nichols@SPSS.COM>
Subject: Re: prblm w. mkappasc macro?
As Rich Ulrich suggested, I'll be glad to clarify the calculations if given
the data. I've tested the macro numerous times and I'm fairly certain the
calculations are accurate. I don't particularly recommend that measure,
though, as it's a measure of consistency rather than of absolute agreement.
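For readers puzzled by a kappa well below the raw percent agreement: kappa corrects for agreement expected by chance, so when ratings cluster in a few categories the chance term is large and kappa drops sharply. Below is a minimal Python sketch of the standard Fleiss-style multi-rater kappa; it is an illustration of the statistic in general, not a claim about what MKAPPASC computes internally.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a subjects-by-categories table of rater counts.

    counts[i][j] = number of raters assigning subject i to category j;
    every row must sum to the same number of raters m.
    """
    n = len(counts)                 # number of subjects
    m = sum(counts[0])              # raters per subject
    k = len(counts[0])              # number of categories

    # Observed agreement per subject, averaged over subjects.
    p_obs = sum(
        (sum(c * c for c in row) - m) / (m * (m - 1)) for row in counts
    ) / n

    # Chance agreement from the marginal category proportions.
    totals = [sum(row[j] for row in counts) for j in range(k)]
    p_chance = sum((t / (n * m)) ** 2 for t in totals)

    return (p_obs - p_chance) / (1 - p_chance)
```

With nearly all ratings concentrated in one category, raw agreement can exceed 95% while this kappa sits near zero, so a kappa of .47 alongside 91% agreement is not by itself evidence of a miscalculation.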
Principal Support Statistician and
Manager of Statistical Support
From: mark swiencicki [SMTP:MAS93012@UCONNVM.UCONN.EDU]
Sent: Thursday, June 18, 1998 11:19 AM
Subject: prblm w. mkappasc macro?
Has anyone else had a problem running the MKAPPASC macro for
overall kappa scores for multiple judges? When I run a job with
6 coders who have 91% agreement spread out over 3 values, it gave me a
kappa of .47 when it should have been about .88. Could it be overlooking
cases with full agreement? I downloaded the file off the SPSS web site,
saved it as an SPSS text file, then used the proper commands in the syntax
window: INCLUDE MKAPPASC.SPS.
MKAPPASC VARS=varlist (with my 6 variables named here). P.S. I used
consecutively numbered integers from 1 on up, so that can't be the problem.
Signed, Help me in Connecticut!
Mark Swiencicki - Univ. of Conn. Sociology