Date: Wed, 12 Jul 2006 14:28:11 -0400
Reply-To: Gene Maguin <firstname.lastname@example.org>
Sender: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>
From: Gene Maguin <email@example.com>
Subject: Re: interrater agreement, intrarater agreement
Content-Type: text/plain; charset="us-ascii"
I haven't seen any other replies. Yes, I think your data is set up correctly.
As shown, you have it arranged in a multivariate ('wide') layout. From there
you can do a repeated measures ANOVA or use RELIABILITY. However, there are a
number of different intraclass correlation formulas, depending on whether the
same raters rate everybody or, as in your case, two raters are randomly
selected to rate each person. I'd bet anything the computational formulas are
different, and I'll bet almost anything that SPSS can't accommodate both.
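
One thing worth trying, though: RELIABILITY does have an /ICC subcommand with
a one-way model choice, which, if I understand it right, is the case where
each subject's raters are randomly selected. A minimal sketch, with rater1 and
rater2 as placeholder names for your two rating variables:

* One-way random-effects ICC; rater1 and rater2 are placeholder names.
RELIABILITY
  /VARIABLES=rater1 rater2
  /SCALE('ratings') ALL
  /ICC=MODEL(ONEWAY) CIN=95.

If the same raters had rated everybody, MODEL(RANDOM) or MODEL(MIXED) with
TYPE(CONSISTENCY) or TYPE(ABSOLUTE) would be the ones to look at instead, but
check that against the intraclass correlation literature before trusting it.
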
There's a literature on rater agreement and on intraclass correlation; if you
haven't looked at it, you should, though I can't help you there. One thing you
might do is google 'intraclass correlation'; there's a citation in the top 20
or 30 hits that references a paper by, I think, Shrout and Fleiss. Another
term to google is 'kappa' (which is available from SPSS CROSSTABS).
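
If you go the kappa route, the syntax is just a statistics keyword on
CROSSTABS. A minimal sketch, again with placeholder variable names, assuming
the ratings are categorical:

* Cohen's kappa for two raters on categorical ratings.
CROSSTABS
  /TABLES=rater1 BY rater2
  /STATISTICS=KAPPA.

As I recall, CROSSTABS only prints kappa when the table is square, that is,
when both raters have used the same set of categories.
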
I'm hoping you get other responses that are more helpful than I am able to be.