=========================================================================
Date:         Wed, 12 Jul 2006 14:28:11 -0400
Reply-To:     Gene Maguin <emaguin@buffalo.edu>
Sender:       "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>
From:         Gene Maguin <emaguin@buffalo.edu>
Subject:      Re: interrater agreement, intrarater agreement
In-Reply-To:  <200607121216.k6CAkP4b030775@mailgw.cc.uga.edu>
Content-Type: text/plain; charset="us-ascii"

Vassilis,

I haven't seen any other replies. Yes, I think your data are set up correctly. As shown, you have them arranged in a multivariate ('wide') layout. From there you can do a repeated measures ANOVA or use RELIABILITY. However, there are a number of different formulas depending on whether the same raters rate everybody or, as in your case, two raters are randomly selected to rate each person. I'd bet the computational formulas are different, and I'd bet almost anything that SPSS can't accommodate both. There's a literature on rater agreement and on the intraclass correlation; if you haven't looked at it, you should, though I can't help you further there. One thing you might do is google 'intraclass correlation'; there's a citation in the top 20 or 30 hits that references a book by, I think, Shrout and Fleiss. Another term to google is 'kappa' (which is available from SPSS CROSSTABS). See the sketch below.
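For what it's worth, here is a minimal sketch of both approaches, assuming your two ratings per person sit in variables named rater1 and rater2 (hypothetical names; substitute your own). The one-way random model is the ICC variant usually recommended when different, randomly chosen raters rate each subject:

* One-way random ICC: each subject rated by two randomly selected raters.
RELIABILITY
  /VARIABLES=rater1 rater2
  /SCALE('ALL VARIABLES') ALL
  /MODEL=ALPHA
  /ICC=MODEL(ONEWAY) CIN=95.

* Cohen's kappa for categorical ratings (wide data, one row per person).
CROSSTABS
  /TABLES=rater1 BY rater2
  /STATISTICS=KAPPA.

One caution on the kappa route: CROSSTABS will only report kappa when the two variables use the same set of codes, i.e., when the table is square, so make sure both rating variables are coded on the same scheme.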

I hope you get other responses that are more helpful than I am able to be.

