LISTSERV at the University of Georgia
Date:         Wed, 16 Apr 2003 14:18:22 -0400
Reply-To:     Art@DrKendall.org
Sender:       "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>
From:         "Arthur J. Kendall" <Art@DrKendall.org>
Organization: Social Research Consultants
Subject:      Re: SV: Kappa or something like that ?
Comments: To: Staffan Lindberg <Staffan.Lindberg@fhi.se>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

For any of these kinds of reliability, you treat each measure as a measure of the "same thing" (item, coder, and judge are all repeats) and plan to use the average or total as the final measure of the construct. To use RELIABILITY you would either standardize the variables first or concentrate on the standardized alpha.
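A minimal syntax sketch of that approach (the names selfrep and device are just placeholders for your two measures, not from your data):

* Save z-scores as Zselfrep and Zdevice.
DESCRIPTIVES VARIABLES=selfrep device /SAVE.
* Treat the two standardized scores as a two-item scale.
* Requesting inter-item correlations should also show the alpha based on standardized items.
RELIABILITY
  /VARIABLES=Zselfrep Zdevice
  /MODEL=ALPHA
  /STATISTICS=CORR.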

You might decide to represent the construct with both variables in any further analysis, or you might standardize them and sum them.
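If you go the standardize-and-sum route, something along these lines (again with placeholder names) would create the combined score:

* Combine the two z-scores into a single construct score.
COMPUTE construct = Zselfrep + Zdevice.
EXECUTE.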

In this kind of situation I see no reason to look at exact agreement.

Depending on your research interests, if you are thinking of using the self-report as a proxy for the device in future research, you might try some transforms on the self-report to see if you can bring it more in line with the "gold standard" device.
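As one sketch of that idea (the particular transforms and variable names below are only illustrations, not a recommendation), you might compare a square-root and a log version of the self-report against the device:

* Hypothetical transforms of the self-report score.
COMPUTE selfrep_sqrt = SQRT(selfrep).
COMPUTE selfrep_log = LN(selfrep + 1).
EXECUTE.
CORRELATIONS /VARIABLES=device selfrep selfrep_sqrt selfrep_log.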

Art
Art@DrKendall.org
Social Research Consultants
University Park, MD USA
(301) 864-5570

Staffan Lindberg wrote:

> Thank you Art!
>
> I should have provided more information. The underlying measures are on a
> roughly ratio scale (with an absolute zero), both of which are somewhat skewed.
> The correlation (Pearson) is .47. The scattergram shows a cone shape with
> the point at the origin (not very homoscedastic).
>
> In the absence of external objective references we have chosen to divide
> each measure into three equal groups (hi, med, lo) and crosstabulate these
> trichotomized measures.
>
> I guess we could report the degree of association as r for the underlying
> variables or the contingency coefficient for the crosstable, but I feel there
> could be a better statistic in the area of "intercoder/interrater/interitem
> reliability", with which I am not so familiar.
>
> We've looked at kappa, which looks pleasingly transparent (0 = no agreement,
> 1 = perfect agreement), but one of us has read something about weighted kappa
> (what is the difference?).
>
> Your suggestion of RELIABILITY sounds interesting but I don't quite
> understand the practicalities of it. I have mostly used RELIABILITY to get a
> Cronbach's alpha for multi-item attitudinal scales. What would be the
> equivalent of an item in this case?
>
> Grateful for any suggestions
>
> best
>
> Staffan
> National Institute of Public Health
> Sweden
>
>
> -----Original message-----
> From: Arthur J. Kendall [mailto:Art@DrKendall.org]
> Sent: 16 April 2003 17:30
> To: Staffan Lindberg
> Cc: SPSSX-L@VM.MARIST.EDU
> Subject: Re: Kappa or something like that ?
>
>
> In any case, start with a scatterplot and a big crosstab with /statistics=all.
>
> Are the ratings (hi/med/lo) based on a collapse of a variable with more
> levels? Does the gadget produce interval- or ratio-level data? If both of
> these are true, scatterplot the two variables and use ordinary
> correlations. If they are not severely discrepant from interval, also use
> correlation. In either of these instances, treat it as a two-item scale
> and use RELIABILITY.
>
> There are many different ways to look at intercoder/interrater/interitem
> reliability depending on the assumptions you make about level of
> measurement and degree of agreement needed.
>
> If you need more, please provide more detail about the level of
> measurement and number of values of both variables and whether you need
> exact or close agreement.
>
> Art
> Art@DrKendall.org
> Social Research Consultants
> University Park, MD USA
> (301) 864-5570
>
>
> Staffan Lindberg wrote:
>
>> Dear list!
>>
>> I have a study of school children and their exercise habits. According
>> to one criterion (based on self-reported questions) they are
>> classified into three equal groups as "low/middle/high", and likewise
>> with another criterion (based on a gadget they carry around). I want
>> to calculate a measure of the agreement between these criteria. Which
>> would be the most appropriate statistic here?
>>
>> best
>>
>> Staffan Lindberg
>> National Institute of Public Health
>> Sweden
>>
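For the scatterplot and the "big crosstab with /statistics=all" mentioned above, a minimal sketch might look like this (self3 and dev3 stand for the two trichotomized variables; all names are placeholders):

* Scatterplot of the raw measures, then the full crosstab of the trichotomies.
GRAPH /SCATTERPLOT(BIVAR)=device WITH selfrep.
CROSSTABS /TABLES=self3 BY dev3
  /STATISTICS=ALL.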

