LISTSERV at the University of Georgia
Date:   Sat, 24 Apr 1999 01:19:32 -0300
Reply-To:   hmaletta@overnet.com.ar
Sender:   "SPSSX(r) Discussion" <SPSSX-L@UGA.CC.UGA.EDU>
From:   "Hector E. Maletta" <hmaletta@OVERNET.COM.AR>
Subject:   Re: reliability for a knowledge scale
Comments:   To: Karen Scheltema <karen.scheltema@NORTHMEMORIAL.COM>
Content-Type:   text/plain; charset=us-ascii

The meaning of reliability scores for a knowledge scale depends on the contents of the scale. Low reliability in such a scale indicates that people who know one thing do not generally tend to know the other things covered by the scale. If the topics covered are truly independent of each other, the scale may have low reliability and nonetheless the overall score (% correct answers) may be fine. For instance, suppose you ask students about 12 different recent events in the media (the Kosovo war, the Colorado killings, Monica Lewinsky, etc.) to find out whether they know about them: it is perfectly possible that some people will know everything about Monica and nothing about Kosovo. The more of these matters one knows, the higher one's score on 'knowledge of recent events', no matter which particular events one is or is not aware of.

If, instead, all that knowledge should go together, because it makes up a comprehensive description of a single subject, then reliability might be required for the scale to be valid. Each particular topic would be but an indicator of a single underlying variable, 'knowing the subject matter'. People who know the most difficult topic should also know the easier parts of the subject. For instance, if one question is about the nature of the Monica/Bill relationship, another about the blue dress, another about Linda Tripp's role, etc., there should be higher reliability across those different questions.

On the other matter touched upon in Karen's message, whether the scale's validity is affected by some questions having more possible answers than others, I think it is irrelevant: as long as all questions are coded in a binary fashion (correct / wrong answer), all are treated as dichotomous, irrespective of the number of choices given in the questionnaire.

Hector Maletta
Universidad del Salvador
Buenos Aires, Argentina

Karen Scheltema wrote:
>
> A physician I work with has a knowledge scale that he developed. It
> is a series of 12 questions. Some are True/False; others are multiple
> choice. The 12 questions were coded as correct/incorrect, and then a
> percent correct for the entire 12 items was calculated. I know there
> are problems with that given the different number of response items,
> but this seemed like the best solution, given that the data had
> already been collected. A reviewer for a journal wants evidence of
> reliability. When I ran Kuder-Richardson on the correct/incorrect
> responses, the alpha was .3. I seem to recall something about it not
> being necessarily desirable for knowledge scales, in particular, to
> have internal consistency because the concepts being measured are
> independent of each other. What justification is there for having a
> knowledge scale with low internal consistency? It seems overkill to
> report the 12 items separately. Any and all thoughts appreciated.
>
> Karen Scheltema, MA, MS
> Statistician
> North Memorial Health Care
> 3300 Oakdale Ave N
> Robbinsdale, MN 55422
> (612) 520-2744 (612) 520-4686 (fax)
> mailto: karen.scheltema@northmemorial.com

