Date: Wed, 13 Oct 2010 03:25:44 -0700
Reply-To: crossover <firstname.lastname@example.org>
Sender: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>
From: crossover <email@example.com>
Subject: Re: internal consistency reliability. [Sec: UNOFFICIAL]
Content-Type: text/plain; charset=UTF-8
I think I am doing both.
I used another pilot study with a similar within-subject design, involving 33
participants, to develop this scale. Originally it was a 15-item scale. I
tested the internal consistency reliability for each stimulus condition, then
deleted two items and used the resulting 13-item scale in the main study to
differentiate the positive affects elicited by the three types of stimuli.
Since 2 items were deleted for the main study, I think I had better test and
report its reliability again in the main study. Responses to this scale in
each stimulus condition also provide information to discriminate the
different types of affects elicited by the different types of stimuli.
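(For what it's worth, assuming "internal consistency reliability" here means Cronbach's alpha, the statistic SPSS's RELIABILITY procedure reports by default, a minimal pure-Python sketch of the computation is below. The item data are hypothetical; in practice you would run this once per stimulus condition on the 13 item scores.)

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: one list per item (column), each of length n_participants.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
    Population vs. sample variance does not matter here: the n/(n-1)
    factor cancels in the ratio.
    """
    k = len(items)
    sum_item_vars = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-participant total
    return k / (k - 1) * (1 - sum_item_vars / pvariance(totals))

# Two perfectly redundant items -> alpha of exactly 1.0
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))
```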
I will group 4 types of affects from these 13 items based on inspection of
the pilot-study results and theoretical support from other research. This is
why I tried to use ANOVA with post hoc tests to discriminate the differences
among conditions, but I found that the data in each condition are not
normally distributed, since my sample is only 36. The non-parametric test
showed similar results to the ANOVA.
In this case, should I report the normality checks and then still choose
ANOVA with post hoc tests to report my results?
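(A side note on the non-parametric route: for a within-subject design with three conditions, the usual non-parametric analogue of the repeated-measures ANOVA is the Friedman test, available in SPSS via NPAR TESTS. A minimal sketch of the uncorrected Friedman statistic, on hypothetical data with one row per participant and one column per stimulus condition:)

```python
def friedman_stat(data):
    """Friedman chi-square statistic (no tie correction).

    data: one list per participant, each with k condition scores.
    Ranks conditions within each participant (average ranks for ties),
    then chi2 = 12 / (n*k*(k+1)) * sum(R_j^2) - 3*n*(k+1).
    """
    n, k = len(data), len(data[0])
    rank_sums = [0.0] * k
    for row in data:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1  # extend run of tied values
            avg = (i + j + 2) / 2  # average of 1-based ranks i+1..j+1
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) \
        - 3 * n * (k + 1)

# Every participant ranks the conditions identically -> maximal statistic
print(friedman_stat([[1, 2, 3]] * 4))
```

If the Friedman test is significant, pairwise Wilcoxon signed-rank tests (with a multiple-comparison adjustment) are a common non-parametric substitute for the ANOVA post hoc comparisons.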