Date: Tue, 12 Feb 2008 08:26:58 -0800
From: Robin High
List: SAS(r) Discussion
Subject: Re: Error bars charts
To: Kai

Kai,

The issue of making plots with error bars (all too often drawn as "side-by-side skyscrapers with antennas") for two or more conditions measured on one group is one that seems rarely understood by researchers. You will find mean plots with "error bars" in all sorts of journal articles, documents, and presentations built from repeated measures designs, that is, where the same subjects are measured under two or more conditions, or where measurements are taken over time. The fallacy of making such a plot rarely seems to be understood. The bars usually do show vertical variability well enough, but you cannot use them to compare the "means" across conditions, because the error bars indicate nothing about the covariance term (usually positive) present in the formula that should be _memorized_ by anyone who conducts repeated measures designs:

VAR(LSMEAN_1 - LSMEAN_2) = VAR(LSMEAN_1) + VAR(LSMEAN_2) - 2*COVAR(LSMEAN_1,LSMEAN_2)

With between-groups designs, the COVAR() term is 0, since the data are assumed to be independent, so an error bar plot for that type of data does have some interpretive value (with one exception, noted below).

The plot that does make sense for differences in means from repeated measures designs is the new 'diff' plot available in the experimental GLIMMIX procedure, so I recommend looking into it.

Oh yes, the exception I alluded to above: if you make error bar charts for between-groups designs where the subjects in each group contribute multiple observations, the variance of a mean will be underestimated if you compute it with PROC MEANS or PROC TABULATE. When repeated measurements are 'clustered' this way, you should compute the LSMEANS and their standard errors with PROC MIXED, using an appropriate REPEATED or RANDOM statement to show how the clustering actually inflates the standard errors of the means.
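To see why the naive standard error is too small with clustered data, here is a small simulation sketch (again in Python rather than SAS, purely as an illustration; the subject-effect and residual SDs are made-up values). Each subject contributes several correlated observations; the naive SE treats all observations as independent, while the empirical SE of the group mean across many simulated groups comes out substantially larger:

```python
import random
import statistics

random.seed(2)

def group_obs(n_subjects=20, m_obs=5, subj_sd=2.0, resid_sd=1.0):
    """All observations from one group, m_obs repeated measures per subject."""
    obs = []
    for _ in range(n_subjects):
        u = random.gauss(0, subj_sd)  # subject-level random effect (the clustering)
        obs += [u + random.gauss(0, resid_sd) for _ in range(m_obs)]
    return obs

obs = group_obs()
n_total = len(obs)
# Naive SE, treating all 100 observations as independent (what PROC MEANS reports)
naive_se = statistics.stdev(obs) / n_total ** 0.5

# Empirical SE: standard deviation of the group mean over many simulated groups
means = [statistics.fmean(group_obs()) for _ in range(2000)]
true_se = statistics.stdev(means)

print(naive_se, true_se)  # the naive SE is clearly too small
```

The gap grows with the subject-to-subject variance and with the number of observations per subject; a mixed model with a RANDOM subject effect recovers the larger, correct standard error.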

Robin High
University of Oregon

-----Original Message----- From: SAS(r) Discussion [mailto:SAS-L@LISTSERV.UGA.EDU] On Behalf Of Kai Sent: Tuesday, February 12, 2008 3:05 AM To: SAS-L@LISTSERV.UGA.EDU Subject: Error bars charts

Hi,

I was wondering if anyone can help me. I'm currently conducting a study employing a within-participants design, or repeated measures design. I have two conditions in which one group of participants took part, i.e., each participant took part in both conditions.

I'm looking at creating an error bar chart; however, I'm not sure what type of error bar chart would be appropriate for this type of design, or indeed which would not be.

My dilemma is that my course material, which seems confusing, gives an example. It says that if I had two different samples, i.e., two different groups, testing one group in one condition and the other group in the other condition, and one group scored higher than the other, this suggests there is a reliable difference between the two conditions.

However, the difference between the scores could be due to sampling error: when the 95% confidence intervals overlap on an error bar chart in SPSS, it's likely that both samples come from the same population, and therefore the difference between the scores is due to sampling error rather than a real difference between the two conditions.

Now, I understand this up to a point insofar as it applies to a between-participants design, or independent measures design. However, can I apply this 95% confidence interval error bar chart to my repeated measures design? My common sense tells me not, because in a repeated measures design we have only one group, so we are not testing whether two groups come from the same population (in which case any difference would be due to sampling error); we have only one group, so all we have to do is make sure that one group is representative of the population. Err, I think....

So to summarize: using a 95% confidence interval bar chart, as in the example above, when you have two groups and you want to see whether the difference between the conditions each group went through was down to sampling error, i.e., both groups are from the same population, you use a confidence interval error bar chart; and if, e.g., the lower limit of the first condition's interval and the upper limit of the second condition's interval overlap, then the difference is likely due to sampling error and not a real difference between the groups.

Can this be applied to a repeated measures design with only one group?

Any help or pointing in the right direction would be greatly appreciated,

Thanks

Kai
