Date: Mon, 2 Mar 2009 09:48:40 -0500
Reply-To: Kevin Viel <citam.sasl@GMAIL.COM>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Kevin Viel <citam.sasl@GMAIL.COM>
Subject: Re: Goodness of fit measure usable for both ols and logit
On Mon, 2 Mar 2009 11:32:03 +0100, Thomas Fröjd wrote:
>>> I am using proc genmod for fitting a linear regression on one continuous
>>> variable and one logistic regression with a related binary variable
>>> that basically measures the same thing.
>> Please explain this in more detail. For *most* studies, we usually
>> prefer the continuous variable, as it has more information.
>It is for a psychological study where one of the variables is the score
>of a scale and the other one is the answer to a yes/no question.
The use of instruments and their scaling is usually the subject of
validation studies. I am wary of scaling because it is still inexact. It
sounds like you are obtaining the scale and then dichotomizing it at
some cutpoint, which is something I would not usually support.
>>> I would like to compare the goodness of fit between the two models.
>>> What is a good measurement to compare them. Preferably it would be a
>>> statistic that gives an absolute measurement on how well the models
>>> fits and not only a comparison between the two. Maybe something like
>>> the Hosmer-Lemeshow test or R2. Any ideas?
>> If you have the same dataset for both analyses, consider something like
>> Akaike's Information Criterion (AIC) or the Bayesian Information Criterion
>> (BIC). As far as an absolute measurement, I am not sure one exists.
>> That might assume the true fit can be known. It is usually relative, as
>> in: this model is relatively less *bad* than the other...
>Isn't R2 an absolute measurement, for example, since it always takes a value
>between no fit (0) and perfect fit (1)?
What defines a perfect fit? With enough measured variables, you could fit
the model perfectly in several different ways. If you simulated enough
variables, you could also get a perfect fit. Your point is accurate, though.
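To illustrate that point (a hedged numpy sketch, not part of the original thread, and in Python rather than SAS): with as many random predictors as observations, OLS interpolates the outcome exactly, so R2 reaches 1 even when the predictors carry no real information.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                         # observations
X = rng.normal(size=(n, n))    # as many random predictors as observations
y = rng.normal(size=n)         # outcome unrelated to X by construction

# With n linearly independent columns, the least-squares fit
# passes through every observation exactly.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(round(r2, 6))  # 1.0 to numerical precision
```

So a "perfect" R2 of 1 need not say anything about the model being right; it can be a pure artifact of dimensionality.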
>Can I compare AIC and BIC between the models even if the dependent
>variable differs, as in this case where one is continuous and one is binary?
In this case, I think not. If you had a score and a dichotomization of
that score, I think you might be able to. I could be wrong, but I think
most cases that I have seen compare different covariates or forms of the
covariates (cubic splines versus polynomial models, for instance).
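For reference, the criteria themselves are simple functions of the maximized log-likelihood, the parameter count, and the sample size; a hedged Python sketch of the standard formulas (the log-likelihood values below are illustrative numbers, not output from PROC GENMOD):

```python
import math

def aic(loglik, k):
    """Akaike's Information Criterion: 2k - 2*ln(L)."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L)."""
    return k * math.log(n) - 2 * loglik

# Two hypothetical candidate models fit to the same n = 200
# observations of the same response:
print(aic(-120.5, 4))        # model A, 4 parameters -> 249.0
print(aic(-118.9, 6))        # model B, 6 parameters -> 249.8
print(bic(-120.5, 4, 200))   # BIC penalizes parameters more heavily
```

Smaller is relatively better, but because both formulas are built on the likelihood of the response, the comparison only makes sense when the models are fit to the same response on the same observations, which is the sticking point raised above.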
Be sure to follow this thread a bit as others, such as Peter or Robin (the
more frequent contributors) may have further insight or corrections.