Date: Thu, 1 Oct 2009 09:14:16 -0500
Reply-To: Warren Schlechte <Warren.Schlechte@TPWD.STATE.TX.US>
Sender: "SAS(r) Discussion" <SASL@LISTSERV.UGA.EDU>
From: Warren Schlechte <Warren.Schlechte@TPWD.STATE.TX.US>
Subject: Re: Model Comparison using AIC
Content-Type: text/plain; charset="us-ascii"
Using different samples suggests a cross-validation approach. Model
averaging seems to be what is desired, and based on my reading, Burnham
and Anderson (1998) use Akaike weights within the model averaging to
get parameter estimates.

So, what I guess I'm saying is that maybe the AIC can be used within a
model-averaging framework, which seems to be what is going on here.
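The Akaike weights from Burnham and Anderson are straightforward to compute from a set of AIC values. A minimal sketch in Python (rather than SAS, with made-up AIC values), assuming all candidate models were fit to the same data:

```python
import math

def akaike_weights(aics):
    """Convert AIC values from models fit to the SAME data into
    Akaike weights (Burnham & Anderson's model-averaging weights)."""
    deltas = [a - min(aics) for a in aics]         # delta_i = AIC_i - AIC_min
    rel = [math.exp(-0.5 * d) for d in deltas]     # relative likelihoods
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AIC values for three candidate models:
print(akaike_weights([100.0, 102.0, 110.0]))
```

The weights sum to one, and each weight is the evidence for that model relative to the candidate set; parameter estimates can then be averaged across models using these weights.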
Obviously, I await responses from Dale and others.
Warren Schlechte
-----Original Message-----
From: Peter Flom [mailto:peterflomconsulting@MINDSPRING.COM]
Sent: Wednesday, September 30, 2009 8:52 PM
Subject: Re: Model Comparison using AIC
Well, you're right that that link suggests doing this, but I still think
it's a mistake, and I think Dale supplied the reason it is a mistake.
In my thinking, the AIC is a measure of model fit. If you test
different models on the same sample, then you see which model fits
best, given a penalty for model complexity. But if you test the same
model on different samples, then you get a test of which sample fits
the model best. What use is that? And if it's both different samples
AND different models, then ... well, then I don't know what you get.
I could be wrong about all this; if I am, well, I'll apologize. But I
don't see where the mistake could be.
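The complexity penalty Peter describes is explicit in the definition AIC = -2 ln L + 2k, where k counts estimated parameters. A toy Python sketch (the log-likelihoods are invented, not from any real PROC LOGISTIC run) showing how a small gain in fit on the same sample can lose to the penalty:

```python
def aic(log_likelihood, k):
    """AIC = -2*lnL + 2k, with k the number of estimated parameters
    (intercept included). Lower is better, but only when the models
    were fit to the same response values."""
    return -2.0 * log_likelihood + 2.0 * k

# Hypothetical log-likelihoods from two models fit to ONE sample:
m_simple = aic(-250.0, 2)  # intercept + 1 covariate
m_rich = aic(-248.5, 5)    # intercept + 4 covariates
print(m_simple, m_rich)    # the richer model's fit gain (1.5 in lnL)
                           # does not pay for its 3 extra parameters
```

On different samples the two ln L values are computed from different response values, so this subtraction no longer compares the models on common ground.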
Peter
-----Original Message-----
>From: SAS User <sasusr@GMAIL.COM>
>Sent: Sep 30, 2009 9:33 PM
>To: SASL@LISTSERV.UGA.EDU
>Subject: Re: Model Comparison using AIC
>
>*AIC* is used for the comparison of models *from different samples* or
>non-nested models. Ultimately, the model with the smallest *AIC* is
>considered the best, although the *AIC* value itself is not meaningful.
>From: http://www.ats.ucla.edu/stat/sas/output/SAS_logit_output.htm.
>
>As I said to Dale, a teacher suggested that I compare, using AIC,
>different logistic models from different samples (but some of the
>observations used to build them are the same for all the models).
>
>2009/9/30 Dale McLerran <stringplayer_2@yahoo.com>
>
>> Oh, I had no doubt that the variable which you are modeling
>> is the same across the five scoring models, but when you
>> use different samples for the five models you end up with
>> different response VALUES that are used for assessing your
>> model. When you have different response VALUES, I don't
>> believe that an AIC criterion can be employed.
>>
>> If you apply different models to one sample (where there
>> are no missing values for any of the predictors), then
>> you can compare AIC values for the different models
>> (where the model includes intercept + covariates).
>>
>> Dale
>>
>> 
>> Dale McLerran
>> Fred Hutchinson Cancer Research Center
>> mailto: dmclerra@NO_SPAMfhcrc.org
>> Ph: (206) 667-2926
>> Fax: (206) 667-5977
>> 
>>
>>
>>  On Wed, 9/30/09, SAS User <sasusr@gmail.com> wrote:
>>
>> > From: SAS User <sasusr@gmail.com>
>> > Subject: Re: Model Comparison using AIC
>> > To: "Dale McLerran" <stringplayer_2@yahoo.com>, SASL@listserv.uga.edu
>> > Date: Wednesday, September 30, 2009, 4:40 PM
>> > The response is the same for all the models.
>> > I made 5 scoring models with different samples and with
>> > different variables. I want to use AIC but a doubt arise
>> > when I saw: AIC (intercept + covariates) and AIC (only
>> > intercept)
>> >
>> > Ed.
>> >
>> > 2009/9/30 Dale McLerran <stringplayer_2@yahoo.com>
>> >
>> > Since the AIC is a likelihood-based statistic and since
>> > the likelihood depends on the specific observed values of
>> > the response, I don't believe there is any way to employ
>> > AIC to compare models which are constructed from different
>> > samples.
>> >
>> > Why do you need to compare the 5 models fitted to data
>> > from different samples? What is the point of the analysis?
>> >
>> > Dale
>> >
>> > Dale McLerran
>> > Fred Hutchinson Cancer Research Center
>> > mailto: dmclerra@NO_SPAMfhcrc.org
>> > Ph: (206) 667-2926
>> > Fax: (206) 667-5977
>> >
>> > On Wed, 9/30/09, SAS User <sasusr@GMAIL.COM> wrote:
>> >
>> > > From: SAS User <sasusr@GMAIL.COM>
>> > > Subject: Model Comparison using AIC
>> > > To: SASL@LISTSERV.UGA.EDU
>> > > Date: Wednesday, September 30, 2009, 4:10 PM
>> > > Hello, I need to compare 5 models with
>> > > AIC (they are from different samples)
>> > > and I have a doubt.
>> > > Proc logistic has two values for each AIC (intercept and
>> > > intercept + covariates).
>> > > I don't know how to compare: Do I have to use all the AICs
>> > > of the intercept + covariates models and then choose the
>> > > one with the lowest? Or do I have to take the difference
>> > > between the intercept + covariates AIC and the
>> > > intercept-alone AIC and then compare those values,
>> > > choosing the lowest?
>> > >
>> > > Thanks,
>> > > Ed.
>>
Peter L. Flom, PhD
Statistical Consultant
Website: www DOT peterflomconsulting DOT com
Writing; http://www.associatedcontent.com/user/582880/peter_flom.html
Twitter: @peterflom
