**Date:** Thu, 14 Aug 2003 16:51:47 -0500
**Reply-To:** Nick I <ni14@MAIL.COM>
**Sender:** "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
**From:** Nick I <ni14@MAIL.COM>
**Subject:** Comparing Scores From Two Predictive Models_Stat Question
**Content-Type:** text/plain; charset="iso-8859-1"
Hello SAS,

I have a question for the fine statisticians on this list. I think this is an easy question, if there ever was such a thing as an easy stat question pertaining to real life.

I have scores, ranging from 0 to 100, from someone's logistic model (a prepayment behavioral model whose purpose is to rank-order mortgage holders' propensity to prepay their mortgage). I also have a vendor's scores (MGIC scores, if you are familiar with those). The idea is to compare the vendor's scores to this person's scores to see whether they are compatible (close enough to each other). If the answer is yes, i.e. this person's logistic model predicts as well as the vendor's model, then we can drop the vendor's model, replace it with our logistic model, and save ourselves a ton of money.

Q: What statistical test or tests would be appropriate in this case? I have a few in mind, but I wish to double-check and get your input so that I don't make a blunder. Is there a technique you would recommend I follow to make sure we can substitute MGIC scores with our in-house scores? Applying the in-house model in real life and seeing how it performs is certainly a good idea, and we will do this. However, I am more interested in statistical steps I can perform to prove to others that we can indeed make the substitution.
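A minimal sketch of one common approach to the question above, in Python rather than SAS for brevity (all loan data and variable names below are made up for illustration, not from the original post): on the same holdout sample of loans with observed prepayment outcomes, compute each model's c-statistic (area under the ROC curve, the standard rank-ordering measure for a logistic model), then compare the two. Similar c-statistics support the claim that the in-house model rank-orders as well as the vendor's.

```python
# Hypothetical illustration: compare two score sets on the same loans by
# each model's ability to rank-order an observed binary outcome
# (1 = loan prepaid, 0 = not). All numbers below are invented.

def auc(scores, outcomes):
    """Concordance (c-statistic): P(score for a prepaid loan > score for a
    non-prepaid loan), with ties counted as 0.5."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return concordant / (len(pos) * len(neg))

# Made-up holdout sample of 8 loans
outcome = [1, 1, 0, 1, 0, 1, 0, 0]        # observed prepayment
inhouse = [90, 80, 75, 70, 65, 60, 50, 40]  # in-house scores, 0-100
vendor  = [88, 70, 72, 78, 60, 55, 52, 45]  # vendor (e.g. MGIC) scores

print("in-house c-statistic:", auc(inhouse, outcome))
print("vendor c-statistic:  ", auc(vendor, outcome))
```

A pairwise c-statistic like this only summarizes each model separately; for a formal significance test of the difference between two correlated AUCs measured on the same loans, something like DeLong's test (available in SAS via PROC LOGISTIC's ROCCONTRAST statement) would be the usual next step.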

Thank you for your advice.

nick