Date: Wed, 2 Nov 2005 07:48:52 -0500
Reply-To: Jonas Bilenas <jonas.bilenas@CHASE.COM>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Jonas Bilenas <jonas.bilenas@CHASE.COM>
Subject: Re: Test - Control Design question - Direct marketing /
Content-Type: text/plain; charset=ISO-8859-1
Hard to believe that TEST vs. CONTROL designs are still being used as the
standard testing methodology in the consumer credit industry. With the
power of SAS, why hurt yourself by looking at a two-dimensional view of the
problem?
An excellent quote from J. Stuart Hunter on one factor tests:
“The statistical design of experiments had its origins in the work of Sir
Ronald Fisher… Fisher showed that, by combining the settings of several
factors simultaneously in special arrays (experimental designs), it was
possible to glean information on the separate effects of the several
factors. Experiments in which one factor at a time was varied were shown to
be wasteful and misleading.”
Source: J. Stuart Hunter (1987). Applying Statistics to Solving Chemical
Problems. Chemtech, 17, 167.
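Hunter's point can be illustrated with a toy sketch (the factor names and levels below are made up for illustration, not from the thread): a 2^3 full factorial mails 8 cells and lets every cell contribute to the estimate of all three main effects, whereas one-factor-at-a-time testing spends each mailing learning about only one factor.

```python
from itertools import product

# Hypothetical direct-mail factors, two levels each (assumed for illustration).
factors = {
    "offer":    ["0% APR", "cash bonus"],
    "envelope": ["plain", "teaser"],
    "postage":  ["standard", "first-class"],
}

# A 2^3 full factorial: 8 cells cover every combination, so each main
# effect is estimated from a 4-vs-4 split of ALL the mail, not from a
# single test/control pair.
design = list(product(*factors.values()))
for cell, combo in enumerate(design, 1):
    print(cell, dict(zip(factors.keys(), combo)))

# One-factor-at-a-time would instead run three separate test/control
# pairs, each informative about only one factor at fixed levels of the
# others -- the "wasteful and misleading" approach Hunter describes.
```
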
For an introduction to Experimental Design (Multi Factor Design), check:
JP Morgan Chase
On Tue, 1 Nov 2005 11:02:02 -0500, Talbot Michael Katz <topkatz@MSN.COM> wrote:
>I don't present myself as the voice of authority here, but from what
>you've described, I would stand squarely with your senior colleague on
>this one, for exactly the reasons given. Your control group is likely to
>contain individuals you cannot mail, just as your test group did. Since
>you don't know who they are, you can't use the information in your
>analysis.
>This does not mean that there is no information to be gleaned from the
>prospects you were unable to mail. Besides helping you cull your
>database, if you have a sizeable number of them, you may be able to build
>a model to identify such unmailable prospects in the future and save
>yourself some unprofitable mailing costs for your next promotion (I
>wouldn't hold out a lot of hope for this, but it might be worth a try).
>-- TMK --
>"The Macro Klutz"
>On Mon, 31 Oct 2005 08:49:07 -0800, Buzz <vijay.jayanti@GMAIL.COM> wrote:
>>I need a statistician to respond to this question.
>>In our direct mail campaigns, we pull a 10% random control/hold-out
>>group.
>>We send the test/ treatment group to a mail fulfillment house.
>>They post back a variable to the database which says "Able to mail"
>>which is a Yes/ NO column, which captures the information, if that
>>address was mailed or not.
>>CURRENT MEASUREMENT METHODOLOGY:
>>Measure response rate for both treated and control cells and subtract
>>(TEST - CONTROL) to get the lift. Test for significance.
>>While calculating the response rate for TEST (= RESPONSES / BASE), we
>>include ALL THE CUSTOMERS in the base (including those who were flagged
>>"NO" in the "ABLE TO MAIL" field).
>>The prevalent method in analyzing the responses is to ignore this
>>variable. The reason quoted by my senior colleague is: "Since the
>>control group does not have an "ABLE TO MAIL" field (since they do
>>not go through the mailing process), we have to include all the customers
>>irrespective of their "ABLE TO MAIL" value. So if we exclude those who
>>did not get a mail, we are NOT STATISTICALLY CORRECT, AS IT IS NOT
>>RANDOM ANYMORE (MEANING CONTROL IS NO LONGER EQUIVALENT TO TEST)."
>>Also I am told that from a financial/ROI point of view, including
>>everyone in the base gives a more correct picture.
>>I think it is wrong. I have over 4 years of experience as a marketing
>>analyst (I am not a statistician). I think we should exclude those who
>>did not receive a mail and report the correct response rate. Otherwise
>>we will be depressing the true response rate.
>>What is the right way to compute the response rate? INCLUDE THOSE WHO DID
>>NOT GET A MAIL IN THE BASE, OR EXCLUDE THEM?
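For what it's worth, the two candidate calculations from the question above can be sketched as follows. All counts here are made-up illustrative numbers, and the significance test is a standard two-proportion z-test, not anything prescribed in the thread. Method A (include everyone selected) is the intent-to-treat style estimate the senior colleague describes; Method B (drop the unmailable names from the test base only) is what breaks the randomization, because the control group contains its own unidentified unmailable names.

```python
from math import sqrt

# Hypothetical campaign counts (illustrative only).
test_base     = 100_000   # names selected for the mailing
test_unmailed = 8_000     # flagged "ABLE TO MAIL" = NO by the mail house
test_resp     = 1_840     # responders in the test group
ctrl_base     = 10_000    # 10% random hold-out, never sent to the mail house
ctrl_resp     = 150       # responders in the hold-out

# Method A: intent-to-treat -- keep everyone selected in the base.
# Both groups are defined identically, so randomization is preserved.
rate_test_itt = test_resp / test_base
rate_ctrl     = ctrl_resp / ctrl_base
lift_itt      = rate_test_itt - rate_ctrl

# Method B: drop the unmailable names from the test base only.
# Control still includes its (unknown) unmailables, so the bases differ.
rate_test_mailed = test_resp / (test_base - test_unmailed)

# Two-proportion z-test on the intent-to-treat lift.
p_pool = (test_resp + ctrl_resp) / (test_base + ctrl_base)
se = sqrt(p_pool * (1 - p_pool) * (1 / test_base + 1 / ctrl_base))
z = lift_itt / se

print(f"ITT test rate     {rate_test_itt:.4f}")
print(f"control rate      {rate_ctrl:.4f}")
print(f"ITT lift          {lift_itt:.4f}  (z = {z:.2f})")
print(f"mailed-only rate  {rate_test_mailed:.4f}  (not comparable to control)")
```

Note that Method A answers "what did selecting these names for the campaign earn us?" (the financial/ROI question), while the mailed-only rate, compared against the full control, overstates lift whenever the unmailable names respond at a lower rate.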