Date: Mon, 28 Jan 2008 11:28:29 -0800
Reply-To: Melodyp <pearsonmelody@GMAIL.COM>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Melodyp <pearsonmelody@GMAIL.COM>
Subject: Re: Efficiency with large datasets
Content-Type: text/plain; charset=ISO-8859-1
On Jan 28, 1:46 pm, liuwen...@GMAIL.COM (Wensui Liu) wrote:
> Hi, JB,
> I have a macro to do the same thing you want to do but can't show you here.
> However, the idea is very simple: I use a macro to loop through
> each independent variable and use a data step instead of PROC FREQ to
> calculate the percents and counts.
> For thousands of variables, it still takes at least a day to run on Unix.
> On Jan 28, 2008 1:35 PM, JB <john_ban...@yahoo.com> wrote:
> > I have a data set with a target variable and 45000 other variables.
> > I wish to produce an Information Value for each variable.
> > To cut the processing time down I have reduced the number of records
> > to 2000, but each pass through the data to produce a crosstab still
> > takes ages.
> > proc freq data = tran.var_deriv_stg3 noprint;
> > tables &tvar1 * target
> > /missing norow nocol nopercent nocum
> > out = temp1(keep = &tvar1 target count);
> > run;
> > Is there any way I can produce these xtabs without the whole file being
> > processed each time?
> > Thanks
> WenSui Liu
> Statistical Project Manager
> ChoicePoint Precision Marketing
> ===============================
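One way to avoid re-reading the full data set once per variable is to put many table requests into a single TABLES statement, so PROC FREQ builds all the crosstabs in one pass. Note that OUT= keeps only the last table request, so ODS OUTPUT is needed to capture every table. A hedged sketch, assuming JB's data set name and a placeholder variable list (var1-var500):

```sas
/* Sketch only: the variable list var1-var500 is a placeholder for  */
/* the real independent variables.                                  */
/* ODS EXCLUDE suppresses the printed tables (do NOT use NOPRINT    */
/* here -- NOPRINT would also suppress the ODS OUTPUT capture).     */
ods exclude all;
ods output CrossTabFreqs = allfreqs;

proc freq data = tran.var_deriv_stg3;
   /* One pass over the data produces every requested crosstab */
   tables (var1-var500) * target / missing norow nocol nopercent;
run;

ods exclude none;
```

With 45,000 variables the tables may not all fit in memory at once, so it is probably safer to run this in batches of a few hundred variables per PROC FREQ step; each batch still costs only one pass through the data.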
Could you share your macro with me as well? I am working on a dataset
with 659 variables, and anything to reduce the processing time would be
much appreciated.
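For reference, once the counts are out of PROC FREQ, the Information Value itself is cheap to compute: IV = sum over levels of (pct_good - pct_bad) * log(pct_good / pct_bad), where pct_good and pct_bad are each level's share of the non-events and events. A hedged sketch, assuming target is coded 0/1 and temp1 is the OUT= data set from JB's step (columns &tvar1, target, count):

```sas
/* Sketch only: assumes target = 0 (good) / 1 (bad) and the        */
/* temp1 counts data set from the earlier PROC FREQ step.          */
proc sql;
   /* counts of goods and bads per level of the variable */
   create table woe as
   select &tvar1,
          sum(count * (target = 0)) as n_good,
          sum(count * (target = 1)) as n_bad
   from temp1
   group by &tvar1;

   /* IV = sum over levels of (pct_good - pct_bad)*log(pct_good/pct_bad) */
   create table iv as
   select sum( (n_good / tot_good - n_bad / tot_bad)
               * log( (n_good / tot_good) / (n_bad / tot_bad) ) ) as iv
   from woe,
        (select sum(n_good) as tot_good,
                sum(n_bad)  as tot_bad
         from woe);
quit;
```

One caveat: any level with zero goods or zero bads makes the log undefined, so in practice such cells are usually smoothed (e.g., by adding a small constant to the counts) or the bins collapsed before computing IV.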