Date: Mon, 28 Jan 2008 14:29:48 -0500
Reply-To: LQLIU <lingqun@GMAIL.COM>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: LQLIU <lingqun@GMAIL.COM>
Subject: Re: Efficiency with large datasets
Content-Type: text/plain; charset=ISO-8859-1
Hi JB and Wensui,
I did something similar too. To speed up the process, I first categorized
the variables into groups based on their types and the associated
formats. For continuous variables, producing a crosstab is meaningless
and wastes a lot of time.
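For what it's worth, the grouping step can be sketched with the SAS dictionary tables. This is only a sketch, not my actual code: the macro variable names are made up here, and the library/dataset names are taken from JB's PROC FREQ step below.

```sas
/* Sketch: split the variables into character vs. numeric lists
   using DICTIONARY.COLUMNS. LIBNAME/MEMNAME values must be
   uppercase in the WHERE clause. */
proc sql noprint;
  select name into :charvars separated by ' '
    from dictionary.columns
    where libname = 'TRAN' and memname = 'VAR_DERIV_STG3'
      and type = 'char';
  select name into :numvars separated by ' '
    from dictionary.columns
    where libname = 'TRAN' and memname = 'VAR_DERIV_STG3'
      and type = 'num';
quit;
/* Crosstab only &charvars directly; the &numvars list would
   need binning first before an Information Value makes sense. */
```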
Hope this helps,
On Jan 28, 2008 1:46 PM, Wensui Liu <firstname.lastname@example.org> wrote:
> Hi, JB,
> I have a macro to do the same thing you want to do but can't show you here.
> However, the idea is very simple: I use a macro to loop through
> each independent variable and use a data step instead of proc freq to
> calculate the percents and counts.
> For thousands of variables, it still takes at least a day to run on Unix.
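A minimal sketch of the loop-plus-data-step idea Wensui describes might look like the following. The macro name, parameters, and work datasets are my own assumptions, not his actual code (which he said he couldn't post); it counts records per (variable level, target) cell with BY-group processing instead of PROC FREQ.

```sas
%macro xtab_counts(ds=, varlist=, target=target);
  %local i var;
  %let i = 1;
  %let var = %scan(&varlist, &i);
  %do %while(%length(&var) > 0);
    /* sort only the two columns needed for this crosstab */
    proc sort data=&ds(keep=&var &target) out=_srt;
      by &var &target;
    run;
    /* data-step replacement for PROC FREQ: one row per cell */
    data xtab_&i;
      set _srt;
      by &var &target;
      if first.&target then count = 0;
      count + 1;                       /* sum statement: implicitly retained */
      if last.&target then output;
    run;
    %let i = %eval(&i + 1);
    %let var = %scan(&varlist, &i);
  %end;
%mend xtab_counts;
```

Note this still makes one pass (plus a sort) per variable, which is why it can take a day for thousands of variables.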
> On Jan 28, 2008 1:35 PM, JB <email@example.com> wrote:
> > I have a data set with a target variable and 45,000 other variables.
> > I wish to produce an Information Value for each variable.
> > To cut the processing time down I have reduced the number of records
> > to 2,000, but each run through the data to produce a crosstab still
> > takes ages.
> > proc freq data = tran.var_deriv_stg3 noprint;
> >   tables &tvar1 * target
> >     / missing norow nocol nopercent nocum
> >     out = temp1(keep = &tvar1 target count);
> > run;
> > Is there any way I can produce these xtabs without the whole file being
> > processed each time?
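One way to avoid rereading the file for every variable is to request many crosstabs in a single PROC FREQ step, so the data is read once. A hedged sketch (&vlist is assumed to hold a space-separated list of the candidate variables, e.g. built from DICTIONARY.COLUMNS; you may need to batch the list, since 45,000 tables in one step can exhaust memory):

```sas
/* Suppress displayed output but keep the ODS tables;
   NOPRINT cannot be used here because it would also
   suppress the CrossTabFreqs ODS table. */
ods exclude all;
proc freq data = tran.var_deriv_stg3;
  tables (&vlist) * target / missing;
  ods output CrossTabFreqs = allxtabs;  /* all crosstabs, one dataset */
run;
ods exclude none;
```

The Table column in ALLXTABS identifies which crosstab each row belongs to, so the Information Value can then be computed per variable from one dataset.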
> > Thanks
> WenSui Liu
> Statistical Project Manager
> ChoicePoint Precision Marketing