LISTSERV at the University of Georgia
Date:         Thu, 25 Aug 2011 20:05:12 +0000
Reply-To:     "Fehd, Ronald J. (CDC/OCOO/ITSO)" <rjf2@CDC.GOV>
Sender:       "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From:         "Fehd, Ronald J. (CDC/OCOO/ITSO)" <rjf2@CDC.GOV>
Subject:      Re: step stats
In-Reply-To:  <>
Content-Type: text/plain; charset="us-ascii"

Hi Mark: Your long and thorough explanation has already hit the bit bucket, but you did cover the main points:

* many steps; when any start slowing down, how do you notify users?

As always, we come back to the classic trade-off: of the three below, you can have at most two:

* accurate
* cheap
* fast

My suggestions are:

* small or short is better:

Deconstruct big jobs and write intermediate work data sets to a library; that allows saving small logs, after which you run whichever version of logparse or parselog (my fav) flips your bits.
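To make the log-scraping idea concrete, here is a minimal Python sketch of what logparse/parselog do far more thoroughly: pull per-step "real time" notes out of a saved SAS log. The NOTE patterns below cover only the simple seconds-only form, and the sample lines are invented for illustration.

```python
import re

# Match the step header ("NOTE: DATA statement used ..." or
# "NOTE: PROCEDURE SORT used ...") and the indented "real time" line.
STEP_RE = re.compile(r"NOTE: (DATA statement|PROCEDURE \w+) used")
TIME_RE = re.compile(r"real time\s+([\d.]+)\s+seconds")

def scrape_step_times(log_lines):
    """Return a list of (step, seconds) pairs found in the log."""
    times = []
    step = None
    for line in log_lines:
        m = STEP_RE.search(line)
        if m:
            step = m.group(1)
            continue
        m = TIME_RE.search(line)
        if m and step:
            times.append((step, float(m.group(1))))
            step = None
    return times

sample = [
    "NOTE: DATA statement used (Total process time):",
    "      real time           0.05 seconds",
    "NOTE: PROCEDURE SORT used (Total process time):",
    "      real time           12.40 seconds",
]
print(scrape_step_times(sample))
# [('DATA statement', 0.05), ('PROCEDURE SORT', 12.4)]
```

A real parser also has to handle the mm:ss.ss time format and the extra cpu-time lines that FULLSTIMER emits; small per-job logs keep each parse cheap.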

* gross time of step is pretty good, mostly

That requires adding code between steps to build a transaction data set, which can then be monitored.
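The between-steps bookkeeping can be sketched as: wrap each step with a timer and append one row per step to a transaction file that a monitor can poll. This is a hypothetical illustration, not a recommended implementation; the file name and column layout are made up.

```python
import csv
import time
from datetime import datetime

# Assumed transaction file; a monitor process would tail or poll it.
TRANSACTION_FILE = "step_times.csv"

def record_step(name, start, stop, path=TRANSACTION_FILE):
    """Append (timestamp, step name, elapsed seconds) to the transaction file."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now().isoformat(timespec="seconds"),
             name,
             round(stop - start, 2)]
        )

start = time.perf_counter()
# ... the real work of the step would run here ...
record_step("extract_claims", start, time.perf_counter())
```

Every one of these wrappers is exactly the kind of intervention the next paragraph warns about: each adds code to write, test, and maintain.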

Personally, I would go with deconstruction and log scraping, because all that in-job intervention has the potential to slow down the job, not to mention adding major amounts of fluff to testing, debugging, and maintenance.

I am curious to know what kind of metrics you have for all these steps. You mentioned knowing that some extraction took single digits of minutes and needing to know when the time went into double digits.

The subjective 'taking waaay too long' does not necessarily mean that the elapsed time is greater than mean + 3 standard deviations.
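The mean + 3 std criterion is easy to sketch: flag a run only when it exceeds that threshold computed from the step's own history. The sample history below is invented.

```python
from statistics import mean, stdev

def is_outlier(history, elapsed, k=3.0):
    """True if elapsed exceeds mean(history) + k * stdev(history)."""
    return elapsed > mean(history) + k * stdev(history)

history = [4.1, 4.4, 3.9, 4.2, 4.0, 4.3]   # minutes from prior runs
print(is_outlier(history, 11.0))  # True: double digits, far past the threshold
print(is_outlier(history, 4.5))   # False: slow-ish, but within normal variation
```

With this history the threshold works out to roughly 4.7 minutes, which is why a subjectively slow 4.5-minute run still passes: the statistic and the user's patience measure different things.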

even servers sweat, sometimes.

Ron Fehd who knows tall is good, small is better (insert ? in appropriate place in .sig)
