Date:         Thu, 25 Aug 2011 20:05:12 +0000
Reply-To:     "Fehd, Ronald J. (CDC/OCOO/ITSO)" <rjf2@CDC.GOV>
Sender:       "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From:         "Fehd, Ronald J. (CDC/OCOO/ITSO)" <rjf2@CDC.GOV>
Subject:      Re: step stats
Content-Type: text/plain; charset="us-ascii"
Your long or thorough explanation has already hit the bit bucket.
You did cover the main points:
* many steps, and when any start slowing down, how do you notify users?
As always, we come back to the choices, of which we have plenty.
My suggestions are:
* small or short is better:
deconstruct big jobs and write intermediate work data sets to a permanent library;
that allows saving small logs and running whichever of logparse
or parselog (my fav) flips your bits (a sketch of this route appears below).
* gross time of step is pretty good, mostly;
that requires adding code between steps to build a transaction data set
which can be monitored (a sketch follows this list).
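For the between-steps transaction idea, something like this would do it; untested, and the libname MONITOR, the macro name, and the job/step names are all made up for illustration:

libname monitor 'c:\project\monitor';   /* permanent home for the transaction data set */

%macro step_time(job=, step=, event=);
  /* one observation per event: job, step, start or stop, datetime */
  data _step_time;
    length job $32 step $32 event $8;
    job   = "&job";
    step  = "&step";
    event = "&event";
    dt    = datetime();
    format dt datetime20.;
  run;
  /* append the event to the transaction data set being monitored */
  proc append base=monitor.step_times data=_step_time;
  run;
%mend step_time;

/* usage: bracket each step with start and stop events */
%step_time(job=nightly_etl, step=extract_claims, event=start)
  /* ... the extraction step goes here ... */
%step_time(job=nightly_etl, step=extract_claims, event=stop)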
Personally, I would go with deconstruction and log scraping,
because all that intervention has the potential to slow down the job,
not to mention adding major amounts of fluff to testing, debugging, and maintenance.
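For the deconstruction-and-log-scraping route, a minimal sketch; untested, and the libnames, paths, and data set names are invented for illustration:

/* permanent library for intermediate work data sets */
libname proj 'c:\project\data';

/* route the log for this piece to its own small file */
proc printto log='c:\project\logs\extract_claims.log' new;
run;

/* the deconstructed step: write its output to the permanent library */
data proj.claims_extract;
  set rawlib.claims;                     /* rawlib: source library, invented */
  where service_date >= '01JAN2011'd;
run;

/* return the log to its default destination;
   the small log file is now ready for logparse/parselog to scrape */
proc printto;
run;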
I am curious to know what kind of metrics you have for all these steps.
You mentioned knowing that some extraction took single digits of minutes
and needing to know when the time went into double digits.
The subjective 'taking waaay too long' does not mean that the elapsed time is
greater than the mean plus three standard deviations (see the sketch below);
even servers sweat, sometimes.
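And for the 'waaay too long' test, something along these lines; untested, assuming a data set STEP_HISTORY with one row per step per run and variables STEP, RUN_DATE, and ELAPSED (seconds), all of which are made up here:

/* per-step historical mean and standard deviation of elapsed time */
proc means data=step_history noprint nway;
  class step;
  var   elapsed;
  output out=step_stats(drop=_type_ _freq_) mean=mean_el std=std_el;
run;

/* flag any run whose elapsed time exceeds mean + 3 standard deviations */
proc sql;
  create table slow_steps as
  select h.step, h.run_date, h.elapsed, s.mean_el, s.std_el
  from step_history as h
       inner join step_stats as s
       on h.step = s.step
  where h.elapsed > s.mean_el + 3 * s.std_el;
quit;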
Ron Fehd who knows tall is good, small is better
(insert ? in appropriate place in .sig)