Date: Wed, 13 Jun 2001 11:06:50 -0600
Reply-To: Jack Hamilton <JackHamilton@FIRSTHEALTH.COM>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Jack Hamilton <JackHamilton@FIRSTHEALTH.COM>
Subject: Re: Excessive use of PDF files
Content-Type: text/plain; charset=us-ascii
My guess of what the author meant, based on my experience with PDF, is:

    Performing simple tasks such as reading a document, finding a
    text string in a document, or copying text into the paste buffer
    takes three times longer in a PDF-based web page than performing
    similar tasks in an HTML-based web page.
That seems about right to me, especially if the PDF file has multiple columns and you have a smallish screen. I always end up doing a lot of scrolling up, down, and sideways to read a PDF file, which is rarely necessary with HTML. There doesn't seem to be a way to set "continuous page view", which makes scrolling more difficult. Searching a PDF web page uses a different set of commands than searching an HTML page, and that takes some getting used to.
I don't think that measuring the usability of PDF by itself is necessary for his comparison. The interface for PDF files read in Acrobat Reader is different from that used in browsers, so a separate measurement would not be relevant.
I agree that a more complete description of his methods would be helpful.
Jack Hamilton
Development Manager, Technical Group
METRICS Department, First Health
West Sacramento, California USA
>>> "Stanley A. Gorodenski" <vvgsgor@DE.STATE.AZ.US> 06/13/2001 8:21 AM >>>
My initial response was more playing the devil's advocate, but it was based
on some valid observations of the article. A reader who knows an author, or
knows an author to be reputable, may find it easier to accept without
question what is written. Also, if a reader has some knowledge of the
subject matter, the reader can sort of fill in the blanks. I know nothing
of the author of this article, and I admittedly know little of this subject
matter. As a result I base my judgments on the content of the article
itself. In the context of again 'sort of playing the devil's advocate'
(which means lighthearted questioning without laying my head on the block
or calling anyone's integrity into question), I have the following
comments.
> Actually, the terms are described in the second paragraph of the
> This is my rough estimate, based on watching users perform
> similar tasks on a variety of sites that used either PDF or
> regular Web pages....the number is big and reflects significant user
> suffering in terms of increased task time and more frequent failures.
Admittedly, this part of the paragraph is something of a definition of
'website usability', but I find it an unsatisfactory one. What were these
'similar tasks' at these 'sites'? Note, it says 'similar', not 'same'.
Were there different tasks between sites and pdf-html combinations? What
were the operating systems? We already know that the 'failures' were the
result of low end computers and old Macs. What were the 'tasks'? How do we
know these 'tasks' are a valid measure of what users do in the real world?
Etc. and of course etc.
Also, the four periods, '....', leave out an important sentence: "Because I
have not performed a detailed measurement study of PDF on its own, I can't
calculate the precise usability degradation". This calls into question the
validity of the numbers being thrown around in the article.
> environment. A bounds for error is suggested: 280%-320% , if Neilsen
Because the author has '...not performed a detailed measurement study of PDF
on its own...', it is not possible to establish error bounds, at least
in any probabilistic sampling sense. Based only on what is in the article
itself, the 280%-320% range appears to be a complete guess, put in the
article to lend credence, by virtue of allegedly being able to state error
levels, to the 300% statistic (guess) thrown out.
> usability tests are a waste of resources. The best results come from
> testing no more than 5 users and running as many small tests as you
> can afford.
This is new ground for me. Based on my background, 5 is an awfully small
sample size for establishing error bounds with reasonable confidence,
but maybe in the computer world it is not.
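To make the sample-size worry concrete, here is a rough sketch of how wide
a 95% confidence interval gets with only 5 observations. The task-time
ratios below (PDF time / HTML time) are invented purely for illustration;
they are not from the article or any real study:

```python
import statistics

# Hypothetical task-time ratios (PDF time / HTML time) for 5 users.
# These values are made up for illustration only.
ratios = [2.4, 3.6, 2.9, 3.3, 2.8]

n = len(ratios)
mean = statistics.mean(ratios)
sd = statistics.stdev(ratios)   # sample standard deviation
t_crit = 2.776                  # t(0.975, df = 4), from a t-table
half_width = t_crit * sd / n ** 0.5

print(f"mean ratio: {mean:.2f}")
print(f"95% CI: {mean - half_width:.2f} to {mean + half_width:.2f}")
```

With these numbers the interval runs from roughly 240% to 360%, noticeably
wider than the 280%-320% quoted in the article; with only five users, an
estimate is rarely pinned down that tightly.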
Stan, again playing the devil's advocate (-: