At 08:44 AM 6/27/2005, Margaret MacDougall wrote:
>I would be most grateful if someone could tell me how to determine the
>maximum number of rows of data which can be contained in an SPSS data file.
>A secondary concern would relate to the maximum number of columns.
I append an excellent posting by Jon Peck of SPSS, Inc., that discussed
the limits in detail. In brief,
- The ABSOLUTE limits for variables ("columns") and cases ("rows") will
almost certainly never affect you
- SPSS is able to handle a great many cases ("rows"), provided you have
the disk space. (You should have free disk space several times the size
of your data.) For most operations, running time grows roughly in
proportion to the number of rows, but no worse than that.
- SPSS operations take longer roughly in proportion to the number of
variables ("columns"), UP TO A POINT. Beyond that point (not a hard and
fast one, of course), performance deteriorates much more drastically.
You'll also find the data harder to view and to work with. You'll
probably find anything more than about 50 variables awkward, and fewer
than about 20 the easiest. Many studies that seem to need far more than
that can easily be split into files with fewer variables.
>Date: Thu, 5 Jun 2003 09:25:37 -0500
>From: "Peck, Jon" <firstname.lastname@example.org>
>Subject: Re: Is there a limit of number of variables for recent
>versions of SPSS
>There are several points to make regarding very wide files and huge
>numbers of cases.
>First, the theoretical SPSS limits are
>Number of variables: (2**31) - 1
>Number of cases: (2**31) - 1
>In calculating these limits, count one for each 8 bytes or part
>thereof of a string variable. An A10 string variable counts as two
>variables, for example.
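That counting rule can be sketched in a few lines of Python (a hypothetical helper for illustration, not anything shipped with SPSS; `variable_slots` and its arguments are made-up names):

```python
import math

def variable_slots(var_type, width=None):
    """Count how many slots a variable occupies toward the (2**31 - 1)
    limit: a numeric variable takes one slot; a string variable takes
    one slot per 8 bytes (or part thereof) of its declared width."""
    if var_type == "numeric":
        return 1
    # String: one slot for each started 8-byte chunk, e.g. A10 -> 2.
    return math.ceil(width / 8)

print(variable_slots("string", 10))  # A10 counts as two variables
print(variable_slots("string", 8))   # A8 counts as one
print(variable_slots("numeric"))     # a numeric variable counts as one
```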
>Approaching the theoretical limit on the number of variables, however,
>is a very bad idea in practice for several reasons.
>1. These are the theoretical limits in that you absolutely cannot go
>beyond them. But there are other environmentally imposed limits that
>you will surely hit first. For example, Windows applications are
>absolutely limited to 2GB of addressable memory, and 1GB is a more
>practical limit. Each dictionary entry requires about 100 bytes of
>memory, because in addition to the variable name, other variable
>properties also have to be stored. (On non-Windows platforms, SPSS
>Server could, of course, face different environmental
>limits.) Numerical variable values take 8 bytes as they are held as
>double precision floating point values.
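A quick back-of-the-envelope check of those figures (the ~100 bytes per dictionary entry and 8 bytes per numeric value are the approximations quoted above; the function names are made up for illustration):

```python
def dictionary_memory_mb(n_variables, bytes_per_entry=100):
    """Approximate RAM held by the variable dictionary alone, using
    the ~100-bytes-per-entry figure quoted above."""
    return n_variables * bytes_per_entry / 1024**2

def case_width_bytes(n_numeric_variables):
    """Each numeric value is held as an 8-byte double."""
    return n_numeric_variables * 8

# 100,000 numeric variables: the dictionary alone is ~10 MB, and each
# case is 800,000 bytes wide -- so a million such cases means moving
# roughly 800 GB, which is why very wide files perform poorly.
print(round(dictionary_memory_mb(100_000), 1))
print(case_width_bytes(100_000))
```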
>2. The overhead of reading and writing extremely wide cases when you
>are doubtless not using more than a small fraction of them will limit
>performance. And you don't want to be paging the variable
>dictionary. If you have lots of RAM, you can probably reach between
>32,000 and 100,000 variables before memory paging degrades performance.
>3. Dialog boxes cannot display very large variable lists. You can use
>variable sets to restrict the lists to the variables you are really
>using, but lists with thousands of variables will always be awkward.
>4. Memory usage is not just about the dictionary. The operating
>system will almost always be paging code and data between memory and
>disk. (You can look at paging rates via the Windows Task
>Manager.) The more you page, the slower things get, but the variable
>dictionary is only one among many objects that the operating system is
>juggling. However, there is another effect. On NT and later, Windows
>automatically caches files (code or data) in memory so that it can
>retrieve it quickly. This cache occupies memory that is otherwise
>surplus, so if any application needs it, portions of the cache are
>discarded to make room. You can see this effect quite clearly if you
>start SPSS or any other large application; then shut it down and start
>it again. It will load much more quickly the second time, because it
>is retrieving the code modules needed at startup from memory rather
>than disk. The Windows cache, unfortunately, will not help data
>access very much unless most of the dataset stays
>in memory, because the cache will generally hold the most recently
>accessed data. If you are reading cases sequentially, the one you
>just finished with is the LAST one you will want again.
>5. These points apply mainly to the number of variables. The number
>of cases is not subject to the same problems, because the cases are
>not generally all mapped into memory by SPSS (although Windows may
>cache them). However, some procedures do have to hold the entire
>dataset in memory because of their computational requirements, so
>those will not scale well to immense numbers of cases.
>The point of having an essentially unlimited number of variables is
>not that you really need to go to that limit. Rather it is to avoid
>hitting a limit incrementally. It's like infinity. You never want to
>go there, but any value smaller is an arbitrary limit, which SPSS
>tries to avoid. It is better not to have a hard stopping rule.
>Modern database practice would be to break up your variables into
>cohesive subsets and combine these with join (MATCH FILES in SPSS)
>operations when you need variables from more than one subset. SPSS is
>not a relational database, but working this way will be much more
>efficient and practical with very large numbers of variables.
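That subset-and-join pattern can be illustrated in plain Python (a sketch with made-up variable names, not SPSS syntax; in SPSS itself you would use MATCH FILES with a BY subcommand on a shared case identifier):

```python
# Two "cohesive subsets" of variables, both keyed by a case id --
# analogous to two narrow SPSS files that MATCH FILES would combine.
demographics = {
    1: {"age": 34, "sex": "F"},
    2: {"age": 51, "sex": "M"},
}
lab_results = {
    1: {"glucose": 5.4},
    2: {"glucose": 6.1},
}

def match_files(left, right):
    """Join two keyed tables on their shared case id. This keeps only
    cases present in both tables (a simplification: MATCH FILES also
    keeps unmatched cases, filling missing values)."""
    return {case_id: {**left[case_id], **right[case_id]}
            for case_id in left.keys() & right.keys()}

merged = match_files(demographics, lab_results)
print(merged[1])  # {'age': 34, 'sex': 'F', 'glucose': 5.4}
```

Each subset file stays narrow and fast to work with, and you pay the cost of a wide record only when an analysis actually needs variables from more than one subset.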