In the GET DATA syntax for text files, there is a subcommand that tells SPSS where the data start:

/FIRSTCASE = n

where n is the number of the first row of data. You might try experimenting with this.
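Because FIRSTCASE makes SPSS begin reading at the specified row, the header and key rows are never parsed at all, so they generate no error messages. A minimal sketch of the idea, with a hypothetical file path and variable list (both would need to be adjusted to the real CSV):

```
* Read the CSV starting at row 3, skipping the header row and the key row.
GET DATA
  /TYPE=TXT
  /FILE='C:\data\testdata.csv'
  /DELIMITERS=","
  /ARRANGEMENT=DELIMITED
  /FIRSTCASE=3
  /VARIABLES=
    examinee A10
    item1    A1
    item2    A1.
```

Since the key row (row 2) is still needed at the start of the analysis, it should be possible to pull it in separately with a second GET DATA step using /FIRSTCASE=2 together with /IMPORTCASE=FIRST 1, and discard it once scoring is done.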
Senior Market Analyst
LodgeNet Entertainment Corporation
From: Snider-Lotz, Tom [mailto:TSnider-Lotz@qwiz.com]
Sent: Wednesday, July 23, 2003 9:07 AM
Subject: How to deal with "junk" lines in raw data file
I often analyze data on our tests, drawn from a data warehouse. I receive the data in a CSV (comma separated) file, and each file has two rows at the top that don't contain test data: Row 1 contains headers (most of them not usable in SPSS) and Row 2 contains the test key (correct answers). Row 1 is of no use at all in the analysis, Row 2 is used at the beginning of the analysis and then discarded.
I'm trying to set up a system for reading the CSV file directly into SPSS, junk rows and all. Right now I use a DATA LIST command. It ultimately works fine, but of course it generates a lot of error messages because of the non-data in the first two rows -- especially the header row.
Is there a way to read this CSV file without getting these error messages? I don't want to turn off all error messages, just the ones based on these two rows. For example, is there a way to tell SPSS to ignore the first row when reading the data?
Thanks for your help.
-- Tom Snider-Lotz
Thomas G. Snider-Lotz, Ph.D.
1805 Old Alabama Road
Roswell, GA 30076
Remember that a lone amateur built the Ark. A large group of professionals built the Titanic.
-- Dave Barry