Author: AskSQLTeam - Ask SQLTeam Question (0 Posts)
Posted - 2005-03-07 : 08:13:18
Rowan writes "I am using BULK INSERT to read data into a temp table, and then insert from that table into the real database table. The following is my code:

set @SQL = 'Bulk Insert #ImportDataComp FROM ''' + @@PathName + '\' + @ImpFile + ''' with (datafiletype = ''char'', fieldterminator = '','', rowterminator = ''\n'', firstrow = 10)'
exec (@SQL)

I am using the FIRSTROW argument to skip some leading lines which I don't want. But the file also ends with some trailing carriage returns (4) and a final row of hyphens/dashes. If I leave these on the end of the file, BULK INSERT returns an EOF error, so I am forced to edit the files before the import process can be run.

I am attempting to make this process automatic, with no human intervention. Can anyone suggest a SQL method for handling an unedited file? The files are quite large (too large to be opened in Excel) and contain a varying number of records. The one positive is that, as with the header lines, the number of extra trailing lines is constant.

Is it practical to read the data, determine the actual number of lines (including the ones I don't want), and then use the LASTROW argument?

My thanks in advance for your time and efforts.

Regards,
Rowan

PS: SQL Server 2000 - sp3a"
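Rowan's closing idea (count the lines, then pass LASTROW) is workable, since the number of trailing junk lines is constant. A minimal sketch of the counting step, assuming 5 trailing junk lines (the 4 blank lines plus the dash row); `datafile.txt` is a placeholder file name, built here only to illustrate:

```shell
# Build a sample file shaped like the import: 2 header lines, 3 data rows,
# 4 blank lines, and a final row of dashes (10 lines total).
printf 'hdr1\nhdr2\nA,1\nB,2\nC,3\n\n\n\n\n-----\n' > datafile.txt

# LASTROW = total line count minus the 5 constant trailing junk lines.
total=$(wc -l < datafile.txt)
lastrow=$((total - 5))
echo "$lastrow"
```

The computed value (5 here, matching the last real data row) would then be concatenated into the dynamic SQL alongside FIRSTROW.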
Author: robvolk - Most Valuable Yak (15732 Posts)
Posted - 2005-03-07 : 08:15:25
You can use a command-line utility to find the non-blank lines and redirect them to another file, then import that new file instead. Findstr will do the trick:

findstr /R "[a-z0-9]" file1.txt >file2.txt

That will find any line containing an alphanumeric character and output it to file2.txt (change this name to suit you). Blank lines will not contain these characters and will be skipped.
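For anyone testing this idea outside Windows, grep performs the same filtering as the findstr command above. A sketch with hypothetical file names, building a sample input in place:

```shell
# Sample import file: two data rows, trailing blank lines, and a dash row.
printf 'A,1\nB,2\n\n\n\n\n-----\n' > file1.txt

# Keep only lines containing an alphanumeric character, the same filter the
# findstr command applies; blank lines and the all-dash row are dropped.
grep '[A-Za-z0-9]' file1.txt > file2.txt
```

The resulting file2.txt contains only the two data rows, which should import cleanly without the EOF error.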