tfountain
Constraint Violating Yak Guru
491 Posts

Posted - 2004-03-18 : 16:04:42
OK, I have a situation where my inbound file has fewer columns than the destination table. Let's say there are 3 columns in the source file and 10 in the table, defined as follows (the extra columns are standard auditing columns we use). My actual file is more like 100+ columns, so I've pared the example down. This is an import into a staging table, so all the data is going into NVARCHAR columns of the same size as the source file.

Source file (fixed-length ASCII):
- FNAME - 30 length
- MNAME - 15 length
- LNAME - 30 length

My table is defined as follows:
- InternalGUID UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID()
- FirstName NVARCHAR(30) NOT NULL DEFAULT ''
- MiddleName NVARCHAR(15) NOT NULL DEFAULT ''
- LastName NVARCHAR(30) NOT NULL DEFAULT ''
- UserAdded NVARCHAR(20) NOT NULL DEFAULT (LEFT(SUSER_SNAME(), 20))
- DateAdded DATETIME NOT NULL DEFAULT (GETDATE())
- HostAdded NVARCHAR(20) NOT NULL DEFAULT (LEFT(HOST_NAME(), 20))
- UserModified NVARCHAR(20) NOT NULL DEFAULT (LEFT(SUSER_SNAME(), 20))
- DateModified DATETIME NOT NULL DEFAULT (GETDATE())
- HostModified NVARCHAR(20) NOT NULL DEFAULT (LEFT(HOST_NAME(), 20))

I've followed the BOL examples of using BCP to create a format file, but the output does not seem correct. Before I manually edit the file to what I think it should be, I'd like a little verification :). This is what I think the file should look like:

8.0
3
0 SQLUNIQUEID 0 0 "" 0 InternalGUID ""
1 SQLNCHAR 0 30 "" 2 FirstName SQL_Latin1_General_CP1_CI_AS
2 SQLNCHAR 0 15 "" 3 MiddleName SQL_Latin1_General_CP1_CI_AS
3 SQLNCHAR 0 30 "\r\n" 4 LastName SQL_Latin1_General_CP1_CI_AS
0 SQLNCHAR 0 0 "" 0 UserAdded SQL_Latin1_General_CP1_CI_AS
0 SQLDATETIME 0 0 "" 0 DateAdded ""
0 SQLNCHAR 0 0 "" 0 HostAdded SQL_Latin1_General_CP1_CI_AS
0 SQLNCHAR 0 0 "" 0 UserModified SQL_Latin1_General_CP1_CI_AS
0 SQLDATETIME 0 0 "" 0 DateModified ""
0 SQLNCHAR 0 0 "" 0 HostModified SQL_Latin1_General_CP1_CI_AS

**EDIT** I messed up the server order of column 3; I had 5 by accident.

Notice the columns not being imported have a 0 for the host file order, prefix length, host file length, and server order, and nothing for a column terminator. The last column I expect in the file has the "\r\n" terminator. Am I missing anything?
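For context, a load driven by a format file of this shape would be invoked along these lines. This is only a sketch: the table name, file path, and format-file path are placeholders, not anything given in the thread.

```sql
-- Placeholder names throughout: dbo.StagingNames, C:\Imports\names.txt, names.fmt.
-- The format file describes the fixed-length ASCII fields and their
-- mapping to the staging table's columns.
BULK INSERT dbo.StagingNames
FROM 'C:\Imports\names.txt'
WITH (
    FORMATFILE   = 'C:\Imports\names.fmt',
    DATAFILETYPE = 'char',  -- single-byte ASCII source file
    TABLOCK                 -- coarse lock; typical for a staging load
);
```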
nr
SQLTeam MVY
12543 Posts

Posted - 2004-03-18 : 17:45:08
Have a look at http://www.nigelrivett.net/BCP_quoted_CSV_Format_file.html

It includes omitting columns using a format file. Post back if it doesn't solve the problem. I create format files using Query Analyzer or Notepad.

==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.
tfountain
Constraint Violating Yak Guru
491 Posts

Posted - 2004-03-19 : 09:36:11
Nigel, thanks for the links. They did assist me in getting the BULK INSERT statement to work. Sort of. What I did was simply remove the columns from the format file that are not in the import file. However, I am now having another issue, and I believe it's related to the fact that the destination columns are defined as NVARCHAR. Basically, the data comes in as funky character codes rather than readable values. I think it's the NVARCHAR destination columns because when I changed both the table definitions and the format file to VARCHAR and SQLCHAR respectively, the BULK INSERT worked perfectly and the data was what it should be. Any tips on dealing with this? I need the destination columns to be NVARCHAR.

FYI - I did try the DATAFILETYPE = 'widechar' option, but all that did was generate a message saying the file did not have a Unicode signature, so the option defaulted to CHAR.
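One likely explanation, offered here as a note rather than anything confirmed in the thread: in a bcp/BULK INSERT format file, the datatype describes the bytes in the *data file*, not the table column. For a single-byte ASCII file the fields should stay SQLCHAR even when the destination columns are NVARCHAR; SQL Server widens char to nchar during the load. SQLNCHAR instead declares the file itself to be two-byte Unicode, so the engine reads the ASCII bytes in pairs, which produces exactly this kind of garbage. Under that assumption, a trimmed format file like the following (field layout and server ordinals taken from the table definition above) should load the ASCII file cleanly into the NVARCHAR table:

```
8.0
3
1  SQLCHAR  0  30  ""      2  FirstName   SQL_Latin1_General_CP1_CI_AS
2  SQLCHAR  0  15  ""      3  MiddleName  SQL_Latin1_General_CP1_CI_AS
3  SQLCHAR  0  30  "\r\n"  4  LastName    SQL_Latin1_General_CP1_CI_AS
```

Only the three fields present in the file are listed; the unmapped columns pick up their DEFAULT constraints.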
nr
SQLTeam MVY
12543 Posts

Posted - 2004-03-19 : 09:46:58
Sorry - missed this when I first read it.

If you have a staging table, why not make the staging table the same format as the file? Then you don't need a format file at all. The missing columns are handled by the SP that moves the data to the production table. It also means that if the production tables change you only have to change the move SP, and if the file format changes you only have to change the staging table.

Actually, for a fixed-length file I would import into a single-column table and use SUBSTRING to get the data.

==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.
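The single-column approach can be sketched like this. Table and column names are invented for illustration; the offsets follow the 30/15/30 layout described earlier in the thread.

```sql
-- One-column holding table: each row is one raw fixed-length line.
CREATE TABLE dbo.RawImport (line VARCHAR(80) NOT NULL);

-- Load it with a plain BULK INSERT (no format file needed), e.g.:
-- BULK INSERT dbo.RawImport FROM 'C:\Imports\names.txt' WITH (DATAFILETYPE = 'char');

-- Carve the fixed-width fields out with SUBSTRING in the move step.
-- FNAME occupies positions 1-30, MNAME 31-45, LNAME 46-75.
INSERT INTO dbo.Staging (FirstName, MiddleName, LastName)
SELECT RTRIM(SUBSTRING(line,  1, 30)),
       RTRIM(SUBSTRING(line, 31, 15)),
       RTRIM(SUBSTRING(line, 46, 30))
FROM dbo.RawImport;
```

If the file layout changes, only the SUBSTRING offsets change; the load itself stays the same.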
tfountain
Constraint Violating Yak Guru
491 Posts

Posted - 2004-03-19 : 14:50:26
I have decided to make the destination columns VARCHAR instead of NVARCHAR. Obviously, if they're sending me a straight ASCII file, then I don't have any concerns about Unicode data :). It all works like a charm now. Thanks Nigel! I did not go with the single-column method, though, due to the number of files and columns I will be dealing with; I think it would become unmanageable fast.
nr
SQLTeam MVY
12543 Posts

Posted - 2004-03-21 : 21:43:47
If I have a lot of different files to deal with, I import into a single table (maybe a global temp table) and put the format of the file, the destination table, and the column mappings into a definition table - something like http://www.nigelrivett.net/MoveImportData.html

Then adding a new file and table import is just a matter of adding entries to the definition tables. Sometimes I put a front end around it so that non-techies can add them themselves.

==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.
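A minimal sketch of such a definition table; all names here are invented for illustration, and the linked page describes the full version.

```sql
-- Each row maps one fixed-width field of one inbound file
-- to a destination table and column.
CREATE TABLE dbo.ImportDefinition (
    FileName    VARCHAR(128) NOT NULL,  -- which inbound file this row describes
    FieldStart  INT          NOT NULL,  -- 1-based offset of the field in the line
    FieldLength INT          NOT NULL,  -- width of the field
    DestTable   VARCHAR(128) NOT NULL,
    DestColumn  VARCHAR(128) NOT NULL
);

-- A generic move SP would read these rows, build the
-- SUBSTRING/INSERT statement dynamically, and execute it
-- for whichever file was just loaded into the staging table.
```

Registering a new feed then means inserting rows here rather than writing a new import routine.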