DestinyJack
Starting Member
22 Posts
Posted - 2007-06-04 : 22:40:53
Hi All,

We are using DTS to import various CSV files (about 50+ files of various sizes, some of them up to a few GB) into a SQL Server database. The first few files didn't give any errors, so I just left it to run for the night. When I came back this morning, I saw an "Import failed" error from SQL Server. I checked the event viewer and found the following errors:

5145: Autogrow of file 'dabase1_Log' in database 'dabase1' took 92062 milliseconds. Consider using ALTER DATABASE to set a smaller FILEGROWTH for this file.

17053: LogWriter: Operating system error 33(error not found) encountered.

Event ID 17052: Error: 9001, Severity: 21, State: 1. The log for database 'dabase1' is not available.

It seems like the log file grew too large. Does anyone know what happened here, and is there any solution to resolve this issue? The recovery model of the database is set to Simple, and the transaction log is set to autogrowth and autoshrink.

Thank you.
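For reference, this is how I checked the current settings (a rough sketch; 'dabase1' is the database named in the error messages):

-- Report the recovery model of the database
SELECT DATABASEPROPERTYEX('dabase1', 'Recovery')

-- List the data and log files with their current size and growth settings
USE dabase1
EXEC sp_helpfile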
rmiao
Master Smack Fu Yak Hacker
7266 Posts
Posted - 2007-06-04 : 23:03:56
Try running 'BACKUP LOG db_name WITH TRUNCATE_ONLY' between steps, or set the db to the bulk-logged recovery model before loading the data.
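Something like this between loads (a sketch only; substitute your actual database name for 'dabase1'):

-- Discard the inactive part of the log between import steps
BACKUP LOG dabase1 WITH TRUNCATE_ONLY

-- Or switch to bulk-logged before starting the load
ALTER DATABASE dabase1 SET RECOVERY BULK_LOGGED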
DestinyJack
Starting Member
22 Posts
Posted - 2007-06-04 : 23:44:48
Thanks for the reply, but is there any reason I need to set the db to the bulk-logged recovery model? I've set it to Simple because I don't really need the log file, and also to keep the log file small.

Thanks again :)
Kristen
Test
22859 Posts
Posted - 2007-06-05 : 04:51:30
The default growth increment (FILEGROWTH) will be 10%. For a multi-GB database that is a LARGE extension, and it can take so long to perform (esp. SQL 2000 rather than SQL2k5) that things start timing out.

Change that to a fixed extension size of, say, 100MB. It will mean more extensions, and thus more fragmentation, but less interruption to the processes that are running.

Use CONTIG.EXE (from Sysinternals) to physically defrag the file once it has grown to a size that is in equilibrium.

Setting the initial size of the database to the size that you anticipate it will grow to would be good too.
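For example (a sketch, not tested; the logical file name 'dabase1_Log' and the database name come from the error messages above, and 2000MB is just an illustrative target size):

-- Change the log file from 10% growth to a fixed 100MB increment
ALTER DATABASE dabase1
MODIFY FILE (NAME = 'dabase1_Log', FILEGROWTH = 100MB)

-- Pre-size the log up front so it doesn't have to grow mid-import
ALTER DATABASE dabase1
MODIFY FILE (NAME = 'dabase1_Log', SIZE = 2000MB)

Kristen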
rmiao
Master Smack Fu Yak Hacker
7266 Posts
Posted - 2007-06-05 : 10:20:54
The bulk-logged recovery model generates fewer log entries for bulk operations. The simple recovery model truncates committed transactions at each checkpoint, but that doesn't help if there is a long-running transaction. SQL Server still logs everything even when the db is in the simple recovery model.
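You can watch this happening (a sketch; 'dabase1' assumed from the posts above):

-- Show log size and percent of log space in use for every database
DBCC SQLPERF(LOGSPACE)

-- Report the oldest active transaction that is keeping the log from truncating
DBCC OPENTRAN('dabase1')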
DestinyJack
Starting Member
22 Posts
Posted - 2007-06-06 : 02:45:12
I see, I will try the suggested solutions. Thanks.