Bex
Aged Yak Warrior
580 Posts
Posted - 2009-12-04 : 05:24:33
I have an archive process which copies 27GB worth of data from one DB to another. However, the process keeps failing with the following error message:

The transaction log for database 'CRDMPointOfSale_Archive' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

The source and destination databases are both in simple recovery mode. The process that copies the data is a series of INSERT INTO statements for 18 tables. The insert statements are not wrapped in a transaction. The log is already 45GB in size, and was truncated the evening before the process was run.

Can anyone explain why the log is getting so large, and especially why it is growing beyond the size of the data it is copying (substantially so, in fact)?

What can we do to resolve this?

Thanks

Hearty head pats
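For illustration, each of the 18 copy steps is just a plain INSERT ... SELECT along these lines (the source database, table, and column names below are made up for illustration; only the archive database name comes from the error message), and log usage can be watched with DBCC SQLPERF:

-- Illustrative sketch only: source DB, table and column names are hypothetical,
-- but each of the 18 archive steps is a plain INSERT ... SELECT like this one.
INSERT INTO CRDMPointOfSale_Archive.dbo.PosTransaction (TransactionID, StoreID, TransactionDate, Amount)
SELECT TransactionID, StoreID, TransactionDate, Amount
FROM   CRDMPointOfSale.dbo.PosTransaction
WHERE  TransactionDate < DATEADD(DAY, -90, GETDATE());

-- Log size and percentage used for every database on the instance
DBCC SQLPERF (LOGSPACE);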
russell
Pyro-ma-ni-yak
5072 Posts
Posted - 2009-12-04 : 08:08:02
1. Stop truncating the log.

2. You have it set to restricted growth, thus it's running out of room. You can change it to unrestricted growth.

The log is growing to that size because that's how much room is required for the operations you're performing.

Is there any way you can do these inserts in bulk?
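A minimal sketch of switching the archive database's log file to unrestricted growth (the logical file name used below is an assumption; check the real one with sp_helpfile first):

-- Find the logical log file name first (the name below is an assumption)
USE CRDMPointOfSale_Archive;
EXEC sp_helpfile;

-- Allow the log file to grow without an upper limit
ALTER DATABASE CRDMPointOfSale_Archive
MODIFY FILE (NAME = CRDMPointOfSale_Archive_log, MAXSIZE = UNLIMITED);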
GilaMonster
Master Smack Fu Yak Hacker
4507 Posts
Posted - 2009-12-04 : 08:38:30
quote: Originally posted by Bex

The transaction log for database 'CRDMPointOfSale_Archive' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

So what does the log_reuse_wait_desc column in sys.databases say?

--
Gail Shaw
SQL Server MVP
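For reference, the check being asked for is simply:

-- Shows why log space in the archive database cannot currently be reused
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM   sys.databases
WHERE  name = 'CRDMPointOfSale_Archive';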
Bex
Aged Yak Warrior
580 Posts
Posted - 2009-12-04 : 11:58:13
Hi Guys

Thank you for your responses, and sorry for not getting back to you sooner. We have developed a solution, once we understood how SQL Server flushes the log in SIMPLE recovery mode.

The issue appeared to be the way SQL Server decides when to checkpoint, and therefore when log space is next freed up (see the extract from MSDN below). Based on that, we realised that the log space used did not reach 70% for some of the insert statements, so the log would not be cleared. However, it may have reached, say, 65%. When the next insert statement was issued, it would take the log over the 70% threshold for checkpointing (as each statement copies a large amount of data), and that same statement would then blow the log (take it over 100% full) before SQL Server could checkpoint and flush it.

We resolved the issue by separating each insert statement with a CHECKPOINT command, which explicitly clears the log after each statement.

According to the MSDN library, http://msdn.microsoft.com/en-us/library/ms189573(SQL.90).aspx :

Automatic Checkpoints

The SQL Server Database Engine generates automatic checkpoints. The interval between automatic checkpoints is based on the amount of log space used and the time elapsed since the last checkpoint. The time interval between automatic checkpoints can be highly variable and long, if few modifications are made in the database. Automatic checkpoints can also occur frequently if lots of data is modified.

Use the recovery interval server configuration option to calculate the interval between automatic checkpoints for all the databases on a server instance. This option specifies the maximum time the Database Engine should use to recover a database during a system restart. The Database Engine estimates how many log records it can process in the recovery interval during a recovery operation. The interval between automatic checkpoints also depends on the recovery model:

• If the database is using either the full or bulk-logged recovery model, an automatic checkpoint is generated whenever the number of log records reaches the number the Database Engine estimates it can process during the time specified in the recovery interval option.

• If the database is using the simple recovery model, an automatic checkpoint is generated whenever the number of log records reaches the lesser of these two values:

  • The log becomes 70 percent full.

  • The number of log records reaches the number the Database Engine estimates it can process during the time specified in the recovery interval option.

Hearty head pats
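In practice the change was just to issue a CHECKPOINT between the archive inserts, roughly like this (table and column names are again illustrative, not the actual archive code):

-- Illustrative only: the real job covers 18 tables; names here are made up.
USE CRDMPointOfSale_Archive;

INSERT INTO dbo.PosTransaction (TransactionID, StoreID, TransactionDate, Amount)
SELECT TransactionID, StoreID, TransactionDate, Amount
FROM   CRDMPointOfSale.dbo.PosTransaction;

CHECKPOINT;  -- flush dirty pages and let the SIMPLE-mode log be reused

INSERT INTO dbo.PosItem (ItemID, TransactionID, ProductCode, Quantity)
SELECT ItemID, TransactionID, ProductCode, Quantity
FROM   CRDMPointOfSale.dbo.PosItem;

CHECKPOINT;

-- ...and so on for the remaining tables.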
GilaMonster
Master Smack Fu Yak Hacker
4507 Posts
Posted - 2009-12-05 : 06:19:41
quote: Originally posted by Bex

We resolved the issue by separating each insert statement with a CHECKPOINT command, which would explicitly clear the log after each statement.

Every single insert in the entire app? Sure that's a good idea? Have you checked what IO impact that'll have?

--
Gail Shaw
SQL Server MVP
Bex
Aged Yak Warrior
580 Posts
Posted - 2009-12-07 : 04:35:31
Hi Gail

The process I am referring to is an archive procedure, so it executes only once each day and bulk copies data from 18 tables into the archive DB. Therefore, there are only 18 insert statements.

That said, testing was only done on a subset of data, and the job has failed again with the same error message. So it may be that a single insert statement is blowing the log. We do have an ITEM table which contains millions (up to tens of millions) of records per day.

We are thinking of breaking that copy into a series of smaller batch selects, as sketched below.

Is it right, though, for the log to fill when copying only the one table? This table is definitely no more than 15-20GB at the very most, and the log is 45GB (and all 45GB is free at the time the process is kicked off)?

Hearty head pats
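A minimal sketch of the batched approach being considered for the large ITEM table. The table and column names, the key used to drive the batches, and the batch size are all assumptions, not the actual archive code:

-- Illustrative sketch only: names, batching key and batch size are assumptions.
USE CRDMPointOfSale_Archive;

DECLARE @LastID bigint, @MaxID bigint, @BatchSize int;
SET @BatchSize = 500000;

SELECT @LastID = 0, @MaxID = MAX(ItemID)
FROM   CRDMPointOfSale.dbo.PosItem;

WHILE @LastID < @MaxID
BEGIN
    INSERT INTO dbo.PosItem (ItemID, TransactionID, ProductCode, Quantity)
    SELECT ItemID, TransactionID, ProductCode, Quantity
    FROM   CRDMPointOfSale.dbo.PosItem
    WHERE  ItemID >  @LastID
    AND    ItemID <= @LastID + @BatchSize;

    SET @LastID = @LastID + @BatchSize;

    CHECKPOINT;  -- let the SIMPLE-mode log be reused between batches
END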