
Query to grow the transaction log.


cornall
Posting Yak Master

148 Posts

Posted - 2009-02-26 : 11:41:31
Hi,

I want to demonstrate how a transaction grows the transaction log. Can anyone suggest a SQL query that will cause rapid growth of the transaction log?

P.S. I am not trying to get a SQL command to programmatically grow the log; I want to simulate how it grows when transactions run.

Cheers Dave

souLTower
Starting Member

39 Posts

Posted - 2009-02-26 : 12:08:11
-- Check transaction log size here
BEGIN TRANSACTION

DECLARE @ct int
SET @ct = 0

WHILE @ct < 1000000
BEGIN
    INSERT INTO myTable (fldA, fldB, fldC)
    SELECT TOP 1 fldA, fldB, fldC FROM myTable
    SET @ct = @ct + 1
END

-- Check size here
ROLLBACK TRANSACTION
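
In case it helps anyone following along, one way to take those "check size here" measurements (a sketch; DBCC SQLPERF(LOGSPACE) is a standard command, and myDB below is just a placeholder database name):

-- Report log size and percent used for every database
DBCC SQLPERF(LOGSPACE)

-- Or for one database only: sysfiles stores size in 8 KB pages,
-- and the 0x40 bit in status marks a log file
SELECT name, size * 8 / 1024 AS size_mb
FROM myDB.dbo.sysfiles
WHERE status & 0x40 <> 0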

God Bless

cornall
Posting Yak Master

148 Posts

Posted - 2009-02-27 : 04:47:03
Thanks,

This leads perfectly on to my next questions :-)

Whereas the code above grows the log, if I remove the transaction lines (as below) the log doesn't grow. Why?


DECLARE @ct int
SET @ct = 0

WHILE @ct < 1000000
BEGIN
    INSERT INTO myTable (fldA, fldB, fldC)
    SELECT TOP 1 fldA, fldB, fldC FROM myTable
    SET @ct = @ct + 1
END



Also, I would expect this to cause the log to grow less (database in simple recovery mode); note I have moved the commit inside the loop. However, the transaction log grows at exactly the same rate. Why?


DECLARE @ct int
SET @ct = 0

WHILE @ct < 1000000
BEGIN
    BEGIN TRANSACTION
    INSERT INTO myTable (fldA, fldB, fldC)
    SELECT TOP 1 fldA, fldB, fldC FROM myTable
    COMMIT TRANSACTION
    SET @ct = @ct + 1
END



Thanks again, all. I'm just trying to understand BEGIN and COMMIT so I can optimise some queries that make the transaction log grow quickly.

souLTower
Starting Member

39 Posts

Posted - 2009-02-27 : 06:48:18
The function of the transaction log is not so much to document what has happened as to preserve what is happening. That is how the database can recover if there's a major system failure.

When you begin a transaction you are telling the DB that everything happening inside it must happen fully and completely, or not at all. The only way the DB can guarantee this is to log the events, in this case the inserts, until you commit. That's why the log grows more when you are holding 1 million inserts open in one transaction as opposed to a single insert (which, by the way, doesn't need a BEGIN TRANSACTION statement; single commands run within a transaction by default).

You say that you are trying to keep the transaction log from growing quickly. Is it growing too big? There are ways of changing the max size in Enterprise Manager. If there are operations happening within a transaction, I would be careful and think it through before changing them; the transaction log is your friend if something bad happens. Are you using a daily, hourly, or more frequent backup strategy? Your backups can trim the log as well.
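
For example, a minimal sketch of trimming the log with a backup (this assumes a database named myDB in full recovery and a backup path that exists; in simple recovery the log is instead truncated at checkpoints, so log backups don't apply):

-- Back up the log; the inactive portion can then be reused
BACKUP LOG myDB TO DISK = 'C:\backups\myDB_log.bak'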

Happy to help

God Bless

cornall
Posting Yak Master

148 Posts

Posted - 2009-02-27 : 10:46:35
Hi, thanks again for the reply.

We aren't backing up the transaction logs; the databases are in simple mode. We take nightly backups and don't need point-in-time restore ability. We are running a bunch of queries that grow the transaction logs massively, so I want to show the owners of the import scripts how they can improve them. I was hoping the addition of BEGIN TRANSACTION and COMMIT TRANSACTION would help them break the process down into transactions and grow the log less.

I still don't understand why this grows the log:


DECLARE @ct int
SET @ct = 0

WHILE @ct < 1000000
BEGIN
    BEGIN TRANSACTION
    INSERT INTO myTable (fldA, fldB, fldC)
    SELECT TOP 1 fldA, fldB, fldC FROM myTable
    COMMIT TRANSACTION
    SET @ct = @ct + 1
END



Thanks Dave

robvolk
Most Valuable Yak

15732 Posts

Posted - 2009-02-27 : 11:24:43
See if this makes a difference:
DECLARE @ct int
SET @ct = 0

WHILE @ct < 1000000
BEGIN
    BEGIN TRANSACTION
    INSERT INTO myTable (fldA, fldB, fldC)
    SELECT TOP 1 fldA, fldB, fldC FROM myTable
    COMMIT TRANSACTION
    SET @ct = @ct + 1
    CHECKPOINT
END
My guess is the explicit transaction alters the normal checkpointing process.
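
One way to watch what is pinning the log (a sketch; DBCC OPENTRAN is a standard command): while the loop is running, run this from a second connection. Log truncation cannot pass the oldest active transaction it reports.

-- Run in another connection against the same database
DBCC OPENTRAN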

souLTower
Starting Member

39 Posts

Posted - 2009-02-27 : 12:14:00
Right. Sorry, I missed that part. Checkpointing is a good call.

God Bless

cornall
Posting Yak Master

148 Posts

Posted - 2009-02-27 : 13:02:44
Thanks,

So is there any good rule for when I should advise them to run the CHECKPOINT command and group code into BEGIN and COMMIT TRANSACTION blocks?

Dave

souLTower
Starting Member

39 Posts

Posted - 2009-02-27 : 13:15:01
I wouldn't look to the size of the log as an indicator of a good or bad script. Have you explored the log options in Enterprise Manager? The goal of a script is to do what is intended as efficiently as possible, not to make sure that the transaction log stays in check.

God Bless

cornall
Posting Yak Master

148 Posts

Posted - 2009-03-02 : 09:54:37
Thanks,

Although I generally agree with what you are saying, I think there are some cases where the script should be tuned, like mass deletes. Use something like:


WHILE EXISTS (SELECT * FROM test WHERE test = 'test')
BEGIN
    SET ROWCOUNT 100000
    DELETE test WHERE test = 'test'
    SET ROWCOUNT 0
END
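
Combining that batching with robvolk's CHECKPOINT suggestion might help even more in simple recovery (a sketch against the same hypothetical test table; the checkpoint lets the log space from each batch be reused):

WHILE EXISTS (SELECT * FROM test WHERE test = 'test')
BEGIN
    SET ROWCOUNT 100000
    DELETE test WHERE test = 'test'
    SET ROWCOUNT 0
    CHECKPOINT   -- allow log truncation between batches (simple recovery)
END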


souLTower
Starting Member

39 Posts

Posted - 2009-03-02 : 10:06:18
I'm certainly a fan of tuning scripts for performance, but that has to be one of the ugliest statements I've seen. Am I crazy, or does this system have some type of issue where a simple one-liner,

delete test where test = 'test'

would not suffice?

I would suggest, though, that instead of focusing on the trans log you run SQL Profiler and find out exactly which statements are causing bottlenecks.

Onward and Upward

God Bless

cornall
Posting Yak Master

148 Posts

Posted - 2009-03-02 : 10:14:22
It's just a silly example: a table called test with a column called test, where all the values are 'test'.

If I use delete test where test = 'test' on 40 million rows, the transaction log grows to tens of gigabytes. If I use the above to batch the deletes into, say, 1 million rows at a time, the log stays at a manageable size.

souLTower
Starting Member

39 Posts

Posted - 2009-03-02 : 10:24:05
Thanks, now I get it.

God Bless
   
