
Better handling of the transaction log file


Drammy
Starting Member

13 Posts

Posted - 2009-02-19 : 05:52:55
Hi all,

I have a solution that takes an OpenEdge database and extracts all the data, loads it into SQL Server (2000 - 2008) and then transforms the data into a reporting database.

The process is designed as a once-a-day batch process. It is a "take everything" approach: each table is dropped immediately before being recreated.

As the database is regenerated every night, I have the recovery model set to simple.

I find that the transaction log becomes very large after every run. Is there any way to prevent anything being written to the transaction log? I simply do not need it...


Thanks,
Drammy

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2009-02-19 : 11:00:12
There is no way to prevent anything from being written into the transaction log. That's just how SQL Server works.

You could do your process in smaller batches, but that's about all that can be done to manage the size of the tlog file.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

robvolk
Most Valuable Yak

15732 Posts

Posted - 2009-02-19 : 11:49:01
Well, instead of dropping tables, why not truncate them (assuming the structures and names are static)? Dropping and re-creating a table will generate more log entries.

You should also consider writing the data that goes into the final tables out as bcp files, which you can then import using bcp or BULK INSERT. This lets you take advantage of the bulk-loading optimizations and will reduce transaction logging.
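Something along these lines (an untested sketch; the table name, file path, and terminators are made up, so adjust them to your schema):

-- TRUNCATE deallocates whole pages and logs only the deallocations,
-- so it writes far less to the log than dropping and re-creating
TRUNCATE TABLE dbo.StagingOrders

-- Bulk load from a flat file; the TABLOCK hint is one of the
-- requirements for minimal logging under simple recovery
BULK INSERT dbo.StagingOrders
FROM 'C:\extract\orders.dat'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', TABLOCK)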

Drammy
Starting Member

13 Posts

Posted - 2009-02-19 : 12:22:19
Thanks for the replies guys.

Unfortunately the schema is not static and could change at every execution, hence the drop and recreate.

As for the bcp output/input idea, this is interesting. On the face of it I would expect a performance hit from writing out to the file system and then importing. Is that the case, or does the fact it isn't logged cancel the hit out?

robvolk
Most Valuable Yak

15732 Posts

Posted - 2009-02-19 : 13:35:13
It is logged, but minimally. It only logs page allocations, not each row of data like a regular INSERT. You're still paying about the same in I/O, whether it's to an external file or the transaction log.
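If you want to see the effect for yourself, you can bracket the load with a log-space check (a rough sketch; note that minimal logging also requires the TABLOCK hint and, broadly speaking, a heap or an empty target table):

DBCC SQLPERF(LOGSPACE)  -- note "Log Space Used (%)" for your database

-- ... run the bulk load here ...

DBCC SQLPERF(LOGSPACE)  -- should barely move for a minimally logged load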

Drammy
Starting Member

13 Posts

Posted - 2009-02-19 : 18:42:20
OK, so probably not something I really want to do. There are a number of steps involved in the transformation.

Is there any value in truncating the log file before or after each step? What is the quickest way of doing that?

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2009-02-19 : 18:44:32
Truncating the transaction log will not help, as you are already using the simple recovery model. Under simple recovery, the log space used by completed transactions, whether committed or rolled back, is already marked reusable at each checkpoint; truncation frees space inside the file but does not shrink the file itself.
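If the goal is just to reclaim disk space after the file has already ballooned, a one-off shrink does that (sketch; the database name here is made up, and the logical file name is whatever sp_helpfile reports). Shrinking every night is counterproductive, though, since the file will just grow again:

USE YourReportingDB
EXEC sp_helpfile  -- find the logical name of the log file
DBCC SHRINKFILE (YourReportingDB_Log, 1024)  -- target size is in MB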

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2009-02-19 : 18:45:07
How big does the transaction log file get after the process has completed?

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

Drammy
Starting Member

13 Posts

Posted - 2009-02-19 : 18:52:01
This solution is installed at a number of locations. I only seem to get complaints from SQL 2000 users. The last complaint was that the transaction log had inflated to 36GB.

It's almost as if the recovery model is being ignored...

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2009-02-19 : 19:10:23
The recovery model isn't ignored; that's just how it works. The transaction log is always written to, regardless of recovery model. What the recovery model controls is whether the log records are kept after the transaction completes.

The only way to keep the file size down is to do your transactions in smaller batches. This way the tlog will not need to expand much, if at all, between batches.
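As a rough illustration (the table and column names are invented; SET ROWCOUNT still governs DML on SQL 2000 through 2008, which matters since you target 2000):

SET ROWCOUNT 50000                 -- cap each INSERT at 50,000 rows
WHILE 1 = 1
BEGIN
    INSERT INTO dbo.dest (id, col1)
    SELECT s.id, s.col1
    FROM dbo.src s
    WHERE NOT EXISTS (SELECT * FROM dbo.dest d WHERE d.id = s.id)

    IF @@ROWCOUNT = 0 BREAK        -- nothing left to copy
    CHECKPOINT                     -- lets simple recovery reuse log space
END
SET ROWCOUNT 0                     -- always reset

Each loop iteration is its own transaction, so no single transaction ever holds more than 50,000 rows' worth of log.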

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2009-02-19 : 19:11:45
Also, a 36GB file isn't too big of a file.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

sodeep
Master Smack Fu Yak Hacker

7174 Posts

Posted - 2009-02-19 : 19:17:25
See this for why the transaction log grows even in the simple recovery model:
http://msdn.microsoft.com/en-us/library/ms189573.aspx
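A quick way to check what is keeping the log from being reused (the database name here is a placeholder):

DBCC OPENTRAN('YourReportingDB')  -- reports the oldest active transaction, if any
-- On SQL 2005 and later you can also ask the engine directly:
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'YourReportingDB'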

Drammy
Starting Member

13 Posts

Posted - 2009-02-19 : 19:21:41
Thanks guys. The problem is that the database itself isn't even a tenth of that size; 36GB comes from a single night's execution.

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2009-02-19 : 19:52:31
The only answer we can provide is for the process to be rewritten to do the modifications in small batches.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

Drammy
Starting Member

13 Posts

Posted - 2009-02-19 : 19:53:02
Thanks again

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2009-02-19 : 19:59:11
You're welcome.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

X002548
Not Just a Number

15586 Posts

Posted - 2009-02-19 : 20:03:12
quote:
Originally posted by tkizer

Also, a 36GB file isn't too big of a file.

Tara Kizer



Alrighty then



Brett

8-)

Hint: Want your questions answered fast? Follow the direction in this link
http://weblogs.sqlteam.com/brettk/archive/2005/05/25/5276.aspx

Add yourself!
http://www.frappr.com/sqlteam




X002548
Not Just a Number

15586 Posts

Posted - 2009-02-19 : 20:06:00
Dude,

In any case, you need to supply some details....

And extracting the data and then bcping it back in is the fastest way possible (rough sketch below):

Drop your db
Rebuild the db tables w/out constraints
bcp the data in
add the constraints
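From a command prompt, roughly like this (server, database, and table names are invented; -n keeps native format, -T uses a trusted connection):

REM export in native format
bcp StagingDB.dbo.src out C:\extract\src.dat -n -S MYSERVER -T

REM load the constraint-free table; TABLOCK enables minimal logging and
REM -b commits every 50,000 rows so the log can be reused between batches
bcp ReportDB.dbo.dest in C:\extract\src.dat -n -S MYSERVER -T -h "TABLOCK" -b 50000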

Give us some examples of what you are doing; otherwise, EVERYONE is guessing



Brett

8-)

Hint: Want your questions answered fast? Follow the direction in this link
http://weblogs.sqlteam.com/brettk/archive/2005/05/25/5276.aspx

Add yourself!
http://www.frappr.com/sqlteam




sodeep
Master Smack Fu Yak Hacker

7174 Posts

Posted - 2009-02-19 : 20:08:52
quote:
Originally posted by Drammy

Thanks guys. The problem is that the database itself isn't even a tenth of that size; 36GB comes from a single night's execution.


36GB in one night (OH LORD). You must be running it as one transaction.

X002548
Not Just a Number

15586 Posts

Posted - 2009-02-19 : 20:35:00
quote:
Originally posted by sodeep

36GB in one night (OH LORD). You must be running it as one transaction.



Deep...very deep

Anyone bcp a 36GB file into a table with no constraints? How long does it take?



Brett

8-)

Hint: Want your questions answered fast? Follow the direction in this link
http://weblogs.sqlteam.com/brettk/archive/2005/05/25/5276.aspx

Add yourself!
http://www.frappr.com/sqlteam




sodeep
Master Smack Fu Yak Hacker

7174 Posts

Posted - 2009-02-19 : 21:09:52
quote:
Originally posted by X002548

Deep...very deep

Anyone bcp a 36GB file into a table with no constraints? How long does it take?




Maybe Ages