Drammy
Starting Member
13 Posts
Posted - 2009-02-19 : 05:52:55

Hi all,

I have a solution that takes an OpenEdge database, extracts all the data, loads it into SQL Server (2000 - 2008) and then transforms the data into a reporting database. The process is designed as a once-a-day batch process. It is a "take everything" approach, and every table is dropped immediately before being recreated. As the database is regenerated every night, I have the recovery model set to simple.

I find that the transaction log becomes very large after every run. Is there any way to prevent anything being written to the transaction log? I simply do not need it...

Thanks,
Drammy
robvolk
Most Valuable Yak
15732 Posts
Posted - 2009-02-19 : 11:49:01

Well, instead of dropping tables, why not truncate them (assuming the structures and names are static)? Dropping and re-creating a table will generate more log entries.

You should also consider generating the data that goes into the final tables as bcp output, which you can then import using bcp or BULK INSERT. This lets you take advantage of the bulk-loading optimizations and will reduce transaction logging.
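A minimal T-SQL sketch of both suggestions, assuming a hypothetical staging table dbo.Customer and an extract file at C:\etl\customer.dat (both names invented for illustration):

```sql
-- Truncate rather than drop: TRUNCATE TABLE logs only the page
-- deallocations, not every deleted row.
TRUNCATE TABLE dbo.Customer;

-- BULK INSERT with TABLOCK qualifies for minimal logging under the
-- simple recovery model (roughly: the target is a heap, or an empty
-- table, and TABLOCK is specified).
BULK INSERT dbo.Customer
FROM 'C:\etl\customer.dat'
WITH (
    FIELDTERMINATOR = '\t',
    ROWTERMINATOR   = '\n',
    TABLOCK,             -- table lock enables the bulk-load optimizations
    BATCHSIZE = 50000    -- commit every n rows so log space can be reused
);
```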
Drammy
Starting Member
13 Posts
Posted - 2009-02-19 : 12:22:19

Thanks for the replies guys.

Unfortunately the schema is not static and could change at every execution, hence the drop and recreate.

As for the bcp output/input idea, this is interesting. On the face of it I would expect a performance hit from writing out to the filesystem and then importing. Is this the case, or does the reduced logging cancel the degradation out?
robvolk
Most Valuable Yak
15732 Posts
Posted - 2009-02-19 : 13:35:13

It is logged, but minimally: it only logs page allocations, not each row of data the way a regular INSERT does. You're still paying about the same in I/O, whether it's to an external file or the transaction log.
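One way to see the difference for yourself is to count log records around a load. This is a rough diagnostic only, using the undocumented fn_dblog function (SQL Server 2005+; on 2000 the same function is invoked as ::fn_dblog):

```sql
-- Count log records before and after a load to compare logging volume.
-- fn_dblog is undocumented, so treat this purely as a diagnostic and
-- never make it part of the batch itself.
DECLARE @before int;
SELECT @before = COUNT(*) FROM fn_dblog(NULL, NULL);

-- ... run the fully logged INSERT or the minimally logged BULK INSERT ...

SELECT COUNT(*) - @before AS log_records_generated
FROM fn_dblog(NULL, NULL);
```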
Drammy
Starting Member
13 Posts
Posted - 2009-02-19 : 18:42:20

OK, so probably not something I really want to do. There are a number of steps involved in the transformation.

Is there any weight in truncating the log file prior to/after each step? What is the quickest way of doing this?
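For reference, the usual mechanics look roughly like this (the database and logical log file names are invented; note that BACKUP LOG ... WITH TRUNCATE_ONLY exists on SQL Server 2000/2005 but was removed in 2008):

```sql
-- Under the simple recovery model a CHECKPOINT marks the inactive part
-- of the log as reusable; no explicit truncation is normally needed.
CHECKPOINT;

-- On SQL Server 2000/2005 only (removed in 2008):
-- BACKUP LOG MyReportingDB WITH TRUNCATE_ONLY;

-- Truncation frees space *inside* the file; to shrink the physical file
-- back down, use DBCC SHRINKFILE with the log's logical file name.
DBCC SHRINKFILE (MyReportingDB_log, 100);  -- target size in MB
```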
Drammy
Starting Member
13 Posts
Posted - 2009-02-19 : 18:52:01

This solution is installed at a number of locations, and I only seem to get complaints from SQL 2000 users. The last complaint was that the transaction log had inflated to 36GB.

It's almost as if the recovery model is ignored...
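A quick sanity check that works on SQL Server 2000 as well as later versions (the database name here is illustrative):

```sql
-- Confirm the database really is in the simple recovery model.
SELECT DATABASEPROPERTYEX('MyReportingDB', 'Recovery') AS recovery_model;

-- And see how full the log actually is for every database on the instance.
DBCC SQLPERF(LOGSPACE);
```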
tkizer
Almighty SQL Goddess
38200 Posts
Posted - 2009-02-19 : 19:10:23

The recovery model isn't ignored; that's just how it works. The transaction log will always be used, regardless of recovery model. Whether the transactions remain stored in it after they complete depends on the recovery model. The only way to keep the file size down is to do your transactions in smaller batches. This way the tlog will not need to expand much, if at all, between batches.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/
Subscribe to my blog
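A sketch of the batching idea, kept SQL Server 2000-compatible via SET ROWCOUNT (table and column names are invented; on 2005+ you would use INSERT ... SELECT TOP (@n) instead):

```sql
-- Copy rows in 50,000-row transactions instead of one huge one, so the
-- inactive log can be reused between batches under the simple model.
SET ROWCOUNT 50000;

WHILE 1 = 1
BEGIN
    INSERT INTO dbo.ReportTable (Id, Amount)
    SELECT s.Id, s.Amount
    FROM dbo.StagingTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.ReportTable AS r WHERE r.Id = s.Id);

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to copy

    CHECKPOINT;                -- let the inactive log space be reused
END;

SET ROWCOUNT 0;                -- always reset ROWCOUNT afterwards
```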
sodeep
Master Smack Fu Yak Hacker
7174 Posts
Posted - 2009-02-19 : 19:17:25

See this for why the transaction log grows even in the simple recovery model:
http://msdn.microsoft.com/en-us/library/ms189573.aspx
Drammy
Starting Member
13 Posts
Posted - 2009-02-19 : 19:21:41

Thanks guys. The problem is the database itself isn't even a tenth of that size; 36GB is from only one night's execution.
Drammy
Starting Member
13 Posts
Posted - 2009-02-19 : 19:53:02

Thanks again
X002548
Not Just a Number
15586 Posts
Posted - 2009-02-19 : 20:06:00

Dude,

In any case, you need to supply some details.... And extracting the data then bcping it in is the fastest way possible:

Drop your db
Rebuild the db tables w/out constraints
bcp the data in
add the constraints

Give us some examples of what you are doing, otherwise EVERYONE is guessing.

Brett

8-)

Hint: Want your questions answered fast? Follow the direction in this link
http://weblogs.sqlteam.com/brettk/archive/2005/05/25/5276.aspx

Add yourself!
http://www.frappr.com/sqlteam
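Expressed in T-SQL rather than command-line bcp, that sequence looks roughly like this (table, file, and constraint names are invented for illustration):

```sql
-- 1. Create the table without constraints or indexes (fastest load path).
CREATE TABLE dbo.Orders (
    OrderId    int           NOT NULL,
    CustomerId int           NOT NULL,
    Amount     decimal(12,2) NOT NULL
);

-- 2. Bulk-load the extract; no constraints to check, and with TABLOCK
--    under the simple recovery model the load is minimally logged.
BULK INSERT dbo.Orders
FROM 'C:\etl\orders.dat'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', TABLOCK);

-- 3. Add the constraints afterwards, once the data is in.
ALTER TABLE dbo.Orders
    ADD CONSTRAINT PK_Orders PRIMARY KEY (OrderId);
```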
sodeep
Master Smack Fu Yak Hacker
7174 Posts
Posted - 2009-02-19 : 20:08:52

quote: Originally posted by Drammy

Thanks guys. The problem is the database itself isn't even a tenth of that size; 36GB is from only one night's execution.

36 GB in 1 night (OH LORD). You must be running it in 1 transaction.
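To check whether the nightly load really is running as one long transaction, these built-in diagnostics work on SQL Server 2000 and later:

```sql
-- Show the oldest active transaction in the current database; a giant
-- single-transaction load will appear here while it runs.
DBCC OPENTRAN;

-- For a session-level view, sysprocesses.open_tran shows how many
-- transactions each connection currently has open.
SELECT spid, open_tran, status, cmd
FROM master..sysprocesses
WHERE open_tran > 0;
```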
X002548
Not Just a Number
15586 Posts
Posted - 2009-02-19 : 20:35:00

quote: Originally posted by sodeep

36 GB in 1 night (OH LORD). You must be running it in 1 transaction.

Deep...very deep.

Anyone bcp a 36GB file into a table with no constraints? How long does it take?

Brett

8-)

Hint: Want your questions answered fast? Follow the direction in this link
http://weblogs.sqlteam.com/brettk/archive/2005/05/25/5276.aspx

Add yourself!
http://www.frappr.com/sqlteam
sodeep
Master Smack Fu Yak Hacker
7174 Posts
Posted - 2009-02-19 : 21:09:52

quote: Originally posted by X002548

Anyone bcp a 36GB file into a table with no constraints? How long does it take?

Maybe ages.