
 All Forums
 SQL Server 2000 Forums
 SQL Server Development (2000)
 Logging file filling up too quickly


M.E.
Aged Yak Warrior

539 Posts

Posted - 2002-07-04 : 11:44:32
Here's the query I'm running...

INSERT INTO Resource_data_profile
SELECT admin.Resource_Entity_Id,
       admin.Resource_Data_Seq,
       txt.Resource_Entity_Type,
       txt.Resource_Data_Source,
       admin.Resource_Data_Type,
       txt.Interpretation_Method,
       txt.Applicable_Strat_Name_Set,
       admin.Applicable_Strat_Unit,
       txt.Interpretation_Remarks,
       CONVERT(smalldatetime, txt.New_Interpretation_Date_Time),
       txt.Interpretation_By,
       CONVERT(smalldatetime, txt.New_Revised_Date_Time),
       txt.Revised_By,
       txt.Well_UWI,
       txt.Oil_Sands_Hole_Id,
       txt.Resource_Study_Id,
       txt.Coal_Hole_Id,
       txt.Excavation_Approval_Number,
       txt.Field_Station_Id,
       txt.UWI
FROM resource_data_seq_administration admin
INNER JOIN resource_data_profile_txt txt
    ON  admin.Resource_entity_id    = txt.Resource_entity_id
    AND admin.Resource_data_type    = txt.Resource_data_type
    AND admin.Applicable_Strat_Unit = txt.Applicable_Strat_Unit
GO


The two tables being joined:
resource_data_seq_administration
resource_data_profile_txt

If these tables had a small number of entries it would be fine, but unfortunately each table has 850k entries. The error I get after the insert has run for nearly 55 minutes:

Server: Msg 9002, Level 17, State 2, Line 1
The log file for database 'Corporate_Development' is full. Back up the transaction log for the database to free up some log space.


This is a one-time table load; after that, few, if any, changes will be made to the table. Any ideas for what might be the best solution to this problem?
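For reference, the remedy the error message itself suggests looks like this in T-SQL (the backup path is an assumption; WITH TRUNCATE_ONLY frees log space without writing a backup file, which may be acceptable for a one-time load where the log chain doesn't matter):

```sql
-- Report log size and percent used for every database
DBCC SQLPERF(LOGSPACE)
GO
-- Back up the transaction log to free up log space (path is hypothetical)
BACKUP LOG Corporate_Development TO DISK = 'D:\Backups\Corporate_Development_log.bak'
GO
-- Or, if the log backup isn't needed, truncate without backing up
-- (this breaks the log backup chain - take a full backup afterwards)
BACKUP LOG Corporate_Development WITH TRUNCATE_ONLY
GO
```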

-----------------------
Take my advice, I dare ya

YellowBug
Aged Yak Warrior

616 Posts

Posted - 2002-07-04 : 11:59:52
You can try using bcp:
bcp the result of the SELECT statement out to a native-format file, and
then bcp that file back into the table.

From BOL 2000:
Logged and Minimally Logged Bulk Copy Operations
quote:
When using the full recovery model, all row-insert operations performed by bcp are logged in the transaction log. For large data loads, this can cause the transaction log to fill rapidly. To help prevent the transaction log from running out of space, a minimally logged bulk copy can be performed if all of these conditions are met:

The recovery model is simple or bulk-logged.
The target table is not being replicated.
The target table does not have any triggers.
The target table has either 0 rows or no indexes.
The TABLOCK hint is specified. For more information, see Controlling the Locking Behavior.
Any bulk copy into an instance of Microsoft® SQL Server™ that does not meet these conditions is logged.

Before doing bulk copy operations, it is recommended that you set the recovery model to bulk-logged if you usually use full recovery. This will prevent the bulk copy operations from using excessive log space and possibly filling the log. However, even with bulk-logged recovery, some transaction log space will be used. You may want to create transaction log backups during the bulk copy operation to free up transaction log space.

When bulk copying a large number of rows into a table with indexes, it can be faster to drop all the indexes, perform the bulk copy, and re-create the indexes. For more information, see Optimizing Bulk Copy Performance.

Note Although data insertions are not logged in the transaction log when a minimally logged bulk copy is performed, SQL Server still logs extent allocations each time a new extent is allocated to the table.
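The recovery-model switch recommended above can be done in SQL Server 2000 syntax like this (database name taken from the error message; a sketch, to be run around the load):

```sql
-- Switch to bulk-logged recovery for the duration of the load
ALTER DATABASE Corporate_Development SET RECOVERY BULK_LOGGED
GO

-- ...run the bulk load here...

-- Restore full recovery, then take a log backup to restart the log chain
ALTER DATABASE Corporate_Development SET RECOVERY FULL
GO
```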




If you can't achieve a minimally logged operation, adjust the batch size down and select the SIMPLE recovery model for the duration of the operation. Another option is to use the -F and -L options to pull the rows in one block at a time.
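The bcp round trip might look like this from the command line, assuming the join from the original post has been wrapped in a view named v_resource_profile (the view name and server name are hypothetical; -T uses a trusted connection):

```
rem Export the view's rows to a native-format (-n) file
bcp "Corporate_Development.dbo.v_resource_profile" out profile.dat -n -S MYSERVER -T

rem Reload in 10,000-row batches (-b) with a table lock (-h TABLOCK)
rem so the load qualifies for minimal logging
bcp "Corporate_Development.dbo.Resource_data_profile" in profile.dat -n -b 10000 -h "TABLOCK" -S MYSERVER -T

rem -F and -L restrict which rows of the file are loaded, e.g. the first 100,000:
bcp "Corporate_Development.dbo.Resource_data_profile" in profile.dat -n -F 1 -L 100000 -S MYSERVER -T
```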


Edited by - YellowBug on 07/04/2002 12:17:54

AndrewMurphy
Master Smack Fu Yak Hacker

2916 Posts

Posted - 2002-07-04 : 12:07:55
A couple of suggestions...


1. Increase the disk space available to your database transaction log.


2. Break the insert down into multiple smaller datasets, by qualifying the SELECT with some increasing value which you can control.


i.e. instead of one massive 850k insert, have 85 x 10k inserts, with transaction commit statements wrapped around each. That way each batch will not need as large a transaction log. It also means you can break the problem up into small sets without the worry that one failing record at the very end will roll everything back (failing one record from the end of an 850k insert will mean the other 849,999 records require rolling back... with a corresponding rollback time!!)


Is there any sequence/ascending key field in the input data that could be changed in the SELECT statement between runs?
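The batching idea above can be sketched in T-SQL, under the assumption that Resource_Data_Seq is a roughly evenly distributed numeric key (which is exactly what the closing question asks; batch size is illustrative):

```sql
DECLARE @lo int, @hi int, @step int
SELECT @lo = MIN(Resource_Data_Seq), @hi = MAX(Resource_Data_Seq)
FROM resource_data_seq_administration
SET @step = 10000   -- rows per committed batch; tune to the log size

WHILE @lo <= @hi
BEGIN
    BEGIN TRANSACTION
    INSERT INTO Resource_data_profile
    SELECT admin.Resource_Entity_Id, admin.Resource_Data_Seq, txt.Resource_Entity_Type
           -- ...remaining columns exactly as in the original SELECT...
    FROM resource_data_seq_administration admin
    INNER JOIN resource_data_profile_txt txt
        ON  admin.Resource_entity_id    = txt.Resource_entity_id
        AND admin.Resource_data_type    = txt.Resource_data_type
        AND admin.Applicable_Strat_Unit = txt.Applicable_Strat_Unit
    WHERE admin.Resource_Data_Seq BETWEEN @lo AND @lo + @step - 1
    COMMIT TRANSACTION

    -- Under SIMPLE recovery, a checkpoint lets the committed log space be reused
    CHECKPOINT
    SET @lo = @lo + @step
END
```

Each batch commits independently, so a failure only rolls back the current batch, not the whole 850k-row load.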


M.E.
Aged Yak Warrior

539 Posts

Posted - 2002-07-04 : 12:08:08
Heh, so kinda an export-and-then-reimport type deal... Interesting, YellowBug - I'll look into that.

-----------------------
Take my advice, I dare ya

M.E.
Aged Yak Warrior

539 Posts

Posted - 2002-07-04 : 12:12:10
I could split it into 3 separate inserts using WHERE clauses... "with transaction commit statements wrapped around each" is the key bit there. Thanks fer the input.

-----------------------
Take my advice, I dare ya