
 COMMIT on an INSERT statement


allend2010
Starting Member

28 Posts

Posted - 2003-08-20 : 17:16:44
Hello:

I am trying to insert a huge chunk of data (700,000 records) from a SQL Server table into DB2 via a linked server:

INSERT INTO LS..DB2SRV.TABLENAME(DB2Col1, DB2Col2...)
(
SELECT SQLCol1, SQLCol2.. FROM SQLSRVTable WHERE ...
)
GO

Our admins want us to do a COMMIT or ROLLBACK after every thousandth record due to locking and speed issues on DB2, and I am not sure where to begin. My understanding of SQL Server is that the default commit takes place after the entire transaction completes. Does anybody know a way, based on the SQL statement I provided above, to commit after every nth record? Any help you can provide would be greatly appreciated.

Thanks in Advance,
Allen D.


nr
SQLTeam MVY

12543 Posts

Posted - 2003-08-20 : 17:35:09
Any SQL statement is atomic - i.e. it has to be contained in a transaction - so for an insert there is no way to split it up other than by separating it into many INSERT statements.

You could try using DTS (I hate to say it) for the transfer and using its batch control.
Not sure how it works, as it always rolled back the whole transfer when I tested it, but give it a try.
Other than that write an application to do it.



==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2003-08-20 : 17:40:20
You could use SET ROWCOUNT so that it only inserts a certain number of records at a time. If you have an identity column or something similar, it would be pretty easy. I'll post an example in a couple of minutes.

Tara

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2003-08-20 : 17:48:23
I didn't need SET ROWCOUNT, but it could be used if coded differently.

This should work (untested though):


DECLARE @SomeNumber INT
DECLARE @MaxNumber INT

SET @SomeNumber = 1000

SELECT @MaxNumber = MAX(IdentityColumn)
FROM SQLSRVTable

WHILE @SomeNumber - 1000 < @MaxNumber
BEGIN
BEGIN TRAN

INSERT INTO LS..DB2SRV.TABLENAME(DB2Col1, DB2Col2...)
SELECT SQLCol1, SQLCol2.. FROM SQLSRVTable
WHERE IdentityColumn > @SomeNumber - 1000
AND IdentityColumn <= @SomeNumber

IF @@ERROR = 0
COMMIT TRAN
ELSE
ROLLBACK TRAN

SET @SomeNumber = @SomeNumber + 1000
END
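
For comparison, here is an untested sketch of the SET ROWCOUNT variant mentioned above. It assumes the source rows have first been copied into a hypothetical StagingTable, so each committed batch can be deleted from staging as it goes:

SET ROWCOUNT 1000

WHILE 1 = 1
BEGIN
BEGIN TRAN

-- push the next 1000 rows (lowest identity values first) across the linked server
INSERT INTO LS..DB2SRV.TABLENAME(DB2Col1, DB2Col2...)
SELECT SQLCol1, SQLCol2..
FROM StagingTable
ORDER BY IdentityColumn

IF @@ROWCOUNT = 0
BEGIN
COMMIT TRAN
BREAK -- staging table is empty, all rows copied
END

-- remove the same 1000 rows from staging
DELETE FROM StagingTable
WHERE IdentityColumn IN
(SELECT TOP 1000 IdentityColumn FROM StagingTable ORDER BY IdentityColumn)

IF @@ERROR = 0
COMMIT TRAN
ELSE
ROLLBACK TRAN
END

SET ROWCOUNT 0 -- back to unlimited

The identity-range version above avoids the extra staging copy, so it is probably the simpler of the two.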



Tara

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2003-08-20 : 17:55:27
BTW, the INSERTs will be slower overall using this method, but it might solve the locking and speed issues on DB2.

Tara

setbasedisthetruepath
Used SQL Salesman

992 Posts

Posted - 2003-08-20 : 18:02:06
Well, it's not the # of rows by themselves that makes a data set large, it's the total byte count (row count * row width). 700,000 rows may be a pittance of data or it may be exabytes.

Your DB2 admins are using an idea that doesn't quite carry over to SQL Server. The INSERT is atomic as mentioned earlier so there is no transaction to speak of, never mind committing or rolling one back. The transaction idea itself is dubious if the reason for chunking the data is to avoid locking; that's precisely what running DML in transactions does ...

DTS is the answer. You can specify a "batch size" in the data pump task to push the data in chunks. Note you do not need to run the package transactionally if you don't want to.

Jonathan

byrmol
Shed Building SQL Farmer

1591 Posts

Posted - 2003-08-20 : 19:05:18
This looks like a great candidate for BCP..

Use BCP to export your data from SQL Server and then use DB2's LOAD utility specifying the SAVECOUNT parameter for the number of rows per consistency point..

On second thought, this method would be so fast that the locking concerns would probably be null and void and you could avoid the SAVECOUNT option altogether...

EDIT: DB2 also has an IMPORT command with a COMMITCOUNT parameter.
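
For illustration, the export/load pair might look something like this. Server, database, and file names are placeholders, and the exact LOAD/IMPORT syntax depends on the DB2 version, so treat it as a sketch only:

rem export from SQL Server in character mode, comma-delimited, trusted connection
bcp MyDatabase.dbo.SQLSRVTable out sqldata.del -c -t, -S MYSQLSERVER -T

rem load into DB2 with a consistency point every 1000 rows
db2 "LOAD FROM sqldata.del OF DEL SAVECOUNT 1000 INSERT INTO TABLENAME"

rem or use IMPORT, committing every 1000 rows
db2 "IMPORT FROM sqldata.del OF DEL COMMITCOUNT 1000 INSERT INTO TABLENAME"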

DavidM

"SQL-3 is an abomination.."

allend2010
Starting Member

28 Posts

Posted - 2003-08-21 : 16:17:42
Thanks for all of your responses:)
   
