
SQL Server Development (2000)
Lock problems

vasu
Starting Member

1 Post

Posted - 2002-02-28 : 10:37:56
Hi all,

I am facing a problem with what must be a common scenario.

Once an hour I need to refresh/update/upload information into my database through bulk inserts.

The bulk insert itself doesn't take much time, but the stored proc that merges the difference data takes around 2 minutes to complete. During those 2 minutes, online users running read-only queries against the same data get timed out because of the long transaction.

What is the workaround for this scenario? Any clues?

nr
SQLTeam MVY

12543 Posts

Posted - 2002-02-28 : 10:44:54
Split the merge into batches so that each batch is very quick and allow queries between batches.

You could use NOLOCK on the reads, but I'm not keen on that.
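For illustration, a dirty read looks something like this (just a sketch; the table and column names are borrowed from later in the thread, not from the original question):

```sql
-- Read without taking shared locks (a "dirty read").
-- Rows changed by the in-flight merge may be returned
-- before they are committed, which is the objection to it.
SELECT product_id, inv_count
FROM tblProduct WITH (NOLOCK)
WHERE product_id = 'ABC123'
```

The trade-off is that the query can return uncommitted data that might later be rolled back, which is why many people avoid it.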

==========================================
Cursors are useful if you don't know sql.
Beer is not cold and it isn't fizzy.

nizmaylo
Constraint Violating Yak Guru

258 Posts

Posted - 2002-02-28 : 17:31:11

NR,
could you please provide more details on how you would split the following transaction into batches, allowing queries to execute in between?

CREATE TABLE tblProduct(
product_id varchar(10),
availability_id smallint,
inv_count int)

CREATE TABLE tblInventory(
product_id varchar(10),
inv_count int,
date_updated DATETIME)

tblInventory is updated by getting the data from an inventory control database using DTS.

On success a stored proc runs:

CREATE proc sp_updateInvCount
AS
IF (SELECT DATEDIFF(hour, MAX(date_updated), GETDATE()) FROM tblInventory) = 1
BEGIN
BEGIN TRAN
UPDATE tp SET inv_count=inv.inv_count
FROM tblProduct tp WITH(UPDLOCK) LEFT JOIN tblInventory inv
ON tp.product_id=inv.product_id
IF @@ERROR > 0 GOTO on_error
UPDATE tblProduct WITH(UPDLOCK) SET availability_id=CASE inv_count
WHEN 0 THEN ..
....
END
IF @@ERROR > 0 GOTO on_error

COMMIT TRAN
RETURN (0)
END
on_error:
ROLLBACK TRAN
RETURN (2)


How would I rewrite this proc using your suggestion?

helena

robvolk
Most Valuable Yak

15732 Posts

Posted - 2002-02-28 : 17:40:19
Use SET ROWCOUNT to limit the number of rows affected by each execution of the SP, loop each execution while testing @@ROWCOUNT, and continue until it reaches zero:

CREATE proc sp_updateInvCount
AS
DECLARE @rows int
SELECT @rows=1
IF (SELECT DATEDIFF(hour, MAX(date_updated), GETDATE()) FROM tblInventory) = 1
BEGIN
SET ROWCOUNT 5000 --change this number to match the batch size you want
BEGIN TRAN

WHILE @rows>0
BEGIN
UPDATE tp SET inv_count=inv.inv_count
FROM tblProduct tp WITH(UPDLOCK) LEFT JOIN tblInventory inv
ON tp.product_id=inv.product_id
WHERE tp.inv_count <> inv.inv_count --without this the same rows are picked up every pass and the loop never ends
SELECT @rows=@@ROWCOUNT --capture straight after the UPDATE, before anything else changes @@ROWCOUNT
IF @@ERROR > 0 GOTO on_error
UPDATE tblProduct WITH(UPDLOCK) SET availability_id=CASE inv_count
WHEN 0 THEN ..
....
END
IF @@ERROR > 0 GOTO on_error
END

SET ROWCOUNT 0 --turn the limit back off
COMMIT TRAN
RETURN (0)
END
on_error:
ROLLBACK TRAN
RETURN (2)



nizmaylo
Constraint Violating Yak Guru

258 Posts

Posted - 2002-02-28 : 17:44:18
Rob,
don't you think that BEGIN TRAN and COMMIT TRAN should be moved inside the WHILE loop?

I believe it's going to hold locks until the entire transaction is committed.

helena

robvolk
Most Valuable Yak

15732 Posts

Posted - 2002-02-28 : 17:49:36
That's up to you; I didn't want to presume you wanted individual batches committed.


nizmaylo
Constraint Violating Yak Guru

258 Posts

Posted - 2002-02-28 : 18:07:10
Rob,
let me rephrase the question. I don't think the situation will improve unless the individual batches are committed.

Is that right?

helena

robvolk
Most Valuable Yak

15732 Posts

Posted - 2002-02-28 : 18:53:17
Books Online indicates that UPDLOCK is held until the end of the transaction, so yes, you'd move the BEGIN TRAN...COMMIT TRAN inside the loop. My only concern is that several batches could be committed and then one could fail, compromising the integrity of the overall transaction.

You might be able to use SAVE TRANSACTION to specify a point to roll back to, but I don't think it will work with nested COMMIT statements. The example in Books Online doesn't illustrate this; you'd have to experiment.

It might make more sense to use ROWLOCK, either by itself or in conjunction with UPDLOCK. That way batches can be updated relatively quickly, and without UPDLOCK the locks are released when the UPDATE statement completes.
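As a sketch, a single row-locked batch might look like this (the WHERE predicate is invented for illustration; it also keeps each batch from reprocessing rows already in sync):

```sql
-- Update at most 5000 rows taking row locks only; without UPDLOCK
-- the locks are released as soon as the statement completes.
SET ROWCOUNT 5000
UPDATE tp
SET inv_count = inv.inv_count
FROM tblProduct tp WITH (ROWLOCK)
JOIN tblInventory inv ON tp.product_id = inv.product_id
WHERE tp.inv_count <> inv.inv_count   -- only touch rows that changed
SET ROWCOUNT 0
```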


nizmaylo
Constraint Violating Yak Guru

258 Posts

Posted - 2002-02-28 : 19:19:01
Rob,
sorry for posting more and more questions.
I forgot to mention that tblProduct has about 20 triggers, which I MUST disable in order to get a fast update. Also, I didn't mention that not all my transactions are just read-only queries. If a customer is placing an order, a stored proc might attempt to alter the inventory count in tblProduct.

helena

robvolk
Most Valuable Yak

15732 Posts

Posted - 2002-02-28 : 20:02:19
OK, I gotta ask:

1. WHY do you have 20 triggers? What do they do?
2. How many rows are usually affected by this SP? How many rows in the whole table?
3. How long does this procedure run right now? How much does it need to be shortened?
4. How many concurrent users for this procedure? How many more are hitting the table?

I'm not sure what you mean by read-only, but unless you've written your SELECT statements to use TABLOCK/TABLOCKX/PAGLOCK or another lock hint, or a REPEATABLE READ or SERIALIZABLE transaction isolation level, they won't have an adverse effect on UPDATE ... WITH (ROWLOCK). If you are going to specify a lock hint in the UPDATE statement, start at ROWLOCK and move your way up (PAGLOCK, TABLOCK, UPDLOCK) only if the performance is worse than before.
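If you want to see what's actually being taken, you can watch the locks from a second connection while the merge runs (a sketch; which SPID is yours depends on your session):

```sql
-- From another connection while the update is in flight:
EXEC sp_lock   -- lists current locks: resource type (RID/KEY/PAG/TAB) and mode
EXEC sp_who    -- the blk column shows which SPID is blocking which
```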

From what you've described you can't risk having batches of rows being committed if the entire process doesn't commit, so the BEGIN TRAN...COMMIT has to come out of the loop.

I really can't guess any further as to how to speed this up unless you can post your code and table specifics.


nr
SQLTeam MVY

12543 Posts

Posted - 2002-02-28 : 20:09:59
Disclaimer - I've had a few beers. Bad luck you people who have to work.

>> I forgot to mention that tblProduct has about 20 triggers, which I MUST disable in order to get a fast update

doh!
I'm wondering about adding triggers to cursors.
If the triggers aren't always necessary then maybe they shouldn't be there.
Suspect that having 20 triggers means that someone is getting carried away.

You are merging tblInventory with the production data (I'll ignore DTS; good architecture, though, that the import is separate from the merge).

It makes things a lot easier if you add an identity field (id) to tblInventory, and import through a view so that this field is not visible to the import.

What does the DATEDIFF do?
Not sure why you are doing two updates. With that many triggers I would expect the availability to be kept in sync with the on-line updates. See beers above.

have another table

create table LastID (id int)
insert LastID select 0

CREATE proc sp_updateInvCount
AS

declare @count int
set @count = 1000
declare @id int
select @id = id from LastID
while exists (select * from tblInventory where id > @id)
begin
select @id = min(id) from tblInventory where id > @id
begin tran

UPDATE tp
SET inv_count = inv.inv_count ,
availability_id = CASE inv_count when ... end
FROM tblProduct tp JOIN tblInventory inv
ON tp.product_id = inv.product_id
where inv.id between @id and @id + @count
if @@error <> 0 goto on_error
update LastID set id = @id + @count
if @@error <> 0 goto on_error
commit tran
set @id = @id + @count
waitfor delay '00:00:05'
END

RETURN (0)

on_error:
ROLLBACK TRAN
RETURN (2)



==========================================
Cursors are useful if you don't know sql.
Beer is not cold and it isn't fizzy.

nizmaylo
Constraint Violating Yak Guru

258 Posts

Posted - 2002-03-02 : 17:33:53
Rob and Nr,
Thank you both for helping.
I'll answer questions first.
1. I have 20 triggers because tblProduct has a lot of fields which describe the product, and almost every update triggers some action behind the scenes. There are 20 of them for the simple reason that it's hard to read the source code if all of it is contained in one or two triggers.

2. DTS and the merge are separated. I have two jobs: one does the DTS import, the other checks the success of the first job and starts the update of tblProduct only if DTS was successful. DATEDIFF is used for double-checking. Since I update once a day, the date difference between getdate() and the date when tblInventory was updated should be 0.

3. I need 2 updates since in reality it's more complicated than in my post. There are different types of products, and I need to use temp tables to store groups and update their status using UDFs.

4. "Read-only queries" was my not-very-smart way of saying SELECT queries (stored procs using SELECT).


5. Usually there are from 200 to 600 people trying to run SELECT on tblProduct.

6. There are about 12,000 rows in the table and the entire update takes about 2 minutes (10 to 25 queries get timed out). It's not a big deal, but I want it to be perfect.

And finally, this is what I'm going to do.
I'll get the count of rows in the table from sysindexes, set ROWCOUNT to that count, change UPDLOCK to ROWLOCK, and put WAITFOR DELAY at the end of the WHILE loop.
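In outline, that plan might look like this (a sketch; the batch size, delay, and change predicate are illustrative, and the sysindexes row count is only approximate):

```sql
-- Approximate row count without scanning the table (SQL Server 2000)
DECLARE @rows int
SELECT @rows = rows FROM sysindexes
WHERE id = OBJECT_ID('tblProduct') AND indid < 2

SET ROWCOUNT 5000                 -- batch size, tune to taste
WHILE @rows > 0
BEGIN
    UPDATE tp
    SET inv_count = inv.inv_count
    FROM tblProduct tp WITH (ROWLOCK)
    JOIN tblInventory inv ON tp.product_id = inv.product_id
    WHERE tp.inv_count <> inv.inv_count   -- skip rows already in sync
    SET @rows = @@ROWCOUNT
    WAITFOR DELAY '00:00:02'      -- give waiting SELECTs a chance between batches
END
SET ROWCOUNT 0
```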

I'll post the results after I'm finished.

NR, I like my beer at room temperature, not cold. How do I add that quote to my signature? I'm sure it'll entertain some people here.

Thanks again.


helena
   
