Please start any new threads on our new site at https://forums.sqlteam.com. We've got lots of great SQL Server experts to answer whatever question you can come up with.


anuradhay
Starting Member

41 Posts

Posted - 2004-10-19 : 01:20:19
I have a set of statements like this:

Case 1:

set rowcount 500

while (1 = 1)
begin
    delete report
    where ent_id in (select f.ent_id
                     from report f (nolock), queue q, dept d (nolock), branch b (nolock)
                     where f.ent_id = q.ent_id
                       and q.dept_id = d.dept_id
                       and d.branch_id = b.branch_id
                       and b.branch_code <> '721')

    if @@rowcount = 0
        break
end

This selects 500 rows at a time, deletes them, then selects the next 500 and deletes, and so on. The proc takes 1 hour to execute this way.
Case 2:

Doing it like this instead takes just 3 minutes:

delete report
where ent_id in (select f.ent_id
                 from report f (nolock), queue q, dept d (nolock), branch b (nolock)
                 where f.ent_id = q.ent_id
                   and q.dept_id = d.dept_id
                   and d.branch_id = b.branch_id
                   and b.branch_code <> '721')

This deletes all the records in one shot - no 500-row batches and no WHILE loop required.

Which one is advisable? I was told that if I follow case 2 I will run out of log space - is that so? And if I run DBCC DBREINDEX, will it improve performance even if I keep case 1?
Please help.

Kristen
Test

22859 Posts

Posted - 2004-10-19 : 01:45:00
500 is way too small, IMO, for SQL Server. We tend to do housekeeping type thingies in batches of around 50,000 rows.

Usually we adjust the SET ROWCOUNT @BatchSize as the process runs, according to the elapsed time - add/remove 50% if it is more than 5 seconds outside the target time, add/remove 25% if more than 2 seconds adrift. (Usually we allow a 10 second target operation time to prevent other things timing out)

We also put a WAITFOR DELAY '00:00:05' in the loop - so that there is a 5 second gap between each iteration for everything else to have-a-go.
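Putting those pieces together, a rough sketch of that kind of loop might look like this (the variable names, thresholds, and the report/queue/dept/branch tables from the question are illustrative, not a production proc):

```sql
declare @BatchSize int, @StartTime datetime, @Elapsed int

set @BatchSize = 50000

while (1 = 1)
begin
    set rowcount @BatchSize
    set @StartTime = getdate()

    delete report
    where ent_id in (select f.ent_id
                     from report f (nolock), queue q, dept d (nolock), branch b (nolock)
                     where f.ent_id = q.ent_id
                       and q.dept_id = d.dept_id
                       and d.branch_id = b.branch_id
                       and b.branch_code <> '721')

    -- Must test @@rowcount immediately after the DELETE, before any SET
    if @@rowcount = 0
        break

    -- Adjust batch size towards a 10 second target per batch
    set @Elapsed = datediff(second, @StartTime, getdate())
    if @Elapsed > 15
        set @BatchSize = @BatchSize / 2          -- more than 5s over target: remove 50%
    else if @Elapsed > 12
        set @BatchSize = @BatchSize * 3 / 4      -- more than 2s over target: remove 25%
    else if @Elapsed < 5
        set @BatchSize = @BatchSize * 3 / 2      -- more than 5s under target: add 50%
    else if @Elapsed < 8
        set @BatchSize = @BatchSize * 5 / 4      -- more than 2s under target: add 25%

    -- Give everything else a 5 second gap to have-a-go
    waitfor delay '00:00:05'
end

set rowcount 0   -- always reset, or later statements stay capped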

The more you delete, the more log you will use! If you have the SIMPLE recovery model then I think each batch will re-use the log space. If you have the FULL recovery model then the next transaction log backup will let log space be reused from the beginning - so you'd need to be backing up every few minutes for that to make a difference.
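As a sketch of what that means in practice (the database name and backup file path here are hypothetical examples, not from the question):

```sql
-- FULL recovery: back up the log between delete batches so the space
-- from committed batches can be reused instead of growing the log file.
backup log ReportDB to disk = 'D:\Backups\ReportDB_log.bak'

-- SIMPLE recovery: no log backup is possible; the log truncates on
-- checkpoint, so each committed batch's space becomes reusable anyway.
checkpoint
```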

Scheduling and deleting a few each hour might be another approach.

Kristen