
 Table locks on inserts


nwalter
Starting Member

39 Posts

Posted - 2010-08-12 : 01:38:06
I have an application that does an incredible amount of logging to a SQL Server database. One of the problems we have run across is purging these logs: because there are millions of rows, it can take hours to purge a month's worth of data. Even though the purge process uses row locking, while it runs our application repeatedly fails trying to insert new records, because each insert seems to request an exclusive table lock, which it cannot get while the purge process holds its row locks.

My question is: how can we keep these inserts from taking a whole-table lock for a single row? Is it because we are using an identity column as the primary key? How can we make this table basically a dumping ground, optimized for fast inserts, without sacrificing the proper indexing needed to search it and purge it efficiently?
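For reference, a query along these lines (against sys.dm_tran_locks; dbo.AppLog stands in for our real log table) shows which sessions hold or wait on table-level locks while the purge and an insert are both running. If the purge's row locks have escalated to a table lock, it shows up here as a granted X lock on the object with the insert's IX request waiting:

-- Object-level (table) locks: who holds them, who is waiting
SELECT request_session_id,
       resource_type,
       request_mode,    -- X = exclusive, IX = intent exclusive
       request_status   -- GRANT vs WAIT shows who is blocked
FROM   sys.dm_tran_locks
WHERE  resource_type = 'OBJECT'
  AND  resource_associated_entity_id = OBJECT_ID('dbo.AppLog')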

Thanks for any help!

Lumbago
Norsk Yak Master

3271 Posts

Posted - 2010-08-12 : 05:24:04
Well, if we're talking very frequent logging, the best way to go would be some kind of partitioning. Removing old partitions happens in the blink of an eye and is excellent for this kind of thing. Unfortunately partitioning is an Enterprise Edition feature, but if you're on Enterprise you can start here: http://technet.microsoft.com/en-us/library/ms188730.aspx

If you're not on Enterprise there are still quite good ways of using partitioning techniques, but it's not as nice:
http://thefirstsql.com/2010/06/18/partitioning-the-old-fashion-way/
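To give a flavour of the sliding-window idea (all names here are illustrative, and dbo.AppLog_Purge has to be an empty table with the same structure on the same filegroup):

-- Monthly RANGE RIGHT partition function and scheme
CREATE PARTITION FUNCTION pfLogMonthly (datetime)
AS RANGE RIGHT FOR VALUES ('20100601', '20100701', '20100801')

CREATE PARTITION SCHEME psLogMonthly
AS PARTITION pfLogMonthly ALL TO ([PRIMARY])

-- With the log table created ON psLogMonthly(LogDate), purging the
-- oldest month is a metadata-only operation instead of a huge DELETE:
ALTER TABLE dbo.AppLog SWITCH PARTITION 1 TO dbo.AppLog_Purge
TRUNCATE TABLE dbo.AppLog_Purge
ALTER PARTITION FUNCTION pfLogMonthly() MERGE RANGE ('20100601')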

- Lumbago

My blog (yes, I have a blog now! just not that much content yet)
-> www.thefirstsql.com

nwalter
Starting Member

39 Posts

Posted - 2010-08-12 : 12:56:44
Interesting thought on table partitioning; unfortunately this is just a Standard Edition server. The other method would probably work but seems unnecessarily complicated.

I'm really curious to understand why SQL Server takes a whole-table lock to insert one record at the end of the clustered index.

Kristen
Test

22859 Posts

Posted - 2010-08-12 : 15:29:59
We do the following (a T-SQL sketch of the loop follows the list):

Insert PK IDs of all rows to be deleted into TEMP table (either #TEMP or @TEMP)

Determine how many rows to delete per batch - say 1,000 or 10,000

DELETE from the main table using JOIN from TEMP Table on PK for a range that is the batch size

Evaluate how long the operation took: if longer than the permitted threshold, reduce the batch size by 50%; if shorter, increase it by 10%

Use WAITFOR to wait for 5 seconds to allow other processes to run

LOOP if more rows to process.

Every 2 minutes trigger a TLog backup (normally they only happen once every 15 minutes) to reduce the chance of the LDF file being extended.
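In sketch form (table and column names are made up, and the adaptive batch sizing and TLog backups are left out for brevity):

SET NOCOUNT ON

DECLARE @CutoffDate datetime, @BatchSize int, @From int, @Rows int
SET @CutoffDate = DATEADD(month, -1, GETDATE()) -- purge anything older than a month
SET @BatchSize  = 10000 -- starting batch size, tuned up/down per run
SET @From       = 1
SET @Rows       = 1

-- 1) Capture the PK IDs of every row to purge, numbered for batching
SELECT ID, ROW_NUMBER() OVER (ORDER BY ID) AS RowNum
INTO   #ToDelete
FROM   dbo.AppLog
WHERE  LogDate < @CutoffDate

-- 2) Delete in PK ranges, pausing between batches
WHILE @Rows > 0
BEGIN
    DELETE L
    FROM   dbo.AppLog AS L
           JOIN #ToDelete AS D ON D.ID = L.ID
    WHERE  D.RowNum BETWEEN @From AND @From + @BatchSize - 1

    SET @Rows = @@ROWCOUNT
    SET @From = @From + @BatchSize

    WAITFOR DELAY '00:00:05' -- let other sessions get their locks in
END

DROP TABLE #ToDelete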

We delete between 2 million and 5 million rows a day (once a day) using this process, and I'm not aware of any problems with lock blocking. I've not had any complaints, and there is nothing in our timeout log that suggests it is causing a problem.

Make sure your indexes are rebuilt or defragged and the statistics are recently updated.
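For example (table name illustrative):

ALTER INDEX ALL ON dbo.AppLog REBUILD -- or REORGANIZE to defrag with less blocking
UPDATE STATISTICS dbo.AppLog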

IME doing

SET ROWCOUNT 10000
DELETE MyTable WHERE MyDate < @CutoffDate

will be very slow: the time to find the rows where "MyDate < @CutoffDate" is significant on each iteration of the loop, whereas using a JOIN on the PK ID for the next batch of 10,000 rows (as in the sketch above) seems to be much less intrusive.
   
