Author | Topic
maximus_vj
Yak Posting Veteran
88 Posts |
Posted - 2005-02-15 : 16:11:22
|
Hello,

I have a table which has 1,443,004 rows and is heavily used. By heavily used I mean the web application deletes rows from this table and adds rows to it very frequently.

This is my table schema:

Link_ID int PK (identity)
Order_ID int
Link_value int

Link_ID is the PK and identity column.

Whenever an end user saves some information, we first delete all the links for an Order_ID like this: "DELETE FROM tbl_Link WHERE Order_ID = 123", and then insert several links for that Order_ID.

The problem is that as the number of users increases, I am getting deadlocks on this table. Because the first delete can remove many rows, it takes page locks, which ultimately results in deadlocks.

What is the best way to overcome this problem? Do I have to create a nonclustered index on Order_ID on this table? Since there are a lot of inserts and deletes on this table, I am not sure that is such a good idea.

Please advise.

Thanks in advance,
maximus
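A minimal sketch of the pattern described above. The table columns are taken from the post; the literal values and exact statement shapes are assumptions for illustration:

```sql
-- Table as described in the post
CREATE TABLE tbl_Link (
    Link_ID    int IDENTITY(1,1) PRIMARY KEY,
    Order_ID   int NOT NULL,
    Link_value int NOT NULL
);

-- Assumed shape of the two calls the app makes on each save:
-- 1) wipe the existing links for the order
DELETE FROM tbl_Link WHERE Order_ID = 123;

-- 2) re-insert the current set of links
INSERT INTO tbl_Link (Order_ID, Link_value) VALUES (123, 456);
INSERT INTO tbl_Link (Order_ID, Link_value) VALUES (123, 789);
```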
|
|
TG
Master Smack Fu Yak Hacker
6065 Posts |
Posted - 2005-02-15 : 16:24:29
|
Your table is narrow with only a single index on it, the primary key. I would put that Order_ID index on the table. That will significantly reduce the time needed for the delete, which will, in turn, reduce your deadlocking.

Are you performing these deletes/inserts via a single call to a stored proc? If not, how are you handling the whole transaction?

Be One with the Optimizer
TG
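The suggested index might look like this (the index name is an assumption):

```sql
-- Nonclustered index to support DELETE ... WHERE Order_ID = @Order_ID
CREATE NONCLUSTERED INDEX IX_tbl_Link_Order_ID
    ON tbl_Link (Order_ID);
```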
 |
|
|
maximus_vj
Yak Posting Veteran
88 Posts |
Posted - 2005-02-15 : 16:35:19
|
Thanks for the quick response. I will try indexing on the Order_ID field.

In my application, the deletes and inserts are two separate calls to the database.

Thanks
maximus
 |
|
|
MichaelP
Jedi Yak
2489 Posts |
Posted - 2005-02-15 : 16:45:20
|
That index should help. Horizontal partitioning on Order_ID might help down the road when you have a few million more rows in that table.

Michael
<Yoda>Use the Search page you must. Find the answer you will.</Yoda>
 |
|
|
TG
Master Smack Fu Yak Hacker
6065 Posts |
Posted - 2005-02-15 : 17:13:28
|
BTW, if you're wrapping the deletes and inserts in a "remote" transaction (controlled from your app), that could be unnecessarily extending the lock time.

Be One with the Optimizer
TG
 |
|
|
maximus_vj
Yak Posting Veteran
88 Posts |
Posted - 2005-02-15 : 17:29:33
|
In my application, I am calling one SP to delete the rows and after that I am calling another SP to add rows. Do you think this will extend the lock time?

Thanks
maximus
 |
|
|
TG
Master Smack Fu Yak Hacker
6065 Posts |
Posted - 2005-02-15 : 17:47:36
|
No, because unless you are explicitly wrapping both calls in a transaction, the separate calls keep each "batch" smaller (shorter transaction duration).

You are probably also doing some sort of select on that table to present the data. If you can afford dirty reads, that will also help a lot in minimizing deadlocks, i.e. use "with (nolock)" table hints in your select:

    Select <columns> From Tbl_Link with (nolock)

Or alternatively, in your select procedure:

    set transaction isolation level read uncommitted

edit: that way your inserts and deletes won't be waiting on the selects to unlock rows.

Be One with the Optimizer
TG
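A sketch of the isolation-level variant applied to a read procedure. The procedure and parameter names are assumptions; only the table and column names come from the thread:

```sql
CREATE PROCEDURE usp_GetLinks
    @Order_ID int
AS
BEGIN
    -- Allow dirty reads so this select never blocks
    -- the delete/insert cycle on tbl_Link
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    SELECT Link_ID, Order_ID, Link_value
    FROM tbl_Link
    WHERE Order_ID = @Order_ID;
END
```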
 |
|
|
Lumbago
Norsk Yak Master
3271 Posts |
Posted - 2005-02-16 : 03:21:58
|
You should look at the query plans for the selects, inserts and deletes you are doing... TG's index suggestion should probably give you a boost. And I'm a firm believer in the WITH (NOLOCK) locking hint; I use it across my entire database and it has made my life sooo much easier. I'd say that 90% of the selects in any given database don't need locking, and you'd save it a lot of work using this hint. It will also increase the performance of your selects quite a bit...

--Lumbago
"Real programmers don't document, if it was hard to write it should be hard to understand"
 |
|
|
maximus_vj
Yak Posting Veteran
88 Posts |
Posted - 2005-02-18 : 01:25:24
|
Hello,

Thanks for the tips. I have a few more questions.

1)
quote: I would put that Order_ID index on the table. That will significantly reduce the time needed for the delete which will, in turn, reduce your deadlocking.

But the selectivity of the Order_ID column on this table is only 7%. Do you still think it will help if I create a nonclustered index on this column?

2) What is the best way to insert a few records when we have to keep the transactions short?
quote: No, because unless you are explicitly wrapping both calls in a transaction, the separate calls keep each "batch" smaller (shorter transaction duration).

A) Building all the insert statements in the application and sending them once to the server (without wrapping in a transaction), OR
B) Looping through all the links in the application, sending one insert statement per iteration to SQL Server. (This will cause unnecessary network traffic.)

Thanks
maximus
 |
|
|
TG
Master Smack Fu Yak Hacker
6065 Posts |
Posted - 2005-02-18 : 11:21:57
|
1) Having a table with 1.5 mil rows where you are joining or selecting based on a column with no index is almost always wrong. If you don't create the index, SQL Server has no choice to make: it will always do a table scan. SQL Server uses multiple factors (NOT JUST selectivity) to determine whether or not to use the index for a given statement, and the choice it makes is usually correct. Create the index and give SQL Server the freedom to choose. (patriotic music playing in background)

2) That really depends on the nature of your application. However, given the choices you laid out above, shipping multiple SQL statements in one communication package versus making several round trips makes no difference to SQL Server. Those separate statements will be separate implicit transactions either way (which is good for contention issues). So go for choice A).

Be One with the Optimizer
TG
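Choice A could look like the following: one batch sent in a single round trip, where each statement commits as its own implicit transaction (the values are made up for illustration):

```sql
-- One communication package, several statements;
-- without an explicit BEGIN TRANSACTION, each statement
-- is its own short implicit transaction
DELETE FROM tbl_Link WHERE Order_ID = 123;
INSERT INTO tbl_Link (Order_ID, Link_value) VALUES (123, 10);
INSERT INTO tbl_Link (Order_ID, Link_value) VALUES (123, 20);
INSERT INTO tbl_Link (Order_ID, Link_value) VALUES (123, 30);
```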
 |
|
|
maximus_vj
Yak Posting Veteran
88 Posts |
Posted - 2005-02-18 : 11:28:46
|
Thanks again TG!!

maximus
 |
|
|
Kristen
Test
22859 Posts |
Posted - 2005-02-19 : 04:17:03
|
I reckon you'd be better off changing the way this works.

Presumably when you delete all the order items and then create new items, some/many of the items are unchanged. I think you'd be much faster to do a selective delete.

I would transfer the information to a stored procedure as, say, a comma-delimited list: all items, all columns for each item. Then I would have the sproc "split" the list into a temporary table. Then I would:

BEGIN TRANSACTION
Delete from table where not in NewList
Update table where in list, but different
Insert into table where "new"
COMMIT (or ROLLBACK if error detected)

"Delete everything and re-insert" puts strain on indexes and log. If you can UPDATE in-situ (and IF the record does not get bigger such that it has to be relocated), particularly IF some/all of the index entries do NOT change, that will dramatically reduce the amount of work; thus the task will run quicker, and thus there will be fewer opportunities for deadlocks etc.

Kristen
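A sketch of this selective approach, assuming the split step has already loaded the delimited list into a temp table (the temp-table name and @Order_ID parameter are assumptions):

```sql
-- Assume #NewLinks has been populated by splitting the
-- comma-delimited list passed to the stored procedure
CREATE TABLE #NewLinks (Link_value int NOT NULL);

BEGIN TRANSACTION;

-- Delete rows that are no longer in the new list
DELETE FROM tbl_Link
WHERE Order_ID = @Order_ID
  AND Link_value NOT IN (SELECT Link_value FROM #NewLinks);

-- Insert rows that are genuinely new
INSERT INTO tbl_Link (Order_ID, Link_value)
SELECT @Order_ID, n.Link_value
FROM #NewLinks n
WHERE NOT EXISTS (SELECT 1 FROM tbl_Link t
                  WHERE t.Order_ID = @Order_ID
                    AND t.Link_value = n.Link_value);

COMMIT TRANSACTION;
```

Note that with the two-column schema from the first post there is nothing left to UPDATE in place, so the middle step drops out; with more columns per link you would add an UPDATE for rows present in both sets but different.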
 |
|
|