
 Some information about deadlocks


bjornh
Yak Posting Veteran

87 Posts

Posted - 2003-09-27 : 15:26:02
Hi there,

I was searching for some information about deadlocks... because my system is producing 2 or 3 deadlocks a day, and I think that is too much, especially since I want to grow...

Now, i found this information from NR:

"Always poor design.
You cannot totally eliminate deadlocks but can easily design to minimise them.

Common cause - trying to run reports from an on-line updatable database."
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=20561&SearchTerms=deadlock

Now, this is exactly what my system is: an online updatable system from which I run reports/multiple queries...

But this is done all the time on the internet, isn't it? So how do you avoid this problem with an updatable database while running real-time reports from it? (I'm also trying to isolate the deadlock problem, but that is something I have to do myself; if you have a tip, you're always welcome :))

grz.
Bjorn

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2003-09-28 : 00:13:51
Have you looked at any of the information in SQL Server Books Online about avoiding deadlocks? It's well documented in there. Just go to deadlocks in BOL in the index, then to avoiding.

Tara

p2bl
Yak Posting Veteran

54 Posts

Posted - 2003-09-28 : 06:26:34
I ran into the same problem as you before.
You need to generate a lot of reports, don't you? And these reports don't need to be updated immediately whenever the data is modified, do they? I think you can create some snapshot tables for your reports, and these snapshot tables can be updated via a SQL Server job or by using "publish and subscribe" (replication).
BTW, as BOL says, if you have deadlocks there must be some trap in your SQL batches.
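A minimal sketch of the snapshot idea, using hypothetical table and column names; the batch below could be scheduled as a SQL Server Agent job so that reports read only the snapshot table and never touch the live log:

```sql
-- Refresh a reporting snapshot in one transaction (hypothetical names).
-- Scheduled via a SQL Server Agent job; reports query ReportPageviews only.
BEGIN TRANSACTION

TRUNCATE TABLE ReportPageviews

INSERT INTO ReportPageviews (PageId, ViewDate, ViewCount)
SELECT PageId, CONVERT(char(8), ViewTime, 112), COUNT(*)
FROM Pageviews
GROUP BY PageId, CONVERT(char(8), ViewTime, 112)

COMMIT TRANSACTION
```

The reports then take locks only on ReportPageviews, so they cannot deadlock against inserts into the live Pageviews table.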

========================
look!

bjornh
Yak Posting Veteran

87 Posts

Posted - 2003-09-28 : 07:49:35
I've read "Inside SQL Server 2000" and some info in the books... The only problem is, I find it pretty difficult to see whether something applies to my system. Deadlocks are one of the hardest parts, I think (and with them, database design, of course...)

@p2bl:
One of the things that makes us 'better' than the rest is that we have real-time reports... so I don't want to throw this away right away (but I think everyone else does this to prevent deadlocks, so maybe it is the only option).

One thing I have been thinking about: is it a good idea to resend the statement if it has been chosen as a deadlock victim, with a delay of 1 second or something? Or will this slow my database down too much?

Bjorn

robvolk
Most Valuable Yak

15732 Posts

Posted - 2003-09-28 : 08:27:48
For the reports side, you do have one other option...using WITH (NOLOCK) on the tables in your SELECT statements. You do however need to thoroughly examine the consequences as NOLOCK can provide incorrect (uncommitted) data. This can be a bigger problem in a really busy transactional database, unless you know that all transactions commit all the time.

If none of these are problems, though, NOLOCK should pretty much avoid deadlocks wherever a report is involved. It doesn't help UPDATE/INSERT/DELETE, however, and should NEVER be used in a non-report process.
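As a sketch with hypothetical table names, the hint goes on every table in the report's SELECT:

```sql
-- Report query reading without taking shared locks.
-- NOLOCK may return uncommitted (dirty) rows, so use for reports only.
SELECT p.PageName, COUNT(*) AS Views
FROM Pageviews v WITH (NOLOCK)
JOIN Pages p WITH (NOLOCK) ON p.PageId = v.PageId
GROUP BY p.PageName
```

Since the SELECT acquires no shared locks, it cannot be one party in a lock cycle with the writers.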

nr
SQLTeam MVY

12543 Posts

Posted - 2003-09-29 : 04:11:23
I would be concerned about using NOLOCK, as it can be worse than just reading uncommitted data.
It can give contradictory results for aggregates and do interesting things if a rollback is going on.

If you have a transactional system then you should be cautious about running reports off the same database. The requirements are usually very different. It's usually better to extract the data to a separate server and run the reports from there.
Even if you have something simple which works at the moment it will be likely to grow and cause problems. Any query with an aggregate is likely to cause deadlocks.

>> But this is done all the time on the internet, isn't it?
Not on a heavily updated system. To have a chance of working, aggregate data should be maintained for the report so that the report does not have to compute it.
E.g., for this site, where the number of posts is displayed for each user, I would assume that the count is held in a table and updated on each post, and that it is not calculated each time a user displays a thread. That would be very slow and cause deadlocks.
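A sketch of such a maintained aggregate, assuming hypothetical Posts and Users tables: an insert trigger keeps a PostCount column current, so displaying a thread never has to count the Posts table.

```sql
-- Maintain Users.PostCount incrementally instead of running COUNT(*)
-- against Posts at read time (hypothetical table and column names).
CREATE TRIGGER trPostsInsert ON Posts
FOR INSERT
AS
UPDATE u
SET u.PostCount = u.PostCount + i.NewPosts
FROM Users u
JOIN (SELECT UserId, COUNT(*) AS NewPosts
      FROM inserted
      GROUP BY UserId) i ON i.UserId = u.UserId
```

The grouped subquery on the inserted pseudo-table keeps the trigger correct for multi-row inserts as well.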

==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.

bjornh
Yak Posting Veteran

87 Posts

Posted - 2003-09-30 : 14:44:35
OK, thanks again for all the advice... The system is for website statistics, so NOLOCK could be useful, but I think I should consider a database model where (not all of) the data is real-time, but refreshed at an interval of a day or so. But running a query to copy data from table to table (or database to database) will be a long-running query, and so a chance of deadlocks/long waiting times when registering a visitor. Or will a "publish and subscribe" system avoid the shared locks, and thereby the waiting times/deadlocks?

X002548
Not Just a Number

15586 Posts

Posted - 2003-09-30 : 14:52:15
DUMP and RESTORE, or detaching and attaching the database, will be the fastest way...

How many tables? How big is the db?

You might want to compress it before shipping it....unless you're going to carve out another instance...



Brett

8-)

SELECT @@POST FROM Brain ORDER BY NewId()

That's correct! It's an AlphaNumeric!

bjornh
Yak Posting Veteran

87 Posts

Posted - 2003-09-30 : 15:03:18
Wow, we're just starting; we're now looking at 60 MB of total space, and we don't have a server of our own because the investment risk is too high at the moment. First we want more customers... There are two tables for the log (one for visitors and one for pageviews), one for page names, and about ten for browser names, OS, etc.

But dump and restore sounds so "cruel"... is this really meant to be done every day? (I'll do some research about it in BOL.)

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2003-09-30 : 15:05:05
How about replication instead of backup/restore or attach/detach?

Tara

bjornh
Yak Posting Veteran

87 Posts

Posted - 2003-10-14 : 15:10:05
I could open a new topic, but it's still about the same subject, so I'm placing it here...

The last few weeks I noticed that my system was a bit slow, so I decided to use DBCC DBREINDEX, but then the trouble started... The system wasn't really faster, but I thought I'd look at it another day. The next morning I woke up and saw that the system was producing deadlocks all over the place... Instead of the 20 deadlocks a day, I received 500 (!). How is this possible? And even better, what can I do about it (not the suggestions above, but a short-term solution)?

thanks

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2003-10-14 : 15:48:24
Does that mean that you aren't going to do any of the suggestions mentioned above? Some very good suggestions were given. If you don't want to do any of them, then you'll either have to upgrade your hardware so that the queries can complete faster without having a deadlock situation occur or you'll need to rewrite your queries. Do you know which queries are the ones that are causing the problem? Trace flags will help with this:

sqlservr -d"F:\MSSQL\Data\master.mdf" -T1204

Look up trace flags in BOL for more information.
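If restarting the service with startup parameters is inconvenient, the same trace flag can, as a sketch, be enabled at runtime, together with 3605 to route the deadlock output to the error log:

```sql
-- Enable deadlock trace output (1204) globally without a restart;
-- 3605 sends the trace output to the SQL Server error log.
DBCC TRACEON (1204, 3605, -1)

-- Verify which trace flags are currently active.
DBCC TRACESTATUS (-1)
```

The error log will then record the resources and statements involved in each deadlock, which identifies the problem queries.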


Tara

bjornh
Yak Posting Veteran

87 Posts

Posted - 2003-10-14 : 16:27:45
Oh yes, I'm going to do the things above, but not at the moment, because I don't have the time right now :@@
But could you give me an answer about the 500-versus-20 deadlock ratio after DBREINDEX?

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2003-10-14 : 16:32:19
I don't have an answer. It could be that performance was so bad on the system that queries were taking considerably longer. The deadlock victim query doesn't become a victim until it is done waiting for the lock to disappear.

I would not assume that DBCC DBREINDEX is directly related to the deadlock count increasing. It could just be a coincidence. Aren't you running DBCC DBREINDEX regularly anyway? If not, you should consider doing this as your indexes are probably heavily fragmented if your system experiences a lot of changes.

Tara

bjornh
Yak Posting Veteran

87 Posts

Posted - 2003-10-14 : 16:46:04
Is it possible that the problem was created because I used a different fill factor?

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2003-10-14 : 16:54:44
Did you set your fillfactor too high? If so, inserts might be slower. On systems with heavy inserts, a lower fillfactor should be considered. How low depends on testing.
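As a sketch with a hypothetical table name, the fill factor can be set explicitly when rebuilding; a value of 80, for instance, leaves 20% of each page free for new rows:

```sql
-- Rebuild all indexes on the table with an explicit fill factor of 80.
-- Passing '' as the index name means every index on the table.
DBCC DBREINDEX ('Pageviews', '', 80)
```

A lower fill factor reduces page splits on insert-heavy tables, at the cost of more pages to scan for reports.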

Tara

bjornh
Yak Posting Veteran

87 Posts

Posted - 2003-10-14 : 17:02:30
I thought I used the right one, but maybe that's the error... Thanks for your time.
   
