Consensus on NOLOCK (Transact-SQL, SQL Server 2000)

Kristen
Test

22859 Posts

Posted - 2006-06-05 : 23:51:49
Lumbago posted this in http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=67237

SELECT *
FROM table1 a WITH (NOLOCK)
INNER JOIN table2 b WITH (NOLOCK)
ON a.ID = b.ID

I've studiously avoided NOLOCK, but I now wonder if that's a good idea!

What are the Pros and Cons pls?

Kristen

eyechart
Master Smack Fu Yak Hacker

3575 Posts

Posted - 2006-06-06 : 00:03:06
why not try using ROWLOCK instead for your updates/inserts?
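Something along these lines (a minimal sketch with a hypothetical Orders table; ROWLOCK only asks the engine for row-level rather than page- or table-level locks, so concurrent readers only collide on the specific rows being changed):

declare @OrderID int
set @OrderID = 42

-- Request row-granularity locks on the write so readers
-- are only blocked on the row actually being changed
UPDATE Orders WITH (ROWLOCK)
SET Status = 'Shipped'
WHERE OrderID = @OrderID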

-ec
Go to Top of Page

cmdr_skywalker
Posting Yak Master

159 Posts

Posted - 2006-06-06 : 02:15:24
It depends on what you do with the result and how much risk you/the customer can take. NOLOCK is equivalent to the READ UNCOMMITTED isolation level. It is possible to read uncommitted or dirty data; values in the data can be changed, and rows can appear or disappear in the data set before the end of the transaction.
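For reference, the two forms below should read the same way (a sketch with a hypothetical Orders table): the hint applies per table, while the SET statement applies to everything the session touches until it is changed back.

-- Table-level hint: dirty reads against this one table
SELECT OrderID, Status
FROM Orders WITH (NOLOCK)

-- Session-level equivalent: dirty reads everywhere until reset
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT OrderID, Status
FROM Orders
SET TRANSACTION ISOLATION LEVEL READ COMMITTED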

May the Almighty God bless us all!
Go to Top of Page

Kristen
Test

22859 Posts

Posted - 2006-06-06 : 02:19:28
"values in the data can be changed and rows can appear or disappear"

That's always stopped me from using it. But I wonder whether, for performance reasons, it might sometimes be acceptable ...

Kristen
Go to Top of Page

eyechart
Master Smack Fu Yak Hacker

3575 Posts

Posted - 2006-06-06 : 03:21:57
We have had to use NOLOCK on occasion, though usually because of a poor decision in the data model, or code that we had difficulty changing.

I would certainly explore other alternatives to remedying your performance problems before settling on using NOLOCK.


-ec
Go to Top of Page

Kristen
Test

22859 Posts

Posted - 2006-06-06 : 04:20:04
Thanks ec, that confirms what I've always felt - it should be an option of last resort, and if it's being used as a tool of choice then look at why that is!

Kristen
Go to Top of Page

mmarovic
Aged Yak Warrior

518 Posts

Posted - 2006-06-06 : 05:32:03
I worked for a big job-seeking company that had thousands of page hits per second. Using the NOLOCK hint was a rule there. IMO it was the right decision. For their business model it didn't matter if a seeker occasionally missed a position before trying the search again. Occasionally it might happen (but it was really rare) that the position was not available any more.
Go to Top of Page

Kristen
Test

22859 Posts

Posted - 2006-06-06 : 06:16:33
Position could become unavailable between display and the user clicking it to drill-down too ...

Some of our sites are pretty busy and I've never had these sorts of performance problems, but I wonder if we could dramatically speed some stuff up with (NOLOCK). There are probably some reads on busy tables where the data isn't going to change "underfoot" etc. that might benefit.

Decisions Decisions ...
Go to Top of Page

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2006-06-06 : 06:42:09
Has anyone actually done any testing that proves there is a dramatic performance improvement under normal conditions with a well designed data model and application? Has anyone seen any articles about this with more than anecdotal evidence, where someone ran controlled tests with documented numbers?

Things like this are often spread around as accepted truth, without any real support. From the link Kristen posted: "The performance-increase on an intense system can be dramatic." Nothing concrete like, "We tested this and found our queries ran in half the time."

Perhaps the dramatic increases in performance only happen on systems where the application was poorly implemented, and NOLOCK is a catch-all to hide the effects of poor transaction control.

Call me a skeptic, but it is always good to have a solid basis for your actions.

CODO ERGO SUM
Go to Top of Page

eyechart
Master Smack Fu Yak Hacker

3575 Posts

Posted - 2006-06-06 : 12:47:08
quote:
Originally posted by mmarovic

I worked for a big job-seeking company that had thousands of page hits per second. Using the NOLOCK hint was a rule there. IMO it was the right decision. For their business model it didn't matter if a seeker occasionally missed a position before trying the search again. Occasionally it might happen (but it was really rare) that the position was not available any more.

1000s of page hits per second is not the norm. [understatement]That is a busy system[/understatement], and it is really the type of environment that SQL Server has had difficulty handling in the past. Using NOLOCK in that environment is no doubt a must to achieve acceptable performance.

On a system that busy, if the data were a little more critical, it might be worthwhile to look at other RDBMS solutions like Oracle or even DB2.

-ec
Go to Top of Page

eyechart
Master Smack Fu Yak Hacker

3575 Posts

Posted - 2006-06-06 : 12:59:15
quote:
Originally posted by Michael Valentine Jones

Has anyone actually done any testing that proves there is a dramatic performance improvement under normal conditions with a well designed data model and application? Has anyone seen any articles about this with more than anecdotal evidence, where someone ran controlled tests with documented numbers?

Things like this are often spread around as accepted truth, without any real support. From the link Kristen posted: "The performance-increase on an intense system can be dramatic." Nothing concrete like, "We tested this and found our queries ran in half the time."

Perhaps the dramatic increases in performance only happen on systems where the application was poorly implemented, and NOLOCK is a catch-all to hide the effects of poor transaction control.

Call me a skeptic, but it is always good to have a solid basis for your actions.

CODO ERGO SUM

If your RDBMS doesn't have to deal with locking then your system is going to run faster. I don't think there is any debate there.

We could always ask mmarovic to disable NOLOCK on his system to see what happens. I suggest barricading your office door before you run this test. That should hold off your management long enough for you to run a few tests and post the results on sqlteam for us.

-ec
Go to Top of Page

nosepicker
Constraint Violating Yak Guru

366 Posts

Posted - 2006-06-06 : 13:44:20
Using NOLOCK can be dangerous. I have a nightly job that does a simple INSERT ... SELECT. I used to use NOLOCK on the SELECT, but about once a month it would actually duplicate some of the records. It was selecting from a relatively high-transaction table. Bottom line: NOLOCK can produce unpredictable results at times, so be careful when you use it.
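For illustration, a sketch of that kind of job (hypothetical tables). One plausible cause of the duplicates is that NOLOCK permits the scan to re-read rows moved by a concurrent page split; dropping the hint trades a little blocking for a consistent read.

-- Nightly load: with NOLOCK the SELECT can occasionally read a row
-- twice (or miss one) if concurrent writes split pages mid-scan
INSERT INTO OrderArchive (OrderID, Status)
SELECT OrderID, Status
FROM Orders WITH (NOLOCK)

-- Safer: accept the brief shared locks and get a consistent set
INSERT INTO OrderArchive (OrderID, Status)
SELECT OrderID, Status
FROM Orders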
Go to Top of Page

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2006-06-06 : 14:23:22
quote:
Originally posted by eyechart
If your RDBMS doesn't have to deal with locking then your system is going to run faster. I don't think there is any debate there...


I'm willing to debate it.

The question of whether the RDBMS is locking or not locking is moot; it is doing locking and that can’t be turned off in SQL Server. The question is when will NOLOCK in queries make any noticeable improvement in performance.

What I am saying is that most of what I have heard about this falls in the "everyone knows that" or "old wives tale" category.

If you have a system that is experiencing heavy blocking due to long-running transactions that update a lot of rows, that is one situation. If you have an application that is mostly short inserts of a few rows, that is another situation. It could be that there is very little benefit to using NOLOCK when there are few long-running transactions.

mmarovic only said that using NOLOCK was the rule there; he didn't make any statements about what happened before and after this was implemented, what the nature of the transactions was, or whether he was even there before it was implemented. That's hardly the kind of information you should make an informed decision on.

I would like to see a lot more in the way of solid information based on testing under controlled conditions before implementing something like always using NOLOCK. I’m willing to believe NOLOCK can make performance improvements in some situations. I’m just unwilling to believe it’s some dramatic cure-all without seeing hard evidence.

CODO ERGO SUM
Go to Top of Page

Kristen
Test

22859 Posts

Posted - 2006-06-06 : 14:59:20
"I'm willing to debate it."

I'm willing to watch!

OK, so how am I going to test this?

I reckon we have a couple of logging tables which are very busy - well, for us at least! - a couple of million inserts a day. There are two facets to this:

1) Each insert has a corresponding update - it's a closed loop, so nothing else can possibly be doing the update

2) Each insert does a Lookup to convert a text name parameter to an INT. The lookup table is only ever added to.

There's an opportunity to add a NOLOCK there, and presumably I could devise a way to time the effort - any suggestions?

I could try this on our automated test suite. It's only single-user, so no contention as such, but a single run is about 4 hours, so it should be measurable - and if it all goes Bang! I can just restore and No Harm Done.

Kristen
Go to Top of Page

eyechart
Master Smack Fu Yak Hacker

3575 Posts

Posted - 2006-06-06 : 16:09:57
this is how I understand this to work:

Normally, a SELECT on a table will result in SQL Server placing a shared lock on that table. A shared lock allows other processes to read from the table, but does not allow any inserts/updates/deletes to occur. Inserts/updates/deletes put an exclusive lock in place; the exclusive lock cannot be granted if a shared lock already exists.

When you use the NOLOCK hint, SQL Server does not issue a shared lock. Therefore, inserts/updates/deletes can occur. It also does not honor exclusive locks, which are put in place by already-running inserts/updates/deletes. This is why you can read uncommitted data with a NOLOCK hint.

Now, since it is a hint, SQL Server sometimes doesn't go along with that and it issues locks anyway. So, I guess YMMV.
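An easy way to watch this happen (a sketch against a hypothetical Orders table; run the two halves in separate Query Analyzer windows):

-- Window 1: take an exclusive lock and hold it
BEGIN TRAN
UPDATE Orders SET Status = 'Pending' WHERE OrderID = 1
-- (leave the transaction open)

-- Window 2: blocks until window 1 commits or rolls back
SELECT Status FROM Orders WHERE OrderID = 1

-- Window 2: returns immediately, showing the uncommitted value
SELECT Status FROM Orders WITH (NOLOCK) WHERE OrderID = 1

-- Window 1: release the lock
ROLLBACK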

I'm up for someone putting together a benchmark to test this, but I think the outcome is obvious.

-ec
Go to Top of Page

rockmoose
SQL Natt Alfen

3279 Posts

Posted - 2006-06-06 : 16:38:58
Each lock consumes 96 bytes of memory according to BOL.
Fewer locks (NOLOCK, READ UNCOMMITTED) = less memory management overhead, more memory available for other things.
Whether this is noticeable (on a well designed system) or not ... ?
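On SQL Server 2000 you can at least watch the lock count climb while a big scan runs (a sketch; syslockinfo is the system table behind sp_lock):

-- Server-wide count of locks currently held or requested
SELECT COUNT(*) AS lock_count
FROM master.dbo.syslockinfo

-- Or the per-session detail
EXEC sp_lock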

I think that NOLOCK is fine when it is OK to have dirty reads, and it will certainly help performance in situations where SELECTs are being done against tables where UPDATEs are taking place. The reads and writes will not block and have to wait for each other.


I reckon the most common case where NOLOCK is required is when reporting (range reads) is done against production OLTP databases.


rockmoose

Go to Top of Page

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2006-06-06 : 17:55:00
quote:
Originally posted by eyechart

this is how I understand this to work:

Normally, a SELECT on a table will result in SQL Server placing a Shared lock on that table. A shared lock allows other processes to read from the table, but does not allow any inserts/updates/deletes to occur. insert/updates/deletes put an exclusive lock in place, the exclusive lock cannot be granted if a shared lock already exists.
...


From SQL Server 2000 Books Online, Locking Architecture:
"The duration of share locks used to protect reads depends on the transaction isolation levels. At the default transaction isolation level of READ COMMITTED, a share lock is held only as long as it takes to read a page. In scans, the lock is held until a lock is acquired on the next page in a scan. If the HOLDLOCK hint is specified, or the transaction isolation level is set to either REPEATABLE READ or SERIALIZABLE, the locks are held to the end of the transaction."

This seems to indicate that a shared page lock is held only long enough to read a page and acquire a lock on the next page. I think this means that it might wait for a page that is locked for an update or insert, and an update to a page would wait for the read of the page to complete.

I could see that if you have an application that locks a lot of pages for updates and takes a long time to complete, you could end up waiting; but if the updates are more OLTP-like, with fast-running updates or inserts to a few pages, there should be very few lock collisions, and those should be very brief.

This makes me suspect that an "always use NOLOCK" rule gets used to hide the results of poor database and/or application design. A better rule might be: "Always do a good job with database and/or application design. Remember NOLOCK for when you get tossed into the middle of a mess."

CODO ERGO SUM
Go to Top of Page

TG
Master Smack Fu Yak Hacker

6065 Posts

Posted - 2006-06-06 : 18:21:25
I tried to set up a little test to "prove" my anecdotal intuition:

Here is a very simple test which you guys can correct, but this is the kind of scenario in which I believe most people resort to using nolock (or the read uncommitted transaction isolation level). The situation is one where a lot of small transactions are written to the same tables that are included in some fairly intensive searches. The search performance isn't really affected, but the writes are the things that benefit.

This may just prove that a bad design can benefit from using nolock, but I think this is a common situation.


/*
Set up the test

use pubs
if object_id('tempdb..##junk') > 0
    drop table ##junk

create table ##junk (rowid int identity(1,1) primary key clustered, id int, name sysname)

insert ##junk
select a.id, a.name
from sysobjects a
cross join sysobjects b
cross join sysobjects c
*/


--in one window do the selects:
--run this and at the same time
--run the other statements in another window

--then uncomment the "with (nolock)",
--switch the other window's "values" to "noLock",
--and restart both windows

declare @i int
set @i = 1
while @i < 27
begin
    select *
    from ##junk --with (nolock)
    where name like char(96 + @i) + '%'

    set @i = @i + 1
end


/*
--in another window do the writes
set nocount on
-- if object_id('tempdb..#log') > 0
--     drop table #log
-- create table #log (seltype varchar(15), duration int)

declare @i int
       ,@st datetime
select @i = 1
      ,@st = getdate()
while @i < 501
begin
    set @st = getdate()

    insert ##junk (id, name)
    values (1, 'tg')

    insert #log (seltype, duration)
    --values ( 'nolock', datediff(millisecond, @st, getdate()) )
    values ( 'lock', datediff(millisecond, @st, getdate()) )

    set @i = @i + 1
end

--select seltype, min(duration) [min], max(duration) [max], avg(duration) [Avg], count(*) [count] from #log group by seltype
*/


output:
Write stats while selects are conducted

seltype min max Avg count
--------------- ----------- ----------- ----------- -----------
nolock 0 16 0 500
lock 0 10750 21 500


Be One with the Optimizer
TG
Go to Top of Page

nosepicker
Constraint Violating Yak Guru

366 Posts

Posted - 2006-06-06 : 18:50:59
Has anyone tried selecting with ROWLOCK? In brief tests I've done, selecting with ROWLOCK doesn't block inserts or updates that use ROWLOCK (it may be OK with inserts or updates that don't use ROWLOCK; can't remember for sure), unless of course multiple processes happen to try to lock the same row (although that shouldn't happen too often). Since I've had some bad experiences with data integrity using NOLOCK, it seems like using ROWLOCK would be more reliable. I've been trying it for a little while, but not long enough to know if data integrity issues are overcome.
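The kind of test described might look like this (a sketch with a hypothetical Orders table; note that ROWLOCK only controls lock granularity - the SELECT still takes shared locks, so it still blocks if a writer holds the same row exclusively locked):

-- Window 1: reader taking shared locks at row granularity
SELECT OrderID, Status
FROM Orders WITH (ROWLOCK)
WHERE OrderID = 42

-- Window 2: writer taking an exclusive row lock; it does not
-- collide with the reader above because it touches a different row
UPDATE Orders WITH (ROWLOCK)
SET Status = 'Shipped'
WHERE OrderID = 43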
Go to Top of Page

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2006-06-06 : 21:22:20
I made a post on the Data Corruption Forum to ask Paul Randal if he would be able to shed some light on the subject of NOLOCK.

http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=67367

CODO ERGO SUM
Go to Top of Page

Kristen
Test

22859 Posts

Posted - 2006-06-07 : 01:56:17
Edit: I think I'm losing it! If the WRITEs to this table are once-in-a-blue-moon then the whole NOLOCK thing is moot for this example, isn't it - shared locks aren't going to slow anything down, are they [although there is the cost of acquiring the lock, I suppose]?

If I'm doing a Code Lookup on a table where the rows [almost] never change, it seems like I could use NOLOCK there - that row might be being looked up concurrently.

If the row does change I either want the new value, or the old - 1 second earlier I would have got the old value anyway - so I don't really care which one I get [unless there is some complex update of multiple tables involved I suppose, but that's hard to envisage for a simple code-lookup-table].

However, what would be important is that I could never get TWO rows returned - might that be possible with a concurrent UPDATE that increased the size of the record and consequently moved it later in the [physical] table? (I suppose I could do SELECT TOP 1 * ... (NOLOCK) if that's a faint possibility)
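Something like this, presumably (a sketch with a hypothetical lookup table; the TOP 1 is purely defensive against the double-read case just described):

declare @CodeName sysname
set @CodeName = 'ACTIVE'

-- Defensive code lookup under NOLOCK
SELECT TOP 1 CodeID
FROM CodeLookup WITH (NOLOCK)
WHERE CodeName = @CodeName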

Kristen
Go to Top of Page