sureshot
Yak Posting Veteran
72 Posts
Posted - 2004-01-28 : 17:53:55
I've read in various places that setting the IsolationLevel property on the ADO Connection object only takes effect once you start a transaction. I'm setting the isolation level on ADODB.Connection to ReadUncommitted right after opening it.

Suppose I have a page with the query "select firstname,lastname from users where lastname>'Smith'" and it's not in a transaction. Is it going to request shared locks anyway unless I explicitly give it locking hints not to do so: "select firstname,lastname from users with (NOLOCK) where lastname>'Smith'"?

The reason I ask: I inherited a large site where many of the pages can read data that isn't committed yet and shouldn't be creating any locks, so in my connection include file I'm setting the IsolationLevel to ReadUncommitted. It certainly saves time compared to putting NOLOCKs in everywhere. We still have timeouts occurring, though, so I just want to make sure that it's actually working as expected.

Example:

Set Conn = Server.CreateObject("ADODB.Connection")
Conn.Open database_dsn, database_user, database_password
' Default is &H00000010 (16)
Conn.IsolationLevel = &H00000100 ' (256) Read Uncommitted
SQL = "select firstname,lastname from users where lastname>'Smith'"
Set RS = Conn.Execute(SQL)
...
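For reference, the session-level equivalent of ADO's ReadUncommitted can also be issued directly in T-SQL, independent of ADO's transaction semantics. A sketch of the two approaches being compared, using the table and column names from the post:

```sql
-- Session-wide: every subsequent statement on this connection reads
-- uncommitted data (dirty reads) and takes no shared locks.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT firstname, lastname
FROM users
WHERE lastname > 'Smith';

-- Per-statement alternative: the hint applies only to this table reference,
-- leaving the connection's isolation level unchanged.
SELECT firstname, lastname
FROM users WITH (NOLOCK)
WHERE lastname > 'Smith';
```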
Stoad
Freaky Yak Linguist
1983 Posts
Posted - 2004-01-29 : 04:14:01
1. All connection properties should be set before opening the connection.
2. Without "cn.BeginTrans", setting "cn.IsolationLevel" has no effect.
sureshot
Yak Posting Veteran
72 Posts
Posted - 2004-01-29 : 16:33:51
Thanks. OK, I guess it's looking like I need to put those NOLOCKs in on all the statements outside of transactions.

NOLOCK is a bit frightening, though, because I've seen people mention getting the error "[Microsoft][ODBC SQL Server Driver][SQL Server]Could not continue scan with NOLOCK due to data movement":
http://tek-tips.com/gviewthread.cfm/lev2/4/lev3/27/pid/183/qid/745923

Essentially I'm just displaying data (non-financial/critical) on these pages, and if uncommitted data is part of it, that's fine, but I cannot have the page get an error. I don't think READPAST is an option because it will skip locked data rather than ignore the lock. Ideally I'd like to have a NOLOCK that skips past any errors. Am I dreaming? Any suggestions?
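The trade-off the poster is describing between the two hints can be sketched as follows (same illustrative query as above):

```sql
-- NOLOCK: reads rows regardless of locks, including uncommitted changes.
-- A scan can fail with error 601 ("Could not continue scan with NOLOCK
-- due to data movement") if a page it is reading is moved or deleted
-- by a concurrent transaction mid-scan.
SELECT firstname, lastname
FROM users WITH (NOLOCK)
WHERE lastname > 'Smith';

-- READPAST: skips rows that are currently locked instead of reading them,
-- so the result set may silently omit rows, but it does not read
-- uncommitted data.
SELECT firstname, lastname
FROM users WITH (READPAST)
WHERE lastname > 'Smith';
```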
setbasedisthetruepath
Used SQL Salesman
992 Posts
Posted - 2004-01-29 : 16:54:40
quote: Ideally I'd like to have a NOLOCK that skips past any errors. Am I dreaming? Any suggestions?
Three things:

First, if you search this site, you'll see many gurus here strongly advising against NOLOCK, in any circumstance, ever. That's good advice. "Uncommitted" doesn't mean just "data that is good and not yet written"; it means "data that is in flux and does not have a current state". Showing uncommitted data is a crapshoot; you may be displaying correct data or not (it will be incorrect if a relevant transaction rolls back).

Second, if you're exploring NOLOCK because of timeouts occurring, bear in mind that many factors contribute to long-running queries. Have you definitively found blocking?

Third, don't use embedded SQL if you have a choice in favor of stored procedures.

Jonathan

Gaming will never be the same
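Jonathan's third point, applied to the query from the original post, might look something like this (the procedure name and parameter size are illustrative, not from the thread):

```sql
-- A minimal stored-procedure replacement for the embedded query.
-- The optimizer can cache a plan for it, and the ASP page no longer
-- concatenates SQL text.
CREATE PROCEDURE dbo.GetUsersByLastName
    @lastname varchar(50)
AS
    SELECT firstname, lastname
    FROM users
    WHERE lastname > @lastname
GO

-- Usage:
EXEC dbo.GetUsersByLastName @lastname = 'Smith'
```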
sureshot
Yak Posting Veteran
72 Posts
Posted - 2004-01-29 : 18:07:10
Yeah, new development is done with SPs, but we've got some legacy code inlined (generally light-lifting sort of stuff). By the nature of our app, certain screens can show data that is from a partially committed transaction and that's OK.

I've run sp_lock and then found sp_blocker_pss80:
http://support.microsoft.com/default.aspx?scid=kb;EN-US;271509

They confirm that there's blocking going on. The statements and SPs doing the blocking are all well indexed, and without substantial rewriting, and perhaps some caching, they aren't going to get any faster. They're simply deriving aggregates from lots of data (hundreds of thousands of rows, potentially, in a table with many millions). While rewriting and caching are not out of the question in the future, at this point I was looking at what I can do to keep the server humming nicely until the next release.
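For anyone following along, a quick way to spot blocking on SQL Server 2000 (the version current when this thread was written) is to query sysprocesses directly, alongside sp_lock; a minimal sketch:

```sql
-- Each row whose "blocked" column is nonzero is a session waiting on
-- the spid named in that column.
SELECT spid, blocked, waittime, lastwaittype, cmd
FROM master..sysprocesses
WHERE blocked <> 0;
```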