 All Forums
 SQL Server 2000 Forums
 SQL Server Development (2000)
 users and transaction isolation levels


aclarke
Posting Yak Master

133 Posts

Posted - 2004-05-22 : 18:37:58
One of my databases uses a "temporal" schema, where records are never deleted or modified, so we have an audit trail. If you edit any data, the database invalidates the existing row and inserts a new one with the same ID as the previous one, but with different time_validFrom and time_validTo values. This means I have to generate new IDs for inserts myself, since an autonumber (IDENTITY) column won't work.

This setup was working fine, but as the client has grown there have been a couple of instances where a new order came in while an existing order was being updated, and the overlapping inserts and updates collided, creating dirty data. I've tested SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, but I don't think it's going to work for my situation because all the web users connect via a single SQL Server login. From http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_set-set_74bw.asp I see the following text:

[Serializable] places a range lock on the data set, preventing other users from updating or inserting rows into the data set until the transaction is complete.

So it seems that, since all my queries are done using the same user, transaction isolation isn't going to work for me. I tested this by running two batches of SQL, each doing 100,000 simultaneous writes and each wrapped in a serializable transaction. I presume that if these were really serialized, one batch would write its 100,000 rows, and the other would start only after the first transaction had been committed. However, they both wrote at the same time.

How do other people solve this problem? I need to make sure that if two orders come in at the same time, they each get a different order ID.

Thanks,
- Andrew Clarke.

Kristen
Test

22859 Posts

Posted - 2004-05-23 : 05:40:53
Dunno if it's any good as an idea, but you could use an "EditNo" column and increment it when you UPDATE a row (via a TRIGGER to make sure! or in the UPDATE statement if done rigorously everywhere).

So in your WHERE clause for an update you could say

UPDATE MyTable
SET MyColumn = 'NewValue'
WHERE MyPrimaryKey = 'SomeValue'
AND MyEditNo = @MyPreviousEditNoValue

You then retrieve the EditNo in the client application when it gets the data from the database. When the data is "saved", if someone else has changed the record in the meantime the EditNo will be different, and the UPDATE will process 0 rows.
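Putting the two halves of that together, a sketch of the round trip (table and column names as in the example above; @OldEditNo and the error handling are illustrative, not from the post):

```sql
-- Hypothetical sketch of the EditNo (optimistic concurrency) pattern.
DECLARE @OldEditNo int

-- 1. Client reads the row, remembering the EditNo it saw:
SELECT @OldEditNo = MyEditNo
FROM MyTable
WHERE MyPrimaryKey = 'SomeValue'

-- 2. On save, update only if nobody has changed the row since the read:
UPDATE MyTable
SET MyColumn = 'NewValue',
    MyEditNo = MyEditNo + 1
WHERE MyPrimaryKey = 'SomeValue'
  AND MyEditNo = @OldEditNo

IF @@ROWCOUNT = 0
BEGIN
    -- Someone else updated the row in the meantime:
    -- reload the current values and let the user retry.
    RAISERROR ('Row was changed by another user', 16, 1)
END
```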

Kristen

aclarke
Posting Yak Master

133 Posts

Posted - 2004-05-23 : 12:38:36
Thanks for your response, Kristen. I'd love to be proven wrong here, but I don't see how this would solve the potential issue. AFAIK triggers don't hold up concurrent processes, so they can't really be trusted either. Consider this scenario, where Table_IDs.biggestID holds a value that's incremented for every insert into the main table, which is how we get the new IDs. At the beginning, let's say biggestID = 100:

1. Insert #1 is started, and row #1 goes into the database, using the largest value from Table_IDs.biggestID (value 100)
2. At the same time, Insert #2 is started and row #2 goes into the database, also using the largest value from Table_IDs.biggestID (value 100)
3. THEN trigger #1 is run, incrementing Table_IDs.biggestID to 101
4. Trigger #2 is run, incrementing Table_IDs.biggestID to 102.

This is a possible scenario, right? Now we have two rows with ID 100, and the next row that's inserted is going to have an ID of 102. There will be no ID 101. This is the situation I'm trying to avert.
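The race in steps 1–4 exists because the read and the increment are separate statements. For reference, T-SQL's compound assignment lets the two be a single atomic statement, which closes that window; a hedged sketch (Table_IDs and biggestID as in the post, @newID is illustrative):

```sql
-- Hedged sketch: read-and-increment as ONE atomic UPDATE, so two
-- concurrent callers can never both see 100.
DECLARE @newID int

UPDATE Table_IDs
SET @newID = biggestID = biggestID + 1   -- assign and increment in one statement

-- @newID now holds a value unique to this caller, even under concurrency;
-- use it as the ID for the subsequent INSERT into the main table.
```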

Thanks,
- Andrew.

Kristen
Test

22859 Posts

Posted - 2004-05-23 : 14:18:28
quote:
Originally posted by aclarke

4. Trigger #2 is run, incrementing Table_IDs.biggestID to 102

If the Trigger says

UPDATE Table_IDs
SET biggestID = biggestID+1
WHERE biggestID = @MyValue

Where @MyValue is "remembered" as being 100, from step #2, then @@ROWCOUNT would be 0, which could be trapped and the transaction ROLLBACK'd and re-attempted.
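Trapping that inside a transaction might look something like this (a sketch only; table and variable names as in the thread, the control flow is illustrative):

```sql
-- Sketch: detect the lost update and roll back so the caller can retry.
BEGIN TRANSACTION

UPDATE Table_IDs
SET biggestID = biggestID + 1
WHERE biggestID = @MyValue       -- @MyValue remembered from the earlier read

IF @@ROWCOUNT = 0
BEGIN
    -- Another process incremented biggestID first: undo this transaction
    -- and signal the caller to re-read and re-attempt.
    ROLLBACK TRANSACTION
    RETURN
END

COMMIT TRANSACTION
```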

Of course I may be barking up completely the wrong tree here!

Kristen

derrickleggett
Pointy Haired Yak DBA

4184 Posts

Posted - 2004-05-23 : 19:07:12
I've had tables that do this too, unfortunately. I found the easiest thing to do when the user connections and transactions/sec got high was to create a table just for the purpose of generating new IDs: just the ID and a date. Your "autonumber" procedure can simply insert a record into this table and capture the SCOPE_IDENTITY(). It's much faster and much more reliable than trying to band-aid together your own system to do this.
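A sketch of that key table and the "autonumber" step (table and column names are illustrative, not from the post; SCOPE_IDENTITY() is available from SQL Server 2000 onwards):

```sql
-- Hypothetical key table: the IDENTITY column does the serialization for us.
CREATE TABLE OrderIDs
(
    ID        int IDENTITY(1,1) PRIMARY KEY,
    CreatedAt datetime NOT NULL DEFAULT (GETDATE())
)

-- "Autonumber" step: each caller gets a distinct value, with no race,
-- because the engine hands out IDENTITY values atomically.
DECLARE @newID int

INSERT INTO OrderIDs DEFAULT VALUES
SET @newID = SCOPE_IDENTITY()   -- the ID generated by THIS insert, this scope
```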

MeanOldDBA
derrickleggett@hotmail.com

When life gives you a lemon, fire the DBA.