
 About "lost update"


vacuum
Starting Member

4 Posts

Posted - 2003-04-08 : 20:45:34
Hi all,

Suppose an account has a $100 balance. Client1 wants to add $10 and
Client2 wants to add $20 to the account. After both additions, the balance should be $130.

Both clients (transactions) read the balance at the same time, so each sees 100. Client1 adds 10 to 100 and updates the account to 110. Client2 adds 20 to 100 and updates the account to 120. The final balance is 120, because Client2 updated the account after Client1, and Client1's $10 is lost.

I want not only to prevent such lost updates but also to keep concurrency high.
I don't think SQL Server has this feature automatically. I have looked in BOL, but I can't find a complete solution.
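The read-then-write pattern that produces this lost update can be sketched in T-SQL (the table and column names here are made up for illustration):

```sql
-- Both clients run this same pattern:
DECLARE @Balance money, @Deposit money
SELECT @Balance = Balance FROM Account WHERE AccountID = 1  -- both read 100

-- Client1 computes 100 + 10, Client2 computes 100 + 20, and each
-- writes back its own result:
UPDATE Account SET Balance = @Balance + @Deposit WHERE AccountID = 1
-- whichever UPDATE runs last wins, and the other deposit is silently lost
```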

Thanks in advance,


robvolk
Most Valuable Yak

15732 Posts

Posted - 2003-04-08 : 21:24:26
You need to read up on "transaction isolation level"; it's documented in Books Online.

The major problem is having the client application browse the data for a long period of time. Even with the most restrictive isolation level, you'll never fully prevent the scenario from occurring, short of some kind of locking mechanism that cannot be released until the first person commits their changes...which will MURDER your performance and is just a bad idea anyway.

The best thing to do, assuming that isolation levels won't work for you, is to take steps to prevent two people from editing the same row at the same time. This is actually a better practice in any case. Another thing to consider is to do exactly as you described: "add 10..." and "add 20..." dollars to the amount in the UPDATE itself, instead of simply writing back the value 110 or 120. That would alleviate the concurrency issue.
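A rough sketch of that last suggestion (with assumed table and column names): letting the UPDATE itself do the arithmetic makes each increment atomic, because the row lock taken by the UPDATE serializes the two writes.

```sql
UPDATE Account
SET Balance = Balance + @Amount   -- +10 for Client1, +20 for Client2
WHERE AccountID = @AccountID
```

Run by both clients in either order, the balance ends up at 130.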


jsmith8858
Dr. Cross Join

7423 Posts

Posted - 2003-04-08 : 21:51:13
Also, consider not updating or storing the balances ... but rather just adding rows to a transactions table and having the balance always derived by adding up transactions.

that is, instead of:

Update Account
Set Balance = @NewBalance
Where AccountID = @AccountID

you could say:

INSERT INTO
AccountTransactions (fields)
VALUES (values)

and then to get the balance:

SELECT Sum(Activity) from AccountTransactions WHERE ...



- Jeff

Edited by - jsmith8858 on 04/08/2003 21:51:45

robvolk
Most Valuable Yak

15732 Posts

Posted - 2003-04-08 : 22:04:30
quote:
Also, consider not updating or storing the balances ... but rather just adding rows to a transactions table and having the balance always derived by adding up transactions.
Well, you could do that, but after a while the performance will really hit the skids (yep, I'm working with such a system now, although it does calculate balances regularly and keeps them in the table).

You could add a trigger to the transactions table that updates the customer's current balance. That would probably be the least headache for you, and while it doesn't exactly solve the concurrency problem (they'll each still see the original balance) you won't lose any updates.
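A minimal sketch of such a trigger, assuming an AccountTransactions table with AccountID and Activity columns (the names are illustrative):

```sql
CREATE TRIGGER trg_AccountTransactions_Ins
ON AccountTransactions
FOR INSERT
AS
-- fold the newly inserted activity into each account's stored balance
UPDATE a
SET Balance = a.Balance + i.TotalActivity
FROM Account a
JOIN (SELECT AccountID, SUM(Activity) AS TotalActivity
      FROM inserted
      GROUP BY AccountID) i
  ON a.AccountID = i.AccountID
```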


byrmol
Shed Building SQL Farmer

1591 Posts

Posted - 2003-04-08 : 22:44:49
It really does annoy me when the "correct" solution has performance issues..

I have also done the same with "balances": an extra table (1:1) with just the "balance" and the key... maintained by a trigger (shudders).

It keeps the update overhead down on the transaction table (i.e. no updates to it)... and every night we used to run a check to see that the "balance" was right.

DavidM

"SQL-3 is an abomination.."

vacuum
Starting Member

4 Posts

Posted - 2003-04-09 : 07:48:48
Another example:
A product's unit price is $5. Now the boss says the price must rise by 20%. Paul and Peter both select the row (each sees the old value, $5, and doesn't know another user is also working with this data) and each runs UPDATE ... SET unitprice = unitprice * 120% WHERE prod_ID = 25.
So a lost update occurs (because the result is unitprice * 120% * 120%).

How can I prevent lost updates while keeping concurrency high?


AndyB13
Aged Yak Warrior

583 Posts

Posted - 2003-04-09 : 09:18:17
Vacuum,
As you've pointed out, the two examples are two totally different scenarios:

Example 1 adds (updates) values to a balance.
Example 2 updates unitprice to a new unitprice.

I think you will need to deal with these scenarios in two different ways in your client app.

Example 1: as robvolk said, use a transaction that reads the current balance from the table (not from your "form field/whatever" in your client app) after the transaction begins, and then does the addition to that latest value.

Quote from BOL
quote:
Modifications made by concurrent transactions must be isolated from the modifications made by any other concurrent transactions. A transaction either sees data in the state it was in before another concurrent transaction modified it, or it sees the data after the second transaction has completed, but it does not see an intermediate state


Example 2: as in example 1 above, but instead of getting the latest value from the table, use the value from your "form field/whatever" to set the new unit price. You could also check that the value of the form field is the same as the value in the table before committing any changes.
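That check can be folded into the UPDATE itself so it is atomic (names assumed, following vacuum's example):

```sql
-- @OldPrice holds the value the user saw in the form field
UPDATE Products
SET unitprice = unitprice * 1.2
WHERE prod_ID = 25
  AND unitprice = @OldPrice   -- only update if nobody changed it meanwhile

IF @@ROWCOUNT = 0
    RAISERROR('The price was changed by another user; please reload.', 16, 1)
```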

Being a relative newbie to SQL Server, that's how I would approach it. Whether it's the right way remains to be seen, but that's why we come to these places - it's a learning curve.

HTH

Andy


vacuum
Starting Member

4 Posts

Posted - 2003-04-10 : 05:41:22
Sorry - example 2 isn't a lost update, right?

Now I have two questions:
1. In which user update interfaces do I need to worry about preventing lost updates?
2. I want a universal solution that prevents lost updates while keeping concurrency high.

I know of some solutions:
1. SQL Server has an "optimistic concurrency strategy": if I use a cursor I can use optimistic locking, which prevents lost updates and keeps concurrency high - but I should avoid cursors, right?
2. Add a timestamp field to the table. It can solve the lost update, but I don't think it is easy.
3. Handle it in the front-end tool, e.g. Delphi, PB, ASP.NET....
4. Use "updlock" - I think this is what robvolk meant by "some kind of locking mechanism".

Which one is better, or is there another suggestion?
Thanks a lot.
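Option 2 (the timestamp field) can be sketched like this (assumed table and column names; the timestamp value is read along with the row and sent back by the client with its update):

```sql
UPDATE Account
SET Balance = @NewBalance
WHERE AccountID = @AccountID
  AND RowVer = @RowVerReadEarlier  -- timestamp value read with the row

IF @@ROWCOUNT = 0
    -- someone else modified the row since it was read: re-read and retry
    RAISERROR('Row was modified by another user.', 16, 1)
```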


AndyB13
Aged Yak Warrior

583 Posts

Posted - 2003-04-10 : 08:20:35
I would use a front end to enforce some workflow.

Example: you have two tasks to complete - the customer's current balance = 100
1. add £10 to the customer's balance
2. add 5% interest to the customer's balance

In your front end, select the customer required, then the task to do - add £10 or add 5% interest.
E.g. radio buttons with variables assigned - these could be different stored procedures, or just one stored procedure with different parameters.
Like:
Add £10 selected - execute sproc1 10 (variable @amount)
Add 5% interest selected - execute sproc2 100, 5
Or
Add £10 selected - execute sproc 100, 10, Y (variable @addToBalance)
Add 5% interest selected - execute sproc 100, 5, N

Then the user executes the procedure.
This is a basic example of the logic:

IF @addToBalance = 'Y'
BEGIN
    -- this could be sproc1
    BEGIN TRANSACTION
    UPDATE Table
    SET Balance = Balance + @Amount   -- e.g. 10
    COMMIT TRANSACTION
END
ELSE
BEGIN
    -- this could be sproc2
    BEGIN TRANSACTION
    -- check whether the balance has changed since it was loaded into the client app
    IF @OldBalance = (SELECT Balance FROM Table)   -- e.g. 100
        UPDATE Table
        SET Balance = @OldBalance * (1 + @Rate)   -- e.g. 5% interest
    ELSE
        PRINT 'Balance changed - no update made'  -- send user error/no-update message
    COMMIT TRANSACTION
END

-- etc. etc.

Like I said, this is the way I would do it, being a newbie. It would be interesting to know whether the gurus on here would do it the same way, or whether I'm totally wrong.

Hope it makes sense - it does in my little world anyway.

Andy





vacuum
Starting Member

4 Posts

Posted - 2003-04-14 : 03:34:32
Any suggestions?


robvolk
Most Valuable Yak

15732 Posts

Posted - 2003-04-14 : 23:14:16
You want to ensure data integrity and prevent lost or inaccurate updates, yet you want more people to make updates to the same rows at the same time. These are mutually exclusive qualities; moving closer toward one necessarily moves you further from the other, and they cannot BOTH be increased. You need to decide which one is more important and move toward it, with the understanding that the other will suffer.

In keeping with the old saying "too many cooks spoil the broth" I would suggest that data integrity is more important than having hundreds of people update the same rows at the same time. Why exactly do you need higher concurrency anyway? You never explained if it was a real need or just a desired feature. I've never ever come across a situation where someone was glad that two or more people could update a row if it meant that the data could end up totally wrong once they were done. And I've heard infinite complaints about data being locked by one person while someone else was desperately trying to look at it, not even update it, and they couldn't because of a too-restrictive locking mechanism.
