
 Insert and delete record at the same time


ksyong17
Starting Member

5 Posts

Posted - 2011-08-18 : 01:58:36
Hi,

I have a SQL database that requires data to be written and deleted at the same time. I have no control over how the records are written, as that's done by software. However, every hour I have to remove the old data written to it.

I noticed that while SQL Server is deleting records, it somehow locks the table being written to. After the delete completed, I lost all the data that was supposed to be written to the database during the delete period.

How do I get around this problem so that writing and deleting can happen simultaneously?

Thanks
Darrick

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2011-08-18 : 02:02:05
You have to design the table properly. Please show us the table design as well as the insert and delete queries. Oh and indexes too.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

ksyong17
Starting Member

5 Posts

Posted - 2011-08-18 : 02:28:20
quote:
Originally posted by tkizer

You have to design the table properly. Please show us the table design as well as the insert and delete queries. Oh and indexes too.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog



Hi Tara,

The table created by some 3rd software which i have no control of.

The table design is as below:

Table Name - Data_Log
Columns - Timestamp (PK,datetime,not null), point_id (PK,varchar(55),not null), _VAL(varchar(255),Null)
Keys - PK_DATA_LOG
Constraints - None
Triggers - None
Indexes - PK_DATA_LOG(Clustered), timestamp(Non-unique,Non-Clustered)
Statistics - PK_DATA_LOG,timestamp

Please let me know if you need more info.

Thanks
Darrick


tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2011-08-18 : 02:34:19
When the deletes occur, is it a single row delete or a batch? Just how big is the delete?

Deletes and inserts can happen simultaneously on a table, but it depends on a few factors whether or not there will be blocking, such as size of delete (will it put an exclusive lock on the table), ordering of the data (clustered index), etc. It all comes down to locking though.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

Kristen
Test

22859 Posts

Posted - 2011-08-18 : 02:34:56
"After delete completed, i lost all the data that is supposed to be written to the database during the deleting period. "

If your delete was locking the table, or slowing the server down, the application writing the data should have received an error (such as "Deadlock") or timeout. Did it ignore the errors?

You need to make the delete faster, or more "gentle", or both. Indexes to help the delete find the appropriate rows more quickly, and loops to delete the rows in small batches (with a delay between each iteration), will probably help.
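A minimal sketch of that batched-delete loop, assuming the Data_Log table described above and a one-hour retention window (the batch size and delay are illustrative, not tuned values):

```sql
-- Delete in small batches with a pause between iterations, so
-- writers are only blocked briefly per batch.
DECLARE @cutoff datetime = DATEADD(HOUR, -1, GETDATE());

WHILE 1 = 1
BEGIN
    DELETE TOP (5000)
    FROM dbo.Data_Log
    WHERE [Timestamp] < @cutoff;

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to delete

    WAITFOR DELAY '00:00:02';  -- give the writers a chance between batches
END
```

Small batches keep each transaction short, so SQL Server holds far fewer locks at a time and is much less likely to escalate to a table lock.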

ksyong17
Starting Member

5 Posts

Posted - 2011-08-18 : 03:06:31
quote:
Originally posted by tkizer

When the deletes occur, is it a single row delete or a batch? Just how big is the delete?

Deletes and inserts can happen simultaneously on a table, but it depends on a few factors whether or not there will be blocking, such as size of delete (will it put an exclusive lock on the table), ordering of the data (clustered index), etc. It all comes down to locking though.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog



The size of the delete is huge, as it takes about 45 minutes to delete the data.

I'm new to SQL locking. Is there a possibility I could remove these locks? Or tune some parameters to make this work?

Also, is the transaction log supposed to capture/buffer all this missing data (during the delete) and push it in as soon as the delete is completed?

Thanks

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2011-08-18 : 03:10:47
A huge delete is going to cause problems. It needs to be broken down into small batches. You can't remove the locks. You could try to reduce the locking level, but that would require a code change, and there's no guarantee SQL Server will honor it.
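To illustrate the "reduce the locking level" idea (a sketch only, assuming the Data_Log table from earlier in the thread; the hint is a request, not a command):

```sql
-- Asking for row-level locks on a batched delete. SQL Server can
-- still escalate to a table lock if too many row locks accumulate,
-- so this is no guarantee (batch size illustrative).
DELETE TOP (5000)
FROM dbo.Data_Log WITH (ROWLOCK)
WHERE [Timestamp] < DATEADD(HOUR, -1, GETDATE());
```

Keeping the batch small is what really makes the hint workable; a single huge delete will accumulate enough locks that escalation is likely regardless.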

The transaction log doesn't do that. Any "missing data" is up to the application to retry. The missing data is almost certainly the result of blocking, which led to a rollback after the timeout occurred. The application is responsible for handling this.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

ksyong17
Starting Member

5 Posts

Posted - 2011-08-18 : 04:43:01
Unfortunately, the application does not do this.

Currently, querying the data for a 30-second interval takes about 5 minutes. I can see the indexes and server performance causing this problem.

My original plan was to create a scheduled job to delete the data every hour. However, this job has taken about 45 minutes to complete, hence causing the writes to be invalid.

By the way, would a scheduled job cause this problem of not writing to the database? I think I did a test of manually deleting using a normal query, and the data still got written in. Any comment on this?

Thanks again.

russell
Pyro-ma-ni-yak

5072 Posts

Posted - 2011-08-18 : 08:03:35
Do you have a maintenance window when no one is using the application?

Maybe you can run the deletes then, and delete in small batches.

Kristen
Test

22859 Posts

Posted - 2011-08-18 : 08:24:11
"However, this job has taken about 45 minutes to complete, hence causing the writes to be invalid."

It's taking 45 minutes to delete the rows that were added in a 60-minute interval? (If so, your hardware isn't up to the job!)

or

It's taking 45 minutes to delete lots of old, stale rows, and once those are gone it will (hopefully!) run fast enough. If that's the case, schedule it for some down-time, and once it's got rid of the old stale data, look at what you can do regularly thereafter - hourly is fine, but I would favour doing it during a quiet period (if you have one). There's less chance then of it interfering with other users / the logging's INSERT activity.

"The size of the delete is huge, as it takes about 45 minutes to delete the data"

What do you call "huge"? I can delete 1,000,000 rows an hour, or maybe even 10,000,000 rows/hour, using "gentle delete" methods that have only minor impact on other users / applications.

If you are deleting fewer than 10M rows, then I suspect all that is needed is to tune the delete to be more "gentle". That may mean getting rid of some FKeys, considering which indexes are having keys deleted and whether they overlap with inserts such that locking issues are likely, and so on. If there are delete triggers on the table, those will need tuning too.

You could rename the table, create a new table in its place (logging inserts can then resume), copy back what you want to keep, and then DROP the renamed table. If you are deleting "most" of the data, this will be better than trying to delete it in situ.
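A sketch of that rename-and-swap approach, assuming the Data_Log table from this thread and a one-hour retention window (the new table's PK and indexes must be re-created separately, which I've only noted in a comment):

```sql
-- Swap in an empty table so logging can resume immediately,
-- copy back the rows worth keeping, then drop the old table.
EXEC sp_rename 'dbo.Data_Log', 'Data_Log_old';

SELECT TOP (0) *
INTO dbo.Data_Log
FROM dbo.Data_Log_old;      -- copies columns only; re-create the PK and indexes here

INSERT INTO dbo.Data_Log ([Timestamp], point_id, _VAL)
SELECT [Timestamp], point_id, _VAL
FROM dbo.Data_Log_old
WHERE [Timestamp] >= DATEADD(HOUR, -1, GETDATE());

DROP TABLE dbo.Data_Log_old;
```

The copy-back touches only the small fraction of rows you keep, which is why this beats deleting "most" of the data in place.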

Sachin.Nand

2937 Posts

Posted - 2011-08-18 : 08:54:11
There is a good tip here

http://blogs.msdn.com/b/sqlcat/archive/2009/05/21/fast-ordered-delete.aspx
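If memory serves, the technique in that link deletes through a view defined with TOP and ORDER BY, so rows are removed in index order - something like this (a sketch; the view name and batch size are illustrative):

```sql
-- "Fast ordered delete": deleting through a TOP-ordered view
-- removes the oldest rows in clustered index order.
CREATE VIEW dbo.v_OldestDataLog
AS
SELECT TOP (10000) *
FROM dbo.Data_Log
ORDER BY [Timestamp];
GO

DELETE FROM dbo.v_OldestDataLog;  -- removes the 10,000 oldest rows
```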

PBUH


Kristen
Test

22859 Posts

Posted - 2011-08-18 : 09:58:49
Excellent tip - I would never have thought of that one.

ksyong17
Starting Member

5 Posts

Posted - 2011-08-18 : 17:09:34
Thanks for all the suggestions and tips. I will try to work out the delete in portions.

Thanks again.

SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2011-08-18 : 17:18:05
You don't need a view for this. A CTE will do just fine:

;WITH cteSource
AS (
    SELECT TOP (10000) a
    FROM dbo.t1
    ORDER BY a
)
DELETE
FROM cteSource;



N 56°04'39.26"
E 12°55'05.63"

Kristen
Test

22859 Posts

Posted - 2011-08-19 : 02:57:15
Although ...

My experience has been that if the criteria for the delete are complex, the "find" part of the delete can take a significant amount of time (compared to the delete part).

So each time you run the "delete first 10,000", the "find" part is repeated.

If deleting in a loop, I prefer to find all the rows to be deleted into a #TEMP table containing the clustered index PK, and then JOIN that table in the DELETE loop (deleting 10,000 rows per iteration, in clustered index order so that the deletes have physical proximity).
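A sketch of that approach, assuming the Data_Log table from this thread, whose clustered PK is (Timestamp, point_id); batch size and delay are illustrative:

```sql
-- Run the expensive "find" once, capturing only the clustered key.
SELECT [Timestamp], point_id
INTO #ToDelete
FROM dbo.Data_Log
WHERE [Timestamp] < DATEADD(HOUR, -1, GETDATE());

-- Cluster the key list the same way as the target table.
CREATE CLUSTERED INDEX IX_ToDelete ON #ToDelete ([Timestamp], point_id);

-- Delete by joining the key list, a batch at a time.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) d
    FROM dbo.Data_Log AS d
    INNER JOIN #ToDelete AS t
        ON  t.[Timestamp] = d.[Timestamp]
        AND t.point_id    = d.point_id;

    IF @@ROWCOUNT = 0 BREAK;
    WAITFOR DELAY '00:00:01';  -- breathing room for the writers
END

DROP TABLE #ToDelete;
```

Note that TOP without ORDER BY doesn't strictly guarantee delete order, but with both sides clustered the same way the batches tend to have the physical proximity Kristen describes.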

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2011-08-19 : 12:39:40
Kristen, that's what I do on my very active and big databases.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

Kristen
Test

22859 Posts

Posted - 2011-08-22 : 10:55:25
I expect it was you who taught me to do it that way, Tara.
   
