Author |
Topic |
Jim77
Constraint Violating Yak Guru
440 Posts |
Posted - 2007-01-04 : 05:51:38
|
Hello there guys, I have quite a hectic problem with one of my tables. A delete is taking ages (55 minutes) with the following statement:

Delete from itemsales_test where saledate = '30/12/2006'

Normally a similar statement would take 2 minutes, because there are only 19000 records that match the above criteria. I have tried to do a controlled delete with no luck either:

set rowcount 1000
Delete from itemsales_test where saledate = '30/12/2006'

but it was still taking 20 minutes before I physically had to press the stop button in QA. Please help, any ideas why this is occurring? |
|
nr
SQLTeam MVY
12543 Posts |
Posted - 2007-01-04 : 06:42:46
|
See if anything is blocking the delete. Is there an index on saledate, or foreign keys referencing it?

==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy. |
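[Editor's note: as a sketch of how to check for blocking on SQL Server 2000, something like this can be run in a second Query Analyzer window; the `blocked` column holds the SPID of the blocking session.]

```sql
-- List sessions that are currently blocked, and by whom
SELECT spid, blocked, waittime, lastwaittype, cmd
FROM master..sysprocesses
WHERE blocked <> 0

-- Or simply eyeball the BlkBy column of:
EXEC sp_who2
```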
 |
|
Jim77
Constraint Violating Yak Guru
440 Posts |
Posted - 2007-01-04 : 06:54:13
|
thanks nr, this delete seems to be blocking everything else. I have dropped all foreign keys referencing this table and have succeeded in deleting 1000 rows, which took 24 minutes, which is ages. I do not have an index on saledate. Could I stop the delete now, alter the table and create a nonclustered index on saledate, and try running this delete again? |
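[Editor's note: for reference, the index in question could be created with something like the statement below (the index name is illustrative). Note that building an index on a large table is itself a lengthy, log-heavy operation.]

```sql
-- Nonclustered index on the column used in the DELETE's WHERE clause
CREATE NONCLUSTERED INDEX IX_itemsales_test_saledate
    ON itemsales_test (saledate)
```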
 |
|
mr_mist
Grunnio
1870 Posts |
Posted - 2007-01-04 : 07:16:45
|
If there's no index on the date column then your query will have to check every row in the table to see if it matches. So, depending on how large the table is, you could be better off with an index.

-------
Moo. :) |
 |
|
Jim77
Constraint Violating Yak Guru
440 Posts |
Posted - 2007-01-04 : 08:03:24
|
I know some people on this forum might get upset with me for asking this (to them I apologise), but has anyone got some references or links on how to create a series of nonclustered indexes on an existing fact table that has only got a PK clustered index on it? The fact table has about 100000000 rows and gets about 90000 added to it every day. Would I set a maintenance plan to run an index defrag on all indexes after the mass insert gets done daily? |
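[Editor's note: a minimal sketch of both steps, assuming made-up table, column, and index names. On SQL Server 2000, DBCC INDEXDEFRAG is an online operation, so it can run after the daily bulk insert without taking the table offline.]

```sql
-- Add nonclustered indexes alongside the existing clustered PK
CREATE NONCLUSTERED INDEX IX_FactSales_SaleDate ON FactSales (SaleDate)
CREATE NONCLUSTERED INDEX IX_FactSales_StoreID  ON FactSales (StoreID)

-- After the daily mass insert: defragment in place
-- (first argument 0 means "the current database")
DBCC INDEXDEFRAG (0, 'FactSales', 'IX_FactSales_SaleDate')
DBCC INDEXDEFRAG (0, 'FactSales', 'IX_FactSales_StoreID')
```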
 |
|
miranwar
Posting Yak Master
125 Posts |
Posted - 2007-01-04 : 09:33:14
|
In addition, deleting in shorter transactions should be quicker. Try something like this:

set rowcount 10000
Delete from itemsales_test where saledate = '30/12/2006'

/* If the delete took place then loop, deleting 10,000 recs each time, until there are no records left to delete */
while @@rowcount > 0
begin
    Delete from itemsales_test where saledate = '30/12/2006'
end |
 |
|
miranwar
Posting Yak Master
125 Posts |
Posted - 2007-01-04 : 09:36:36
|
Hi Jim77,

Also make sure the saledate column is indexed |
 |
|
mr_mist
Grunnio
1870 Posts |
Posted - 2007-01-04 : 09:46:50
|
If you set a suitable fill factor for your index, then it won't need to be defragmented as often.

-------
Moo. :) |
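[Editor's note: a fill factor is set when the index is created or rebuilt. An illustrative sketch (index name assumed), leaving 10% free space on each leaf page to absorb the daily inserts:]

```sql
-- Create the index with 10% free space per leaf page
CREATE NONCLUSTERED INDEX IX_itemsales_test_saledate
    ON itemsales_test (saledate)
    WITH FILLFACTOR = 90

-- Or rebuild an existing index with a new fill factor
DBCC DBREINDEX ('itemsales_test', 'IX_itemsales_test_saledate', 90)
```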
 |
|
MuadDBA
628 Posts |
Posted - 2007-01-04 : 09:55:15
|
quote: Originally posted by Jim77 I know some people on this forum might get upset with me for asking this (to them I apologise), but has anyone got some references or links on how to create a series of nonclustered indexes on an existing fact table that has only got a PK clustered index on it? The fact table has about 100000000 rows and gets about 90000 added to it every day. Would I set a maintenance plan to run an index defrag on all indexes after the mass insert gets done daily?
That's a LOT of data. Why do you think that a similar statement would only take 2 minutes? You're scanning the entire table to find these records, and that's going to take a very long time. Indexing it will make the query quicker, but you need to make sure you have the space to do this, especially if you use a lower fillfactor. I would not do an indexdefrag every day; I'd do one every week, during a maintenance window, to start with. That way you could see how long it takes, and schedule accordingly. |
 |
|
Kristen
Test
22859 Posts |
Posted - 2007-01-04 : 11:25:56
|
I expect the problem is finding the matching rows, rather than deleting them, but the delete could be troublesome too. Using

set rowcount 1000

doesn't really help, because you need to re-find the matching rows several times before you are done! I normally tackle this problem by doing a:

CREATE TABLE #MyTempTable
(
    MyPKColumn int NOT NULL,
    PRIMARY KEY
    (
        MyPKColumn
    )
)

INSERT INTO #MyTempTable
SELECT MyPKColumn
FROM itemsales_test WITH (NOLOCK)
WHERE saledate = '30/12/2006'

and then

set rowcount 10

DELETE D
FROM #MyTempTable AS T
    JOIN itemsales_test AS D ON D.MyPKColumn = T.MyPKColumn
WHERE D.saledate = '30/12/2006'

You will get to see how long the SELECT takes, of course!! And then you can try the delete; note that I have started with a batch size of 10. If that takes forever then the game probably needs to be played differently, but assuming a clustered PK on itemsales_test I reckon you'll be OK. Then bump up the batch size to see how you get on.

I've stuck a NOLOCK hint on the SELECT because you don't really care too much about the source table, but you want to scan through it as quickly, and with as little blocking, as possible. Assuming there is no new data being inserted for saledate = '30/12/2006', you aren't going to miss any!

Note also that I didn't do:

SELECT MyPKColumn
INTO #MyTempTable
FROM itemsales_test
...

because this will lock up TEMPDB for the duration of the SELECT (not just the time for the CREATE TABLE). Create the temp table first.

You may even want to make a permanent table, in case you lose your connection and all that effort to find the matching rows has to be repeated. (When I do THAT, I do it in a different database, so as not to "pollute" the main database.)

Kristen |
 |
|
Jim77
Constraint Violating Yak Guru
440 Posts |
Posted - 2007-01-04 : 11:59:55
|
23068058 rows in my table, to be precise. I have tried a controlled delete (set rowcount 1000), which takes half an hour to do 1000. The saledate column now has a nonclustered index, which doesn't help matters, it just queries faster. Two weeks ago this same type of delete on the same table took less than 10 minutes. I thought it was a server issue, i.e. fragmentation, but I have defragged and still it is taking ages.

I really have run out of options, but need all dates on the 30/12/2006 to be deleted before I do a re-import. There are currently 12520 rows left in the table that match the criteria, but so far none of these suggestions have helped. Anything else I could have overlooked will be graciously tried. |
 |
|
Jim77
Constraint Violating Yak Guru
440 Posts |
Posted - 2007-01-04 : 12:01:56
|
thanks Kristen will give this a try then and let you know. |
 |
|
Kristen
Test
22859 Posts |
Posted - 2007-01-04 : 12:06:31
|
"the saledate column is now a non clustered index"Check the query plan, and if its not being used trying putting a HINT on the SELECT to force using that index and see if its any quicker."have defraged and still it is taking ages"DEFRAG won't update the stats (but I am sure you know that) and for large tables I fine you sometime need to use FULLSCAN for the Update Statistics to get the optimiser to make a sensible choice (in fact we use that option for all our Update Statistics "just to be sure we are sure" !!)Kristen |
 |
|
Jim77
Constraint Violating Yak Guru
440 Posts |
Posted - 2007-01-04 : 14:13:31
|
Kristen you are a legend, thanks for making sense of my issue, will have to run this tomorrow though but I am more optimistic now,Cheers. |
 |
|
Kristen
Test
22859 Posts |
Posted - 2007-01-04 : 15:07:05
|
I'll not hold my breath just yet ... |
 |
|
Jim77
Constraint Violating Yak Guru
440 Posts |
Posted - 2007-01-05 : 05:41:49
|
Weird thing happened today, Kristen. I came in this morning and ran the following statement:

Delete from itemsales_test where saledate = '30/12/2006'

and my server literally took 32 seconds to delete all rows matching this criteria. I don't know what I did, but sp_updatestats with fullscan might have had something to do with it, after a defrag? As there were no other jobs running on this server when it was running slowly. |
 |
|
Kristen
Test
22859 Posts |
Posted - 2007-01-05 : 10:33:58
|
"sp_updatestats fullscan might have had soomething to do with it"If you don't normally use FULLSCAN it might well have - it could have made the difference between an optimal query plan being used, rather than a poor one based on one-eye-closed and one-hand-behind-back!"after a defrag" could help too - indexes pages more optimally stored, thus requiring less disk I/O to traverse.But those won't count if you do Update Stats and Defrag normally.If it makes you feel better I'll send you a bill for being up all night fixing your server for you!!Kristen |
 |
|
Jim77
Constraint Violating Yak Guru
440 Posts |
Posted - 2007-01-05 : 10:38:10
|
bill it to mr incognito, thanks for your help Kristen |
 |
|
sapient007
Starting Member
5 Posts |
Posted - 2007-08-01 : 17:02:58
|
I've posted a question here: http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=87148

Anyhow, I'm doing the delete-10000-at-a-time strategy, so my query looks like:

1. truncate tempduplicate table
2. select into tempduplicate from tableYYY with (nolock) where conditions
3. set rowcount 10000
4. while @@rowcount > 0 delete tableYYY inner join tempduplicates

^^ So my question now is: how do I delete from the tempduplicate table, when I have already deleted the rows from tableYYY, in the same while @@rowcount > 0 loop, to ensure that all duplicates are removed?

tx |
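[Editor's note: one possible shape for that loop, as a sketch only - the table names come from the post above, but the key column `pk` is assumed. Each pass deletes a batch from tableYYY, then clears staging keys whose rows no longer exist; ROWCOUNT is lifted around the clean-up so it is not capped at 10,000.]

```sql
SET ROWCOUNT 10000

DELETE Y
FROM tableYYY AS Y
    INNER JOIN tempduplicate AS T ON T.pk = Y.pk

WHILE @@ROWCOUNT > 0
BEGIN
    -- Tidy the staging table: drop keys already fully removed from tableYYY
    SET ROWCOUNT 0
    DELETE T
    FROM tempduplicate AS T
    WHERE NOT EXISTS (SELECT * FROM tableYYY AS Y WHERE Y.pk = T.pk)

    -- Next batch; the WHILE condition re-tests @@ROWCOUNT of this DELETE
    SET ROWCOUNT 10000
    DELETE Y
    FROM tableYYY AS Y
        INNER JOIN tempduplicate AS T ON T.pk = Y.pk
END

SET ROWCOUNT 0
```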
 |
|