Job
Yak Posting Veteran
69 Posts
Posted - 2005-08-19 : 18:27:21

This statement takes forever to run and it shouldn't. Makes me think there is something wrong.

[CODE]
--03
UPDATE Tbl_Txn_Line_CHS
SET Tbl_Txn_Line_CHS.NF_TRVUper_2003 = Tbl_Master_Blnd_RVU_RestNJ_03.NFAC_TRVU
FROM Tbl_Master_Blnd_RVU_RestNJ_03
INNER JOIN Tbl_Txn_Line_CHS
	ON Tbl_Master_Blnd_RVU_RestNJ_03.CPT_26_TC = Tbl_Txn_Line_CHS.CPT_26_TC
[/CODE]

Any thoughts are appreciated...
AjarnMark
SQL Slashing Gunting Master
3246 Posts
Posted - 2005-08-19 : 18:48:11

If you SELECT the field rather than UPDATE it, how long does the query take to run? It should be comparable.

What if you reverse the order of your tables in the FROM clause and put the one that you are actually updating as the first one listed?

---------------------------
EmeraldCityDomains.com
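The timing comparison AjarnMark suggests might look something like this (a sketch only, using the table names from the original post; the aliases are added here for readability):

```sql
-- Same join as the UPDATE, but read-only; compare its duration to the
-- UPDATE's to see how much of the cost is the write/logging itself.
SELECT t.NF_TRVUper_2003, m.NFAC_TRVU
FROM Tbl_Master_Blnd_RVU_RestNJ_03 AS m
INNER JOIN Tbl_Txn_Line_CHS AS t
	ON m.CPT_26_TC = t.CPT_26_TC
```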
ajthepoolman
Constraint Violating Yak Guru
384 Posts
Posted - 2005-08-21 : 01:07:25

Is there a WHERE clause you can implement that will limit the number of rows being updated? How many rows are in each table?

Aj

Hey, it compiles.
Kristen
Test
22859 Posts
Posted - 2005-08-21 : 02:12:33

Is there a trigger on Tbl_Txn_Line_CHS which is doing lots of work?

You could do it in a loop (the WHERE clause might be handy anyway if any of the records already have matching values)

[CODE]
DECLARE @intRowCount int
SET @intRowCount 1	-- Force first loop iteration
SET ROWCOUNT 1000	-- Some reasonable batch size
WHILE @intRowCount > 0
BEGIN
	UPDATE Tbl_Txn_Line_CHS
	SET Tbl_Txn_Line_CHS.NF_TRVUper_2003 = Tbl_Master_Blnd_RVU_RestNJ_03.NFAC_TRVU
	FROM Tbl_Master_Blnd_RVU_RestNJ_03
	INNER JOIN Tbl_Txn_Line_CHS
		ON Tbl_Master_Blnd_RVU_RestNJ_03.CPT_26_TC = Tbl_Txn_Line_CHS.CPT_26_TC
	WHERE Tbl_Txn_Line_CHS.NF_TRVUper_2003 <> Tbl_Master_Blnd_RVU_RestNJ_03.NFAC_TRVU
		OR (Tbl_Txn_Line_CHS.NF_TRVUper_2003 IS NULL AND Tbl_Master_Blnd_RVU_RestNJ_03.NFAC_TRVU IS NOT NULL)
		OR (Tbl_Txn_Line_CHS.NF_TRVUper_2003 IS NOT NULL AND Tbl_Master_Blnd_RVU_RestNJ_03.NFAC_TRVU IS NULL)
	SELECT @intRowCount = @@ROWCOUNT	-- Capture rows affected so the loop can terminate
END
SET ROWCOUNT 0
[/CODE]

Kristen
Job
Yak Posting Veteran
69 Posts
Posted - 2005-08-22 : 18:27:25

The Tbl_Txn_Line_CHS has about 6.5 MM lines. The Tbl_Master_Blnd_RVU_RestNJ_03 table only has about 14,000. There are no triggers.

I dumped 1 MM lines into a temp table and it updated quite quickly, like 2 minutes. Doing the math I thought it might take up to 10 min to run the entire table; however, I let it run over 8 hours and still it didn't finish. I've run similar updates on 26 MM lines and it took around 40 minutes.

So it leads me to believe that there is something in the table structure that is incorrect. I'm joining varchar(50) to varchar(255), and the update field is a decimal and they have different precisions... didn't think that mattered... not sure what to do now...
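The mismatched definitions Job describes could be confirmed with a query along these lines (a sketch; the column names are taken from the thread, and INFORMATION_SCHEMA.COLUMNS is standard in SQL Server):

```sql
-- Compare the declared types of the join and update columns in both tables;
-- differing CHARACTER_MAXIMUM_LENGTH or NUMERIC_PRECISION/SCALE values
-- confirm the varchar(50)-vs-varchar(255) and decimal mismatches.
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE,
	CHARACTER_MAXIMUM_LENGTH, NUMERIC_PRECISION, NUMERIC_SCALE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME IN ('Tbl_Txn_Line_CHS', 'Tbl_Master_Blnd_RVU_RestNJ_03')
	AND COLUMN_NAME IN ('CPT_26_TC', 'NF_TRVUper_2003', 'NFAC_TRVU')
ORDER BY COLUMN_NAME, TABLE_NAME
```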
AjarnMark
SQL Slashing Gunting Master
3246 Posts
Posted - 2005-08-22 : 19:51:43

Job, how long does a SELECT take using the same criteria?

And did you try Kristen's idea? It does the updating of the table in batches to ease the strain on the Transaction Log.

---------------------------
EmeraldCityDomains.com
Job
Yak Posting Veteran
69 Posts
Posted - 2005-08-22 : 21:43:55

Hi all,

The SELECT only takes 18 minutes...

Also, when I try to run the looping example it gives an error on line 2... says incorrect syntax near '1'... I'm new at this and probably running it incorrectly...
madhivanan
Premature Yak Congratulator
22864 Posts
Posted - 2005-08-23 : 01:32:14

That should be SET @intRowCount = 1 and try it again

Madhivanan

Failing to plan is Planning to fail
Kristen
Test
22859 Posts
Posted - 2005-08-23 : 03:13:55

D'Oh ... thanks madhivanan!

Kristen
Job
Yak Posting Veteran
69 Posts
Posted - 2005-08-23 : 10:33:35

Ok thanks, that is running now. Would it be better/more efficient to never update tables like this and just create views? It only takes minutes to view the SELECT of this statement, and I'm assuming that the updates always take much longer. Or, because it is a view, will I always incur the time to re-run the view every time I use it?
AjarnMark
SQL Slashing Gunting Master
3246 Posts
Posted - 2005-08-23 : 13:12:34

Perhaps if you provided a little more info about what this is doing, in business terms, we could help create a better solution. For example, as AJ mentioned, is there a WHERE clause that can be used to limit the number of records processed?

When you do the INNER JOIN, how many total rows do you get? I'm guessing the 14,000 rows of one table are joining multiple times with the 6.5 million rows of the other, and not just returning 14,000 rows. Or does your JOIN syntax need to be modified to more specifically limit the number of rows returned?

Now that you have gotten this initial population of data into the 6.5 million row table, will you actually need to run this UPDATE statement ever again? Why? Would it be better put into a scheduled job to run overnight?

I don't see how creating a view would handle your problem. If you really need the data in the table, then you UPDATE it. If you just need to view it, then you JOIN the tables. What would a VIEW do for you?

---------------------------
EmeraldCityDomains.com
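AjarnMark's row-multiplication question can be checked with something like the following (a sketch against the thread's table names; if CPT_26_TC is not unique in the 14,000-row lookup table, each big-table row matches several lookup rows and the join fans out):

```sql
-- Total rows produced by the join; if this is much larger than the
-- 6.5 MM rows of Tbl_Txn_Line_CHS, the join key is duplicated.
SELECT COUNT(*) AS joined_rows
FROM Tbl_Master_Blnd_RVU_RestNJ_03 AS m
INNER JOIN Tbl_Txn_Line_CHS AS t
	ON m.CPT_26_TC = t.CPT_26_TC

-- Duplicate join-key values in the lookup table, if any
SELECT CPT_26_TC, COUNT(*) AS cnt
FROM Tbl_Master_Blnd_RVU_RestNJ_03
GROUP BY CPT_26_TC
HAVING COUNT(*) > 1
```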
Kristen
Test
22859 Posts
Posted - 2005-08-24 : 08:02:36

Or put a trigger on the table(s) so as a row is changed, or inserted, it automatically gets the corresponding value?

Kristen
Job
Yak Posting Veteran
69 Posts
Posted - 2005-08-24 : 12:00:36

The update to the table is only a one-time thing. We get large chunks of data, do our analysis, and then we won't (at least at this point) use the data much in the future, maybe some benchmarking vs. other clients. Therefore, I do believe that the creation of the view would be sufficient.

AjarnMark, with the JOIN, it returns about 5.5 MM records. There isn't necessarily a limit to the number of records returned, as this updating is one step in 10 that I use to calculate all of the various fields in order to run the analysis. So I don't actually need the data in the table, just use the field for further calculations.

Having said all of this, I still do not understand why it would take so long. I know you get questions like these and it is rather like someone saying "what is wrong with my car, it won't start", so I apologize for that. I know just enough to be a bit dangerous. However, it doesn't make sense that updating (right or wrong) 5.5 MM lines would take over 8 hours.

Also, I'm calculating the Mode on a field and this seems to take much too long. I've run the exact queries on a 2 MM line table and it ran in approx. 5 min. Now I have 6.5 MM lines and it runs for over 8 hours and I've stopped it at that point.

The table is pretty straightforward, no triggers etc. Just a plain table.

Thanks for any thoughts!

Job
AjarnMark
SQL Slashing Gunting Master
3246 Posts
Posted - 2005-08-24 : 17:20:53

Well, updates can certainly take longer than SELECTs because of the effects of logging the changes into the Transaction Log file. However, if the 2 million row table takes 5 minutes but a 6.5 million row table takes over 8 hours, it sure sounds like something else must be going on (assuming that the 2 million row table was a subset of the 6.5 million rows). It sounds like some sort of blocking event that for some reason is triggered with the larger file. If you've got the time and patience, it might be interesting to run tests on 3, 4, and 5 million row tables to see if you can approximate the table size that sets it off.

Also, are any of the fields that are being updated involved in a clustered index which might be causing the records to be moved on disk? And if there is page splitting involved, you could see a real ramp-up in time.

---------------------------
EmeraldCityDomains.com
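A quick way to check AjarnMark's clustered-index theory (sp_helpindex is a stock SQL Server procedure; the table name is from the thread):

```sql
-- Lists each index on the table, its type (clustered or nonclustered),
-- and its key columns. If NF_TRVUper_2003 appears in the clustered index
-- key, the UPDATE can physically relocate rows and cause page splits.
EXEC sp_helpindex 'Tbl_Txn_Line_CHS'
```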
Kristen
Test
22859 Posts
Posted - 2005-08-25 : 00:21:05

Do you have SP4 installed? And if so, are the two sides of your JOIN different numeric precisions?

(I presume that would be Tbl_Master_Blnd_RVU_RestNJ_03.CPT_26_TC and Tbl_Txn_Line_CHS.CPT_26_TC)

Kristen
Job
Yak Posting Veteran
69 Posts
Posted - 2006-01-22 : 01:30:00

I know this is an old thread and I apologize for never following up. However, I have thought about what was posted last and I'm wondering about your comments regarding the log slowing things down. I've left the logging as default, and I think that it has full rollback on all transactions. 1) Is this completely overkill and inefficient, and 2) if so, how do I change it, and what are the recommended log settings?

Cheers,
Job
Kristen
Test
22859 Posts
Posted - 2006-01-22 : 08:32:20

"I think that it has full rollback on all transactions"

Do you need to recover to a moment in time, or will restoring to [say] last night's backup be enough?

Do you need to be able to preserve the work as you go (i.e. move the backups to another machine every, say, 10 minutes so you could quickly restore and get back to where you were), or is last night's tape backup OK?

If last night's backup will do, check that your database is set to RECOVERY MODEL = SIMPLE (it's in the properties [tab] for the database in Enterprise Manager).

See http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=55210&SearchTerms=What+type+of+backup+should+I+use

Kristen
Job
Yak Posting Veteran
69 Posts
Posted - 2006-01-22 : 15:02:32

Hi Kristen,

I'm using the 2005 edition and I believe this setting is in a different location... I looked online and didn't find it readily.

Restoring to last night's backup would probably be fine... so it sounds like the simple recovery model would suffice.
Kristen
Test
22859 Posts
Posted - 2006-01-22 : 15:35:31

Dunno about SQL 2k5, but you can probably do it from SQL using something like:

[CODE]
ALTER DATABASE myDatabase SET RECOVERY SIMPLE
[/CODE]

To see what it's currently set to, use:

[CODE]
sp_helpdb 'myDatabase'
[/CODE]

Kristen