lazerath
Constraint Violating Yak Guru
343 Posts
Posted - 2011-11-03 : 11:17:49
Hello SQL Team! I have a scenario that I'd like to get advice on. I have a query running against a large, wide table (220 million rows) with many indexes (15+) defined on it. (This is not my design, and I am not at liberty to change it in the near future.)

The specific query in question uses TOP 300 and returns sub-second. Given the same query but with TOP 50, the optimizer chooses a different execution plan that takes upwards of 20 minutes to complete. The first plan uses a seek on a covering index with a subtree cost of ~70, but when TOP 50 is applied, the execution plan uses a seek on a single-column index and then seeks against the clustered index (essentially a bookmark lookup operation). The subtree cost for this plan is ~54, which means the optimizer somehow thinks this operation is faster/better.

I'm looking to get advice on what to do from here. Statistics are on auto-update, and there is also a daily job to update them. Index maintenance jobs run every day. The statistics' Date_Updated shows they are being kept up to date, and the actual statistics data for the single-column index looks accurate.

I could use a query hint to specify an index, but experience tells me that's a band-aid approach, and it limits the optimizer's ability to choose a better index if we do some reorganization later. Any feedback is appreciated.
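For illustration, a minimal sketch of the query shape being described, plus the index-hint band-aid mentioned at the end of the post. Every table, column, and index name here is invented, since the actual schema wasn't posted:

    -- Hypothetical shape of the problem query (all names invented).
    -- With TOP (300) the optimizer seeks the covering index; with TOP (50)
    -- it switches to a single-column index seek plus clustered index seeks
    -- (a bookmark-lookup pattern) that runs far slower in practice.
    SELECT TOP (50) OrderID, CustomerID, OrderDate, Amount
    FROM dbo.BigTable
    WHERE CustomerID = 42
    ORDER BY OrderDate DESC;

    -- The band-aid the poster is wary of: pinning the covering index with
    -- a table hint. It forces the plan today, but also stops the optimizer
    -- from picking a better index after any future reorganization.
    SELECT TOP (50) OrderID, CustomerID, OrderDate, Amount
    FROM dbo.BigTable WITH (INDEX (IX_BigTable_Covering))
    WHERE CustomerID = 42
    ORDER BY OrderDate DESC;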
lazerath
Constraint Violating Yak Guru
343 Posts
Posted - 2011-11-03 : 11:19:30
I forgot to mention this is on SQL Server 2005 Enterprise with SP4.
GilaMonster
Master Smack Fu Yak Hacker
4507 Posts
Posted - 2011-11-03 : 12:31:53
Would need to see the execution plan (and preferably the query, table design and index design) to say anything useful here.

--
Gail Shaw
SQL Server MVP
lazerath
Constraint Violating Yak Guru
343 Posts
Posted - 2011-11-03 : 15:56:38
Dang. I'm at a new organization and I'm not sure I'm allowed to provide details on the system. Fundamentally, though, I'm asking a pretty general question here: what would cause the optimizer to generate a subtree cost that is so far out of whack with reality? Secondary to that: what can I do to "help" the optimizer choose either a better plan or recognize the true cost? All I have to go on right now is statistics and fragmentation; got anything else?
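One way to sanity-check the statistics angle raised above: STATS_DATE shows when each statistics object was last updated, and DBCC SHOW_STATISTICS exposes the histogram itself. Both are available on SQL Server 2005; the object and index names are placeholders:

    -- When was each statistics object on the table last updated?
    SELECT s.name AS stats_name,
           STATS_DATE(s.object_id, s.stats_id) AS last_updated
    FROM sys.stats AS s
    WHERE s.object_id = OBJECT_ID('dbo.BigTable');

    -- Inspect the histogram behind the single-column index the bad plan uses.
    DBCC SHOW_STATISTICS ('dbo.BigTable', 'IX_BigTable_SingleCol');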
GilaMonster
Master Smack Fu Yak Hacker
4507 Posts
Posted - 2011-11-03 : 18:36:33
Incorrect cardinality estimations will almost certainly be the root cause. What causes that is hard to say. It can be bad stats, non-SARGable predicates, bad parameter sniffing, variable usage, redundant predicates, certain query forms, and probably a few more I can't recall offhand.

Fragmentation is not a possible cause; the optimiser doesn't care about fragmentation.

p.s. To give you an idea, I recently did a 90-minute presentation on this kind of problem and barely scratched the surface.

--
Gail Shaw
SQL Server MVP
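To make two of the causes above concrete, a sketch (again with invented names) of a non-SARGable predicate versus its SARGable rewrite, and of local-variable usage; both commonly skew cardinality estimates:

    -- Non-SARGable: wrapping the column in a function hides it from the
    -- statistics histogram, so the optimizer falls back to a guess.
    SELECT TOP (50) OrderID
    FROM dbo.BigTable
    WHERE YEAR(OrderDate) = 2011;

    -- SARGable rewrite: a range on the bare column can use the histogram.
    SELECT TOP (50) OrderID
    FROM dbo.BigTable
    WHERE OrderDate >= '20110101' AND OrderDate < '20120101';

    -- Variable usage: a local variable's value isn't known at compile time,
    -- so the estimate comes from average density, not the histogram.
    -- (Separate SET because SQL Server 2005 lacks DECLARE-with-initializer.)
    DECLARE @cust INT;
    SET @cust = 42;
    SELECT TOP (50) OrderID
    FROM dbo.BigTable
    WHERE CustomerID = @cust;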
lazerath
Constraint Violating Yak Guru
343 Posts
Posted - 2011-11-04 : 10:30:42
Thanks for the feedback, Gail.

My thought process on fragmentation was that the optimizer doesn't take it into consideration, so if the index is highly fragmented, the actual seek could take much longer than the optimizer estimated.
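Whether fragmentation is even present can be measured rather than assumed. A sketch against the same hypothetical table; LIMITED mode keeps the scan cheap on a table this size:

    -- Fragmentation per index (SQL Server 2005+; names are placeholders).
    SELECT i.name AS index_name,
           ps.avg_fragmentation_in_percent,
           ps.page_count
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID('dbo.BigTable'), NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i
      ON i.object_id = ps.object_id
     AND i.index_id = ps.index_id;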
GilaMonster
Master Smack Fu Yak Hacker
4507 Posts
Posted - 2011-11-04 : 12:36:28
Fragmentation affects large range scans from disk only. So if the operation is small seeks, or the data is in cache, fragmentation will have no effect at all.

btw, you did check and ensure that there was no blocking skewing the times?

--
Gail Shaw
SQL Server MVP
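A quick way to check the blocking question while the slow query is running, using a DMV available on SQL Server 2005:

    -- Sessions currently blocked by another session.
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time
    FROM sys.dm_exec_requests AS r
    WHERE r.blocking_session_id <> 0;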