aiken
Aged Yak Warrior
525 Posts |
Posted - 2006-03-08 : 14:23:59
Hi, everyone. Still working on some major app redesign stuff. One problem we're experiencing now is blocking, along with long-running queries that can lead to timeouts. All stuff to work on, of course.

But I've found myself using table vars in some cases where a long-running query is joined against a busy but small table. For instance, we have a users_online table that (obviously enough) indicates which users are online. It's only a couple of thousand rows at most. It's a very busy table, since people are always logging on and off, and it updates their most recent pageview on top of that.

What I've found myself doing is setting up a table variable and doing one big "insert into @t (i_users, last_pageview) select i_users, last_pageview from users_online", and then joining against @t in the big queries. It seems to have reduced locking and blocking and improved performance.

Is that weird to do? Am I heading down some evil road if I move more of the queries to do that?

Thanks
-b
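The snapshot pattern described above can be sketched like this. The column types and the second table in the join are assumptions made up for illustration; only @t, i_users, last_pageview, and users_online come from the post.

```sql
-- Sketch of the table-variable snapshot pattern (column types are guesses).
DECLARE @t TABLE (
    i_users       int PRIMARY KEY,
    last_pageview datetime
);

-- One quick read of the hot table, holding locks only briefly...
INSERT INTO @t (i_users, last_pageview)
SELECT i_users, last_pageview
FROM users_online;

-- ...then the long-running query joins the private copy instead of the
-- live table (users/username are hypothetical stand-ins for the real
-- 6-to-8-table joins mentioned later in the thread).
SELECT u.username, t.last_pageview
FROM @t AS t
JOIN users AS u ON u.i_users = t.i_users;
```

The trade-off is that @t is a point-in-time copy: users who log on or off after the INSERT won't be reflected in the big query.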
spirit1
Cybernetic Yak Master
11752 Posts |
Posted - 2006-03-08 : 14:26:58
Depends on the queries. What I find intriguing is that you can sustain your table variable... how can you do everything in one batch?

Go with the flow & have fun! Else fight the flow
MichaelP
Jedi Yak
2489 Posts |
Posted - 2006-03-08 : 14:33:41
Maybe instead of a table variable (which I've heard still goes to tempdb to be stored), a SELECT with NOLOCK on the real table may work better for ya?

Michael

<Yoda>Use the Search page you must. Find the answer you will. Cursors, path to the Dark Side they are. Avoid them, you must. Use Order By NewID() to get a random record you will.</Yoda>
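The NOLOCK suggestion amounts to reading the hot table with the READUNCOMMITTED hint, roughly like this (a sketch, not code from the thread):

```sql
-- Dirty read of the busy table: takes no shared locks, so it won't block
-- or be blocked, but it can see uncommitted rows, miss rows, or fail
-- mid-scan if pages move underneath it.
SELECT i_users, last_pageview
FROM users_online WITH (NOLOCK);
```

As the next reply notes, on a very volatile table this can abort with the "data movement" error, so it isn't a free lunch.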
aiken
Aged Yak Warrior
525 Posts |
Posted - 2006-03-08 : 16:04:58
The table variable is instantiated at the beginning of a stored procedure; some of these are 600-line SPs that run a few queries joining 6 or 8 tables. Populating the table var once *seems* to have helped things out.

Using NOLOCK was no good because of the dreaded "could not continue scan with NOLOCK due to data movement" error; basically, the users_online table is too dynamic to get joined with NOLOCK.

The table variable definitely gets stored in tempdb, so it's more about isolating the data than performance. Though we do plan to move tempdb to a ramdisk in the near future, so there may be performance benefits as well down the road.

-b
MichaelP
Jedi Yak
2489 Posts |
Posted - 2006-03-08 : 16:08:14
In that case, your temp table / table variable solution seems like the way to go. A solid state disk for tempdb would be nice, but keep in mind they are REALLY expensive. Last time I looked into it, they were up in the six digits for just two relatively small fibre channel disks.

Michael
jsmith8858
Dr. Cross Join
7423 Posts |
Posted - 2006-03-08 : 16:33:06
Are you sure that these stored procs are well written? 600 lines is pretty big. Are there any cursors, manual looping, lookups being done with functions, or inefficient WHERE clauses (e.g., nested WHERE ... IN (SELECT ... WHERE ... IN (...)) constructions), or other things that could be done more efficiently?
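The nested-IN inefficiency being hinted at can often be rewritten as plain joins. A sketch with made-up table names (orders, customers, regions), assuming the join keys are unique so the two forms return the same rows:

```sql
-- Nested-IN form (harder for the optimizer, especially in older SQL Server):
SELECT o.order_id
FROM orders AS o
WHERE o.cust_id IN (SELECT c.id
                    FROM customers AS c
                    WHERE c.region_id IN (SELECT r.id
                                          FROM regions AS r
                                          WHERE r.name = 'West'));

-- Equivalent flattened join form:
SELECT o.order_id
FROM orders   AS o
JOIN customers AS c ON c.id = o.cust_id
JOIN regions   AS r ON r.id = c.region_id
WHERE r.name = 'West';
```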
aiken
Aged Yak Warrior
525 Posts |
Posted - 2006-03-08 : 16:33:46
Our plan is to add another 4GB of RAM to the server itself, and use 1GB of that for a local ramdisk for tempdb. http://www.superspeed.com is the ramdisk vendor, and it seems to play well with SQL Server. Since tempdb is recreated at reboot, it doesn't matter if the data is lost in a server crash or something.

We're also adding SSD for some of the frequently used tables; they've come down in price quite a bit. I think we're looking at low to mid five figures for 16GB, with two 4GB fibre channel interfaces. But that's another story (one that I'll probably have a million questions about later).

Thanks for validating the table variable approach; it seemed reasonable, but I've done plenty of things that seemed reasonable at the time and were pretty horrid in hindsight.

Cheers
-b
aiken
Aged Yak Warrior
525 Posts |
Posted - 2006-03-08 : 16:37:42
quote: Are you sure that these stored procs are well written? 600 lines is pretty big. Are there any cursors, manual looping, lookups being done with functions, or inefficient WHERE clauses, or other things that could be done more efficiently?

I think they're pretty well written; there's always room for improvement, of course. No cursors, no where/in/where/in stuff. There are a bunch of "where ((@iSearchThis is null) or (joinedTable.column in (select validValues from @tSearchValues)))" type constructions, but that actually keeps the size down (since we don't have dynamic joins, and I really don't want to manage one copy of the logic for each permutation of search parameters).

Cheers
-b
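The optional-filter construction quoted above can be sketched like this. Everything except @iSearchThis, @tSearchValues, and validValues is a hypothetical name; the point is one query shape serving every permutation of search parameters:

```sql
-- @iSearchThis and @tSearchValues are populated earlier in the proc;
-- orders/customers and their columns are invented for this sketch.
DECLARE @tSearchValues TABLE (validValues int);

SELECT o.order_id
FROM orders    AS o
JOIN customers AS c ON c.id = o.cust_id
WHERE ( @iSearchThis IS NULL
        OR c.region_id IN (SELECT validValues FROM @tSearchValues) );
```

The known cost of this "catch-all" style is that the optimizer compiles one plan for all parameter combinations, which can be noticeably worse than a plan built for any single combination.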
MichaelP
Jedi Yak
2489 Posts |
Posted - 2006-03-08 : 17:57:30
Remember that if you have a cluster, the RAM disk must not be on the individual servers; it has to be on the shared disk array someplace (i.e., a fibre channel disk in your SAN or something like that). I've never done anything with SSD due to the costs, but just from an architecture point of view, for clustering, the data must be shared.

Michael
mmarovic
Aged Yak Warrior
518 Posts |
Posted - 2006-03-09 : 06:56:11
quote: Originally posted by aiken: No cursors, no where/in/where/in stuff. There are a bunch of "where ((@iSearchThis is null) or (joinedTable.column in (select validValues from @tSearchValues)))" type constructions, but that actually keeps the size down (since we don't have dynamic joins, and I really don't want to manage one copy of the logic for each permutation of search parameters).

At a previous company I worked for, they used their own templates and a template runner to produce a stored procedure for each branch, plus the client code calling all those stored procedures. They (we) achieved great performance.
aiken
Aged Yak Warrior
525 Posts |
Posted - 2006-03-09 : 16:35:30
Yeah, mmarovic, I'd love to do something like that. And I still might, at some point, but that's a pretty big project for a pretty small shop like us.

MichaelP, I hear ya on clustering. The SSD we're picking up can be used for clustering, but I don't see it happening in the near future. For our business, money is better spent on blazing performance and 99.9% uptime than good performance and 99.999% uptime.

Cheers
-b
Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts |
Posted - 2006-03-09 : 19:10:44
Have you really verified that tempdb is the performance bottleneck? You can spend a lot of money on that for very little gain.

I was looking at their web site, and I have to completely disagree with this statement: "Microsoft SQL Server performance can be limited by network, processor or memory resources, but the most common bottleneck is disk input/output."

In my experience, the most common bottleneck by far is poorly performing queries. Until I had verified that there was nothing to be gained by optimizing stored procedures or adding indexes, I would not even look at hardware.

In most applications, a small number of procedures use most of the system resources, and that is fairly easy to identify with Profiler. It would be fairly typical for 3 or 4 procedures to account for 90% of your CPU usage while the system is busy. A little effort spent carefully optimizing a small number of procedures can make a huge improvement. And it may not be the procedures you suspect: if a procedure runs fast but is called 20 times per second, it can be your largest consumer of CPU.

You can also analyze the output from Profiler with the Index Tuning Wizard to see if additional indexes can improve performance. I usually capture the workload from a busy period into a table on several different days and analyze the output of each. If it suggests creating the same index each time, I create that index and repeat the process until it looks like there is nothing more to gain.

After that, additions to server memory would be the next thing I would look at. If your SQL Pages Read/sec counter is high, you can probably use more memory. It's always faster to have it in memory rather than on disk, even a RAM disk. You can never be too rich or have too much memory.

If you still suspect an I/O bottleneck, make sure you know where the bottleneck is. It could be on the database data files, it could be on the transaction log files, and maybe it's tempdb. Just make sure you are spending money where the problem is, not where you guess it might be. You don't want to have to explain why the server is just as slow as ever after you spent a lot of money to upgrade it.

CODO ERGO SUM
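Once a Profiler trace from a busy period has been saved to a table, finding the handful of top CPU consumers is a simple aggregate. A sketch, where TraceWorkload is a made-up name for the saved trace table (the column names are the standard trace columns):

```sql
-- Rank captured statements by total CPU. Grouping on a varchar prefix of
-- TextData is a rough-and-ready way to bucket similar calls together.
SELECT TOP 10
    CAST(TextData AS varchar(200)) AS statement_text,
    COUNT(*)                       AS calls,
    SUM(CPU)                       AS total_cpu_ms,
    SUM(Reads)                     AS total_reads
FROM TraceWorkload
WHERE EventClass IN (10, 12)       -- RPC:Completed, SQL:BatchCompleted
GROUP BY CAST(TextData AS varchar(200))
ORDER BY total_cpu_ms DESC;
```

This also surfaces the "fast but called 20 times per second" case described above, since it sums CPU across calls rather than looking at single-execution duration.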
MichaelP
Jedi Yak
2489 Posts |
Posted - 2006-03-09 : 20:22:35
Well said, Mr. Jones! I agree with what you have to say there.

Michael