Lumbago
Norsk Yak Master
3271 Posts
Posted - 2008-06-27 : 05:45:50
Ok, we have a customer running a somewhat large CMS (100 GB) on a 64-bit Win2k3 Enterprise server with 4 x quad-core AMD Opteron 2.21 GHz processors, a 1 TB SAN, 32 GB RAM and 64-bit SQL Server 2005 Standard Edition (yes, Standard). The server is not clustered, but it is the source of a custom log-shipping routine and has a few replication publications.

The thing is that just one month ago RAM and processors were doubled, only to find that it didn't impact performance all that much; it basically just bought them some time. The processors currently run at about 70% average, down from about 85% before. Prior to this I ran some tests on the database, and it seems there are two rather nasty procedures running basically all the time, both doing roughly the same thing: a SELECT with a 16-table join where 6 of the tables have more than 2 million rows (one nearly 7 million) and the rest average about 100k rows. The SELECTs themselves are actually very well optimized, but this still looks to me like the source of the problem.

I recommended basically two things (sketches of both are below):

- "home-made" horizontal partitioning (since we're on Standard Edition)
- splitting the primary filegroup into several filegroups and moving tables to different disks (there are actually 9 data files at the moment, but they are all in the primary filegroup on the same disk for some reason)

My question is this: which options do they have here? I have to admit that I have only read about systems like this and never done any hands-on performance work on a system this large. I think my recommendations are correct, but are there any other options available? Quick wins or not doesn't really matter. I'm from a development background, and my "default" recommendation for any performance problem is usually to optimize software before hardware... Any suggestions are welcome.

- Lumbago
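As a rough sketch of the first recommendation: on Standard Edition, "home-made" horizontal partitioning usually means a local partitioned view, i.e. one member table per range with a CHECK constraint on the partitioning column, unioned behind a view. This is only a minimal illustration; the table and column names (Articles, PublishedYear and so on) are invented, not taken from the actual CMS schema.

-- One member table per year; the CHECK constraint on the partitioning
-- column lets the optimizer skip member tables a query cannot touch.
CREATE TABLE dbo.Articles_2007 (
    ArticleID     int           NOT NULL,
    PublishedYear int           NOT NULL CHECK (PublishedYear = 2007),
    Title         nvarchar(200) NOT NULL,
    CONSTRAINT PK_Articles_2007 PRIMARY KEY (ArticleID, PublishedYear)
);

CREATE TABLE dbo.Articles_2008 (
    ArticleID     int           NOT NULL,
    PublishedYear int           NOT NULL CHECK (PublishedYear = 2008),
    Title         nvarchar(200) NOT NULL,
    CONSTRAINT PK_Articles_2008 PRIMARY KEY (ArticleID, PublishedYear)
);
GO

-- The view stands in for the original table; a query with
-- WHERE PublishedYear = 2008 reads only the 2008 member table.
CREATE VIEW dbo.Articles
AS
SELECT ArticleID, PublishedYear, Title FROM dbo.Articles_2007
UNION ALL
SELECT ArticleID, PublishedYear, Title FROM dbo.Articles_2008;
GO

And for the second recommendation, a sketch of adding a filegroup on another disk and moving a table into it by rebuilding its clustered index; the database, filegroup, file path, table and index names are all placeholders:

-- New filegroup with a data file on a separate physical disk.
ALTER DATABASE CMSDB ADD FILEGROUP FG_Content;
ALTER DATABASE CMSDB
    ADD FILE (NAME = CMSDB_Content1,
              FILENAME = 'E:\SQLData\CMSDB_Content1.ndf',
              SIZE = 10GB, FILEGROWTH = 1GB)
    TO FILEGROUP FG_Content;
GO

-- Rebuilding the clustered index ON the new filegroup physically moves
-- the table's data there.
CREATE UNIQUE CLUSTERED INDEX PK_BigTable
    ON dbo.BigTable (BigTableID)
    WITH (DROP_EXISTING = ON)
    ON FG_Content;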
AndrewMurphy
Master Smack Fu Yak Hacker
2916 Posts
Posted - 2008-06-27 : 08:49:02
Maybe analyse the business workload currently in place. Do the 2 SPs really need to execute so fast and so often? Could they be throttled back, i.e. using some sort of crude or sophisticated wait statements to let other sessions get a chance at the server (a sketch of one approach is below)? Could a mini-datamart be created to hive off the high-activity data? Could daily summaries be created to assist the business and reduce the workload?

I agree with optimising the software before the hardware... but I'd also challenge the basic premise: does this stuff really need to run this often, this fast? The cheapest work is the work you don't do in the first place; then look at improving the software, and only then seek out and install better hardware.
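A minimal sketch of the throttling idea, assuming the heavy work can be chopped into batches; the table and column names (PageHits, Processed) are invented for illustration:

-- Work through the backlog in small batches, pausing between each so
-- other sessions get a chance at the CPU.
DECLARE @rows int;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    UPDATE TOP (5000) dbo.PageHits
    SET    Processed = 1
    WHERE  Processed = 0;

    SET @rows = @@ROWCOUNT;    -- rows touched by this batch

    WAITFOR DELAY '00:00:02';  -- two-second breather between batches
END;

And a hypothetical daily-summary refresh along the lines suggested above, run once a night from a SQL Agent job so reports read a handful of pre-aggregated rows instead of re-joining the multi-million-row tables (again, all names are placeholders):

-- Aggregate yesterday's detail rows into a small summary table.
-- The DATEADD/DATEDIFF pair strips the time portion (SQL Server 2005
-- has no date type).
INSERT INTO dbo.DailyHitSummary (HitDate, PageID, HitCount)
SELECT DATEADD(day, DATEDIFF(day, 0, HitTime), 0),
       PageID,
       COUNT(*)
FROM   dbo.PageHits
WHERE  HitTime >= DATEADD(day, DATEDIFF(day, 0, GETDATE()) - 1, 0)
  AND  HitTime <  DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)
GROUP BY DATEADD(day, DATEDIFF(day, 0, HitTime), 0), PageID;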