redbrad0
Posting Yak Master, 176 Posts
Posted - 2005-11-27 : 11:48:24
We have a Quad 3.66 GHz (hyper-threaded) server with 16 GB of RAM (Win2k3 Enterprise). There are only two databases, one 20.2 GB and the other 14.8 GB. I ran a trace for one hour and extrapolated from it to estimate how many queries a day we are processing; it came out to around 5.5 million a day. I know this is a good amount, but it seems that with a server this big we should be screaming through the records. The CPU stays between 30-50%, with bursts to 80-100%. I know the database could be better optimized by an expert DBA; I have done the best job that I can.

I know there are a lot of things that can affect this, but what would your guess be on the number of queries we should be able to handle without getting SQL timeouts? We are getting ready to double the query load on the database. Does anyone have any stats on their servers and how many queries they run in a day?

Would it be best to keep most of the queries as inline queries, or would SPs be better for all of them? I read that SPs are sometimes faster, but the SQL Server machine then handles the queries and data processing to return the results, so my thought was to use inline SQL so the other servers/computers do some of the data processing.

Sorry for 20,000 questions.

Thanks,
Brad Wickwire
Quality NT Web Hosting & Design
Kristen
Test, 22859 Posts
Posted - 2005-11-27 : 12:35:04
"Would it be best to keep most of the queries as inline queries or would SPs be better for all the queries?"

Usually sprocs will be faster, because their query plans are cached.

If you use parameterized queries from your application (e.g. using sp_ExecuteSQL, and ensuring that the "query" really IS built so that it IS parameterized and thus gets cached!!) then that is probably as good, performance-wise, as sprocs. The downside is that the application has to have permissions on the TABLES rather than just on the SPROCS, so security is compromised.

"30 - 50% with bursts to 80-100%"

So the server isn't running flat out, and optimizing isn't required just to prevent the server maxing out...

We have one client with a similar quad server with no fancy RAM / no Enterprise stuff, so I guess it's just using 2 GB of RAM. The database is around 8 GB. We do 6.5 million sproc calls per day. That includes sprocs called by other sprocs, but excludes two further sproc calls PER sproc to log the start and end of each one (the first of which may make a further sproc call to convert Name -> ID), so you could say that we do nearer to 20 million sproc calls per day.

CPU runs at about 20%. Connections normally peak at around 200, but mostly run at around 50. We have put a lot of time into optimizing things, and the application only communicates with the server through sprocs.

The machine in question has lots of disk drives on 3 separate channels.

Kristen
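A minimal sketch of the parameterized approach Kristen describes; the table and column names here are made up for illustration. Because the statement text is identical on every call, SQL Server can cache and reuse a single plan:

```sql
-- Parameterized ad-hoc query via sp_executesql: the literal 123 is passed
-- as a typed parameter rather than embedded in the statement text, so the
-- cached plan is reused for any customer id. Names are hypothetical.
DECLARE @CustomerID int;
SET @CustomerID = 123;

EXEC sp_executesql
    N'SELECT a, b, c FROM dbo.Table1 WHERE CustomerID = @CustID',
    N'@CustID int',
    @CustID = @CustomerID;
```

Contrast that with building the string as `'... WHERE CustomerID = ' + CAST(@CustomerID AS varchar)`, which produces a different statement text per value and defeats the plan cache.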
druer
Constraint Violating Yak Guru, 314 Posts
Posted - 2005-11-28 : 14:48:56
1. I assume you have enabled AWE on your SQL Server so that it will actually use the 16 GB of memory you have; otherwise most of it will just be sitting around.

2. Indexes are always the first thing I check when trying to improve performance. Run Profiler again and look only for queries/stored procs that take longer than X milliseconds of duration or more than X milliseconds of CPU. Then run those queries/stored procs in Query Analyzer with Show Execution Plan and see what you can do to improve them. Optionally, you can install a product called Speed Coefficient from Quest, which will run a profile and then do the magic of showing you the worst performers, summarizing the percentage of time they took by CPU or duration, so you get the executive summary without doing any work of your own. Then take the worst ones and look at their execution plans to see if you can improve them. If you run the Speed Coefficient tool for a FULL detail report, it will actually show you a list of queries that required a TABLE SCAN without you having to dig that out of Show Execution Plan; that's just slick.

3. We have a quad-CPU machine with only 8 GB of memory running a 50 GB database. Prior to tuning the indexes (installed by the third-party application developer), our server was hitting the same kind of performance bottlenecks as yours. After tuning the indexes we run about 5% normally and peak at 14% CPU utilization. The only bottlenecks in the system now are those that I, as the DBA, can't avoid, because they are designed into the system (i.e. audits the developers put in that cause all queries, no matter which part of the application they touch, to update/insert into a few tables), which ends up causing blocking.

4. As Kristen suggests, often the biggest culprit in applications is that the application sends a query like "select a, b, c from table1 where field = 123", which the system then has to check against the cache and, if it isn't there, compile an execution plan for before actually running it. Converting that same thing to a stored procedure, or executing it as she suggested with sp_ExecuteSQL where the field = 123 is actually a parameter, allows the system to say "woo-hoo, I have that cached and I already have an optimized execution plan for it; I'll just run it with the new parameter."

5. Related to indexes is the maintenance of the indexes themselves. We run a defragment on any indexes that are over 30% fragmented each night, and weekly do a complete reindex of all of the indexes.
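Enabling AWE (point 1 above) is a server-level setting. A sketch for the SQL Server 2000/2005 era, assuming the service account has already been granted the Lock Pages in Memory privilege; the memory cap shown is illustrative, not a recommendation:

```sql
-- Enable AWE so a 32-bit SQL Server can address RAM above 4 GB.
-- 'max server memory' caps the buffer pool; 14336 MB here is an
-- illustrative value leaving headroom for the OS on a 16 GB box.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
EXEC sp_configure 'max server memory', 14336;
RECONFIGURE;
```

A restart of the SQL Server service is required before 'awe enabled' takes effect.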
Kristen
Test, 22859 Posts
Posted - 2005-11-30 : 08:58:57
"and weekly do a complete reindex of all of the indexes"

Memory may be failing me, but I think I read that this was a bad idea, specifically on the indexes you are already hitting with your DEFRAG. Something to do with having to move the WHOLE index to a new part of the MDF file / physical disk, and the risk of fragmentation of the file / disk / mousemat! Or somesuch.

I would suggest you just stick with "any indexes that are over 30% fragmented".

We additionally use REINDEX rather than DEFRAG on small tables (based on a threshold setting), and do NOT do anything to very VERY small tables (they seem to look fragmented every day until they have at least a handful of rows).

What's your check for "30% fragmented"? (I'm embarrassed to say that I think our check for that is wrong, but I ain't saying what it is!!)

By the by, she -> he over this side of the pond (assuming you are the other side!)

Kristen
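One era-appropriate way to measure the "30% fragmented" figure being discussed is DBCC SHOWCONTIG (on SQL Server 2005, sys.dm_db_index_physical_stats is the newer alternative). A sketch; the table name is hypothetical:

```sql
-- Report fragmentation for every index on one table. In the TABLERESULTS
-- output, the LogicalFragmentation column is the figure usually compared
-- against a threshold such as 30%. Table name is made up for illustration.
DBCC SHOWCONTIG ('dbo.Table1') WITH ALL_INDEXES, TABLERESULTS;

-- For an index over the threshold, defragment in place (lighter-weight,
-- keeps the index online)...
DBCC INDEXDEFRAG (0, 'dbo.Table1', 1);   -- 0 = current database, index id 1

-- ...or rebuild completely (the weekly "complete reindex" discussed above):
DBCC DBREINDEX ('dbo.Table1');
```

The trade-off the thread is circling: INDEXDEFRAG shuffles pages within the existing allocation, while DBREINDEX rebuilds the whole index in newly allocated space, which is what can shift it to a different part of the MDF file.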