
 All Forums
 SQL Server 2000 Forums
 SQL Server Development (2000)
 SQL Performance Monitoring


tfountain
Constraint Violating Yak Guru

491 Posts

Posted - 2003-10-16 : 18:21:03
My question is this: once I identify a slow-running procedure, how do I identify why it is running slow? What I mean is, I've broken the procedure out, reviewed the query plans, etc., and they are using indexes optimally. From this point, I'm guessing I'm hitting a lot of page splits (it also degrades in performance over time). How do I verify what is happening internally when a procedure runs so I can prove or disprove this theory?

Thanks in advance.

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2003-10-16 : 18:26:20
Well, you could check the fragmentation level by running DBCC SHOWCONTIG. You can fix fragmentation by running DBCC DBREINDEX or DBCC INDEXDEFRAG. We run DBCC DBREINDEX every day. We also run DBCC INDEXDEFRAG during the day as needed.
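For example, a sketch of all three commands against a hypothetical Orders table and index (substitute your own table and index names):

```sql
-- Check fragmentation for one table; look at Scan Density and
-- Logical Scan Fragmentation in the output
DBCC SHOWCONTIG ('Orders') WITH TABLERESULTS, ALL_INDEXES

-- Offline rebuild of all indexes on the table with a 90% fill factor
-- ('' = all indexes)
DBCC DBREINDEX ('Orders', '', 90)

-- Lighter-weight alternative: defragment the leaf level of one index
-- without taking long-term locks (0 = current database)
DBCC INDEXDEFRAG (0, 'Orders', 'PK_Orders')
```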

If you monitor page splits, keep in mind that the counter is a cumulative, server-wide value and not broken down by table.
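On SQL Server 2000 you can read the same counter Performance Monitor exposes directly from master.dbo.sysperfinfo; sampling it before and after a batch run gives the number of splits that occurred in between (the raw value is a running total since the server started):

```sql
-- Server-wide running total of page splits since SQL Server started
SELECT cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name = 'Page Splits/sec'
  AND object_name LIKE '%Access Methods%'
```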

Tara

tfountain
Constraint Violating Yak Guru

491 Posts

Posted - 2003-10-16 : 19:13:47
Thanks for the reply, but that won't help my situation. I need to identify what is happening during the execution of a given procedure. We already have jobs set up that do what you describe; what I need is the ability to monitor WHEN it happens. This information will then be used to reevaluate some of our indexing strategies on some of our larger tables that are used heavily for multiple purposes (heavy updating, simple transactional querying, and minor report querying).

To elaborate, we have a theory to assist in reducing how often certain tables become fragmented during heavy batch processing updates and inserts. One thought is to basically use an IDENTITY column and make that the clustered index. That way as data is inserted, it's always stored at the end of the table. The other thought is to lower the fill factor. However, that can hinder the performance of some of our transactional processes. So, before I embark on making changes for either one of these solutions, I'd like to determine if certain processes are actually causing the fragmentation.
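Either approach is straightforward to express; a sketch assuming a hypothetical BatchDetail table (names are illustrative):

```sql
-- Option 1: cluster on a monotonically increasing IDENTITY so new rows
-- always land at the end of the table instead of splitting mid-table pages
ALTER TABLE BatchDetail ADD BatchDetailID int IDENTITY(1,1) NOT NULL
CREATE CLUSTERED INDEX CIX_BatchDetail ON BatchDetail (BatchDetailID)

-- Option 2: rebuild the existing clustered index with a lower fill factor,
-- leaving free space on each page to absorb inserts/updates between rebuilds
DBCC DBREINDEX ('BatchDetail', 'CIX_BatchDetail', 80)
```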

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2003-10-16 : 19:17:37
SQL Profiler would be able to tell you what is occurring in SQL Server at the time (Performance Monitor would help you out for the rest of the server), but it has to be running in order to capture anything. If you decide to do this, run it on a client machine rather than on the server, because SQL Profiler will impact performance; the impact is minimal when it is run from a client machine.
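If even the Profiler GUI is too heavy, the same capture can be scripted as a server-side trace with the sp_trace_* procedures (SQL Server 2000). A minimal sketch that writes SQL:BatchCompleted events with duration and I/O columns to a trace file (the file path is hypothetical):

```sql
DECLARE @TraceID int, @maxfilesize bigint, @on bit
SET @maxfilesize = 50   -- rollover size in MB
SET @on = 1

-- Create the trace definition; output goes to C:\Trace\slowproc.trc
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Trace\slowproc', @maxfilesize, NULL

-- Event 12 = SQL:BatchCompleted
-- Columns: 1 = TextData, 13 = Duration, 16 = Reads, 17 = Writes, 18 = CPU
EXEC sp_trace_setevent @TraceID, 12, 1, @on
EXEC sp_trace_setevent @TraceID, 12, 13, @on
EXEC sp_trace_setevent @TraceID, 12, 16, @on
EXEC sp_trace_setevent @TraceID, 12, 17, @on
EXEC sp_trace_setevent @TraceID, 12, 18, @on

-- Start the trace (1 = start, 0 = stop, 2 = close and delete the definition)
EXEC sp_trace_setstatus @TraceID, 1
```

The resulting .trc file can be opened in Profiler afterward, or queried with fn_trace_gettable.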

Have you run the Index Tuning Wizard to see what SQL Server recommends?

Tara

nr
SQLTeam MVY

12543 Posts

Posted - 2003-10-17 : 00:58:12
You could also go without a clustered index (a heap). That will stop data page fragmentation caused by the clustered index.
Also try not to update fields to a longer length unnecessarily. Pad them up front and don't start off with nulls, e.g. don't insert records with null values and then update them from another table.
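A sketch of the pad-up-front idea (table and column names are hypothetical); the point is that the row is born at its final width instead of growing on UPDATE, which is what forces page splits:

```sql
-- Bad: row is inserted short, then grows on UPDATE and may split its page
INSERT INTO Customers (CustomerID, Notes) VALUES (1, NULL)
UPDATE c SET c.Notes = s.Notes
FROM Customers c JOIN Staging s ON s.CustomerID = c.CustomerID

-- Better: insert the final value (or a fixed-width placeholder) up front,
-- so later updates don't lengthen the row
INSERT INTO Customers (CustomerID, Notes)
SELECT s.CustomerID, ISNULL(s.Notes, REPLICATE(' ', 200))
FROM Staging s
```

(Trailing-blank padding like this assumes ANSI_PADDING is on; a fixed-width char column achieves the same effect without the placeholder.)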

==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.

tfountain
Constraint Violating Yak Guru

491 Posts

Posted - 2003-10-17 : 08:28:39
quote:
Originally posted by nr

You could also not have a clustered index (a heap). That will stop data page fragmentation due to the index.


Do you think that's advisable with a large table (several million rows)? I can't count the number of times I've tuned procedures to use the clustered index for selects, etc., to improve their performance.
   
