Author |
Topic |
tfountain
Constraint Violating Yak Guru
491 Posts |
Posted - 2004-02-27 : 14:02:13
|
I've been searching high and low, and I realize that there is no way to determine directly when a stored procedure was last executed. What I'd like to know is whether any of the gurus here have suggestions for pulling this information out of SQL Server. I'm using SQL Server 2000 Enterprise SP3. I've toyed with the idea of creating a job that periodically executes DBCC MEMUSAGE and strips out any object ids that are procedures, but I'm not sure if that will work. |
|
X002548
Not Just a Number
15586 Posts |
Posted - 2004-02-27 : 14:42:47
|
Either modify the sprocs and make them all call a standard logging sproc, or turn on Profiler to capture the start and end of the sprocs...That's my guess...Anyone else?

Brett
8-) |
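A minimal version of the logging-sproc idea might look like this. This is only a sketch; the table and procedure names (`ProcExecLog`, `LogProcExec`) are placeholders, not anything from an actual system:

```sql
-- Hypothetical logging table and sproc (all names are assumptions)
CREATE TABLE dbo.ProcExecLog (
    ProcName   sysname  NOT NULL,
    ExecutedAt datetime NOT NULL DEFAULT GETDATE(),
    ExecutedBy sysname  NOT NULL DEFAULT SUSER_SNAME()
)
GO

CREATE PROCEDURE dbo.LogProcExec
    @ProcName sysname
AS
SET NOCOUNT ON
INSERT dbo.ProcExecLog (ProcName) VALUES (@ProcName)
GO
```

Each existing sproc would then open with something like the following (SQL 2000 won't accept an expression as an EXEC parameter, hence the intermediate variable):

```sql
DECLARE @me sysname
SET @me = OBJECT_NAME(@@PROCID)   -- name of the currently executing proc
EXEC dbo.LogProcExec @me
```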
 |
|
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2004-02-27 : 15:08:52
|
SQL Profiler is the tool to use if the stored procedures don't already log this information to a custom user table.

Tara |
 |
|
tfountain
Constraint Violating Yak Guru
491 Posts |
Posted - 2004-02-27 : 15:24:25
|
I was afraid of those two options. SQL Profiler really isn't an option against our production boxes, IMO, unless there were a way to tell it to only capture procedures that haven't been captured before. And as far as a standard logging proc, I don't like that idea just because of the number of procedures I'd have to modify :(. Although if there are no other options, that's the way I'd have to go.

Although you'd think there would be some way to interrogate the procedure cache to see what's in it, assuming a procedure is only in the cache if it's actually been executed; hence the thought about DBCC MEMUSAGE. |
 |
|
AjarnMark
SQL Slashing Gunting Master
3246 Posts |
Posted - 2004-02-27 : 16:10:16
|
Okay, I give up. WHY do you care when a sproc was last executed? I'm really curious about this because I can't think of a time when I cared, other than scheduled jobs, and those have histories you can view.

--------------------------------------------------------------
Find more words of wisdom at [url]http://weblogs.sqlteam.com/markc[/url] |
 |
|
tfountain
Constraint Violating Yak Guru
491 Posts |
Posted - 2004-02-27 : 16:16:49
|
Why isn't relevant in this case; just trust me when I say I need to. Otherwise I'd have to kill you after I told you :).

Anyway, I think I found a way that is less tedious than rolling our own solution into each procedure. The syscacheobjects table in the master database seems to hold the information I need. Of course, this assumes that a procedure never makes it into the procedure cache if it's never executed, and that if a procedure is executed it will always make it into the cache. Being removed from the cache is not a concern in this case. Is there anyone out here who can verify these assumptions? |
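For what it's worth, a first look along those lines might be something like this (a sketch based on the SQL 2000 Books Online description of syscacheobjects; verify the column semantics on your build):

```sql
-- Which procedure/view/trigger plans are currently in the cache, per database
SELECT  DB_NAME(dbid) AS dbname,
        objid,                 -- object id within that database
        objtype,               -- 'Proc', 'View', 'Trigger', ...
        cacheobjtype,          -- 'Compiled Plan', 'Executable Plan', ...
        usecounts
FROM    master..syscacheobjects
WHERE   objtype IN ('Proc', 'View', 'Trigger')
  AND   dbid > 4               -- skip the system databases
ORDER BY dbname, objtype
```

One wrinkle: OBJECT_NAME() only resolves ids in the current database, so turning objid into a name means joining to sysobjects in each database in turn.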
 |
|
MuadDBA
628 Posts |
Posted - 2004-02-27 : 17:12:16
|
quote: Originally posted by tfountain I was afraid of those two options. SQL Profiler really isn't an option against our production boxes, IMO, unless there were a way to tell it to only capture procedures that haven't been captured before. And as far as a standard logging proc, I don't like that idea just because of the number of procedures I'd have to modify :(. Although if there are no other options, that's the way I'd have to go. Although you'd think there would be some way to interrogate the procedure cache to see what's in it, assuming a procedure is only in the cache if it's actually been executed; hence the thought about DBCC MEMUSAGE.
Hmmm...I suppose you could use the xp_trace stored procedures to build up a queue and then yank the results out of the queue every so often to update your log. I can't really see how this would be more costly than your DBCC MEMUSAGE, and it's much more foolproof. If you have a lot of stuff going in and out of the cache, you could miss things if it depends on getting the results from MEMUSAGE. Or you could be running it too often.

Your second sentence has me really confused. You want the last execute time for each stored procedure, but you don't want to update procedures that you already have a last execute time for? Or did I misread that? |
 |
|
derrickleggett
Pointy Haired Yak DBA
4184 Posts |
Posted - 2004-02-27 : 18:03:24
|
You have my curiosity up. Do you EVER run Profiler against your production SQL Server???

MeanOldDBA
derrickleggett@hotmail.com
When life gives you a lemon, fire the DBA. |
 |
|
tfountain
Constraint Violating Yak Guru
491 Posts |
Posted - 2004-02-28 : 10:45:32
|
quote: Hmmm...I suppose you could use the xp_trace stored procedures to build up a queue and then yank the results out of the queue every so often to update your log. I can't really see how this would be more costly than your DBCC MEMUSAGE, and it's much more foolproof. If you have a lot of stuff going in and out of the cache, you could miss things if it depends on getting the results from MEMUSAGE. Or you could be running it too often. Your second sentence has me really confused. You want the last execute time for each stored procedure, but you don't want to update procedures that you already have a last execute time for? Or did I misread that?
I may have misstated my intentions. What I'm really looking to determine is which procedures, functions, views, etc. are no longer used. I don't really care what the specific time is that an object was last executed, as long as I know it's being executed. We have well over 5,000 procedures, views, functions, etc., and I'm looking for the least-effort approach to determine this. With that said, it appears that the syscacheobjects table in the master database holds what I need to know. Granted, we may miss some objects, but we intend to poll and collect the information over a period of time before acting on it. |
 |
|
tfountain
Constraint Violating Yak Guru
491 Posts |
Posted - 2004-02-28 : 10:47:37
|
quote: Originally posted by derrickleggett You have my curiosity up. Do you EVER run profiler against your production SQL Server???
Of course I do, but not for an extended period of time over a high traffic transactional production database. |
 |
|
X002548
Not Just a Number
15586 Posts |
Posted - 2004-02-28 : 12:21:14
|
We're not talking about running it on the server....I don't believe a client should have much impact...Could be wrong...(it wouldn't be the first time...)

Brett
8-) |
 |
|
robvolk
Most Valuable Yak
15732 Posts |
Posted - 2004-02-28 : 14:11:43
|
quote: Granted, we may miss some objects but we intend on polling and collecting the information over a period of time before acting on it.
Then turn on Profiler, limit the events and columns captured to the bare minimum you require, and let it rip. That way you won't miss ANYTHING, and you won't have to poll any system tables. This is specifically what Profiler was designed to do, so why not use it? Profiler's overhead has been wildly exaggerated; it is not the performance killer it's been labeled to be. For more, see graz's PASS presentation: http://www.sqlteam.com/item.asp?ItemID=15084 |
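If even client-side Profiler feels too heavy, the same capture can run as a server-side trace writing straight to a file. A rough sketch follows; the file path is made up, and the event and column numbers (43 = SP:Completed; 22 = ObjectID, 3 = DatabaseID, 14 = StartTime) are from memory of the SQL 2000 trace documentation, so double-check them in Books Online before relying on this:

```sql
DECLARE @TraceID int, @maxsize bigint, @on bit
SET @maxsize = 50                      -- max MB per trace file
SET @on = 1                            -- sp_trace_setevent insists on a bit

-- Option 2 = TRACE_FILE_ROLLOVER; path is just an example
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\proc_usage', @maxsize, NULL

-- Capture SP:Completed with object id, database id, and start time
EXEC sp_trace_setevent @TraceID, 43, 22, @on
EXEC sp_trace_setevent @TraceID, 43, 3, @on
EXEC sp_trace_setevent @TraceID, 43, 14, @on

EXEC sp_trace_setstatus @TraceID, 1    -- 1 = start the trace
```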
 |
|
derrickleggett
Pointy Haired Yak DBA
4184 Posts |
Posted - 2004-02-28 : 16:21:26
|
We run 300-400 tps / ~1 trillion reads in business hours on our main production database server. I have a box by my desk that just collects Profiler and Perfmon stats from the production SQL Servers all day long. This means I have 4 Profiler sessions and Perfmon running against the highly transactional database server all day long. The difference is that they are running on the box sitting on my desk, thus mitigating the risk of memory leaks, horrible performance, and all the other things people mistakenly associate with Profiler and Perfmon. The only thing you have to monitor is the network bandwidth. Since we never come close to our capacity on that, it's not a concern beyond monitoring it.

Right now we capture:
1. Anything over 1 second.
2. Anything over 1000 reads.
3. Any user that connects, including length of session and host.
4. DDL statements.
5. Performance Monitor counters for several things of interest.

It's going to be much more accurate than accessing cache tables, which, if your system is high transaction, can flush fairly quickly even if your cache is efficient.

MeanOldDBA
derrickleggett@hotmail.com
When life gives you a lemon, fire the DBA. |
 |
|
tfountain
Constraint Violating Yak Guru
491 Posts |
Posted - 2004-03-01 : 11:05:18
|
quote: Originally posted by derrickleggett We run 300-400 tps/~1 trillion reads in the business hours on our main production database server.
What about lost information in Profiler? Sometimes when we run against the production boxes, we get messages in Profiler that some of the information was lost. |
 |
|
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2004-03-01 : 12:55:49
|
I have seen this occur on systems that were heavily in use, like in the 95-100% range. It means that SQL Server was unable to collect the data because the system was maxed out. You will also see this in Performance Monitor. I suggest you start looking into why your system is so heavily in use and whether you have any hardware performance problems.

Tara |
 |
|
tfountain
Constraint Violating Yak Guru
491 Posts |
Posted - 2004-03-01 : 15:04:27
|
Thanks for your replies, everyone. Honestly, I like the SQL Profiler approach, but I'll still have to do a few things to filter out what I want; then again, I'd have to do that either way. The link to the PASS presentation was informative as well. I didn't realize there were functions designed to read trace files! The one thing I liked about the syscacheobjects table, though, was that it also shows which views are being used. I'll have to see if I can do the same with Profiler. |
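Those trace-file functions can roll a trace file up into exactly the kind of "last seen" list being discussed here. Something along these lines, assuming a trace file at a made-up path (SQL 2000 requires the :: prefix on system table-valued functions):

```sql
-- Last time each traced object was seen executing, straight from the .trc file
SELECT  DatabaseID,
        ObjectID,
        MAX(StartTime) AS LastSeen
FROM    ::fn_trace_gettable(N'C:\Traces\proc_usage.trc', DEFAULT)
WHERE   ObjectID IS NOT NULL
GROUP BY DatabaseID, ObjectID
```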
 |
|
tfountain
Constraint Violating Yak Guru
491 Posts |
Posted - 2004-03-15 : 11:17:58
|
Just an update for all you gurus. I went ahead with my original idea, and I periodically poll syscacheobjects. I wrote a query that cycles through all the records in the cache, iterates through them based on the database id, and logs that information to a physical table. This polling happens in the form of a SQL Server job set up to run every 15 minutes. It runs in less than a second. Then, to identify anything not used, we have another query we can run on demand that compares the existing procedures, views, and functions to this list.

The reason I went with this is that it is pretty much a set-it-up-and-leave-it-alone kind of job, and it is less intrusive than running Profiler and worrying about disconnects, duplicate objects logged, disk space concerns, etc. I also log the last time an object was found in the cache, so if we need to in the future, we can see what has or has not been executed based on a date range.

Granted, we still may miss objects, but the goal was to generate a list for visual verification of elements not used. So far, this is working like a charm. |
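The shape of what's described above might look roughly like this. This is a guess at the approach, not the actual script (table name and the database holding it are assumptions); SQL 2000 has no MERGE, hence the UPDATE-then-INSERT pair in the job body:

```sql
-- Log table: one row per (database, object), with the last time seen in cache
CREATE TABLE dbo.CacheObjectLog (
    dbid     smallint     NOT NULL,
    objid    int          NOT NULL,
    objtype  nvarchar(20) NOT NULL,
    LastSeen datetime     NOT NULL,
    CONSTRAINT PK_CacheObjectLog PRIMARY KEY (dbid, objid, objtype)
)
GO

-- Body of the 15-minute job: refresh rows already logged, then add new ones
UPDATE  l
SET     LastSeen = GETDATE()
FROM    dbo.CacheObjectLog l
JOIN    master..syscacheobjects c
          ON c.dbid = l.dbid AND c.objid = l.objid AND c.objtype = l.objtype

INSERT  dbo.CacheObjectLog (dbid, objid, objtype, LastSeen)
SELECT DISTINCT c.dbid, c.objid, c.objtype, GETDATE()
FROM    master..syscacheobjects c
WHERE   c.objtype IN ('Proc', 'View', 'Trigger')
  AND   c.dbid > 4
  AND   NOT EXISTS (SELECT * FROM dbo.CacheObjectLog l
                    WHERE l.dbid = c.dbid AND l.objid = c.objid
                      AND l.objtype = c.objtype)
```

The on-demand comparison would then run in each user database, something like (AdminDB standing in for wherever the log table lives):

```sql
-- Objects in this database never seen in the cache log
SELECT  o.name, o.xtype      -- 'P' proc, 'V' view, 'FN'/'IF'/'TF' functions
FROM    sysobjects o
WHERE   o.xtype IN ('P', 'V', 'FN', 'IF', 'TF')
  AND   NOT EXISTS (SELECT * FROM AdminDB.dbo.CacheObjectLog l
                    WHERE l.dbid = DB_ID() AND l.objid = o.id)
```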
 |
|
derrickleggett
Pointy Haired Yak DBA
4184 Posts |
Posted - 2004-03-15 : 20:52:13
|
Post your script in the SQL Scripts area here. I haven't seen any like that on any of the sites. I would still try to run Profiler against your production server from "another" box. This should solve a lot of your problems with the disconnects and missed statistics. It can still happen, but it will most likely be caused by network problems if it happens at all. SQL Server requires very little processor to run a trace. If you still have the problem, run a diagnostic on your network cards and traffic to see if you have a problem there.

MeanOldDBA
derrickleggett@hotmail.com
When life gives you a lemon, fire the DBA. |
 |
|
tfountain
Constraint Violating Yak Guru
491 Posts |
Posted - 2004-03-17 : 13:42:33
|
quote: Originally posted by derrickleggett Post your script on the SQL Scripts area here.
Will do. |
 |
|
|