Author: AskSQLTeam (Ask SQLTeam Question)
Posted - 2004-02-13 : 08:18:35
Dean writes: "I write custom web applications, and one of my applications can log change history for up to roughly 20 different fields in a table. It works, but I'd like to speed it up if possible, and I was wondering what some suggestions might be. This is on SQL Server 2000; I don't know what service packs are on it.

I have a table with a column for the field name to be logged and a bit column to turn history logging on or off for that field. The value of a third field must be checked for any additional processing that must be done.

My trigger inserts 2 rows into a garbage table (an actual table in the database, used solely to hold these values temporarily): the first for inserted and the second for deleted. Each insert statement must join to roughly 8 different tables to get lookup values.

I run through a cursor to get the field to check. I then run sp_executesql twice with output parameters to select the values from my garbage table. I do this because the column I'm selecting is a variable, and I didn't find any other way to do it. Finally, I compare the values of the output parameters to see whether the field has actually changed. If so, I insert a message in the history log that explains what changed, and check the field for extra processing.

I know this trigger is involved and will require some time to run, but I'd be curious to know whether the overall design is inefficient and can be sped up. Thanks"
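For readers unfamiliar with the pattern Dean describes, here is a minimal sketch of selecting a variable column via sp_executesql with an OUTPUT parameter. The table name GarbageTable, the RowType discriminator column, and the value sizes are hypothetical, since the actual schema isn't shown in the question:

```sql
DECLARE @col sysname, @sql nvarchar(4000), @oldVal varchar(255)

-- @col would come from the cursor over the field-configuration table
SET @col = N'CustomerName'

-- Build the statement dynamically because the column name is a variable;
-- QUOTENAME guards against a malformed or malicious column name
SET @sql = N'SELECT @val = ' + QUOTENAME(@col) +
           N' FROM GarbageTable WHERE RowType = ''deleted'''

-- Run it and capture the column value in an output parameter
EXEC sp_executesql @sql, N'@val varchar(255) OUTPUT', @val = @oldVal OUTPUT
```

Dean's trigger runs this twice per audited column (once for the inserted copy, once for the deleted copy) inside a cursor, which is the part the answer below targets.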
robvolk (Most Valuable Yak)
Posted - 2004-02-13 : 08:28:00
quote: I run through a cursor to get the field to check.
Well, right there is gonna be the main performance bottleneck. Get rid of the cursor and you'll likely fix 95% of the problem.

How about setting up your audit/history tables to mimic the structure of the tables you're auditing? Something like:

    CREATE TABLE audit_myTable (
        ChangeDate datetime NOT NULL,
        ChangeUser varchar(50) NOT NULL,
        ...rest of columns from myTable go here
    )

This way you can write a trigger like this:

    CREATE TRIGGER Audit_myTable ON myTable FOR UPDATE, DELETE
    AS
    SET NOCOUNT ON
    INSERT INTO audit_myTable
    SELECT GETDATE() AS ChangeDate, SUSER_SNAME() AS ChangeUser, *
    FROM deleted

By doing this, you're auditing the entire row, not just the columns that changed, and you avoid the overhead of comparing columns. You also avoid using dynamic SQL. It may use more space than you need, but it's far more useful when trying to reconstruct a previous version of the row. If you don't want to audit all of the columns in the table, then create only the columns you need in the audit table, and change the SELECT column list in the trigger to match. Also, you'd only be auditing the older version of the row; if you need the current version, you can always join to the original table.
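To illustrate how the whole-row audit table gets used afterward, here is a sketch of pulling a row's change history and its most recent prior version. The key column myTableID is an assumption, since the real table's schema isn't given in the thread:

```sql
-- Full change history for one row, newest first
SELECT *
FROM audit_myTable
WHERE myTableID = @id
ORDER BY ChangeDate DESC

-- Most recent prior version alongside the current row
SELECT TOP 1 a.*, m.*
FROM audit_myTable a
JOIN myTable m ON m.myTableID = a.myTableID
WHERE a.myTableID = @id
ORDER BY a.ChangeDate DESC
```

Comparing the audit row to the live row column by column (outside the trigger, only when someone actually asks "what changed?") moves the expensive comparison work out of the write path, which is the core of the speed-up.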