Examining SQL Server Trace Files

By Bill Graziano on 15 May 2006 | Tags: Performance Monitors, Profiler & Trace


sqlec writes "I am attempting to analyze a stored procedure that I have written and have a question regarding Profiler. What exactly are reads? And what is the correlation between CPU, reads, and duration? Is that correlation direct?" It's been a while since I answered any of the posted questions. This looked like a great one to start with.

I spend a lot of my time reading through trace files. I've started writing a utility called ClearTrace to make this process easier. It summarizes the performance of stored procedures and SQL statements much like Read80Trace does. It has both a GUI and a command-line version and handles traces from both SQL Server 2000 and 2005. Much of this article comes from what I've learned reading trace files and writing ClearTrace to process them.

Events

I typically start by tracing the SQL:BatchCompleted event and the RPC:Completed event. These two events should capture performance for all your SQL batches and stored procedures. The SQL:BatchCompleted event is generated whenever a batch of SQL statements is completed. If you run code in SQL Server Management Studio (or Query Analyzer) it will generate SQL:BatchCompleted events. If you create a .NET SqlCommand object and give it a CommandType of "Text" that will also generate SQL:BatchCompleted events. If your SqlCommand object has a CommandType of "StoredProcedure" that will generate RPC:Completed events.
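
If you'd rather script the trace on the server than run Profiler interactively, the sp_trace family of procedures will build the same trace. Here's a minimal sketch that captures just these two events with a few key columns; the file path and maximum file size are placeholders you'd change for your environment.

-- Create a server-side trace for RPC:Completed (10) and SQL:BatchCompleted (12)
declare @TraceID int
declare @maxfilesize bigint
declare @on bit
set @maxfilesize = 50   -- maximum size in MB per trace file
set @on = 1

-- Option 2 = TRACE_FILE_ROLLOVER; SQL Server adds the .trc extension itself
exec sp_trace_create @TraceID output, 2, N'C:\Traces\MyTrace', @maxfilesize, NULL

-- Column 1 = TextData, 13 = Duration, 16 = Reads, 18 = CPU
exec sp_trace_setevent @TraceID, 10, 1, @on
exec sp_trace_setevent @TraceID, 10, 13, @on
exec sp_trace_setevent @TraceID, 10, 16, @on
exec sp_trace_setevent @TraceID, 10, 18, @on
exec sp_trace_setevent @TraceID, 12, 1, @on
exec sp_trace_setevent @TraceID, 12, 13, @on
exec sp_trace_setevent @TraceID, 12, 16, @on
exec sp_trace_setevent @TraceID, 12, 18, @on

-- Start the trace; remember the ID so you can stop it later
exec sp_trace_setstatus @TraceID, 1
select @TraceID

You'd add the rest of the data columns from the next section the same way.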

Data Columns

I normally capture the following data columns for the two events above:

  • EventClass. This identifies the event. It must be selected.
  • TextData. This is the text of the SQL batch or the name of the stored procedure. We'll see situations below where RPC:Completed events have what looks like a SQL batch in the TextData.
  • ApplicationName. This is the name of the application that generated this event such as Query Analyzer or SQL Server Management Studio. This is something you can set in your applications through the connection string.
  • HostName. This is the computer that connected to SQL Server and issued the SQL statement.
  • LoginName. This identifies the account that the SQL statements were run under.
  • EndTime. This identifies when the event ended. StartTime could be used instead.
  • SPID. This is the SPID of the connection that ran the SQL statement. This is required.
  • CPU. This is the actual amount of CPU used in milliseconds (one thousandth of a second).
  • Duration. This is the actual duration of the batch. In SQL Server 2000 traces it is in milliseconds. In SQL Server 2005 it is reported in microseconds (one millionth of a second), however Profiler 2005 automatically translates it back into milliseconds.
  • Reads. This is the number of logical disk reads in 8KB pages. For example, if this number is 680 then SQL Server logically read 5,570,560 bytes of data (680 pages times 8,192 bytes per page). Physical disk reads are typically much lower than logical reads as SQL Server typically caches data pages. There's no direct way to find the number of physical reads a query generates. I typically find a fairly strong correlation between logical and physical reads though. If I can reduce the logical reads, the physical reads usually fall also. These logical reads also take up memory: either the data pages are already in memory or they have to be read into memory. A high logical read count is also a good proxy for the memory usage of a query.
  • Writes. This is the number of logical disk writes in 8KB pages. Logical writes are typically much closer to physical writes. I usually don't look at this number as much.

Those events should give you a good start identifying slow queries or queries that use a lot of resources.
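
If you want a quick look at a trace file without a separate tool, fn_trace_gettable will read it back as a table. A simple sketch, again with a placeholder path:

-- Pull the most expensive statements by logical reads from a trace file
select top 20
  TextData,
  CPU,
  Reads,
  Writes,
  Duration
from ::fn_trace_gettable(N'C:\Traces\MyTrace.trc', default)
where EventClass in (10, 12)  -- RPC:Completed and SQL:BatchCompleted
order by Reads desc

ClearTrace and Read80Trace go a step further and normalize the TextData so the same statement with different parameter values rolls up into one line.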

sp_reset_connection

If you're using connection pooling you'll see lots and lots of calls to sp_reset_connection. Typically you'll see one before each SQL batch or stored procedure. ODBC and OLEDB use this to reset a connection before it's "passed around" in the connection pool. Gert Drapers has a good description of what sp_reset_connection resets. Basically I'd just ignore this in the trace (which is what I do in ClearTrace).
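
If you'd rather not capture these events at all, you can filter them out when you define the trace. This sketch assumes the @TraceID from the script above; column 1 is TextData, 0 means AND, and 7 means NOT LIKE. Keep in mind that filtering on TextData adds some overhead to the trace itself.

-- Exclude sp_reset_connection calls from the trace
exec sp_trace_setfilter @TraceID, 1, 0, 7, N'%sp_reset_connection%'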

Cursors

You may see a pattern like this:

declare @P1 int
set @P1=180150001
declare @P2 int
set @P2=8
declare @P3 int
set @P3=1
declare @P4 int
set @P4=1
exec sp_cursoropen 
  @P1 output, 
  N'select SomeColumn FROM TableName', 
  @P2 output, 
  @P3 output, 
  @P4 output
select @P1, @P2, @P3, @P4
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
exec sp_cursorfetch 180150001, 2, 1, 1
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
exec sp_cursorclose 180150001

The three sections above will appear as three separate events. The sp_cursorfetch event may be repeated any number of times; in fact, you'll find it repeated once per row returned by the query. This is a server-side cursor, and it is the slowest way to get data out of SQL Server.

This occurs when the client issues an sp_cursoropen statement with a SQL query. The query executes and the results are stored in a temporary table. For each row the client wants back it issues an sp_cursorfetch. When it's done it issues an sp_cursorclose to close the cursor and delete the temporary table. This means the client makes one full round trip to SQL Server for EACH row it wants to return. You'll find that each individual event (the opens, fetches, and closes) isn't that slow, but combined they result in terrible performance. In SQL Server 7.0 this also resulted in contention in tempdb, as the database was locked while the temporary table was created.

I typically see these when a client has two active result sets open on a single connection. For example, a client might run a query to list all the books that match some search criteria. Then, inside the loop that is returning that data, it runs a second query to get additional details about each book. That second query runs against the same connection as the outer query, which results in a server-side cursor. I think you can also generate these by requesting a server-side cursor in ODBC or OLE DB.

These typically have to be fixed on the client side by creating an additional connection for each active query. If you have access to the source code they are fairly easy to fix. In ADO.NET 2.0 you can use Multiple Active Result Sets (MARS) to handle this same situation without generating a server-side cursor.

In ClearTrace I pull the actual executed SQL statement out of the sp_cursoropen statement. I prefix it with "{CURSOR}" so it can be easily identified as a server-side cursor. You can use the keyword search to quickly find these.
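
You can also get a rough count of cursor activity straight from a trace file. A sketch, using the same placeholder path as above:

-- Count sp_cursorfetch calls per connection to spot chatty server-side cursors
select SPID, count(*) as FetchCount
from ::fn_trace_gettable(N'C:\Traces\MyTrace.trc', default)
where cast(TextData as nvarchar(4000)) like 'exec sp_cursorfetch%'
group by SPID
order by count(*) desc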

Prepared SQL

I also see this pattern quite a bit -- especially with Access:

declare @P1 int
set @P1=1
exec sp_prepare 
  @P1 output, 
  N'@P1 int', 
  N'SELECT CustName FROM Customers WHERE CustID = @P1', 1
select @P1
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
exec sp_execute 1, 125
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
exec sp_execute 1, 126
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
exec sp_execute 1, 127
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
exec sp_unprepare 1

The sp_prepare statement compiles the SQL statement (i.e. builds a query plan) and assigns it a handle of 1. Each sp_execute statement calls that prepared statement, passing its second value as the parameter @P1. The final statement unprepares statement number 1. There are a number of variations on this, including sp_prepexec and sp_prepexecrpc. I don't know if this is actually any faster than just executing the raw SQL statements. In ClearTrace I replace each sp_execute statement with the actual SQL statement it executes.
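
For reference, the sp_prepexec variant folds the prepare and the first execute into a single round trip. Here's a rough sketch using the same hypothetical Customers query, with the statement parameter renamed to @CustID for clarity:

declare @P1 int
exec sp_prepexec @P1 output,
  N'@CustID int',
  N'SELECT CustName FROM Customers WHERE CustID = @CustID',
  125
exec sp_execute @P1, 126
exec sp_unprepare @P1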

Summary

Which brings us back around to your question. There is no direct correlation between CPU, Reads, and Duration. Each is the actual amount of that resource consumed by the query. Typically higher CPU and/or Reads will result in a higher Duration. However, Duration is strongly affected by everything else running on the server. It can also be affected by what is already in the cache.


