dcuffee
Starting Member
3 Posts

Posted - 2014-03-24 : 13:12:45
I created a table called Performance_Statistics that has several columns covering just about every SQL data type. I created it on Server A and Server B, and used Red Gate's Data Generator to populate both tables with 100,000 records.

The issue I am having is that on Server B the following query takes much longer:

SELECT * FROM Performance_Statistics

On Server A it took 47 seconds; on Server B it took 153 seconds. These were both first-time queries, with no compiled execution plans already on the servers.

Then I created stored procedures with the same query inside. The results changed: Server A took 38 seconds and Server B dropped to 27 seconds.

Then I tried the same queries again, not inside a stored procedure: Server A took 36 seconds and Server B hit 27 seconds again. At this point I am sure the execution plan was in place and was reused.

Just to note, the Performance_Statistics table does not have any indexes on it. Just straight columns, with the records loaded into it.

Could there be a difference in the way the two servers are configured that would cause first-time queries to perform drastically better on Server A than on Server B, but once a stored procedure is created on both, Server B outperforms Server A? If so, where should I start looking?

Additional note: I did a right-click, Properties on both servers. I didn't find much of a difference in any setting. The only difference I noticed (and this may be the big one) is that on Server A, "Automatically set processor affinity mask for all processors" and "Automatically set I/O affinity mask for all processors" are checked. On Server B, these boxes are not checked. Could this account for the difference in first-time large result queries?
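To make the first-time runs repeatable between tests, something like the following can be run on a non-production server (this is a sketch; it requires sysadmin rights and clears caches server-wide, so don't use it on a shared or production box):

    -- Flush dirty pages to disk so the buffer pool can be fully emptied
    CHECKPOINT;

    -- Empty the buffer pool: the next query must do physical reads (a true cold run)
    DBCC DROPCLEANBUFFERS;

    -- Clear all cached execution plans so the next run compiles from scratch
    DBCC FREEPROCCACHE;

    -- Now re-run the test query for a first-time measurement
    SELECT * FROM Performance_Statistics;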
tkizer
Almighty SQL Goddess
38200 Posts

Posted - 2014-03-24 : 13:45:32
The query you are using is not a good way to test performance. Returning 100,000 rows to the client is inefficient. You need to performance test with a query that your application will actually run. Having proper indexes on the table to support that query is a good idea too. Once you have a proper query with matching indexes, run your tests again and let us know the outcome.

Tara Kizer
SQL Server MVP since 2007
http://weblogs.sqlteam.com/tarad/
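As a sketch of what that might look like (the StatDate column, the filter values, and the index name are made up for illustration; substitute a query your application actually issues):

    -- Hypothetical supporting index for a date-range lookup (column name assumed)
    CREATE NONCLUSTERED INDEX IX_Performance_Statistics_StatDate
        ON dbo.Performance_Statistics (StatDate);

    -- A narrower, application-style query instead of SELECT * over every row
    SELECT StatDate, COUNT(*) AS RowsPerDay
    FROM dbo.Performance_Statistics
    WHERE StatDate >= '2014-01-01' AND StatDate < '2014-02-01'
    GROUP BY StatDate;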
ScottPletcher
Aged Yak Warrior
550 Posts

Posted - 2014-03-24 : 14:05:12
By far the most likely reason for the difference is additional physical I/O occurring on Server B; i.e., the data to be read was not already in the SQL buffer cache on B and had to be read from disk.

Raw clock time is not a good way to test. Instead, for testing, you always want to specify:

SET STATISTICS IO ON

You may also want to specify:

SET STATISTICS TIME ON

although it is usually less of a factor.
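Put together, a test run on each server might look like the following; compare the logical reads and physical reads figures in the Messages tab. If Server B shows high physical reads on the slow run and near zero on the fast repeats, a cold buffer cache explains the gap. (The query is the one from the original post; the output line shows only the shape of STATISTICS IO output, with the figures elided.)

    -- Report logical/physical reads per table after each statement
    SET STATISTICS IO ON;

    -- Report parse/compile time plus CPU and elapsed execution time
    SET STATISTICS TIME ON;

    SELECT * FROM Performance_Statistics;

    -- Messages tab output looks like:
    -- Table 'Performance_Statistics'. Scan count 1, logical reads ...,
    --   physical reads ..., read-ahead reads ...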