maniag
Starting Member
4 Posts

Posted - 2010-03-20 : 07:31:58

We have a database that uses a GUID as the primary key. The reason for a GUID as the PK was that we have to sync a local DB with a central DB, as our client works on a local store. Everything works when the rows are fewer than 0.5 million, but when we have 5 million rows all the queries (SELECT, INSERT, etc.) time out. As per our business requirements, we cannot replace the GUID with a bigint. Even if we introduce another column with bigint as the PK, the impact of the change is too great. We tried to introduce more indexes; they improve the performance of SELECTs but kill INSERT/DELETE-type queries. Can anyone recommend a possible alternative? Thanks in advance.

manish.agarwal@nagarro.com
Kristen
Test
22859 Posts

Posted - 2010-03-20 : 07:50:19

I'm surprised that you have performance problems with 5-million-row tables. I assume that you have "reasonable" hardware and a "reasonable" number of concurrent users? Otherwise you are trying to get a quart out of a pint pot!

Do you have proper housekeeping in place? Index rebuilds and update statistics?

How is the SQL executed? If it is all dynamic SQL from the application then maybe the query plan is not getting reused from cache, and you could benefit from changing to stored procedures (or at least using sp_ExecuteSQL instead of plain dynamic SQL).

Putting the MDF and LDF on different drives will help, particularly if they are chosen to favour random access for the MDF and sequential access for the LDF.

Summary: I don't think your database size should be a problem.
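[Editor's note: a minimal sketch of the sp_executesql pattern mentioned above. The table, column, and parameter names are hypothetical, not from the thread; the point is that a typed parameter lets one cached plan be reused across calls, unlike concatenated dynamic SQL.]

-- Hypothetical table dbo.Orders with a uniqueidentifier key.
DECLARE @OrderId uniqueidentifier;
SET @OrderId = NEWID();  -- stand-in for a real key value

-- Concatenated dynamic SQL tends to get a new plan per distinct value;
-- sp_executesql with a typed parameter reuses one cached plan.
EXEC sp_executesql
    N'SELECT * FROM dbo.Orders WHERE OrderId = @Id',
    N'@Id uniqueidentifier',
    @Id = @OrderId;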
maniag
Starting Member
4 Posts

Posted - 2010-03-22 : 03:16:55

Hi Kristen,

1) We are using a good machine to host the DB:
- Intel(R) Core(TM) 2 Quad CPU Q9550 @ 2.83 GHz
- 4 GB RAM
- SQL Server 2005 Service Pack 3
- Microsoft Windows Server 2003 R2 Enterprise Edition SP2

2) Can you elaborate on "proper housekeeping"? We are using a GUID as the primary key and the datatype of the column is VARCHAR. The database is normalized, and when we introduced indexes the SELECTs improved but DELETE/UPDATE/INSERT operations timed out.

3) We are executing the queries in stored procedures.

4) We checked the physical disk space and RAM, and space does not seem to be the problem. We will still try to put the MDF and LDF files on different disks, but I doubt that will impact the performance.

Manish
Lumbago
Norsk Yak Master
3271 Posts

Posted - 2010-03-22 : 03:35:14

Can you post the CREATE INDEX statements you introduced, and also a sample of a query that times out? And is there any particular reason why you're not using uniqueidentifier as the datatype?

- Lumbago
If the facts don't fit the theory, change the facts. Albert Einstein
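[Editor's note: a sketch, with hypothetical table names, of why the datatype question matters. A GUID in a native uniqueidentifier column is 16 bytes, while its string form needs varchar(36), and a wide clustered key is repeated in every row of every nonclustered index.]

-- 16-byte key: the native GUID type.
CREATE TABLE dbo.Demo_Native (Id uniqueidentifier NOT NULL PRIMARY KEY);

-- 36-byte key: the same value stored as text, more than double the width,
-- which fattens the clustered index and every nonclustered index on the table.
CREATE TABLE dbo.Demo_AsText (Id varchar(36) NOT NULL PRIMARY KEY);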
Lumbago
Norsk Yak Master
3271 Posts

Posted - 2010-03-22 : 03:40:37

What Kristen refers to with "proper housekeeping" is index reorganizes, index rebuilds and update statistics on a regular basis. Can you run this script on your database and see if you get a lot of high values for "avg_fragmentation_in_percent"?

SELECT 'ALTER INDEX [' + b.name + '] ON [dbo].[' + c.name + '] REBUILD; --'
           + CAST(ps.avg_fragmentation_in_percent AS varchar(30)),
       c.name AS tablename,
       b.name AS indexname,
       ps.avg_fragmentation_in_percent,
       ps.record_count
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ps
--FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ps
LEFT OUTER JOIN sys.indexes AS b
    ON ps.object_id = b.object_id AND ps.index_id = b.index_id
LEFT OUTER JOIN sys.tables AS c
    ON b.object_id = c.object_id
WHERE ps.database_id = DB_ID()
AND b.name NOT IN (
    SELECT DISTINCT tab.name AS heaptable
    FROM sys.indexes ind
    INNER JOIN sys.tables AS tab ON ind.object_id = tab.object_id
    WHERE ind.type_desc = 'HEAP'
)
ORDER BY ps.avg_fragmentation_in_percent DESC

- Lumbago
If the facts don't fit the theory, change the facts. Albert Einstein
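[Editor's note: not from the thread, but a commonly cited rule of thumb for acting on avg_fragmentation_in_percent is REORGANIZE for moderate fragmentation and REBUILD for heavy fragmentation. Index and table names below are placeholders.]

-- Placeholders; the 5-30% / 30%+ thresholds are the usual guideline, not a hard rule.
ALTER INDEX [IX_SomeIndex] ON [dbo].[SomeTable] REORGANIZE;  -- roughly 5-30% fragmentation
ALTER INDEX [IX_SomeIndex] ON [dbo].[SomeTable] REBUILD;     -- above roughly 30%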
Kristen
Test
22859 Posts

Posted - 2010-03-22 : 03:44:44

"Proper housekeeping": regular index rebuilds and update statistics.
Kristen
Test
22859 Posts

Posted - 2010-03-22 : 08:15:04

I think once a week is not sufficient for index rebuilds / updating statistics, as they immediately start to go out of shape again! We rebuild ours every night (but only for tables that indicate that such action is required).

It depends how much slack time you have each night, though. If you do have slack time each night then you might as well freshen up indexes and statistics, because it will give optimal query plans. (Well, not in a read-only database!! But in any database that has updates, inserts and deletes.)
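[Editor's note: one way to rebuild "only for tables that indicate that such action is required" is to drive the rebuild off the same DMV as Lumbago's script above. This is a sketch; the 30% threshold is an assumption, not from the thread.]

-- Build ALTER INDEX statements only for indexes past a fragmentation threshold.
DECLARE @sql nvarchar(max);
SET @sql = N'';

SELECT @sql = @sql + N'ALTER INDEX ' + QUOTENAME(i.name)
            + N' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id))
            + N'.' + QUOTENAME(o.name) + N' REBUILD;' + CHAR(10)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps  -- cheapest scan mode
INNER JOIN sys.indexes AS i
    ON i.object_id = ps.object_id AND i.index_id = ps.index_id
INNER JOIN sys.objects AS o
    ON o.object_id = ps.object_id
WHERE ps.avg_fragmentation_in_percent > 30  -- assumed threshold
  AND i.name IS NOT NULL                    -- skip heaps
  AND o.is_ms_shipped = 0;                  -- skip system objects

EXEC sp_executesql @sql;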
maniag
Starting Member
4 Posts

Posted - 2010-03-22 : 08:59:24

Kristen,

Thanks for your reply. We do rebuild indexes weekly and use a fill factor of 90; as per your suggestion we will increase the frequency.

We are evaluating the use of sequential GUIDs in our application, and we hope that might improve the performance.
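[Editor's note: SQL Server 2005 can generate sequential GUIDs server-side. A sketch with a hypothetical table; note that NEWSEQUENTIALID() is only allowed in a DEFAULT constraint, and the sequence can restart from a new range after a server restart.]

-- New values are mostly ascending, so inserts land at the end of the
-- clustered index instead of splitting pages all over it.
CREATE TABLE dbo.Orders
(
    OrderId uniqueidentifier NOT NULL
        CONSTRAINT DF_Orders_OrderId DEFAULT NEWSEQUENTIALID(),
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
);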
Kristen
Test
22859 Posts

Posted - 2010-03-22 : 10:32:31

"use fillfactor of 90"

Note that for any table that has an IDENTITY column for the PK you could increase that to 100% (new values are only ever added at the end). The same goes for a sequential GUID.

There is quite a lot of experimental evidence that setting 100% for all tables on large databases is beneficial. A page split is required to add a new row into the middle of the index, but queries are faster because there are more entries in each index page. Also, if you have to insert several new entries into an index page, then even at 90% fill it will need a page split; so if new index values are "clustered" around specific values (such as dates) then 100% is better anyway.

Just food for thought; there is no right or wrong way on that.

It is probably best to have the index maintenance leave the fill factor at its current setting (on a table-by-table, index-by-index basis) so that you can specifically set some indexes to 100% fill.