mrpeepers
Starting Member
9 Posts
Posted - 2006-03-07 : 11:51:25
Hi all. Hopefully someone out there has experienced this same issue and might be able to provide some assistance.

System specs: SQL Server 2000 SP4, quad processor, 4 GB RAM. Database size: 50 GB.

Background: Web-based business system with a SQL Server back end. Our database is comprised of 100+ tables, one of which is about 30 GB (our transactional table). We have several jobs that run against this table, reading from, updating, and inserting into it.

Problem: Whenever one of these jobs runs against this ONE table, the entire system is inaccessible. It's as if hitting that one table locks the entire system.

Example: When job ABC (against the big table) is running and a user attempts to log in to the system, the user times out, even though the login process in NO WAY hits that transaction table and is accessing an entirely different table! It's as if the entire system relies on that one table, which it DOES NOT, and whenever that table is heavily hit with selects, inserts, or updates, EVERYTHING ELSE is adversely affected. The same issue occurs when a database integrity check runs and reaches that table: the entire system goes unavailable, even though only specific jobs hit that table and not necessarily the web-based app.

Any thoughts?

M
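One quick way to confirm whether this is lock contention (rather than raw I/O starvation) is to look for blocked sessions while the job runs. A minimal sketch for SQL Server 2000, run from Query Analyzer; the SPID value is just an example:

```sql
-- Minimal sketch (SQL Server 2000): while the job is running, check whether
-- other sessions are blocked and, if so, which SPID is holding them up.
SELECT spid, blocked, lastwaittype, waitresource, cmd
FROM   master..sysprocesses
WHERE  blocked <> 0

-- For a SPID reported in the "blocked" column (57 here is an example),
-- inspect the locks it holds and the statement it is running:
EXEC sp_lock 57
DBCC INPUTBUFFER (57)
```

If the `blocked` column points back at the job's SPID and `sp_lock` shows a table-level (TAB) lock, the problem is lock escalation rather than disk throughput.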
RickD
Slow But Sure Yak Herding Master
3608 Posts
Posted - 2006-03-07 : 11:57:06
Sounds like the job is running some badly written code that is very I/O intensive. Can you post the code for the job? I'm guessing what you have is very inefficient.
mrpeepers
Starting Member
9 Posts
Posted - 2006-03-07 : 12:05:10
Nope, it's not the code, seeing that there are 4 jobs reading, writing, and updating this table (not all at once). I also know it's not the code because a database integrity check causes the identical problem when running against that same table. :)
RickD
Slow But Sure Yak Herding Master
3608 Posts
Posted - 2006-03-07 : 12:14:29
How big is the table? What indexes do you have on the table? What is the row size of the table? Do you have any full-text indexes on the table? Have you run a trace against the table?
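Most of those numbers can be pulled straight from the built-in system procedures. A quick sketch; the table name is a placeholder:

```sql
-- Row count, reserved space, data size, and index size:
EXEC sp_spaceused 'BigTransactionTable'

-- Index names, types, and key columns:
EXEC sp_helpindex 'BigTransactionTable'

-- Column definitions, from which the row size can be worked out:
EXEC sp_help 'BigTransactionTable'
```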
mrpeepers
Starting Member
9 Posts
Posted - 2006-03-07 : 12:26:01
The table is approximately 40 GB. It has 16 fields and 12 indexes (a problem?), and a clustered index on the date column. Yes, we have run traces, and they are inconclusive, seeing that the inserts and updates the jobs do are very simple.
MichaelP
Jedi Yak
2489 Posts
Posted - 2006-03-07 : 12:28:11
Also, what sort of disk subsystem do you have?

Michael

<Yoda>Use the Search page you must. Find the answer you will. Cursors, path to the Dark Side they are. Avoid them, you must. Use Order By NewID() to get a random record you will.</Yoda>
RickD
Slow But Sure Yak Herding Master
3608 Posts
Posted - 2006-03-07 : 12:33:01
quote: Originally posted by mrpeepers
The table is approximately 40 GB. It has 16 fields and 12 indexes (a problem?), and a clustered index on the date column. Yes, we have run traces, and they are inconclusive, seeing that the inserts and updates are very simple.

They might be simple, but they will take longer due to the number of indexes you have. Is the date column unique, or do you allow duplicates? How many records do you update/insert/delete in a batch on average? What counters did you use in your trace(s)?
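If the jobs modify large sets in single statements, breaking the work into small batches keeps each transaction short and makes escalation to a table lock less likely. A sketch in SQL Server 2000 style using SET ROWCOUNT; the table and column names are placeholders:

```sql
-- Process the backlog in batches of 5000 rows so no single transaction
-- holds locks across the whole table. SET ROWCOUNT caps the rows touched
-- per statement in SQL Server 2000.
SET ROWCOUNT 5000

WHILE 1 = 1
BEGIN
    UPDATE BigTransactionTable
    SET    Processed = 1
    WHERE  Processed = 0

    IF @@ROWCOUNT = 0 BREAK   -- nothing left to update
END

SET ROWCOUNT 0                -- restore the default (no limit)
```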
MichaelP
Jedi Yak
2489 Posts
Posted - 2006-03-07 : 12:37:33
Is this a single table, or do you have it set up as a horizontally partitioned table? A single 40 GB table is kind of big, and 12 indexes on said table seems like a bit much as well. You probably need to partition that table out and reduce the number of indexes.

Anyway, to address your problem, what does your disk queue (Avg. Disk Queue Length) look like when you are running this job? I'm betting this is a disk throughput issue.

Michael
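For reference, "horizontally partitioned" in SQL Server 2000 usually means a local partitioned view: several member tables, each holding one range of the partitioning column, combined with UNION ALL. A rough sketch with placeholder names:

```sql
-- Each member table holds one date range. The CHECK constraint lets the
-- optimizer skip member tables that cannot contain the requested dates.
CREATE TABLE Trans2005 (
    TranDate datetime NOT NULL
        CHECK (TranDate >= '20050101' AND TranDate < '20060101'),
    TranID   int      NOT NULL,
    Amount   money    NOT NULL,
    CONSTRAINT PK_Trans2005 PRIMARY KEY (TranDate, TranID)
)

CREATE TABLE Trans2006 (
    TranDate datetime NOT NULL
        CHECK (TranDate >= '20060101' AND TranDate < '20070101'),
    TranID   int      NOT NULL,
    Amount   money    NOT NULL,
    CONSTRAINT PK_Trans2006 PRIMARY KEY (TranDate, TranID)
)
GO

-- The view presents the member tables as one logical table.
CREATE VIEW TransAll
AS
SELECT TranDate, TranID, Amount FROM Trans2005
UNION ALL
SELECT TranDate, TranID, Amount FROM Trans2006
GO

-- A query with a date predicate touches only the matching member table:
SELECT * FROM TransAll WHERE TranDate >= '20060301'
```

The practical benefit for this thread's problem is that jobs hammering the current period only lock the current member table, and maintenance (integrity checks, reindexing) can run one member table at a time.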
mrpeepers
Starting Member
9 Posts
Posted - 2006-03-07 : 12:46:48
quote: Originally posted by RickD
They might be simple, but they will take longer due to the number of indexes you have. Is the date column unique, or do you allow duplicates? How many records do you update/insert/delete in a batch on average? What counters did you use in your trace(s)?

The date column isn't unique. On average, I would say, 43,000 per day.
mrpeepers
Starting Member
9 Posts
Posted - 2006-03-07 : 12:48:36
quote: Originally posted by MichaelP
Is this a single table, or do you have it set up as a horizontally partitioned table? A single 40 GB table is kind of big. 12 indexes on said table seems like a bit much as well. You probably need to partition that table out and reduce the number of indexes. Anyway, to address your problem, what does your disk queue (Avg. Disk Queue Length) look like when you are running this job? I'm betting this is a disk throughput issue.

It's a single table; I'm not sure what you mean by a horizontally partitioned table. Average disk queue length is like 6-8 when these jobs run.
MichaelP
Jedi Yak
2489 Posts
Posted - 2006-03-07 : 14:54:27
Well, you should not have an average disk queue length over 2.0 for any long sustained period of time. If you are seeing 6-8 for long periods (minutes or longer), then you need more / faster / better-configured disks. Can you describe your disk configuration?

Here are some links on partitioning:
http://www.sqlteam.com/item.asp?ItemID=684
http://vyaskn.tripod.com/federated.htm

You need to do a lot of testing when building your partitioned tables to find the best partition, the best indexes, etc.

Michael