aiken
Aged Yak Warrior
525 Posts
Posted - 2003-10-08 : 23:08:53

Hi, everyone. I'm back with yet another question about one of my tables, which basically serves as a log. The table is coming up on 1,000,000 inserts a day, and will probably only get more activity as time goes on.

It's getting to the point where our tx log backups, scheduled every 5 minutes, are taking 20-30 seconds, during which time the application slows down noticeably. Of course, most of those transactions are inserts to that log table. I would be perfectly fine with only getting nightly backups of the log.

Should I consider moving that table to a separate database that doesn't have tx log backups run against it? Or is there some way to exempt transactions on that table from either the tx log or the backup process? Or am I missing something altogether?

Performance is definitely important here, and for some reason I have it in my head that cross-database transactions are slower. Is that correct? Would I see any kind of performance hit by changing the SP to do an insert into database.dbo.table rather than the insert into table that it does now?

Thanks in advance -- you folks are the best!

Cheers
-b
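For illustration, the change being weighed is roughly the following (the names `LogDB` and `AppLog` are hypothetical stand-ins for the real database and table):

```sql
-- Current: insert into a table in the local database
INSERT INTO dbo.AppLog (EventType, Detail, LoggedAt)
VALUES ('PageView', '/index.asp', GETDATE());

-- Proposed: three-part name pointing at a separate logging database
INSERT INTO LogDB.dbo.AppLog (EventType, Detail, LoggedAt)
VALUES ('PageView', '/index.asp', GETDATE());
```

The only change in the stored procedure is the three-part name; whether the extra name resolution and cross-database write cost anything measurable is the question being asked.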
tkizer
Almighty SQL Goddess
38200 Posts
Posted - 2003-10-09 : 12:31:51

1,000,000 per day! That's a lot. There is no way to exempt transactions from that database. You could move the table to another database and set the recovery model to SIMPLE if you don't care about the ability to restore that table to a point in time.

Yes, cross-database transactions would be slower, but it would need to be tested to see how much slower; it could be very quick.

Have you considered moving this table onto its own filegroup? You might also want to consider putting the filegroup on a RAID array designed for this kind of volume.

Tara
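A sketch of the separate-database approach described above (database name and file paths are placeholders, not recommendations):

```sql
-- Create a dedicated logging database on its own spindles
CREATE DATABASE LogDB
ON PRIMARY (NAME = LogDB_data, FILENAME = 'E:\Data\LogDB.mdf')
LOG ON     (NAME = LogDB_log,  FILENAME = 'F:\Logs\LogDB.ldf');
GO

-- SIMPLE recovery: the log truncates on checkpoint, so no log backups
-- are needed -- and no point-in-time restore is possible for this database
ALTER DATABASE LogDB SET RECOVERY SIMPLE;
```

With this in place, the main database's 5-minute log backups no longer carry the logging table's insert volume; only a nightly full backup of `LogDB` would be taken, matching the tolerance for data loss stated in the question.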
X002548
Not Just a Number
15586 Posts
Posted - 2003-10-09 : 13:03:44

What kind of app is this? If you're backing up the log every 5 minutes, why would losing the ability to recover those inserts not be a problem?

That's like 11 inserts a second:

SELECT 1000000/(24*60*60.00)

What are you tracking? And if it's just a logging table, can you echo the data to a text file and then do a load at some point? It doesn't sound like it has to be available in real time... or does it?

Brett

8-)
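If the echo-to-text-file route were taken, the periodic load could be a single batch operation; a sketch (file path, table, and column layout are assumptions for illustration):

```sql
-- Load the accumulated tab-delimited log file into the table in one batch
BULK INSERT dbo.AppLog
FROM '\\webserver1\logs\applog_20031009.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', TABLOCK);
```

A bulk load like this generates far less log activity per row than a million individual inserts, which is the appeal of the suggestion; the trade-off, as the next reply notes, is losing real-time availability of the data.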
aiken
Aged Yak Warrior
525 Posts
Posted - 2003-10-09 : 16:11:56

Thanks for the responses.

The table is a logging table for a web application; it counts a number of things, including pageviews, specific transactions, debugging info, etc. It is currently on a dedicated filegroup on a RAID 1 array, on its own SCSI channel. The transaction log is on a separate RAID 1 array on another SCSI channel (most data is on a RAID 5E array on yet another channel).

The problem is that the data in the table needs to be available on short notice ("Did we serve more than the usual number of 404's after checking in that last update 10 minutes ago?"), so periodic loads aren't really an option. However, in the event of a disaster, it wouldn't be a problem to just lose all of the data in the table.

We're looking at moving to text file logging, but that gets to be a mess with multiple application servers and combining the logs for ad-hoc queries.

I'll take a look and see about the separate DB and simple recovery mode; that may be the best bet. Most inserts to the log are run asynchronously, so if the insert is a tiny bit slower that's probably OK. The real issue is the log backup time and load, and my guess is that the whole app is suffering from the high volume going to the tx log.

Thanks for the feedback!

-b
nr
SQLTeam MVY
12543 Posts
Posted - 2003-10-09 : 16:51:10

Can't you log things like that from the web server? If you need them in SQL Server, then maybe dedicate a SQL Server to that.

I would also consider maintaining the aggregates on insert rather than obtaining them from queries, if that's what you are doing.

Putting it in another database should cure the tx log backup problem - that removes the i/o to do the backup and then to clear the log (where were the backups taken to? Do they use the same controller as the database disks?). This database should be optimised for sequential writes, I guess: no indexes other than maybe a clustered one on a sequential field (identity?).

==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.
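One way to read the two suggestions above in T-SQL (table, trigger, and column names are invented for the sketch, and the summary table is assumed to be pre-populated with one row per event type):

```sql
-- Sequential clustered key (identity) so inserts always append at the
-- end of the table instead of splitting pages in the middle
CREATE TABLE dbo.AppLog (
    LogID     int IDENTITY(1,1) NOT NULL,
    EventType varchar(50)       NOT NULL,
    LoggedAt  datetime          NOT NULL DEFAULT GETDATE(),
    CONSTRAINT PK_AppLog PRIMARY KEY CLUSTERED (LogID)
);
GO

-- Maintain the aggregate at insert time rather than scanning the
-- detail table for every ad-hoc "how many 404's?" query
CREATE TRIGGER trg_AppLog_Count ON dbo.AppLog
AFTER INSERT
AS
    UPDATE s
    SET    s.EventCount = s.EventCount + i.cnt
    FROM   dbo.AppLogSummary s
    JOIN  (SELECT EventType, COUNT(*) AS cnt
           FROM   inserted
           GROUP BY EventType) i
      ON   i.EventType = s.EventType;
```

The trigger adds a small cost to every insert, so whether it is a net win depends on how often the aggregates are queried versus written.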
eyechart
Master Smack Fu Yak Hacker
3575 Posts
Posted - 2003-10-10 : 00:35:41

First, there is some excellent advice already posted about how to optimize your configuration. I would take nigel and tara's advice and move these tables to a separate database. Change this new database to simple recovery mode if possible, and back this database up nightly (not every 5 minutes) if you can.

I do have some questions though:

1. Where is the transaction log backup going? To tape or disk? If disk, is that disk shared with the transaction logs or data files? Best practice would be to create a separate RAID group dedicated to just the backup files. It is also a good idea to locate your TEMPDB on a separate RAID as well, especially on high volume systems. If to tape, consider a to-disk backup strategy, as it is much faster.

2. You say that the database slows down during the backup. Are you doing any integrity check on the DB before each backup? This is a common setting if you set up your backup through a maintenance plan.

3. What kind of server (brand and model if possible) is this running on? Does it have multiple PCI busses? Is each of your SCSI/RAID controllers on a separate bus? What RAID controllers do you use? When you say RAID 5E, are you talking about the RAID 5 ADG that Compaq offers on their 53xx and newer controllers? Or is it something else altogether? RAID 5 with 2 parity drives is much slower than normal RAID 5 (which is already much slower than RAID 0+1). If your high volume inserts and transaction log backups are both going to the RAID 5E, then that is a problem.

4. How is your RAID cache set up? Is it 50/50 read/write or something else? I would recommend weighting it more towards writes than reads for best performance, maybe 75/25 if possible. If you have multiple RAID controllers, you can put your heavy-write filesystems on one controller and your heavy-read filesystems on another; each controller can have a different adjustment for how to use the cache. Also, make sure that your RAID controllers have a battery-backed write cache module. If they don't (most do these days), then using write cache could lead to data loss if you lost power.

Also, check things like clustered indexes and fill factors on the high volume tables; make sure there is adequate space for inserts. Rebuild these tables nightly if possible to maintain the optimal fill factor.

You may also want to consider a backup tool like SQL LiteSpeed (I have no connection to the company, btw). My testing has shown this product to be much faster than the native backup tools.

Sorry for all the questions and this long-winded response; a lot of stuff came to mind as I wrote this.

-ec
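The nightly rebuild mentioned above, in SQL Server 2000 syntax (the table name is the hypothetical one used earlier in this thread's sketches, and 80 is an example fill factor, not a recommendation):

```sql
-- Rebuild all indexes on the log table, leaving 20% free space on each
-- page so the day's inserts have room before pages split
DBCC DBREINDEX ('dbo.AppLog', '', 80);
```

With an identity-based clustered index, inserts append at the end of the table anyway, so the fill factor matters most for any nonclustered indexes whose keys do not arrive in order.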