Please start any new threads on our new site at https://forums.sqlteam.com. We've got lots of great SQL Server experts to answer whatever question you can come up with.

 All Forums
 SQL Server 2000 Forums
 SQL Server Development (2000)
 Turning off logging for a table / DB?

Author  Topic 

aiken
Aged Yak Warrior

525 Posts

Posted - 2006-01-07 : 14:31:20
One of my most heavily used tables is basically a central repository for session data that's shared by a web server farm. This data is essentially a GUID and a varchar(8000), with no intrinsic RI, FK's, etc.

It seems silly to have every update to this data logged; if the DB server goes down, the web servers go down and the sessions are useless anyway.

I know that informix has an option to turn off transaction logging on a per-DB basis. Is there anything like that in SQL server?

For my purposes, I would be perfectly happy if this table were kept in memory and never even written to disk; the utility here is having a central server, and unlike 99% of the web app I manage, data integrity is just not that important.

(Yeah, it would probably make sense to write some kind of in-memory app that would serve the same function, but this is an existing app that I'm working with, and if there's a quicker way to reduce unnecessary load on the DB, that's what I need to do.)

Cheers
-b

Kristen
Test

22859 Posts

Posted - 2006-01-07 : 14:48:36
You could set the database recovery model to SIMPLE, instead of FULL, but perhaps that's not what you are meaning ...

Kristen

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2006-01-07 : 15:54:29
SQL Server does not have the option to turn off transaction logging.

As Kristen suggested, you can set the recovery model to SIMPLE. This will result in the transaction log being truncated at each checkpoint. If your transactions are simple, like inserts or updates to single rows, the effort devoted to transaction log activity will be minimal.

If your database also has data you want to persist and would want to be able to recover, you may want to move your session data table to a separate database with simple recovery and leave the other data in a database with full recovery.
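If it helps, the change is a one-liner in SQL Server 2000 syntax ("SessionDB" is a made-up name for the session database):

```sql
-- Put the session database in SIMPLE recovery so the log is
-- truncated at each checkpoint ("SessionDB" is hypothetical).
ALTER DATABASE SessionDB SET RECOVERY SIMPLE
```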

If the only activity on the database server is this one table, all the data should be in memory in the data cache if you have a good amount of memory. With 1 GB of memory, you would have enough data cache to hold over 100,000 8K pages in memory. If the average session data row is 500 bytes, that is about 1.6 million rows.
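The arithmetic behind those figures, computed conservatively from the numbers above:

```sql
-- 1 GB cache  = 1,048,576 KB / 8 KB per page  = ~131,000 pages (100,000+)
-- ~8,000-byte page / 500-byte row             = ~16 rows per page
-- 100,000 pages * 16 rows per page            = 1.6 million rows
SELECT 100000 * (8000 / 500) AS approx_rows_cached  -- 1,600,000
```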

Are you experiencing some kind of performance problem with this application?





CODO ERGO SUM

aiken
Aged Yak Warrior

525 Posts

Posted - 2006-01-07 : 16:15:49
Yeah, I'm hoping for something even more severe than SIMPLE. Like, "don't log anything and treat the table as an in-memory table variable."

As usual, the real answer is a more fundamental app change, but in the meantime anything I can do to lighten the load on the DB from storing this data that really doesn't belong in a DB (it's not relational, and we don't care if it gets deleted) would be nice.

Cheers
-b

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2006-01-07 : 16:57:39
Why are you concerned about this? Is there some performance problem? It doesn't sound like an application that would put very much strain on a typical database server.

If there is no performance problem, why worry about it?





CODO ERGO SUM

eyechart
Master Smack Fu Yak Hacker

3575 Posts

Posted - 2006-01-07 : 17:52:03
I did some testing with Oracle years ago (on unix), putting the redo logs on a RAM disk. It was insanely fast but somewhat unstable. I was trying to speed up a database import, and it crashed about 90% of the way through. The problem was that when it crashed, I couldn't get a proper crash recovery because all the redo logs were gone. Anyway...

Maybe something like that could be done here. What if you created a RAM disk and put the datafiles for the session DB there? Of course, you would want to use simple recovery mode, and then possibly pin that table into the buffer cache as well. Would be an interesting experiment, I think.
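In SQL Server 2000 the pinning part would be DBCC PINTABLE. A sketch, assuming a database "SessionDB" with a table "SessionData" (both names made up); note that pinned pages are never aged out, so a pinned table can starve the rest of the buffer pool, and the command became a no-op in later versions:

```sql
-- Pin the session table's pages in the buffer cache (SQL 2000 only).
DECLARE @db int, @obj int
SELECT @db  = DB_ID('SessionDB')
SELECT @obj = OBJECT_ID('SessionDB.dbo.SessionData')
DBCC PINTABLE (@db, @obj)
-- To reverse it later:
-- DBCC UNPINTABLE (@db, @obj)
```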



-ec

aiken
Aged Yak Warrior

525 Posts

Posted - 2006-01-07 : 18:56:59
quote:
Why are you concerned about this? Is there some performance problem? It doesn't sound like an application that would put very much strain on a typical database server.


I should have elaborated: we are seeing performance issues. The session data alone is only a couple of million very lightweight transactions a day, but because it resides on the same server as some more expensive transactions, if the server bogs down for a moment due to a DSS or other heavyweight query, a whole lot of session requests queue up, leading to 1) slow page loads for users, and 2) a sudden rush and blocking on the session table when they all hit at once.

It's not an emergency, we're just growing at a rate that will make it a real problem over the next year. Hence the question: is there an easy fix in the DB, or do I really need to move this stuff out of the DB altogether?

quote:
What if you created a RAM disk and put the datafiles for the session DB there. Of course you would want to use simple recovery mode and then possibly pin that table into buffer cache also. Would be an interesting experiment I think.


A very clever idea, thank you! We'd need some kind of startup script to stick a copy of a blank DB onto the ramdisk at server reboot, but that shouldn't be a problem at all. I'll give it a go and report back.
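For anyone trying the same thing, the startup step could be a script along these lines; every path and name here is invented, and it assumes xp_cmdshell is available to copy the blank template files onto the RAM disk (R:):

```sql
-- Recreate the session DB on the RAM disk after a reboot by
-- attaching a blank template kept on permanent storage.
EXEC xp_cmdshell 'copy /Y D:\Templates\SessionDB.mdf R:\'
EXEC xp_cmdshell 'copy /Y D:\Templates\SessionDB_log.ldf R:\'
EXEC sp_attach_db @dbname    = 'SessionDB',
                  @filename1 = 'R:\SessionDB.mdf',
                  @filename2 = 'R:\SessionDB_log.ldf'
```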

Cheers
-b

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2006-01-07 : 21:00:54
It sounds like your performance problem is due to having applications that have different performance needs on the same server. DSS applications take a lot of resources on an intermittent basis. Your transaction system probably has light resource needs, but needs to have fast response, and can't wait until that 5 minute DSS query completes.

I doubt that the performance of the transaction log is the issue, but is more likely due to intense CPU, memory, and disk usage for DSS queries.

Your best bet is probably just to move the web application to a server by itself.






CODO ERGO SUM

aiken
Aged Yak Warrior

525 Posts

Posted - 2006-01-07 : 21:54:06
Oh yeah, I hear ya. It's definitely not the performance of the transaction log itself that's the issue here, but the fact that those heavy CPU, memory, and disk demands from DSS and DSS-like queries are happening on the same server. But if we can dramatically improve the transaction log performance, we'll at least be treating the symptoms.

It's only a matter of money, hardware, and time. If we could afford three great servers to replace our one main SQL server, we'd be fine. If we could rearchitect the app to make more efficient use of resources and use a dedicated in-memory server for session data, etc., we'd be fine. And if we could afford to just pay someone else to deal with it, we'd be fine.

Right now, we're running 5 web servers against two database servers (one beefy IBM unit for the main app, one moderate one for logging), and we really should have 8 web servers against 4 DB servers. And the app itself has numerous inefficiencies that need to be addressed.

But we have to work within the time, money, and hardware constraints that we have, so we need to make optimal use of the resources we have while working on the underlying issues of money, hardware, and time. A quick fix like the ramdisk idea may save us $40k on a dedicated server (between hardware, engineer time, SQL licensing, support costs, etc.), which lets us spend that money on an appdev guy to update the app, etc.

Yes, mixing different workloads on one server is a bad idea, but when you've got more work than you have servers, that's what happens. I'm deeply jealous of the guy who can just say "hmm, must need another server." Believe me, if I could just say "order another server and SQL server license and we'll move some of the workload," I would!

Cheers
-b

aiken
Aged Yak Warrior

525 Posts

Posted - 2006-01-08 : 00:58:07
Ok, so I was a little defensive there. Sorry! We are just one of those companies where every dollar and every piece of code has to go as far as possible.

Cheers
-b

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2006-01-08 : 01:36:19
Have you used performance monitor to look at system resource usage? When I see a server performing badly, I start a performance graph running to look at memory paging, CPU, processor queue length, disk IO on each disk, disk queue length, network card IO, context switches, user connections, logins per second, batches per second, cache hit ratio, table scans, index scans, database page reads and writes, database page lookups, and others.

Usually a perfmon graph that captures these items at 15-minute intervals will identify some metrics to zoom in on without consuming a lot of resources for the performance monitor, and it will display a whole day's performance on one page. Once I spot items I am interested in, I start another monitor capturing data at 1-minute intervals. This lets me see the performance over a shorter period to see how spiky the load is. I monitor the system over time to see what is going on, see if the current workload is typical, and compare it to other servers.

Another thing to look at is the performance of queries using Profiler. I usually set up a database to capture Profiler output. I capture the workload to a table for an hour or two during the peak time when performance is worst, and do that for several days. I can then run queries against this data to see which procedures are using the most resources. In a transaction processing system, it is fairly typical that a small number of procedures account for the majority of the workload, and optimizing those few procedures can produce dramatic improvements.
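For example, with the capture saved to a table called TraceData (a name I'm inventing; the columns are the standard Profiler save-to-table ones), a first-pass query might look like this:

```sql
-- Top statements by total duration from a Profiler capture.
-- EventClass 10 = RPC:Completed, 12 = SQL:BatchCompleted.
SELECT TOP 20
    CAST(TextData AS nvarchar(400)) AS query_text,
    COUNT(*)                        AS calls,
    SUM(Duration)                   AS total_duration,
    SUM(CPU)                        AS total_cpu,
    SUM(Reads)                      AS total_reads
FROM TraceData
WHERE EventClass IN (10, 12)
GROUP BY CAST(TextData AS nvarchar(400))
ORDER BY total_duration DESC
```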

Another thing you can do is use the Index Tuning Wizard with the Profiler output to see if it recommends creating any indexes. I usually capture Profiler output during the heavy workload for at least 3 different days, and run the Index Tuning Wizard against each. If I see that it recommends the same index each time, I go ahead and create it. Then I repeat this cycle until it cannot recommend more indexes. This can often give you dramatic performance improvements. You should only run the Wizard on a production system during times of minimal usage.

It is a lot easier to justify hardware upgrades if you have real data to base your decision on. If you can show people that CPU is running at 80% with an average queue length of 20, and that you have done everything else you can to improve performance, then you know that a more powerful box is what is needed. Nothing is worse than spending a bunch of money and not getting the performance improvement you need.

It may be that something a lot cheaper than a complete server replacement can improve performance. One thing I always consider is more memory, if the server is not already maxed out. With more memory, the server has less need for disk IO and for CPU to perform IO. If you see a lot of disk read IO or long disk queue lengths, then more memory can cut that down by letting you cache more data.

On the idea of purchasing an additional server, you may not need that powerful a box for your web application, just one that is not being hammered by DSS queries.

Of course, you may have already gone through all this, but I am just trying to throw ideas out there from my own experience. I have had more than my share of trouble dealing with similar issues.


Edit:
I doubt that the performance bottleneck is IO to the transaction log. I have never seen a transaction log as a performance bottleneck, and DSS systems are usually much more read intensive. Performance monitor on disk queue length, disk writes/sec, and disk write bytes/sec on the disk with the transaction log should tell you right away what the truth is.



CODO ERGO SUM

Kristen
Test

22859 Posts

Posted - 2006-01-08 : 05:24:25
"I doubt that the performance bottleneck is IO to the transaction log"

Me too, but I think I know what aiken is worrying about, as we have a similar setup. If a user calls up a simple web page, but that page needs to log some simple data, and that logging slows down for whatever reason, a) the user may give up, or b) the user may press refresh, which only doubles the problem!

We have a SQL Server Load test (each minute we log key statistics like @@CPU_BUSY and so on) and if a threshold is exceeded we deny new sessions. The existing folk can carry on, but new folk have to wait. As people finish up and leave the site, and whatever massive query comes to an end, the system self-corrects and allows people back in again.
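The check itself can be as simple as sampling @@CPU_BUSY twice and comparing the delta against a threshold. A sketch (the 5-second sample window and 80% threshold are arbitrary, not what we actually use):

```sql
-- Approximate CPU-busy percentage over a short sample window.
DECLARE @busy1 int, @idle1 int, @busy2 int, @idle2 int
SELECT @busy1 = @@CPU_BUSY, @idle1 = @@IDLE
WAITFOR DELAY '00:00:05'
SELECT @busy2 = @@CPU_BUSY, @idle2 = @@IDLE

-- Busy ticks as a share of busy + idle ticks in the window.
IF 100 * (@busy2 - @busy1) >
    80 * ((@busy2 - @busy1) + (@idle2 - @idle1))
    PRINT 'Over threshold - deny new sessions'
```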

Kristen

RonPaulIn2012
Starting Member

1 Post

Posted - 2011-12-28 : 15:05:18
I have an application for which logging is totally unnecessary; logging the information is just a waste of resources.

SQL Server should have an option to turn logging off.

There is no reason to write to disk when the owner of the data has no desire for an audit trail.

GilaMonster
Master Smack Fu Yak Hacker

4507 Posts

Posted - 2011-12-28 : 16:30:44
The transaction log is NOT an audit trail.

Do you want a database that will go suspect (and need restoring from backup) every time a transaction rolls back or a statement fails? A database that will have to be restored from the last backup any time there's an unexpected shutdown of SQL? A database that you cannot replicate, that you cannot log ship or mirror, and whose backups cannot even be restored consistently?

If yes to all of those, then logging is unnecessary.

P.s. 5 year old thread.

--
Gail Shaw
SQL Server MVP
   
