
 Memory Usage when Writing to a large table


NoDaylight
Starting Member

7 Posts

Posted - 2010-06-11 : 18:06:56
Hello.

I have a database with a large table (~2 Billion Rows) which is written to 24 hours a day at a rate of about 400 rows every two seconds. These are cached and written in a bulk insert from a .NET application every 30 seconds.

Having been careful with partitioning and indexes, the required queries are very quick when the server is not being written to (for example, 140,000 rows out of 1.5 Billion in 3 seconds from a "standing start").

However, the actual database is perceived to be very slow. Each bulk insert increases the memory usage until the PC memory is exhausted or my supplied upper memory limit is hit. It seems that the first query has to clean out some of this memory before starting the query itself, which makes the first query after an extended period of writing take several minutes. The performance is the same whatever I cap memory usage to (1GB, 4GB, 10GB...).

Is it possible to prevent this "memory creep" whilst making continuous writes to a database?

Thanks.

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2010-06-11 : 18:37:57
Are you using READ_COMMITTED_SNAPSHOT isolation level?

Could you describe your hardware? How many CPUs, sockets, cores, memory, disk config, etc...?
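
If you aren't using it, something like this checks and enables it (just a sketch - it assumes your database is called MyDB, and switching it on needs exclusive access to the database):

-- 1 = READ_COMMITTED_SNAPSHOT is on
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'MyDB';

ALTER DATABASE MyDB SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;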

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

Kristen
Test

22859 Posts

Posted - 2010-06-12 : 02:40:22
How are you measuring memory? Just the amount allocated to SQL? If so it will "use" all available ( / allocated) memory - i.e. it will take memory as it needs it, until the MAX is reached, but will NOT release it.

This is generally A Good Thing as that memory is used to cache data etc.

Assuming this is a dedicated server you should set SQL's MAX memory to be a bit less than the total to allow room for the O/S. I forget the exact recommendation (Tara will remind me I'm sure!) I think it's 1GB per CPU (is that physical CPU or core I wonder?)

Definitely don't cut it down to 1GB max!
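
Something along these lines sets it (a sketch - "max server memory" is in MB, and the 10240 (10GB) here is just a placeholder figure; pick whatever leaves enough for the O/S):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 10240;
RECONFIGURE;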

Is the system "fast" when it first starts up and then "slow" later? If not then this is not a memory issue.

How long does the .NET App take to write? (Each 30 seconds it writes 12,000 rows? If that is taking 29 seconds then there is going to be trouble! If it's taking 1 second then there should be no noticeable difference from it not writing at all.)

Tara is right on the money with READ_COMMITTED_SNAPSHOT - you may need to be prepared for TEMPDB size to escalate though - well, keep an eye on it at least!
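
To keep an eye on the version store in TEMPDB, something like this does the job (illustrative only):

-- version store pages are 8KB each
SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_MB
FROM tempdb.sys.dm_db_file_space_usage;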

NoDaylight
Starting Member

7 Posts

Posted - 2010-06-12 : 03:34:19
Thanks for prompt replies!

I see the same issue on my dev machine (Quad core i7, 12GB, Win7 x64, SQL 2008 SP1) and the client test machine (Core Duo, 4GB, 2k3 Server 32bit, SQL 2008 SP1). Both machines only have one hard disk with a small system partition and a large data partition.

The machine is not dedicated - It's storing the output of a mathematical simulation which runs on the same machine. Writing takes about half a second every thirty seconds. It doesn't slow down particularly for very large table sizes.

To replicate the problem quickly on my dev machine, I write data as fast as I can and then suspend it whilst issuing the query. It takes over two days to see the same issue on the test machine writing in real time.

I hope I'm not misrepresenting the issue! I'm "measuring memory" by seeing the trace creep up and up on task manager! I've never seen this issue occur before SQL hits whatever upper limit I've defined (Say 4GB on my 12GB machine, so I see task manager "flat-line" at about 7GB). Under these circumstances the first query is very slow, but subsequent ones are fast even after issuing DBCC FREEPROCCACHE / DROPCLEANBUFFERS. (Cue chorus of "Well, Obviously!")

Will try READ_COMMITTED_SNAPSHOT.

Thanks.

Kristen
Test

22859 Posts

Posted - 2010-06-12 : 04:58:00
" The machine is not dedicated"

You need to make sure you restrict SQL Server memory to allow enough "spare" for both the O/S and that application.

" I'm "measuring memory" by seeing the trace creep up and up on task manager! "

Yup, that behaviour is normal (as per my explanation above). There are other ways to check how much memory is actually "busy" within SQL Server (Performance Monitor for example)
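
Or via the DMVs, something like this (a sketch):

-- how much memory SQL Server has actually taken vs. what it is aiming for
SELECT counter_name, cntr_value / 1024 AS value_MB
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');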

"It takes over two days to see the same issue on the test machine writing in real time."

Sorry for being dim, but do you mean "Two days to see all memory used up" (which would be normal) or "Two days for SELECT queries to become slow" (which would be strange, but might be explained by a need for index rebuilds or update statistics or inappropriate database growth settings or somesuch)?

The single-disk is definitely an issue IMHO, for a 12,000 row INSERT every 30 seconds. Moving the LDF file to a separate disk subsystem optimised for sequential access (I'm not a hardware bod, but I think that's RAID1 or RAID1+0 or somesuch), with the data file (MDF/NDF) on a random-access optimised drive (RAID5 maybe?), would help.
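
Moving the log file later on is easy enough - something like this (a sketch; the logical file name and the path are made up, and you have to physically copy the LDF while the database is offline):

ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, FILENAME = 'E:\SQLLogs\MyDB_log.ldf');
ALTER DATABASE MyDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- copy the .ldf file to E:\SQLLogs\ here, then:
ALTER DATABASE MyDB SET ONLINE;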

"Under these circumstances the first query is very slow, but subsequent ones are fast even after issuing DBCC FREEPROCCACHE / DROPCLEANBUFFERS. (Cue chorus of "Well, Obviously!")"

No "well obviously" here! I'm not understanding this ...

Restart SQL Service (ie. start with clean slate)

Issue a query and its slow

Re-issue the same query and it's fast. If the query is identical (or parameterised but with the guts of it identical), then the query plan will be reused from cache. If it references the same data then that data will be in cache, so the data will be reused (as well).

I'm OK with those steps.

However, if you then do DBCC FREEPROCCACHE / DROPCLEANBUFFERS you should be back to having nothing cached (unless you are querying newly INSERTed data, i.e. not yet written to disk - use CHECKPOINT before DROPCLEANBUFFERS if that is potentially the case).

It should be slow again at this point, exactly the same as on first startup.
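
In other words the "clean slate" between runs would be something like (illustrative):

CHECKPOINT;              -- flush dirty (newly inserted) pages to disk first
DBCC DROPCLEANBUFFERS;   -- then empty the data cache
DBCC FREEPROCCACHE;      -- and throw away cached query plans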

Maybe the data is also in the cache on the disk controller or somesuch? Or maybe SQL Server's grabbing of memory, first time around, makes a difference? (Seems unlikely to me though ...) SQL Server will hold on to that memory after getting it the first time, so that WILL be different second time around.

Any idea how long the .NET APP takes to do the 12,000 bulk inserts? (My interest is that I expect that could be optimised, but if it is taking a second, or less, there is no point - if it is taking 29 seconds then there is every point!)

NoDaylight
Starting Member

7 Posts

Posted - 2010-06-12 : 05:21:38
On the 4GB machine, I have SQL limited to 2GB. It takes less than a second to log 12,000 rows (it takes my test app 2 seconds to write 72,000 rows).

The table schema is:

LogTime as DateTime NOT NULL, ItemIndex as Int NOT NULL, DataValue as VARBINARY(MAX) (which can hold anything from 4 bytes to 10kB)

ItemIndex is a foreign key.

No Primary Key, but the clustered index is LogTime. I have a separate index (ItemIndex ASC, LogTime DESC) to help a rarely used query.
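In DDL terms it's roughly this (a sketch - the index names here are made up, and the foreign key constraint is left out):

CREATE TABLE Data
(
    LogTime   DATETIME       NOT NULL,
    ItemIndex INT            NOT NULL,  -- foreign key to the items lookup table (constraint omitted in this sketch)
    DataValue VARBINARY(MAX) NULL       -- anything from 4 bytes to 10kB
);

CREATE CLUSTERED INDEX IX_Data_LogTime ON Data (LogTime);
CREATE NONCLUSTERED INDEX IX_Data_ItemIndex_LogTime ON Data (ItemIndex ASC, LogTime DESC);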

1. I start SQL Server. My table is 200GB and 1.5 Billion rows.

2. I issue the query:
SELECT LogTime, ItemIndex, Value FROM Data WHERE LogTime BETWEEN...
Takes less than 10 seconds to retrieve 10 minutes of data.

3. I use my test app to write a couple of days worth of data. The memory usage tops out. I stop the test app.

4. I issue the exact same query, it takes 4 minutes.

5. I issue it again, it takes seconds.

6. Execute DBCC FREEPROCCACHE / DBCC DROPCLEANBUFFERS

7. Issue the same query, it takes less than 10 seconds.

Kristen
Test

22859 Posts

Posted - 2010-06-12 : 05:39:59
Very helpful synopsis, thanks.

(7) is very strange.

I can think of a number of things, but not sure they are likely to be relevant

(3) will most probably mean that the statistics are mucked up, or SQL thinks they are mucked up. However, reissuing the original query at (4) can, of course, safely reuse the original statistics (indeed, any query NOT trying to query latest / new data can safely use old statistics). Maybe SQL thinks it needs to rebuild the statistics first? Try turning off automatic statistics rebuilding?

Either that or force a statistics rebuild between (3) and (4) and see if (4) then runs in 10 seconds, instead of 4 minutes.
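
Something like this (just a sketch - the first statement is per-database, so the database name is made up; the second assumes the table is called Data as per your query):

-- option 1: turn off automatic stats updates for the test
ALTER DATABASE MyDB SET AUTO_UPDATE_STATISTICS OFF;

-- option 2: force a rebuild between (3) and (4)
UPDATE STATISTICS Data;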

NoDaylight
Starting Member

7 Posts

Posted - 2010-06-12 : 06:14:52
Odd, isn't it?

I'm trying the Statistics rebuild but it's very slow, and I now have to go out for the rest of the day!

Thanks for replies.


NoDaylight
Starting Member

7 Posts

Posted - 2010-06-12 : 16:54:19
It's something to do with the first access to the table - The memory issue is a sideshow, I think. Here's another test:

1. Write 8 million rows (of the above schema) to a new table.
2. Restart SQL Server without reading from the table.
3. Issue a simple query - result takes 17 seconds.
4. Restart SQL Server again
5. Issue the same query - result takes under a second.

So something is being rebuilt. If I issue -
UPDATE STATISTICS [TestTable]
.. it takes about 15 seconds, then the first actual query is fast, even after restarting or clearing the buffers.

I think this will explain the longer delay when reading against a table with 50 or 60 million freshly-written rows in it.

Is the VARBINARY(MAX) causing this? I need it, as some values can be over 8k. Any tips?

NoDaylight
Starting Member

7 Posts

Posted - 2010-06-12 : 18:08:07
.. and just to keep things interesting, I don't think it's to do with statistics.

By default, statistics aren't updated until the first read. But even if I turn off auto_update_statistics and run the same test as the last post, I get the same result (slow first, then fast even after restart), even though the STATS_DATE value is still NULL after queries become fast!
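
(For what it's worth, I'm checking the statistics dates with something like this - the table name is from my earlier test:)

SELECT s.name, STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('TestTable');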

NoDaylight
Starting Member

7 Posts

Posted - 2010-06-13 : 04:27:40
Success!

Just to complete the thread, it is the statistics.

If I set AUTO_UPDATE_STATISTICS_ASYNC to true, but manually update the table statistics after just the first SqlBulkCopy group (about 1000 rows, but enough to be fairly representative), I get good performance straight away with the first query.
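
For the record, the relevant pieces look roughly like this (the database name is made up; the UPDATE STATISTICS is issued from the .NET app once the first batch is in):

-- let queries run against existing statistics while new ones build in the background
ALTER DATABASE MyDB SET AUTO_UPDATE_STATISTICS_ASYNC ON;

-- then, after the first ~1000-row SqlBulkCopy batch:
UPDATE STATISTICS Data;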

Thanks Kristen, you pointed me onto the right path.

Kristen
Test

22859 Posts

Posted - 2010-06-13 : 06:02:06
Glad you got it solved. You need to run a statistics update often enough for the statistics to stay "fresh", particularly if you have Auto Update Statistics turned off, as that's the normal life-line that stops queries turning slow (at the cost of some response time, as you have seen). If you have slack time each night I would suggest doing it daily (why wait until the weekend, with degrading performance each day through Friday?), but most SQL shops seem to do housekeeping at weekends only ...

But having said that, on a very large table adding another several million rows is unlikely to distort the statistics much, even if it gives SQL itchy-pants to be re-updating them!

If I have understood correctly, it takes days for the live site to hit "slowdown", so if you can do a nightly rebuild of stats that should mean you can leave the automatic stats update turned ON - as a safety net in case the nightly ones fail, or some huge "catch up bulk import after system failure" occurs, etc.

If not already doing so you may need to use FULLSCAN in the Update Statistics. I found that query plans were unreliable without it, and we now use it on all tables' update statistics as "belt and braces" (although we implemented that as a "fix" back in SQL2000 and whatever the issue was may have improved since)
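
e.g. something like this (a sketch - the index name is made up; the STAT_HEADER output shows "Rows" vs "Rows Sampled", which tells you whether the last update was a full scan or a sample):

UPDATE STATISTICS Data WITH FULLSCAN;

-- check what the last update actually sampled
DBCC SHOW_STATISTICS ('Data', IX_Data_LogTime) WITH STAT_HEADER;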

" Is the VARBINARY(MAX) causing this? I need it, as some values can be over 8k. Any tips?"

Can't think of anything. If you need it then you need it! Just make sure you aren't doing SELECT * anywhere, particularly where that column is NOT then used.

I would have a look at the query plans to make sure they are using the indexes you expect - but I expect on billion-row-tables (not got any of them here!) you very soon find out if a query is NOT using an index?!

And don't forget to set SQL's memory to allow room for O/S and the other APP, but set the amount available to SQL as high as you can within that.

By the by, you said your manual update of Stats took 15 seconds, but saved 4 minutes on the query ... I reckon your command defaulted to using "Sample", and maybe the previous SQL Auto Update Stats was using "FullScan" (hence the 4 minutes longer). You may want to just review whether the Stats you now have are only "Sample" - I have no idea which method you actually need; as I said above we use FullScan, but clearly if you get identical query plans with "Sample" that would save a LOT of time updating them. If you have enough slack time overnight, though, I think I would just opt for FullScan.