KidSQL
Yak Posting Veteran
88 Posts
Posted - 2005-06-08 : 11:24:29
Can anyone tell me if there is a very rough and general rule of thumb regarding table sizes and their correspondence to physical memory size? I realize the question I'm posing cannot be answered unambiguously due to the variables involved, but if anyone has any clue as to a simple column/row vs. memory allocation relationship on a table with no keys (or 1 key, 2, etc.), I would be interested in knowing.

Or, does it strictly follow that the memory allocation is calculable purely from the size the datatypes occupy (so 20 rows of varchar data of a given length will mean 20 times that, etc.)?

Thanks.
KLang23
Posting Yak Master
115 Posts
Posted - 2005-06-08 : 13:34:19
There are many components at many levels here.

Data is stored in 8K "blocks" (SQL2K), and that is the unit of transfer from disk to memory. Once in memory, the blocks are processed and decomposed. There are "system" blocks that contain mapping information as well as blocks that (of course) contain data. The data blocks have pointer sections in them that assist in the location of individual columns. NULLable and variable-length columns require more mapping information than fixed-length columns, and "Text" datatypes require, well..., a slightly different mapping altogether.

Add indexes, RI rules, etc., and there is no way the layman can accurately calculate the storage requirements by rule of thumb. There are calculators that will assist you in estimating disk requirements for a given schema, but translating that into memory requirements is a little different. There may be memory calculators for a given schema, and that is what it will take to make an accurate estimate.

The general rule is to keep as many active blocks in memory as possible, and that means buying as much memory as you can afford.

Sorry if this was too generalized - and these are just my opinions (based on a little experience - I'm really old.)
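To make the above concrete, here is a rough back-of-the-envelope sketch (mine, not KLang23's) in Python, following the per-row size estimation formula described in the SQL Server 2000 documentation: a fixed row header, the fixed-length data, a null bitmap covering every column, and an offset array for variable-length columns. It deliberately ignores indexes, fill factor, text/image pages, and free space, which is exactly why such estimates are only a starting point.

```python
import math

def estimate_row_size(num_cols, fixed_data_bytes, num_var_cols=0, max_var_bytes=0):
    """Rough per-row byte estimate for a SQL Server 2000 data row.

    Components: 4-byte row header, fixed-length data, null bitmap
    (2 bytes plus 1 bit per column), and, if any variable-length
    columns exist, a 2-byte count plus a 2-byte offset per column,
    plus the variable data itself at its maximum size.
    """
    row_header = 4
    null_bitmap = 2 + math.ceil(num_cols / 8)
    var_overhead = (2 + num_var_cols * 2) if num_var_cols > 0 else 0
    return row_header + fixed_data_bytes + null_bitmap + var_overhead + max_var_bytes

def rows_per_page(row_size):
    """Rows fitting on one 8K page: 8096 usable bytes, 2-byte slot entry per row."""
    return 8096 // (row_size + 2)

# Example: int (4) + datetime (8) fixed, plus one varchar(50) column.
size = estimate_row_size(num_cols=3, fixed_data_bytes=12,
                         num_var_cols=1, max_var_bytes=50)
print(size)             # 73 bytes per row (worst case)
print(rows_per_page(size))  # 107 rows per 8K page
```

Note this is a disk-layout estimate; as the post says, translating it into a memory requirement depends on which blocks are actually active.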
Kristen
Test
22859 Posts
Posted - 2005-06-08 : 13:41:10
Here's a different answer, but I may be answering the wrong question!

On a SQL box you want 2GB of memory for SQL Server - that's the maximum the bog-standard version can handle. It is prudent to install more than that, particularly if you have other applications running on the server - if it's just SQL and Windows then it won't use much more, so if the budget is tight then 2GB will have to do. 2GB of RAM is not-a-lot-of-$ these days.

If the application needs more than 2GB RAM (i.e. it's a pretty serious application, with a pretty large database, and either huge analytical processing needs, or lots of users) then [with a fair sprinkling of Generalisation here!!] it needs some fairly expensive SQL licenses, and a Posh Server, and thus the budget is unlikely to be troubled by cash-for-more-RAM.

Kristen
KidSQL
Yak Posting Veteran
88 Posts
Posted - 2005-06-08 : 14:30:31
Thank you for your responses. Actually, I was looking for something along the lines of what KLang23 wrote, which I found very helpful, indeed.