Bex
Aged Yak Warrior
580 Posts |
Posted - 2009-11-24 : 09:21:20
I have a client database and can calculate how big it may get. However, I have been told that it is industry standard to add another 30% to the predicted size for contingency. There was also something about this being better for performance, and something to do with defragging (this was second-hand info delivered via a Project Manager, so I have no idea of the context of these statements). I was assuming that perhaps the client meant they do not want the database to autogrow (hence the performance reference), but I'm not sure about the index defragging part.

So, my question is this: is there an industry standard for the amount of contingency (in %) that should be allowed for, and what does defragging or fragmentation have to do with it? When we size a database, should we assume it will become fragmented and therefore need to allocate additional pages? I don't understand.

Oh, and we have a daily maintenance task that defrags the indexes.

Hearty head pats
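For what it's worth, the usual way this plays out in practice is to pre-size the data and log files up front (predicted size plus your contingency) and set a fixed-MB growth increment instead of the default percentage, so autogrow events are rare and don't stall the workload. A minimal sketch, where the database name, file paths, and the 10 GB predicted size are all hypothetical placeholders:

```sql
-- Pre-size the database to the predicted size plus ~30% headroom,
-- and use a fixed growth increment rather than a percentage so any
-- autogrow that does happen is small and predictable.
-- ClientDB, the paths, and the sizes below are illustrative only.
CREATE DATABASE ClientDB
ON PRIMARY
(
    NAME = ClientDB_data,
    FILENAME = 'D:\Data\ClientDB.mdf',
    SIZE = 13GB,          -- 10 GB predicted + 30% contingency
    FILEGROWTH = 512MB    -- fixed increment, not the default 10%
)
LOG ON
(
    NAME = ClientDB_log,
    FILENAME = 'E:\Log\ClientDB.ldf',
    SIZE = 2GB,
    FILEGROWTH = 512MB
);
```

The performance angle is that each autogrow is a synchronous file-extension operation (especially noticeable on the log file, which can't use instant file initialization), so growing in a few large planned steps beats growing in many small unplanned ones.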