

akasaintme
Starting Member

3 Posts

Posted - 2012-01-11 : 02:03:15
Hi,

We are developing CMS software that will eventually be used by around 300-500 websites. All websites will have the same database structure, but their content will be different. I have a few questions in mind:

1. Would a separate database for each website be a good option? We will be making continuous, ongoing changes to the database structure, so keeping the structure of all the databases (tables, indexes, procedures) in sync will be a huge task. All databases will be on the same server. What is the best way to keep all these database structures synchronized with minimum effort?

2. Keeping a single database for all websites would make maintenance easier. There would be a WebsiteID column in each table to identify the website, but we would have to watch out for performance issues if we go with this approach.

We are using ASP.NET with SQL Server 2008 for development. I have also heard about SharePoint. Would SharePoint be useful in this case? Please suggest a technology/methodology that will help us achieve good performance with less maintenance.
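For the shared-database option, the usual pattern is a WebsiteID column (plus a supporting index) on every tenant-owned table. A minimal sketch; the table and column names here are invented for illustration, not from the thread:

```sql
-- Every tenant-owned table carries the WebsiteID it belongs to.
CREATE TABLE dbo.Articles
(
    ArticleID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    WebsiteID int NOT NULL,
    Title     nvarchar(200) NOT NULL,
    Body      nvarchar(max) NULL
);
GO

-- Index leading on WebsiteID so per-site queries don't touch other sites' rows.
CREATE NONCLUSTERED INDEX IX_Articles_WebsiteID
    ON dbo.Articles (WebsiteID)
    INCLUDE (Title);
GO

-- Every application query must then filter on WebsiteID.
SELECT ArticleID, Title
FROM dbo.Articles
WHERE WebsiteID = 42;
```

The performance concern is exactly this: forgetting the WebsiteID filter (or the index) once means one site's query scans every site's rows.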

X002548
Not Just a Number

15586 Posts

Posted - 2012-01-11 : 09:59:31
Some clients demand that their data is not commingled.

I went to an NYC SSUG meeting that discussed this.

THEIR big problem was: how do you do maintenance and backups on 500 databases on 1 server?

They only did nightly backups, with some other magic ... so I asked: are the clients comfortable with the loss of an entire day's data?

The answer: the consultant was NOT ASKED that question, so it wasn't an issue ... they were there to make sure all the nightly backups worked.

Some solutions:

1 database, if your clients will go for it.

Make SURE that you can isolate security down to the row level ... ?????

Client ID as a column in every table???? Do you need a more granular level of data security?
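SQL Server 2008 has no built-in row-level security feature, so row-level isolation on a ClientID column is typically done by forcing all access through filtering views. A sketch only; the `dbo.ClientUsers` mapping table and all names here are assumptions:

```sql
-- Assumed mapping of database logins to the client they belong to.
CREATE TABLE dbo.ClientUsers
(
    ClientID int     NOT NULL,
    UserName sysname NOT NULL
);
GO

-- Assumed multi-tenant base table: every row carries its owner's ClientID.
CREATE TABLE dbo.Orders
(
    OrderID   int IDENTITY(1,1) PRIMARY KEY,
    ClientID  int NOT NULL,
    OrderDate datetime NOT NULL
);
GO

-- Applications query the view, never the base table; the view only
-- returns rows belonging to the caller's client.
CREATE VIEW dbo.vOrders
AS
SELECT o.OrderID, o.ClientID, o.OrderDate
FROM dbo.Orders AS o
JOIN dbo.ClientUsers AS cu
  ON cu.ClientID = o.ClientID
WHERE cu.UserName = SUSER_SNAME();
GO

-- Grant access only on the view, so it is the sole path to the data.
GRANT SELECT ON dbo.vOrders TO ClientRole;
```

This answers Brett's "more granular" question: if view-level filtering on one ClientID column is not enough isolation for your clients, that pushes you back toward one database per client.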



Brett

8-)

Hint: Want your questions answered fast? Follow the direction in this link
http://weblogs.sqlteam.com/brettk/archive/2005/05/25/5276.aspx


Want to help yourself?

http://msdn.microsoft.com/en-us/library/ms130214.aspx

http://weblogs.sqlteam.com/brettk/

http://brettkaiser.blogspot.com/



Kristen
Test

22859 Posts

Posted - 2012-01-11 : 13:18:57
If you have one database then ALL clients MUST update at the same time. That's not necessarily a bad thing ... but if something goes wrong ALL clients will be hammering on your door wanting a fix!

Your worry about rolling out upgrades to multiple databases is probably a bit unfounded, but you do have to tool up for it. Hopefully you are planning a multi-step release procedure - so you start on DEV, then apply the changes to TEST for some manual user-testing, then apply to QA for a formal set of tests, then apply to STAGING (a recent copy of the PRODUCTION database, so you can rehearse the rollout and make sure nothing goofy happens) ... and then finally release to the PRODUCTION database.

So you want a single (or, at most, "very few") script that you run that makes all the database DDL changes, any data-twiddling changes, and updates all SProcs etc.

Once you have that script (for your testing and rollout procedure) it's a small step to be able to run it on any number of client databases.

What we do is as follows:

EVERY (and I mean absolutely every single one, absolutely no exceptions) change to the database structure is scripted; we save each script in a sequentially numbered filename system. Ditto with any data-twiddling scripts - e.g. where you change the Structure and also need to change the nuance of the data to match.
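A sequentially numbered migration script in such a system might look like the sketch below. The guard makes it safe to re-run, and the `dbo.SchemaChangeLog` table, file naming, and column are assumptions for illustration, not something from the thread:

```sql
-- File: 0042_Add_PublishedDate_To_Articles.sql
-- Guarded so the script is harmless on a database that already has the change.
IF NOT EXISTS
(
    SELECT 1
    FROM sys.columns
    WHERE object_id = OBJECT_ID(N'dbo.Articles')
      AND name = N'PublishedDate'
)
BEGIN
    ALTER TABLE dbo.Articles
        ADD PublishedDate datetime NULL;
END
GO

-- Record that this script ran, so each database knows which number it is up to.
INSERT INTO dbo.SchemaChangeLog (ScriptNumber, ScriptName, AppliedAt)
VALUES (42, N'0042_Add_PublishedDate_To_Articles.sql', GETDATE());
```

The sequential numbers are what make the later concatenation step deterministic: every database can be brought forward by running, in order, every script numbered higher than its last recorded one.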

We store every SProc and Trigger in its own file. We NEVER (not ever, absolutely without fail) just change a SProc in the database (e.g. by using Right-Click - Modify). All those source-code files are stored in a revision control system (we use SVN and TortoiseSVN; they are free).

To create a rollout-script for the new version we concatenate all the DDL scripts (since the last rollout) into a single "Upgrade" script, and we concatenate all the Sproc and Trigger scripts, etc., that have a MODIFY date newer than last time into a second script (i.e. we can just pull all the ones that have changed out of the Revision Control System).

Then we try running those upgrade scripts on a restored copy of the TEST system and see what breaks. Rinse and repeat until it's OK, then we push that out to the QA system and so on ...

Rolling that out to N-client databases is just a question of running the scripts. Whether you do that manually, or automatically, is up to you - and perhaps depends a bit on what the length of downtime is and the complexity of the updates (whether a human watching the process is more likely to spot issues than it just "running blind").
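One way to automate "run the scripts against N client databases" from a single server is dynamic SQL over `sys.databases`. This is a sketch; the `CMS_` naming convention and the `usp_ApplyPendingUpgrades` procedure are assumptions about how the deployed script might be packaged:

```sql
-- Run the same upgrade payload in every client database on this server.
-- Assumes client databases share a naming convention (here: 'CMS_...').
DECLARE @db sysname, @sql nvarchar(max);

DECLARE client_dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM sys.databases
    WHERE name LIKE N'CMS[_]%'
      AND state_desc = N'ONLINE';

OPEN client_dbs;
FETCH NEXT FROM client_dbs INTO @db;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Example payload: invoke the upgrade proc already deployed to each database.
    SET @sql = N'EXEC ' + QUOTENAME(@db) + N'.dbo.usp_ApplyPendingUpgrades;';
    EXEC sys.sp_executesql @sql;

    FETCH NEXT FROM client_dbs INTO @db;
END

CLOSE client_dbs;
DEALLOCATE client_dbs;
```

Whether you wrap this in error handling and logging, or keep a human running each database by hand, is the manual-vs-automatic trade-off Kristen describes.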

We use a bug-tracking system called FogBugz (I'd recommend that if you don't currently use anything; up to 2 developers is free if you want to trial it). They have one database per client, and they do several upgrades a year. I've known them to use two systems (that I can remember). One was that when you tried to log in (after some cutoff date) you got a message that your database was being upgraded, and to try again in N minutes. That probably works well if many of your client systems have become dormant [as I suspect is the case with FogBugz]. Other times I get an email to tell me that there will be scheduled maintenance on XXX at YYY. You could probably let clients choose an upgrade slot (using your application) and have the APP insert that "request" into a table of "Upgrade Dates & Times". (Obviously you need some sort of backups, and a route by which the APP can be reverted to the previous version if it all goes wrong.)

With one-database-per-client you can move a client's database to a different machine as a machine becomes overloaded. That's harder to do with multiple-clients-in-one-database. You could decide to put, say, 50 clients per physical database, but if 10 of those clients get particularly busy you may need to split individual clients out to less busy machines ... and if all 50 have minimal activity [at the moment ...] that server is underloaded.
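Moving one client's database to a quieter server is then a standard backup/restore. A sketch; the database name and file paths are made up:

```sql
-- On the overloaded server: take a full backup of the busy client's database.
BACKUP DATABASE CMS_Client042
TO DISK = N'\\fileshare\moves\CMS_Client042.bak'
WITH INIT;

-- On the new server: restore it, relocating the data and log files.
RESTORE DATABASE CMS_Client042
FROM DISK = N'\\fileshare\moves\CMS_Client042.bak'
WITH MOVE N'CMS_Client042'     TO N'D:\Data\CMS_Client042.mdf',
     MOVE N'CMS_Client042_log' TO N'E:\Logs\CMS_Client042.ldf';
```

With the shared-database design, the equivalent move means extracting one client's rows from every table, which is why rebalancing is so much harder there.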

akasaintme
Starting Member

3 Posts

Posted - 2012-01-16 : 23:23:11
quote:
Originally posted by Kristen

[snip - full post quoted above]

Hi,

Thanks for the detailed explanation. We are now thinking of going with the first approach, i.e. a separate database for each website. But I am not sure how to script the database changes. I will be running a script (containing all the changes) against all the databases. Can all database changes be scripted, or is it necessary to use the SQL Server Management Studio wizards for some changes?

Because if there is a need to use a wizard to make some changes, it will be a big task to do it manually for all the databases.

Please let me know which changes can't be easily scripted, and which tools we can use to generate scripts containing the changes. Or will I have to use some 3rd-party tools to generate the change scripts?

Kristen
Test

22859 Posts

Posted - 2012-01-17 : 03:59:49
"Can all database changes be scripted "

Yes

However, you must be disciplined about always creating a script. Even if you use SSMS in order to have a nice GUI to prepare the work, make sure you always script the result, and never let SSMS "Save" the change directly. There is a setting, somewhere, to have SSMS generate a script if you do press Save - so at least if that happened, and you were aware of your accident, you could save / cut & paste the script to a file. I'm not sure if this applies to every action; I've only used it with the table DESIGN tools in SSMS.
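In practice the discipline just means that even a designer-made change ends up as a saved script rather than a one-off click. For example (table and column names are illustrative only):

```sql
-- Instead of letting the SSMS table designer save these changes,
-- script the equivalent statements and keep them in source control.
ALTER TABLE dbo.Articles
    ALTER COLUMN Title nvarchar(400) NOT NULL;
GO

-- Renames can be scripted too, via the system procedure sp_rename.
EXEC sys.sp_rename
    @objname = N'dbo.Articles.Body',
    @newname = N'BodyHtml',
    @objtype = N'COLUMN';
```

Anything the designer can do, it does by emitting T-SQL under the hood, which is why "script everything" is achievable with no wizard-only exceptions.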

boybawang
Starting Member

15 Posts

Posted - 2012-05-02 : 22:40:34
When developing a new database design, there are certain things to consider in how you separate the databases, to avoid repetition and to make debugging the application easier.
   
