jose_piratilla
Starting Member
7 Posts |
Posted - 2008-10-13 : 11:53:32
|
First of all, hello to everybody. This forum will surely become one of my future references.

I have a question; maybe someone can help me. We are designing a product to be used by multiple clients on the same computer. The product will be based on a SQL Server database. There is no common data between clients, but the data structure is the same (that is, the same table and relationship definitions). So, what is the best solution: an independent database for each client, or an IdClient column added to almost all tables to differentiate the clients?

I think the first is best, because you can isolate each client's data set. But I don't know whether running more than 40 databases will eat a lot of memory.

Thank you |
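To make the two options concrete, here is a rough sketch of the shared-database variant (table and column names are only illustrative, not from the actual product):

```sql
-- Option B: one shared database; every tenant-owned table carries IdClient.
-- All names here are hypothetical.
CREATE TABLE Client (
    IdClient INT NOT NULL PRIMARY KEY,
    Name     NVARCHAR(100) NOT NULL
);

CREATE TABLE Booking (
    IdBooking   INT NOT NULL,
    IdClient    INT NOT NULL REFERENCES Client (IdClient),
    BookingDate DATETIME NOT NULL,
    -- Leading IdClient in the key keeps each client's rows together.
    PRIMARY KEY (IdClient, IdBooking)
);

-- Every query must then remember the tenant filter:
SELECT IdBooking, BookingDate
FROM Booking
WHERE IdClient = 7;
```

In option A, each client simply gets its own database with the same tables minus the IdClient column and the filter.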
|
visakh16
Very Important crosS Applying yaK Herder
52326 Posts |
Posted - 2008-10-13 : 12:54:35
|
I also think the first option is best, one of the main reasons being data security for the clients. Storing them in the same database requires views to be created on the tables to make sure each client has access only to its own data and no group can access another's sensitive data. Security roles would also need to be maintained for each client and for each table, which makes administration difficult. Localisation can also be difficult in case the clients have different types of data being handled in the database. So I think it's better to give each client a separate database, so that we can define roles specifically for each one; it also makes management much easier. |
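As a sketch of what that view-based approach would mean in a shared database (illustrative names; one such view per client per table, which is exactly the administrative burden described above):

```sql
-- One filtered view per client, so the client's role only sees its own rows.
CREATE VIEW dbo.Booking_Client7
AS
SELECT IdBooking, BookingDate
FROM dbo.Booking
WHERE IdClient = 7;
GO

-- Grant the client's role access to the view only, never to the base table.
GRANT SELECT ON dbo.Booking_Client7 TO Client7Role;
DENY  SELECT ON dbo.Booking         TO Client7Role;
```

Multiply this by every table and every client, and per-client databases start to look much simpler to administer.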
|
|
jose_piratilla
Starting Member
7 Posts |
Posted - 2008-10-13 : 15:55:19
|
Actually, the clients do not have access to the database. The software is sold as a service, so the clients can only use it via the web. User security is handled in the front end, and the SQL Server can only be accessed from the host. I think we could take advantage of SQL Server security, but I'm on a team that doesn't think the way I do ;).

My only concern is performance: memory and CPU. So, reformulating the question: which is the better choice in terms of performance, one database of 100 MB with 10 clients accessing it, or 10 databases of 10 MB with 1 client accessing each database? I think the second is best, because with the first option every SELECT needs to filter by IdClient.

But thank you a lot, visakh16. You gave me another reason to change our current schema. |
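For what it's worth, the per-SELECT cost of that IdClient filter is usually small if IdClient leads the relevant indexes; a sketch on a hypothetical table:

```sql
-- With IdClient as the leading key column, the filter becomes an index seek
-- on that client's own range of rows rather than a scan of everyone's data.
CREATE CLUSTERED INDEX IX_Booking_Client
ON dbo.Booking (IdClient, IdBooking);
```

So the filter itself is rarely the deciding factor; the isolation and recovery arguments usually weigh more.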
|
|
Transact Charlie
Master Smack Fu Yak Hacker
3451 Posts |
Posted - 2008-10-14 : 05:19:55
|
Go with the many-database approach. All of your concerns about speed and performance should come after your concerns about data security and disaster recovery.

If you have one big database with all of your clients in it, it only takes one malformed query to wipe out all your clients in one go. It also only takes one incident to invalidate your database file (hardware defect, whatever) -- and then, while you are recovering that database, all your clients are down, not just one.

I work with a set-up similar to the one you describe -- all customer access goes through a front-end web app. Unexpected things can and will happen, so I think it's a much better idea to go with isolated databases.

-------------
Charlie |
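Charlie's "one malformed query" point can be illustrated with a one-line mistake (hypothetical table):

```sql
-- Intended: reset one client's trial flag.
UPDATE dbo.Account SET IsTrial = 0 WHERE IdClient = 7;

-- Actually run (WHERE clause forgotten): every client's flag is reset.
UPDATE dbo.Account SET IsTrial = 0;
```

With one database per client, the blast radius of the second statement is a single client.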
|
|
jose_piratilla
Starting Member
7 Posts |
Posted - 2008-10-14 : 18:34:33
|
I think exactly the same. But the person responsible for the database defends his position by arguing that one database consumes fewer resources. So I would like to reduce the question to the field of system performance. I have a lot of reasons to change the current schema, but can you help me answer this question in performance terms? Thank you a lot. |
|
|
tkizer
Almighty SQL Goddess
38200 Posts |
|
Transact Charlie
Master Smack Fu Yak Hacker
3451 Posts |
Posted - 2008-10-15 : 06:16:22
|
Hi tkizer. The difference in approach probably comes down to scale and experience.

We're still a relatively small company, with maybe 100 clients (each of which is a company that can have x thousand employees who make up our data). I imagine that you are maintaining *much* more data than we are, with a *much* more..... complete? is that the word I'm looking for... disaster recovery / continual service requirement.

Add to that the fact that some of our clients are on different versions of our code (with slightly different schemas), and a database per client is probably the only way we can go right now. Also, some of our clients like to host their database on their own servers.

How is the isolation of your clients' data guaranteed in your database (process / company keys / different schemas)?

At the moment in my company, just to keep everything working and moving forward, a staggeringly large number of live data fixes and modifications need to be made on an ongoing basis. If we had every client in the same database, it would only take one carelessly formed query to create a big, big problem. (I'm completely aware that this isn't a good state to be in, but given the nature of the product I think it's pretty much unavoidable.)

jose_piratilla -- tkizer's got all the experience, so I'd treat her opinion as solid gold.

-------------
Charlie |
|
|
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2008-10-15 : 13:18:36
|
Yes, we have a full disaster recovery/business continuity solution in place. Our production systems are all on clusters. Most of my primary support is on a 4-node (active/active/active/active) cluster with 11 SQL Server 2005 instances on it. We have a duplicate environment set up 300 miles away and use asynchronous database mirroring between the two sites.

For security of the customers' data, it's as easy as having the CompanyId in the tables that need it. Every query references it. We don't use different schemas; they are all in dbo.

For performance of the one-database approach, you can scale it out by using partitioned views. This is what large companies do with large databases, such as MySpace.com and Verizon Wireless.

The difference between our environment and yours perhaps might be that we designed the system knowing that we'd have thousands of customers.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/
Subscribe to my blog |
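For reference, a partitioned view in the style tkizer describes might look like this (member table names and ranges are made up for illustration):

```sql
-- Each member table holds one range of CompanyId, enforced by a CHECK
-- constraint so the optimizer can eliminate the other members.
CREATE TABLE dbo.Orders_1_1000 (
    CompanyId INT NOT NULL CHECK (CompanyId BETWEEN 1 AND 1000),
    OrderId   INT NOT NULL,
    PRIMARY KEY (CompanyId, OrderId)
);
CREATE TABLE dbo.Orders_1001_2000 (
    CompanyId INT NOT NULL CHECK (CompanyId BETWEEN 1001 AND 2000),
    OrderId   INT NOT NULL,
    PRIMARY KEY (CompanyId, OrderId)
);
GO

-- The view unions the members; queries go through the view.
CREATE VIEW dbo.Orders
AS
SELECT CompanyId, OrderId FROM dbo.Orders_1_1000
UNION ALL
SELECT CompanyId, OrderId FROM dbo.Orders_1001_2000;
GO

-- A query for one company touches only the member table whose
-- CHECK constraint covers that CompanyId.
SELECT OrderId FROM dbo.Orders WHERE CompanyId = 1500;
```

In the distributed variant, the member tables live on different servers and are reached through linked servers, which is how this approach scales out.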
|
|
sodeep
Master Smack Fu Yak Hacker
7174 Posts |
Posted - 2008-10-15 : 13:32:58
|
quote: Originally posted by tkizer Yes we have a full disaster recovery/business continuity solution in place. Our production systems are all on clusters. Most of my primary support is on 4-node (active/active/active/active) cluster with 11 SQL Server 2005 instances on it. We have a duplicate environment setup 300 miles away. We use asynchronous database mirroring between the two sites. For security of the customer's data, it's as easy as having the CompanyId in the tables that need it. Every query references it. We don't use different schemas. They are all in dbo. For performance of a one database approach, you can scale it out by using partitioned views. This is what large companies do with large databases such as MySpace.com and Verizon Wireless.The difference between our environment and yours perhaps might be that we designed the system knowing that we'd have thousands of customers.Tara KizerMicrosoft MVP for Windows Server System - SQL Serverhttp://weblogs.sqlteam.com/tarad/Subscribe to my blog
Tara, I know this is completely off topic, but I am really interested in the active/active/active/active cluster with 4 instances and how many servers are involved in it. Currently we have 2-node clusters, and the company won't use more than this. How do you monitor and plan for all these instances? Any advice? |
|
|
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2008-10-15 : 14:06:21
|
It's a 4-node cluster with 11 instances on it. I wouldn't recommend going over 2 instances, though, as it becomes a nightmare each month when the Microsoft security patches have to be installed or when there's a new cumulative update package for SQL. You can't launch more than one at a time, so these updates mean a few hours of work. If you need 4 servers, I'd recommend 2 2-node clusters instead. We'll never use more than 2 nodes in the future; we'll just add more clusters.

We use lots of tools to monitor our stuff. Our primary monitoring system is MOM, with escalation going through an in-house-built application. I wrote a VB.NET application that also does some monitoring, as MOM seems to be unreliable due to configuration reasons, not actual MOM software reasons. We also have Veritas monitoring our mount points and use Concord for the server-level stuff. I believe we have even more monitoring in place too. Our environment is supposed to be 99.999% available, so we've got an overload of monitoring. We've also got application monitoring.

As far as planning for these 11 instances, we use Performance Monitor and SQL Profiler to ensure that we aren't overloading any node. Each node runs a minimum of 2 instances and a maximum of 4. 7 of the instances are production; the other 4 are there to stage production deployments.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/
Subscribe to my blog |
|
|
sodeep
Master Smack Fu Yak Hacker
7174 Posts |
Posted - 2008-10-15 : 16:24:31
|
quote: Originally posted by tkizer It's a 4-node cluster with 11 instances on it. I wouldn't recommend going over 2 instances though as it becomes a nightmare each month when the Microsoft security patches have to get installed or if there's a new cumulative update package for SQL. You can't launch more than one, so these updates mean a few hours of work. If you need 4 servers, I'd recommend 2 2-node clusters instead. We'll never use more than 2 nodes in the future. We'll just add more clusters. We use lots of tools to monitor our stuff. Our primary monitoring system is MOM with escalation going through an in-house built application. I wrote a VB.NET application that also does some monitoring as MOM seems to be unreliable due to configuration reasons not actual MOM software reasons. We also have Veritas monitoring our mount points and use Concord for the server level stuff. I believe we have even more monitoring in place too. Our environment is supposed to be 99.999% available so we've got an overload of monitoring. We've also got application monitoring.As far as planning for these 11 instances, well we use Performance Monitor and SQL Profiler to ensure that we aren't overloading any node. Each node runs a minimum of 2 instances and a maximum of 4. 7 of the instances are production, the other 4 are there to stage production deployments.Tara KizerMicrosoft MVP for Windows Server System - SQL Serverhttp://weblogs.sqlteam.com/tarad/Subscribe to my blog
Thanks, Tara, for the valuable information. We are using 2 2-node clusters instead of one 4-node cluster. I can't imagine 1 node taking the load of 4 instances in case of a disaster. |
|
|
tkizer
Almighty SQL Goddess
38200 Posts |
|
jose_piratilla
Starting Member
7 Posts |
Posted - 2008-10-16 : 00:55:44
|
Hello tkizer,

Thanks for your contribution. You gave me another point of view.

I will comment a little more about our product. It is a web content manager, specifically designed for the tourism sector. The software is sold in the form of SaaS. We have 100-150 clients with 5-10 MB of data each, but we are planning to release a "cheap" version of the product, and maybe an increasing number of customers. A customer can have more than one website contracted (a hotel in Spain, France and Italy, for instance), and some tools are provided to interact between web pages (copying of products, combined statistics, etc.).

Our objective for the end of 2009 is 250 customers. |
|
|