
 Real time two-way mirroring


mattt
Posting Yak Master

194 Posts

Posted - 2011-01-04 : 10:05:06
Hi,

I'm a C# programmer, not a DBA, and I've had the (mis)fortune to be handed a database admin task. So please bear this in mind when answering this question.

What I've been asked to do is to create a real time two-way mirror between two databases. So when either changes it updates the other. This is not a standard data mirroring/failover task where one DB is the master and the other is a backup - both are live and each needs to instantly reflect changes made to the other.

In my head this sounds like a tall order, one which may even be impossible - after all, in a rapidly changing environment with lots of users this is going to be massively resource intensive and create locks and queues of jobs all over the place.

Is it possible? If so, can anyone either give me some basic instructions and/or point me at some places to start my reading and research?

Cheers,
Matt

robvolk
Most Valuable Yak

15732 Posts

Posted - 2011-01-04 : 11:04:33
Merge or peer-to-peer replication may accomplish this, but your requirements are a little too vague to be sure. There are 3rd party replication agents (SteelEye, Doubletake) that may meet your needs, but they are expensive.

You can try Service Broker to pass synchronization messages between each server, but it's a significant architectural change, and if you're not a SQL Server expert you'll need to contract someone who is.
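
To give a rough idea of the moving parts, a minimal Service Broker sketch looks something like the following - all names are made up, it's untested, and a real cross-server setup also needs mirroring of this on the other instance plus endpoints, routes and security configured between the two servers:

CREATE MESSAGE TYPE SyncMessage VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT SyncContract (SyncMessage SENT BY INITIATOR);
CREATE QUEUE SyncSendQueue;
CREATE QUEUE SyncReceiveQueue;
CREATE SERVICE SyncSenderService ON QUEUE SyncSendQueue (SyncContract);
CREATE SERVICE SyncReceiverService ON QUEUE SyncReceiveQueue (SyncContract);
GO

-- send a change notification
DECLARE @dialog UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @dialog
    FROM SERVICE SyncSenderService
    TO SERVICE 'SyncReceiverService'
    ON CONTRACT SyncContract
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @dialog
    MESSAGE TYPE SyncMessage
    (N'<change table="Orders" id="42" action="update"/>');

-- on the receiving side, a job or activation procedure has to RECEIVE from the
-- queue, apply the change, and END CONVERSATION - that applying logic is where
-- all the real work (and the programming) is.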

What's the purpose/need for 2 separate databases? Are they geographically separated? If your bosses are serious about this (and not smoking crack) you'll need to engage an experienced DBA or systems architect; this is definitely not a task for a "C# developer" (no offense).

Kristen
Test

22859 Posts

Posted - 2011-01-04 : 11:14:12
If the two systems are "connected" you can just use a "2-phase commit" - this in effect will update both systems within one transaction - so either your update succeeds on both systems, or it doesn't (on either).

Both systems have to be online and "UP" to do any updates (but querying is OK if only your local system is "UP").
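
In SQL Server terms that usually means a distributed transaction over a linked server, with MSDTC running on both machines. Very roughly - untested, and assuming a made-up linked server called REMOTE and a made-up database MyDb:

BEGIN DISTRIBUTED TRANSACTION;

-- local copy
UPDATE dbo.Customers
   SET Email = 'new@example.com'
 WHERE CustomerID = 42;

-- same change on the remote copy via the linked server
UPDATE REMOTE.MyDb.dbo.Customers
   SET Email = 'new@example.com'
 WHERE CustomerID = 42;

-- commits on both servers, or the whole thing rolls back on both if either fails
COMMIT TRANSACTION;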

If you want to work "disconnected" then you will have to update the "other" system periodically. What if the record you are now wanting to update on the remote system has already been updated locally? You then have a "collision" that you have to resolve (maybe some collisions can be resolved programmatically, others may need a human to decide - lots of collisions which need humans to decide the outcome will be a right-royal-PITA!!)

The more frequently you replicate the changes, the fewer collisions you will have and, you got it, once you move to real-time replication (as described above) there are no collisions (although you will get some ROLLBACKs when the record changes at the remote end just before your update - but that can happen when someone else changes a record on your local database just before you, so not really any different).

I don't really see this as an ADMIN task (bits of it are, for sure), it's more a programming task. If you use 2-phase commit the application has got to be changed to accommodate this (unless the products Rob refers to will "magic" it for you), or if you do this with Service Broker or some sort of delayed synchronisation you will have to do some programming for the "merge" parts, and in particular the collision resolution (where that can be done programmatically).

On the face of it I think it's a big job, but hearing more about the description of what you are trying to do may well enable folk here to suggest some shortcuts, or show that the application is simpler than I have assumed.

russell
Pyro-ma-ni-yak

5072 Posts

Posted - 2011-01-04 : 11:15:50
Sounds like Peer to Peer Replication is the way to go. But I agree with Rob. There are a lot of prerequisites that must be satisfied AND this usually isn't a configure and walk away kind of thing. There will almost certainly be ongoing maintenance involved.

mattt
Posting Yak Master

194 Posts

Posted - 2011-01-04 : 11:33:51
quote:
Originally posted by robvolk

If your bosses are serious about this (and not smoking crack) you'll need to engage an experienced DBA or systems architect, this is definitely not a task for a "C# developer" (no offense).



Heh, none taken. I'm well aware this is outside my usual realm of expertise, and SQL Server requires a whole body of knowledge and experience in and of itself.

The two servers are geographically separated but linked by a 10 Megabit line. In discussion - after reading your posts - it seems as though someone higher up the management chain might well have been smoking crack: it's looking as though doing a failover backup might be good enough. Certainly no-one could come up with a good reason why it had to be two-way live - the central requirement is that we have a near-100% accurate backup of the DB that we can switch to automatically if the master fails. I assume failover can be configured to accommodate this?

Cheers,
Matt

Kristen
Test

22859 Posts

Posted - 2011-01-04 : 11:42:14
Hmmm ... OK an "easier" problem to solve ... that's the good news!

You can have a CLUSTER, which will ensure that the second server is fully up to date with the primary.

It's not my area of expertise, but my understanding is that you can have a Secondary server which is 100% up to date with the Primary AND you can have geographical separation of the two servers and their disk storage.

The other approach is "Log Shipping". As the primary server (database) is updated, everything is logged - you probably take Log Backups now, so you can recover to a point in time if something bad happens. With Log Shipping you copy those Log Backups to the secondary server and restore them, but the database just sits there in "restoring" mode; then, when something bad happens, you tell it that there is no more to restore and put it "live".
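
The mechanics are roughly this (untested, made-up names and paths, and it assumes the secondary was seeded with a full backup restored WITH NORECOVERY first):

-- on the primary, a job does this every few minutes:
BACKUP LOG MyDb TO DISK = N'D:\LogShip\MyDb_201101041200.trn';

-- the file then gets copied across to the secondary (robocopy, the Log Shipping wizard's copy job, whatever)

-- on the secondary, restore it but stay in "restoring" mode:
RESTORE LOG MyDb FROM DISK = N'D:\LogShip\MyDb_201101041200.trn' WITH NORECOVERY;

-- when something-bad-happens, bring the secondary online:
RESTORE DATABASE MyDb WITH RECOVERY;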

Trouble is that the database is only as good as the most recent Log Backup it has restored.

Even if the log backups are VERY frequent there will still be data on the Primary Server that is lost.

Depends on your application how critical that is. For us, with ecommerce applications, that would mean:

Customers created on the Primary Server are now lost. The same IDs will be allocated to new/different customers on the new server.

Orders created on the Primary Server are now lost. Same problem with IDs being reallocated. But customers will have had their credit cards debited, and they will have received confirmation emails. The Order Fulfilment system may already have received their order and it will be being despatched (and the order fulfilment system will raise an error when we try to send it an order number a second time, as will, probably, PayPal etc. when that gets an order number again) ... or the order may just be "lost".

So Log Shipping with "some data lost" can cause a landslide of problems IMHO

A real-time replication/clustering solution would be better if you can afford it. But then what happens when the link to the secondary server is down? (I suppose you just disable it, and run on Primary-only, and then resync once the link is back up)

mattt
Posting Yak Master

194 Posts

Posted - 2011-01-04 : 11:54:42
Thanks for that information. We're not e-commerce and it's not critical that the two DBs are absolutely 100% in sync at all times - a very small data loss is acceptable.

However there's no good reason not to use clustering if that's going to get us the best results. But researching this, I came across mention of Database Mirroring and was curious to read in the introduction that ...

"Database mirroring offers substantial availability and provides an easy-to-manage alternative or supplement to failover clustering or log shipping"

http://msdn.microsoft.com/en-us/library/bb934127%28SQL.100%29.aspx

Any opinions on this option?

robvolk
Most Valuable Yak

15732 Posts

Posted - 2011-01-04 : 12:00:55
Geographically dispersed clusters have significant hardware requirements and restrictions, not the least of which is $$$$$. Even normal clustering is a lot more involved than database mirroring.

Also, a 10 megabit line is not gonna cut it for either database mirroring or clustering (or replication, to be honest). I have a dedicated T3 (45 MBit) for a mirror between NY and DE and it still has barely the bandwidth I need. You can still try it, but TEST thoroughly, and try to get the fastest line between locations anyway.

I agree with Kristen about log shipping. It's easier to use than replication and can provide a small enough restore window (e.g. 5 minutes) to satisfy your bosses, and possibly work with your current configuration. You won't get automatic failover, but you can recover a secondary quickly.

Kristen
Test

22859 Posts

Posted - 2011-01-04 : 12:29:26
"database mirroring offers substantial availability and provides an easy-to-manage alternative or supplement to failover clustering or log shipping"

Not my speciality, so hopefully folk will correct anything I get wrong - don't rely on it until then!

Clustering

Pair of computers which share a disk. There is only one copy of the database, on the disk, but if the Primary computer fails then the Secondary can take over. Getting a disk system sharable across a geographical break is a hefty thing to achieve.

Mirroring

A pair of computers with separate copies of the database. The primary server sends every transaction to the secondary. It either does this within the transaction ("high safety"), or it commits the transaction locally and then transfers, and commits, the transaction on the secondary ("high performance").
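
For what it's worth, once mirroring is established the safety mode is just a database setting. Something like this (names made up, untested; you'd first restore a full backup of the database on the mirror WITH NORECOVERY and create a database mirroring endpoint on both instances):

-- on the mirror, point it at the principal:
ALTER DATABASE MyDb SET PARTNER = N'TCP://principal.example.com:5022';

-- on the principal, point it at the mirror:
ALTER DATABASE MyDb SET PARTNER = N'TCP://mirror.example.com:5022';

-- synchronous ("high safety") - a transaction commits on both or not at all:
ALTER DATABASE MyDb SET PARTNER SAFETY FULL;

-- asynchronous ("high performance") - the principal doesn't wait for the mirror:
ALTER DATABASE MyDb SET PARTNER SAFETY OFF;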

If you can achieve High Safety that is definitely preferable. The hassle and fallout of "some lost data" is always a nightmare - it will IME bring to light some gotchas that rumble on for weeks, and cause people to spend a disproportionate amount of time sorting out. Depends a bit on when you have your Total System Failure - at 1am it's probably fine, the middle of your busiest day ever is not so good ...

Log Shipping

Cheap and cheerful. The secondary server runs behind the primary. Let's assume you back up your Primary Server Logs every 5 minutes. Then you copy the backup file to the remote. Once it's there you are good (even if it has not been restored "yet").

However, in practice, things will get a little more behind than that. Log backups will, at times, be significantly bigger than normal. I don't know if they are capable of getting blocked by anything (I don't think so ... but it would be handy to know for sure), but during our Index Rebuilds we generate massive logs in 5 minutes (we actually increase the Log Backup frequency to 2 minutes during Index Rebuilds) ... so those can take a few minutes to create, and longer to copy.
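
You can see how big (and how frequent) yours are from the backup history in msdb - 'MyDb' is a placeholder:

SELECT TOP (50)
       bs.backup_finish_date,
       bs.backup_size / 1048576.0 AS size_mb
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = 'MyDb'
  AND bs.type = 'L'               -- 'L' = transaction log backup
ORDER BY bs.backup_finish_date DESC;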

We have network interference on our machines, so we copy LOG files to the remote with a "slow" copy speed to reduce network impact. (We are copying for Geographic Disaster Recovery purposes; we have clustering for normal disasters! but that uses the shared disk and is all in one building.) This need not affect you, I think it's just a consequence of sharing the network with the links between our Web and SQL boxes.

Maybe have a look at how big your TLog backups are - particularly when you have scheduled maintenance doing a reindex. You will almost certainly have to move to "smarter" reindexing (i.e. something that checks the fragmentation of the indexes and only rebuilds the ones that need it, rather than the "Do the lot" approach of the Maintenance Wizard ("Wizard" ... there's a joke!!)).
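
As a rough starting point for the "smarter" reindexing - the thresholds below are just the usual rules of thumb, adjust to taste:

-- find fragmented indexes in the current database:
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.index_id > 0
  AND ips.avg_fragmentation_in_percent > 10
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- then, per index (made-up names):
-- ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;   -- roughly 10-30% fragmented
-- ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;      -- over ~30% fragmented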

"Size of log" will also impact on how well Mirroring works. Clustering doesn't ship anything (there is only one disk copy of the database) so all it has to do is detect that the primary server is bust and fire up the secondary server, connected to the same disks.
   
