Centralized Asynchronous Auditing across Instances and Servers with Service Broker

By Mladen Prajdić on 20 August 2007 | Tags: Administration , Service Broker


In my previous article I showed how to asynchronously audit data on a single instance, with one central database holding the audited data from all other databases on that instance. To continue showing the greatness that is Service Broker, this article explains how to do the same across multiple servers or instances, so that a single master server holds the audited data from all other servers. It's a classic many-to-one (one-to-many) scenario. Note that this technique also works if you want to send data from the master server to multiple child servers.

New terminology we need to know

I've covered the basic terminology in the previous article, so if you're not familiar with it go read that first. Done? OK, here we go:

  • Endpoints provide a means of communication between instances. Each instance can have only one Service Broker endpoint, and it can only communicate over the TCP/IP protocol.
  • Routes define how messages travel between services in different instances. They hold the target service name, the target Service Broker ID, the target instance's network address, etc.
  • Certificates are used to provide security between two points of communication. They are the means to authenticate the user using the point of communication (an endpoint, a service, etc.).
  • Transport Security secures the channel between two endpoints. Either certificates or Windows security can be used to achieve this; the communication between the two endpoints is encrypted with an algorithm you specify.
  • Dialog Security secures a Service Broker conversation. When used, each message in the conversation is encrypted. It is on by default, but we'll set ENCRYPTION = OFF when we begin our dialog conversations, which means we won't be using Dialog Security.
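To make the last point concrete, this is roughly how a dialog is begun with Dialog Security turned off. A minimal sketch only: the service, contract, and message type names here are placeholders, not the names from the attached scripts.

```sql
-- Sketch: begin a dialog without Dialog Security (names are placeholders)
DECLARE @dialogHandle UNIQUEIDENTIFIER;

BEGIN DIALOG CONVERSATION @dialogHandle
    FROM SERVICE [//Audit/DataSenderService]
    TO SERVICE   N'//Audit/DataReceiverService'
    ON CONTRACT  [//Audit/Contract]
    WITH ENCRYPTION = OFF;  -- no Dialog Security; Transport Security still protects the wire

SEND ON CONVERSATION @dialogHandle
    MESSAGE TYPE [//Audit/Message] (N'<AuditData/>');
```

With ENCRYPTION = OFF the individual messages aren't encrypted by the dialog, which is fine here because the endpoint-level Transport Security we set up below already encrypts everything that crosses the network.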

Building the whole system

A scalable and reusable system like this has a few requirements:

  1. There has to be one target instance that will store all incoming data.
  2. A child instance should be able to be added without modifying the master instance.
  3. Secure communication must be enabled between instances.

We have number 1 covered already, since we have only one target instance. Numbers 2 and 3 are a bit trickier though. Let's cover number 3 first. For instances to communicate securely with each other (Transport Security) we can use either Certificates or integrated Windows security. I've used the Certificate approach because it's the harder one to set up. These are the steps for setting up Transport Security on the master server and on each child server.

Setup in Master Database on Master Server

  1. Create Master key
  2. Create Certificate (CertificateAuditDataReceiver) which is encrypted with the master key by default and has Active for BEGIN_DIALOG = ON
  3. Back Up the certificate to file. This file has to be copied to every child machine since we'll use it to authenticate the user.
  4. Create the Service Broker Endpoint and authenticate it with the certificate created in step 2.
    Grant Connect privileges to public on the endpoint
  5. Create the MasterAuditDatabase and set ENABLE_BROKER on it
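The steps above map to T-SQL roughly like this. This is a sketch, not the attached scripts verbatim: the password, certificate subject, file path, and port are all assumptions you'd replace with your own values.

```sql
USE master;
GO
-- 1. Create the master key
CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'StrongPassword1!';
-- 2. Create the certificate (encrypted with the master key by default)
CREATE CERTIFICATE CertificateAuditDataReceiver
    WITH SUBJECT = N'Audit data receiver endpoint certificate'
    ACTIVE FOR BEGIN_DIALOG = ON;
-- 3. Back it up to a file; copy this file to every child machine
BACKUP CERTIFICATE CertificateAuditDataReceiver
    TO FILE = N'C:\CertificateAuditDataReceiver.cer';
-- 4. Create the Service Broker endpoint, authenticated with the certificate
CREATE ENDPOINT ServiceBrokerEndpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 4022)
    FOR SERVICE_BROKER (
        AUTHENTICATION = CERTIFICATE CertificateAuditDataReceiver,
        ENCRYPTION = REQUIRED ALGORITHM RC4);
GRANT CONNECT ON ENDPOINT::ServiceBrokerEndpoint TO public;
-- 5. Create the central audit database with Service Broker enabled
CREATE DATABASE MasterAuditDatabase;
ALTER DATABASE MasterAuditDatabase SET ENABLE_BROKER;
GO
```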

Setup in Master Database on all Child Servers:

  1. Create a new Login and a new User for that Login
  2. Create Master key
  3. Create Certificate (CertificateAuditDataSender) which is encrypted with the master key by default and has Active for BEGIN_DIALOG = ON
  4. Create Certificate (CertificateAuditDataReceiver) from the file you copied from the master server (step 3 in Setup in Master Database on Master Server) and authorize it with the User (not the Login) created in step 1.
  5. Create the Service Broker Endpoint and authenticate it with the certificate created in step 3.
  6. Grant Connect privileges to Login (not User) created in step 1 on the endpoint
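The child-side steps can be sketched in T-SQL like this. Again the login name, passwords, file path, and port are assumptions standing in for your own values.

```sql
USE master;
GO
-- 1. Create a login and a user the master's certificate will be tied to
CREATE LOGIN AuditLogin WITH PASSWORD = N'StrongPassword1!';
CREATE USER AuditUser FOR LOGIN AuditLogin;
-- 2. Create the master key
CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'StrongPassword1!';
-- 3. This instance's own endpoint certificate
CREATE CERTIFICATE CertificateAuditDataSender
    WITH SUBJECT = N'Audit data sender endpoint certificate'
    ACTIVE FOR BEGIN_DIALOG = ON;
-- 4. The certificate copied from the master server, authorized with the User
CREATE CERTIFICATE CertificateAuditDataReceiver
    AUTHORIZATION AuditUser
    FROM FILE = N'C:\CertificateAuditDataReceiver.cer';
-- 5. The endpoint, authenticated with this instance's own certificate
CREATE ENDPOINT ServiceBrokerEndpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 4022)
    FOR SERVICE_BROKER (
        AUTHENTICATION = CERTIFICATE CertificateAuditDataSender,
        ENCRYPTION = REQUIRED ALGORITHM RC4);
-- 6. Only the Login (not the User) connects to the endpoint
GRANT CONNECT ON ENDPOINT::ServiceBrokerEndpoint TO AuditLogin;
GO
```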

If you want to use Windows security instead of certificates, simply skip all of the steps that involve certificates and create the endpoints with Windows security. We have now set up Transport Security between the master and child servers. This covers point 3 and half of point 2. The other half of point 2 involves routes.

Routes

Service Broker cross-instance communication depends on routes, because routes tell the messages where to go. For two service brokers in different instances to communicate, each must have a route to the other, even if the data flows only one way, because acknowledgements must still be sent back to confirm that each message was received successfully.

But this presents a problem: if we add a new child database with a new service in it, we'd have to connect the central master Service Broker to that new service. That would mean a change on the master server for every new database, which is unacceptable and clearly violates requirement 2 (no changes on the master instance). The solution is a route on the master instance that can correctly derive the child service's location (instance and database) from the child's service name alone.

For exactly this scenario a special kind of route, the 'TRANSPORT' route, exists. It must be created on the master instance in the MasterAuditDatabase like so:

CREATE ROUTE [RouteDataReceiver] WITH ADDRESS = N'TRANSPORT'

For the transport route to work properly, each child database's services must be named like this:  [tcp://xxx.xxx.xxx.xxx:PORT/UniqueServiceName]

  • tcp is the protocol name (since Service Broker only supports TCP, this is the only option).
  • xxx.xxx.xxx.xxx is the IP or the name of the computer our child instance runs on.
  • PORT is the port we use. It can be any number between 1024 and 32767.
  • UniqueServiceName is a unique service name. I've chosen the format [.../DBName/Audit/DataSender] since we need only one per database.

In our case the services would be named:

-- service to send data from server 1 and TestDb1
[tcp://123.123.123.123:4022/TestDb1/Audit/DataSender] 
-- service to send data from server 1 and TestDb2
[tcp://123.123.123.123:4022/TestDb2/Audit/DataSender] 
-- service to send data from server 2 and TestDb1
[tcp://456.456.456.456:4022/TestDb1/Audit/DataSender] 
-- service to send data from server 2 and TestDb2
[tcp://456.456.456.456:4022/TestDb2/Audit/DataSender]

Note that if you have more than one instance on the same computer you have to use a different port for each instance.
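On the child side this naming convention is simply the name passed to CREATE SERVICE, and each child also needs an ordinary route pointing back at the master. A sketch follows; the master's address, its receiving service name, and the Broker Instance GUID are placeholders for the real values from your master instance (the attached scripts show where the real Service Broker ID comes from).

```sql
-- In the child database, e.g. TestDb1 on 123.123.123.123
CREATE QUEUE AuditDataSenderQueue;

-- The service name encodes protocol, address, port and database,
-- which is what lets the master's TRANSPORT route find its way back
CREATE SERVICE [tcp://123.123.123.123:4022/TestDb1/Audit/DataSender]
    ON QUEUE AuditDataSenderQueue;

-- Route from the child to the master's receiving service;
-- BROKER_INSTANCE is the MasterAuditDatabase's Service Broker ID (placeholder GUID here)
CREATE ROUTE RouteDataSender
    WITH SERVICE_NAME    = N'//MasterAudit/DataReceiver',
         BROKER_INSTANCE = N'00000000-0000-0000-0000-000000000000',
         ADDRESS         = N'TCP://master.example.com:4022';
```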

Dialog Conversation reuse

Dialog Conversation reuse simply means that we don't start a new conversation for every message, but send messages on an existing conversation. This is good for performance, because a single conversation carrying all messages to the same target service takes fewer resources than opening and closing a new conversation for each message. For example, one conversation with 100 messages takes far fewer resources than 100 conversations with one message each.

Because in auditing there is a constant flow of data from the child instances to the master instance, I've chosen to keep one conversation per database constantly open. Each child database has an AuditDialogs table that holds the dialog handle for reuse. Not ending conversations can lead to problems in case of failures, but those are outside the scope of this article. For any errors that might happen, check the custom dbo.AuditErrors table and the sys.transmission_queue view in both the MasterAuditDatabase and all child databases.
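The reuse pattern itself can be sketched like this. The AuditDialogs table name follows the article; the service, contract, and message type names are assumptions:

```sql
-- Sketch of dialog reuse: use the cached handle, begin a new dialog only if none exists
DECLARE @dialogHandle UNIQUEIDENTIFIER;

SELECT @dialogHandle = DialogHandle FROM dbo.AuditDialogs;

IF @dialogHandle IS NULL
BEGIN
    BEGIN DIALOG CONVERSATION @dialogHandle
        FROM SERVICE [tcp://123.123.123.123:4022/TestDb1/Audit/DataSender]
        TO SERVICE   N'//MasterAudit/DataReceiver'
        ON CONTRACT  [//Audit/Contract]
        WITH ENCRYPTION = OFF;

    -- Cache the handle so every later message rides the same conversation
    INSERT INTO dbo.AuditDialogs (DialogHandle) VALUES (@dialogHandle);
END

SEND ON CONVERSATION @dialogHandle
    MESSAGE TYPE [//Audit/Message] (N'<AuditData/>');
```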

Ending conversations

Conversations should be ended from the target. If you start and end a conversation from the initiator, the target won't be able to send back an acknowledgment of any kind, even if an error happens (failed validation, a permissions problem, etc.) while processing the received message on the target. If that happens, the received message that caused the error is lost. This isn't the desired behavior in auditing. Ending conversations from the target server solves the problem. For our scenario the best thing to do is set up a scheduled job on the master instance (the target) that ends conversations periodically. The interval is up to you; see what works for you. This script, run in the MasterAuditDatabase, will end all conversations:

DECLARE @conversationHandle UNIQUEIDENTIFIER

-- LOCAL FAST_FORWARD: a read-only, forward-only cursor is all we need here
DECLARE EndConversationCursor CURSOR LOCAL FAST_FORWARD FOR
SELECT conversation_handle FROM sys.conversation_endpoints

OPEN EndConversationCursor

FETCH NEXT FROM EndConversationCursor INTO @conversationHandle

WHILE @@FETCH_STATUS = 0
BEGIN
    END CONVERSATION @conversationHandle

    FETCH NEXT FROM EndConversationCursor INTO @conversationHandle
END

CLOSE EndConversationCursor
DEALLOCATE EndConversationCursor

Now the state of the conversations in the MasterAuditDatabase is DISCONNECTED_OUTBOUND, while the state of all conversations in the child databases is DISCONNECTED_INBOUND. For the conversations to be truly closed, you have to run the same script in all child databases as well. This removes the conversations from sys.conversation_endpoints in the child databases and sets the state of the conversations in the MasterAuditDatabase to CLOSED.
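A quick way to verify the conversation states described above, on either side:

```sql
-- Check conversation states and the remote service each one talks to
SELECT conversation_handle, state_desc, far_service
FROM sys.conversation_endpoints;
```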

Conclusion

Hopefully I've shown how to implement a many-to-one (one-to-many) scenario in an understandable and easy-to-follow way. Service Broker is a powerful addition that opens up a whole new level of possibilities.

Attached script use

Attached script use order:

  1. Run script for Master database on Master instance (Data Receiver)
  2. Run script for MasterAuditDatabase database on Master instance (Data Receiver)
  3. Copy the certificate created in point 1 to all child instances (Data Senders)
  4. Copy the Service Broker ID returned from point 2 into the script in point 6
  5. Run script for Master database on all child instances (Data Senders)
  6. Change the IPs in the script for child databases to your servers' IPs
  7. Run script for child databases on all child instances (Data Senders)
  8. Test.

The script files are all well documented and hopefully clear.

