
 Using triggers to update remote database?


martind10
Starting Member

6 Posts

Posted - 2004-10-21 : 11:04:45
Is it possible to use triggers to maintain updates on a remote copy of the database?

For reasons of table design, it has not proved possible to use replication. The production database is updated throughout the working day, and is now 16GB in size with 268 tables. Most of the tables do NOT have a unique Primary Key.

The client needs to use a local copy of the database for reporting in as near to real-time as possible, but are across a 2MB WAN, which is also used for other traffic.

Have investigated replication and scheduled DTS, but all have shortcomings. Any ideas? Please?

Many thanks

Martin

Kristen
Test

22859 Posts

Posted - 2004-10-21 : 11:27:21
So you could have a trigger that does:

INSERT RemoteServer.RemoteDatabase.dbo.MyTable
SELECT *
FROM inserted I
LEFT OUTER JOIN deleted D
ON D.MyPK = I.MyPK
WHERE D.MyPK IS NULL

-- I'm worried about the performance of the JOIN in this one. OPENQUERY would be better
UPDATE U
SET MyColumnA = I.MyColumnA,
...
FROM inserted I
JOIN deleted D
ON D.MyPK = I.MyPK
JOIN RemoteServer.RemoteDatabase.dbo.MyTable U
ON U.MyPK = I.MyPK

DELETE RemoteServer.RemoteDatabase.dbo.MyTable
WHERE MyPK IN
(
SELECT MyPK
FROM deleted D
LEFT OUTER JOIN inserted I
ON I.MyPK = D.MyPK
WHERE I.MyPK IS NULL
)

but I'm not sure it's a good idea!

Kristen

kselvia
Aged Yak Warrior

526 Posts

Posted - 2004-10-21 : 12:14:52
I think it is a bad idea to do remote operations in a trigger. I suggest creating a transfer_queue table with a processed_flag column defaulting to 'N'. Insert row IDs into it from your trigger, then create a SQL Agent job that sets processed_flag to 'P' (Pending), moves the records to the remote server, and either deletes the 'P' records or sets the flag to 'D' (Done). Schedule the job to run as frequently as you want.
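A minimal sketch of that pattern, assuming the source table has some unique row_id to queue against (the table, column, and server names here are invented for illustration):

```sql
-- Queue table: one row per change captured by the trigger
CREATE TABLE transfer_queue (
    queue_id       int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    row_id         int NOT NULL,                 -- id of the changed source row
    processed_flag char(1) NOT NULL DEFAULT 'N'  -- N = new, P = pending, D = done
)
GO

-- The trigger only records what changed; the remote work happens later
CREATE TRIGGER trg_MyTable_queue ON MyTable
AFTER INSERT, UPDATE
AS
INSERT transfer_queue (row_id)
SELECT row_id FROM inserted
GO

-- Body of the scheduled SQL Agent job step:
UPDATE transfer_queue SET processed_flag = 'P' WHERE processed_flag = 'N'

INSERT RemoteServer.RemoteDatabase.dbo.MyTable
SELECT m.*
FROM MyTable m
JOIN transfer_queue q ON q.row_id = m.row_id
WHERE q.processed_flag = 'P'

UPDATE transfer_queue SET processed_flag = 'D' WHERE processed_flag = 'P'
```

Note this still needs some unique id per row to queue against.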

If you need more real-time updates than that consider something like this. Put your update code in a SQL Agent job called "UpLoadData" then:

CREATE TRIGGER start_upload
ON NAV
AFTER UPDATE
AS
BEGIN
    DECLARE @nbr_approvals int

    SELECT @nbr_approvals = count(*)
    FROM NAV
    WHERE approval_date_time = getdate()

    IF @nbr_approvals = 3
    BEGIN
        -- ##sync_dts only exists while the DTS package is running (see below),
        -- so this avoids starting the job when it is already busy
        IF object_id('tempdb..##sync_dts') IS NULL
            EXEC msdb..sp_start_job @job_name = 'Update Approvals'
    END
END

Create a SQL Agent job named 'Update Approvals' that executes the sp_oacreate statements below to launch the DTS package. Enable it, but do not schedule it.

DECLARE @hr INT
DECLARE @oPKG INT
EXEC @hr = sp_OACreate 'DTS.Package', @oPKG OUT
IF @hr = 0
EXEC @hr = sp_OAMethod @oPKG, 'LoadFromSQLServer("(local)", "", "", 256, , , , "UploadData")', NULL
IF @hr = 0
EXEC @hr = sp_OAMethod @oPKG, 'Execute'
IF @hr = 0
EXEC @hr = sp_OADestroy @oPKG

Add to the beginning of the DTS Package

create table ##sync_dts (c1 int)

I did something like that and it supports about 7,000 inserts/second.

--Ken
I want to die in my sleep like my grandfather, not screaming in terror like his passengers.

Kristen
Test

22859 Posts

Posted - 2004-10-21 : 13:08:50
I like the transfer-queue too, but the trouble with that is that once you get to two tables the updates need to be applied in the original order to preserve RI and suchlike ... and then you are re-inventing replication, eh? <bg>
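For the record, the sort of thing I mean (a sketch only; all the names are invented) would be a single queue whose IDENTITY column records the sequence:

```sql
CREATE TABLE transfer_queue (
    seq        int IDENTITY(1,1) NOT NULL PRIMARY KEY,  -- preserves original change order
    table_name sysname NOT NULL,                        -- which source table changed
    row_id     int NOT NULL,
    action     char(1) NOT NULL                         -- 'I', 'U' or 'D'
)
-- The transfer job then replays changes ORDER BY seq, so parent rows
-- reach the remote server before their children and RI is preserved.
```

... and at that point you really are re-writing replication!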

Kristen

kselvia
Aged Yak Warrior

526 Posts

Posted - 2004-10-21 : 13:11:31
Good point. I have never used log shipping. I wonder if it might be a good alternative?

--Ken
I want to die in my sleep like my grandfather, not screaming in terror like his passengers.

Kristen
Test

22859 Posts

Posted - 2004-10-21 : 13:15:35
Darn it, I should'a read the question more carefully ...

"The client needs to use a local copy of the database for reporting"

sounds like a readonly copy would do nicely

Kristen

martind10
Starting Member

6 Posts

Posted - 2004-10-21 : 13:26:00
Sounds like I've started a discussion here - read only is fine (in fact essential - the database is a mirror of their trading application), full merge replication or scheduled DTS could be carried out overnight, but they need real-time reporting during the working day, without any loss of availability of data.

Doesn't log-shipping mean that some tables are unavailable during the restore process? We have no control over when the queries (reports) are run - the data has got to be available throughout.

Martin

Kristen
Test

22859 Posts

Posted - 2004-10-21 : 13:30:53
If it's just READ ONLY at the far end, why was replication deemed to be A Bad Idea?

(I don't know enough about log shipping to know if you can have continuous read access as new logs are "applied")

Kristen

martind10
Starting Member

6 Posts

Posted - 2004-10-21 : 14:45:28
As far as I could tell, Snapshot and Transactional replication both require a unique Primary Key on each table - many (most) tables do not have one - which leaves Merge, which requires a full 'read' of the remote database (which hasn't had any updates!) before committing any new transactions. This proved much too slow over our connection for the amounts of data involved.

Scheduled DTS appears to work, but takes nearly two hours each time, and because 'Drop Objects' needs to be selected, at least one table is incomplete throughout this period, leaving the report results unreliable.

What I'm trying to achieve is the equivalent of transactional replication without the need for a unique PK.

over...

Martin

Kristen
Test

22859 Posts

Posted - 2004-10-21 : 14:46:59
The PK thingie would be a showstopper. Are you in a position to add an Identity column and make it a PK for those tables that have no PK at present? Then you could use replication ...
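If they will wear it, the change is small per table (a sketch; the table and constraint names are placeholders):

```sql
ALTER TABLE dbo.SomeTable
    ADD RowID int IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_SomeTable PRIMARY KEY NONCLUSTERED
-- NONCLUSTERED so it won't fight with any existing clustered index;
-- existing rows get their identity values filled in automatically
```

... but whether the vendor's application tolerates an extra column is another matter.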

Kristen

martind10
Starting Member

6 Posts

Posted - 2004-10-21 : 14:53:06
I'd have to speak to the vendor of the production application we are mirroring - that would be on Monday now as it's 8.00 pm here...

..do you have nothing else to do!?

Appreciating the help

Martin

Kristen
Test

22859 Posts

Posted - 2004-10-22 : 01:35:52
Bit scary to have a bought-in application that has no PKs on its tables ... but I'm speaking off the top of my head.

"do you have nothing else to do"

Well ... I've just finished the sleeping part of my day, and now I'm off to manage the troops ... like yesterday!

Kristen

jen
Master Smack Fu Yak Hacker

4110 Posts

Posted - 2004-10-22 : 01:42:23
if it's for read-only, determine at which part of the day you'll need the data for reporting, and if the vendor can't add a primary key,
then you can create your own logshipping...

network backup the log files every n-interval, then once received, restore them to a read-only copy that is still able to restore more logs...
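roughly like this (server and path names made up; WITH STANDBY is the bit that leaves the copy readable between restores):

```sql
-- on production, every n minutes:
BACKUP LOG ProdDB TO DISK = '\\ReportServer\logship\ProdDB_tlog.bak'

-- on the reporting server, after the file is copied over the WAN:
RESTORE LOG ProdDB
FROM DISK = 'D:\logship\ProdDB_tlog.bak'
WITH STANDBY = 'D:\logship\ProdDB_undo.dat'  -- db stays read-only, more logs can follow
```

the catch: each RESTORE needs exclusive access, so readers get kicked off for the moment of each restore...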

just some ideas...

--------------------
keeping it simple...

Kristen
Test

22859 Posts

Posted - 2004-10-22 : 02:21:15
Is the remote database available during log shipping restore? (I dunno, that's why I'm asking!)

I wonder if the users would be more comfortable with data changing at known times, and not in between. We used to have an application where the enquiry users only saw yesterday's data (overnight replication). The reports they made during the day were consistent & repeatable (and the lifespan of a "record" in their industry was 20 years, so today's data should not have been important to them) ... but they clamoured for Today's Data ... and then when I gave them real-time data they moaned that their reports were not repeatable (they would "just add a column" and then get different results ... <g>) and they started hassling the Data Entry people because such-and-such a record was not yet in the system ... Yeah, it was in the pile of stuff data entry were busy entering...

Kristen

surefooted
Posting Yak Master

188 Posts

Posted - 2004-10-22 : 09:00:45
quote:
Originally posted by martind10

As far as I could tell, Snapshot and Transactional replication both require a unique Primary Key on each table - many (most) tables do not have one - which leaves Merge, which requires a full 'read' of the remote database (which hasn't had any updates!) before committing any new transactions. This proved much too slow over our connection for the amounts of data involved.



You should be able to snapshot replicate without primary keys.

-Jon
Should still be a "Starting Member" .

tsturm
Starting Member

7 Posts

Posted - 2004-10-22 : 10:01:31
@Kristen: yup, there's nothing like good reporting that you have carefully faked yourself :-) beats any messy real-time stuff in terms of consistency. But to get back to the point: has anyone had any headaches because of linked servers? Why is a trigger or sp à la "insert into servername.database.dbo.table blah" not a good idea?

Kristen
Test

22859 Posts

Posted - 2004-10-22 : 10:06:31
Tends to be quite slow if you join a table on a linked server to one on the current server. We tend to do things like:

SELECT @strSQL = '
SELECT *
INTO ##TEMP_FREE_STOCK
FROM OPENQUERY(MyOtherServer,
''
SELECT PRODUCT_CODE, STOCK_LEVEL
FROM MyOtherDatabase.dbo.FREE_STOCK
'')
'

EXEC (@strSQL)

and then make JOINs to the temporary table

Kristen
   
