ijprasad
Starting Member
29 Posts

Posted - 2005-04-01 : 06:03:17
We are facing a problem while reading from a table. We have a real-time transaction-processing system for one of our clients. One database contains two tables of raw transactions (Table1 and Table2). Table1 and Table2 each have an identity column, SerialNo, with an increment of 1. Data is appended to these tables by a third-party application in real time, in random fashion (as soon as it receives a transaction; we have no control over this application). Our server application reads these transactions as soon as they are inserted into the table.

Currently we read transactions from only one table (Table1), based on the SerialNo identity column. The first time, the server fires a query against the database filtering on SerialNo greater than zero. If it fetches X records, it processes those X records, then fires a second query filtering on SerialNo greater than the last serial number seen, and so on. If a query fetches zero records, it waits 10 seconds and fires again; this continues until we get records. For example:

First query:
SELECT * FROM Table1 (NOLOCK) WHERE SerialNo > 0

If the above query fetches X records and the last serial number is N1, the second query would be:
SELECT * FROM Table1 (NOLOCK) WHERE SerialNo > N1

If that query fetches Y records and the last serial number is N2, the third query would be:
SELECT * FROM Table1 (NOLOCK) WHERE SerialNo > N2

As the third-party application's inserts are continuous and random, the values of X and Y vary from poll to poll. The average insertion rate is 500 transactions per second, and the total for a given day is about 1,500,000 records. During this process we miss 3 to 4 transactions in a given day; this happens once every week or two. Please suggest what could be wrong. We have run several tests and concluded that the problem is in reading from the database. Can simultaneous writes and reads not be done?
Note: this is to inform you that the processing of the above transactions takes place within a second; we can't delay transaction processing on our side. We are using SQL Server 7.0 and Borland Delphi 5.0, and the Borland Database Engine (BDE) connects to the database.

Inderjeet
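A minimal sketch of the polling loop described above, in Python with SQLite standing in for the SQL Server table (the real system uses Delphi/BDE); the function and variable names here are illustrative, not from the original application:

```python
import sqlite3

def poll_once(conn, last_serial):
    """Fetch all rows with SerialNo greater than the bookmark."""
    return conn.execute(
        "SELECT SerialNo, Payload FROM Table1 "
        "WHERE SerialNo > ? ORDER BY SerialNo",
        (last_serial,),
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Table1 "
    "(SerialNo INTEGER PRIMARY KEY AUTOINCREMENT, Payload TEXT)")
conn.executemany(
    "INSERT INTO Table1 (Payload) VALUES (?)", [("t1",), ("t2",), ("t3",)])

last = 0
batch = poll_once(conn, last)   # first poll: SerialNo > 0
if batch:
    last = batch[-1][0]         # bookmark = last SerialNo the query returned

conn.execute("INSERT INTO Table1 (Payload) VALUES (?)", ("t4",))
batch2 = poll_once(conn, last)  # next poll only sees rows after the bookmark
```

Note that the bookmark advances to the highest SerialNo the query returned; if a row with a lower SerialNo commits after that poll (identity values are assigned at insert time, but commit order can differ), it is never read, which is one way transactions can go missing with this pattern.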
Merkin
Funky Drop Bear Fearing SQL Dude!
4970 Posts

Posted - 2005-04-01 : 07:16:51
Are you being ironic? I've locked the other thread, though. Inderjeet, it's rude to cross-post.

Damian
Ita erat quando hic adveni.
TG
Master Smack Fu Yak Hacker
6065 Posts

Posted - 2005-04-01 : 08:19:23
ijprasad, I can't tell for sure based on the limited information you've provided, but here are a couple of questions that may lead you to answers:

What happens when your processing of a record (or records) fails? Is there a way for you to reset your last id to the last successfully processed id? Perhaps instead of using the last id your query received, you should use the last id that your process completed. That way, if you have a rolled-back transaction in your processing, you don't miss that id.

How are you getting the "last" id value? Are you making a separate call from the select, i.e. SELECT MAX(id) FROM Table1 (NOLOCK), or are you incrementing the last value as you process the record set?

What happens when they roll back a transaction after you've read the (uncommitted) value? Do you get extra records in your processing?

How are you determining that you're missing transactions? Missing ids that exist in Table1, or missing alternate key values?

Is that column really an identity column, or are they using some homegrown id-incrementing scheme? If they are resetting ids after a transaction is rolled back, then you could think you've already processed a record they've rolled back.

Be One with the Optimizer
TG
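TG's first suggestion (advance the bookmark only past records that were processed successfully, so a failure does not silently skip ids) can be outlined like this; a hypothetical Python sketch, with process_batch and handler as invented names:

```python
def process_batch(rows, handler):
    """Apply handler to each (SerialNo, payload) pair in order.

    Returns the highest SerialNo handled without error (None if the
    very first record fails), so the caller can persist that value as
    the bookmark instead of the last id the query merely returned.
    """
    last_ok = None
    for serial_no, payload in rows:
        try:
            handler(payload)
        except Exception:
            break  # stop here; the next poll re-reads from last_ok
        last_ok = serial_no
    return last_ok
```

If the handler fails partway through a batch, the next poll starts just after the last successfully processed id rather than after the end of the failed batch.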
ijprasad
Starting Member
29 Posts

Posted - 2005-04-02 : 01:23:14
When processing fails, the application raises an error and it is logged in the application log file. Yes, we are setting the last successful id; we store this last successful id in an INI file, and we get the last id from that INI file only. The third party never does a rollback. After processing, we put the transactions into our own table and match it against the third party's table to arrive at the difference. It is a SQL Server IDENTITY column. A rollback never happens during or after processing. Thanks for the solution.

quote: Originally posted by TG (questions quoted in full above)

Inderjeet
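A sketch of the INI-file bookmark described above, using Python's configparser purely for illustration (the original uses Delphi, and the section/key names State and LastSerialNo are invented):

```python
import configparser
import os
import tempfile

def save_last_id(path, last_id):
    """Write the last successfully processed id to an INI file."""
    cp = configparser.ConfigParser()
    cp["State"] = {"LastSerialNo": str(last_id)}
    with open(path, "w") as f:
        cp.write(f)

def load_last_id(path):
    """Read the bookmark back; start from 0 if the file is absent."""
    cp = configparser.ConfigParser()
    cp.read(path)  # read() silently ignores a missing file
    if "State" in cp and "LastSerialNo" in cp["State"]:
        return int(cp["State"]["LastSerialNo"])
    return 0

path = os.path.join(tempfile.gettempdir(), "reader_state.ini")
fresh = load_last_id(path + ".missing")  # no file yet -> 0
save_last_id(path, 12345)
restored = load_last_id(path)
```

One caveat with this scheme: if the process crashes between processing a batch and writing the file, the same records are re-read on restart, so processing should tolerate duplicates.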
TG
Master Smack Fu Yak Hacker
6065 Posts

Posted - 2005-04-02 : 12:11:30
Inderjeet,

What I've done in the past is to work off of a "processing" table instead of the actual table. In other words, I created a trigger for insert on the table that your OLTP system is writing to: something very quick and simple, so as to have minimal impact on the on-line system. That way, every record in the client table must be in the processing table. Then your process updates the processStatus and processDate columns, and you have a log of what's been done and when, as well as what hasn't been done yet. This concept lends itself well to not missing any records. It also works well for having multiple processes keep up with the OLTP.

For this approach, you would need to confirm that adding a trigger to that table wouldn't violate any agreements you have with the party that inserts into that table.

This is an example of what I'm talking about:

set nocount on

--create client table
create table clientTable (clientTableID int identity(1,1), col1 int)

--create processing (log) table
create table processClientTable (clientTableID int, processStatusID tinyint, processDate datetime, processedByID int)

/* example processStatusIDs
   0:   unprocessed
   1-5: number of failed attempts
   10:  successfully processed
   20:  failed to process

   processedByID: if you have multiple processes working simultaneously
   to keep up, this could track which process has "reserved" the record
   as well as log which process completed the processing.
*/
go

--add trigger to the client table to insert into your processing table
create trigger trInsClientTable on clientTable for insert as
insert processClientTable (clientTableID, processStatusID)
select clientTableID, 0 from inserted
GO

--try it out
insert clientTable (col1) values (10)
select * from clientTable
select * from processClientTable
GO

drop table clientTable
drop table processClientTable

Be One with the Optimizer
TG
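To show how a consumer could work off TG's processing table, here is a hedged Python sketch with SQLite standing in for SQL Server, following the status codes in the script above (0 = unprocessed, 10 = successfully processed); the drain function name is invented:

```python
import sqlite3

# Mimic the processing (log) table populated by the insert trigger.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE processClientTable "
    "(clientTableID INTEGER, processStatusID INTEGER, processDate TEXT)")
conn.executemany(
    "INSERT INTO processClientTable VALUES (?, 0, NULL)",
    [(1,), (2,), (3,)])

def drain(conn):
    """Handle every unprocessed row, then mark each one done (status 10)."""
    done = []
    for (row_id,) in conn.execute(
            "SELECT clientTableID FROM processClientTable "
            "WHERE processStatusID = 0 ORDER BY clientTableID"):
        # real transaction handling would happen here
        done.append(row_id)
    conn.executemany(
        "UPDATE processClientTable "
        "SET processStatusID = 10, processDate = datetime('now') "
        "WHERE clientTableID = ?", [(i,) for i in done])
    return done

processed = drain(conn)
```

Because the consumer selects by status rather than by a high-water-mark id, a row can never be skipped: it stays at status 0 until some process explicitly marks it done.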