supersql
Yak Posting Veteran
99 Posts |
Posted - 2005-09-19 : 08:52:51
|
| Hi SQL Team, I have designed a database where most of the tables are related through their unique IDENTITY columns. Since I didn't find any natural column in those tables to use as a PK, I decided to generate an IDENTITY column and build the relationships on it. Now, when I try to insert data into those tables at once from a text file, I run into difficulty: I can't populate the FK columns that reference the IDENTITY PK of another table. Can anyone help me handle this? |
|
|
mmarovic
Aged Yak Warrior
518 Posts |
Posted - 2005-09-19 : 09:26:11
|
| Please confirm whether you are talking about the following scenario:
1. There is a table A with an IDENTITY PK column.
2. There is a table B with an IDENTITY PK column and an A_ID column, an FK implementing a many-to-one relationship to table A.
3. There is a file containing data that should be written to both table A and table B.
4. There is no alternate key in either table. |
 |
|
|
supersql
Yak Posting Veteran
99 Posts |
Posted - 2005-09-19 : 09:32:37
|
| Yes, you are right. There is a file containing data that should be written not only to those 2 tables but to 5 tables, all of which have the A_ID column as an FK. |
 |
|
|
mmarovic
Aged Yak Warrior
518 Posts |
Posted - 2005-09-19 : 09:49:49
|
| So table A rows are repeated in the file. I mean, more than one file row is related to a single row that should be written to table A. The problem is: how do you know which rows from the file should be written as one table A row? Maybe all the columns of table A together make up its PK? |
 |
|
|
supersql
Yak Posting Veteran
99 Posts |
Posted - 2005-09-19 : 10:06:54
|
| I am using the text file in this way to fill the tables:
INSERT INTO TableA
SELECT
substring(Col001, 1, 6) as RecordType,
substring(Col001, 7, 8) as CreateDate,
...and so on. |
 |
|
|
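For reference, a runnable sketch of the fixed-width parse described above. It assumes the raw file is first bulk-loaded into a single-column staging table, with each line held in Col001; the table name RawImport, the file path, and the target column names are all hypothetical, since the real schema isn't shown in the thread.

```sql
-- Hypothetical staging table: one varchar column per raw file line.
CREATE TABLE RawImport (Col001 varchar(8000));

-- Load the text file line by line (path is a placeholder).
BULK INSERT RawImport
FROM 'C:\data\import.txt'
WITH (ROWTERMINATOR = '\n');

-- Carve the fixed-width fields out of each line with SUBSTRING.
INSERT INTO TableA (RecordType, CreateDate)  -- column names assumed
SELECT
    SUBSTRING(Col001, 1, 6) AS RecordType,
    SUBSTRING(Col001, 7, 8) AS CreateDate
    -- ...remaining fixed-width fields...
FROM RawImport;
```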
mmarovic
Aged Yak Warrior
518 Posts |
Posted - 2005-09-19 : 10:20:40
|
| I understand that, but my question is: can you have two rows in table A where every column has the same value except the identity column? |
 |
|
|
supersql
Yak Posting Veteran
99 Posts |
Posted - 2005-09-19 : 10:24:31
|
| No, I may not have repeating rows in that way. |
 |
|
|
mmarovic
Aged Yak Warrior
518 Posts |
Posted - 2005-09-19 : 10:35:59
|
| That means there is an alternate row identifier in table A, and it consists of all the columns (or maybe fewer?) in the table. The solution is:
1. Copy the file data into an auxiliary table (Aux) consisting of all columns from all five tables.
2. Insert rows into table A from the Aux table using select distinct <table A columns>.
3. Insert into B (<non-identity columns other than A_ID>, A_ID) select <expressions to populate table B columns>, a.A_ID from Aux join A as a on a.column1 = aux.column1 and ... and a.columnN = aux.columnN
...and so on. |
 |
|
|
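The three steps above can be sketched in T-SQL like this. All table and column names (Aux, A, B, col1, b_col1, and so on) are placeholders, since the real schema isn't given in the thread; the key idea is that the full set of table A data columns acts as a de facto alternate key for the join back.

```sql
-- Step 2: insert the distinct parent rows into table A.
-- A's identity column (A_ID) is generated automatically.
INSERT INTO A (col1, col2 /* , ... */)
SELECT DISTINCT col1, col2 /* , ... */
FROM Aux;

-- Step 3: resolve each child row's A_ID by joining the staging
-- table back to A on every table A data column.
INSERT INTO B (b_col1, b_col2, A_ID)
SELECT aux.b_col1, aux.b_col2, a.A_ID
FROM Aux AS aux
JOIN A AS a
  ON a.col1 = aux.col1
 AND a.col2 = aux.col2 /* AND ... AND a.colN = aux.colN */;
```

The same join pattern repeats for each of the remaining child tables.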
supersql
Yak Posting Veteran
99 Posts |
Posted - 2005-09-19 : 11:05:16
|
| Yes, I am doing the same, but I truncate the rows in the Aux table after I load the data into my tables, assuming fresh data will come into the Aux table each time a text file is loaded. So, do you want me to use DISTINCT when inserting into tables A, B, ..., or use DISTINCT for the Aux table? |
 |
|
|
mmarovic
Aged Yak Warrior
518 Posts |
Posted - 2005-09-19 : 11:11:40
|
quote: Yes, I am doing the same, but I truncate the rows in the Aux table after I load the data into my tables, assuming fresh data will come into the Aux table each time a text file is loaded. So, do you want me to use DISTINCT when inserting into tables A, B, ..., or use DISTINCT for the Aux table?
That's correct, the rows should be truncated after a successful load. As I already wrote (step 2), you should use DISTINCT while inserting into table A. |
 |
|
|
supersql
Yak Posting Veteran
99 Posts |
Posted - 2005-09-19 : 11:46:37
|
| Do you want me to join on each and every column I am inserting into the tables? |
 |
|
|
mmarovic
Aged Yak Warrior
518 Posts |
Posted - 2005-09-19 : 11:49:03
|
| Well, I hope there are not too many. Are you sure there is no subset of columns that is a unique row identifier? |
 |
|
|
supersql
Yak Posting Veteran
99 Posts |
Posted - 2005-09-19 : 11:56:14
|
| Yes, there are too many columns (15). Do you think making a join that way is OK? |
 |
|
|
mmarovic
Aged Yak Warrior
518 Posts |
Posted - 2005-09-19 : 12:14:54
|
| In that case there is another, faster solution, but your code must be allowed to use set identity_insert:
1. Copy the file data into an auxiliary table (Aux) consisting of all columns from all five tables, plus an AuxID identity column.
2. Save @LastAID, @LastBID, and so on (select @LastAID = max(AID) from A...).
3. Insert rows into a table variable @A from the Aux table using select distinct <table A columns>. The table variable (or temp table) has the same structure as table A, except for the additional AuxID column.
4. Set identity_insert A on.
5. insert into A (AID, <other columns>) select AID + @LastAID, <other columns> from @A
6. Insert into B (<non-identity columns other than A_ID>, A_ID) select <expressions to populate table B columns>, a.AID + @LastAID from Aux join @A as a on a.AuxID = aux.AuxID
...and so on.
Also, if more than one query can run in parallel, you'll need to add transactions and error handling.
[Edited] You need a transaction anyway. |
 |
|
|
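A sketch of the identity_insert variant above, under the same assumed placeholder names (A, Aux, AID, col1, and so on). One adaptation: since DISTINCT collapses the staging rows, the child insert here joins Aux back to the small table variable on the data columns rather than on AuxID; joining a compact table variable is where the speedup comes from.

```sql
-- Remember the highest existing identity value in A.
DECLARE @LastAID int;
SELECT @LastAID = COALESCE(MAX(AID), 0) FROM A;

-- Collect the distinct parent rows, with their own local identity.
DECLARE @A TABLE (AID int IDENTITY(1,1), col1 varchar(50) /* , ... */);
INSERT INTO @A (col1)
SELECT DISTINCT col1 FROM Aux;

-- Re-insert into A with explicit, offset identity values.
SET IDENTITY_INSERT A ON;
INSERT INTO A (AID, col1)
SELECT AID + @LastAID, col1 FROM @A;
SET IDENTITY_INSERT A OFF;

-- Child rows pick up their parent's AID from the small table variable.
INSERT INTO B (b_col1, A_ID)
SELECT aux.b_col1, ta.AID + @LastAID
FROM Aux AS aux
JOIN @A AS ta ON ta.col1 = aux.col1 /* AND ... */;
```

As the post notes, the whole sequence belongs inside a transaction, with error handling if concurrent loads are possible.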
supersql
Yak Posting Veteran
99 Posts |
Posted - 2005-09-19 : 14:38:19
|
| I didn't get your 2nd point. Anyway, how can I maintain distinct rows in my tables? I am selecting distinct rows from a temp table and inserting them into my table A, so what will happen if I try to load a text file twice? The temp table has fresh data each time a file is loaded, so how can I control duplicate data in my main tables? |
 |
|
|
mmarovic
Aged Yak Warrior
518 Posts |
Posted - 2005-09-19 : 14:49:05
|
quote: I didn't get your 2nd point.
quote: 2. Save @LastAID, @LastBID, and so on (select @LastAID = max(AID) from A...).
Sorry, I thought there was a deeper table hierarchy; later I figured out there isn't. So the corrected step 2 should be:
declare @LastAID int
select @LastAID = max(AID) from A
quote: What will happen if I try to load a text file twice? The temp table has fresh data each time a file is loaded, so how can I control duplicate data in my main tables?
You can try a kind of control table where you store the file name for each run. That means you should have a new file name (or folder) whenever new data are loaded. I don't know your process well enough to tell you exactly how; figure out a consistent process, including file-name rules. |
 |
|
|
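A minimal sketch of such a control table, assuming the load process knows the file name for each run (the table name, columns, and the sample file name below are all made up):

```sql
-- Hypothetical control table recording each completed load.
CREATE TABLE LoadControl (
    FileName varchar(255) NOT NULL,
    LoadedAt datetime NOT NULL DEFAULT GETDATE()
);

-- Run the load only if this file hasn't been recorded yet.
IF NOT EXISTS (SELECT 1 FROM LoadControl
               WHERE FileName = 'import_20050919.txt')
BEGIN
    -- ...run the Aux staging load and the table inserts here...
    INSERT INTO LoadControl (FileName)
    VALUES ('import_20050919.txt');
END;
```

This only works when each new file gets a distinct name (or folder), which is exactly the naming rule the post above is asking for.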
supersql
Yak Posting Veteran
99 Posts |
Posted - 2005-09-19 : 15:10:17
|
| The thing is, I can't control the input file through its name, because the name of the file will never change; moreover, it is an automated process that loads the file. |
 |
|
|
mmarovic
Aged Yak Warrior
518 Posts |
Posted - 2005-09-21 : 12:50:34
|
| Then you'll have to go with joining on all columns, unless someone has a better idea. Maybe you can compare the new file instance with the old one, moved to an archive folder? |
 |
|
|
supersql
Yak Posting Veteran
99 Posts |
Posted - 2005-09-21 : 15:44:15
|
| Do you mean I need to compare the content of the new file with the old file? If so, how is that possible? |
 |
|
|
mmarovic
Aged Yak Warrior
518 Posts |
Posted - 2005-09-22 : 03:14:45
|
quote: Do you mean I need to compare the content of the new file with the old file? If so, how is that possible?
Yes, but I didn't mean using SQL for that. I know there are free tools that display the differences if there are any; otherwise they report that the files are identical. Some of them may be able to return an error code or write the result to a log. All of that is only needed if joining by all columns turns out to be too slow for practical use, which may happen. |
 |
|
|
|