allend2010
Starting Member, 28 Posts
Posted - 2003-10-08 : 12:27:44
Hello:

I have a SQL statement that returns multiple rows from a large table via a self-join whenever there are duplicate records. This only seems to happen when the matching criteria I am using have duplicates. There is no explicit unique column because we are loading from a file. For example, this table data works OK:

Field1 Field2 Field3
====== ====== ======
A      1      Ted
A      2      Fred
A      3      Jed
B      1      Jill
B      2      Bill
B      3      Will

The self-join uses Field2 as the matching criterion for the inner join, and it works fine here. But if I suddenly have another record where Field1 has a value of A, I get a partial cross join, because it matches all data from both sets of records. Is there any way to avoid this and ensure uniqueness?

PS: I tried to create a grouping column that would uniquely group each set of data, but due to the size of the data it takes way too long to process.

Thanks in advance for your help,
Al
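A minimal sketch of the problem described above, using Python's sqlite3 with an in-memory table (the column names come from the post; the extra duplicate values such as "Ned" are assumed for illustration):

```python
import sqlite3

# Reproduce the poster's table in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Field1 TEXT, Field2 INTEGER, Field3 TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [("A", 1, "Ted"), ("A", 2, "Fred"), ("A", 3, "Jed"),
                  ("B", 1, "Jill"), ("B", 2, "Bill"), ("B", 3, "Will")])

# Self-join on Field2: each Field2 value appears once per Field1 group,
# so every 'A' row pairs with exactly one 'B' row -- 3 rows.
query = ("SELECT COUNT(*) FROM t a JOIN t b ON a.Field2 = b.Field2 "
         "WHERE a.Field1 = 'A' AND b.Field1 = 'B'")
unique = conn.execute(query).fetchone()[0]
print(unique)  # 3

# Add a second 'A' group (duplicate Field2 values). Now two 'A' rows
# match each 'B' row, and the join multiplies into a partial cross product.
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [("A", 1, "Ned"), ("A", 2, "Zed"), ("A", 3, "Red")])
dup = conn.execute(query).fetchone()[0]
print(dup)  # 6
```

The row count doubles because the join condition no longer identifies a single row on each side: joins match every qualifying pair, not one-to-one.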
nr
SQLTeam MVY, 12543 Posts
Posted - 2003-10-08 : 12:32:08
What do you want to get from the query?

You must have unique records, and you need to uniquely define the record to join to if you want a single record back. If you don't have unique records, then you should clean up the data first. If you do, then you probably just need to include other fields in the join.

==========================================
Cursors are useful if you don't know SQL.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.
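One way to act on this advice when the file rows have no natural key is to generate a sequence number per group and include it in the join. A sketch using sqlite3 (window functions need SQLite 3.25+; on SQL Server the equivalent is `ROW_NUMBER() OVER (PARTITION BY ...)`, available from SQL Server 2005 onward — the ordering by Field3 here is an assumption, since the file order isn't stated):

```python
import sqlite3

# Two duplicate rows on each side of the join (names assumed for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Field1 TEXT, Field2 INTEGER, Field3 TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [("A", 1, "Ted"), ("A", 1, "Ned"),
                  ("B", 1, "Jill"), ("B", 1, "Bill")])

# Number the rows within each (Field1, Field2) group, then join on both
# the matching column and the generated sequence, so each row pairs once.
sql = """
WITH numbered AS (
    SELECT Field1, Field2, Field3,
           ROW_NUMBER() OVER (PARTITION BY Field1, Field2
                              ORDER BY Field3) AS seq
    FROM t
)
SELECT COUNT(*)
FROM numbered a
JOIN numbered b
  ON a.Field2 = b.Field2 AND a.seq = b.seq
WHERE a.Field1 = 'A' AND b.Field1 = 'B'
"""
result = conn.execute(sql).fetchone()[0]
print(result)  # 2 rows, instead of the 4-row cross product
```

Because the numbering is computed by the database in one pass rather than stored as a precomputed grouping column, it may also sidestep the slow preprocessing step the poster mentions, though that depends on the table size and indexing.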