Please start any new threads on our new
site at https://forums.sqlteam.com. We've got lots of great SQL Server
experts to answer whatever question you can come up with.
chedderslam
Posting Yak Master
223 Posts

Posted - 2002-08-12 : 15:13:17
Is there any way to build a comma-delimited list into one variable, but longer than 8000 characters (varchar)?

Basically, we have a dynamic "where" clause that pulls just the keys into a recordset for a cursor, then the cursor appends those keys with commas to a var. Then we want to pass that to another proc to pull the actual recordset columns required, using an IN clause on the keys passed. However, you can't append to a TEXT datatype, so I am hoping to find another way to do it.

To make it not too easy, we would also like to avoid temp tables. : )

SO, is there some native function that will dump a single column of a recordset into a csv variable, preferably a text data type?

Thanks for any help!
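[For context, the cursor-append pattern described above might look something like this sketch; the table and column names (Orders, OrderID, OrderDate) are invented for illustration:]

```sql
-- Sketch of building a CSV key list with a cursor (SQL Server 2000 era).
DECLARE @keys varchar(8000), @id int
SET @keys = ''

DECLARE key_cursor CURSOR FOR
    SELECT OrderID FROM Orders WHERE OrderDate > '2002-01-01'

OPEN key_cursor
FETCH NEXT FROM key_cursor INTO @id
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Each append is a fresh concatenation; varchar(8000) silently
    -- truncates past 8000 characters, which is the limit hit here.
    SET @keys = @keys + CAST(@id AS varchar(12)) + ','
    FETCH NEXT FROM key_cursor INTO @id
END
CLOSE key_cursor
DEALLOCATE key_cursor
```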
robvolk
Most Valuable Yak
15732 Posts

Posted - 2002-08-12 : 15:54:57
quote: Is there any way to build a comma-delimited list into one variable, but longer than 8000 characters (varchar)?
I think you can do this using ADO, because it has the capability to handle long string (text) values. But in passing the text variable to SQL Server you may encounter problems extracting data from it. You'd have to EXECUTE the dynamic SQL by including the text variable whole:

CREATE PROCEDURE GetData @where text AS
EXECUTE ('SELECT * FROM myTable WHERE myColumn IN (' + @where + ')')

This *might* not work, I haven't tested it. In any case, it would be almost as easy to use multiple varchar(8000) variables, and they would totally avoid any problems associated with text data. And if this sounds like a kludgy, undesirable method, good, 'cause it helps me lead into:

quote: To make it not too easy, we would also like to avoid temp tables... SO, is there some native function that will dump a single column of a recordset into a csv variable, preferably a text data type?
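[The multiple-varchar(8000) workaround mentioned above might be sketched like this; all names (@list1, @list2, @id, myTable, myColumn) are invented, and the trailing-comma handling is left as a detail:]

```sql
-- Sketch: spill the CSV list across several varchar(8000) variables.
DECLARE @list1 varchar(8000), @list2 varchar(8000), @id int
SET @list1 = ''
SET @list2 = ''

-- (inside the key-building loop)
IF LEN(@list1) < 7900
    SET @list1 = @list1 + CAST(@id AS varchar(12)) + ','
ELSE
    SET @list2 = @list2 + CAST(@id AS varchar(12)) + ','

-- EXEC can concatenate variables directly, sidestepping the 8000-char
-- limit on any single varchar (trim the trailing comma before use).
EXECUTE ('SELECT * FROM myTable WHERE myColumn IN (' + @list1 + @list2 + ')')
```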
In actuality, passing such a large list as a string for use as an IN clause is probably the LEAST efficient way to process this query. Think about it: you're building a long string using multiple concatenation operations (pretty expensive), passing it to the server (increased bandwidth usage) only to have the optimizer parse each value out when processing the IN list!

Would you like to know a more (if not the most) efficient method? Temp tables! If you're populating a cursor (eeeeeeeeeeeeeeeeewwwwwwwwwwwwwwwwww) where are you getting the values from? Probably a table, right? Why not just dump the values into a table, even a temp table? You can then call an SP that accesses the temp table. You can index the temp table to improve performance, and since you're searching ANOTHER table using the temp table for key values, you can use a regular JOIN operation and the query optimizer can optimize the hell out of it. Not to mention that the execution plan can be cached using a temp table, and absolutely CAN'T using a dynamic IN list.

I'm not criticizing, but you're doing yourself a disservice by ignoring temp tables WITHOUT ACTUALLY TRYING THEM FIRST. There is no hard-and-fast rule that temp tables are bad; if they were, they wouldn't even be allowed! Obviously they have some benefits. You always have the option of using something else if temp tables prove to be lacking, but you won't know until you try.

If you post the code you have now, and the exact table structures, we should be able to put something more concrete together.
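[The temp-table-plus-JOIN approach advocated above could look like this sketch; the table and column names (#keys, Orders, myTable) are made up:]

```sql
-- Dump the key values into a local temp table instead of a CSV string.
CREATE TABLE #keys (KeyID int PRIMARY KEY)  -- PK doubles as an index

INSERT INTO #keys (KeyID)
SELECT OrderID FROM Orders WHERE OrderDate > '2002-01-01'

-- A proc called on the same connection can see #keys, so the final
-- query becomes an ordinary, optimizable JOIN instead of an IN list.
SELECT t.*
FROM myTable t
JOIN #keys k ON k.KeyID = t.myColumn

DROP TABLE #keys
```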
chedderslam
Posting Yak Master
223 Posts

Posted - 2002-08-13 : 14:23:33
>> WITHOUT ACTUALLY TRYING THEM FIRST

Actually, we've looked at them. If it was easy, we wouldn't really learn anything from doing it, would we? *grin*

The dynamic SQL you showed should work; we've already used that in many places.

There is actually much more to what we're trying to do, and for many reasons, we need to avoid them. We're trying to make the select dynamic, allowing many "controllers" to adjust the recordset. Not the best performance, but with the CONSTANT scope creep of this project, we really need to be able to modify the where clause that pulls the records as easily as possible. We're getting tons of web-enabled reports requested daily, and we need to put stuff out as fast as possible. With the number of records being pushed through the web, the performance loss is nominal, and the benefits FAR outweigh that hit.

Also, we're using ColdFusion, and I haven't been able to get a straight answer as far as the following (snipped from a post I made somewhere else). Feel free to answer if you know for sure:

I am hoping that some of you better versed in MS-SQL and CF could clear this up for me. Is there an issue with multiple users accidentally hitting the same data set that is contained in either a local (#tbl_name) or global (##tbl_name) temporary table? Books Online says that a local temp table is local to the current "session", but how does a session relate to the connections that CF maintains, i.e. connection pooling?

If we have, say, 5 "select into"s (selecting into a global/local temp table), one after another in a stored proc, and while user 1 is on the 3rd select into, user 2 is on the first select into, and both are "selecting into" the same global/local table, based upon different where clauses, will this cause "bad" data, i.e. data that doesn't meet the where clauses per user, to enter the temp table? And thus the resulting recordset returned would be incorrect?

I know that this has a slim margin of happening, but I'm thinking concurrent access, same as the old insert-new-user/select-max-id problem. I would like to avoid a transactional lock because of the bottleneck; I would prefer a more "proper" approach. My feeling is that this is an issue; my dba doesn't, but both of us don't really know for sure.
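[For reference, the scoping rules the question turns on can be illustrated with this sketch (invented names, not the poster's actual code). A local temp table is private to the session that created it, while a global one is shared:]

```sql
-- Connection/session A:
SELECT OrderID INTO #work FROM Orders WHERE CustomerID = 1

-- Connection/session B, at the same time:
SELECT OrderID INTO #work FROM Orders WHERE CustomerID = 2
-- No conflict: each session gets its own private #work table.

-- A GLOBAL temp table, by contrast, is shared across all sessions:
SELECT OrderID INTO ##work FROM Orders WHERE CustomerID = 1
-- A second session's SELECT ... INTO ##work fails (the table already
-- exists), and reads against ##work see the other user's rows.
```

[Under connection pooling, each pooled connection is a distinct SQL Server session, so two simultaneous requests served by different pooled connections cannot see each other's #tables; ##tables, however, can collide exactly as the poster fears.]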