
 Changing tables based on system catalog information


AskSQLTeam
Ask SQLTeam Question

0 Posts

Posted - 2003-02-25 : 07:41:13
André writes "Hello,

We are looking for a way to automate the following tasks, using SQL scripts that read the system tables in order to generate other SQL scripts:

- deleting all the indexes on all tables at once (over 200 indexes across 150 tables).

- setting the collation to "case sensitive, accent insensitive" for every column of type varchar, nvarchar, char or nchar, across all tables in the database.

- finding all nullable columns, changing them to NOT NULL, and binding them to a default value (0 for numeric types and '' for varchar and char), again for all tables.

- since nvarchar, nchar and ntext are not supported by our embedded C program (our main access path to the database), we want to change nvarchar to varchar and nchar to char. Here too we are looking for a way to automate the task, if at all possible.

We have tried this approach for the second question:
------------------------------------------------------------------
-- generates alter script
--
-- List all tables/columns of type nchar or nvarchar
-- generates :
--
-- ALTER TABLE <table>
-- ALTER COLUMN <column> <type> COLLATE xx
------------------------------------------------------------------
-- Kopfhammer
-- Jan 2003
------------------------------------------------------------------

-- create a temporary table with all the needed columns
if object_id('tempdb..#itemlist') is not null
    drop table #itemlist

create table #itemlist
(id int not null identity primary key,
tabname sysname,
colname sysname,
typname sysname,
typlen int)

insert #itemlist (tabname, colname, typname, typlen)
select
tab.name,
col.name,
typ.name,
col.prec    -- prec holds the length in characters; length is in bytes (2x for n-types)
from
sysobjects tab
join syscolumns col on tab.id = col.id
join systypes typ on col.xusertype = typ.xusertype  -- xusertype is unique; type is shared by varchar/nvarchar
where tab.type = 'U'
and typ.name in ('nvarchar', 'nchar')
order by tab.name, col.name

-- select * from #itemlist

-- count the rows to process
declare @i int
declare @dbcount int
declare @tabname sysname
declare @colname sysname
declare @typname sysname
declare @typlen int

set @dbcount = (select count(*) from #itemlist)
set @i = 1

-- for each column generate an alter statement
while @i <= @dbcount
begin
select @tabname = tabname, @colname = colname,
       @typname = typname, @typlen = typlen
from #itemlist where id = @i

-- note: Latin1_General_CS_AS is accent *sensitive*;
-- Latin1_General_CS_AI would match the stated "accent insensitive" goal
print 'ALTER TABLE ' + @tabname + char(13) +
'ALTER COLUMN ' + @colname + ' ' +
@typname + '(' + cast(@typlen as varchar(4)) + ')' + char(13) +
'COLLATE Latin1_General_CS_AS' + char(13) + char(13)
set @i = @i + 1
end

This script generates a long list of ALTER statements like this one:

ALTER TABLE abgr_bilanz
ALTER COLUMN abrechnung_teuerung nvarchar(128)
COLLATE Latin1_General_CS_AS

But the ALTER statement does not work on a column that is part of a key or index. So we must first drop all the indexes (the reason for the first question), and it does not seem to be very fast either.

This method feels heavy to us; there must be a better way to do it?

Do you have an idea or a suggestion?

Kind regards

André"

X002548
Not Just a Number

15586 Posts

Posted - 2003-02-25 : 13:17:41
Well...I like to use catalogs from RDBMS's to automate the rather mundane things you are trying to do. That said, the scope of what you're trying to accomplish is very big. You must take into account that you're building an application to modify applications. This in itself is going to require extensive QA (does anyone know what that is anymore?).

Anyway, your best bet for SQL Server is to NOT go against the system catalog tables, but to use the system views that are supplied by SQL Server.

Check out BOL for INFORMATION_SCHEMA
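
For instance, a query along these lines lists every nchar/nvarchar column without touching the base system tables (the column names below are the standard INFORMATION_SCHEMA ones):

```sql
-- List every nchar/nvarchar column in user tables via the portable
-- INFORMATION_SCHEMA views instead of sysobjects/syscolumns.
SELECT TABLE_NAME,
       COLUMN_NAME,
       DATA_TYPE,
       CHARACTER_MAXIMUM_LENGTH,   -- already in characters, not bytes
       COLLATION_NAME,
       IS_NULLABLE
FROM   INFORMATION_SCHEMA.COLUMNS
WHERE  DATA_TYPE IN ('nchar', 'nvarchar')
ORDER  BY TABLE_NAME, COLUMN_NAME
```

Note that CHARACTER_MAXIMUM_LENGTH already reports the length in characters, which sidesteps the byte-doubling problem with syscolumns.length for n-types.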

Good Luck

Brett

8-)


robvolk
Most Valuable Yak

15732 Posts

Posted - 2003-02-25 : 13:55:33
I agree about using the INFORMATION_SCHEMA views instead of going directly against the system tables. It will be easier for you and can be more easily modified to work with other non-SQL Server products should you ever need to.

As far as dropping the indexes: if a table has a clustered index and you drop it, the non-clustered indexes will be rebuilt. If you drop the non-clustered indexes first you avoid this problem, but frankly I think there are much easier ways to do what you're suggesting. You would also need to drop any constraints, including the primary key, before you go about altering the columns.
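
A rough sketch of generating the DROP INDEX statements for the first task, assuming SQL Server 2000's sysindexes (the filters exclude the heap/text rows and the statistics entries that share the table):

```sql
-- Generate DROP INDEX statements for all user-table indexes,
-- non-clustered ones first so the clustered drop does not trigger rebuilds.
SELECT 'DROP INDEX ' + o.name + '.' + i.name
FROM   sysobjects o
JOIN   sysindexes i ON o.id = i.id
WHERE  o.type = 'U'
AND    i.indid BETWEEN 1 AND 254                        -- real indexes only
AND    INDEXPROPERTY(i.id, i.name, 'IsStatistics') = 0  -- skip auto-statistics
AND    INDEXPROPERTY(i.id, i.name, 'IsHypothetical') = 0
ORDER  BY o.name, i.indid DESC                          -- non-clustered first
```

One caveat: indexes that enforce PRIMARY KEY or UNIQUE constraints cannot be removed with DROP INDEX; those need ALTER TABLE ... DROP CONSTRAINT instead.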

Another thing is your C program's lack of unicode support. It is the wrong angle to simply change the data to accommodate the limitations of a particular program. You should really rewrite that C code to accommodate unicode, or simply design the database to use only char/varchar. If you are accepting outside databases for conversion and you simply change the datatypes, you will almost certainly corrupt the data. That's a far more serious problem. You also risk issues by simply replacing nulls with default values without examining possible data integrity consequences. Any null column that has a foreign key to another table may reject a default value, for example.
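
The corruption risk is easy to demonstrate with a small hypothetical round-trip:

```sql
-- Converting Unicode data to a non-Unicode type silently replaces any
-- character that does not exist in the column's code page.
DECLARE @n nvarchar(10), @v varchar(10)
SET @n = N'Gruss ' + NCHAR(1071)  -- NCHAR(1071) is a Cyrillic letter
SET @v = @n                       -- implicit nvarchar -> varchar conversion
PRINT @v                          -- the Cyrillic character is lost on a Latin code page
```

There is no error or warning; the data is simply gone, which is exactly what would happen table-wide if you blindly script nvarchar to varchar.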

Also, it makes no sense to set the collation on each column individually, for each table, if they all need to be the same anyway. Set the collation at the database level.
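
Changing the database default is a one-liner (the database name here is made up), though note the caveat in the comment:

```sql
-- Set the default collation for the whole database.
-- Note: this changes the default that NEW columns inherit; existing
-- columns keep their current collation and still need ALTER COLUMN.
ALTER DATABASE MyDatabase COLLATE Latin1_General_CS_AI
```

So the per-column ALTERs may still be needed for existing data, but new tables will at least come out right automatically.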

As far as how to go about doing this, in all honesty the easiest way would be to generate scripts for all tables, indexes, constraints, triggers and datatypes, and then do a search and replace for "CREATE TABLE" and change it to "ALTER TABLE". You'd also need to place "ALTER COLUMN" in the proper place, so there's a limit on how much you can automate it. What might be faster is to simply leave the generated script alone and execute it in another, empty database. You could then INSERT the data from the original, writing queries to use IsNull or CASE expressions to replace nulls with their defaults. If you move the data over, and THEN add the indexes and constraints, you can greatly improve the performance.
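
The null-replacement step during the copy could look roughly like this (the column names are invented for illustration; abgr_bilanz is the table from the question):

```sql
-- Copy data into the rebuilt table in the new database, replacing NULLs
-- with the desired defaults as the rows move over.
INSERT INTO NewDb.dbo.abgr_bilanz (betrag, abrechnung_teuerung)
SELECT ISNULL(betrag, 0),               -- numeric columns default to 0
       ISNULL(abrechnung_teuerung, '') -- character columns default to ''
FROM   OldDb.dbo.abgr_bilanz
```

Loading into a table with no indexes or constraints and adding them afterwards is much faster than maintaining them row by row during the insert.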

   
