
 Limit SQL Server log file growth rate


agismark
Starting Member

8 Posts

Posted - 2013-01-15 : 17:17:39
Hello,
I have a large SQL Server database of 2.3 billion rows, about 76 GB in size.
My problem is that I want to convert a column to smalldatetime, but during this operation the .ldf file grows so large that it takes up my entire disk space (it got up to 350 GB) and the query then exits with an error.
Is there any way to keep the .ldf small?

I have already shrunk my .ldf from the database options.
The database recovery model is set to simple.

thank you,
Agis

James K
Master Smack Fu Yak Hacker

3873 Posts

Posted - 2013-01-15 : 18:46:52
Take a look at this thread; people have recommended various approaches: http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=182080

The one I favored (which nobody else seemed to support, for reasons hitherto unknown to me) is as follows:

a) Add a new column that is of smalldatetime type to the table.
b) Copy the data from the existing column to the new column. Be sure to do it in small chunks. Since the recovery model is set to simple, the log file growth will be limited by the largest chunk that you copy.
c) Drop the existing column
d) Rename the new column to the old column name.
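
A rough sketch of steps (a) through (d), using the table and column names that appear later in this thread ([imis position report], [date], [time2]); the chunk size and the WHERE filter are assumptions you would adjust to your schema:

-- (a) Add the new smalldatetime column.
ALTER TABLE [ais].[dbo].[imis position report]
ADD [time2] smalldatetime NULL;
GO

-- (b) Copy in small chunks; only rows not yet copied are touched, so each
--     batch generates a bounded amount of log.
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) [ais].[dbo].[imis position report]
    SET [time2] = CONVERT(smalldatetime, LEFT([date], 19))
    WHERE [time2] IS NULL AND [date] IS NOT NULL;

    IF @@ROWCOUNT = 0 BREAK;

    CHECKPOINT;               -- simple recovery: lets the log space be reused
    WAITFOR DELAY '00:00:01';
END
GO

-- (c) Drop the old column and (d) rename the new one to the old name
--     (run sp_rename from within the ais database).
ALTER TABLE [ais].[dbo].[imis position report] DROP COLUMN [date];
GO
EXEC sp_rename 'dbo.[imis position report].time2', 'date', 'COLUMN';
GO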

agismark
Starting Member

8 Posts

Posted - 2013-01-15 : 20:19:44
Thanks a lot James,
here is the code that seems to work for me:

WHILE (2 > 1)
BEGIN
    BEGIN TRANSACTION
    UPDATE TOP (10000) [ais].[dbo].[imis position report]
    SET [time2] = convert(smalldatetime, left(date, 19))
    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION
        BREAK
    END
    COMMIT TRANSACTION
    -- 1 second delay
    WAITFOR DELAY '00:00:01'
END -- WHILE
GO

agismark
Starting Member

8 Posts

Posted - 2013-01-15 : 22:45:21
This code didn't work in the end.
I got this error message:
Msg 9002, Level 17, State 4, Line 21
The transaction log for database 'ais' is full due to 'ACTIVE_TRANSACTION'.


sodeep
Master Smack Fu Yak Hacker

7174 Posts

Posted - 2013-01-15 : 23:41:55
Is your recovery model simple? Change it to simple and process the rows in chunks; then you can truncate the log at each interval.
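
For example, something along these lines (a minimal sketch; the logical log file name 'ais_log' is an assumption, check sys.database_files for the real one):

-- Switch to the simple recovery model so the log is truncated at each checkpoint.
ALTER DATABASE [ais] SET RECOVERY SIMPLE;

-- Confirm the current recovery model.
SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'ais';

-- After each chunk, a checkpoint lets the engine reuse the log space;
-- DBCC SHRINKFILE can then return unused space to the operating system.
CHECKPOINT;
DBCC SHRINKFILE ('ais_log', 1024);  -- target size in MB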

agismark
Starting Member

8 Posts

Posted - 2013-01-16 : 00:25:34
quote:
Originally posted by sodeep

Is your recovery model simple? Change it to simple and process the rows in chunks; then you can truncate the log at each interval.

Yes, it is simple.

I entered a shrink command but I get this error:
Cannot perform a shrinkdatabase operation inside a user transaction. Terminate the transaction and reissue the statement.
Please check my code and advise a change to it.
My code is:


declare @metritis int;
set @metritis = 5000000;
declare @msg varchar;

WHILE (2 > 1)
BEGIN
    BEGIN TRANSACTION
    UPDATE TOP (5000000) [ais].[dbo].[imis position report]
    SET [time2] = convert(smalldatetime, left(date, 19))
    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION
        BREAK
    END
    COMMIT TRANSACTION
    set @metritis = @metritis + 5000000;
    set @msg = cast(@metritis as varchar)
    RAISERROR (@msg, 10, 1) WITH NOWAIT
    -- 1 second delay
    WAITFOR DELAY '00:00:01'
    DBCC SHRINKDATABASE(ais, 10, TRUNCATEONLY)
END -- WHILE
GO

James K
Master Smack Fu Yak Hacker

3873 Posts

Posted - 2013-01-16 : 06:18:56
I see a couple of problems with the way you have written the query.

1. What the error message said: you cannot shrink the log file inside a transaction. In fact, don't try to shrink it at all yet.
2. The UPDATE statement will update the same 5000000 rows on every iteration, because you don't have a WHERE clause or an ORDER BY clause. So it will probably take the first n rows based on the clustered key and update them repeatedly.

So what you probably need is something like this:

UPDATE [ais].[dbo].[imis position report]
SET [time2] = convert(smalldatetime, left(date, 19))
WHERE YourBigIntPKColumn <= 5000000;
GO

UPDATE [ais].[dbo].[imis position report]
SET [time2] = convert(smalldatetime, left(date, 19))
WHERE YourBigIntPKColumn BETWEEN 5000001 AND 10000000;
GO
... etc
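
If you want to loop over those key ranges automatically rather than running each batch by hand, a sketch like this might help (the batch size and the placeholder column name YourBigIntPKColumn are assumptions):

DECLARE @BatchSize bigint = 5000000;
DECLARE @FromKey   bigint = 1;
DECLARE @MaxKey    bigint;

SELECT @MaxKey = MAX(YourBigIntPKColumn)
FROM [ais].[dbo].[imis position report];

WHILE @FromKey <= @MaxKey
BEGIN
    UPDATE [ais].[dbo].[imis position report]
    SET [time2] = convert(smalldatetime, left(date, 19))
    WHERE YourBigIntPKColumn BETWEEN @FromKey AND @FromKey + @BatchSize - 1;

    SET @FromKey = @FromKey + @BatchSize;

    CHECKPOINT;   -- simple recovery: allow the log space from this batch to be reused
END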


Also, initially at least, I would suggest that you run this manually for a few chunks, one chunk at a time, and monitor the log file growth. You can monitor the log file size and the space used with this:

SELECT * FROM sys.dm_os_performance_counters
WHERE instance_name = 'ais'
AND counter_name IN ('Log File(s) Size (KB)','Log File(s) Used Size (KB)')
When the used size gets close to the file size, issue a CHECKPOINT command (look up CHECKPOINT on MSDN). After you issue the checkpoint, check the log file sizes again.

This should give you an idea of how to manage the log file growth. I never issue manual checkpoint commands myself, instead letting the SQL Server engine, which is much smarter than I am, manage them. In your case it may be required, unless you want to wait for the automatic checkpoints. Issuing a checkpoint can impact the performance of other queries and other databases on the system, so test it to see how it behaves before you do it in production.
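
A small sketch of that check-then-checkpoint idea (the 80% threshold is an arbitrary assumption):

DECLARE @SizeKB bigint, @UsedKB bigint;

SELECT @SizeKB = MAX(CASE WHEN counter_name = 'Log File(s) Size (KB)'      THEN cntr_value END),
       @UsedKB = MAX(CASE WHEN counter_name = 'Log File(s) Used Size (KB)' THEN cntr_value END)
FROM sys.dm_os_performance_counters
WHERE instance_name = 'ais'
AND counter_name IN ('Log File(s) Size (KB)', 'Log File(s) Used Size (KB)');

-- If more than roughly 80% of the log is in use, force a checkpoint so the
-- space can be reused (simple recovery model).
IF @UsedKB > 0.8 * @SizeKB
    CHECKPOINT;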

agismark
Starting Member

8 Posts

Posted - 2013-01-16 : 21:01:04
Thanks James,
it seems that there was something wrong with the whole database, so I created it again. The import will finish tomorrow and I will manipulate the data then.

jackv
Master Smack Fu Yak Hacker

2179 Posts

Posted - 2013-01-17 : 01:52:19
You could also add a second transaction log file on a separate drive, run the queries, and then drop the second transaction log file.
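
A minimal sketch of that, assuming the database is named [ais] and that E:\SQLLogs is on the separate drive (the path, sizes, and the logical name ais_log2 are all placeholders):

-- Add a second log file on the separate drive.
ALTER DATABASE [ais]
ADD LOG FILE (
    NAME = ais_log2,
    FILENAME = 'E:\SQLLogs\ais_log2.ldf',
    SIZE = 10GB,
    FILEGROWTH = 1GB
);

-- ... run the conversion queries here ...

-- The file can only be dropped once it holds no active log records, so issue a
-- checkpoint (simple recovery) or take a log backup first, then remove it.
CHECKPOINT;
ALTER DATABASE [ais] REMOVE FILE ais_log2;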

Jack Vamvas
--------------------
http://www.sqlserver-dba.com