 DBCC DBREINDEX been running for 24 hours


DeepGround
Yak Posting Veteran

73 Posts

Posted - 2011-07-28 : 11:16:51
This seems unusually long, even for a lot of data. Can it really take this long to re-index? What is the hardware bottleneck?

The CPU, disk, and memory usage all look normal.

robvolk
Most Valuable Yak

15732 Posts

Posted - 2011-07-28 : 11:35:11
Check your transaction log file(s), especially their auto-growth increment. If it's a small increment (< 256 MB, or 10 percent) then the repeated auto-growth will impact performance. This will also be affected by disk fragmentation of both the log and data files.
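A sketch of how to check and change those growth settings, assuming a database named myDB with a log file whose logical name is myDB_log (substitute your own names):

```sql
-- Check current size and auto-growth settings for each file in the database
USE myDB;
SELECT name,
       size / 128 AS size_mb,   -- size is reported in 8 KB pages
       CASE is_percent_growth
            WHEN 1 THEN CAST(growth AS varchar(10)) + ' percent'
            ELSE CAST(growth / 128 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files;

-- Switch the log to a fixed-size growth increment
ALTER DATABASE myDB
MODIFY FILE (NAME = N'myDB_log', FILEGROWTH = 256MB);
```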

DeepGround
Yak Posting Veteran

73 Posts

Posted - 2011-07-28 : 11:42:05
It was at 10 percent of 666MB. I changed it to grow in 1000MB increments.

robvolk
Most Valuable Yak

15732 Posts

Posted - 2011-07-28 : 11:55:30
quote:
666MB
Well there's your problem, your database is possessed by Satan


DeepGround
Yak Posting Veteran

73 Posts

Posted - 2011-07-28 : 12:07:43
Well I wish he would hurry up. I don't have an eternity yet.

DeepGround
Yak Posting Veteran

73 Posts

Posted - 2011-07-28 : 14:17:45
The log file is not even growing. And it is still running.

robvolk
Most Valuable Yak

15732 Posts

Posted - 2011-07-28 : 14:26:11
You can't always rely on the file size reported by the SSMS GUI. Run this query periodically:

SELECT size / 128 AS size_mb, name, physical_name
FROM sys.master_files
WHERE DB_NAME(database_id) = 'myDB'

If the log file's size_mb changes, or is no longer 666, then it's definitely growing.

You may want to install sp_whoisactive:

http://sqlblog.com/blogs/adam_machanic/archive/2011/04/02/who-is-active-design-philosophy-a-month-of-activity-monitoring-part-2-of-30.aspx

Run it on your server and you'll find the session that's running the REINDEX, see which table it's currently working on (if it's looping through them), and the percentage complete (though this may not always be populated or updated). There are also columns that tell you whether it is blocking, or being blocked by, other sessions.
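If you can't install sp_whoisactive, a rough equivalent is to query the built-in DMVs directly (a sketch; the LIKE filter assumes the REINDEX shows up as a DBCC command):

```sql
-- Find the session running the DBCC command and check its progress and waits.
-- Note: percent_complete is only populated for certain operations, so it may stay 0.
SELECT r.session_id,
       r.status,
       r.command,
       r.percent_complete,
       r.wait_type,
       r.wait_time,
       r.blocking_session_id,
       t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.command LIKE 'DBCC%';
```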

DeepGround
Yak Posting Veteran

73 Posts

Posted - 2011-07-28 : 14:41:18
The process is showing as Suspended. I allocated more memory and it is still showing as suspended. How do I get it to continue?

robvolk
Most Valuable Yak

15732 Posts

Posted - 2011-07-28 : 14:59:26
Suspended by itself doesn't tell you much. Look at the following columns:

dd hh:mm:ss.mss - how long the session has been running continuously
session_id - the SPID for the REINDEX request. If there are other SPIDs running, see if this one is blocking any of them
blocking_session_id - if it's being blocked, this is the session that's blocking it
wait_info - if it's waiting on disk or some other resource, it will be listed here, along with how long it's been waiting
percent_complete - how far along the process is. It may not be 100% accurate, as it may not refresh if the server is busy
sql_text - click the link in the results to see the SQL statement being executed

In particular, check blocking_session_id to see if something else is blocking it, click the sql_text link to view the entire statement, and keep an eye on percent_complete.
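One way to check for blocking directly, as a sketch using a built-in DMV rather than sp_whoisactive (the SPID below is hypothetical; substitute the session_id of your REINDEX):

```sql
-- Show what (if anything) the session is waiting on, and who is blocking it
SELECT session_id,
       wait_duration_ms,
       wait_type,
       blocking_session_id,
       resource_description
FROM sys.dm_os_waiting_tasks
WHERE session_id = 52;   -- hypothetical SPID; use your REINDEX session's id
```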

DeepGround
Yak Posting Veteran

73 Posts

Posted - 2011-07-28 : 18:33:06
One of my co-workers found that if you shrink the database first then the re-index only takes 10 minutes. He did it with another database and then tried it on this and it worked. Strange.

muratos
Starting Member

22 Posts

Posted - 2011-07-29 : 06:57:12
Log files are made up of VLFs (virtual log files). If the log starts out small and grows in very small increments, hundreds of VLFs can be created, and this fragmentation can hurt performance badly. The fix is to shrink the log to a very small size, then grow it back to an appropriate size (say 4 GB) in a single step, with increments of around 1 GB from then on. This removes most of the tiny VLF chunks. Better still, size the log generously from the start if you know it will fill up.
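A sketch of checking the VLF count and then fixing it, assuming the database is myDB and the log's logical name is myDB_log (DBCC LOGINFO is undocumented but widely used on 2008):

```sql
USE myDB;
-- One row per VLF: a count in the hundreds or thousands suggests log fragmentation
DBCC LOGINFO;

-- Shrink the log as small as possible, then regrow it in one step
DBCC SHRINKFILE (myDB_log, 1);
ALTER DATABASE myDB
MODIFY FILE (NAME = N'myDB_log', SIZE = 4096MB, FILEGROWTH = 1024MB);
```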

Here is a good resource from Kimberly Tripp at SQLskills.com about this:

http://www.sqlskills.com/BLOGS/KIMBERLY/post/Transaction-Log-VLFs-too-many-or-too-few.aspx


DeepGround
Yak Posting Veteran

73 Posts

Posted - 2011-07-29 : 11:33:32
Thanks