Working with DBCC INDEXDEFRAG
by Bill Wunder
SQL Server fragmentation comes in two basic flavors: physical and logical. Physical fragmentation - traditionally called external fragmentation - happens at the file allocation level. Logical fragmentation - also known as internal fragmentation - occurs primarily at the table and index level.
Physical fragmentation can become problematic on a server with many databases where the databases have been left to auto-grow over time, or where a disk or array has been used both as a file server/store and as a SQL Server physical device container. If you have slow performance that you are unable to explain by other means, it may be wise to take a good hard look at the physical fragmentation on the server. See my recent article Benchmarking Techniques using T-SQL Part 2 - Performance Counters via sysperfinfo for some tips on using the ::fn_virtualfilestats() function to identify slowness at the file level. But even before you attempt any sort of physical fragmentation cleanup, be sure that there are no logical fragmentation issues to resolve. It is a generally accepted best practice to regularly remove logical fragmentation.
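As a quick sketch of that file-level check, the cumulative I/O stall time per file can hint at a physical bottleneck. The database name and file id below are placeholders; on SQL Server 2000 the function is invoked with the :: prefix:

```sql
-- Sketch: cumulative I/O stall time for one file (SQL Server 2000 syntax).
-- 'MyDatabase' and the file id of 1 are placeholders; substitute your own.
DECLARE @dbid int
SET @dbid = DB_ID('MyDatabase')
SELECT DbId
     , FileId
     , NumberReads
     , NumberWrites
     , IoStallMS    -- total ms processes have waited on I/O for this file
FROM ::fn_virtualfilestats(@dbid, 1)
```

A file whose IoStallMS grows much faster than its read/write counts is a candidate for a closer look at the underlying disk.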
Generally, a disk utility is used to identify and repair physical fragmentation. Depending on the hardware and the disk utility used, maintaining a SQL Server to reduce physical fragmentation can be problematic and highly specialized to the environment. Sometimes an approach such as backup, drop, then restore of a database can alleviate fragmentation; sometimes stopping the SQL Server service and allowing the disk utility to do its thing can help; and high-end disk systems may manage their own physical fragmentation with no user intervention required or even possible. A best practices approach to physical fragmentation is to create your databases at the size they need to be; if you do need to grow them, do it once to the expected required size rather than letting them creep up in size over time; and do not allow file server storage and database physical devices to interleave on the same direct attached disk sets.
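A sketch of that "grow it once" practice follows; the database name, logical file name, and target size are all assumptions to adapt to your environment:

```sql
-- Sketch: one deliberate growth to the expected required size,
-- instead of many small autogrow increments that fragment the file.
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_Data, SIZE = 20480MB)  -- logical file name, target size
```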
Logical fragmentation is for the most part the result of rows being inserted into a non-monotonically increasing clustering index, deletes, and updates. Under the covers, SQL Server handles updates in two basic ways. If storage allocation requirements can remain satisfied, the update will occur in place and fragmentation will not occur - or at least will be limited to a few bytes, as in the case where a varchar column is modified to a string with fewer characters. If storage allocation requirements cannot support the update "in place", the update will actually be processed as a delete and an insert. If you want to spend the time looking, you can actually see this behavior happen in the transaction log using the undocumented DBCC LOG() utility. Fortunately, there is a good demonstration of using DBCC LOG() and the behavior of the two types of updates in the sswug.org article "Update Methods Used in SQL Server 7.0" (sorry, I don't know who wrote it so I can't tell you). If an update is processed as a delete/insert, logical fragmentation will occur. Tables clustered on anything other than a monotonically increasing column are likely to see a lower scan density value - approaching 0% - and a higher logical scan fragmentation value over time, while tables clustered on an identity are likely to see a very high scan density value - approaching 100% - and a relatively low logical scan fragmentation over time, and may never require maintenance to assure consistent and satisfactory query performance.
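If you do want to peek at the log yourself, something like the following works on SQL Server 2000. DBCC LOG is undocumented, so its arguments and output are subject to change; treat this strictly as an exploratory sketch with a placeholder database name:

```sql
-- Sketch: dump recent log records for a database (undocumented command).
-- The second argument selects the detail level; 3 returns more columns.
DBCC LOG ('MyDatabase', 3)
-- Look for LOP_MODIFY_ROW operations (an in-place update) versus a
-- LOP_DELETE_ROWS / LOP_INSERT_ROWS pair (an update processed as
-- delete/insert - the kind that produces logical fragmentation).
```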
The Basics of Logical Fragmentation Maintenance
Taken straight from Books Online:

Logical Scan Fragmentation - Percentage of out-of-order pages returned from scanning the leaf pages of an index. This number is not relevant to heaps and text indexes. An out-of-order page is one for which the next page indicated in an IAM is a different page than the page pointed to by the next page pointer in the leaf page.

Scan Density - Best count is the ideal number of extent changes if everything is contiguously linked. Actual count is the actual number of extent changes. The number in scan density is 100 if everything is contiguous; if it is less than 100, some fragmentation exists. Scan density is a percentage.

my note: (IAM = Index Allocation Map)
You can measure logical fragmentation using the DBCC SHOWCONTIG utility. Be aware that DBCC SHOWCONTIG will hold locks on a table while it scans through the pages and extents to measure the fragmentation level of the table. Consequently, for large tables in busy environments even the act of checking for fragmentation can cause blocking durations that are unacceptable. It is possible to measure fragmentation one index at a time to reduce the time that locks are held, or to use the WITH FAST option to perform an abbreviated scan. In my experience the clustered index of a table is likely to be the one that takes the longest to scan and the most important to monitor and maintain in terms of fragmentation. Because of this I find the WITH FAST option - even though it does not provide logical page density results - to be the most useful, because it is the least disruptive method of checking for logical fragmentation. Another useful DBCC SHOWCONTIG option is the ability to return the results of the scan in a tabular format - WITH TABLERESULTS - making it very easy to capture the results to a table and thereby track fragmentation over time. This can be useful in determining the necessary frequency of maintenance and can even help identify table indexing designs that warrant review and revision.
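To illustrate, the least disruptive form described above looks like this; the table and index names are placeholders:

```sql
-- Sketch: abbreviated scan with tabular output for capture to a table
DBCC SHOWCONTIG ('dbo.Orders', 'PK_Orders') WITH FAST, TABLERESULTS
-- WITH FAST skips the page density figures; WITH TABLERESULTS returns
-- rows you can capture via INSERT ... EXEC into a history table for
-- trend analysis, as the procedure later in this article does.
```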
If you run DBCC INDEXDEFRAG on a large logically fragmented table you can expect your transaction log file size to explode during the process. If you use DBCC DBREINDEX, or you use a scripted approach to drop and recreate the indexes on a large table - even if it is hardly fragmented - you can expect your table to be unavailable for other use during the process. If the table is small, none of the defragmentation operations is likely to be a huge concern; but then again, if the table is small, fragmentation is not likely to affect query performance on that table to begin with. It is also worth noting that if a query is conditioned on the clustering index of a table, or is looking for only a small number of rows, the performance impact of fragmentation is much less than if the query is considering a large number of rows - for example, from an index access path with low selectivity or a join operation that returns a large set of rows. It is quite possible that some applications will perform satisfactorily for extended periods of time with no attention to removing logical fragmentation. However, even when performance is not impacted, fragmentation can result in an inordinately large number of pages being consumed by a table relative to the true data set size.
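One way to keep an eye on that log growth while a defrag runs is the sysperfinfo counter; this is the same check the procedure later in this article relies on, with a placeholder database name and an assumed threshold:

```sql
-- Sketch: current transaction log used size for one database (SQL Server 2000)
SELECT instance_name, cntr_value AS LogUsedKB
FROM master.dbo.sysperfinfo
WHERE counter_name = 'Log File(s) Used Size (KB)'
AND instance_name = 'MyDatabase'   -- placeholder database name
-- e.g. pause the defrag and let log backups catch up when LogUsedKB
-- exceeds a threshold such as 4000000 (4GB)
```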
In the case of DBCC INDEXDEFRAG, even if the table is huge, the log growth problems during the defragmentation operation become more severe as the table becomes more fragmented. If the maintenance operation doesn't have to move data, it doesn't have to log the move. It's that simple, but it is also more complicated than that (man, does that sound political). INDEXDEFRAG cannot resolve problems where different indexes are interleaved. For example, if one index allocates the next extent in the IAM sequence, then another table's index comes along to grab the next, then the first table gets the extent after that, you have a situation where a scan of the first table's index will always have to hop over the extent used by the other table. DBCC INDEXDEFRAG cannot resolve this problem; dropping and recreating the index can. I believe you can identify this problem if you observe that the actual count component of the scan density does not closely approximate the best count component after a DBCC INDEXDEFRAG.
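Using the fraglist history table that the maintenance procedure later in this article keeps, that check might look like the following; the multiplier is an assumption to tune for your environment:

```sql
-- Sketch: indexes where actual extent switches remain well above best count
-- even after a defrag (status 7 = post-defrag showcontig in my schema),
-- suggesting interleaved extents that only a rebuild can untangle
SELECT DatabaseName, ObjectName, IndexName, BestCount, ActualCount, ScanDensity
FROM admin.dbo.fraglist
WHERE status = 7
AND ActualCount > BestCount * 2  -- threshold is an assumption; tune to taste
```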
For DBCC DBREINDEX, or for dropping and recreating indexes, the impact of the maintenance operation is likely to be about the same regardless of the fragmentation state of the table. The time to run a DBCC DBREINDEX, or to drop and recreate the clustered index of a table, seems to vary only slightly with the level of fragmentation of the clustered index. Of course, if it's not fragmented, why bother to run the maintenance operation at all?
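For completeness, a single-index rebuild looks like this; the names and fill factor are placeholders, and remember the table is effectively unavailable while it runs:

```sql
-- Sketch: offline rebuild of one index; holds locks for the duration
DBCC DBREINDEX ('MyDB.dbo.Orders', 'PK_Orders', 90)  -- 90 = fill factor
```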
The Plan for Managing Logical Fragmentation
The Database Maintenance Plan Wizard is one possible tool you can use to maintain indexes. The wizard is most assuredly a better-than-nothing tool best suited to a DBA-less database environment. Only DBCC DBREINDEX will be used by the wizard, and it supports only database-level granularity: all or none. For me it doesn't provide the flexibility necessary to assure my data remains available, nor does it allow me to monitor and track fragmentation patterns in the data set.
Instead, I prefer to use a stored procedure that I run from the SQL Agent scheduler and that you can investigate here: admin db - index (fragmentation) maintenance.
				 
				use admin
GO
SET QUOTED_IDENTIFIER OFF 
GO
SET ANSI_NULLS ON 
GO
if exists (select * from dbo.sysobjects 
           where id = object_id(N'[dbo].[IndexMaintenanceForDB]') 
           and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[IndexMaintenanceForDB]
GO
CREATE procedure dbo.IndexMaintenanceForDB
	@DatabaseName varchar(128)
	, @NotifyOnFail varchar(100) = '' 
	, @LogUsedThresholdKB int = 4000000 -- 4GB
as
/******************************************************************************
* dbo.indexMaintenanceForDB.PRC
* Bill Wunder	
* Date: 5/24/2002
*
* Outline: Identify which indexes need to be maintained and which maintenance 
*   operation to use. If scan density is less than 90% or logical scan 
*   fragmentation is more than 10% maintain the index. If index has over 10000
*   pages use DBCC INDEXDEFRAG to maintain otherwise use DBCC DBREINDEX. If 
*   table has a clustered index that will be maintained by a DBCC DBREINDEX do
*   not maintain other indexes because they will be rebuilt when the clustered
*   index is reindexed. Pause any active DBCC INDEXDEFRAG if a backup or 
*   restore is running or if log used size has exceeded @LogUsedThresholdKB.
*   Keep all fragmentation study history for trend analysis.
*
* Statistic                   Description 
* Count Pages                 Number of pages in the table or index. 
* Extents Scanned             Number of extents in the table or index. 
* Extent Switches             Number of times the DBCC statement moved from 
*                             one extent to another while it traversed the 
*                             pages of the table or index. 
* Scan Density                [Best Count: Actual Count] Best count is the 
*                             ideal number of extent changes if everything is
*                             contiguously linked. Actual count is the actual
*                             number of extent changes. The number in scan
*                             density is 100 if everything is contiguous; if 
*                             it is less than 100, some fragmentation exists.
*                             Scan density is a percentage. 
* Logical Scan Fragmentation 	Percentage of out-of-order pages returned from 
*                             scanning the leaf pages of an index. This number 
*                             is not relevant to heaps and text indexes. An 
*                             out of order page is one for which the next page
*                             indicated in an IAM is a different page than the 
*                             page pointed to by the next page pointer in the 
*                             leaf page. 
*!---->following not included in DBCC SHOWCONTIG() WITH FAST mode<-----!
* CountRows                   Number of rows in the table or index.
* Avg. Pages per Extent       Number of pages per extent in the page chain. 
* Extent Scan Fragmentation   Percentage of out-of-order extents in scanning 
*                             the leaf pages of an index. This number is not 
*                             relevant to heaps. An out-of-order extent is one 
*                             for which the extent containing the current page 
*                             for an index is not physically the next extent 
*                             after the extent containing the previous page 
*                             for an index. 
* Avg. Bytes free per page    Average number of free bytes on the pages 
*                             scanned. The higher the number, the less full 
*                             the pages are. Lower numbers are better. This 
*                             number is also affected by row size; a large 
*                             row size can result in a higher number. 
* Avg. Page density (full)    Average page density (as a percentage). This 
*                             value takes into account row size, so it is a 
*                             more accurate indication of how full your pages
*                             are. The higher the percentage, the better. 
*
* status 
*   0 - under consideration
*   1 - defrag
*   2 - defrag in progress
*   3 - defrag complete
*   4 - reindex
*   5 - reindex in progress
*   6 - reindex complete	
*   7 - After defrag showcontig
*   8 - manually populate showcontig
*   9 - eliminated from consideration
*   10 - 19 - historical 
*
* usage:		
    EXECUTE admin.dbo.indexMaintenanceForDB 'admin'
* Notes:		
*			
* Modifications   
* bw 04-25-03 Change showcontig to fast mode and make defrags interruptable 
*             based on log space                        
* bw 04-27-03 Speed up the showcontig and add a duration of showcontig column
* bw 12-22-03 Add logic to reindex if LogicalFrag > 10 in addition to case 
*             where ScanDensity < 90
* bw 03-13-04 Add showcontig to fraglist immediately AFTER maint to see how it 
*             went. Run indexdefrags through called proc and manage log growth
*             try not to run INDEXDEFRAG during backup operation on the server
*******************************************************************************/
declare @SQLStr nvarchar(4000)
 , @ObjectOwner varchar(128)
 , @ObjectName varchar(128)
 , @IndexName varchar(128)
 , @status int
 , @DBCCStartDt datetime
 , @JobCategory varchar(128)
 , @JobName varchar(1024)   
 , @JobDesc varchar(128)
 , @JobCmd varchar(2048)
 , @JobCmd2 varchar(2048)
 , @JobOwner sysname
 , @OutputFile varchar(1024)
 , @rc int
 , @ec int
 , @ErrDesc varchar(1024)
 , @Err varchar(300)
 , @IsRunnable int -- output flag from admin.dbo.IsAgentJobRunnable
-- Verify requested db does exist, if not then notify and keep moving.
if db_id(@DatabaseName) is null
 begin
  select @Err = 'Database (' + @DatabaseName 
              + ') does not exist.  Resuming processing on next DB.'
  exec sysmon.dbo.safe_sendmail @recipients = @NotifyOnFail 
  return
 end
-- create the temp table once for reuse
CREATE TABLE #fraglist 
 ( ObjectName varchar (255)
 , ObjectId INT
 , IndexName varchar (255)
 , IndexId INT
 , Lvl INT
 , CountPages INT
 , CountRows INT
 , MinRecSize INT
 , MaxRecSize INT
 , AvgRecSize INT
 , ForRecCount INT
 , Extents INT
 , ExtentSwitches INT
 , AvgFreeBytes INT
 , AvgPageDensity INT
 , ScanDensity DECIMAL
 , BestCount INT
 , ActualCount INT
 , LogicalFrag DECIMAL
 , ExtentFrag DECIMAL)
-- Table structure will work with or without the WITH FAST showcontig option
if object_id('admin.dbo.fraglist','U') is null
 CREATE TABLE admin.dbo.fraglist 
  ( DatabaseName varchar (128) NOT NULL
  , ObjectOwner varchar(128) NOT NULL		
  , ObjectName varchar (128) NOT NULL
  , ObjectId int NULL
  , IndexName varchar(128) NOT NULL
  , IndexId int NULL
  , Lvl int NULL
  , CountPages int NULL
  , CountRows int NULL
  , MinRecSize int NULL
  , MaxRecSize int NULL
  , AvgRecSize int NULL
  , ForRecCount int NULL
  , Extents int NULL
  , ExtentSwitches int NULL
  , AvgFreeBytes int NULL
  , AvgPageDensity int NULL
  , ScanDensity decimal NULL
  , BestCount int NULL
  , ActualCount int NULL
  , LogicalFrag decimal NULL
  , ExtentFrag decimal NULL
  , RecCreatedDt datetime NOT NULL
    constraint dft__fraglist__RecCreatedDt DEFAULT (getdate())
  , Status tinyint NULL
    constraint dft__fraglist__Status DEFAULT (0)
  , SecondsToMaintain int	NULL
  , SecondsToShowContig int NULL
  , constraint 
    pkc_fraglist__DbName__ObjOwner__ObjName__IdxName__RecCreatedDt 
    primary key clustered 
     ( DatabaseName
     , ObjectOwner
     , ObjectName
     , IndexName
     , RecCreatedDt) on [PRIMARY]
    ) on [PRIMARY]
-- build a utility proc to handle showcontig population of fraglist
set @SQLStr = 
'Create procedure dbo.GetShowContigForIndex
   @DatabaseName varchar(128)
 , @ObjectOwner varchar(128) = ''dbo''
 , @ObjectName varchar(128) = null
 , @IndexName varchar(128) = null
 , @PlanMaintenance tinyint = 8
as
/******************************************
* see IndexMaintenanceForDB
*
* status 
*  0 - under consideration
*  1 - defrag
*  2 - defrag in progress
*  3 - defrag complete
*  4 - reindex
*  5 - reindex in progress
*  6 - reindex complete	
*  7 - After defrag showcontig       (no-op)
*  8 - manually populate showcontig  (no-op)
*  9 - eliminated from consideration (no-op)
*  10 - 19 - historical 
*******************************************/
declare @SQLStr nvarchar(4000)
 , @ShowContigStartDt datetime
-- table should already exist unless call is a one-off
if object_id(''tempdb.dbo.#fraglist'',''U'') is null
 CREATE TABLE #fraglist 
  ( ObjectName varchar (255)
  , ObjectId INT
  , IndexName varchar (255)
  , IndexId INT
  , Lvl INT
  , CountPages INT
  , CountRows INT
  , MinRecSize INT
  , MaxRecSize INT
  , AvgRecSize INT
  , ForRecCount INT
  , Extents INT
  , ExtentSwitches INT
  , AvgFreeBytes INT
  , AvgPageDensity INT
  , ScanDensity DECIMAL
  , BestCount INT
  , ActualCount INT
  , LogicalFrag DECIMAL
  , ExtentFrag DECIMAL)
set @SQLStr = ''use '' 
            + @DatabaseName 
            + '' DBCC SHOWCONTIG(['' 
            + @ObjectName 
            + ''],['' 
            + @IndexName 
            + '']) WITH TABLERESULTS, FAST''
set @ShowContigStartDt = getdate()
insert #fraglist 
exec sp_executesql @SQLStr
insert admin.dbo.fraglist 
 ( DatabaseName
 , ObjectOwner
 , ObjectName
 , ObjectId
 , IndexName
 , IndexId
 , Lvl
 , CountPages
 , CountRows
 , MinRecSize
 , MaxRecSize
 , AvgRecSize
 , ForRecCount
 , Extents
 , ExtentSwitches
 , AvgFreeBytes
 , AvgPageDensity
 , ScanDensity
 , BestCount
 , ActualCount
 , LogicalFrag
 , ExtentFrag
 , Status
 , SecondsToShowContig)
select @DatabaseName
 , @ObjectOwner
 , ObjectName
 , ObjectId
 , IndexName
 , IndexId
 , Lvl
 , CountPages
 , CountRows
 , MinRecSize
 , MaxRecSize
 , AvgRecSize
 , ForRecCount
 , Extents
 , ExtentSwitches
 , AvgFreeBytes
 , AvgPageDensity
 , ScanDensity
 , BestCount
 , ActualCount
 , LogicalFrag
 , ExtentFrag
 , Status = 
   case	
    when @PlanMaintenance = 0 then
     case
      when (ObjectName = ''fraglist''
            or (ScanDensity > 90
                and LogicalFrag < 10) -- healthy per the outline thresholds
            or (Indexid > 1 
                and exists(select 1 
                           from admin.dbo.fraglist 
                           where DatabaseName = @DatabaseName
                           and ObjectOwner = @ObjectOwner
                           and ObjectName = @ObjectName
                           and IndexId = 1 
                           and status = 4))) -- reindex clustered does all
       then 9  -- noop
      when CountPages < 10000
       then 4  -- reindex 
       else 1  -- defrag
       end
   else @PlanMaintenance
   end 
 , datediff(second,@ShowContigStartDt,getdate())
from #fraglist 
delete #fraglist'
if object_id('dbo.GetShowContigForIndex','P') is not null
 begin
  exec @rc = sp_executesql N'drop procedure dbo.GetShowContigForIndex'
  set @ec = @@error
  if @rc <> 0 or @ec <> 0
   begin
    set @Err = 'drop procedure dbo.GetShowContigForIndex failed'
    goto ErrorHandler
   end
 end
exec @rc = sp_executesql @SQLStr 
set @ec = @@error
if @rc <> 0 or @ec <> 0
 begin
  set @Err = 'create procedure dbo.GetShowContigForIndex failed'
  goto ErrorHandler
 end
-- add components to prevent log growth for all INDEXDEFRAGs
-- do not DEFRAG if a backup or restore is active
set @SQLStr = 
'CREATE procedure dbo.DefragLargeTables
 @DatabaseName varchar(128)
as
/*
see IndexMaintenanceForDB
*/
declare @SQLStr nvarchar(4000)
 , @ObjectOwner varchar(128) 
 , @ObjectName varchar(255)
 , @IndexName varchar(255)
 , @status int
 , @DBCCStartDt datetime
-- one at a time until someone says stop, 2s before 1s
select top 1 @DatabaseName = Databasename
 , @ObjectOwner = ObjectOwner
 , @ObjectName = ObjectName
 , @IndexName = IndexName
 , @status = status
from admin.dbo.fraglist
where status in (1,2)
and DatabaseName = @DatabaseName
order by status desc
-- mark as current maintenance candidate if is a 1
update admin.dbo.fraglist		
set status = 2
where DatabaseName = @DatabaseName
and ObjectOwner = @ObjectOwner
and ObjectName = @ObjectName
and IndexName = @IndexName
and status = 1
set @SQLStr = ''DBCC INDEXDEFRAG('''''' 
               + @DatabaseName 
               + '''''',''''''
               + @ObjectOwner 
               + ''.''
               + @ObjectName 
               + '''''',''''''
               + @IndexName 
               + '''''') ''
               + ''UPDATE STATISTICS ['' 
               + @DatabaseName 
               + ''].['' 
               + @ObjectOwner 
               + ''].['' + @ObjectName 
               + ''] ['' 
               + @IndexName 
               + '']''
set @DBCCStartDt = getdate()
exec sp_executesql @SQLStr 
-- mark as maintenance complete
update admin.dbo.fraglist		
set status = 3
 , SecondsToMaintain = Datediff(second,@DBCCStartDt,getdate()) 
where DatabaseName = @DatabaseName
and ObjectOwner = @ObjectOwner
and ObjectName = @ObjectName
and IndexName = @IndexName
and status = 2
-- add a fraglist row to show the table/index state after the maintenance
exec admin.dbo.GetShowContigForIndex 
   @DatabaseName = @DatabaseName
 , @ObjectOwner = @ObjectOwner
 , @ObjectName = @ObjectName
 , @IndexName = @IndexName
 , @PlanMaintenance = 7'
set @JobCategory = 'Database Maintenance'
set @JobName = 'w Defrag large ' + @DatabaseName + ' tables'
set @JobDesc = 'defrag indexes - stopped if log space reaches threshold size.'  
set @JobCmd = 'EXEC admin.dbo.DefragLargeTables ' 
            + char(39) + @DatabaseName + char(39)
set @JobCmd2 = 'exec admin.dbo.log_job_error ' 
             + char(39) + @NotifyOnFail + char(39) 
set @JobOwner = 'sa'
set @OutputFile = 'F:\MSSQL\Log\' + @JobName + '.out'
		
if object_id('dbo.DefragLargeTables','P') is not null
 begin
  exec @rc = sp_executesql N'drop procedure dbo.DefragLargeTables'
  set @ec = @@error
  if @rc <> 0 or @ec <> 0
   begin
    set @Err = 'drop procedure dbo.DefragLargeTables failed'
    goto ErrorHandler
   end
 end
exec @rc = sp_executesql @SQLStr 
set @ec = @@error
if @rc <> 0 or @ec <> 0
 begin
  set @Err = 'create procedure dbo.DefragLargeTables failed'
  goto ErrorHandler
 end
if exists (select 1 from msdb.dbo.sysJobs where name = @JobName)  
 begin
  exec @rc = msdb.dbo.sp_delete_job @job_name = @JobName
  set @ec = @@error
  if @rc <> 0 or @ec <> 0
   begin
    set @Err = 'sp_delete_job failed'
    goto ErrorHandler
   end
 end
if not exists (select 1 
               from msdb.dbo.syscategories 
               where name = @JobCategory)  
 begin
  exec @rc = msdb.dbo.sp_add_category  
     @class = 'Job'
   , @type = 'local'
   , @name = @JobCategory
 set @ec = @@error
 if @rc <> 0 or @ec <> 0
   begin
    set @Err = 'sp_add_category failed'
    goto ErrorHandler
   end
 end
	
exec @rc = msdb.dbo.sp_add_job 
   @job_name = @JobName
 , @owner_login_name = @JobOwner
 , @description = @JobDesc
 , @category_name = @JobCategory
 , @enabled = 1
 , @notify_level_email = 0
 , @notify_level_page = 0
 , @notify_level_netsend = 0
 , @notify_level_eventlog = 2
 , @delete_level= 0
set @ec = @@error
if @rc <> 0 or @ec <> 0
 begin
  set @Err = 'sp_add_job failed'
  goto ErrorHandler
 end
exec @rc = msdb.dbo.sp_add_jobstep 
   @job_name = @JobName
 , @step_id = 1
 , @step_name = 'Defrag large tables'
 , @command = @JobCmd
 , @database_name = 'admin'
 , @database_user_name = ''
 , @subsystem = 'TSQL'
 , @cmdexec_success_code = 0
 , @flags = 4
 , @retry_attempts = 0
 , @retry_interval = 1
 , @output_file_name = @OutputFile
 , @on_success_step_id = 0
 , @on_success_action = 1
set @ec = @@error
if @rc <> 0 or @ec <> 0
 begin
  set @Err = 'sp_add_jobstep 1 failed'
  goto ErrorHandler
 end
exec @rc = msdb.dbo.sp_add_jobstep 
   @job_name = @JobName
 , @step_id = 2
 , @step_name = 'failure notification'
 , @command = @JobCmd2
 , @database_name = 'admin'
 , @database_user_name = ''
 , @subsystem = 'TSQL'
 , @cmdexec_success_code = 0
 , @flags = 2
 , @retry_attempts = 0
 , @retry_interval = 1
 , @output_file_name = @OutputFile
 , @on_success_step_id = 0
 , @on_success_action = 2
 , @on_fail_step_id = 0
 , @on_fail_action = 2
set @ec = @@error
if @rc <> 0 or @ec <> 0
 begin
  set @Err = 'sp_add_jobstep 2 failed'
  goto ErrorHandler
 end
-- add step 1 notification link after creating step 2 to avoid warning message
exec @rc = msdb.dbo.sp_update_jobstep 
   @job_name = @JobName
 , @step_id = 1
 , @on_fail_step_id = 2
 , @on_fail_action = 4
set @ec = @@error
if @rc <> 0 or @ec <> 0
 begin
  set @Err = 'sp_update_jobstep 1 failed'
  goto ErrorHandler
 end
exec @rc = msdb.dbo.sp_add_jobserver 
   @job_name = @JobName
 , @server_name = '(local)' 
set @ec = @@error
if @rc <> 0 or @ec <> 0
 begin
  set @Err = 'sp_add_jobserver failed'
  goto ErrorHandler
 end
exec @rc = msdb.dbo.sp_update_job 
   @job_name = @JobName
 , @start_step_id = 1 
set @ec = @@error
if @rc <> 0 or @ec <> 0
 begin
  set @Err = 'sp_update_job failed'
  goto ErrorHandler
 end
-- end -- add components for full recovery db
set nocount on
-- delete archive over 90 days
delete admin.dbo.fraglist
where DatabaseName = @DatabaseName
and RecCreatedDt < getdate() - 90
-- archive previous run
update admin.dbo.fraglist
set status = status + 10
where DatabaseName = @DatabaseName
and status < 10
-- fragmentation study (find 1s and 4s)
-- maintain sequential index processing order
select @SQLStr = 'use ' + @DatabaseName + '
declare @ObjectId int
 , @IndexId int
 , @DatabaseName varchar(128)
 , @ObjectOwner varchar(128)
 , @ObjectName varchar(128)
	, @IndexName varchar(128)
set nocount on
select @ObjectId = min(id) 
from dbo.sysobjects 
where type = ''U''
while @ObjectId is not null
 begin
  Select @IndexId = min(indid) 
  from dbo.sysindexes 
  where id = @ObjectId 
  and indid > 0 
  and indid < 255
  while @IndexId is not null
   begin
    select @DatabaseName = db_name()
     , @ObjectOwner = user_name(o.uid)
     , @ObjectName = o.name
     , @IndexName = i.name
    from dbo.sysobjects o
    join dbo.sysindexes i
    on o.id = i.id
    where o.id = @ObjectId
    and i.indid = @IndexId
    -- determine if the index ought to be maintained
    exec admin.dbo.GetShowContigForIndex 
       @DatabaseName = @DatabaseName
     , @ObjectOwner = @ObjectOwner
     , @ObjectName = @ObjectName
     , @IndexName = @IndexName
     , @PlanMaintenance = 0
    Select @IndexId = min(indid) 
    from dbo.sysindexes 
    where id = @ObjectId 
    and indid > @IndexId 
    and indid < 255
   end
  select @ObjectId = min(id) 
  from dbo.sysobjects 
  where type = ''U'' 
  and id > @ObjectId
 end'
exec @rc = sp_executesql @sqlStr
set @ec = @@error
if @rc <> 0 or @ec <> 0
 begin
  set @Err = 'DBCC showcontig failed processing'
  goto ErrorHandler
 end
-- perform all identified required maintenance
-- first reindex all the small tables
declare indexesToMaintainList insensitive cursor 
for
select ObjectOwner, ObjectName, IndexName, status
from admin.dbo.fraglist
where DatabaseName = @DatabaseName
and status = 4
open indexesToMaintainList
fetch next 
from indexesToMaintainList 
into @ObjectOwner, @ObjectName, @IndexName, @status
while @@fetch_status = 0
 begin
  -- mark as current maintenance candidate
  update admin.dbo.fraglist		
  set status = 5
  where DatabaseName = @DatabaseName
  and ObjectOwner = @ObjectOwner
  and ObjectName = @ObjectName
  and IndexName = @IndexName
  and status = 4
  set @SQLStr = 'DBCC DBREINDEX(''' 
              + @DatabaseName 
              + '.' 
              + @ObjectOwner 
              + '.' 
              + @ObjectName 
              + ''',''' 
              + @IndexName 
              + ''')'					
  set @DBCCStartDt = getdate()
  exec @rc = sp_executesql @sqlstr 
  set @ec = @@error
  if @rc <> 0 or @ec <> 0
   begin
    set @Err = 'Defragmentation operation "' + @SQLStr + '" failed'
    goto ErrorHandler
   end
  -- mark as maintenance complete
  update admin.dbo.fraglist		
  set status = 6
   , SecondsToMaintain = Datediff(second,@DBCCStartDt, getdate())
  where DatabaseName = @DatabaseName
  and ObjectOwner = @ObjectOwner
  and ObjectName = @ObjectName
  and IndexName = @IndexName
  and status = 5
		
  exec admin.dbo.GetShowContigForIndex 
     @DatabaseName = @DatabaseName
   , @ObjectOwner = @ObjectOwner
   , @ObjectName = @ObjectName
   , @IndexName = @IndexName
   , @PlanMaintenance = 7
 	
  fetch next 
  from indexesToMaintainList 
  into @ObjectOwner, @ObjectName, @IndexName, @status
 end
close indexesToMaintainList
deallocate indexesToMaintainList
-- now INDEXDEFRAG all the larger tables
while exists (select 1 from admin.dbo.fraglist
              where DatabaseName = @DatabaseName
              and status in (1,2))
 begin
  if (select cntr_value from master..sysperfinfo
      where instance_name = @DatabaseName
      and counter_name = 'Log File(s) Used Size (KB)') > @LogUsedThresholdKB
   begin
    if  exists (select 1 
                from master.dbo.sysprocesses p with(nolock)
                join msdb.dbo.sysjobs j with(nolock)
                on substring(p.program_name
                            , charindex('0x', p.program_name) + 18
                            , 16) 
                 = substring(replace(j.job_id, '-',''),17,16) 
                where j.name = @JobName)
    or exists (select 1 
               from master.dbo.sysprocesses with(nolock)
               where (cmd like 'BACKUP%'
                      or cmd like 'RESTORE%')
               and dbid = db_id(@DatabaseName))
     begin
      exec admin.dbo.IsAgentJobRunnable @IsRunnable OUTPUT, @JobName, @JobOwner				
      if  @IsRunnable = 0
       begin
        -- stop the job
        exec @rc = msdb.dbo.sp_stop_job @JobName
        set @ec = @@error
        if @rc <> 0 or @ec <> 0
         begin
          set @Err = 'sp_stop_job ' + @JobName + ' failed'
          goto ErrorHandler
         end
       end
     end     
     -- Set all job steps to append to log file
     exec @rc = msdb.dbo.sp_update_jobstep 
        @job_name = @JobName
      , @step_id = 1
      , @flags = 2
     set @ec = @@error
     if @rc <> 0 or @ec <> 0
      begin
       set @Err = 'sp_update_jobstep 1 to append log file failed'
       goto ErrorHandler
      end
    --give log backup a chance to catch up
    waitfor delay '00:05:00'
   end
  else -- enough log space to keep going
   begin
    -- start the job if it's not running 
    if not exists (select 1 from master.dbo.sysprocesses p
                   join msdb.dbo.sysjobs j
                   on substring(p.program_name
                               , charindex('0x', p.program_name) + 18
                               , 16) 
                    = substring(replace(j.job_id, '-',''),17,16) 
                   where j.name = @JobName)
     begin
      exec admin.dbo.IsAgentJobRunnable @IsRunnable OUTPUT, @JobName, @JobOwner				
      if  @IsRunnable = 1
       begin
        exec @rc = msdb.dbo.sp_start_job @JobName
        set @ec = @@error
        if @rc <> 0 or @ec <> 0
         begin
          set @Err = 'sp_start_job ' + @JobName + ' failed'
          goto ErrorHandler
         end
       end
     end
    -- wait a minute before checking again	
    waitfor delay '00:01:00'
   end
 end
-- cleanup
if exists (select 1 from msdb.dbo.sysJobs where name = @JobName)  
 begin
  exec @rc = msdb.dbo.sp_delete_job @job_name = @JobName
  set @ec = @@error
  if @rc <> 0 or @ec <> 0
   begin
    set @Err = 'sp_delete_job during cleanup failed'
    goto ErrorHandler
   end
 end
return
ErrorHandler:
set @ErrDesc = 'admin.dbo.IndexMaintenanceForDB failed ' 
             + '| Return Code %d ' 
             + '| Error Code %d ' 
             + '| @DatabaseName = %s ' 
             + '| @ObjectOwner = %s ' 
             + '| @ObjectName = %s ' 
             + '| @IndexName = %s ' 
             + '| Error: %s' 
raiserror 
 ( @ErrDesc
 , 16
 , 1
 , @rc
 , @ec
 , @DatabaseName
 , @ObjectOwner
 , @ObjectName
 , @IndexName
 , @Err)
return -1
GO
SET QUOTED_IDENTIFIER OFF 
GO
SET ANSI_NULLS ON 
GO
				 
				As a basic rule of thumb I find 
				that re-indexing once a week is a practical starting place for 
				index maintenance. (FWIW I also update statistics nightly even 
				though I run my databases with auto update statistics enabled.) 
				If an index has more than 10% logical scan fragmentation or 
				less than 90% scan density I will maintain that index by 
				default. If an index has fewer than 10,000 pages I use DBCC 
				DBREINDEX to reduce or remove fragmentation. It's probably not 
				even necessary to maintain such small indexes, but since I have 
				thousands of tables I find that regularly maintaining even the 
				smallest indexes helps to stabilize my storage requirements. 
				Generally speaking, an index of 10,000 pages can be DBCC 
				DBREINDEX-ed on one of my Compaq DL580 (G1) production servers 
				in 30 seconds or less, so that number works well for me at this 
				time.  
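				As a sketch only, that selection rule might look like the 
				following. The fraglist column names, the object names, and the 
				hard-coded thresholds are illustrative assumptions, not the 
				exact schema used by the procedures in this article:

```sql
-- Illustrative sketch of the 10% / 90% / 10,000-page rule described above.
-- The admin.dbo.fraglist column names and all object names are assumptions.
declare @LogicalFrag decimal(5,2)
      , @ScanDensity decimal(5,2)
      , @Pages int

select @LogicalFrag = LogicalFragmentation
     , @ScanDensity = ScanDensity
     , @Pages = Pages
from admin.dbo.fraglist
where DatabaseName = 'MyDatabase'
  and ObjectName = 'MyTable'
  and IndexName = 'MyIndex'

if @LogicalFrag > 10.0 or @ScanDensity < 90.0
 begin
  if @Pages < 10000
   -- small index: an off line rebuild completes in seconds
   dbcc dbreindex ('MyDatabase.dbo.MyTable', MyIndex)
  else
   -- large index: defragment online so the data stays available
   dbcc indexdefrag (MyDatabase, MyTable, MyIndex)
 end
```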
				For anything over 10,000 pages I 
				use DBCC INDEXDEFRAG so that the data can remain available 
				during the maintenance operation. As I mentioned earlier, DBCC 
				INDEXDEFRAG can cause gigantic bursts of log growth. To 
				overcome the potentially crippling effects of such growth I run 
				all DBCC INDEXDEFRAG operations through a temporary SQL Agent 
				job. This works by allowing the main index maintenance job to 
				watch the "Log File(s) Used Size (KB)" counter in the 
				master.dbo.sysperfinfo table and turn the temporary job on and 
				off as appropriate to keep the log file from growing out of 
				control.  
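				A minimal check of that counter might look like the following; 
				the 500,000 KB threshold and the database name are illustrative 
				assumptions:

```sql
-- Read the used log size for one database from sysperfinfo (SQL Server 2000).
-- The 500,000 KB (~500 MB) threshold is an illustrative assumption.
declare @LogUsedKB int

select @LogUsedKB = cntr_value
from master.dbo.sysperfinfo
where counter_name = 'Log File(s) Used Size (KB)'
  and instance_name = 'MyDatabase'

if @LogUsedKB > 500000
 print 'pause the temporary INDEXDEFRAG job until log backups catch up'
else
 print 'safe to let the temporary INDEXDEFRAG job continue'
```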
				I check the fragmentation using
				 
				    DBCC SHOWCONTIG (TableName, 
				IndexName) WITH TABLERESULTS, FAST 
				both before and after the index 
				is maintained. This allows me to see not only how fragmented the 
				index was before it was maintained but also how good a job 
				the maintenance operation did of cleaning things up for me. I 
				didn't always check the fragmentation after maintenance, but I 
				heard a few comments questioning the efficacy of DBCC 
				INDEXDEFRAG and decided I'd better keep a closer eye on things. 
				As it turns out, the DBCC SHOWCONTIG after defragmentation 
				indicates that DBCC INDEXDEFRAG does an excellent job removing 
				fragmentation at the page level. Spot checks also suggest that 
				scan density is well maintained by the online DBCC INDEXDEFRAG.
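				The before-and-after check can be sketched like this, with 
				placeholder object names:

```sql
-- Measure fragmentation, defragment online, then measure again.
-- Object names are placeholders; the FAST option skips the more
-- expensive leaf-level statistics so the checks themselves stay cheap.
dbcc showcontig ('MyTable', 'MyIndex') with tableresults, fast

dbcc indexdefrag (MyDatabase, MyTable, MyIndex)

dbcc showcontig ('MyTable', 'MyIndex') with tableresults, fast
```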
				 
				One final comment concerning 
				index maintenance is not really about indexes at all (there goes 
				that political nonsense again). When a table has no clustered 
				index there is no way to maintain the fragmentation. 
				Fortunately, most rows are simply thrown at the end of the heap 
				when there is no clustered index. Unfortunately, before the 
				rows get thrown at the end of the heap an insert will always 
				search the entire heap for a hole - fragmentation - to stick the 
				new row in before it throws it at the end of the heap. Not only 
				can you not maintain the holes in the heap, but there is also a 
				performance penalty for all inserts into a heap.  
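				One way to find such heaps in a SQL Server 2000 database is to 
				look for the sysindexes row with indid = 0 that every heap 
				carries:

```sql
-- List user tables with no clustered index (heaps) in the current database.
-- In SQL Server 2000 a heap is marked by a sysindexes row with indid = 0;
-- a table with a clustered index has an indid = 1 row instead.
select object_name(i.id) as TableName
from sysindexes i
where i.indid = 0
  and objectproperty(i.id, 'IsUserTable') = 1
order by TableName
```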
				 
				Using the techniques outlined here, I am able to maintain tables 
				in excess of 100MM rows and 300,000 pages with no indication of 
				problematic blocking, in an environment with query timeouts as 
				low as 5 seconds and typical query timeouts of 60 seconds. 
				See the excellent white paper by 
				Mike Ruthruff of Microsoft's Scalability Team,
				"Microsoft SQL Server 2000 Index Defragmentation Best Practices," 
				for additional information and official Microsoft guidance on 
				logical fragmentation.  
				Bill