Shrink defrag databases
In the end, only you can determine whether you want to shrink or not. With the above information, you hopefully have enough to go on when making that decision. You probably know by now what I think of regular shrinking. Not to mention autoshrink. :-)
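As an aside, if you want to check whether autoshrink is turned on for a database, and turn it off, something like the following does it. This is just a minimal sketch; 'your_db' is a placeholder for your own database name.

--Returns 1 if autoshrink is on, 0 if it is off
SELECT DATABASEPROPERTYEX('your_db', 'IsAutoShrink')
--Turn autoshrink off
ALTER DATABASE your_db SET AUTO_SHRINK OFF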
Below you find the T-SQL code I ran to prove that shrinking a database generates about the same amount of transaction log records as the amount of data that was moved. (The script uses the 'NUL' filename as the backup destination. This "writes" the backup data to the bit bucket, i.e., the data isn't saved anywhere. That is very practical for testing, but, at the risk of stating the obvious, do not use this backup destination for your production work!)
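If you want to keep an eye on the transaction log yourself while the script runs, DBCC SQLPERF(LOGSPACE) reports the log size and the percentage in use for every database; running it just before and just after the DBCC SHRINKFILE step below makes the effect easy to see. This is an optional aside, not part of the test itself.

--Optional: report log size and space used for all databases
DBCC SQLPERF(LOGSPACE)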
SCRIPT
--Script to show that shrink produces a lot of log records.
SET NOCOUNT ON
USE master
IF DB_ID('shrink_test') IS NOT NULL DROP DATABASE shrink_test
GO
CREATE DATABASE shrink_test
ON PRIMARY
(NAME = shrink_test_data, FILENAME = N'c:\shrink_test_data.mdf'
,SIZE = 5MB, MAXSIZE = 200MB, FILEGROWTH = 10%)
LOG ON
(NAME = shrink_test_log, FILENAME = N'c:\shrink_test_log.ldf'
,SIZE = 3MB, MAXSIZE = 200MB, FILEGROWTH = 10%)
GO
ALTER DATABASE shrink_test SET RECOVERY FULL
--Make sure the database isn't in auto-truncate mode for the log
BACKUP DATABASE shrink_test TO DISK = 'NUL'
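--(The full backup starts the log backup chain, so the log is no longer truncated at each checkpoint and BACKUP LOG works below)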
USE shrink_test
CREATE TABLE t
(
c1 INT IDENTITY CONSTRAINT PK_shrink_test PRIMARY KEY NONCLUSTERED
,c2 CHAR(3000) DEFAULT 'hello'
)
DECLARE @i INT
SET @i = 1
WHILE @i <= 40000
BEGIN
INSERT INTO t DEFAULT VALUES
IF @i%500 = 0 --Truncate the log every 500th row
BACKUP LOG shrink_test TO DISK = 'NUL'
SET @i = @i + 1
END
SELECT COUNT(c1) AS "Number of rows before delete, should be 40000"
FROM t
GO
--Delete some rows, in a loop so log doesn't grow!
DECLARE @i INT
SET @i = 1
WHILE @i <= 20000
BEGIN
DELETE FROM t WHERE c1 = @i
IF @i%500 = 0 --Truncate the log every 500th row
BACKUP LOG shrink_test TO DISK = 'NUL'
SET @i = @i + 1
END
SELECT COUNT(c1) AS "Number of rows after delete, shuld be 20000"
FROM t
--Empty the log
BACKUP LOG shrink_test TO DISK = 'NUL'
--Database file should be large (160MB on my machine), log file small (3MB)
SELECT name, size*8192/(1024*1024) AS sizeInMB FROM sysfiles
GO
--This shrink will now produce a lot of log records as 20000 rows will be moved!
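--CHECKPOINT writes dirty pages to disk before the shrink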
CHECKPOINT
DBCC SHRINKFILE (shrink_test_data, 40)
--Database file should now be small, but the log file large
SELECT name, size*8192/(1024*1024) AS sizeInMB FROM shrink_test..sysfiles
--My result: the data file shrank to 80MB and the log file grew from 3MB to 146MB!!!
USE master
GO
DROP DATABASE shrink_test
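A side note: if all you want is to release unused space at the end of a data file, DBCC SHRINKFILE also accepts the TRUNCATEONLY option, which does not move any pages and therefore doesn't generate the kind of log activity demonstrated above. A minimal example (the file name is a placeholder):

--Release only the free space at the end of the file; no pages are moved
DBCC SHRINKFILE (your_data_file, TRUNCATEONLY)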