SQL Server Database File Size and Free Space Revisited

I recently received a comment on one of the first articles I wrote, way back in 2012, asking for a column that shows the used space as a percentage. So without further ado, here is an updated version of the script originally blogged about here.

-- Sizes are stored as 8 KB pages, so dividing by 128 converts pages to MB.
SELECT sf.fileid AS [File ID]
    ,[File Size in MB] = CONVERT(DECIMAL(12,2), ROUND(sf.size / 128.000, 2))
    ,[Max Size in MB] = CONVERT(DECIMAL(12,2), ROUND(sf.maxsize / 128.000, 2))
    ,[Space Used in MB] = CONVERT(DECIMAL(12,2), ROUND(FILEPROPERTY(sf.name, 'SpaceUsed') / 128.000, 2))
    ,[Free Space in MB] = CONVERT(DECIMAL(12,2), ROUND((sf.size - FILEPROPERTY(sf.name, 'SpaceUsed')) / 128.000, 2))
    ,[Free Space in %] = CONVERT(DECIMAL(12,2), ROUND(100 * (sf.size - FILEPROPERTY(sf.name, 'SpaceUsed')) / 128.000, 2) / (sf.size / 128.000))
    ,[Used Space in %] = CONVERT(DECIMAL(12,2), ROUND(100 * FILEPROPERTY(sf.name, 'SpaceUsed') / 128.000, 2) / (sf.size / 128.000))
    ,[File Name] = LEFT(sf.name, 30)
    ,[File Location] = LEFT(sf.filename, 100)
FROM dbo.sysfiles sf
ORDER BY sf.fileid ASC;
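
If you prefer the newer catalog views, a roughly equivalent query (my own variant, not from the original post) can be written against sys.database_files. Note that FILEPROPERTY only works in the context of the current database, so run it in the database you want to inspect.

-- Rough equivalent using sys.database_files instead of the deprecated sysfiles.
SELECT df.file_id AS [File ID]
    ,[File Size in MB] = CONVERT(DECIMAL(12,2), df.size / 128.000)
    ,[Space Used in MB] = CONVERT(DECIMAL(12,2), FILEPROPERTY(df.name, 'SpaceUsed') / 128.000)
    ,[Free Space in MB] = CONVERT(DECIMAL(12,2), (df.size - FILEPROPERTY(df.name, 'SpaceUsed')) / 128.000)
    ,[Used Space in %] = CONVERT(DECIMAL(12,2), 100.0 * FILEPROPERTY(df.name, 'SpaceUsed') / df.size)
    ,df.name AS [File Name]
    ,df.physical_name AS [File Location]
FROM sys.database_files df
ORDER BY df.file_id;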

Enterprise Schema Big-Ish Data Scripts

Hello!

Last week I presented for the first time at a SQL user group (SQL Supper). There was a lot of content and plenty of demos, and a few people came up to me afterwards asking for the scripts. In case you were too shy to ask, or just want a simple database that uses partitioning and Clustered Columnstore Indexes, the attached scripts will provide you with exactly that.

The “00_final_create_dataload_and_populate” script will take some time to run; it creates the database that the later demos load data from. The “03_final_create_partition_cci_db” script creates the partitioned columnstore database, and “00_Reset_Partition_CCI_DB” clears out all the objects it creates. The first time through, uncomment the T-SQL in “03_final_create_partition_cci_db” and run it; from then on you can just run “00_Reset_Partition_CCI_DB” to get the database back to working order. The sketch below shows the general shape of what these scripts build.
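
If you just want the gist without downloading anything, here is a minimal sketch of the pattern: a date-partitioned table with a partition-aligned clustered columnstore index. All object names and boundary values below are hypothetical, not those in the demo scripts.

-- Hypothetical names and boundaries; the real demo scripts differ.
CREATE PARTITION FUNCTION pf_OrderDate (DATE)
AS RANGE RIGHT FOR VALUES ('2015-01-01', '2016-01-01', '2017-01-01');
GO

CREATE PARTITION SCHEME ps_OrderDate
AS PARTITION pf_OrderDate ALL TO ([PRIMARY]);
GO

CREATE TABLE dbo.FactOrders
(
    OrderID    BIGINT        NOT NULL,
    OrderDate  DATE          NOT NULL,
    CustomerID INT           NOT NULL,
    Amount     DECIMAL(18,2) NOT NULL
) ON ps_OrderDate (OrderDate);
GO

-- Clustered columnstore index, partition-aligned via the same scheme.
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactOrders
ON dbo.FactOrders
ON ps_OrderDate (OrderDate);
GO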

Hopefully that explains how it all hangs together. Any questions, let me know.

Forcing SQL Server To Order Operations

My last post was about returning DBCC CHECKDB results from a temp table, populated by running DBCC DBINFO against each database hosted on the instance. I shared a query that returns any database that has not had a CHECKDB run against it for over 7 days. The query is below:

 
SELECT dbi.Value
FROM #db dbi
WHERE dbi.Field LIKE '%lastknow%'
AND (CASE WHEN ISDATE(dbi.Value) = 1 THEN CAST(dbi.Value AS DATETIME2(3)) END) < DATEADD(DAY, -7, GETDATE());

That “CASE” statement stands out quite a bit, doesn’t it? Let’s explore why this query needs a CASE expression. Begin by creating a temp table that contains a sample of the data (a minimal sketch follows). Continue reading “Forcing SQL Server To Order Operations”
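
To give a flavour of why the CASE matters (hypothetical sample data, not the repro from the post): DBCC DBINFO output is name/value pairs, so the Value column holds dates and non-dates alike, and nothing stops SQL Server evaluating the CAST against a non-date row before the filter on Field. The CASE guarantees the conversion only happens on values that pass ISDATE().

CREATE TABLE #db (Field SYSNAME, Value NVARCHAR(128));

INSERT INTO #db (Field, Value)
VALUES (N'dbi_version', N'852') -- not a date
    ,(N'dbi_dbccLastKnownGood', N'2016-01-01 09:00:00.000');

-- Without the CASE, SQL Server is free to CAST '852' before the LIKE filter
-- "should" have excluded that row, raising a conversion error.
-- With the CASE, non-dates return NULL, so the CAST never sees them.
SELECT dbi.Value
FROM #db dbi
WHERE dbi.Field LIKE '%lastknow%'
AND (CASE WHEN ISDATE(dbi.Value) = 1 THEN CAST(dbi.Value AS DATETIME2(3)) END) < DATEADD(DAY, -7, GETDATE());

DROP TABLE #db;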

Making “Check Last CHECKDB Date” A LOT Easier

One of the most important duties of a DBA is to ensure that CHECKDB is run frequently, to verify that the database is both logically and physically consistent. So when inheriting a SQL Server instance, it’s usually a good idea to check when CHECKDB was last run. Frustratingly, it is actually quite difficult to get this information quickly, especially if you have a lot of databases to check. The obvious way is to run DBCC DBINFO against the specific database. This returns far more than just the last CHECKDB time, and it is not especially clear which row we want (FYI, the field is “dbi_dbccLastKnownGood”). A minimal sketch of that approach is below. Continue reading “Making “Check Last CHECKDB Date” A LOT Easier”
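
For reference, a minimal sketch of that DBCC DBINFO approach: the temp table shape here is my own, and DBCC DBINFO needs WITH TABLERESULTS to be captured this way.

-- Capture DBCC DBINFO's name/value output for the current database.
CREATE TABLE #dbinfo (ParentObject NVARCHAR(255), Object NVARCHAR(255), Field NVARCHAR(255), Value NVARCHAR(255));

INSERT INTO #dbinfo
EXEC ('DBCC DBINFO WITH TABLERESULTS');

SELECT DB_NAME() AS databaseName, Value AS lastKnownGoodCheckDb
FROM #dbinfo
WHERE Field = N'dbi_dbccLastKnownGood';

DROP TABLE #dbinfo;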

Copy and Delete Data Across Tables When There is no Space

Hello!

Recently I needed to apply data compression to a particularly large table. One of the main reasons for compressing it was that the database was extremely low on space, in both the data and the log files. To make matters worse, the data and log files were nowhere near big enough to accommodate compressing the entire table in one go. If the table had been partitioned I could have compressed one partition at a time and all my problems would have gone away. No such luck.

So I had to think outside the box to compress the data. The solution I came up with was to create a new, empty table, copy 50 rows at a time across (that was the most I could move at any one time; yes, I really was THAT low on space), and then delete the 50 rows just copied. But instead of COPY/DELETE, which would seem the logical order, I ran the DELETE first and then inserted from the output table. That way I could be certain that exactly the rows that were deleted made it into the new table, and it meant I only scanned the original table once. A sketch of the pattern is below.
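
Here is a minimal sketch of that DELETE-then-INSERT pattern using the OUTPUT clause (table names are hypothetical; the real script wraps this in the log checks described below).

-- dbo.BigTable is the source; dbo.BigTable_New is an empty copy of it,
-- created with the compression setting already applied.
DECLARE @batchSize INT = 50; -- the most rows there was room to move at once

WHILE 1 = 1
BEGIN
    -- One statement deletes a batch AND lands exactly those rows in the
    -- new table, so the source is only scanned once per batch.
    DELETE TOP (@batchSize)
    FROM dbo.BigTable
    OUTPUT DELETED.* INTO dbo.BigTable_New;

    IF @@ROWCOUNT = 0
        BREAK; -- nothing left to move
END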

I had to build further checks into the script: the database was in SIMPLE recovery, but the log file would still grow because of ETL processes and the like running against other parts of the db, so I needed to make sure I wasn’t saturating the log file with each delete (a simple version of that check follows). The logic here works for the SIMPLE, BULK_LOGGED and FULL recovery models. Continue reading “Copy and Delete Data Across Tables When There is no Space”
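
A simple version of that log check, assuming SQL Server 2012 or later for the sys.dm_db_log_space_usage DMV (the 70% threshold is arbitrary):

-- Pause between batches if the log is filling up faster than it clears.
DECLARE @logUsedPercent DECIMAL(5,2);

SELECT @logUsedPercent = used_log_space_in_percent
FROM sys.dm_db_log_space_usage; -- reports on the current database only

IF @logUsedPercent > 70.00
BEGIN
    PRINT 'Log over 70% used; waiting before the next batch.';
    WAITFOR DELAY '00:00:30';
END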

Christmas Musings

I’ve recently become a Dad again to a lovely little boy, [1] and of course the one thing everyone knows about being a parent to a newborn is that you have plenty of sleepless nights to look forward to. Whilst “sleepless nights” is a bit of an exaggeration, I have found myself sat up wide awake at 1AM with young Ben sound asleep in my arms. Occasionally I might listen to music (headphones, of course!) or have the TV on (sound down, brightness down, subtitles on), but on more than one occasion my mind has wandered to SQL Server. I don’t get to use the time to blog, but it has given me time to mull over a few subjects, and I’m blogging about some of them today. Continue reading “Christmas Musings”

Log Shipping: It’s Better Than Bad It’s Good!

This is something of an epic post, so grab a tea and get ready… I’m probably showing my age by quoting an old Ren and Stimpy cartoon here, but to be fair it probably sums up log shipping pretty well. This post focuses on using a read-only log shipped database for reporting purposes, and the limitations that come with one. I also share some monitoring scripts and a few ideas on how to improve restore performance without having to upgrade the hardware or software.

Despite the development of AlwaysOn in recent releases of SQL Server, log shipping is still a great way to set up a copy of your databases for reporting. One of the main reasons it is great is that, unlike AlwaysOn, it is available in the less expensive editions like Standard and Web from SQL Server 2008 onwards. Sure, in 2016 AlwaysOn will be available in Standard, but in a greatly reduced form, and you cannot read from the secondary. So it will be good for DR, but not for reporting (as an aside, it might still be easier to set up log shipping for DR than AlwaysOn Basic, because the latter requires you to set up a failover cluster. Read through “how to set up Standard Edition Availability Groups” here.) However, you do need to be careful when setting up log shipping across different editions of SQL Server: whilst you can log ship from Enterprise to Standard/Web, if the database uses any Enterprise features then you’ll need to log ship to an Enterprise edition of SQL Server. And because you’ll be using the secondary database for reporting, you’ll need to get it licensed. Continue reading “Log Shipping: It’s Better Than Bad It’s Good!”

Useful Log Shipping Script

Like all DBAs, I have a collection of random scripts put together over time that reside in a folder and are frequently forgotten about. Today I found a file called “useful log shipping script”, and being the generous guy I am, I’m putting it up here.

SELECT ls.primary_server AS [primaryServer]
    ,ls.primary_database AS [primaryDB]
    ,lsd.secondary_database AS [secondaryDB]
    ,lsd.restore_delay AS [restoreDelay]
    -- Elapsed time since the last restore, formatted as days_hh:mm:ss.
    ,CONVERT(VARCHAR(10), DATEDIFF(SECOND, lms.last_restored_date, GETDATE()) / (60 * 60 * 24))
        + '_'
        + CONVERT(VARCHAR(8), DATEADD(SECOND, DATEDIFF(SECOND, lms.last_restored_date, GETDATE()), CONVERT(DATETIME2, '0001-01-01')), 108) AS [timeSinceLastRestore dd_hh:mm:ss]
    ,CONVERT(VARCHAR(20), lms.last_copied_date, 120) AS [lastCopiedDate]
    ,CONVERT(VARCHAR(20), lms.last_restored_date, 120) AS [lastRestoredDate]
    ,lms.last_copied_file AS [lastCopiedFile]
    ,lms.last_restored_file AS [lastRestoredFile]
    ,lsd.disconnect_users AS [disconnectUsers]
    ,ls.backup_source_directory AS [backupSourceDirectory]
    ,ls.backup_destination_directory AS [backupDestinationDirectory]
    ,ls.monitor_server AS [monitorServer]
FROM msdb.dbo.log_shipping_secondary ls
INNER JOIN msdb.dbo.log_shipping_secondary_databases lsd ON lsd.secondary_id = ls.secondary_id
INNER JOIN msdb.dbo.log_shipping_monitor_secondary lms ON lms.secondary_id = lsd.secondary_id;
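
If you would rather have an alert than a report, a variant I sometimes use is to return only the secondaries whose last restore is getting stale (the one-hour threshold here is arbitrary; pick one that suits your restore_delay).

-- Flag any log shipped secondary that has not restored in over an hour.
SELECT lsd.secondary_database AS [secondaryDB]
    ,lms.last_restored_date AS [lastRestoredDate]
FROM msdb.dbo.log_shipping_secondary ls
INNER JOIN msdb.dbo.log_shipping_secondary_databases lsd ON lsd.secondary_id = ls.secondary_id
INNER JOIN msdb.dbo.log_shipping_monitor_secondary lms ON lms.secondary_id = lsd.secondary_id
WHERE lms.last_restored_date < DATEADD(HOUR, -1, GETDATE());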

Happy Scripting!

Moving TempDB Script

Although it is not an activity I do regularly, I recently needed to write a script to move TempDB, whilst testing different RAID arrays on a SAN. TempDB had quite a few files and I got bored writing out the commands over and over, so I came up with a script that prints out the ALTER DATABASE commands to move TempDB’s files, and will optionally execute them too. This also works for user databases, the caveat being that whilst TempDB is re-created in the new location on an instance restart, user database files have to be physically moved whilst the database is offline. So that’s some manual grunt work that cannot be scripted here. Yeah I know, sucks to be a DBA sometimes…

USE tempdb;
GO

SET NOCOUNT ON;

DECLARE @SQL NVARCHAR(MAX) = '';
DECLARE @EXEC NVARCHAR(MAX) = '';
DECLARE @RUN BIT = 0; -- 0 = just print the commands, 1 = execute them as well
DECLARE @newDataLocation NVARCHAR(256) = 'L:\temptempdb\';
DECLARE @newLogLocation NVARCHAR(256) = 'M:\tempdb_log\'; -- trailing backslash required
DECLARE @fileName NVARCHAR(64);
DECLARE @fileLocation NVARCHAR(256);

-- Walk every file in the current database.
DECLARE cur CURSOR LOCAL FAST_FORWARD
FOR
SELECT f.name
    ,RTRIM(f.filename) -- sysfiles.filename is nchar(260), so trim the padding
FROM sys.sysfiles f;

OPEN cur;

FETCH NEXT FROM cur
INTO @fileName, @fileLocation;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build the MODIFY FILE command: log files go to the log drive, everything
    -- else to the data drive, keeping each file's original extension.
    SELECT @SQL = 'ALTER DATABASE ' + DB_NAME() + ' MODIFY FILE (
    NAME = ' + @fileName + CHAR(10) + '    ,FILENAME = ''' + CASE
            WHEN RIGHT(@fileLocation, 3) = 'ldf'
                THEN @newLogLocation
            ELSE @newDataLocation
            END + @fileName + RIGHT(@fileLocation, 4) + ''')' + CHAR(10);

    PRINT @SQL;

    SELECT @EXEC = @EXEC + @SQL;

    FETCH NEXT FROM cur
    INTO @fileName, @fileLocation;
END

CLOSE cur;
DEALLOCATE cur;

IF @RUN = 1
BEGIN
    EXEC sp_executesql @EXEC; -- EXEC @EXEC alone would try to run it as a proc name
END
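
Remember that for TempDB the new locations only take effect after the instance restarts. To sanity-check what you have configured before restarting (a standard catalog query, not part of the script above):

-- Configured file locations; for tempdb these apply on the next restart.
SELECT name, physical_name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');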

Happy Scripting!