I’ve been working hard on my backup/restore skills lately, getting to grips with the fundamentals, making sure I understand backup/restore best practices, and running examples so that I’m ready in the event of a disaster. It’s a cliché to say that you don’t want to be learning this in a real-life disaster when everyone is huddled around your desk, but it’s true! A tail log backup, a backup of the log taken even when the database is offline or damaged, is something you may need to run in some scenarios. A post by Paul Randal goes into depth about this. Knowing how to run a tail-of-the-log backup and when it’s possible, and even how to attach the log to a different server, is crucial in disaster recovery scenarios, so it’s well worth reading and running through the demos.
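As a rough illustration (the database name and backup path are my placeholders, not from the post), a tail log backup might look like this:

```sql
-- Minimal sketch of a tail log backup. NO_TRUNCATE allows the log to be
-- backed up even if the database is damaged or its data files are offline;
-- NORECOVERY leaves the database in a restoring state so further log
-- backups can be applied.
BACKUP LOG MyDatabase
TO DISK = N'D:\Backups\MyDatabase_tail.trn'
WITH NO_TRUNCATE, NORECOVERY;
```

This requires a live SQL Server instance to run; see the scenarios in Paul Randal’s post for when each option applies.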
Not understanding backup/restore processes can have even greater repercussions if you have log shipping running: although full backups do not break the log backup chain, a log backup that is not applied to the standby server means you can lose a database that the business relies on to run reports throughout the day. If you do need to take a log backup and do not want to break the log chain, use the COPY_ONLY option when backing up the log.
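For example (names and paths are placeholders), a copy-only log backup leaves the log backup chain used by log shipping untouched:

```sql
-- A copy-only log backup is independent of the normal log backup
-- sequence, so the restore jobs on the log shipping standby are
-- unaffected by it.
BACKUP LOG MyDatabase
TO DISK = N'D:\Backups\MyDatabase_copyonly.trn'
WITH COPY_ONLY;
```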
Some time ago, I wrote a detailed script that returns the database backup history using the system tables, but sometimes you just want to cut to the chase. So I have stripped the script back (more than a little) to return the latest full backup of every database on the instance.
-- get the last full backup of each database
SELECT db.name,
       MAX(bck.backup_finish_date) AS FinishTime
FROM sys.databases db
LEFT OUTER JOIN msdb.dbo.backupset bck
    ON bck.database_name = db.name
    AND bck.type = 'D' -- 'D' = full database backup
GROUP BY db.name;
Using a left outer join from sys.databases to msdb.dbo.backupset also returns databases that have never been backed up (their FinishTime will be NULL), which in most cases is far more important than the last backup time of the databases that have!
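To go straight to the problem databases, the same join can be filtered (a small variation of my own, not from the original script) to list only databases with no recorded backup at all:

```sql
-- Databases with no backup recorded in msdb: the LEFT JOIN produces
-- NULL backupset columns for databases with no matching rows.
SELECT db.name
FROM sys.databases db
LEFT OUTER JOIN msdb.dbo.backupset bck
    ON bck.database_name = db.name
WHERE bck.backup_set_id IS NULL;
```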
A couple of weeks back I posted a script that checks the progress of any backups running on an instance. A few days later I needed a script to check the progress of a restore. Fortunately, the backup script can be re-used with one very simple change. This simple script checks whether any restores are running, how long they have been running, and the estimated completion time.
SELECT a.name,
       b.total_elapsed_time / 60000 AS [Running Time],
       b.estimated_completion_time / 60000 AS [Remaining],
       b.percent_complete AS [%],
       (SELECT text FROM sys.dm_exec_sql_text(b.sql_handle)) AS command
FROM master..sysdatabases a
INNER JOIN sys.dm_exec_requests b
    ON a.dbid = b.database_id
WHERE b.command LIKE '%RESTORE%'
ORDER BY b.percent_complete DESC,
         b.total_elapsed_time / 60000 DESC;
I received an email from the TechNet Evaluation Center this morning stating that the CTP for SQL Server 2016 (v2.2) was replaced yesterday. Apparently the previous release caused downtime with AlwaysOn Availability Groups during rolling upgrades. Though I have SQL Server 2016 installed on an Azure box, I’ve not played around with AlwaysOn in 2016. An upgrade is recommended for any users who wish to upgrade to SQL Server 2016 CTP 2.2, including AlwaysOn users. If you downloaded the English bits from 28th July onwards, you already have the latest build. The previous release of CTP 2.2 was dated 23rd July.
I’ve really developed a jones for using PowerShell to make painful, repetitive, time-consuming tasks much quicker. PowerShell’s great for automation, but it’s also great for running tasks on an ad hoc basis.
Edit: this has proven to be a popular post, and the software has been downloaded a few times. I will post an update for SQL Server 2016 soon, though I think it is just the references in the solution that need updating.
Today I am pleased to announce the release of MSBuildSsis2012 on CodePlex. But before I get into what it does, first a bit of background:
Back in May, and a few more times since, I’ve posted about an error I occasionally get in one of the custom tasks that run in our builds. This custom task is a special case: as the SSIS 2012 project extension is dtproj, it cannot be compiled using MSBuild. The typical workaround is to invoke DevEnv from MSBuild, and to automate deployment the solution that works for most people is to run the SSIS Deployment Wizard in silent mode. However, we don’t use the SSIS Deployment Wizard in our automated builds because it does not work in silent mode when you have assembly references in the SSIS packages: it destroys those references and the dtsx packages fail whenever they run.
If you have backups running through jobs, you may want to know the progress of the backup and where the backup media is. The script below returns this information for you.
SELECT a.name,
       b.total_elapsed_time / 60000 AS [Running Time],
       b.estimated_completion_time / 60000 AS [Remaining],
       b.percent_complete AS [%],
       (SELECT text FROM sys.dm_exec_sql_text(b.sql_handle)) AS command
FROM master..sysdatabases a
INNER JOIN sys.dm_exec_requests b
    ON a.dbid = b.database_id
WHERE b.command LIKE '%BACKUP%'
ORDER BY b.percent_complete DESC,
         b.total_elapsed_time / 60000 DESC;
As an environment grows bigger and you have many jobs across many server instances, it’s important to automate as much as you can, and starting with the small, tedious but time-consuming jobs is never a bad idea! A good example is having to enable/disable SQL Agent jobs during maintenance; or when a snapshot for replication is being pushed out, you may not want certain jobs to run, to prevent failures. So here is a stored procedure to alter the status of a SQL Agent job. You only need to pass in the name of the Agent job. If the job is enabled, the procedure disables it and, as an added precaution, stops it if it is running. If the job is disabled, it enables it and starts it running. If you don’t want the job to start, you can always add a bit parameter to the sproc.
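A minimal sketch of such a procedure (the procedure name and exact behaviour are my illustration, not the original script) using the msdb job procedures:

```sql
-- Illustrative sketch: toggle a SQL Agent job's enabled status by name.
CREATE PROCEDURE dbo.usp_ToggleAgentJob
    @JobName SYSNAME
AS
BEGIN
    DECLARE @Enabled INT;

    SELECT @Enabled = enabled
    FROM msdb.dbo.sysjobs
    WHERE name = @JobName;

    IF @Enabled = 1
    BEGIN
        -- Stop the job first if it is currently executing.
        IF EXISTS (SELECT 1
                   FROM msdb.dbo.sysjobactivity ja
                   INNER JOIN msdb.dbo.sysjobs j
                       ON j.job_id = ja.job_id
                   WHERE j.name = @JobName
                     AND ja.start_execution_date IS NOT NULL
                     AND ja.stop_execution_date IS NULL)
            EXEC msdb.dbo.sp_stop_job @job_name = @JobName;

        EXEC msdb.dbo.sp_update_job @job_name = @JobName, @enabled = 0;
    END
    ELSE
    BEGIN
        -- Enable the job and kick it off.
        EXEC msdb.dbo.sp_update_job @job_name = @JobName, @enabled = 1;
        EXEC msdb.dbo.sp_start_job @job_name = @JobName;
    END
END;
```

Note that sysjobactivity keeps a row per Agent session, so a production version would filter to the latest session before deciding whether the job is running.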
Today I needed to initialize a pull subscription from a server that had been in the pool of subscribers for a published database, and had been out for some time. In that time the distribution database had moved, so it was a case of re-creating the subscriptions all over. Once the pull subscription and distribution jobs were created, I ran the snapshot agent, which uses a file share. When the snapshot agent had completed, I started up the agent job on the subscriber. I had done this many times before, and was surprised to see an error not too dissimilar from the one below in the subscriber logs in Replication Monitor:
The process could not read file ‘\\Server1\SQLShareFiles\unc\publication\2050530095003\slmnc.pre’ due to OS error 5. (Source: MSSQL_REPL, Error number: MSSQL_REPL21022)
Get help: http://help/MSSQL_REPL21022
Access is denied.
(Source: MSSQL_REPL, Error number: MSSQL_REPL5)
Get help: http://help/MSSQL_REPL5
Oh. Access is denied? Quickly Googling the error, I found a useful post on DBA Stack Exchange. Although we do use SQL Agent authentication, what was missing were the permissions on the file share. Browsing to the file share and giving the Agent account full control of it resolved the issue for me. And though using Agent accounts does not follow best practice for the Replication Agent Security Model, it got the snapshot pushing out.
In my previous post I shared some SQL scripts which give you the number of undistributed commands by publication, by publication and article, and by distribution database. So you can measure undistributed commands at a very fine granularity (individual articles) or a very coarse granularity (the entire distribution database). How granular you go depends entirely on where your pain is: you could of course filter down to only a few articles, or a particular publication.