SSDT and Pre-Model Scripts


Earlier this week I posted about the need to create the database as a pre-model task: that is, creating the database before we run sqlpackage/DacFx against a server, so that the deployment script itself does not create the database.

Going a step further: pre-deploy scripts in SSDT are executed before the main diff script is executed, but not before the diff script is generated. This is an important fact to understand. If you want scripts to be executed before the diff is generated, then you need to execute pre-model scripts.

How you go about this is up to you, but there are a few rules that must be followed:

Idempotent: a big word, borrowed from maths, but don’t let that put you off. What it means in this context is that a script can be re-run and the same result happens. So in the context of altering a primary key, the script would look something like this:

-- dbo.MyTable and the key column Id are placeholders for your own table
IF DB_ID('MyDatabase') IS NOT NULL
AND OBJECT_ID('dbo.MyTable') IS NOT NULL
AND EXISTS (SELECT 1 FROM sys.key_constraints WHERE name = 'oldname' AND type = 'PK')
BEGIN
    ALTER TABLE dbo.MyTable DROP CONSTRAINT [oldname];
    ALTER TABLE dbo.MyTable ADD CONSTRAINT [newname] PRIMARY KEY (Id);
END

and then you add primary key “newname” to the solution. That way the drop-and-add part of the script will only ever run once, or again against an older version of the database that still has the old primary key. The “newname” of the primary key guarantees that this will be the case.

Stored in the Solution: within database solutions you can add scripts that are part of the solution but are not executed:

If you copy the file to the output, what you’ll be able to do is add the files required to your nuspec file and get them deployed as part of the NuGet package. Neat!

So if the scripts are in the solution and re-runnable, then you can be confident that they will run against older versions of the database, and you keep a level of audit and history of the database.

Now, how to run the scripts. As with the create database script, SMO is probably the easiest way to do this. You could use SQLCMD or Invoke-Sqlcmd, but it’s entirely up to you.

Now if you want to use SMO, and you are deploying using Octopus Deploy, then you’re in (some) luck: there are scripts available in the Octopus Library to do something like this, but you may well have to alter them to execute a folder full of SQL scripts. Fortunately, I have already completed this, and I will add my script to the GitHub solution so that it is available in the Library. When it is available, assuming it meets the quality control, I’ll blog about how to use it.
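To give a flavour of the approach, here is a minimal sketch of executing a folder of pre-model scripts via SMO. It assumes the SMO assemblies are installed (the Feature Pack is enough); the server name and folder path are placeholders:

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null

# Placeholders: point these at your own instance and script folder.
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "localhost"
$scriptFolder = "C:\Deploy\PreModelScripts"

# Running each file through the SMO connection means GO batch separators
# are handled, which plain ADO.NET does not do.
Get-ChildItem -Path $scriptFolder -Filter *.sql | Sort-Object Name | ForEach-Object {
    Write-Host "Executing $($_.Name)"
    $server.ConnectionContext.ExecuteNonQuery((Get-Content -Path $_.FullName -Raw))
}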

Dude, Where’s My Gold Code?

Let me tell you about a conversation I had yesterday with some developers in a team. They needed help with their database CI/CD process, and so I wanted to see the database solutions they were using.

Dev1: We don’t have solutions at the moment.

Me: OK, how come?

Dev1: Because we are making so many changes right now, there seemed to be little point in creating solutions, so we’re just using scripts.

[At this point, I bite my tongue and choose not to reply with ‘If you’re continuously making changes, wouldn’t it make sense to continuously integrate and continuously deploy the changes? I mean, you’d hardly be taxing the definition of the word “continuously”… now would you…’]

Me: So how are you making changes?

Dev1: Oh well, we’re just using scripts to deploy the changes now.

Me: And where are the scripts?

Dev1: They’re in the wiki.

Dev2: No they’re not, because we don’t put anything in the wiki that is changing regularly.

Dev1: Oh. So where are they?

[Dev2 stares blankly].

Dev1: So why do you want solutions?

Me: To help you build a release pipeline.

Dev2: But we’re making lots of changes regularly.

[Again, me, keeping cool in spite of the blindingly obvious contradiction presented to me.]

Me: But let’s get something built to help you make those changes regularly.

Dev1: OK, I’ll get something together by the end of the week. In the meantime we’re too busy making changes to get this together…

Sound familiar? I’m sure many of you have been in a situation like this before. It’s not the first time I’ve had a conversation like this either.

Now I can go on about how the business doesn’t buy into maintenance, or Continuous Integration, or Continuous Delivery. Or I could talk about how CI/CD needs to be pushed by the devs, and that the associated tasks require items in the sprint to make sure that they are completed and audited. And I can talk about how the “build once, deploy many” process reduces the number of bugs and speeds up the deployment process. And yes, wearing my DBA hat I have had to write wacky scripts because dev teams haven’t been deploying to environments in a controlled manner, but I’d really rather be focusing on infrastructure. All these points are well documented.

But these points are consequences of implementing (or not implementing) Continuous Delivery and Continuous Deployment. The most fundamental point of adopting CI/CD is getting your code into some sort of source control. It all starts with getting the code into source control, writing pre- and post-deployment scripts that are re-runnable, developers checking in changes, building the code, and taking the resulting artefact (in SSDT terms, a dacpac) and deploying that dacpac to your database. And then taking the same dacpac and deploying it again and again, all the way up to production. This process can be achieved in an hour. Actually, in 55 minutes! And AdventureWorks can be tough to automate! OK, it may take longer than that, but getting a repository of your code that can be audited, rolled back, tagged, branched, deployed, merged and so on is the first step towards achieving Database DevOps.

Where Will SSDT Create My Database?

SSDT is great for many things, but one thing it’s not great at is giving the user control over data and log file creation. It uses the predefined variables DefaultDataPath and DefaultLogPath; these are system defined and cannot be overridden, so it uses the default locations that are set on the instance we are deploying to. Humph. And I’d wager that if these are null then it will just use the locations of the system databases. This is not particularly useful for anyone.

You may think, “Well, OK, I’ll just use a pre-deployment script”. Nice try. I thought so too. But as it turns out, pre-deployment scripts aren’t pre-deployment scripts per se. If you add a pre-deployment script to create a database, it’ll be run AFTER the initial database creation (go on, try it yourself if you don’t believe me, but I have saved you the trouble already!)

Clearly, we have to create a shell database outside of SSDT so that we can control where the files, and the primary filegroup, are created. If you’re using something like Octopus Deploy, TeamCity, or VSTS, then it’s simple enough to create a step that will create this shell database for you. Then everything else (logins, users, tables etc.) can be created using SSDT. And if you have more files/filegroups to be created, then you can use SSDT to set these up; what I’m talking about is the initial creation of the database.

There’s no shortage of options as to how to do this: you could run a SQL file with the CREATE statements, and this would work, providing you have the correct “if not exists, create database” logic, but it may not be particularly useful for re-use.

If you want to go for something that can be re-used, then I think SMO is your friend. It can be downloaded via the Feature Pack, so practically any box can have it deployed (by which I mean you don’t need a full SQL Server install to have access to SMO), it works well with PowerShell, and there’s no shortage of examples online of how to create a script that does what you need. This post from Allen White gives a good overview of SMO in PowerShell. And if you’re using something like Octopus, then this script also gives a good start on creating a database using SMO. Where I think that step template in the Octopus Library falls short is that it only creates the database, and does not set the files and filegroups up. But if you combine Allen’s scripts with the idea of the step template, then you can pretty much see where I am coming from. It’s also worth setting the recovery model using SMO rather than worrying about it in SSDT.
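To make that concrete, here is a minimal sketch of what such a combined script might look like, assuming the SMO assemblies are installed; the server name, database name, and file paths are illustrative placeholders:

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null

$server = New-Object Microsoft.SqlServer.Management.Smo.Server "localhost"
$db = New-Object Microsoft.SqlServer.Management.Smo.Database($server, "MyDatabase")

# Primary filegroup with an explicitly located data file.
$fg = New-Object Microsoft.SqlServer.Management.Smo.FileGroup($db, "PRIMARY")
$db.FileGroups.Add($fg)
$dataFile = New-Object Microsoft.SqlServer.Management.Smo.DataFile($fg, "MyDatabase_Data")
$dataFile.FileName = "D:\SqlData\MyDatabase_Data.mdf"
$fg.Files.Add($dataFile)

# Log file on its own drive, not wherever the instance default happens to point.
$logFile = New-Object Microsoft.SqlServer.Management.Smo.LogFile($db, "MyDatabase_Log")
$logFile.FileName = "E:\SqlLogs\MyDatabase_Log.ldf"
$db.LogFiles.Add($logFile)

# Set the recovery model here rather than in SSDT.
$db.RecoveryModel = [Microsoft.SqlServer.Management.Smo.RecoveryModel]::Simple

$db.Create()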

I’m not wild about the use of scripts outside of the database solution to deploy the database; it implies “magic”, or behind-the-scenes changes that some people won’t know exist. And usually I’d advocate the use of SSDT pre/post-deploy scripts to manage everything within the database. But databases are hard to deploy: they’re not a bunch of dlls that can be rolled back. A minor error early in the setup of a deployment pipeline can cause massive problems further down the process. And file creation is the first thing you do. What if the log file is created on the C drive? How hard can it be to move log files? Very hard, is how. And creating a secondary log file to compensate is awful. And then try removing that first log file? All these issues could be avoided with a bit of forethought.

Remember, it’s no good assuming that the instance you are deploying to has everything set up the way you like it. You have control over this stuff when you are using CI/CD properly, and you have to take responsibility for database deployments.

PowerShell Snippet: Trimming The String!


When building up URLs from different parameters in something like TeamCity or Octopus, it’s easy enough to end up with a double “//” in a URL if the parameters are not consistent. So little helper functions are always useful to have imported to manage such things. Below is an example of such a thing!

function Update-Parameter {
    param (
        [string] $param,
        [string] $character,
        [bool] $trimstart = $false,
        [bool] $trimend = $false
    )
    # Trim the character from the start of the string if requested and present.
    if ($trimstart -and $param.StartsWith($character)) {
        $param = $param.TrimStart($character)
    }
    # Trim the character from the end of the string if requested and present.
    if ($trimend -and $param.EndsWith($character)) {
        $param = $param.TrimEnd($character)
    }
    return $param
}

$url = ""
$lan = "/en-us/"
$ext = "/professional-program/data-science/"
$trim = "/"

$output = "Before Trim {0}/{1}/{2}" -f $url, $lan, $ext

$url = Update-parameter $url $trim $false $true
$lan = Update-parameter $lan $trim $true $true
$ext = Update-Parameter $ext $trim $false $true

$output = "After Trim {0}/{1}/{2}" -f $url, $lan, $ext

SqlServer PowerShell Modules NuGet Package Now Available


Back in the July update of SSMS 2016, a bunch of new SQL PowerShell functions were added, plus two neat additions to Invoke-Sqlcmd: -OutputAs, which allows you to output the result set as a data object (e.g. data row, data table etc.), and -ConnectionString, which allows you to pass in a connection string instead of using the pre-defined parameters. All very useful stuff, go and have a read.
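As a quick illustration of the two new parameters (a sketch; the connection string and query are just examples):

$conn = "Data Source=.;Integrated Security=True"
# -OutputAs DataTables returns System.Data.DataTable objects rather than individual DataRows.
$table = Invoke-Sqlcmd -ConnectionString $conn -Query "SELECT name FROM sys.databases;" -OutputAs DataTables
$table.Rows.Count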

However, this update has two issues. Firstly, it doesn’t update the classic sqlps module, but rather creates a new module: sqlserver. This new module will be regularly updated and delivered with SSMS updates, while sqlps will only be affected by changes to the DB Engine (Cumulative Updates, Service Packs etc.). So there’s quite a bit of overlap here.

Secondly, I don’t like how this new module is being delivered via a monthly release of SSMS. That’s a 900MB download and a long install to gain access to a module that is only 2MB in size. To make matters worse, the SQL Server 2016 Feature Pack, which includes a SQL PowerShell msi, IS NOT being updated! Waah!

So this may just seem like a whinge, but I have checked, and it is possible to install this module on its own and use it without having anything else other than a couple of prerequisites installed. So I’m going to be pro-active: I have taken it upon myself to upload the module to NuGet for people to consume. As far as prerequisites go, you only need SMO and SQLCLR installed, both available through the Feature Pack, though they do have to be the SQL Server 2016 versions of these objects.

As mentioned in one of the links above, you have to Remove-Module sqlps first before you import the new module.

I’ll update the NuGet package every time there’s a new release of SSMS.

Oh, and here’s a quick example of how to import and use the module:

# Unload the old sqlps module first (if loaded), then import the new module from the package path.
Remove-Module -Name sqlps -ErrorAction SilentlyContinue
Import-Module -Name "C:\nuget\PowerShell.SqlServer.Modules.1.0.1\sqlserver" -Verbose
$conn = "Data Source=.;Integrated Security=True"
Invoke-Sqlcmd -ConnectionString $conn -Query "SELECT GETDATE() AS TimeOfQuery;"

Team City Meta Runner: Get Build Number

One of the neat things about TeamCity is that it gives you a great deal of control over your process. TeamCity is great to use as the control flow for your pipeline, especially if you are using Octopus to deploy. But one of the pain points is keeping the build/deploy version numbers in Octopus in parity with those in TeamCity. This matters because it saves users from having to jump between the UIs to keep track of what is deployed where, especially if you are automating the whole process.

There are several ways to keep the build numbers aligned, the most obvious being snapshot dependencies. However, snapshot dependencies can open up a world of pain: if builds are altered, a snapshot may become invalid when that’s not intended, and a build may unintentionally trigger other builds.

Fortunately, there is a method to share build numbers amongst builds without setting up snapshot dependencies: by using a meta runner, you can use the TeamCity REST API to grab the build number of a given build.

“What is a meta runner?” I hear some of you say… to paraphrase the TeamCity wiki: a meta-runner lets you extract build steps, requirements, and parameters from a build configuration and turn them into a reusable build runner.
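For a flavour of the REST call such a meta runner would wrap, here is a minimal sketch; the server URL and build configuration id are placeholders, and I’m assuming httpAuth with a valid user:

$teamCityUrl = "http://teamcity.local"
$buildTypeId = "MyProject_Build"
$cred = Get-Credential

# Ask the REST API for the build number of the latest successful build of the configuration.
$uri = "$teamCityUrl/httpAuth/app/rest/builds/buildType:$buildTypeId,status:SUCCESS/number"
$buildNumber = Invoke-RestMethod -Uri $uri -Credential $cred
Write-Host "Latest successful build number: $buildNumber"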

PowerShell Snippet: Verify Minimum Version of .NET

Occasionally it’s useful to verify the minimum version of .NET installed on a box. This PowerShell does exactly that.

$Netver = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release).Release
write-host ".NET Version : $Netver"

# 394254 is the release value for .NET Framework 4.6.1 on Windows 10 (394271 on other operating systems).
If ($Netver -lt 394254)
{
    throw "At least .NET Framework 4.6.1 needs to be installed on this tentacle to use Microsoft.Data.Tools.Msbuild."
}

PowerShell Snippet #7: Retrieve SQL Server Licence Key

Copied from somewhere else on the internet, this PowerShell script will return the product key used for a SQL instance install. Super useful when switching temporary VMs that I spin up and play around with over to SQL Developer, once their instances have passed the Enterprise evaluation use-by date. Putting this here for my own benefit. I claim no kudos!


function GetSqlServerProductKey {
    ## function to retrieve the license key of a SQL 2014 Server.
    param ($targets = ".")
    $hklm = 2147483650
    $regPath = "SOFTWARE\Microsoft\Microsoft SQL Server\120\Tools\Setup"
    $regValue1 = "DigitalProductId"
    $regValue2 = "PatchLevel"
    $regValue3 = "Edition"
    Foreach ($target in $targets) {
        $productKey = $null
        $win32os = $null
        $wmi = [WMIClass]"\\$target\root\default:stdRegProv"
        $data = $wmi.GetBinaryValue($hklm,$regPath,$regValue1)
        [string]$SQLver = $wmi.GetstringValue($hklm,$regPath,$regValue2).svalue
        [string]$SQLedition = $wmi.GetstringValue($hklm,$regPath,$regValue3).svalue
        $binArray = ($data.uValue)[0..16]
        $charsArray = "B","C","D","F","G","H","J","K","M","P","Q","R","T","V","W","X","Y","2","3","4","6","7","8","9"
        ## decrypt base24 encoded binary data
        For ($i = 24; $i -ge 0; $i--) {
            $k = 0
            For ($j = 14; $j -ge 0; $j--) {
                $k = $k * 256 -bxor $binArray[$j]
                $binArray[$j] = [math]::truncate($k / 24)
                $k = $k % 24
            }
            $productKey = $charsArray[$k] + $productKey
            If (($i % 5 -eq 0) -and ($i -ne 0)) {
                $productKey = "-" + $productKey
            }
        }
        ## return the OS and SQL details alongside the decoded key
        $win32os = Get-WmiObject Win32_OperatingSystem -computer $target
        $obj = New-Object Object
        $obj | Add-Member Noteproperty Computer -value $target
        $obj | Add-Member Noteproperty OSCaption -value $win32os.Caption
        $obj | Add-Member Noteproperty OSArch -value $win32os.OSArchitecture
        $obj | Add-Member Noteproperty SQLver -value $SQLver
        $obj | Add-Member Noteproperty SQLedition -value $SQLedition
        $obj | Add-Member Noteproperty ProductKey -value $productkey
        $obj
    }
}

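Usage is then a one-liner, passing one or more computer names (the server names here are hypothetical):

GetSqlServerProductKey "server1","server2"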

PowerShell Snippet #6

Recently I needed to run Invoke-Sqlcmd with a connection built up from a variety of parameters. Below is the script I used to pass whatever parameters were supplied through to Invoke-Sqlcmd. The input file contained the same text as the $query parameter.


$ServerInstance = ".\instance"
$database =  "Northgale"
$uname = "user"
$pword = "password123"
$query = "'SELECT GETDATE() AS TimeOfQuery;'"
$inputFile = "C:\Users\Richard\Documents\inputfile.sql"

$connectionString = "-ServerInstance $ServerInstance -UserName $uname -Password $pword -Database $database -verbose"
$connectionString  = $connectionString -split '(?=prg=PowerShell-)'
$InvokeSqlcmd  = "Invoke-Sqlcmd -inputfile $inputFile $connectionstring"
write-host $InvokeSqlcmd -verbose
Powershell.exe -command "$invokeSqlCmd"

VSTS Hosted Build Agent Specs

I was interested to know just what the hardware specifications of the hosted build agent are, so I added some PowerShell to a build step to read out the info below:

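Something along these lines does the job (a sketch of the kind of step I mean; the exact formatting will differ):

# Processor details of the agent via WMI.
Get-WmiObject -Class Win32_Processor |
    Select-Object SystemName, Name, DeviceID, NumberOfCores, NumberOfLogicalProcessors, AddressWidth |
    Format-Table -AutoSize

# Total physical memory in MB.
$memoryMb = (Get-WmiObject -Class Win32_ComputerSystem).TotalPhysicalMemory / 1MB
Write-Host "Total memory: $memoryMb"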

2016-06-29T09:23:31.3935358Z systemname      Name                                      DeviceID NumberOfCores NumberOfLogicalProcessors Addresswidth
2016-06-29T09:23:31.3935358Z ----------      ----                                      -------- ------------- ------------------------- ------------
2016-06-29T09:23:31.3935358Z TASKAGENT5-0010 Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz CPU0                 2                         2           64
2016-06-29T09:23:31.4095356Z Total memory:  7167.55078125

What piqued my interest more was that this is the exact same spec as a D2 v2 box that is available via Azure. Clearly, Microsoft have a build agent template which is built, stored in a pool, and provisioned whenever a build takes place. One must assume that it is disposed of after a single use to prevent any nefarious activity by other people trying to steal code…