dbachecks – A different approach for an in-progress and incremental validation

dbachecks is a new PowerShell module from the SQL Server Community! For more information, read introducing dbachecks.

If you don’t know dbachecks, we have released a good amount of blog posts that will help you:
Announcing dbachecks – Configurable PowerShell Validation For Your SQL Instances by Rob Sewell
introducing dbachecks – a new module from the dbatools team! by Chrissy LeMaire
install dbachecks by Chrissy LeMaire
dbachecks commands by Chrissy LeMaire
dbachecks – Using Power BI dashboards to analyse results by Cláudio Silva
My wrapper for dbachecks by Tony Wilhelm
Checking backups with dbachecks by Jess Pomfret
dbachecks please! by Garry Bargsley
dbachecks – Configuration Deep Dive by Rob Sewell
Test Log Shipping with dbachecks
Checking your backup strategy with dbachecks by Joshua Corrick
Enterprise-level reporting with dbachecks by Jason Squires
Adding your own checks to dbachecks by Shane O’Neill
dbachecks – A different approach for an in-progress and incremental validation by Cláudio Silva

Other documentation:
dbachecks readme
dbachecks wiki (for developers)

I will share one of the ways I like to use dbachecks when I’m knocking down the problems in order to increase the green percentage and lower the red one!

Output files

How do you save the results?
Do you save one file per instance (all tests included)?
Using -Append?
Alternatively, one per check/environment?

There isn't a single way of doing this, nor a single "correct" way.
Here you can find yet another approach, grouping your results per application.

I will share the way I like to use it when analyzing the results with the Power BI dashboards, and explain the advantages I get from it.

Choosing a road

My personal choice is to have one file per check and environment. This means that if I run the SuspectPage check for all instances/databases belonging to the development environment, I will end up with a file named dbachecks_1_SuspectPage_DEV.json.
Along the same lines, I will get a file named dbachecks_1_SuspectPage_PRD.json if I run it for production.

$sqlInstances = "dev1", "dev2"

# Run each check on its own, so every check writes its own JSON file for the DEV environment
$checks = (Get-DbcCheck).UniqueTag

$checks | ForEach-Object {
    Invoke-DbcCheck -SqlInstance $sqlInstances -Check $_ -PassThru -Show Fails |
        Update-DbcPowerBiDataSource -Environment "DEV" -Path "C:\windows\temp\dbachecks"
}


This will output:

Total number of files

“This will create a lot of files…”

Let’s do some math

Let's imagine for a moment that we have to manage 3 different environments (DEV, QA, PRD) and that we currently have 80 checks. If your approach is one file per environment, you will end up with 3 files. The way I like to do it, I will end up with 240 files (80 checks × 3 environments).

WOW! Big difference right?

Fear nothing

Yes, it is a big difference, but that is no problem at all! The Power BI file will deal with this increase flawlessly, as I mentioned before in the dbachecks – Using Power BI dashboards to analyse results blog post.


The biggest advantage, for me, is the possibility to re-run a single test for a single environment and, with it, touch only one of the files. It is an update of that file.
By doing it against the same destination folder, I overwrite the existing file, and then I literally just need to hit the "Refresh" button on the Power BI dashboards.
This way it takes just the time of that test and not of all of them. Quickly and easily, I'm able to confirm that the fix I ran actually worked and my red values are lower! 😀

Real scenario

  1. You run, overnight, all your tests.
  2. In the morning, you open the Power BI dashboard and hit "Refresh".
  3. You look to your red values.
  4. You pick one (for this example's purpose, let's say "Auto-Close").
  5. You run a query to fix all databases with the wrong value.
  6. Re-run just this test for just one environment (run it multiple times for the various environments).
  7. Go to your Power BI and hit “Refresh” again.
  8. Repeat from point 3.

Point 6 (sketched below) is where you will save huge amounts of time, because if you had just one file with all the tests for one environment, you would need to rerun ALL the tests in order to refresh that environment.
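To make step 6 concrete, here is a minimal sketch, assuming the check tag is AutoClose and the instance names are placeholders for your development servers:

$devInstances = "dev1", "dev2"

# Re-run only the AutoClose check for the DEV environment,
# overwriting just that check's JSON file in the data source folder
Invoke-DbcCheck -SqlInstance $devInstances -Check AutoClose -PassThru -Show Fails |
    Update-DbcPowerBiDataSource -Environment "DEV" -Path "C:\windows\temp\dbachecks"

Once it finishes, hitting "Refresh" in Power BI reloads that single file together with all the untouched ones.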

Hope this helps!

Thanks for reading!

dbachecks – Using Power BI dashboards to analyse results

For the last couple of months, members of the dbatools team have been working on a new PowerShell module called dbachecks. This open source PowerShell module will enable you to validate your SQL Instances. Today it is released for you all to start to use 🙂

dbachecks launch – the blog posts series

Here you can find a list of the blog posts made available today:
Announcing dbachecks – Configurable PowerShell Validation For Your SQL Instances by Rob Sewell
introducing dbachecks – a new module from the dbatools team! by Chrissy LeMaire
install dbachecks by Chrissy LeMaire
dbachecks commands by Chrissy LeMaire
dbachecks – Using Power BI dashboards to analyse results by Cláudio Silva
My wrapper for dbachecks by Tony Wilhelm
Checking backups with dbachecks by Jess Pomfret
dbachecks please! by Garry Bargsley
dbachecks – Configuration Deep Dive by Rob Sewell
Test Log Shipping with dbachecks
Checking your backup strategy with dbachecks by Joshua Corrick
Enterprise-level reporting with dbachecks by Jason Squires
Adding your own checks to dbachecks by Shane O’Neill
dbachecks – A different approach for an in-progress and incremental validation by Cláudio Silva

Other documentation:
dbachecks readme
dbachecks wiki (for developers)

Let’s begin

In this blog post, I will write about the Power BI dashboards that we have created to analyse the output of the tests.


Here at the dbachecks team we don't have BI gurus. This means that what you are about to see comes from our non-BI minds, plus some research on the web, to achieve the output we wanted.
The main objective is to have something functional that helps you make decisions and know where you should look next.
That being said, if you master Power BI, the M query language, DAX or anything else we have used (or could use) on this project, and you find something archaic or a bit too "year 2000" in the way we did it, let us know! We would love to follow best practices and improve the dashboards.

Why Power BI? The background…

I started to work with Power BI after seeing a couple of sessions where it was, and was not, the main attraction. One of those sessions was from Rob Sewell (b | t), where he was showing Pester and used Power BI Desktop to show the output. That makes him the principal "culprit" for this choice 🙂. If we add to that the ease of importing JSON (and other) files, the usefulness and the eye-candy dashboards, all together the decision became easier.
With this in mind, I picked up Rob's initial Power BI file, started digging into it and made some changes.

Note: If your tool of choice is not Power BI or you just feel more comfortable using another tool to work with JSON files, please share your dashboards with us. Write about it and share with the community.

Tests output format

dbachecks output consists of one or more JSON files. When you write your dbachecks scripts, you can choose between just one file or several files as the result of your validation. For instance, you can generate one output file per context, or just one that includes all contexts. You can even use -Append to append results to an existing file, as Rob described in the dbachecks – Configuration Deep Dive blog post.
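As a small sketch of those options (the instance names are placeholders and SuspectPage is just an example check), appending a second run to an existing data source file could look like this:

# First run creates the data source file
Invoke-DbcCheck -SqlInstance "sql1" -Check SuspectPage -PassThru |
    Update-DbcPowerBiDataSource -Path "C:\windows\temp\dbachecks"

# A later run is appended to the existing file instead of overwriting it
Invoke-DbcCheck -SqlInstance "sql2" -Check SuspectPage -PassThru |
    Update-DbcPowerBiDataSource -Path "C:\windows\temp\dbachecks" -Append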

Before opening the file that contains the dashboards…

For a more pleasant and smooth experience, we recommend that you update your Power BI desktop to the latest version.

Nowadays, the Power BI team releases a new version every month, and each new version brings new features and improvements. Because we may be using the most recent version of Power BI Desktop to take advantage of some of those features, we encourage you to use the most recent version as well.
You can download it from the official Power BI website.

Note: If you try to open a Power BI file saved on a more recent version than the one you are using you may see an error message like:

This was the result of an attempt to open, on the December 2017 version, a file that was saved with a more recent version (February 2018).

Which file should I open? pbix or pbit?

dbachecks contains two different files:

  • pbix – a document created by Power BI Desktop. It contains queries, data models, visualizations, settings, and reports added by the user.
  • pbit – similar to the pbix, but it is a template. You can configure variables that will be prompted for when you open the file. Those can act, for instance, as filters.

By default, dbachecks outputs the results to the "c:\windows\temp\dbachecks" folder. This folder is also the default one configured in the pbix file. This path is the only variable that we use in the pbit file, which means that if you don't need to change the path because you relied on the default one, you can just open the pbix file and click Refresh.

On the other hand, the intent of the pbit file is to make life easier for people who are not so comfortable with Power BI Desktop; they just need to type their output folder (where all the JSON files are) and they are ready to go.
How is this different from the other file? With the pbix file, you need to go to the Edit Queries menu in order to change your path. That is a more "advanced" way to do it, so if you just want to start seeing your test results in a beautiful way, open the template file (pbit extension), fill in the requested path, hit Load and wait for it to load all of your results.

Note: When using pbit, if no objects appear, please make sure you have entered the correct path. Wrong paths will lead to empty dashboards.

You have opened the pbix file but you are not seeing (your) data?

The pbix file will keep the last data you saw on the dashboard. If you have rerun all or some tests, don’t forget to click the “Refresh” button!

Only after refreshing the data source will you get all the data (re)loaded and thus see the most recent version of it.

The dashboards

For this liftoff of dbachecks, our Power BI file includes two dashboards.

If you have never used Power BI before, you may not know that you can maximize just one visual (almost any type) and come back to the report whenever you want. This is cool when you have a lot of information and want to take a closer look.

See this example (click to open in a new window):

By Environment

This dashboard gives you a glance at how good/bad your environments are. This is done, at first, by the two circles with percentages and green/red waves.

On the right side, you can find a matrix where you can drill down from Environment -> Test -> Context. This drill-down can be hierarchical or absolute. To navigate the levels, select the matrix and use the 3 buttons in the top-left corner of the visual (as shown in the last animation).

At the bottom, you can find a grid that shows only the failed tests, each with a descriptive message.

To help you read this dashboard, on the left side you have quick filters by Environment and Instance.
There are two other ways to filter the information – you can select a row on the matrix or on the grid to see only the information related to that row.

By Time

The main objective of this dashboard is to help you understand which tests take the most time to run.
Bear in mind that the times you see are just the time that a single test, or the sum of tests, took. The time spent switching between tests is not accounted for.

I use this dashboard to understand which tests should/can run in parallel and, that way, make the full execution faster.
Also, you may decide that some tests don't need to run with the same frequency as others.

In the next example, I can see that my Production environment is the one taking the most time to complete; I then filter by it and can see that "Testing duplicate indexes" and "Testing Column Identity Usage" are the ones that take the most time. If I want, I can exclude them from the bar chart just to take a closer look at the other tests' times. Finally, I can go to the filters and remove them, resetting the bar chart right back to where we started.

Take a look:


Yes, there are some rules 🙂

To accomplish these dashboards, some rules must be followed. For example, the "Context" message needs to follow a specific nomenclature. You can read more about it on our wiki on GitHub.
If these rules are not followed when writing the tests, you may see some weird results on the dashboards.

In this example, you can see six instances, but the last two, "Procedures" and "table", appear because the test wasn't written in the right way.

Also, we have a unit test to help you check for these “Context” rules!


The data source in the Power BI file was built to be dynamic. Because we can output just one test result per file (a record) or multiple results in the same file (a list), we built it so that both formats can live together!

Load times

You may be wondering how fast it is to load the data and apply all the transformations we make to the files. We had that in mind and tried to reach a good performance for that task.
I can load 270 files, totalling 397 MB of data, in less than 30 seconds (the time will vary depending on the machine specifications).

If you are curious, each file contains a single test for every instance in a specific environment. At that time I managed 7 environments with more than 100 instances.

Next steps?

Now you can run your tests, analyse the output, make the changes needed, rerun the tests and start seeing your green percentage going up and the number of errors going down!

Our next objectives

We will bring some new dashboards and improve the existing ones whenever possible.
If you have a case that you would like to see covered by a dashboard, share it with us. Do you already have it sorted out? Share it with us and we can replicate it in our file.

It's Open Source – We Want Your Ideas, Issues, New Code

dbachecks is open-source available on GitHub for anyone to contribute.

We would love you to contribute. Please open issues for new tests, enhancements and bugs. Please fork the repository and add code to improve the module, and please give feedback to make this module even more useful.

You can also come to the SQL Server Community Slack, join the dbachecks channel, and get advice, make comments or just join in the conversation.

Thank You

I want to say thank you to all of the people who have enabled dbachecks to get this far. These wonderful people have used their own time to ensure that you have a useful tool available to you for free.

Chrissy LeMaire @cl
Rob Sewell @sqldbawithbeard
Fred Weinmann @FredWeinmann
Stuart Moore @napalmgram
Shawn Melton @wsmelton
Garry Bargsley @gbargsley
Stephen Bennett @staggerlee011
Sander Stad @SQLStad
Jess Pomfret @jpomfret
Jason Squires @js0505
Shane O’Neill @SOZDBA

and all of the other people who have contributed in the dbachecks Slack channel

I will be speaking at SQL Bits 2018

Two weeks from now, on the 23rd of February, I will be speaking at SQLBits 2018!
It's my first time at the largest SQL Server conference in Europe for data professionals.

I will deliver a session about SQL Server Reporting Services and PowerShell titled – “Administrating SSRS without boring web based clicks”.

You can check the great content that will be shared on the 4 days:
Training days: Wednesday and Thursday
Regular sessions: Friday and Saturday

Are you gonna be there? More than 2000 people are already registered!

If you want to know more about the conference you can follow @sqlbits on Twitter or go to sqlbits website.

See you in London!

Using dbatools to verify your SQL Server instances version compliance

One of a DBA's main duties is to guarantee that SQL Server instances are up to date in terms of patches (Service Packs, Cumulative Updates or Security Updates).

Recently, dbatools added a new command that turns this validation into a piece of cake. Thanks to Simone Bizzotto (@niphlod) for baking up the command that Shawn Melton (@wsmelton) and I initially requested.

Some dbatools users already expressed their happiness with the command, like Jess Pomfret

So, I thought that this information should be shared with other people too.

Let me introduce to you – Test-DbaSqlBuild

This new command has been available since v0.9.150.

If you are running this version or higher, you can already take advantage of it; otherwise, you need to upgrade your module version first. Assuming you have installed the module from the PowerShell Gallery and that you have internet access, updating is as easy as running the following command:

Update-Module dbatools -Force

Otherwise, you can use the Save-Module command and then copy the files to your destination host.
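For a host without internet access, a minimal sketch of that route (the paths are just examples) would be:

# On a machine with internet access, save the module to a folder you can copy from
Save-Module -Name dbatools -Path "C:\Temp\Modules"

# Copy the saved dbatools folder to a module path on the destination host
Copy-Item -Path "C:\Temp\Modules\dbatools" -Destination "\\SQL1\C$\Program Files\WindowsPowerShell\Modules" -Recurse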

How does the command work?

This command uses the dbatools-buildref-index.json file, which contains all the information about SQL Server builds. This is the same file that feeds the dbatools builds table already shown in the introducing the community-driven build reference blog post.
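If you are curious about what that reference holds, you can query it directly with Get-DbaSqlBuildReference; a quick sketch (the build number and instance name are just examples):

# Look up a specific build in the dbatools build reference
Get-DbaSqlBuildReference -Build "13.0.4001"

# Or resolve the current build of a live instance
Get-DbaSqlBuildReference -SqlInstance "SQL1"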

The combinations

To run the command, we need at least two parameters: -SqlInstance or -Build, plus one of the following three: -MinimumBuild, -MaxBehind or -Latest.

The most straightforward example is when you want to check whether the instance is running the latest build (which can be a security update, even if it is not labelled as a CU). To do that, you just need to run:

Test-DbaSqlBuild -SqlInstance <instance> -Latest

In this example, I'm testing a SQL Server 2012 instance that I had patched to SP4; after that, a new security fix for Meltdown/Spectre was released, which is why the Compliant property shows False: it is not on the latest existing build.
Note: If you just want to check for the latest SP and CU (leaving out the security patches) you need to use -MaxBehind "0CU"
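That check looks like this (same placeholder instance as before):

# Compliant only if the instance is on the latest SP/CU, ignoring security-only builds
Test-DbaSqlBuild -SqlInstance <instance> -MaxBehind "0CU"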

Now, let's say that you want to confirm that a specific instance is no more than 1 CU behind.
It's as easy as:

Test-DbaSqlBuild -SqlInstance <instance> -MaxBehind "1CU"

The output:

In this example, you can see that this instance is not compliant. Why? Because it is running SQL Server 2016 SP1 CU5, but we asked for a maximum of 1 CU behind, which would be SP1 CU6 (because, at the moment I'm writing this text, the most recent version is SP1 CU7).

Easy, right?
Keep in mind that for -MaxBehind you can also specify the number of service packs, using -MaxBehind "1SP", and you can even use both SP and CU, like -MaxBehind "1SP 1CU".
You can also pass multiple instances and verify them all, like:

$SQLInstances = "SQL1", "SQL2", "SQL3"
Test-DbaSqlBuild -SqlInstance $SQLInstances -MaxBehind "1SP"

Other (real and useful) scenarios

We saw the "online" example, where we query each instance at that moment. Now, I want to share two more examples with you.

Using a central database as the data source

Let's say you have a central database where you keep some information about your estate, and one of those pieces of information is the SQL Server build version.

One code example:

$Instance = "<instance>"
$Database = "<centralDatabase>"
$InstancesTable = "dbo.Instances"
$SQLServersBuilds = Invoke-DbaSqlcmd -ServerInstance $Instance -Database $Database -Query "SELECT serverName, productVersion FROM $InstancesTable" 
$SQLServersBuilds | ForEach-Object {
    $build = $_.ProductVersion.SubString(0, $_.ProductVersion.LastIndexOf('.'))
    $serverName = $_.ServerName
    Test-DbaSqlBuild -Build $build -MaxBehind "1CU" | Select-Object @{Name="ServerName";Expression={$serverName}}, *
} | Out-GridView

For this example, I will query my dbo.Instances table and get the serverName and productVersion columns.
This is how it looks when running the select statement in SSMS:

You can take that data and pass it to the Test-DbaSqlBuild command to know whether it is compliant or not.

Then, for each result, we format the productVersion value into a 3-part value (that is how builds are catalogued in the dbatools build reference file) and pass it to the Test-DbaSqlBuild command.
In this example, I'm piping the output to Out-GridView so I can filter my results and add a filter for Compliant equals false.

Doing ad-hoc testing

The other example I would like to share uses the -Build parameter.
Imagine that you know that your SQL Server instance is running build "13.0.4001", corresponding to SQL Server 2016 SP1, and you want to know how far behind it is compared with the latest available CU. If we run the following command, we will know:

Test-DbaSqlBuild -Build "13.0.4001" -MaxBehind "0CU"

From this output, we know that the most recent version is SP1 CU7; since we asked for a maximum of 0 CUs behind and our build is the base SP1 (no CUs), we are not Compliant.

To give another example of this "ad-hoc" testing, we can use the following code, provided by Simone Bizzotto, to verify whether our instances have the Meltdown/Spectre fix in place:

#Meltdown/Spectre check:
$mapping = @{
    '2008'   = '10.0.6556'
    '2008R2' = '10.50.6560'
    '2012'   = '11.0.7462'
    '2014'   = '12.0.5571'
    '2016'   = '13.0.4466'
    '2017'   = '14.0.3015'
}
$serv = 'SQL01', 'SQL02'
foreach ($ref in (Get-DbaSqlBuildReference -SqlInstance $serv)) {
    Test-DbaSqlBuild -SqlInstance $ref.SqlInstance -MinimumBuild $mapping[$ref.NameLevel]
}

Thanks for reading.

New version of sp_WhoIsActive (v11.20) is available – Deployed on 123 instances in less than 1 minute

Last night, I received Adam Machanic’s (b | t) newsletter “Announcing sp_whoisactive v11.20: Live Query Plans”.

For those who don't know it, sp_WhoIsActive is a stored procedure that provides detailed information about the sessions running on your SQL Server instance.
It is a great tool when we need to troubleshoot problems such as long-running queries or blocking (just two examples).

This stored procedure works on any version/edition since SQL Server 2005 SP1. However, you will only be able to see the new feature (live query plans) if you run it on SQL Server 2016 or 2017.

If you don't receive the newsletter, you can read this announcement here and subscribe to receive the next ones here.

You can read the release notes on the download page.

Thank you, Adam Machanic!

The show off part

Using the dbatools open source PowerShell module, I can deploy the latest version of the stored procedure.

By running the following two lines of code, I updated sp_WhoIsActive to the latest version (the command always downloads the newest one) on my 123 instances in less than one minute (to be precise, in 51.717 seconds).

$SQLServers = Invoke-DbaSqlcmd -ServerInstance "CentralServerName" -Query "SELECT InstanceConnection FROM CentralDB.dbo.Instances" | Select-Object -ExpandProperty InstanceConnection
Install-DbaWhoIsActive -SqlInstance $SQLServers -Database master

The first line retrieves all my instances' connection strings from my central database.
The second one downloads the latest version and compiles the stored procedure in the master database of each instance in that list (123 instances).

Thanks for reading

PowerShell Modules Central – Share with community – What PowerShell modules are you using?

As the blog post title states, this is all about sharing with others! My idea is to share with the community which PowerShell modules you are using.

Let me introduce to you the PowerShell Modules Central

PowerShellModulesCentral is a GitHub repository founded as a central hub for a list of PowerShell modules that people know/use. Each module has a file describing its name and basic information about the module, as well as one or more blog posts/videos from people who have written about or used it.

This way we can reduce friction when people are starting out or are trying to solve similar problems.


When a new module appears on the PowerShell scene, it can be difficult to advertise it and gain mindshare among the developers/end users who could be interested in it. There are also times when it is hard to find out whether a good tool exists, whether it is up to date, and how relevant it is in the community.

Why not just use the PS Gallery or script center?

This is, by no means, a replacement for those. Actually, it is the opposite; it is meant to be a community complement. Normally, when you need to do a task that you've never done before, you like to have a jump start, like blog posts or videos, and maybe you find ones that are very close to your real scenario.
This repository helps not only people who write blog posts and want to share them with the community, but also the new person (to PowerShell, or just to a new task) who is searching for a specific tool to accomplish a task.
I can go to the PowerShell Gallery and see that the module I want to use has 1K downloads. That is really cool! It gives me confidence to use it. But when I want to start working with it, maybe I would like to see examples. The objective here is to have a quick look at some problems and the tools used to solve them, as they may also be your problems.

Let me tell you a quick story

I went to Google, found a PowerShell Gallery script and, after checking that the script didn't handle some particulars, did further research and found (on page 3 or 4 of the Google results, due to ranking) a comment on a forum pointing to the GitHub repository. Guess what? The problems I was having were already addressed. 😉

Are you a module owner? Are you writing something new? Do you contribute to a module? Share it! The ones I know and use could be very different from the ones you know and use! Why not share?

How can this help me?

Are you trying to find a module to work on a specific task? Use the search at the top of the repository page and try to find what you need.

  • Working with Windows? Type “Windows”.
  • Working with SQL Server? Type “SQL Server”.
  • Do you know the author’s name? You can search that way too.
  • Have you read a blog post before and just remember one word or the blogger’s name? Type it and see what you find.

How to contribute?

Just fork the repository, add the information and send a pull request (PR). I will merge it once everything is OK.
For new modules, please use the template available here. If you find that the module already exists, you just need to add your URLs and any other information that should be updated, tags that you think may be useful, add something to the description, etc.

If you use a module that doesn't have a blog post and/or video yet, you can submit a PR anyway, so the whole community can know that it exists and maybe someone will write about it!

Follow up

Follow the repository news by clicking the "Watch" button and/or following the @psmcentral Twitter account.

Feel free to share this blog post! The more people we reach, the better!

Thanks for reading.

Generate Markdown Table of Contents based on files within a folder with PowerShell

Last week I was talking with Constantine Kokkinos (b | t) about generating a Table Of Contents (TOC) for a GitHub repository.

He wrote a cool blog post about it – Generating Tables of Contents for Github Projects with PowerShell – and I will write this one about a different problem/solution.


I'm working on a new project (news coming soon) that uses a GitHub repository, and I expect to have a large number of files within a specific folder.


After some pull requests and merges, I want to update my readme.md file and refresh the INDEX with this TOC.
For this:

  • I want to be able to generate a TOC based on the existing content of a specific folder.
  • Each TOC entry must be composed of a name and a link to the online .md file.
  • This list must be ordered alphabetically.

Then, I can copy & paste it and update the readme.md.
NOTE: For now, I just want a semi-automatic way to do it. Maybe later I will set up AppVeyor and make this fully automated 🙂.


Get all files with the .md extension, order them by name and, for each one, generate a line with a link to the .md file in the GitHub repository.

To build the list, I will use the "*" (asterisk) character after a TAB to generate a sub-list (this is Markdown syntax).

The code

I have three parameters:

  • $BaseFolder – the folder's location on disk
  • $BaseURL – used to build the URL for each file. This will be added as a link
  • $FiletypeFilter – used to filter the files in the folder. In my case, I will use "*.md" because I only want markdown files.

The code is:
UPDATE: Thanks to Jaap Brasser (b | t), who has contributed to this code by adding the help and some improvements, like dealing with special characters (spaces) in the URL. You can find the most recent version of this Convert-FolderContentToMarkdownTableOfContents.ps1 function here on my GitHub.
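Since the full function lives on GitHub, here is only a minimal sketch of the idea behind it (the folder, URL and filter values are examples, and the real function wraps this in parameters, help and extra robustness):

$BaseFolder     = 'C:\Projects\NewProject\Modules'
$BaseURL        = 'https://github.com/user/repository/tree/master/Modules'
$FiletypeFilter = '*.md'

'## Index'
'* Modules'
Get-ChildItem -Path $BaseFolder -Filter $FiletypeFilter |
    Sort-Object -Property Name |
    ForEach-Object {
        # Escape spaces and other special characters so the generated link stays valid
        "  * [$($_.BaseName)]($BaseURL/$([uri]::EscapeDataString($_.Name)))"
    }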

Running this code pointing to my “NewProject” folder

I will get this output (these are fake links, just to show the output format):


Nice! This output has the following markdown behind it:

## Index
* Modules
  * [File1](https://github.com/user/repository/tree/master/Modules/File1.md)
  * [OneNewFile](https://github.com/user/repository/tree/master/Modules/OneNewFile.md)
  * [OtherFile](https://github.com/user/repository/tree/master/Modules/OtherFile.md)

Now, I can copy this markdown code and update my readme.md file.

Final thoughts

This isn't rocket science 🙂 but it is an idea and a piece of code that helps me, and maybe it can help you too 🙂

Read Constantine’s blog post (Generating Tables of Contents for Github Projects with PowerShell) to get different ideas.

Thanks for reading