New version of sp_WhoIsActive (v11.20) is available – Deployed on 123 instances in less than 1 minute

Last night, I received Adam Machanic’s (b | t) newsletter “Announcing sp_whoisactive v11.20: Live Query Plans”.

For those who don’t know about it, sp_WhoIsActive is a stored procedure that provides detailed information about the sessions running on your SQL Server instance.
It is a great tool when we need to troubleshoot problems such as long-running queries or blocking, just to name two examples.

This stored procedure works on any version/edition since SQL Server 2005 SP1. However, you will only be able to see the new feature (live query plans) if you run it on SQL Server 2016 or 2017.

If you don’t receive the newsletter, you can read this announcement here and subscribe to receive the next ones here.

You can read the release notes on the download page.

Thank you, Adam Machanic!

The show off part

Using the dbatools open source PowerShell module, I can deploy the latest version of the stored procedure.

By running the following two lines of code, I updated sp_WhoIsActive to the latest version (the command always downloads the newest one) on my 123 instances in less than one minute (to be precise, in 51.717 seconds).

$SQLServers = Invoke-DbaSqlcmd -ServerInstance "CentralServerName" -Query "SELECT InstanceConnection FROM CentralDB.dbo.Instances" | Select-Object -ExpandProperty InstanceConnection
Install-DbaWhoIsActive -SqlInstance $SQLServers -Database master

The first line retrieves the connection strings of all my instances from my central database.
The second one downloads the latest version and compiles the stored procedure on the master database of each instance in that list (123 instances).
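If you are curious about the timing, one way to measure it yourself (a sketch, assuming the same central-database setup as above) is to wrap the two lines in Measure-Command:

```powershell
# Sketch: time the deployment. "CentralServerName" and CentralDB.dbo.Instances
# are the same placeholders used above - adjust them to your environment.
$elapsed = Measure-Command {
    $SQLServers = Invoke-DbaSqlcmd -ServerInstance "CentralServerName" `
        -Query "SELECT InstanceConnection FROM CentralDB.dbo.Instances" |
        Select-Object -ExpandProperty InstanceConnection

    Install-DbaWhoIsActive -SqlInstance $SQLServers -Database master
}
"Deployed to $($SQLServers.Count) instances in $($elapsed.TotalSeconds) seconds"
```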

Thanks for reading

Using Common Table Expression (CTE) – Did you know…

Today I will write just a short blog post to do a quick reminder!

I still hear a lot of people suggesting CTEs because they think a CTE works like a temporary table (you populate the table once and then it is reused).

It doesn’t!

From the documentation:

Specifies a temporary named result set, known as a common table expression (CTE).

Maybe they are focusing on the “temporary” word.

Using the CTE two times will perform two different executions! Don't believe me? See the next example.
If we run the following code, do you expect to get the same value from both queries? Note: we have a UNION ALL between them.

WITH cte AS
(
	SELECT NEWID() AS Col1
)
SELECT Col1
  FROM cte
UNION ALL
SELECT Col1
  FROM cte

Sorry to disappoint you, but it will run the CTE's code twice and return the value(s) from each execution.
As we are using the NEWID() function, two different values will be generated.
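If you do need the result computed once and then reused, materialize it yourself, for example into a temporary table (a quick sketch):

```sql
-- Materialize the result once...
SELECT NEWID() AS Col1
  INTO #tmp;

-- ...then both queries read the stored value, so they return the same result
SELECT Col1 FROM #tmp
UNION ALL
SELECT Col1 FROM #tmp;

DROP TABLE #tmp;
```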


To complete the question: did you know that a CTE's code will be executed as many times as you use it?

Thanks for reading!

TSQL Tuesday #96: Folks Who Have Made a Difference

This month's T-SQL Tuesday is brought to us by Ewald Cress (blog | twitter) and is all about "folks who have made a difference" in our careers.

Thank you, Ewald! This is a great topic!

Here is my short list:

Paulo Silva (in)

He was my first boss in the IT world! I was his apprentice when I started my internship. He was going to move to a manager position and I had to continue his work. He was responsible for the beginning of my career with SQL Server 2000 and VB6.

He was one of the main culprits for my growth not only in IT but also as a person!

Etienne Lopes (t | b)

After 5 years working in IT, I had the tremendous pleasure of meeting Etienne. This guy is a professor! He has the gift of the word!

I worked closely with him for about 2 years, and it was one of the best times of my career! I have always considered myself a sponge, and as long as Etienne shared his knowledge, I felt I was absorbing every single word!

Much of the foundation I have with SQL Server I learned from him!

André Batista (t) / Niko Neugebauer (t | b)

These two guys are responsible for my very first talk at a user group (SQLPort).

After that, I became more and more involved with the local community, and today I speak at more user groups and help with the SQL Saturday / TugaIT events in Lisbon!

Rob Sewell (t | b)

The one and only DBAWithABeard! My recent experiments came from blog posts/presentations that I read/saw from Rob. PowerBI and Pester are just two of them. He is super accessible and always willing to help.

Chrissy LeMaire (t | b)

I met Chrissy less than 2 years ago at the TugaIT conference (May 2016) in Lisbon. At the time, only about a month had passed since the dbatools.io launch, and I had written a couple of PowerShell scripts that I thought would be nice to add to the initial tool.

We talked, exchanged contacts, and one month later, in June, I was submitting my first pull request to the dbatools GitHub repository.

From that time until now it has been a blast! I have learned so much about PowerShell with her, and she is also partly responsible for my MVP award, not only because she nominated me for the very first time but also because of all the visibility that the project brought me.

She was also the first person delivering a presentation with me. 🙂

People I know from the magazines or internet

People who helped me understand SQL Server much better and from whom I have read a lot of articles: Itzik Ben-Gan (I remember the times I read SQL Magazine with great articles from him), Paul Randal, Kimberly L. Tripp, Adam Machanic, Paul White and Kendra Little.

Wrap up

I could add more people to the list, but these are the ones I want to highlight from different periods of my career (the beginning, the middle and nowadays).

Thank you all!

 

DELETE data on SQL Server HEAP table – Did you know…

Before I complete my question let me provide context.

I received an alert saying that a specific database could not allocate a new page (the disk was full).

The message that you will see on the SQL Server Error log is:

Could not allocate a new page for database ” because of insufficient disk space in filegroup ”. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

I didn't know the database structure or what was stored there, so I picked up a script from my toolbelt that lists all indexes from all tables, along with some information like the number of rows and the space they occupy. I sorted by occupied space in descending order and… look what I found…

So…my script has a bug? 🙂 No, it hasn’t!

The joy of heaps

First, the definition:

A heap is a table without a clustered index. One or more nonclustered indexes can be created on tables stored as a heap. Data is stored in the heap without specifying an order. Usually data is initially stored in the order in which is the rows are inserted into the table, but the Database Engine can move data around in the heap to store the rows efficiently; so the data order cannot be predicted. To guarantee the order of rows returned from a heap, you must use the ORDER BY clause. To specify the order for storage of the rows, create a clustered index on the table, so that the table is not a heap.

Source: MS Docs – Heaps (Tables without Clustered Indexes)

Until now, everything seems normal, it is just a table with unordered data.

Why am I talking about heaps?

Not because of the table name (it was created on purpose for this demo). Let me show you the whole row from the script:

Do you have a clue? Yup, index_id = 0. That means our table does not have a clustered index defined and therefore it is a HEAP.

Even so, how is it possible? 0 rows but occupying several MB…

The answer is…on the documentation 🙂

When rows are deleted from a heap the Database Engine may use row or page locking for the operation. As a result, the pages made empty by the delete operation remain allocated to the heap. When empty pages are not deallocated, the associated space cannot be reused by other objects in the database.

source: DELETE (Transact-SQL) – Locking behavior

That explains it!
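If you want to hunt for heaps in this situation, a query along these lines (a sketch using the sys.dm_db_partition_stats DMV, run per database) lists heaps together with their row counts and allocated space:

```sql
-- index_id = 0 means the object is a HEAP
SELECT OBJECT_NAME(object_id) AS TableName,
       SUM(row_count)         AS TotalRows,
       SUM(reserved_page_count) * 8 / 1024.0 AS ReservedMB
  FROM sys.dm_db_partition_stats
 WHERE index_id = 0
 GROUP BY object_id
 ORDER BY ReservedMB DESC;
```

A heap showing 0 rows but a large ReservedMB is exactly the symptom described above.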

So…what should I do in order to get my space back when deleting from a HEAP?

On the same documentation page we can read the following:

To delete rows in a heap and deallocate pages, use one of the following methods.

  • Specify the TABLOCK hint in the DELETE statement. Using the TABLOCK hint causes the delete operation to take an exclusive lock on the table instead of a row or page lock. This allows the pages to be deallocated. For more information about the TABLOCK hint, see Table Hints (Transact-SQL).
  • Use TRUNCATE TABLE if all rows are to be deleted from the table.
  • Create a clustered index on the heap before deleting the rows. You can drop the clustered index after the rows are deleted. This method is more time consuming than the previous methods and uses more temporary resources.

Following the documentation, it suggests using the TABLOCK hint in order to release the empty pages when deleting the data.
Example:

DELETE 
  FROM dbo.Heap WITH (TABLOCK)

What if I didn't do it that way, or someone else ran a DELETE without specifying the hint?

You can rebuild your table using this syntax (since SQL Server 2008):

ALTER TABLE dbo.Heap REBUILD

This way, the table will release the empty pages and you will recover the space to use for other objects in the database.

Wrap up

I hope that with this little post you understood how and why a HEAP can have few rows, or even zero, but still occupy lots of space. I have also mentioned two ways to solve this problem.
I have found databases with dozens of HEAPs, almost empty or even empty, that were occupying more than 50% of the total space allocated to the database. And guess what? People were complaining about space.

To finish, I need to complete the title: did you know… you should use the TABLOCK hint when deleting data from a HEAP?

Thanks for reading!

Someone is not following the best practices – dbatools and Pester don’t lie!

This month's T-SQL Tuesday is brought to us by my good friend Rob Sewell (b | t). The topic: "Let's get all Posh – What are you going to automate today?"

I have written some blog posts on how I use PowerShell to automate mundane tasks and some other more complex scenarios, like Find and fix SQL Server databases with empty owner property using dbatools PowerShell module, Have you backed up your SQL Logins today?, or even using the ReportingServicesTools module to deploy reports – SSRS Report Deployment Made Easy – 700 times Faster.

But today I want to bring something a little different. This year, back in May, I saw two presentations from Rob about using Pester to write unit tests for our PowerShell code and also to validate options/infrastructure, like checklists. This got my attention and made me want to play with it!

Therefore, I want to share an example with you using two of my favorite PowerShell modules dbatools and Pester.

Let’s play a game

You go to a client, or you have just started at a new employer, and you want to know if the entire SQL Server estate complies with best practices.

For the purpose of this blog post, we will check:

  • if our databases (from all instances) have the following configurations:
    • PageVerify -> Checksum
    • AutoShrink -> False
  • if each SQL Server instance:
    • has the MaxMemory setting configured to a value lower than the total existing memory on the host.

How would you do that?

Let me introduce to you – dbatools

For those who don’t know, dbatools is a PowerShell module, written by the community, that makes SQL Server administration much easier using PowerShell. Today, the module has more than 260 commands. Go get it (dbatools.io) and try it! If you have any doubt you can join the team on the #dbatools channel at the Slack – SQL Server Community.

In this post I will show some of those commands and how they can help us.

Disclaimer: obviously this is not the only way to accomplish this request, but for me it is an excellent way!

Get-DbaDatabase command

One existing command in the dbatools Swiss army knife is Get-DbaDatabase.
As stated in the command description:

The Get-DbaDatabase command gets SQL database information for each database that is present in the target instance(s) of SQL Server. If the name of the database is provided, the command will return only the specific database information.

This means that I can run the following piece of PowerShell code and get some information about my databases:

Get-DbaDatabase -SqlServer sql2016 | Format-Table

This returns the following information from all existing databases on this SQL2016 instance.

Too little information

That's true; at first glance it does not bring enough information. I can't even see the "PageVerify" and "AutoShrink" properties that I want. But that is because, by default, only a handful of properties are output, and this doesn't mean that the others are not there.

To confirm this, we can run the same code without the "| Format-Table". Format-Table is useful to output the information in a table format, but depending on the size of your window it can show more or fewer columns.
By running the command without it, we can see the following (just showing the first 3 databases):

Now we can see more properties available; look at the ones inside the red rectangle.

I continue not to see the ones I want

You are right. But, as I said before, that does not mean they aren't there.
To simplify the code, let's assign our output to a variable named $databases and then have a look at all the Members existing on this object:

$databases = Get-DbaDatabase -SqlServer sql2016
$databases | Get-Member

Now we get a lot of stuff! The Get-Member cmdlet shows us the Properties and Methods of the object (in this case, $databases).

This means that I can use a filter to find members with "auto" in their name:

$databases | Get-Member | Where-Object Name -like *auto*

Some cmdlets have parameters that allow us to filter information without the need to pipe it, so the last command could be written as:

$databases | Get-Member -Name *auto*

Which will output something like this:

So, we have found our "AutoShrink" property. With this in mind, let's query all the properties we want.

$databases | Select-Object SqlInstance, Name, AutoShrink, PageVerify

And here we have the result:

Scaling for multiple instances

This is where the fun begins.
We can pass multiple instance names and the command will go through all of them and output a single object with the data.

$databases = Get-DbaDatabase -SqlServer sql2016, sql2012
$databases | Select-Object SqlInstance, Name, AutoShrink, PageVerify

Which outputs:

As you can see, I have passed two different instances, sql2016 (in red) and sql2012 (in green), and the output brings back information from both.

Using Out-GridView to filter results

We can use another native PowerShell cmdlet called Out-GridView to show our results in a grid format. This grid also makes it possible to use filters.
For the next example, I have misconfigured two databases so we can find them among the others.

$databases | Select-Object SqlInstance, Name, AutoShrink, PageVerify | Out-GridView

As you can see, inside the red rectangles we have two configurations that do not follow SQL Server best practices. You can also see the green rectangle in the top left corner where you can type text and the results will be filtered as you type. So if you type "true" you will end up with just one record.

Checking the MaxMemory configuration

Now that you have seen how to do it for one command, you can start exploring the other ones. As I said at the beginning of this post, we will also check the MaxMemory setting for each instance. We will use Get-DbaMaxMemory. From the help page we can see the description, which says:

This command retrieves the SQL Server 'Max Server Memory' configuration setting as well as the total physical memory installed on the server.

Let’s run it through our two instances:

Get-DbaMaxMemory -SqlInstance sql2012, sql2016

We can see that the SQL2012 instance is running on a host with 6144MB of total memory but its MaxMemory setting is set to 3072MB, and also that the SQL2016 instance has 4608MB configured from the 18423MB existing on the host.
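Because the command returns objects, we could even filter right away for instances where the setting is not below the host's total memory (a sketch, assuming the SqlMaxMB and TotalMB property names shown in the output):

```powershell
# Flag instances whose MaxMemory setting is not lower than the host's total memory
Get-DbaMaxMemory -SqlInstance sql2012, sql2016 |
    Where-Object { $_.SqlMaxMB -ge $_.TotalMB }
```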

Final thought on this fast introduction to dbatools PowerShell module

As you can see, it is pretty easy to run the commands against one or multiple instances and get information to work on. You have also seen different ways to output that information.
I encourage you to use Find-DbaCommand to discover what other commands exist and what they can do for you.

For example, if you want to know which commands work with "memory", you can run the following code:

Find-DbaCommand -Pattern memory

Automating even more

Using the dbatools module, we could verify whether the best practice is in place or not. But we had to run the command and then verify the values by filtering and looking at each row.

You may be thinking that there must be some more automated method to accomplish that, right?

Say hello to Pester PowerShell module

Pester is a unit test framework for PowerShell. I like to say: if you can PowerShell it, you can Pester it.

Pester provides a framework for running Unit Tests to execute and validate PowerShell commands. Pester follows a file naming convention for naming tests to be discovered by pester at test time and a simple set of functions that expose a Testing DSL for isolating, running, evaluating and reporting the results of PowerShell commands.

Please see how to install Pester module here.

With this framework (I really encourage you to read more about it on the project wiki), we can automate our tests and make it do the validations for us!

As a quick example, if we run the following code:
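A minimal sketch of that test, assuming the classic "Should Be" syntax used in the rest of this post:

```powershell
Describe "Current user" {
    Context "whoami output" {
        It "Should be base\claudio" {
            whoami | Should Be "base\claudio"
        }
    }
}
```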

We are checking if the login returned by the whoami is base\claudio.

This returns green, which means it's OK!

If it is not OK (because I'm testing for "base\claudio.silva"), it will return something like this:

Quick walkthrough on Pester syntax

As you can see, to write a test we need:

  • Describe block (attention: the “{” must be on the same line!)
    • Inside it, the Context block
And inside the Context block, the validation that we want to do, using It and Should.

Let’s join forces

With this in mind, I can create tests for my needs using dbatools and Pester.

I will have a variable ($SQLServers)

$SQLServers = @('sql2012', 'sql2014', 'sql2016')

with all the instances I want to test, and two "Describe" blocks, one for "Testing database options" – PageVerify and AutoShrink:

Describe "Testing Database Options" {
   foreach($Server in $SQLServers){
      #Selecting just a few columns so it doesn't spend time returning things we don't need
      $databases = Get-DbaDatabase -SqlServer $Server | Select-Object Name, SqlInstance, CompatibilityLevel, PageVerify, AutoShrink, AutoClose
      foreach($database in $databases) {
         Context "$($database.Name) Validation" {
            It "PageVerify set to Checksum" {
               $database.PageVerify | Should Be "Checksum"
            }
            It "AutoShrink set to False" {
               $database.AutoShrink | Should Be $false
            }
         }
      }
   }
}

And another one for “Testing instance MaxMemory”:

Describe "Testing Instance MaxMemory" {
   foreach($Server in $SQLServers){
      $instanceMemory = Get-DbaMaxMemory -SqlInstance $Server
      Context "Checking MaxMemory value" {
         It "$($Server) instance MaxMemory value $($instanceMemory.SqlMaxMb) is less than host total memory $($instanceMemory.TotalMB)" {
            $instanceMemory.SqlMaxMb | Should BeLessThan $instanceMemory.TotalMB
         }
      }
   }
}

To run these tests, we should save them in a file whose name ends with ".Tests.ps1" – let's save it as "SQLServerBestPractices.Tests.ps1". To run the tests, we use Invoke-Pester and pass the file that contains them.

Invoke-Pester .\SQLServerBestPractices.Tests.ps1

Too much noise – can't find the failed tests easily

You are right; showing all the greens makes it easy to lose the possible red ones among them. But Pester has an option to show just the failed tests.

Invoke-Pester .\SQLServerBestPractices.Tests.ps1 -Show Failed

But be aware that -Show Fails can be a better option, especially when you are working with multiple Tests.ps1 files.

This way you can see where your errors come from.

Reading and fixing the errors

As you can read from the last image, from the -Show Failed execution, the database "dbft" on the "SQL2016" instance has the "AutoShrink" property set to "True", but we expect the value "False". Now you can go to the database properties and change this value!

Also, the "PageVerify" value that we expect to be "Checksum" is "TornPageDetection" for the database "dumpsterfire4" on the "SQL2016" instance.

Finally, the MaxMemory configuration on the "SQL2016" instance is set to 46080MB (45GB), but we expect it to be less than 18432MB (18GB), which is the total memory of the host. We need to reconfigure this value too.

This is great!

Yes it is! Now, when a new database is born on an existing instance, or you add a new instance to the list, you can simply rerun the tests and the new stuff will be included in this set of tests!

If you set it to run daily, or even once per week, you can check your estate and catch new stuff that wasn't set up by you and maybe is not following best practices.

Get the fails and email them (I will blog about it).
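A possible sketch of that idea, using Invoke-Pester's -PassThru parameter to capture the results object (the SMTP server and addresses below are placeholders):

```powershell
# Run the tests quietly and capture the results object
$results = Invoke-Pester .\SQLServerBestPractices.Tests.ps1 -PassThru -Show None

if ($results.FailedCount -gt 0) {
    # Build a plain-text summary of every failed test
    $body = $results.TestResult |
        Where-Object { -not $_.Passed } |
        ForEach-Object { "$($_.Describe) / $($_.Context) / $($_.Name)" } |
        Out-String

    # Placeholders: adjust the SMTP server and addresses to your environment
    Send-MailMessage -SmtpServer "smtp.mydomain.local" `
        -From "pester@mydomain.local" -To "dba@mydomain.local" `
        -Subject "$($results.FailedCount) best practice test(s) failed" `
        -Body $body
}
```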

Next steps

  • Explore Pester syntax.
  • Add new instances.
  • Add new tests
    • Check if you have access to the instance (a great way to quickly know if an instance is stopped)
    • Check if your backups are running successfully and within your policy's time interval
    • Check if your data files are set to grow by a fixed value and not by a percentage. Also, check if that fixed value is more than X MB.
    • Want to test your last backup? Or something completely different, like Rob's Pester for Presentations – Ensuring it goes ok?

You name it!

Want more?

I hope you have seen some new stuff and got some ideas from this blog post!

If you want to know if there will be a dbatools presentation near you, visit our presentation page. You can find some of our presentations on our YouTube channel and code examples in the community presentations on GitHub.

About Pester and other examples and use cases, we have the Articles and other resources page maintained by the Pester team.

I'm looking forward to reading the other blog posts (follow the comments on Rob's post, or the roundup later) on this month's T-SQL Tuesday and seeing what people are doing with PowerShell.

Thanks for reading.

“Invalid class [0x80041010]” error when trying to access SQLServer’s WMI classes

I was using the open source PowerShell module dbatools (GitHub repository) to get the list of SQL Server services on a bunch of hosts, so I could confirm they are in the "running" state.

— Quick note —
For those who don’t know, dbatools is a module, written by the community, that makes SQL Server administration much easier using PowerShell. Today, the module has more than 260 commands. Go get it and try it! If you have any doubt you can join the team on the #dbatools channel at the Slack – SQL Server Community.
— Quick note —

To accomplish this, I'm using Get-DbaSqlService, initially written by Klaas Vandenberghe (b | t).

This command is very handy, as it will try different ways to connect to the host, and we don't need to do anything extra. Also, it has a -Credential parameter, so we can use it to connect to hosts in different domains (I have 10 different credentials, one per domain).

Everything was running fine, for the first couple of hosts, until…

I got the following message when running on a specific host:

WARNING: Get-DbaSqlService – No ComputerManagement Namespace on HOST001. Please note that this function is available from SQL 2005 up.

Trying to get more information, I executed the same command but added the -Verbose switch.

From all the blue lines, I spotted this:

VERBOSE: [Get-DbaCmObject][12:23:31] [HOST001] Retrieving Management Information
VERBOSE: [Get-DbaCmObject][12:23:31] [HOST001] Accessing computer using Cim over WinRM
VERBOSE: [Get-DbaCmObject][12:23:47] [HOST001] Accessing computer using Cim over WinRM – Failed!
VERBOSE: [Get-DbaCmObject][12:23:47] [HOST001] Accessing computer using Cim over DCOM
VERBOSE: [Get-DbaCmObject][12:23:48] [HOST001] Accessing computer using Cim over DCOM – Success!

OK, this means that for this specific host I can't connect via WinRM (using WSMan), but I can when using the DCOM protocol. However, the WMI query used to get the list of SQL services fails.

Going further

I opened the Get-DbaSqlService.ps1 script and spotted where the warning message comes from. Then I copied the code to a new window in order to isolate it and run some more tests.

The code is:

$sessionoption = New-CimSessionOption -Protocol DCOM
$CIMsession = New-CimSession -ComputerName $Computer -SessionOption $sessionoption -ErrorAction SilentlyContinue -Credential $Credential
#I have skipped an if ( $CIMSession ) that is here because we know that works.
$namespace = Get-CimInstance -CimSession $CIMsession -NameSpace root\Microsoft\SQLServer -ClassName "__NAMESPACE" -Filter "Name Like 'ComputerManagement%'" -ErrorAction SilentlyContinue |Where-Object {(Get-CimInstance -CimSession $CIMsession -Namespace $("root\Microsoft\SQLServer\" + $_.Name) -Query "SELECT * FROM SqlService" -ErrorAction SilentlyContinue).count -gt 0}

I split the last command to remove the pipeline, since I wanted to analyze each part of the code. I ended up with the following code:

$sessionoption = New-CimSessionOption -Protocol DCOM
$CIMsession = New-CimSession -ComputerName "HOST001" -SessionOption $sessionoption -ErrorAction Continue -Credential $Credentials -Verbose

Get-CimInstance -CimSession $CIMsession -NameSpace root\Microsoft\SQLServer -Query "Select * FROM __NAMESPACE WHERE Name Like 'ComputerManagement%'"
#This one is comment for now
#Get-CimInstance -CimSession $CIMsession -Namespace $("root\Microsoft\SQLServer\ComputerManagement10") -Query "SELECT * FROM SqlService"

This can return more than one row with different ComputerManagement entries (like ComputerManagement10), depending on the versions you have installed on the host. The number "10" refers to SQL Server 2008.
Now I can uncomment the last command and run it. The result is:

Get-CimInstance : Invalid class
At line:1 char:1
+ Get-CimInstance -CimSession $CIMsession -Namespace $(“root\Microsoft\SQLServer\C …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : MetadataError: (:) [Get-CimInstance], CimException
+ FullyQualifiedErrorId : HRESULT 0x80041010,Microsoft.Management.Infrastructure.CimCmdlets.GetCimInstanceCommand
+ PSComputerName : HOST001

OK, a different error message. Let's dig into it. I logged in on the host and confirmed that I have a SQL Server 2008 R2 instance installed. This means I'm not accessing a version lower than 2005, as the initial warning message was suggesting.

I tried to execute the same query locally, but this time using Get-WmiObject instead of Get-CimInstance (which, in this case, wasn't available because the host only has PowerShell v2.0 – it's a Windows Server 2008 SP2, and the CIM cmdlets appeared in v3.0), and it failed with the same error.

Get-WmiObject : Invalid class
At line:1 char:5
+ gwmi <<<< -Namespace “root\Microsoft\SQLServer\ComputerManagement10” -Query “SELECT * FROM SqlService”
+ CategoryInfo : InvalidOperation: (:) [Get-WmiObject], ManagementException
+ FullyQualifiedErrorId : GetWMIManagementException,Microsoft.PowerShell.Commands.GetWmiObjectCommand

I remembered from past experiences that SQL Server Configuration Manager relies on WMI classes to show its information, so I tried to open it and got the following error message:

Cannot connect to WMI provider. You do not have permission or the server is unreachable. Note that you can only manage SQL Server 2005 and later servers with SQL Server Configuration Manager.
Invalid class [0x80041010]

Again that 2005 callout, but… did you recognize the last sentence? It's the same error I was getting with Get-CimInstance remotely and Get-WmiObject locally.

Definitely something is broken.

Let’s fix it!

To fix this problem, we need to reinstall the SQL Server WMI provider. To do this, we need to run two commands (I found the solution in this post).

  1. Install the classes:
    Go to C:\Program Files (x86)\Microsoft SQL Server\<Version>\Shared (version 110 is SQL Server 2012).
    There you will find a file with the .mof extension, named sqlmgmproviderxpsp2up.mof.
    Now, on the command line, run the following command:
    mofcomp sqlmgmproviderxpsp2up.mof
    The output:
  2. Install the localization info:
    Navigate to the Shared sub-folder that indicates the locale of your SQL Server installation. In my case it is 1033 (English – United States).
    Inside that folder you will find a file with the .mfl extension, named sqlmgmprovider.mfl. On the command line, run the following command:
    mofcomp sqlmgmprovider.mfl
    The output:
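Putting the two steps together on the command line (the paths below assume a SQL Server 2012 installation – version folder 110 – with the 1033/English locale; adjust them for your version and language):

```
cd "C:\Program Files (x86)\Microsoft SQL Server\110\Shared"
mofcomp sqlmgmproviderxpsp2up.mof
cd 1033
mofcomp sqlmgmprovider.mfl
```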

With these 2 actions, we are done.

Now we can try to open SQL Server Configuration Manager again, and it opens as expected, without error messages!

Let’s go back and rerun our commands.
On the host:

Remotely:

And from dbatools Get-DbaSqlService command:

No more “invalid class” messages and we get the output we want!

Thanks for reading.

HTTP 403 error – PowerShell Remoting, Different Domains and Proxies

In my day-to-day work, I use the Nagios monitoring software. I want to add some custom SQL Server scripts to enrich the monitoring, and to accomplish this I will need to:

  • Find a folder
  • Create a sub-folder
  • Copy a bunch of files
  • Edit an INI file to verify/add new entries

all of this for every single host on my entire estate. Obviously (for me 🙂 ) I decided to use PowerShell!

Hold your horses!

Yes, calm down. I'm working at a client where the network is anything but simple. As far as I know, they have 10 domains and only a few of them have trust configured – and even for those that do, the trust is not bidirectional… so I didn't expect an easy journey to get the task done.

Side note: for those wondering how I can live without PowerShell – I can't! But the majority of my time using PowerShell is with SQL Server, mainly using SMO (with the help of dbatools), which means I hadn't struggled that much until now.

“…WinRM client received an HTTP status code of 403…”

Ok, here we go!

PowerShell Remoting and different domains…

…needs different credentials. This is a requirement when using an IP address.
If we try to run the following code:

$DestinationComputer = '10.10.10.1'
Invoke-Command -ScriptBlock { Get-Service *sql* } -ComputerName $DestinationComputer

we will get the following error message:

Default authentication may be used with an IP address under the following conditions: the transport is HTTPS or the destination is in the TrustedHosts list, and explicit credentials are provided.

First, I added the destination computer to my TrustedHosts list. We can do this in two ways:

Using Set-Item PowerShell cmdlet

Set-Item WSMan:\localhost\Client\TrustedHosts "10.10.10.1"

Or using winrm executable:

winrm s winrm/config/client '@{TrustedHosts="10.10.10.1"}'

Note: you can use "*" (asterisk) to say that all remote hosts are trusted, or just a segment of IPs, like "10.10.10.*".
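Before changing the setting, you may want to check what is already there, and use the -Concatenate switch to append to the existing list instead of overwriting it (a quick sketch):

```powershell
# Show the current TrustedHosts value
Get-Item WSMan:\localhost\Client\TrustedHosts

# Append a new host instead of replacing the whole list
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "10.10.10.1" -Concatenate
```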

But there is another requirement, as the error message says: "…and explicit credentials are provided." This means that we need to provide – and in this case I really want to use – a different credential, so I have modified the script to:

$DestinationComputer = '10.10.10.1'
Invoke-Command -ScriptBlock { Get-Service *sql* } -ComputerName $DestinationComputer -Credential domain2\user1

Now I get prompted for the user's password and I can… get a different error message (*sigh*):

[10.10.10.1] Connecting to remote server 10.10.10.1 failed with the following error message : The WinRM client received an HTTP status code of 403 from the remote WS-Management service. For more information, see the

about_Remote_Troubleshooting Help topic.

+ CategoryInfo : OpenError: (10.10.10.1:String) [], PSRemotingTransportException

+ FullyQualifiedErrorId : -2144108273,PSSessionStateBroken

This one was new to me, so I jumped to Google and started searching for this error message. Unfortunately, all the references I found were about solving an IIS problem with the SSL checkbox on the website, like this example.

Clearly this is not the problem I was having.

Proxies

I jumped into the PowerShell Slack (you can ask for an invite here and join more than 3 thousand professionals) and asked for help on the #powershell-help channel.
In the meantime, I continued my search and found something to do with proxies in The dreaded 403 PowerShell Remoting blog post.
This could actually have helped, but I didn't want to remove the existing proxy settings from the remote machine, so I had to find another way.

Returning to Slack, Josh Duffney (b | t) and Daniel Silva (b | t) quickly offered to help me, and when I mentioned the blog post on proxies, Daniel showed me the PowerTip PowerShell Remoting and HTTP 403 Error that I hadn't found before (don't ask me why… well, I have an idea: I copied & pasted the whole error message, that's why).

ProxyAccessType

The answer, for my scenario, is the ProxyAccessType parameter. As it says on the help page, this option "defines the access type for the proxy connection". There are five different options: AutoDetect, IEConfig, None, NoProxyServer and WinHttpConfig.

I needed to use NoProxyServer to "do not use a proxy server – resolves all host names locally". Here is the full code:

$DestinationComputer = '10.10.10.1'
$option = New-PSSessionOption -ProxyAccessType NoProxyServer
Invoke-Command -ScriptBlock { Get-Service *sql* } -ComputerName $DestinationComputer -Credential domain2\user1 -SessionOption $option

This will:

  • Create a new PowerShell session option (line 2) with the New-PSSessionOption cmdlet, setting -ProxyAccessType to NoProxyServer.
  • Then, use $option as the value of the -SessionOption parameter on Invoke-Command.

This did the trick! I was finally able to run code on the remote host.

Thanks for reading.