Replicate permissions to new Login/User with dbatools

Continuing to share tips and tricks using dbatools, like the Refresh databases that belong to an Availability Group using dbatools post from a few days ago, today I will share a way to export the permissions of a login/user at the instance/database level and apply them to a new login/user.

The typical request

Client: “We have a new colleague, can you please grant them the exact same permissions I have?!”

A quick note/suggestion that may help minimize your work: if we are talking about a Windows login, suggest the use of AD groups. That helps you (you don't need to go through this process) and also them (they don't need to make this request).
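For example, once an AD group exists, you grant permissions to it a single time and, from then on, access is managed through group membership in AD. A minimal sketch with dbatools (the group and instance names below are hypothetical):

# Create a login for a (hypothetical) AD group; afterwards, access is managed by group membership in AD
New-DbaLogin -SqlInstance "<yourInstance>" -Login "DOMAIN\SQL-AppReaders"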

What are the steps involved in a process like this?

Generally speaking, the quickest way I know is:
1. Get the permissions of the source login/user (for each database, so we don't miss any permission)
2. Save them to a .sql file
3. Open the file and replace 'srcUser' with 'newUser'
4. Execute

A (boring) life before dbatools – how to do it

I did a quick poll on Twitter to find out how people handled this before dbatools existed. Maybe they still use that process, and that is OK 🙂 but maybe this post can help change the way they do it.

In my case the vote goes to '1 or more T-SQL scripts'. At the time it didn't feel that hard. Of course, if we compare it with today's options/tools…it was hard. However, it was what I knew and felt comfortable with.

The life with dbatools – a better life

In dbatools (v1.0.111) we have 20 Export-Dba* commands.

Two of them are related to login and user permissions:
Export-DbaLogin and Export-DbaUser (Rob Sewell (T) wrote about the latter in his blog post Export SQL User Permissions to T-SQL script using PowerShell and dbatools back in 2017).

You may already be familiar with these commands for exporting/backing up user permissions and, for example, running the script again after a database refresh.

Back to our specific request of replicating permissions, let’s see a couple of options.

User database level permissions

This will run through all databases. If you just need a couple of them, add the -Database parameter to the Export-DbaUser call (an example follows the script).

$sqlInstance = "<yourInstance>"
$existingUser = "<srcUser>"
$newUser = "<newUser>"
$permissionsFile = "D:\temp\ExistingUserPermissions.sql"
$permissionsFileNewUser = "D:\temp\NewUserPermissions.sql"

# Because -replace takes a regular expression, we need to escape the '\' when dealing with windows logins
$existingUserToSearch = $existingUser -replace '\\', '\\'

# Export the user from every database and its permissions at database-roles and object level
Export-DbaUser -SqlInstance $sqlInstance -User $existingUser -FilePath $permissionsFile

((Get-Content $permissionsFile -Raw) -replace ($existingUserToSearch, $newUser)) | Set-Content $permissionsFileNewUser

And now you can open the new user script, check and execute it on the instance.
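By the way, if you only needed a couple of databases, the export line above would simply gain the -Database parameter (the database names here are placeholders):

Export-DbaUser -SqlInstance $sqlInstance -User $existingUser -Database "AppDb1", "AppDb2" -FilePath $permissionsFile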

That’s cool! But, what if I also want the login and instance level permissions?

With a couple of small changes we can get it done.
NOTE: Here I’m assuming the login and user have the same name.

$sqlInstance = "<yourInstance>"
$existingLoginUser = "<srcLoginUser>"
$newLoginUser = "<newLoginUser>"
$permissionsFileLogin = "D:\temp\ExistingLoginPermissions.sql"
$permissionsFileUser = "D:\temp\ExistingUserPermissions.sql"
$permissionsFileNewLoginUser = "D:\temp\NewLoginUser.sql"

# Because -replace takes a regular expression, we need to escape the '\' when dealing with windows logins
$existingLoginUserToSearch = $existingLoginUser -replace '\\', '\\'

# Export the login and its server-roles, server-level and database-level permissions
Export-DbaLogin -SqlInstance $sqlInstance -Login $existingLoginUser -FilePath $permissionsFileLogin
 
# Export the user from every database and its permissions at database-roles and object level
Export-DbaUser -SqlInstance $sqlInstance -User $existingLoginUser -FilePath $permissionsFileUser

# Replaces:
# 1 - Replace the login/user name with the new one
# 2 - Remove the SID clause (to prevent duplicate SIDs)
((Get-Content $permissionsFileLogin, $permissionsFileUser -Raw) -replace ($existingLoginUserToSearch, $newLoginUser)) -replace '(, SID[^,]*)', ' ' | Set-Content $permissionsFileNewLoginUser

And then, you can open the new script NewLoginUser.sql, check and execute it on the instance.

Bonus option – File free

In the last few weeks I have had some questions about this method: whether it is possible to skip the file, and how.
The question is: "What if you don't want/need to save/keep the SQL file on the file system?"
The -PassThru parameter to the rescue. This way we do it all "in-memory".

Using the database-level permissions example again:

$sqlInstance = "<yourInstance>"
$existingUser = "<srcUser>"
$newUser = "<newUser>"

$ExportedUser = Export-DbaUser -SqlInstance $sqlInstance -User $existingUser -PassThru

$NewUserPermissions = $ExportedUser -replace $($existingUser -replace '\\', '\\'), $newUser

# Copy the permissions script to the clipboard, then paste it into your query editor.
$newUserPermissions | Set-Clipboard

Did you notice the -PassThru at the end of line 5? This makes the command return its output, which is stored in the $ExportedUser variable instead of a file.
Finally, line 10 puts the whole script on your clipboard after the replace is done.

“I like it…but can I avoid opening an IDE to run the query? Can dbatools also help here?”

Yes! However, there is a caveat here, along with two possible workarounds.

Invoke-DbaQuery is our command to run queries. However, it doesn't deal "correctly" with scripts containing multiple batches separated by GO.

Here are two different workarounds to run scripts that contain the GO batch separator:

Workaround #1 – Remove the ‘GO’s from the script

# This will replace the exact word GO with a space
$scriptWithoutGO = (Get-Content $permissionsFileNewLoginUser -Raw) -replace '\bGO\b', ' '

# Or if it's from the variable that's in memory
#$scriptWithoutGO = $NewUserPermissions -replace '\bGO\b', ' '

# Run the script using Invoke-DbaQuery
Invoke-DbaQuery -SqlInstance $sqlInstance -Query $scriptWithoutGO

Workaround #2 – With this approach you can keep the GO batch separator. It's similar to what happens when we run the script manually in SSMS/ADS.

# Workaround #2 - Run the changed script using the ExecuteNonQuery method
$sqlInst = Connect-DbaInstance $sqlInstance

# Get content from file
$script = Get-Content $permissionsFileNewLoginUser -Raw

# Or if it's from the variable that's in memory
#$script = $NewUserPermissions

$sqlInst.Databases["master"].ExecuteNonQuery($script)

Final note:

Are you using dbatools for this process and have you found that some permissions are not being scripted out? Maybe it's a new object permission that isn't covered yet. Please file an issue on our GitHub repository so we can help.

Wrap up

Have I said before that dbatools can help us in many ways?
Sometimes the process we want to accomplish needs more than one command, a.k.a. a script. Write it once, test it and re-use it! Leverage the power of automation.

Thanks for reading!

Export Deadlocks to file from system_health Extended Event using PowerShell

Just a quick post, as it may help any of you searching for this.

Scenario

A client wants to analyze the most recent deadlocks that happened on a specific instance. They asked us to send them the .xdl files.

How do we get the deadlocks?

Depending on the version of SQL Server that you are running, there are different ways to do it.
In this post I will share how you can do it from all files that belong to the system_health Extended Event session (not only the current file).
This works on SQL Server 2012 or higher.
For a better overview I recommend reading the What are SQL Server deadlocks and how to monitor them article from SQLShack.

T-SQL query to get the deadlocks entries

This query will show you when the deadlock happened (datetime) and the XML of the deadlock.

The only thing you need to know is the path where the system_health Extended Event session is saving its files. By default, it is the SQL Server log folder.
Example: \MSSQLXX.MSSQLSERVER2\MSSQL\Log
You can use, for example, the following query to get the ErrorLog file path:

SELECT SERVERPROPERTY('ErrorLogFileName')

If you remove the final \ERRORLOG part, you have the folder.

The T-SQL code can be like this:

DECLARE @LogPath NVARCHAR(255) = (SELECT CAST(SERVERPROPERTY('ErrorLogFileName') AS NVARCHAR(255)))
SET @LogPath = SUBSTRING(@LogPath, 1, charindex('\ERRORLOG', @LogPath) - 1)

SELECT 
	CONVERT(xml, event_data).query('/event/data/value/child::*') as deadlock,
	CONVERT(xml, event_data).value('(event[@name="xml_deadlock_report"]/@timestamp)[1]','datetime') AS Execution_Time
FROM sys.fn_xe_file_target_read_file(@LogPath + '\system_health*.xel', null, null, null)
WHERE object_name like 'xml_deadlock_report'


Using PowerShell to save the files to the filesystem

Now that we have the T-SQL to get the data, we just need to save it to a folder.
Each output file will have a name like deadlock_{Execution_Time}.xdl.

Before running the script, you need to change:
Line 1 – The SQL instance you want to query
Line 2 – The path where the files will be saved (this folder will be created if it doesn't exist)

NOTE: The script uses the SqlServer PowerShell module (line 15). However, if you prefer, you can use dbatools instead (uncomment line 18 and comment line 15).
NOTE2: The time the script takes to run is directly related to the number of system_health files and their sizes.

$instance = "myInstance"
$outputDirectory = "D:\Deadlocks"
$query = @"
DECLARE @LogPath NVARCHAR(255) = (SELECT CAST(SERVERPROPERTY('ErrorLogFileName') AS NVARCHAR(255)))
SET @LogPath = SUBSTRING(@LogPath, 1, charindex('\ERRORLOG', @LogPath) - 1)

SELECT 
	CONVERT(xml, event_data).query('/event/data/value/child::*') as deadlock,
	CONVERT(xml, event_data).value('(event[@name="xml_deadlock_report"]/@timestamp)[1]','datetime') AS Execution_Time
FROM sys.fn_xe_file_target_read_file(@LogPath + '\system_health*.xel', null, null, null)
WHERE object_name like 'xml_deadlock_report'
"@

# With sqlserver module
$results = Invoke-Sqlcmd -ServerInstance $instance -Query $query

# With dbatools module
#$results = Invoke-DbaQuery -SqlInstance $instance -Query $query

# Create a folder to save the files
New-Item -Path $outputDirectory -Type Directory -Force

# Save each XML as an .xdl file on the filesystem
foreach ($result in $results) {
    $result.deadlock | Out-File -FilePath "$outputDirectory\deadlock_$($result.Execution_Time.ToFileTime()).xdl"
}

The output in the folder will be a set of deadlock_*.xdl files, one per deadlock found.

Bonus step – if you want

You will probably share this via a shared folder or even by email, so it can be a good idea to compress the folder into a zip file.
You can easily do that with the Compress-Archive cmdlet (PowerShell v5+).

Compress-Archive -Path D:\Deadlocks -DestinationPath D:\Deadlocks.zip

Thanks for reading.

Refresh databases that belong to an Availability Group using dbatools

A few days ago I was surfing Twitter when the dbatools account asked how its PowerShell module changed the way people work.

Open the thread and check the answers given by the community, as there is really good stuff there!

My turn

I shared one of my recipes related to database refreshes. You know, when the client says, "please restore this backup or the most recent backup on our instance." But what if the databases belong to an Availability Group? It's not as simple as on a standalone installation.

Seconds later, John McCormack (T | B) asked if I had blogged about this.

The truth is that this blog post was already in the queue, so without further ado, I will share the script I normally use.

Not so fast – A couple of notes:

  • Read carefully what each command does as you normally do for every single script you use from the internet. You do that, right? 🙂
  • I have used it multiple times, but always on Availability Groups with 2 nodes.
  • It’s written to use automatic seeding.
  • You can, and you should, run the script command by command on your first try, as this makes it much easier to understand how it works.

Variations that you may need

  • With some changes you can make it work with Availability Groups that have more than 2 nodes. The failover and set-dbowner parts are the ones written with just 2 nodes in mind.
  • If you have huge databases and automatic seeding is not an option for you, you may want to leverage the backup/restore process. Take a look at the Add-DbaAgDatabase command docs (look at the -SharedPath parameter along with -SeedingMode Manual), as sketched below.
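For reference, a minimal sketch of that manual-seeding variant (the share path and names below are placeholders, not from my script):

# Backup/restore-based seeding instead of automatic seeding (sketch; check the Add-DbaAgDatabase docs for details)
Add-DbaAgDatabase -SqlInstance '<primaryReplica>' -AvailabilityGroup '<AGName>' -Database 'db1', 'db2' -SeedingMode Manual -SharedPath '\\<fileShare>\<backupFolder>'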

The script

<#
Author: Cláudio Silva
Blog: https://claudioessilva.eu
Twitter: https://twitter.com/claudioessilva
Date: 2020/05/19

.SYNOPSIS
    The goal of this script is to provide an automatic way to refresh one or more databases that belong to an Availability Group.

.Description
    The script is doing the following major steps
         1 - Get the replicas (Primary and Secondary)
         2 - Export user permissions of the current databases (before dropping them)    
         3 - Remove databases from the Availability Group (on primary node)
         4 - Restore the backups with overwrite
            NOTE: Here I left two options:
                - one (the commented) is when the databases on the destination instance have the same name as the origin
                - The 2nd one that will be running assumes that you will need to give a different name to the database on the destination instance.
                  (Example: Database called 'carrots' is 'carrots_PROD' on destination instance)
         5 - Run the script exported on step 2
         6 - Repair orphaned users
         7 - Remove orphaned users
         8 - Remove databases from the secondary node
         9 - Add the restored databases to the availability group using Automatic Seeding
        10 - Set the dbowner of the databases
        11 - Test failover
        12 - Set the dbowner of the databases on this node
        13 - Test failback
#>

$listenerName = "listener"
$availabilityGroupName = "AGName"
$databases = "db1", "db2" # Add more if you need. Also if using the 2nd method of restore, add there too.
$dboLogin = "dbOwnerLogin"
$exportUserPath = "<Path>\Exported_Users_$availabilityGroupName.sql"
$DestinationDataDirectory = "<pathToYourDataDirectory>"
$DestinationLogDirectory = "<pathToYourLogDirectory>"
$backupDirectory = "<pathToYourBackups>"

# Get AG replicas
$agReplicas = Get-DbaAgReplica -SqlInstance $listenerName -AvailabilityGroup $availabilityGroupName

# Get current primary node
$primaryNode = ($agReplicas | Where-Object role -eq 'Primary').Name

# Get secondary node
$secondaryNode = ($agReplicas | Where-Object role -eq 'Secondary').Name

# Export user permissions
Export-DbaUser -SqlInstance $primaryNode -Database $databases -FilePath $exportUserPath

# Remove the databases from the AG
Remove-DbaAgDatabase -SqlInstance $primaryNode -AvailabilityGroup $availabilityGroupName -Database $databases #-Confirm:$false

# Restore databases with overwrite
# 1 - You can get all backups from a folder, pipe them to Restore-DbaDatabase and it will do the magic.
# NOTE: This is useful when the destination databases have the same names as the source.
# Get-ChildItem -Path $backupDirectory -Recurse -Filter "*.bak" | Restore-DbaDatabase -SqlInstance $primaryNode -WithReplace -DestinationDataDirectory $DestinationDataDirectory -DestinationLogDirectory $DestinationLogDirectory

# 2 - If you need to restore a database with a different name, you may prefer to specify the -DatabaseName for each specific backup
# NOTE: This lets you control (or keep) the destination database name explicitly.
Restore-DbaDatabase -SqlInstance $primaryNode -DatabaseName 'db1' -Path "$backupDirectory\db1.bak" -WithReplace -DestinationDataDirectory $DestinationDataDirectory -DestinationLogDirectory $DestinationLogDirectory
Restore-DbaDatabase -SqlInstance $primaryNode -DatabaseName 'db2' -Path "$backupDirectory\db2.bak" -WithReplace -DestinationDataDirectory $DestinationDataDirectory -DestinationLogDirectory $DestinationLogDirectory

# Put the permissions back
# Note: We need to replace the 'GO' batch separator because Invoke-DbaQuery splits on it and sends each batch separately. This means a database context change (USE) only applies to its own batch, and the next batch would probably run on master
Invoke-DbaQuery -SqlInstance $primaryNode -Query $((Get-Content $exportUserPath) -replace '\bGO\b', ' ')

# Repair Orphan Users
Repair-DbaDbOrphanUser -SqlInstance $primaryNode -Database $databases

# Remove Orphan Users
Remove-DbaDbOrphanUser -SqlInstance $primaryNode -Database $databases

# Remove databases from the secondary instance
Remove-DbaDatabase -SqlInstance $secondaryNode -Database $databases #-Confirm:$false

# Add databases to the AG using 'Automatic' option for -SeedingMode parameter
Add-DbaAgDatabase -SqlInstance $primaryNode -AvailabilityGroup $availabilityGroupName -Database $databases -SeedingMode Automatic #-Confirm:$false

# Change database owner on the primary
Set-DbaDbOwner -SqlInstance $primaryNode -Database $databases -TargetLogin $dboLogin

# Fail over the AG so we can set the database owner on the other replica
Invoke-DbaAgFailover -SqlInstance $secondaryNode -AvailabilityGroup $availabilityGroupName

# Change database owner
Set-DbaDbOwner -SqlInstance $secondaryNode -Database $databases -TargetLogin $dboLogin

# Failover back if wanted/needed
Invoke-DbaAgFailover -SqlInstance $primaryNode -AvailabilityGroup $availabilityGroupName

What do you need to change

1 – Fill in the variable values at the top of the script (from $listenerName down to $backupDirectory).
2 – Select your method of restore/seeding (the commented Get-ChildItem pipeline or the explicit Restore-DbaDatabase calls).

Same task but not for databases within Availability Groups

With this script as a starting point, you can adapt it to do refreshes of databases that live on single (standalone) instances.
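A minimal sketch of how that standalone flavor could look, reusing the same commands (instance, database and paths are placeholders):

$sqlInstance = '<yourInstance>'
$databases = 'db1'
$exportUserPath = '<Path>\Exported_Users.sql'

# Keep the current permissions, restore over the database, then put the permissions back and fix orphans
Export-DbaUser -SqlInstance $sqlInstance -Database $databases -FilePath $exportUserPath
Restore-DbaDatabase -SqlInstance $sqlInstance -DatabaseName 'db1' -Path '<pathToYourBackups>\db1.bak' -WithReplace
Invoke-DbaQuery -SqlInstance $sqlInstance -Query $((Get-Content $exportUserPath) -replace '\bGO\b', ' ')
Repair-DbaDbOrphanUser -SqlInstance $sqlInstance -Database $databases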

Wrap up

I hope you find it useful.
As a time metric, I can tell you that (excluding restores/seeding) all the other actions used to take me about 15 minutes to run manually, with a lot of possible things to go wrong. Now it takes me 5 minutes or less.

If you are facing difficulties, drop a comment.

Thanks for reading!

T-SQL Copy & Paste Pattern – Increasing a performance problem

Disclaimer: The title is my assumption, because I have seen it happen this way in the past.

This blog post aims to remind you of something: what is obvious to you might not be obvious to others.

Scenario:

A client has a process which consists of a stored procedure that wraps a bunch of other stored procedures.
The process runs for about 10 hours.

Taking a look…what is running right now?

I was curious about the process; I had seen it running before but never explored the code. After a couple of days of watching it run for so long, I decided to check which random query I would catch executing.

I had some luck and caught one with a pattern that I knew could clearly be rewritten to make it faster.

sp_WhoIsActive to the rescue

If you don’t know what sp_WhoIsActive (Adam Machanic‘s creation) stored procedure is, let me copy the short definition from the whoisactive.com website:

sp_whoisactive is a comprehensive activity monitoring stored procedure that works for all versions of SQL Server from 2005 through 2017.

You can download it from the download page or, if you use dbatools, you can use the Install-DbaWhoIsActive command, which will download and install it for you. You can read more about it in my previous blog post New Version Of sp_WhoIsActive (V11.20) Is Available – Deployed On 123 Instances In Less Than 1 Minute.
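If you go the dbatools route, it is a one-liner (the instance name is a placeholder):

# Download the latest sp_WhoIsActive and install it on the target instance
Install-DbaWhoIsActive -SqlInstance '<yourInstance>' -Database master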

Using sp_WhoIsActive to get the current running query

EXEC sp_WhoIsActive

The query

The query that was being run had the following structure:

SELECT column1
  FROM table1
 WHERE EXISTS (
               SELECT column2 FROM table2 WHERE table2.column1 = table1.column2
              )
   AND (
              EXISTS (SELECT column1 FROM table3 WHERE table3.column1 = table1.column1 AND table3.column2 = @param1 and table3.column3 = 0)
           OR EXISTS (SELECT column1 FROM table3 WHERE table3.column1 = table1.column1 AND table3.column2 = @param2 and table3.column3 = 0)
           OR EXISTS (SELECT column1 FROM table3 WHERE table3.column1 = table1.column1 AND table3.column2 = @param3 and table3.column3 = 0)
           OR EXISTS (SELECT column1 FROM table3 WHERE table3.column1 = table1.column1 AND table3.column2 = @param4 and table3.column3 = 0)
           OR EXISTS (SELECT column1 FROM table3 WHERE table3.column1 = table1.column1 AND table3.column2 = @param5 and table3.column3 = 0)
           OR EXISTS (SELECT column1 FROM table3 WHERE table3.column1 = table1.column1 AND table3.column2 IN (@param6, @param7) and table3.column3 = 0)
       )

However, this default call does not bring back the execution plan; for that, you need to run it with the @get_plans parameter:

EXEC sp_WhoIsActive @get_plans = 1

I copied the XML that contains the execution plan into SentryOne Plan Explorer and this is what I saw:

Within the red circle we can see a concatenation operator (the first one, top left) which gathers the result of each EXISTS subquery (8 in total).

If we want to know which outer batch or stored procedure call was issued by the application or user we can use the @get_outer_command parameter

EXEC sp_WhoIsActive @get_plans = 1, @get_outer_command = 1

However, if we have nested calls, this will not show the current batch or stored procedure where the code of the current query belongs.

To get that we need to use the @get_full_inner_text parameter:

EXEC sp_WhoIsActive @get_plans = 1, @get_outer_command = 1, @get_full_inner_text = 1

This way, the sql_text column will contain the whole batch or stored procedure where the query belongs.

Back to the query – The pattern

Can you see the pattern? A lot of OR EXISTS() conditions. That is odd indeed; it wouldn't be so odd if each OR EXISTS() accessed a different table…oh, wait…they don't 🙂 and that is where the problem is.

Easy to improve

We can easily re-write the query without changing the logic or affecting the output data.

I have re-written the code in the following way:

SELECT column1
  FROM table1
 WHERE EXISTS (
               SELECT column2 FROM table2 WHERE table2.column1 = table1.column2
              )
   AND EXISTS (
               SELECT column1 
                 FROM table3 
                WHERE table3.column1 = table1.column1 
                  AND (
                           table3.column2 = @param1 
                        OR table3.column2 = @param2
                        OR table3.column2 = @param3
                        OR table3.column2 = @param4
                        OR table3.column2 = @param5
                        OR table3.column2 IN (@param6, @param7)
                      )
                  AND table3.column3 = 0
           )

This way we hit table3 just once instead of once per OR EXISTS().

The actual plan seems to have a much better shape:

A different approach would be a single IN () condition with all the variables comma-separated. However, I preferred this way as it makes it easier to show the development team the differences between before and after.

In fact, when we use the IN operator, the optimizer expands it into the equivalent OR conditions anyway.

Result

With this change, I improved the query by 99%.
The query went down from ~4340 seconds to less than 30 seconds.

NOTE: The table had ~46M records.

Also, the number of logical reads for the table dive-bombed!

Before (optimizer used a Worktable):

Table ‘Worktable’. Scan count 924230, logical reads 1045662012

Table ‘table3’. Scan count 6, logical reads 4967238

SQL Server Execution Times:
CPU time = 3824887 ms, elapsed time = 4344872 ms.

After:

Table ‘table3’. Scan count 9, logical reads 829734

SQL Server Execution Times:
CPU time = 86345 ms, elapsed time = 26074 ms.

This means that, for the whole process, we saved more than an hour!

How’d this happen?

As said in my title and initial disclaimer, this smells like a copy & paste pattern. Maybe something similar to:

Client: “Hey can we have another validation for a different value?”
Dev: “Sure, it’s pretty easy to do so”
Also Dev: copies & pastes an existing OR EXISTS(), changes the parameter, commits to source control, pushes it through a QA test (with little data) and it's good to go to PROD.
Client: "Thanks, it's working. Just a little bit slower, but today things are slower in general."

Wrap up

When you find these kinds of patterns, invest a couple of minutes to test them with better logic.
You may end up saving a "couple" of CPU cycles and a lot of time.

Thanks for reading!

Change Outlook sender mailbox with PowerShell – A workaround

The idea of this blog post, like many others I write, is to document a workaround for a problem I hit. Hopefully it also helps someone who is looking for a solution to the same problem and stumbles accidentally (or not) on my blog.

Scenario

I was helping a colleague automate the creation of emails in Microsoft Outlook using PowerShell. The following are the configurable options (these will be the parameters of the script):

  • list of TO email addresses
  • list of CC email addresses
  • Signed and Encrypted (they are being forced)
  • Subject
  • Keep the default signature but add the message before
  • Sender can be different from the default (example: use the team's email instead of the personal one)
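Put together, the script's parameter block could look something like this (just a sketch; the names and types are my assumptions, not necessarily the ones in the final function linked at the end of the post):

param (
    [string] $EmailTo,        # semicolon-separated list of To addresses
    [string] $EmailCC,        # semicolon-separated list of CC addresses (optional)
    [string] $EmailSubject,
    [string] $EmailHtmlBody,
    [string] $EmailFrom       # sender mailbox, e.g. the team's shared address
)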

We can use the COM object to create Outlook objects.

$Outlook = New-Object -ComObject Outlook.Application

We need to get the MAPI namespace and log on to the default profile (or existing session):

#Get the MAPI namespace.
$namespace = $Outlook.GetNameSpace("MAPI")
        
# Log on by using the default profile or existing session (no dialog box).
$namespace.Logon($null, $null, $false, $true);

Note: To do this you need to be running your PowerShell session with the same privilege level as Outlook. If you are running Outlook as a non-administrator (and you should!), you need to open a non-elevated PowerShell session as well.

Then, we need to create a new Mail item

# Create new email
$Mail = $Outlook.CreateItem(0)
$Mail.GetInspector.Activate()

Note: The second line (`Activate()`) is used because the mail will not be sent automatically; sending is a manual step after the colleague checks a couple of things. Activate() brings the focus to the window, so we can see the magic happening!

Setting the To and CC (the latter optional) addresses is as easy as setting the properties' values.

$Mail.To = $EmailTo 
if ($EmailCC) {
   $Mail.Cc = $EmailCC
}

Adding the Subject

$Mail.Subject = $EmailSubject 

To sign and/or encrypt the email we need to set a MAPI property like this:

#Sign and Encrypt email
$Mail.PropertyAccessor.SetProperty("http://schemas.microsoft.com/mapi/proptag/0x6E010003", 0x03)

The possible values are:

    0x00 – Message has no security
    0x01 – Message is encrypted
    0x02 – Message is signed
    0x03 – Message is signed and encrypted

Keeping the signature

After opening the message, we can get the current HTML body, which will include the default signature (with images, links, etc.):

# Get default signature
$signature = $Mail.HtmlBody

Then we can just add our $EmailHtmlBody parameter and append the $signature

$Mail.HtmlBody = $EmailHtmlBody + $signature

The problem

Finally, from our list of properties to be changed we have the sender mailbox.

Using the SendUsingAccount property I tried to set the email address that I wanted. But…no luck!

Using Get-Member to know more

I took a closer look and, using Get-Member, I was able to confirm which data type this property accepts.

As we can see it expects an object of type Account.

Now will work, right?!

To get the account as an account type I have used the following code where the $EmailFrom variable contains the email address I want to use:

# Get the account object from the email address
$account = $outlook.Session.Accounts.Item($EmailFrom)

NOTE: You need to have this email address configured in your Outlook, otherwise this will not work.

With our $account variable set let’s assign it to our SendUsingAccount mail object property.

$Mail.SendUsingAccount = $account

But…this also didn’t work!
All the other properties were pretty easy to change, so I didn't expect that.

Time for some google-fu

After googling for a bit I found a generic (works for many objects) alternative way to set a property on an object:

function Invoke-SetProperty {
    # Auxiliary function to set properties. The SendUsingAccount property wouldn't be set any other way
    param(
        [__ComObject] $Object,
        [String] $Property,
        $Value 
    )
    [Void] $Object.GetType().InvokeMember($Property,"SetProperty",$NULL,$Object,$Value)
}

I added this code as an internal function, so inside our script we can call it like this:

#Change Sender mailbox
Invoke-SetProperty -Object $mail -Property "SendUsingAccount" -Value $account

And finally, it worked!

Conclusion

In this case we are using reflection (by calling InvokeMember()). Normally we use this, for example, when we want to change private properties of an object. I'm not sure why the "normal" way didn't work in this case but, at least, this can be used as a workaround in the future if other similar cases appear.

The whole function code

You can get the whole function code from here

A final curiosity

Normally, when talking about automation, one of the things we measure is how much time we save compared with all the manual steps. Just to kill the curiosity: the best-case scenario for the manual process (which includes Excel files) takes 1 hour; after the automation we went down to 30 seconds. However, this is a story for another day and another blog post. Stay tuned!

Thanks for reading!

Temporarily enable TLS 1.0/1.1? – Be sure you check these keys

There are some actions that we know will have to be repeated from time to time, but the surprise comes when it's time to do so and the process that used to work doesn't work anymore.

This is a short post to document a slight change that I had to do to achieve the same final result as the first time.

Long story short

A client needed to run a setup to install an application. This setup uses a utility called minisql.exe to test the database connection. The problem is that this utility only works with TLS 1.0/1.1 and, on our systems, we only have TLS 1.2 enabled.

Temporary workaround

We got into an impasse where we would not enable TLS 1.0 on the database server (which hosts multiple shared databases). Hence, as a workaround, we suggested installing a SQL Server Express edition instance on the application server just to get past this step of the application setup. The funny (or not) part is that, after installation, the application works just fine with any other driver that supports TLS 1.2.

NOTE: It is strange that the vendor decided on this approach. I mean, having two different ways to test access to the database, even more so when one of them is very limited. Maybe legacy… ¯\_(ツ)_/¯

Easy! Just repeat the steps that have worked before

With all the approvals to turn these settings on as an exception for a short period of time, I just had to run the reg file that turns on TLS 1.0 and 1.1.

Registry keys to enable TLS 1.0/1.1

Enable TLS 1.0
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server] "Enabled"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server] "DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client] "Enabled"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client] "DisabledByDefault"=dword:00000000

Enable TLS 1.1
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server] "Enabled"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server] "DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client] "Enabled"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client] "DisabledByDefault"=dword:00000000
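If you prefer PowerShell over a .reg file for the enabling part too, a sketch that mirrors the disable snippets shown further down (same keys, inverted values; TLS 1.0 shown, repeat for TLS 1.1) could be:

# Enable TLS 1.0 for both Server and Client roles (run elevated); a sketch mirroring the reg file above
foreach ($role in 'Server', 'Client') {
    $key = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\$role"
    New-Item $key -Force | Out-Null
    New-ItemProperty -Path $key -Name 'Enabled' -Value 1 -PropertyType 'DWord' -Force | Out-Null
    New-ItemProperty -Path $key -Name 'DisabledByDefault' -Value 0 -PropertyType 'DWord' -Force | Out-Null
}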

You can find more about this and other keys on Microsoft Documentation.

After this change, we needed to restart the server. At this point I was confident that when it came back I could test the connections and everything would be as expected so we could move on. However…that was not the case!

This continued to yield the following error message:

Connection Problem: [DBNETLIB][ConnectionOpen (SECDoClientHandshake()).]SSL Security error

Analysis

Because it didn't work, I first scratched my head trying to understand what could be different… I hadn't touched the reg file since the first time, when it worked.

Digging on the interwebs…again

My google-fu skills led me to the 99% of posts about this error message that just talk about the 4 registry keys (for each version) that need to be changed. I gave up when I was already on the third/fourth page of results. Yes, I was starting to think I was crazy 🙂
I decided to ask for some help from a colleague on the Windows team. I needed a different pair of eyes to be sure I wasn't missing something.

I was not seeing the obvious

After some time, I decided to compare the two servers to be sure that, for these registry keys, everything was equal. Surprise…surprise…it was not!
On the most recent server, there was a Hashes folder (at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Hashes) with another two folders (MD5/SHA), which contain entries named Enabled with a value of 0.

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Hashes]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Hashes\MD5]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Hashes\SHA]
"Enabled"=dword:00000000

Fixing it

I decided to back up the registry keys and then delete the Hashes folder. After the restart…yes, you guessed it right, it worked!

Bottom line

The keys I had exported from the registry didn't have the Hashes entries, and when imported they also didn't remove the existing ones.

NOTE: Don’t forget to revert the changes after you finish your tests/workaround.

This time, using PowerShell 🙂
Disable TLS 1.0

New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server' -Force | Out-Null
	
New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server' -name 'Enabled' -value '0' -PropertyType 'DWord' -Force | Out-Null
	
New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server' -name 'DisabledByDefault' -value 1 -PropertyType 'DWord' -Force | Out-Null
	
New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client' -Force | Out-Null
	
New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client' -name 'Enabled' -value '0' -PropertyType 'DWord' -Force | Out-Null
	
New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client' -name 'DisabledByDefault' -value 1 -PropertyType 'DWord' -Force | Out-Null
Write-Host 'TLS 1.0 has been disabled.'

Disable TLS 1.1

New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server' -Force | Out-Null
	
New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server' -name 'Enabled' -value '0' -PropertyType 'DWord' -Force | Out-Null
	
New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server' -name 'DisabledByDefault' -value 1 -PropertyType 'DWord' -Force | Out-Null
	
New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client' -Force | Out-Null
	
New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client' -name 'Enabled' -value '0' -PropertyType 'DWord' -Force | Out-Null
	
New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client' -name 'DisabledByDefault' -value 1 -PropertyType 'DWord' -Force | Out-Null
Write-Host 'TLS 1.1 has been disabled.'

Add Hashes folder back

New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Hashes' -Force | Out-Null

New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Hashes\MD5' -Force | Out-Null

New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Hashes\MD5' -name 'Enabled' -value '0' -PropertyType 'DWord' -Force | Out-Null

New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Hashes\SHA' -Force | Out-Null

New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Hashes\SHA' -name 'Enabled' -value '0' -PropertyType 'DWord' -Force | Out-Null

Thanks for reading!

More PowerShell Remoting coverage in dbatools

Starting with dbatools version 1.0.31, we introduced better coverage for commands that make use of PowerShell Remoting.

Which commands?

Commands like Get-DbaComputerCertificate, Get-DbaPrivilege, Get-DbaClientAlias, just to mention a few, make use of the internal function Invoke-Command2, which uses New-PSSession when run against a remote computer.

“Why have you changed it?”

Let me give you a little bit of background…
At my company, I found that some dbatools commands were not working. They were returning errors related to WinRM configuration, as seen here when attempting to create a new session using New-PSSession:

Or even when trying to use Invoke-Command directly:

I wondered why and asked the Windows team if they could provide any insight. A colleague explained to me that I needed to change three things to make my remoting commands work on our network:

  1. Use the FQDN on -ComputerName and/or -SqlInstance parameters
  2. Use -UseSSL parameter on the New-PSSession command
  3. Use -IncludePortInSPN parameter for the New-PsSessionOption command
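To make it concrete, this is roughly what that manual session setup looks like outside of dbatools (the computer name is a placeholder):

# A sketch of the manual remoting setup: FQDN + SSL + port in the SPN
$sessionOption = New-PSSessionOption -IncludePortInSPN
$session = New-PSSession -ComputerName 'server01.mydomain.com' -UseSSL -SessionOption $sessionOption
Invoke-Command -Session $session -ScriptBlock { $env:COMPUTERNAME }
Remove-PSSession $session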

And voilà, with these settings in place it worked like a charm!

NOTE: Currently, in my environment, following points 1 and 2 is enough. However, after reading the documentation about -IncludePortInSPN I understand why it may be needed.


The option is designed for enterprises where multiple services that support Kerberos authentication are running under different user accounts. For example, an IIS application that allows for Kerberos authentication can require the default SPN to be registered to a user account that differs from the computer account. In such cases, PowerShell remoting cannot use Kerberos to authenticate because it requires an SPN that is registered to the computer account. To resolve this problem, administrators can create different SPNs, such as by using Setspn.exe, that are registered to different user accounts and can distinguish between them by including the port number in the SPN.

Time to make this available to everyone!

Let's add these as configurable settings on the dbatools module itself!

“Nice! Can you share what you have changed in the module?”

Sure I can!
The improvements that we have added to dbatools cover points 2 and 3 (point 1 just depends on the way you write the name, so it is already covered by default).

If you are not aware of it, dbatools has some module-wide configurations itself, from sqlconnection through logging, caching, remoting and others.
Thanks to Fred Weinmann (b | t) (our infrastructure code wizard) and his PSFramework module, which is used to manage configurations, logging and more, we can use Set-DbatoolsConfig to change these values.

If you are asking which values, I encourage you to run Get-DbatoolsConfig to get an overview of them. You will find some neat stuff! To know more about these configurations, read my post on the dbatools blog named dbatools advanced configuration.
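For example, to see just the remoting-related settings, a plain filter is enough (nothing dbatools-specific assumed here):

Get-DbatoolsConfig | Where-Object FullName -like 'psremoting*' | Select-Object FullName, Value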

Back to our scenario…
We have added two new configs, PSRemoting.PsSessionOption.IncludePortInSPN and PSRemoting.PsSession.UseSSL, in the remoting.ps1 file available in the configurations folder. These configurations are loaded when you import the module. If you have any setting registered (using Register-DbatoolsConfig), nothing will be overwritten. In that case, your currently registered values are preserved and will be used in your session.

Within Invoke-Command2 we changed the code to use these configured values.

Now we can import the module and test the changes. But first we need to set these new configurations to the desired values. In my scenario, I set both to $true:

Set-DbaToolsConfig -Name 'psremoting.pssession.usessl' -Value $true
Set-DbaToolsConfig -Name 'psremoting.pssessionoption.includeportinspn' -Value $true

Then we can use a dbatools command that was previously failing and check that it now works!

Get-DbaComputerCertificate -ComputerName "hostname.domain"

This way I can set the configuration to whatever value I want, and the next time I execute the command it will make use of it!
Note: remember these settings are scoped per user. This means that if you have a service account running dbatools commands, you will want to add the Set-DbatoolsConfig calls at the beginning of your scripts to make sure they use the values you need.
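If, on the other hand, you want the values to stick for your own account across sessions, they can be registered (a sketch; check the Register-DbatoolsConfig help for the exact options):

# Persist the setting for the current user so future sessions pick it up automatically
Set-DbatoolsConfig -Name 'psremoting.pssession.usessl' -Value $true -PassThru | Register-DbatoolsConfig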

The before and the after

Before the change, I got these errors:

  • Just setting -UseSSL to $true
    As said before, in my case it works. (No picture here :-))

  • When specifying $false for both options and with or without FQDN

WARNING: [HH:mm:ss][Get-DbaComputerCertificate] Issue connecting to computer | Connecting to remote server “ComputerName” failed with the following error message : The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: “winrm quickconfig”. For more information, see the about_Remote_Troubleshooting Help topic.

  • Just setting -IncludePortInSPN to $true and with or without FQDN

WARNING: [HH:mm:ss][Get-DbaComputerCertificate] Issue connecting to computer | Connecting to remote server “ComputerName” failed with the following error message : WinRM cannot process the request. The following error occurred while using Kerberos authentication: Cannot find the computer “ComputerName”. Verify that the computer exists on the network and that the name provided is spelled correctly. For more information, see the about_Remote_Troubleshooting Help topic.

  • Using both but with no FQDN

WARNING: [HH:mm:ss][Get-DbaComputerCertificate] Issue connecting to computer | Connecting to remote server “ComputerName” failed with the following error message : The server certificate on the destination computer (“ComputerName”:5986) has the following errors:
The SSL certificate contains a common name (CN) that does not match the hostname. For more information, see the about_Remote_Troubleshooting Help topic.

After I set these two values using Set-DbatoolsConfig and used the FQDN, it worked perfectly!

Wrap up

You should talk with the team that manages your network/SPN configuration to learn which parameters/values you need to use PowerShell Remoting successfully. Then you can go to dbatools, set the values you need, and use the commands natively!

Note: if you find that some parameters you need are not configurable yet, let us know by opening an issue on the GitHub repository.

Thanks for reading!