Ordering and Choices in PowerShell

Ah! Hash tables!

Ever since I saw a presentation by Anthony Howell ( blog ), aka PoshWolf, about them, read the blog post by Kevin Marquette ( blog ) about them, and was enlightened by multiple PowerShell community members (e.g. Chris Dent, Mathias “IISResetMe” Jessen, etc.) about them, I’ve loved hash tables.

However, there is one drawback I have with hash tables: by default, they don’t preserve the order in which their keys are inserted. I’m OK with this since I come from a DB background, and I’m used to order not being guaranteed unless I specify an ORDER BY. Not everyone is as lenient as we are, though, and the vast majority of the louder masses expect this ordering.

Now, there are ways around this unordered aspect of hash tables. The main way is the [ordered] type accelerator; you can add this to the start of your hash table to preserve insertion order.
I use this when I want hash tables but also want guaranteed ordering.
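A quick sketch of the difference (the plain hash table’s key order may vary between sessions; the ordered one is guaranteed):

```powershell
# A plain hash table: key enumeration order is not guaranteed
$plain = @{ First = 1; Second = 2; Third = 3 }

# An ordered dictionary: keys come back in insertion order
$ordered = [ordered]@{ First = 1; Second = 2; Third = 3 }

# $ordered.Keys will always enumerate First, Second, Third;
# $plain.Keys may come back in any order
$ordered.Keys -join ', '
```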

Well, that was a short blog post. Thanks!

Sacrificing Idiomatic for Changeable

I know about using [ordered], and now you’ve learned about using [ordered], but what about people using my scripts who aren’t part of either knowledge group?
Oh, I could add a comment, and I could train the people, but what happens if I get hit by a bus (sorry, win the lottery) in the morning? There will eventually be a time when I’m not there, and people will have to use my scripts.

So, [ordered] seems a bit too unknown at the moment for the people who regularly use my scripts.

Which is a pity, because the current version uses [ordered] hash tables pretty extensively.
Is there something else that we can use instead?

Yes, Arrays

Ah, the humble array. They’re not that bad once you get to know them.
Once I started thinking about these little data types, I realised that they could be a potential answer to this problem.

$pscustomobjectArray = @(
    [PSCustomObject] @{ ID = 0; Name = 'Test'; Type = 'Dev' }
    [PSCustomObject] @{ ID = 1; Name = 'Test2'; Type = 'Dev' }
    [PSCustomObject] @{ ID = 2; Name = 'PreProd_1'; Type = 'PreProd' }
    [PSCustomObject] @{ ID = 3; Name = 'PreProd_2'; Type = 'PreProd' }
    [PSCustomObject] @{ ID = 4; Name = 'Prod_1'; Type = 'Prod' }
    [PSCustomObject] @{ ID = 5; Name = 'Prod_2'; Type = 'Prod' }
    [PSCustomObject] @{ ID = 6; Name = 'Prod_3'; Type = 'Prod' }
)

for ($i = 0; $i -lt $pscustomobjectArray.Count; $i++) {
    "The $i-th object is $($pscustomobjectArray[$i])"
}
The index matches the ID.

It would just need a bit of a rewrite…

Script

Initialize-Choice_v2.ps1

Old = Left. Rewrite = Right.
The default value is there because there’s always one person who just runs these things as is…

Thankfully, arrays keep their order, so I can rely on them in the script.

In Use

What does this mean? This means that you can use the standard input that people are accustomed to in PowerShell, such as PSCustomObject, CSV, JSON, etc.

As long as the objects have a property we can select, we can then filter them down on that specific property.

Example

For each of these examples, we’re passing in a list of choices. Then we’re filtering down the list to the option we chose.

We can do this multiple times, if we want.

Here’s our list of choices to… aha… choose from.

1…2…3, a…b…c…

Example PSCustomObject


$ChoicesCustomObject = [PSCustomObject]@{
    Name = 'Test01'
    Value = '1a'
}, [PSCustomObject]@{
    Name = 'Test01'
    Value = '1b'
}, [PSCustomObject]@{
    Name = 'Test02'
    Value = '2a'
}, [PSCustomObject]@{
    Name = 'Test02'
    Value = '2b'
}, [PSCustomObject]@{
    Name = 'Test03'
    Value = '3a'
}, [PSCustomObject]@{
    Name = 'Test03'
    Value = '3b'
}
<# Get a unique list of available choices #>
$AvailableChoices = $ChoicesCustomObject | Select-Object -ExpandProperty Name -Unique
<# Get the chosen index from the user #>
$ChosenIndex = Initialize-Choice_v2 -Title "Test Get-Choice" -Caption "Select an option" -Choices $AvailableChoices 
<# Filter down to the chosen option #>
$ChosenOption = $ChoicesCustomObject | Where-Object Name -eq $AvailableChoices[$ChosenIndex] 
$ChosenOption
1a. 1b.

Example Csv

$ChoicesCSV = @'
"Name","Value"
"Test01","1a"
"Test01","1b"
"Test02","2a"
"Test02","2b"
"Test03","3a"
"Test03","3b"
'@ | ConvertFrom-Csv
<# Get a unique list of available choices #>
$AvailableChoices = $ChoicesCSV | Select-Object -ExpandProperty Name -Unique
<# Get the chosen index from the user #>
$ChosenIndex = Initialize-Choice_v2 -Title "Test Get-Choice" -Caption "Select an option" -Choices $AvailableChoices 
<# Filter down to the chosen option #>
$ChosenOption = $ChoicesCSV | Where-Object Name -eq $AvailableChoices[$ChosenIndex] 
$ChosenOption
2a. 2b.

Example JSON

$ChoicesJSON = @"
[
  {
    "Name": "Test01",
    "Value": "1a"
  },
  {
    "Name": "Test01",
    "Value": "1b"
  },
  {
    "Name": "Test02",
    "Value": "2a"
  },
  {
    "Name": "Test02",
    "Value": "2b"
  },
  {
    "Name": "Test03",
    "Value": "3a"
  },
  {
    "Name": "Test03",
    "Value": "3b"
  }
]
"@ | ConvertFrom-Json
<# Get a unique list of available choices #>
$AvailableChoices = $ChoicesJSON | Select-Object -ExpandProperty Name -Unique
<# Get the chosen index from the user #>
$ChosenIndex = Initialize-Choice_v2 -Title "Test Get-Choice" -Caption "Select an option" -Choices $AvailableChoices 
<# Filter down to the chosen option #>
$ChosenOption = $ChoicesJSON | Where-Object Name -eq $AvailableChoices[$ChosenIndex] 
$ChosenOption
3a. 3b.

Example Multiple

$ChoicesCustomObject = [PSCustomObject]@{
    Name = 'Test01'
    Value = '1a'
}, [PSCustomObject]@{
    Name = 'Test01'
    Value = '1b'
}, [PSCustomObject]@{
    Name = 'Test02'
    Value = '2a'
}, [PSCustomObject]@{
    Name = 'Test02'
    Value = '2b'
}, [PSCustomObject]@{
    Name = 'Test03'
    Value = '3a'
}, [PSCustomObject]@{
    Name = 'Test03'
    Value = '3b'
}
<# Get a unique list of available choices #>
$AvailableChoices = $ChoicesCustomObject | Select-Object -ExpandProperty Name -Unique
<# Get the chosen index from the user #>
$ChosenIndex = Initialize-Choice_v2 -Title "Test Get-Choice" -Caption "Select an option" -Choices $AvailableChoices 
<# Filter down to the chosen option #>
$ChosenOption = $ChoicesCustomObject | Where-Object Name -eq $AvailableChoices[$ChosenIndex] 
<# Filter down to the value #>
$ChosenValue = Initialize-Choice_v2 -Title "Choose Value" -Caption "Select a value for $($ChosenOption.Name)" -Choices $ChosenOption.Value
<# Return the chosen option #>
$ChosenOption[$ChosenValue]
1b only.

Caveat

Yes, I am aware that the values are off by one; for example, the first value shows 1 on the confirmation screen but returns a 0. I bowed to requests from people who would be using the code but were not comfortable with indexes starting at 0.

Feel free to change that in your script.

Overall

This is a case of “the exception proving the rule”. I’ve said before that I prefer not to have user prompting in my scripts, since a script that requires human intervention is not something that can be fully automated.
However, whether it’s risk-averse business leaders or novices to scripting, there are times when an expectation of human interaction is present.

If there is no getting away from this, then I can at least make the process easier. Here’s hoping that making it easier and easier to use will lead to more automation… or at least more of it.

Hopefully, this script can help.

How SQL Server’s Compute Scalar Will Impede You

Time to read: ~ 2 minutes

Words: 417

Postgres Blogs

We’re getting more and more Postgres instances at work. To get up to speed on Postgres, I’ve started to expand my RSS feed to include Postgres blogs.

It was one of those blogs, by David Stokes, that captured my attention; namely, “How PostgreSQL’s Aggregate FILTER Will Spoil You”.

It got me thinking, “Huh, does SQL Server not have filters? And, if not, how can we work around that?”

So, cheers, David, for the idea for this post.

The Big Setup

We’re going to use the same setup that David has in his post.

USE tempdb;
GO

CREATE TABLE z (a int, b int, c int);
GO

INSERT INTO z VALUES (1, 10, 100), (2, 20, 200), (3, 30, 300), (4, 40, 400);
GO

SELECT a,b,c FROM z;
GO
I learnt how to increase font size.

Stupid, but does it work?

Now, SQL Server doesn’t have the filter option, but we can do some pretty weird things, like a SELECT...WHERE statement with no FROM clause.

SELECT
	a,b,c,
	[filter?] = (SELECT b WHERE b > 11)
FROM z;
GO
Does it B useful though?

So, is that it? Can we just throw that into a COUNT() function and essentially have the same thing as Postgres?

SELECT
	all_rows	= COUNT(*),
	b_gt_11		= COUNT((SELECT b WHERE b > 11))
FROM z;
GO
B better.

Workaround

We can rewrite that strange little filter we have using an OUTER APPLY.

SELECT
	a,b,c,
	[filter?] = bees.bee
FROM z
OUTER APPLY (SELECT b WHERE b > 11) AS bees (bee);
GO
Same, same, but different.

Now, can we throw that in a COUNT() function?

SELECT
	all_rows	= COUNT(*),
	b_gt_11		= COUNT(bees.bee)
FROM z
OUTER APPLY (SELECT b WHERE b > 11) AS bees (bee);
GO
B-utiful!
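For completeness, another common workaround is the old CASE expression trick; COUNT ignores NULLs, so wrapping the filter in a CASE that returns NULL for non-matching rows gets the same conditional count without the APPLY:

```sql
SELECT
	all_rows	= COUNT(*),
	b_gt_11		= COUNT(CASE WHEN b > 11 THEN b END)
FROM z;
GO
```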

Things I don’t know

  • Postgres, yet. Still learning this one.
  • Why does the OUTER APPLY work, but the subquery doesn’t?
    Let’s review the plans and see if we can identify a culprit.
Does not Compute Scalar

Strangely, the first Compute Scalar (from the left, after the SELECT) is simply a scalar operator applied to the other Compute Scalar.

It’s enough to block aggregation, but I’ll leave it to more specialised people than I to tell me why. I’ll just store this under “errors out” for future reference.

  • Performance Implications
    This is a test on 4 rows. What would happen if this were tens of millions of rows?

    I leave that to the reader.

Getting SQL Server Version using dbatools

Time to read: ~ 4 minutes

Words: 730



I will discuss different ways to get the SQL Server version using PowerShell.

I’ll explore a function you can use, and how a dictionary/hash table could also work.

Finally, I’ll discuss how neither of these are needed since dbatools/SMO has a better way.

The Initial Issue

I’m sure you’ve needed the version of a SQL Server instance for a report a few times.
And, surprisingly, not many people are up for parsing the output of @@VERSION.

I can’t see why not; it’s perfectly fine. It is not the easiest thing in the world, but it is also not the hardest. But, you do you.

Invoke-DbaQuery -SqlInstance localhost\SQL2019 -Query "SELECT version = @@VERSION;" | FL
No, I haven’t updated this instance in a while. Thank you for asking
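As an aside, if parsing @@VERSION is a step too far, SERVERPROPERTY offers a more parse-friendly route; ProductMajorVersion should be available on any reasonably recent build:

```powershell
# SERVERPROPERTY avoids string-parsing @@VERSION entirely
Invoke-DbaQuery -SqlInstance localhost\SQL2019 -Query @"
SELECT major_version = SERVERPROPERTY('ProductMajorVersion'),
       full_version  = SERVERPROPERTY('ProductVersion');
"@
```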

My team lead has a function that he uses to get these versions, but it encountered an issue. We recently updated to a later version of SQL Server, and the function stopped working.

$instance = Connect-DbaInstance -SqlInstance localhost\SQL2019

$instance.Version.Major

Get-WhatVersionAmI -Version $instance.Version.Major
15? Never heard of it.

It seems to work fine if I pass in an older version of SQL Server.

Get-WhatVersionAmI -Version 14
Oh! 14! Yeah, I know 14!

So, what’s the issue?
I’m going to take a look under the hood of that function, and see if I can spot what’s wrong.

Checking under the function’s hood

If you don’t have the code, or are screen-sharing, you can query the function provider to get the definition of most PowerShell functions.

Get-ChildItem function:\Get-WhatVersionAmI | Select-Object *
There’s more, but it boils down to “this is a simple function”.

Taking a look at the function, here’s the gist of the function block:

param([string] $Version)

switch ($Version) {
    14 { "SQL Server 2017" }
    13 { "SQL Server 2016" }
    12 { "SQL Server 2014" }
    default { "Unknown" }
}

This seems simple enough. Hell, it’s just a switch statement that takes the input and spits out the matching output.

Seeing something this simple, I’d almost suggest using a dictionary.

Using a dictionary instead

This is the equivalent code of the function, but using a hash table.

$dict = @{
    14 = "SQL Server 2017" 
    13 = "SQL Server 2016" 
    12 = "SQL Server 2014" 
}

$version = $dict[15]
if (!$version) {
    $version = "Unknown"
} 

"Version: $version"
Still unknown, but visibly unknown.

The benefit of this approach is:

  • it’s easier to modify. I don’t need to update the function and either dot-source it, e.g. . <path_to_function_file.ps1>, or re-run it in memory to use it. I only need to add an entry to the hash table.
$dict = @{
    15 = "SQL Server 2019" 
    14 = "SQL Server 2017" 
    13 = "SQL Server 2016" 
    12 = "SQL Server 2014" 
}

$version = $dict[15]
if (!$version) {
    $version = "Unknown"
} 

"Version: $version"
Updated to know about the number 15.

The problem with this approach is:

  • I have to keep it updated, and I’ll find it difficult if I have to add it to multiple scripts.

Shift left?

The above approaches have common issues. They have to be ported to every script I have, loaded into memory, kept in sync, etc., etc.

Can I do something about that? Can this be shifted left somehow? Can I remove some of the work that I have to do? In the words of Homer Simpson, “can’t someone else do it?”.

Someone else is normally “me” though.

Why yes, yes they can.

Using dbatools

When dbatools connects to a SQL instance, the returned object is usually a Microsoft.SqlServer.Management.Smo.Server, and that object has a few methods. The one I’m interested in is GetSqlServerVersionName.

Connect-DbaInstance -SqlInstance localhost\SQL2019 | Get-Member -MemberType Method -Name GetSql*
Many methods have been returned, but I only care about one now.

Now, I can connect to an instance, and query what version it is without any external functions or scripts:

Connect-DbaInstance

$instance = Connect-DbaInstance -SqlInstance localhost\SQL2019
$Instance.GetSqlServerVersionName()
Oh, look! The “somebody else”.

Hey, the method is even there on Microsoft.SqlServer.Management.Smo.Database from Get-DbaDatabase, in case I get the strange notion that the version changes from database to database.

Get-DbaDatabase

$databases = Get-DbaDatabase -SqlInstance localhost\SQL2019
$databases.GetSqlServerVersionName()
So many versions, all the same kind.

I don’t think they change, by the way.

End

I’m a dbatools fan. I’m also slightly lazy, which is why I like automation so much.

Now that I know there’s a built-in way to get a proper, human-readable version of an instance, you better believe I’m using that!

T-SQL Tuesday 183 – Tracking Permissions

Words: 593

Time to read: ~3 minutes

PowerShell Scripts for Checking SQL Server Login SIDs

A picture of the T-SQL Tuesday logo; a blue database with a github style commit banner in a different blue wrapped around it.

Welcome back to T-SQL Tuesday, the monthly blogging party where we are given a topic and asked to write a blog post about it.

This month, we have Steve Jones [ blog ] asking us to talk about permissions.

Before


Normally, I would have skipped this month because I’ve talked about permissions before, and I feel like it’s one of the better posts I’ve written.
But I’m trying to write more, so that can’t be an excuse anymore.

So, let’s talk about AGs, DistAGs, and SQL Server authentication login SIDs.

AGs and DistAGs


In work, we have Availability Groups (AGs) and Distributed Availability Groups (DistAGs), depending on the environment level, e.g. Prod, Pre-Prod, QA, Dev, etc.

When we plan failovers for maintenance, we run through a checklist to ensure we can safely complete the failover.

One of these checks is to ensure that the correct logins with the correct SIDs are on each replica. Otherwise, we could get flooded with a lot of “HEY! My application doesn’t connect anymore! Why you break my application?” when apps try to log into the new primary and the user SIDs don’t match the login SIDs.

While I don’t have a problem with doing a query across registered servers, I threw together a quick script in PowerShell with dbatools that does the job of checking for me. And, like most things that happen in business, this temporary solution became part of our playbook.

Who knows! Maybe this quick, throwaway script could also become part of other people’s playbooks!

I’m not sure how I feel about that…

Scripts

We’re using dbatools for this because I think it’s the best tool for interacting with databases from a PowerShell session regardless of what Carbon Black complains about.

Getting the AG Replicas

Getting the AGs for each server is relatively simple, using a combination of Get-DbaRegServer and Get-DbaAvailabilityGroup.

Then, we can use the ever-respected Get-DbaLogin to get a list of logins and their SIDs per replica.
If we have a mismatch, we will have an issue after we failover. So best nip that in the bud now (also, I had this phrase SO wrong before I looked it up!).

$InstanceName = 'test' 
# $InstanceName = $null # Uncomment this line to get ALL
$Servers = Get-DbaRegServer

if ($InstanceName) {
	# Find the server that ends in our instance name
	$Servers = $Servers | Where-Object ServerName -match ('{0}$' -f $InstanceName)
}

# Get all replicas per Instance name that are part of AGs...
$DAGs = Get-DbaAvailabilityGroup -SqlInstance $Servers.ServerName |
	Where-Object AvailabilityGroup -notlike 'DAG*' | # Exclude DAGs which just show our underlying  AG names, per our naming conventions.
	ForEach-Object -Process {
		$instance = $null
		$instance = $_.InstanceName

		foreach ($r in $_.AvailabilityReplicas) { # I'd like 1 replica per line, please.
			[PSCustomObject] @{
				InstanceName = $instance
				Replica      = $r.Name
			}
		}
	} |
	Select-Object -Property InstanceName, Replica -Unique |
	Group-Object -Property InstanceName -AsHashTable

# Get a list of Logins/SIDs that don't match the count of replicas, i.e. someone forgot to create with SIDs...
foreach ($d in $DAGs.Keys) {
	Write-Verbose "Working on instance: $d" -Verbose
	Get-DbaLogin -SqlInstance $DAGs[$d].Replica |
		Group-Object -Property Name, Sid |
		Where-Object Count -ne $DAGs[$d].Count |
		Select-Object -Property @{ Name = 'Instance'; Expression = { $_.Group[0].InstanceName }},
			@{ Name = 'Occurrences'; Expression = { $_.Count }},
			@{ Name = 'Login/SID'; Expression = { $_.Name }}
}
I got 99 problems, and Login/SIDs are one! Or I have 5 Login/SID problems.

Cure


They say “prevention is better than cure”, and I’d love to get to a stage where we can “shift left” on these issues. Where we can catch them before someone creates a login with the same name but different SIDs on a replica.

But we’re not there yet. At least we can find the issues just before they impact us.

That’s something, at least!

T-SQL Tuesday #182 – Integrity

Time to read: ~ 3 minutes
Words: 531

Welcome to T-SQL Tuesday, the monthly blogging party where we are given a topic and have to talk about it. Today, we have Rob Farley ( blog | bluesky ), talking about integrity.

I’ll admit that it’s been a while since I’ve written a blog post. It’s been a combination of either burnout or busyness, but let’s see if I still have the old chops. Plus, I’m currently sick and resting in bed, so I have nothing better to do.

I’m one of the few who haven’t had experiences with corruption in our database. Apart from Steve Stedman’s ( blog ) Database Corruption Challenge, that is.

With that being said, what do I have to say about this topic then? Well, let’s talk about the first version of corruption checking automation that we introduced in work.

Now, this was just the bare bones, and there have been many different iterations since then, but the essence is here.

Overview

Like many shops out there, we can’t run corruption checking on our main production database instance. So, then, what do we do? We take the backups and restore them to a test instance, and then run corruption checking on those restored databases.

At least this way, we can test that the backups we take can be restored as well.

But I don’t want to spend every day manually restoring and corruption checking these databases, so let’s automate this bit…

Limitations

A quick scour of the interwebs brought back something extremely close to what I want by Madeira Data Solutions. It had some extras that I didn’t want, though.

More importantly, it used some functions that our dreaded antivirus software still screams false positives about, so it would stop our script from running if we even tried.

So, a script re-write was required.

Script

Here’s the link to the script in GitHub if you want to check it out.
Use at your own peril. Also, default gist embedding on this platform is visibly off-putting.

General Rundown

Here’s a general rundown of the parts of the code, just to make it easier to grok. It’s not to bump the word count on this post.

Cave Canem

This is an old version of the script. The first version, if my flu-riddled brain can remember correctly, so there are small bugs here.

Off the top of my head, I think it still:

  • only works if the backup file is called FULL
  • only works if there is a single FULL backup file per database (that got fixed real fast!)
  • A.N.Other

Grab the Full Backup

Does it expect FULL in the file name? Yes. Should it? Arguably not.

Restore Everything on the Path

We restored on a database-by-database basis. “RBAR” is fine outside a DB, right?

Faire le Corruption Check

Sacre Bleu! No corruption for you!

Report on the results

Because if you don’t tell people, have you actually done it?

Ar aon nós

Like I said, that script is the first iteration.

Use at your own peril.

Here be dragons and krakens and DBCC WRITEPAGE.

My main point is to have the INTEGRITY to say “Yes, I have regular corruption checking of my databases”.

See what I did there?

#TSQL2sday 171: Describe the Most Recent Issue you Closed

What happens when nothing changes, sp_send_dbmail gets given sysadmin, and you still can’t get emails.

Words: 817

Time to read: ~4 minutes

Invitation

Welcome to TSQL2sday, the monthly blogging party where we are given a topic and are asked to write a blog post about it.

This month we have Brent Ozar ( b ) asking us about the latest issue closed.

I don’t like posting about issues unless I fundamentally understand the root cause. That’s not the case here. A lot of the explanation here will be hand-waving while spouting “here be dragons, and giants, and three-headed dogs”, but I know enough to give you the gist of the issue.

sp_send_dbmail

Like most issues, even those not affecting production, this one was brought to the DBA team as critical and needed to be fixed yesterday.

A Dev team had raised that a subset of their SQL Agent jobs had failed. The error message stated:

Msg 22050, Level 16, State 1, Line 2
Failed to initialize sqlcmd library with error number -2147467259

That makes sense; the only jobs that were failing were ones that called sp_send_dbmail using a @query parameter. And I know that when you use that parameter, the query is handed off to the sqlcmd library to be run for you.

Google fu

From research (most of the time, I ended up in the same post), the error fragment “failed to initialize sqlcmd library with error number” could be related to a number of things.

  1. The database object not existing
      No, the agent fails even when the query is SELECT 1;
  2. The wrong database context used
      No, SELECT 1;
  3. sqlcmd not being installed or enabled
      It was working beforehand, so I would say not.
  4. Permissions

Permissions

Well, I had already tested that the query worked as long as it was valid SQL, so let’s try permissions.

I increase permissions…no luck.
I grant a bit more permissions…nope.
A bit more permissions…still nothing.
ALL the permissions… MATE, YOU’RE A SYSADMIN! WHAT ARE YOU ON ABOUT?!! …ahem… nothing.

Workaround

Strangely enough, the post mentioned that using a SQL Authentication account worked.
So we tested it using EXECUTE AS LOGIN = 'sa'; and it worked.
Which was weird, but I’ll take a workaround. Especially since it gave us time to investigate more.

Thanks to dbatools, I threw together a PowerShell script that went through all of the SQL Agent jobs that contained sp_send_dbmail and wrapped them up.

EXECUTE AS LOGIN = 'sa'; EXEC dbo.sp_send_dbmail ...; REVERT

I’m not going to share that script here because it is a glorious mess of spaghetti code and if branches.
I gave up on regex and went line-by-line, thanks to the massive combinations of what was there, e.g.

  • EXEC msdb.dbo.sp_send_dbmail
  • EXECUTE sp_send_dbmail
  • EXEC msdb..sp_send_dbmail
  • sp_send_dbmail in a cursor so we need to revert after each call in the middle of the cursor.
  • sp_send_dbmail where there are spaces in the @body parameter so I can’t split on empty lines.

What’s Happening?

After getting the workaround in place, the team lead and I noticed something strange.

Sure, EXECUTE AS LOGIN = 'sa'; worked, but try it as a Windows Domain login and you get something different.

Could not obtain information about Windows NT group/user '<login>', error code 0x5

Something weird was happening between SQL Server and Windows.
Great for me! I get to call in outside help.
Not great for ye! The remaining explanation is going to be shallower than the amount of water I put in my whiskey.

What Changed

Nothing!

Or so we were told. Repeatedly.
We did not believe that. Repeatedly.

Next, we started to get “The target principal name is incorrect. Cannot generate SSPI Context” on the servers.

Not being able to send emails from a SQL Agent Job is one thing, but not being able to Windows Authenticate into a SQL Instance at all is another thing all together.

Eventually, as awareness of the issue increased, the problem was narrowed down to a server configuration on a Domain Controller. I’m assuming that the name of this server configuration is “nothing”.

“Nothing” was set on one server but not the other meaning that using one DC over another meant Kerberos did something that it was not supposed to. I’m reliably informed that “nothing” has something to do with encryption and gatekeeping. I reliably replied that Kerberos should be spelt Cerberus but was ignored.

Testing

With “nothing” in place properly, the SSPI Context errors disappeared.

I reverted the workaround EXECUTE AS wrapper on sp_send_dbmail and emails started flowing again, even without the wrapper. Even with permissions reduced back to what they were.

Research

Sometimes the problem is actually outside SQL Server. Those are the good days. Other days it is a SQL Server problem and you have to fix it yesterday.

All I can do is take what happens, do a bit more research, and learn from them. That way, if it were to happen again, I can speed up the resolution process.

At least this way, if someone ever asks me if I know anything about Kerberos, I can tell them that I know about “nothing”.

I’ll enjoy that.

Initialize-Choice

Dear Host, you don’t have to use Read-Host. There is a choice

Words: 590

Time to read: ~ 3 minutes.

Interactive

I’m an advocate of automation. I don’t think automation that requires user input is the correct way to do automation. However, there are times when user interaction is wanted.

I didn’t say “needed”, I said “wanted”.

Either a case of severe risk-aversion, or being burnt by bad bots, or skirting the fine line between wanting to be involved with the work but not wanting to do all the work.

Effectively, creating a Big Red Button, with a secondary “are you sure?” screen, followed by a “last chance” pop-up. You know, the “let’s introduce extra options to make sure things won’t go wrong” approach.

Sorry, did I say I was an automation advocate? I think I meant to write automation snob.

The Question

Most of the work I’ve been doing lately has been project-orientated, production DBA work e.g. building new servers, migrating across cloud providers, enabling TDE…

…working with Distributed AGs. I think this is the first SQL Server technology about which I swing from love to hate in equal measure.

So, automation is nearly non-existent right now in my tasks. But when I got asked if there was an easier way to get user input, taking into account bad answers, shortcuts, default options, etc., rather than using a plethora of Read-Host commands and double-checking the user input, well…

I didn’t know.

But, thankfully, I speak to people who are more experienced and smarter than I am when it comes to PowerShell, and I remembered them mentioning $Host.UI.PromptForChoice() before.
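The raw call is a little verbose, which is exactly why a wrapper earns its keep. A minimal sketch of using it directly looks something like this (the captions and help text here are made up for the example):

```powershell
# Each choice gets a label (the "&" marks the hotkey) and a help message
$choices = @(
    [System.Management.Automation.Host.ChoiceDescription]::new('&Yes', 'Run the thing')
    [System.Management.Automation.Host.ChoiceDescription]::new('&No', 'Do not run the thing')
)

# Returns the zero-based index of the selection; the last argument is the default choice
$answer = $Host.UI.PromptForChoice('Confirm', 'Proceed?', $choices, 1)
```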

Initialize-Choice.ps1

Here’s the link to the code. I’m going to have to ask you to excuse the formatting. I’ve forgotten how to embed the code in an aesthetic way and have misplaced my desire to research how again.

https://github.com/shaneis/RandomScripts/blob/28e2c96e08de0e863134962f7c120c5ffc97abc0/Initialize-Choice.ps1

What this provides us with is a short-hand way to generate a choice prompt for the user that will not accept answers outside of the given list.

Pudding

$Choices = [ordered]@{
  "Option 1" = "This is the first option"
  "Option 2" = "This is the second"
  "Option 3" = "The third"
  "Option 4" = "You're not going to believe this..."
}
Initialize-Choice -Title 'Test 01' -Choice $Choices

Nice and simple.

The choices that we laid out have been given a shortcut number, and their ordering has been preserved. Any user input outside of the choices is automatically rejected, we can add descriptions for extra help, and we’ve even thrown in a “Cancel” choice as well.

In fact, ordering is the only thing left to mention about this function. It’s why it’s written the way it is and why it only accepts hashtables with the [ordered] accelerator on them.

Here’s how it worked when I first wrote it using only hash tables.

$Choices = @{
  "Option 1" = "This is the first option"
  "Option 2" = "This is the second"
  "Option 3" = "The third"
  "Option 4" = "You're not going to believe this..."
}
Initialize-ChoiceUnordered -Title 'Test 01' -Choice $Choices

4, 2, 3, 1! Debugging this would sure be fun!

Option 1 is still option 1.
But now Option 3 has gotten jealous and pushed Option 2 down.
While Option 4 is just happy to be included.

Seeing as any code that parses the choice relies on the user picking the right option and the choices being in a determined order, it seemed like a bug to not have the input order preserved.

Sin é

So that’s it.

Feel free to use and abuse it. I would prefer that you use and improve it but it’s your party, you can do what you want to.

It’s your choice.

T-SQL Tuesday #164: Code That Made You Feel A Way

Time to read: ~ 4 minutes

Words: 899

Welcome to T-SQL Tuesday, the monthly blogging party where we receive a topic to post.

This month we have Erik Darling ( blog ) asking us to post about “code that makes [us] feel a way“.

A while back, I was tasked to performance tune some code that brought me through all five stages of grief. It’s best to take you through the events from the developer’s viewpoint. There’s less cursing that way.

Denial

Hey, what’s up? You were asked to troubleshoot some code. Not a big deal; these requests come in from time to time.

You have code that keeps coming up as a high consumer of CPU on our systems? OK?

It had gotten to the stage where it kept appearing in sp_WhoIsActive? What’s that? And should it not be “whom is active”? Well, agree to disagree.

Let’s see the code so. Wow, that’s a small scroll bar! Yeah, that’s one of ours.

No, it is. I swear it is.

I’m not surprised you can’t find it in stored procedures; we dynamically create it. Here’s the procedure. Yeah, all 900 lines! Pretty cool, huh?

What do you mean, why? We had to! We had to react to the different parameters that get passed in.

Anger

Alright, alright! Calm down. I misspoke.

Yeah, that’s it, breathe. There aren’t different parameters. It’s a parameter. Singular.

No, we still need to generate the query dynamically. Cause the XML could be different. Huh? Yeah, the parameter is XML; it could be anything in there. So we pass the XML in as a nvarchar(max) string.

You spilt some of your coffee there when you banged the desk.

Then we use sp_xml_preparedocument at the start, but then we have to use sp_xml_removedocument at the end.

You’ve never heard of those before? I thought you were a DBA?

We use the “prepare” to get the XML into a state that’s easier to consume, and we use the “remove” so we don’t get memory leaks!

Your face is getting a bit red, did you know that?

It’s SQL Server; it uses enough memory it can share some!

Did we read the docs? No, why? It can use one-eighth of the total memory available for SQL Server. Ah, but that’s “can”, not “will”.

Yes, yes, cursors upon cursors upon cursors. Why? We don’t know how many values are in each XML node in the params, so we have to split them out.

We then join them into a comma-delimited string, which is then used in IN clauses… woah! Is that the sound of your teeth grinding? You know that’s not good for your health. What do you mean neither am I?

Anyway, then we parse all of that down to a massive, what do you call it, Swiss-army-knife, Lego-block, dynamic query built based on what’s passed in. You don’t call it that? I thought you did. What do you call it so? Wow, that’s not a word I had heard of before.

It’s not too bad, though! We pass in everything as literal values, so it’s faster that way. We read up on that parameter sniffing issue you said can happen. That isn’t going to catch us out here!

Modern problems require modern solutions, as they say. What’s that? It’s not a modern problem? I’m a what? Are you allowed to call me that?

Bargaining

You want me to re-write it? You’ll help? It’ll take a lot of time. It’s not really a priority.

Plus, it’s not even worth it. How much CPU? Is that an “illion” with an “m” or an “illion” with a “b”? Per run? Is that a lot? Well, I really wouldn’t know, now would I?

Yeah, good point; you’ll also have to check how much memory it uses. Hey! Probably one-eighth of the total memory available for SQL Server. Alright, it wasn’t that bad a joke; everyone’s a critic!

You’ll give me credit? But it’s fine in Dev; there’s no point. I suppose we could remove some of the WHERE OR IS NULL branches since we know if they’re empty when we create the query.

Nah, that would make the procedure a bit too big. Anything over 1000 lines is too much. Well, 985 is still less than 1000!

Depression

Why are you crying? No, we care, we do, we really do. No, not all the code is like this; this is old. Yes, we stopped using that XML parameter pattern. Yes, and the cursors. Well, most of them, at least.

Your tears are mixing with the coffee you spilt, you know that?

Here, there’s really no need for the fetal position in the middle of the office. You have yet to see the code for the busier databases.

You know what, why don’t you go home and come back in tomorrow, and I’ll go over that way, OK?

Acceptance

Hey! There’s our favourite DBA. You seem a lot cheerier.

Oh, you don’t have to perf-tune that query anymore? That’s great; see, it wasn’t so bad. What’s that? The latest release failed? It ran fine in Dev.

You’re permitted to set up a QA environment to test performance before code gets released? What kind of code? Any stored procedure?

Ah, that’s OK. No, really, it’s fine. We started using ORMs and embedding SQL into the app a while ago. This won’t affect us.

You’ve started crying again.

T-SQL Tuesday 161: Having Fun with SQL

Words: 1,283

Time to read: ~ 7 minutes

Welcome to T-SQL Tuesday, the monthly blogging party where a host provides a topic, and we blog about it.

This month we have Reitse Eskens ( blog | twitter | mastodon ) asking us to talk about something fun we have done with T-SQL.

It has been quite a while since I have coded just for fun, so I’m thankful to Reitse for suggesting this. Unfortunately, I don’t have a pre-baked idea for this T-SQL Tuesday, so let’s see what we can come up with.

Echoes

Around December 2021, Wordle hit the virtual scenes. Yeah, nearly two years ago. How do you feel about that?

I got swept up in that wave for a while in the same way I got swept up in the other trends of my time, like Pokemon, Sudoku, and Angry Birds.

Eventually, I stopped when I found a PowerShell script by Kieran Walsh ( github | twitter ) where you could narrow down to the correct answer by putting in the results of your guess each round.

This hack led to me realising how much time I was spending on Wordle and that I should stop, much like I did with Pokemon, Sudoku, and Angry Birds.

So, what better thing to do than to try and recreate that PowerShell script in T-SQL?

Rules

I must recreate as much of the script as possible in T-SQL in only one hour.
Yes, I’m aware that’s more of a rule than rules, but Wordle needs five letters dammit, and “rule” was crying out for that prosthetic “s”!

Total (code)

Don’t worry, you just have to fill in the variables on lines 19-26.

Split

A few things need to be taken care of right off the bat.

The potential answers have to be stored somewhere in the database. Thankfully, I had the answers in a text file, so creating a table and then inserting them was easy.

I could do the insert with flat files, but I already have PowerShell open so…

$WordleAnswers = Get-Content -Path $HOME\Documents\wordle-answers-alphabetical.txt |
    ForEach-Object -Process {
        [PSCustomObject] @{
            WordleAnswer = $_
        }
    }

$WriteTableParams = @{
    SqlInstance = 'localhost\SQL2019'
    Database = 'Wordle'
    Schema = 'dbo'
    Table = 'WordleAnswers'
    ColumnMap = @{
        "WordleAnswer" = "wordle_answers"
    }
}
$WordleAnswers |
    Write-DbaDataTable @WriteTableParams


Next, we need to create the variables we’ll use. If I can finish this before the 1-hour mark, I’ll turn this into a stored procedure with parameters and everything! Until then, it’s script and variable times.

DECLARE
	@known_letters AS varchar(5),
	@excluded_letters AS varchar(26),
	@position1 AS char(1),
	@position2 AS char(1),
	@position3 AS char(1),
	@position4 AS char(1),
	@position5 AS char(1),
	@correct_letters AS xml,
	@all_answers_sql AS nvarchar(MAX);

/* region Enter Variables here */
SET @known_letters = '';
SET @excluded_letters = '%[]%';

SET @position1 = NULL;
SET @position2 = NULL;
SET @position3 = NULL;
SET @position4 = NULL;
SET @position5 = NULL;
/* endregion Enter Variables here */

The PowerShell code has known_letters, excluded_letters, positions, and wrong_positions.

I can do all these easily enough, except for wrong_positions. I can’t think of a way to do hashtables in SQL that doesn’t involve a secondary table or a user-defined table type, etc. I’ll leave that to the end if I have time.
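
For what it's worth, one way wrong_positions could be modelled without a user-defined type is a small table variable. This is a hypothetical sketch only; the names and values here are my own invention, not part of the original script.

```sql
/* Hypothetical sketch: each row records "this letter IS in the word,
   but NOT at this position" - the Wordle "yellow" result. */
DECLARE @wrong_positions TABLE
(
	[position] tinyint NOT NULL,
	[letter]   char(1) NOT NULL
);

INSERT INTO @wrong_positions ([position], [letter])
VALUES (1, 'a'), (3, 's');

/* A filter could then exclude answers that have a letter in a banned spot:
   AND NOT EXISTS (
       SELECT 1 FROM @wrong_positions AS wp
       WHERE SUBSTRING(wa.wordle_answers, wp.position, 1) = wp.letter
   ) */
```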

known_letters is an array of strings. I haven’t updated the SQL Server version on my laptop in a while, so there is no string_split for me. Let’s do the XML way so.

/* region KnownLetters */
SELECT @correct_letters = CONCAT(
	'<known_letters>',
	REPLACE(@known_letters, ',', '</known_letters><known_letters>'),
	'</known_letters>'
);

SELECT
	[known] = [l].[y].value('.', 'char(1)')
INTO #KnownLetters
FROM
(
	VALUES
	(@correct_letters)
) AS [x] ([kl])
CROSS APPLY [kl].nodes('/known_letters') AS [l] (y);
/* endregion KnownLetters */
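
For anyone on a newer box: on SQL Server 2016 or later (database compatibility level 130+), STRING_SPLIT would replace the whole XML dance. A sketch of the equivalent, assuming the same @known_letters variable and #KnownLetters temp table:

```sql
/* Alternative on SQL Server 2016+ only: STRING_SPLIT returns one row
   per comma-separated value, in a column named [value]. */
SELECT
	[known] = CAST([value] AS char(1))
INTO #KnownLetters
FROM STRING_SPLIT(@known_letters, ',');
```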

excluded_letters I can get away with by using some LIKE jiggery-pokery, where it will search for any characters between the square brackets.
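
As a quick illustration of that pattern (the letters here are just an example of mine, not from the original script):

```sql
/* Example only: exclude any answer containing q, j, or x.
   In a LIKE pattern, [qjx] matches any single character in the set,
   so '%[qjx]%' matches any word containing at least one of them. */
DECLARE @excluded_letters varchar(26) = '%[qjx]%';

SELECT [wordle_answers]
FROM [dbo].[WordleAnswers]
WHERE [wordle_answers] NOT LIKE @excluded_letters;
```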

positions I can split out into individual variables. I can more easily deal with them then, and it only ends up as an extra five variables this way.

Creating the table would have been handier if I had made a column for each character, but I didn’t, so it’s some SUBSTRING logic for me to get the individual characters out.

SELECT
	[wa].[wordle_answers],
	[g].[char1],
	[g].[char2],
	[g].[char3],
	[g].[char4],
	[g].[char5]
FROM [dbo].[WordleAnswers] AS [wa]
CROSS APPLY (
	VALUES (
		(SUBSTRING([wa].[wordle_answers], 1, 1)),
		(SUBSTRING([wa].[wordle_answers], 2, 1)),
		(SUBSTRING([wa].[wordle_answers], 3, 1)),
		(SUBSTRING([wa].[wordle_answers], 4, 1)),
		(SUBSTRING([wa].[wordle_answers], 5, 1))
	)
) AS [g] ([char1], [char2], [char3], [char4], [char5])

If we do know the positions of some of the letters, then I can strip out a lot of the potential answers straight away. I’m not a fan of Swiss-army-knife WHERE clauses, so I’ll do the dynamic SQL.

I’m also not a fan of WHERE 1=1 in my dynamic code, but I’m running low on time here, and it’s faster to add that in first and start everything else with an AND than it is to check if this is the first clause in the WHERE section or not.

Plus, I’m less against WHERE 1=1 than I am against Swiss-army-knife WHERE clauses.

/* region Known Positions */
CREATE TABLE #AllAnswers
(
	[wordle_answers] char(5),
	[char1] char(1),
	[char2] char(1),
	[char3] char(1),
	[char4] char(1),
	[char5] char(1)
);
SET @all_answers_sql = N'SELECT
	[wa].[wordle_answers],
	[g].[char1],
	[g].[char2],
	[g].[char3],
	[g].[char4],
	[g].[char5]
FROM [dbo].[WordleAnswers] AS [wa]
CROSS APPLY (
	VALUES (
		(SUBSTRING([wa].[wordle_answers], 1, 1)),
		(SUBSTRING([wa].[wordle_answers], 2, 1)),
		(SUBSTRING([wa].[wordle_answers], 3, 1)),
		(SUBSTRING([wa].[wordle_answers], 4, 1)),
		(SUBSTRING([wa].[wordle_answers], 5, 1))
	)
) AS [g] ([char1], [char2], [char3], [char4], [char5])
WHERE 1=1';

IF @position1 IS NOT NULL SET @all_answers_sql = CONCAT(
	@all_answers_sql,
	N'
	AND [g].[char1] = ',
	QUOTENAME(@position1, '''')
);

IF @position2 IS NOT NULL SET @all_answers_sql = CONCAT(
	@all_answers_sql,
	N'
	AND [g].[char2] = ',
	QUOTENAME(@position2, '''')
);

IF @position3 IS NOT NULL SET @all_answers_sql = CONCAT(
	@all_answers_sql,
	N'
	AND [g].[char3] = ',
	QUOTENAME(@position3, '''')
);

IF @position4 IS NOT NULL SET @all_answers_sql = CONCAT(
	@all_answers_sql,
	N'
	AND [g].[char4] = ',
	QUOTENAME(@position4, '''')
);

IF @position5 IS NOT NULL SET @all_answers_sql = CONCAT(
	@all_answers_sql,
	N'
	AND [g].[char5] = ',
	QUOTENAME(@position5, '''')
);

SET @all_answers_sql = CONCAT(@all_answers_sql, N';')

PRINT @all_answers_sql;

INSERT INTO #AllAnswers
EXECUTE [sys].[sp_executesql] @stmt = @all_answers_sql;
/* endregion Known Positions */

Finally, we can UNPIVOT the individual characters of the words and join them with the known_letters to narrow down to those answers,
as well as excluding characters that we know aren’t in the word.

Or else just return everything we have, minus excluded characters.

IF LEN(@known_letters) > 0 BEGIN
	SELECT
		*
	FROM #AllAnswers AS [w]
	UNPIVOT
	(
		[chars] FOR [chr2] IN ([char1], [char2], [char3], [char4], [char5])
	) AS [unpvt]
	JOIN #KnownLetters AS [kl]
		ON [unpvt].[chars] = [kl].[known]
	WHERE
		[unpvt].[wordle_answers] NOT LIKE @excluded_letters
END
ELSE
BEGIN
	SELECT
		*
	FROM #AllAnswers AS [a]
	WHERE [a].[wordle_answers] NOT LIKE @excluded_letters;
END;

Guilt

In the PowerShell script, you can add characters in the excluded_letters parameter that exist in the known_letters parameter, and it will correctly ignore them.

Alas, tempus fugit, and I didn’t get to do the same for this T-SQL version. Maybe that has something to do with translating “time flies” into Latin and then looking up other sayings in Latin, but we can’t say for sure. Mea culpa!

However, it’s been around 50 minutes with minor troubleshooting here and there, so time to test this bad boy.

Tests

Let’s hop over to https://wordleplay.com/ and test it out.

I’ll take the first answer returned each time unless it is the answer we chose previously.

PowerShell

I’m not going to use the wrong_positions parameter here since I didn’t re-create that in T-SQL. Still, I got lucky and got the correct answer on the third guess.

T-SQL

The T-SQL method doesn’t show each iteration as well as the PowerShell does. And there’s more human brainpower required to make sure you don’t enter the same letter in the known_letters and the excluded_letters variables.

Overall, though, well done with a respectable four guesses.

Point

I’m not going to say that there is no point to these exercises.

Fun is as valid a point as any other. In a work world filled with more demands on our time than the number of Pokemon (there’s still only 150, right?), more technologies to learn than combinations in Sudoku puzzles, and more people demanding the seemingly impossible out of you so that you want to yeet them at solid objects … something something Angry Birds, it’s a welcome change to do something just for fun once in a while.

Thanks Reitse

T-SQL Tuesday 157 – End of Year Activity

Words: 544

Time to read: ~ 2 minutes

Read-Clipboard

Welcome to T-SQL Tuesday, the monthly blog party where we are given a topic and asked to blog about it. This month we have Garry Bargsley asking us to discuss end-of-year activities.

Here

My current workplace has a change freeze in place for the end of the year. Usually, I would say that this is risk-averse.
But seeing as the nature of the business (payments) means that the Thanksgiving/Black Friday/Christmas time is the busiest time of the year, I’m willing to cut them some slack.

So, what to do when we cannot deploy to production? Oh, we’ll still be busy! There are always management-approved fixes that get through, annual processes to complete, and project planning that has to be…well, planned.

But, my priority for this end-of-year is documentation.

Docs

We have a few different tools for documentation. Examples are Confluence, Google Sheets, Google Docs, etc.

But most of the time, documentation takes the form of scripts saved to source control.

These scripts are multiprocess and cross-team dependent and can quickly end up like me doing DIY. One hand trying to steady the nail, the other wielding the hammer, and the whole situation collapsing into swear words and tears.

We can’t currently have a “hit-Enter-and-leave-it” bunch of scripts because we have to stop midway for another team’s work or to check the results of what we’ve just run.

Notebooks

If we used Notebooks, this would be so much easier. I could create the code, save a snippet of the results to the notebooks, and then future executors could see what to expect.

No-books

We don’t use Notebooks.

Plain .sql files for me, it is! 

To ease the documentation burden and have some semblance of tidiness, I created a PowerShell tool that splits it out “all pretty like”.

Format-TextTable

Now, with Read-Clipboard from the ImportExcel module combined with my tool, I can grab results and turn them into a text table that I can add back into the script.

Simple example: we want to save the database name, create date, and compatibility level of the databases on an instance.

… I said simple example – not good or useful example.

SELECT 
	database_name = name,
	create_date,
	compatibility_level
FROM sys.databases;
Picture shows an SSMS screen selecting name, create_date, and compatibility level from sys.databases. Returning 5 rows

Now, let’s copy and paste the results into a comment on that script as a form of “documentation”.

Picture shows an SSMS screen selecting name, create_date, and compatibility level from sys.databases. Returning 5 rows. Underneath the query in a comment are the pasted results that all have different spacing and are not aligned

Now, this time I’m going to copy those results, and run the following code in my PowerShell Core window, before pasting into the comment block.

Read-Clipboard | Format-TextTable | Set-Clipboard
Picture shows an SSMS screen selecting name, create_date, and compatibility level from sys.databases. Returning 5 rows. Underneath the query in a comment are the pasted results that are wrapped in a text table format and aligned under their header name

Given the option, I know which one I’d choose.
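
If you're curious about the shape of such a tool, here's a simplified sketch of the idea. This is my rough approximation only (including the function name), not the actual implementation from the linked source code.

```powershell
# Rough sketch only - the real Format-TextTable lives in the linked source.
# Takes pipeline input and emits the rows as an aligned text table,
# suitable for pasting back into a script comment.
function Format-TextTableSketch {
    param (
        [Parameter(ValueFromPipeline)]
        [psobject] $InputObject
    )
    begin { $rows = [System.Collections.Generic.List[psobject]]::new() }
    process { $rows.Add($InputObject) }
    end {
        # Format-Table handles the column alignment; we just trim blank lines.
        ($rows | Format-Table -AutoSize | Out-String) -split "`r?`n" |
            Where-Object { $_.Trim() }
    }
}

# Usage, mirroring the post's pipeline:
# Read-Clipboard | Format-TextTableSketch | Set-Clipboard
```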

Code

You can pick up the source code here.

Feel free to check it out and add/remove from it.

Set-Clipboard

I am aware that I could spend minutes of my time to evenly align the spaces. And, yes, I know that SSMS can do find-and-replace using simple Regex.
But if I manually structure the output every time, for all different types of columns & data types, with different spacings…

I’d be better off arguing with the company to start using Notebooks.

Until then, this works for me. Hopefully, it will work for you.

And hey, maybe it will help improve your documentation in the New Year. That has a higher chance of happening than my one to improve my DIY skills.