One of the great advantages of cloud computing is the ability to power off resources that are not in use to save money. Sure, your production database servers should be running 24×7, but that VM or SQL Data Warehouse you are developing against during the week? You can shut it down at 7 PM (1900 for the Europeans reading this) and not start it back up until morning. Azure even recently introduced an auto-shutdown feature for VMs.
Unfortunately, there is no auto-startup feature, but that is easy enough to code using an Azure Automation job.
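A minimal sketch of what such a runbook might look like, using the Az PowerShell module; the resource group name and the use of a managed identity are assumptions you would adjust for your own Automation account:

```powershell
# Illustrative Azure Automation runbook: start the dev VMs each weekday morning.
# The resource group name is an assumption -- adjust for your environment.
param(
    [string] $ResourceGroupName = "Dev-RG"
)

# Authenticate as the Automation account's managed identity (assumed to be enabled)
Connect-AzAccount -Identity

# Start every deallocated VM in the resource group
Get-AzVM -ResourceGroupName $ResourceGroupName -Status |
    Where-Object { $_.PowerState -eq "VM deallocated" } |
    ForEach-Object {
        Start-AzVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name
    }
```

Linked to a daily schedule in the Automation account, this gives you the startup half that the built-in auto-shutdown feature lacks.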
This sounds great, can it walk my dog, too?
Unfortunately, there’s one problem with our awesome budget-saving proposal. Sometimes developers have jobs that run beyond the time they leave the office. For example, last night at one of my clients, a developer had an SSIS package running after he left, and it got killed when the SSIS machine auto-shut down at 7. That isn’t good.
The solution for this is Azure resource locks. You can put a lock on any resource in Azure, and a lock can do one of two things. First, there are delete locks, which simply keep a resource from being deleted; it is not a bad idea to put a delete lock on all of your production resources to prevent any accidental deletion. The second type of lock is a read-only lock, and these are a little more aggressive: you can’t do anything to a resource with a read-only lock. You can’t add a drive to a VM, you can’t resize it, and most importantly, you can’t shut down the resource.
You can use the portal, PowerShell, or CLI to create a lock. It’s a fairly simple construct that can be extremely beneficial. You can get current details for lock creation from the Azure Documentation.
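As a sketch of the PowerShell route, using the Az module's `New-AzResourceLock` cmdlet (the lock, resource, and resource group names below are illustrative assumptions):

```powershell
# Sketch: a delete lock on an entire resource group (names are assumptions)
New-AzResourceLock -LockName "NoDelete" `
    -LockLevel CanNotDelete `
    -ResourceGroupName "Prod-RG"

# A read-only lock on a single VM, which also blocks shutdown and resize
New-AzResourceLock -LockName "NoShutdown" `
    -LockLevel ReadOnly `
    -ResourceName "SSIS-VM" `
    -ResourceType "Microsoft.Compute/virtualMachines" `
    -ResourceGroupName "Dev-RG"
```

A developer with a long-running SSIS job could drop a read-only lock on the machine before leaving, and the auto-shutdown would be blocked until the lock is removed.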
My developers have access to the portal (thanks to role-based access control and resource groups), so I’ve instructed them on how to place locks on resources and how to remove them. As an administrator, you probably want to monitor for locks, to ensure that they aren’t left in place after they are no longer needed.
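That monitoring can be as simple as listing locks on a schedule; a minimal sketch with the Az module (the specific lock and resource names are assumptions):

```powershell
# List every lock visible in the current subscription, so stale ones stand out
Get-AzResourceLock | Select-Object Name, ResourceGroupName, ResourceName

# Remove a lock once the long-running job has finished (names assumed)
Remove-AzResourceLock -LockName "NoShutdown" `
    -ResourceName "SSIS-VM" `
    -ResourceType "Microsoft.Compute/virtualMachines" `
    -ResourceGroupName "Dev-RG" -Force
```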
I had this dream the other week. I was in the big room at PASS Summit, sitting in the audience. I was relaxed, as I thought I was presenting later in the day, when I quickly realized, due to the lack of a speaker on the stage, that I was the next speaker, and the room was full. I was fumbling with my laptop, and I didn’t have a slide deck. In my dream, this talk was a 300-level session on troubleshooting SQL Server, something I feel like I could do pretty easily, you know, with slides. Or a whiteboard.
I woke up before I started speaking, so I’m not sure how I would have handled it. Interpretive dance? I’m a pretty bad dancer. One thing I will mention, and I saw my friend Allan Hirt (b|t) have to do this last month in Boston: really good (and really well-rehearsed) speakers can give a very good talk without their slides. Slides can be a crutch; one of the common refrains in Speaker Idol judging is “don’t read your slides.” It is bad form. Do I sometimes read my slides? Yeah, everyone does occasionally. But when you want to deliver a solid technical message, the best way to do that is by telling stories.
I’m doing a talk next month in Belgium (April 10, in Gent), right before SQL Bits. It’s going to be about what not to do in DR. My slide deck is mostly going to be pictures, and I’m going to tell stories: stories from throughout my career, and some stories from friends. It’s going to be fun, and names will be changed to protect the guilty.
So my question and guidance for you, dear readers, is to think about what you would do if the projector failed and you did not have a whiteboard. I can think of a number of talks I could do without either; in India last year, another instructor and I demonstrated Azure networking by using our bodies as props. What would you do in this situation?
As I mentioned in my post a couple of weeks ago, monitoring the plan cache on a readable secondary replica can be a challenge. My customer was seeing dramatically different performance depending on whether a node was primary or secondary. As amazing as the Query Store in SQL Server 2016 is, it does not allow you to view statistics from a readable secondary. So that leaves you writing XQuery to mine the plan cache DMVs for the query information you are trying to identify.
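Mining the plan cache on the secondary generally means joining the execution-stats DMV to the text and plan functions; a minimal sketch (the `LIKE` filter is an assumption — substitute a fragment of the query you are hunting for):

```sql
-- Sketch: find a query of interest in the plan cache on the secondary.
-- Replace YourQueryFragment with text from the query you are chasing.
SELECT TOP (20)
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
    st.text AS query_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE N'%YourQueryFragment%'
ORDER BY qs.total_elapsed_time DESC;
```

The `query_plan` column returns the showplan XML, which is where the XQuery work comes in.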
My friends at SolarWinds (lawyers: see disclaimer at the bottom of this post) introduced version 11.0 of Database Performance Analyzer (DPA, a product you may remember as Ignite), which has full support for Availability Group monitoring. As you can see in the screenshot below, DPA gives a nice overview of the status of your AG and also lets you dig into the performance on each node.
There are a host of other features in the new releases, including some new hybrid features in their flagship product, Orion. A couple jumped out at me: there is now support for Amazon RDS and Azure SQL Database in DPA, and there is some really cool correlation data that will let you compare performance across your infrastructure. So when you, the DBA, are arguing with the SAN, network, and VM teams about where the root cause of a performance problem lies, this tool can quickly isolate it. With less fighting. These are great products; give them a look.
Disclaimer: I was not paid for this post, but I do paid work for SolarWinds on a regular basis.
I’m still fighting with some challenges around inconsistent performance between a primary and secondary replica, so I’ve been waist-deep in undocumented system views looking at temporary statistics. One of the things I thought about doing was taking advantage of the Force Plan option in the Query Store in SQL Server 2016. If you are not familiar with this feature, it allows you to force a “preferred” execution plan. In this scenario, our query was running in about 20-30 seconds on the primary, and 20-30 minutes on the secondary. The plans were reasonably close, but I wanted to see what would happen if I forced a plan on the primary.
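For reference, forcing a plan is a one-liner against the Query Store procedures; the IDs below are placeholders, which you would look up in the catalog views first:

```sql
-- Sketch: force a preferred plan in Query Store (IDs are placeholders).
-- Look up real query_id/plan_id values in sys.query_store_query and
-- sys.query_store_plan before running this.
EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 7;

-- And to undo it later:
-- EXEC sys.sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;
```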
Primer about the Query Store and Availability Groups
Since readable secondary replicas are read-only, the query store on those secondary replicas is also read-only. This means runtime statistics for queries executed on those replicas are not recorded into the query store; all the stats there are from the primary replica. However, I wasn’t sure what would happen if I forced a plan on the primary: would the secondary replica honor that plan?
Let’s Find Out
The first thing I did was to query the query store catalog views to verify that the plan was forced.
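Something along these lines, against `sys.query_store_plan`, which flags forced plans and records the most recent forcing failure:

```sql
-- Confirm the plan shows as forced in the Query Store catalog views
SELECT p.plan_id,
       p.query_id,
       p.is_forced_plan,
       p.last_force_failure_reason_desc
FROM sys.query_store_plan AS p
WHERE p.is_forced_plan = 1;
```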
I have two copies of the forced plan. If I run an estimated query plan on the primary, I see that the plan is forced. You can see this by looking for UsePlan in the XML of the plan.
I did the same thing on the secondary (in the case of the secondary, we are looking at the actual plan, but it doesn’t matter).
You will note that there is no UsePlan. There are extended events and a catalog view that reflect plan forcing failure (Grant Fritchey wrote about this behavior here). While I wouldn’t expect the catalog view to get updated, I was hoping that the Extended Event might fire. It did not.
The query store, as awesome as it is, doesn’t really do much for you on a readable secondary replica. It does not force plans, nor does it record any of your runtime data there.
Thanks to Grant Fritchey and Erin Stellato for helping with this post!