How to Save Money with Your Azure Virtual Machine Demos

Published On: 2020-09-23

Sorry for the spammy SEO title; we've got to pay the bills. Sometimes it's fun to just write some code to solve problems, and not think about the world's larger problems for a few hours. Last week, I learned something new from a client: you can change managed disks in Azure from Premium Storage to Standard Storage if the VM connected to those disks is powered off. That's a savings of nearly $100 per month per disk (assuming 1 TB disks), and since the SQL Server image in the marketplace uses two 1 TB disks, this can trim a good amount from your Azure spend.


This code will loop through each resource group in your subscription and look for resource groups with the tag "Use:Demo". If you aren't familiar with tags in Azure (or AWS), they are a metadata layer that allows you to more easily identify and filter resources. The most common use case is to make your Azure bill easier to navigate. However, you can also incorporate tagging into your management operations, as you see in this example.

After it identifies each resource group with that tag, the script looks for VMs in those resource groups, powers them down if they are running, and then migrates each premium disk on the VM to Standard. I have similar code on GitHub to do the opposite; however, I haven't glammed it up to support the tagging functionality yet.
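The overall flow can be sketched roughly like this. This is a simplified sketch, not the canonical script from DCAC's GitHub, and it assumes the Az.Resources and Az.Compute modules are installed and you have already run Connect-AzAccount:

```powershell
# Find every resource group carrying the Use = Demo tag
$groups = Get-AzResourceGroup -Tag @{ Use = 'Demo' }

foreach ($rg in $groups) {
    foreach ($vm in Get-AzVM -ResourceGroupName $rg.ResourceGroupName) {
        # Disks can only change tier while the VM is deallocated
        Stop-AzVM -ResourceGroupName $rg.ResourceGroupName -Name $vm.Name -Force

        # Convert every premium managed disk attached to this VM to Standard HDD
        Get-AzDisk -ResourceGroupName $rg.ResourceGroupName |
            Where-Object { $_.ManagedBy -eq $vm.Id -and $_.Sku.Name -like 'Premium*' } |
            ForEach-Object {
                $_.Sku = [Microsoft.Azure.Management.Compute.Models.DiskSku]::new('Standard_LRS')
                $_ | Update-AzDisk
            }
    }
}
```

The key constraint is the Stop-AzVM call: the disk SKU swap only succeeds against a deallocated VM, which is why the script powers machines down before touching the disks.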

This code is available at DCAC's GitHub here. To take this a step further, you could create an Azure Automation runbook to deploy this code. To do that, you would need to import the Az.Resources and Az.Compute modules into your Automation account.

Contact the Author | Contact DCAC

Why I’m Not Speaking at PASS Summit and You Shouldn’t Either

Published On: 2020-09-22

If you saw any of my angry tweets last night, it's not just because the Saints weren't good. I've been writing a lot about PASS and C&C, the for-profit event-management firm that runs virtually all of PASS' operations. I personally think C&C imposes a financial burden on the Microsoft Data Platform community that will ultimately kill PASS. I want to run for the board of directors (once you agree to run for the board, you have to agree not to speak or write poorly of PASS, but it doesn't say anything about C&C) to try to return PASS to being a community-oriented organization. PASS has been a great organization, and the connections I have made there have been a great foundation for the career success that I and many others have achieved. The reason I agreed to speak at PASS Summit this year was to help the organization survive, despite my lasting frustrations with C&C.

PASS had a couple of options for doing PASS Summit virtually, and they've failed at every turn. The best option would have been a super low-cost virtual summit, using Microsoft Teams, with pricing kept at a level the average DBA could pay out of pocket. That big reduction in revenue is bad for C&C's business, but frankly, given that there likely won't be a big in-person conference until 2022, C&C should be operating on an austerity budget, since PASS' main income source has been severely constrained.

The Burden on Speakers

I've lost count of how many webinars I've done this year; it's been a lot. 98% have been live, in some cases with some really dicey demos, like the ones I did at EightKB. Doing a webinar or a user group meeting is a decent amount of effort, but no more than an in-person session. However, PASS Summit has asked speakers to record their sessions, and recording a session takes me at minimum 2-3x as long as simply delivering it live. Setting up cameras and lighting and doing small amounts of editing all add up to considerable amounts of time. Additionally, you have to render the video and then upload it to the site. I say this from experience, because I just recorded three sessions for SQLBits.

You might ask why I was willing to record sessions for Bits, but not PASS Summit. That's a good question. SQLBits is a truly community-run event, for the community, by the community. Sure, it can be rough around the edges, but it's a great event, and in general the conference is great to work with. Additionally, SQLBits always pays for speakers' hotel rooms; it's a nominal expense within the cost of an international trip, but it makes you feel wanted as a speaker, and I remember it. PASS Summit, unless you have a preconference session (precon), doesn't offer any remuneration to speakers at all, nor has it ever. All that being said, after recording my Bits sessions, I said, "I'm never doing that for free again." And on top of the work of recording your session, you still have to show up and do live Q&A for it.

Why You Shouldn't Speak at PASS Summit (and Time Zones Are Hard)

PASS has asked speakers to record their sessions just six weeks before the conference. These recordings will only ever be seen by paid attendees of the conference, and possibly PASS Pro members. Speakers received a highly confusing email informing them of this late last night, which included the time and date of their sessions. It wasn't clear whether "live sessions" still needed to be recorded, which is even more confusing. Speakers weren't consulted about the need to record their sessions when the revised speaker agreement went out; this burden has been imposed at the last minute. In fact, I haven't gotten any official communications from PASS about Summit since July, when I received my speaker code. It's not fair to impose this on speakers this late in the process, especially when you aren't compensating them for their time. Also, this is a smaller thing, but we were supposed to get the slide template in July, and it's still not in my inbox.

Precons all start in the speaker's native time zone, which will limit the audience for many precon speakers: some European speakers start as early as 3 AM EST, which means basically no one in North America (PASS' main market) will attend. Most regular conference sessions run 8 AM-5 PM EST, which is probably a decent compromise, but it still greatly limits the West Coast in the morning and other regions of the world, like Asia. There are some evening and overnight sessions, but those are extremely limited compared to EST business-hour sessions. Any schedule for a worldwide event is going to be a compromise, but I feel like some creativity could have been used to better support a virtual audience. For example, Ignite has replays of all its sessions available for broader time zone coverage. As far as I know, no speakers were consulted during the making of this schedule.

Doesn’t This Hurt the Community?

A successful PASS Summit is a good thing for the community. However, under C&C's management the marketing for the event has been poor, and while most other events have moved to free or freemium models, PASS continues to charge a premium. The platform PASS is using hasn't been demoed to speakers or attendees to show how it would have value over a free conference like EightKB or Ignite.

I'm not going to speak at PASS Summit. I'm going to record my session and put it on YouTube, so everyone can watch it. And I'll do a live Q&A to talk about it; it's a really cool session about a project I've worked on to aggregate Query Store data across multiple databases. I challenge other speakers to follow me. The conference is so bad and so expensive because C&C is trying to prop itself up on the back of the community. C&C needs to go away before we can move forward. I was frustrated before, but this Summit fiasco has really pushed me over the top.

Contact the Author | Contact DCAC

Azure SQL Offers Manual Failover for PaaS Resources


Sometimes having the right command available opens up new doors to test things, like a failover, for example.  In this post we will take a look at a new ability that has recently surfaced within the Azure ecosystem to help manage failovers.  Let's jump to it.

High availability is a crucial concern for data professionals, even when operating in a cloud environment such as Azure.  Thankfully, Microsoft Azure Platform as a Service (PaaS) is architected in a way that offers high availability for services right out of the gate.  This helps to ensure that your databases, such as Azure SQL Database and Azure SQL Managed Instance, are always available without you having to lift a finger.  What's even better is that Microsoft now offers the ability to manually trigger a failover for these resources, which gives data professionals more granular control.

Previously, the service managed this aspect, and Microsoft would initiate the failover if needed.  But what if I wanted to test the failover to see how my applications would react?  Would a failover impact my end users?  There wasn't any way to test this, even though the service offers a high level of availability. Thankfully that has changed, and we can now control, to a degree, failovers for Azure SQL Platform as a Service resources, including Azure SQL Database, elastic pools, and SQL Managed Instances.

How can we manage a high availability failover in Azure SQL PaaS?

To initiate a failover, you must use some type of command-line interface: PowerShell, the Azure CLI, or a REST API call.  There is currently no way to manage this through the portal.  In the future we could possibly see such a capability, but I do not know if or when that would come to fruition.  For the purposes of this post, we will look at PowerShell.

There are three PowerShell cmdlets (in the Az.Sql module) that will fail over Azure SQL resources.

Invoke-AzSqlDatabaseFailover

This cmdlet fails over an individual database.  If the database belongs to an elastic pool, the failover will not affect the entire pool, only the database itself.  In testing, failing over a database in an elastic pool did not affect the database's membership in the pool.  Furthermore, if the database is within an Availability Zone, the database will be failed over to a secondary zone, and all client connections will be redirected to the new primary.

It is also worth noting that there is a "-ReadableSecondary" switch that will instead fail over the readable secondary.  Since you could be using a readable secondary to off-load read workloads, it makes sense to test how its failover would impact those workloads.
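A minimal usage sketch, assuming the Az.Sql module is installed and you have an authenticated session; the resource group, server, and database names below are placeholders:

```powershell
# Fail over the primary replica of a single database
Invoke-AzSqlDatabaseFailover -ResourceGroupName 'rg-demo' -ServerName 'sqlserver-demo' -DatabaseName 'db1'

# Fail over the readable secondary instead of the primary
Invoke-AzSqlDatabaseFailover -ResourceGroupName 'rg-demo' -ServerName 'sqlserver-demo' -DatabaseName 'db1' -ReadableSecondary
```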

Invoke-AzSqlElasticPoolFailover

This cmdlet fails over an entire elastic pool, which means all of the databases within the pool will fail over.  This cmdlet will be handy if you are utilizing elastic pools to help minimize Azure costs but still want to test a failover.

Invoke-AzSqlInstanceFailover

Like its two predecessors, this cmdlet will fail over a SQL Managed Instance.  It also has a readable secondary switch that you can use to fail over the readable secondary.
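For completeness, sketches of the elastic pool and managed instance variants; as above, the names are placeholders and an authenticated Az.Sql session is assumed:

```powershell
# Fail over every database in an elastic pool at once
Invoke-AzSqlElasticPoolFailover -ResourceGroupName 'rg-demo' -ServerName 'sqlserver-demo' -ElasticPoolName 'pool1'

# Fail over a SQL Managed Instance (add -ReadableSecondary to target the secondary replica)
Invoke-AzSqlInstanceFailover -ResourceGroupName 'rg-demo' -Name 'mi-demo'
```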

Are there any limitations?

With great power comes great responsibility, and such is the case here.  Given the intrusive nature of a failover within the Azure ecosystem, it stands to reason that you can only fail over a resource every so often.  Currently, at the time of this post, the documentation states you can only fail over every 30 minutes.  However, during testing, I got an error message stating that it's every 15 minutes.

Image of error message stating a 15-minute delay between failovers

I have given feedback to Microsoft regarding this discrepancy, and they were able to get it resolved; the documentation will be updated to reflect a 15-minute duration between failover events.

What else would this help fix?

Even with the highly durable infrastructure that Microsoft has built, there are occasions where hardware issues arise and the service might not fail over on its own.  While failing over to a DR solution (such as active geo-replication or auto-failover groups) would help resolve it, even when things are configured correctly that is more intrusive to the application.  With this ability, customers can now initiate a failover when hardware issues surface without having to invoke their disaster recovery solutions.

Summary

Microsoft continues to enhance and improve the Azure SQL ecosystem.  The ability to control and test failovers for Azure SQL resources provides a deeper level of control for data professionals.  If you are utilizing Microsoft Azure, or even planning on moving to Azure, I highly recommend you get familiar with how this feature works so that you can verify with certainty how your applications will handle a database high availability failover.

© 2020, John Morehouse. All rights reserved.

The post Azure SQL Offers Manual Failover for PaaS Resources first appeared on John Morehouse.

Contact the Author | Contact DCAC
