If you do any work in Azure, you’ve probably done deployments using JSON templates. Originally, JSON templates in Azure didn’t support conditional logic: if you put an object in the template, it would be created, no matter what.
Thankfully, that isn’t the case anymore. We now have conditions in JSON in Azure, which, when used correctly, can be very powerful. I’ve only had a couple of opportunities to use this new functionality so far, but it’s all done by adding a single line of code to the object in question, which puts the following within the object:
"condition": "[equals(parameters('newDeployment'), 'true')]",
As long as the condition evaluates to true, the object that contains this line will be created. Let’s look at an example to see how this can be used.
"condition": "[equals(parameters('newDeployment'), 'true')]",
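A fuller sketch of what the sample might look like as a complete resource definition (the availability set name, API version, and property values here are illustrative assumptions, not from the original post):

```json
{
  "condition": "[equals(parameters('newDeployment'), 'true')]",
  "type": "Microsoft.Compute/availabilitySets",
  "apiVersion": "2017-12-01",
  "name": "[parameters('availabilitySetName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "platformFaultDomainCount": 2,
    "platformUpdateDomainCount": 5
  }
}
```

When the parameter is anything other than "true", the deployment simply proceeds as if this resource weren’t in the template at all.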
As we can see in the sample code, as long as the value of the parameter “newDeployment” is “true”, this availability set in Azure will be created. The value is set to $true or $false in PowerShell before it’s passed into the JSON template, so any normal handling can happen in PowerShell beforehand.
Hopefully, others will find this as useful as I did when creating JSON deployment scripts in Azure.
The post JSON documents in Azure and IF/THEN blocks appeared first on SQL Server with Mr. Denny.
Back in the old days of SQL Server 2012, we’d always tell people that creating indexes was half art and half science. And that’s true, but back then we could create some non-clustered indexes (designed correctly, of course) and the system would magically go faster. These days, in modern versions of SQL Server (or SQL Database if you’re using Azure), it can be a lot more art than science.
You have to know the data, at least somewhat, to create truly useful indexes. You need to know what the application is going to do with the data (you don’t need to know what the application looks like, just the queries it runs against the database). If you have a general idea of the queries that are hitting the server, that’ll point you to the kinds of indexes that should be created.
If your application is doing summary reports on large amounts of data, sales by month over the last ten years (as an example), then a Clustered ColumnStore index will probably be your best option for the table.
If the application is working with small amounts of data, more of a data dump than a summary report, then a ColumnStore index probably isn’t the best option. Instead, a non-clustered index would probably serve the application, and therefore performance, better.
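The two approaches above can be sketched in T-SQL (the table, column, and index names here are made up purely for illustration):

```sql
-- Large summary scans (e.g. sales by month over ten years):
-- a clustered columnstore index over the whole table.
CREATE CLUSTERED COLUMNSTORE INDEX cci_SalesHistory
    ON dbo.SalesHistory;

-- Small, targeted reads: a non-clustered index on the filter column,
-- with the returned columns included to cover the query.
CREATE NONCLUSTERED INDEX IX_SalesHistory_OrderDate
    ON dbo.SalesHistory (OrderDate)
    INCLUDE (CustomerID, TotalDue);
```

In practice you’d pick one approach or the other for a given table, based on which workload dominates.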
It just depends on what the application needs and what it does. As a consultant, I have to drop into the company, review the application and quickly figure out what kind of application it is, and what kind of indexes need to be added (as well as how to define the indexes). It’s a hard job, but it’s a fun job, and I’m thrilled that I get the chance to do it.
But when push comes to shove, indexes are still an art form to some degree; and I think they always will be to some extent. We have some great tools like pssdiag and the query store that can help with this, but when you come down to it, it’s still people looking at a server and making it faster.
The post Indexing in SQL Server is becoming more of an art form with each new version appeared first on SQL Server with Mr. Denny.
In just a couple of short weeks, I’ll be presenting an all-day session on Microsoft Azure. The session is titled “Designing Azure Infrastructure for Data Platform Projects” and will be held on the 21st of May, 2018. This session will cover all the components of the Azure platform, including touching on the database platforms.
Hopefully, everyone who’s interested in Azure can come out and see this great session on the various components of Azure. The session will be in Lingen, Germany on the 21st, and the SQL Grillen conference will be the next day on the 22nd. I’ll see everyone there in Lingen.
The post Azure session taught in Germany in just a couple of weeks appeared first on SQL Server with Mr. Denny.
This question actually comes up a lot. With the new managed storage it’s less of an issue, as managed storage takes care of high availability for you when you build your storage accounts. However, if you are still manually building storage accounts, or you have older systems in Azure that use either classic storage accounts or manually created storage accounts in Resource Manager (basically anything that isn’t managed storage), then you have to take care of this yourself. Doing so is fairly easy, but it requires a little checking and manual work.
Setting this up requires an understanding of how the Azure backend works. You’ve probably heard by now that Azure keeps three copies of all data which is uploaded to it. This is true, but that isn’t the HA that I’m talking about in this post. What I’m talking about here is having two members of an availability group, web farm, etc. Azure storage accounts are created on what is known as stamps. A stamp is basically the physical hardware that sits under VMs and storage accounts. The trick is to make sure that the storage accounts are on different stamps. Now, there’s no way to control this in the Azure portal, or with PowerShell. The upside to all this is that it’s still pretty easy to check. All it takes to see if your storage accounts are on different stamps is a DOS window.
Each storage account has a public name and an IP address to go with it. Each stamp has an IP address. Looking at the IP of the storage account will tell you if storage accounts are on different stamps or the same one. If the storage accounts are on different stamps, then the VMs that use those storage accounts will be highly available. Looking at the storage accounts behind the two MySQL servers that run the www.dcac.co website (see the image), we can see that the IP addresses for the storage accounts are different, which means that they are on different stamps.
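The check itself can be done with nslookup from that command prompt (the storage account names below are placeholders for your own):

```
nslookup mystorageacct1.blob.core.windows.net
nslookup mystorageacct2.blob.core.windows.net
```

If the two commands return different IP addresses, the accounts are on different stamps; if the addresses match, they’re on the same stamp.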
If the IP addresses match when you check the storage accounts, fixing the problem is pretty straightforward: you have to delete the storage account and recreate it. Odds are it’ll be created on a new stamp; if it isn’t, delete the storage account, wait a few minutes, and create it again. That should cause the storage account to be created on another stamp and give you the high availability that you’re looking for.
The post How do I get High Availability in my Azure Storage Accounts? appeared first on SQL Server with Mr. Denny.