Using PowerShell for SQL Saturday Logos

Published On: 2019-09-13 | By: John Morehouse

Out of necessity are born the tools of laziness.  This is a good thing.  I have found that organizing and running a SQL Saturday event is a great way to create scripts and processes that make an organizer's life that much easier.  The 2019 Louisville SQL Saturday prompted me to write a quick script that downloads all of my sponsor logos into a single folder.

The script is written in PowerShell simply because PowerShell has great versatility and it suited my needs.  The goal was to download the logos and send them off to one of my volunteers, who was going to put them on signage and get them printed.  At the time I had no easy way to do this without manually pulling each one off the site.

Let’s get to it!

The Script

First, I need to set some variables to make the script easier to use.  I could turn these into parameters for reusability; however, since I only run this once a year, plain variables are acceptable for my needs.

The event number corresponds to the particular SQL Saturday event you want to download the logos for.  Note that this works for any event, not just the one you might be organizing.

$eventNum = "883"                        #SQL Saturday event number (883 = Louisville 2019)
$outputfolder = "c:\temp\SponsorLogos"   #destination folder for the downloaded logos
$compress = $True                        #zip up the folder when finished

Next, I need to fetch the XML feed for the event.  The XML feed has a wealth of information in it, including the image URLs for all of the sponsors.

I will also get the sponsor name, the level they sponsored at (that's the label field), and the URL for their logo.

#let's get the XML from the SQL Saturday website
[xml] $xdoc = (Invoke-WebRequest -Uri "http://www.sqlsaturday.com/eventxml.aspx?sat=$eventNum" -UseBasicParsing).Content

#we only need a subset of each node of the XML, mainly the sponsors
$sponsors = $xdoc.guidebookxml.sponsors.sponsor | select name, label, imageURL
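
For reference, each sponsor entry in the feed looks roughly like the fragment below.  The element names match what the script reads; the values are made up, the exact layout may differ slightly, and the real feed carries more fields than shown here.

<guidebookxml>
  <sponsors>
    <sponsor>
      <name>Example Sponsor Inc</name>
      <label>Gold</label>
      <imageURL>http://www.sqlsaturday.com/images/sponsors/example.png</imageURL>
    </sponsor>
  </sponsors>
</guidebookxml>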

We want to ensure that our output folder (the path from the variable above) exists; otherwise the process won't work.  If the folder doesn't exist, it will be created for us.

If creating the folder fails, a CATCH block will capture the error and stop the script with a meaningful message.

#let's make sure the folder exists
"Checking folder existence...."
if (-not (test-path $outputFolder)) {
    try {
        New-Item -Path $outputFolder -ItemType Directory -ErrorAction Stop | Out-Null #-Force
    }
    catch {
        Write-Error -Message "Unable to create directory '$outputFolder'. Error was: $_" -ErrorAction Stop
    }
    "Successfully created directory '$outputFolder'."
}else{
    "Folder already exists...moving on"
}

Now that I have a destination folder, I can begin to download the logos into the folder.   In this block, I will loop through all of the sponsors.

First, I need to do some cleanup on the sponsor names.  Some sponsors have commas or “.com” in their name, and I wanted to use the sponsor name as part of the file name so I would know which logo belonged to whom.  Once the cleanup is done, I use the INVOKE-WEBREQUEST cmdlet to fetch the file from the respective URL and write it out to the destination directory.

#give me all of the logos
foreach ($sponsor in $sponsors){
    $filename = $sponsor.imageURL | split-path -Leaf

    #get the file name and clean up spaces, commas, and the dot coms
    $sponsorname = $sponsor.name.replace(" ", "").replace(",","").replace(".com","")
    Invoke-WebRequest -Uri $sponsor.imageURL -OutFile "$outputfolder\$($sponsorName)_$($sponsor.label.ToUpper())_$($fileName)"
}

Since I will be sending this to a volunteer to use, I wanted the process to automatically zip up the folder to make things easier.  I'll give the archive the same name as the folder, so I can use the SPLIT-PATH cmdlet to get the leaf of the directory path, which is the folder name.

Using the COMPRESS-ARCHIVE cmdlet, I can then compress the folder and place the resulting archive in that same folder.

# zip things up if desired
If ($compress -eq $true){
    $filename = $outputfolder | split-path -Leaf
    compress-archive -path $outputFolder -DestinationPath $outputfolder\$($filename).zip
}

Finally, I wanted the process to open the folder when it was done.  This is simply accomplished by calling “explorer” along with the destination folder name, which launches the folder in Windows Explorer.

# show me the folder
explorer $outputfolder

Summary

PowerShell is a great way to quickly and easily accomplish tasks, whether that is working with online data or manipulating things on your local computer.  This script made my life as a SQL Saturday event organizer that much easier.

You can also find the full script on my GitHub Repository.

Enjoy!

© 2019, John Morehouse. All rights reserved.



Moving to Azure SQL Database Serverless

Published On: 2019-08-30 | By: John Morehouse

In a previous post, I discussed the public preview of Azure SQL Database Serverless.  This is a newer product released by Microsoft for the Azure ecosystem.  Moving to this new product is really easy to do, and I thought I'd show you how.

Moving the Database

From the screenshot below, you can see that I have a standard Azure SQL DB, Demo1, on the sqldbdemo-east server.  To move this to serverless, first click Configure, located in the left-hand menu under Settings.

On the next screen, you will need to make some decisions about how you want the database configured in this new compute tier.  If the database is currently in the DTU pricing model, you will need to move to vCore-based pricing, since that is the only model serverless supports.

Once we are in the vCore-based pricing, we can continue to configure the database.

The serverless compute tier is only available in the General Purpose service tier, so if necessary you will need to first click on that service tier to see the serverless option.  Once you are in the appropriate tier, just select Serverless compute as shown in the image below.

Next is the compute generation.  For this tier you are limited to Gen5, which offers up to 16 vCores and a maximum of 48 GB of memory.  You can also select the maximum number of vCores you want, up to that limit of 16, and it is easily adjusted with the slider.  In the image below, you will see that I have set this database to a maximum of 4 vCores.

You can also specify a minimum number of vCores to ensure that the database always has at least that much compute available.  I've set it to a minimum of 1 vCore.

Next, we can adjust the auto-pause delay.  This is a valuable feature that automatically pauses the database after a period of inactivity.  You can set the delay anywhere up to 7 days and 23 hours, which provides quite a bit of flexibility.  Pausing the database helps save costs, because while the database is paused you are only charged for storage, not compute.

Next, select the appropriate data max size.  Notice that you can go up to 1 terabyte, which gives you quite a bit of room to grow.

Finally, since this is still a public preview, you must accept the preview terms.  Once you have done that, click the Apply button and the scaling process will commence.
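
If you prefer scripting the change over clicking through the portal, the same move can be made with the Az.Sql PowerShell module.  Treat the snippet below as a minimal sketch: the resource group name is a placeholder, the values mirror the choices made above, and parameter names can vary between module versions, so check Get-Help Set-AzSqlDatabase before running it.

#uses the Az.Sql module; sign in first if you have not already
Connect-AzAccount

$resourceGroup = "sqldbdemo-rg"   #placeholder resource group name
$serverName    = "sqldbdemo-east"
$databaseName  = "Demo1"

#move the database to serverless: General Purpose, Gen5, 4 vCore max, 1 vCore min,
#auto-pause after 60 minutes of inactivity (example value)
Set-AzSqlDatabase -ResourceGroupName $resourceGroup `
    -ServerName $serverName `
    -DatabaseName $databaseName `
    -Edition GeneralPurpose `
    -ComputeModel Serverless `
    -ComputeGeneration Gen5 `
    -VCore 4 `
    -MinimumCapacity 1 `
    -AutoPauseDelayInMinutes 60

#confirm the new service objective (serverless objectives look like GP_S_Gen5_4)
Get-AzSqlDatabase -ResourceGroupName $resourceGroup -ServerName $serverName -DatabaseName $databaseName |
    Select-Object DatabaseName, Edition, CurrentServiceObjectiveName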

Once the scaling has completed, the database shows up in SSMS just as it did before.

Summary

The process to move an Azure SQL DB from the provisioned compute tier to the new serverless compute tier is quick and easy.  This new tier might be a good fit for your particular workloads, and I highly suggest you take a look at it.  Keep in mind, however, that it is still in public preview, and as such I would not use it for production workloads until it is generally available.  It is a great fit for non-production environments, especially those that are not used 24×7, as you can save costs by letting them pause!

© 2019, John Morehouse. All rights reserved.



Azure SQL DB Serverless

Published On: 2019-08-23 | By: John Morehouse

Microsoft continues to expand the Azure ecosystem rapidly.  One of the most recently delivered improvements is the Azure SQL Database serverless option.  The new serverless model, which is currently in public preview, provides a compute tier for a single database that scales dynamically with the workload.  This makes it a good fit for single databases that have unpredictable usage patterns.  Previously, you provisioned the compute for your Azure SQL Database yourself, which gave you granular control over scalability.  With the serverless tier, scaling is controlled by the service itself and happens automatically when needed.

Compute Tier

The compute tier is a little different from run-of-the-mill Azure SQL Databases.  Until recently, serverless allowed a maximum of 4 vCores.  However, on August 15, 2019, Microsoft increased that limit to 16 vCores and 48 GB of memory, a 4x increase that could really help with your workload.  You can also designate a minimum of as little as 0.5 vCores.

Azure SQL Database serverless is only offered in the General Purpose service tier.  This means you cannot currently place a serverless database into the Business Critical or Hyperscale service tiers.  You are also locked into the vCore model, so no Database Transaction Units (DTUs) for you.

Microsoft's documentation (linked below) offers a good comparison chart of when you would use serverless compute versus provisioned compute.

Cost Saving Feature

One of the major perks of this new serverless offering is the ability to pause the database to help save on costs.  In fact, there is a setting that pauses the database automatically after a duration of, say, 60 minutes.  If there is no activity on the database within that time frame, the database will pause itself, and you will only be charged for the storage you are using; no compute fees are incurred while it is paused.

Note that you can currently specify an auto-pause delay of up to 7 days and 23 hours.  This helps to provide you with a good deal of flexibility.

To summarize some overall costs:

  • The cost for a serverless database is the summation of the compute cost and storage cost.
  • When compute usage is between the min and max limits configured, the compute cost is based on vCore and memory used.
  • When compute usage is below the min limits configured, the compute cost is based on the min vCores and min memory configured.
  • When the database is paused, the compute cost is zero and only storage costs are incurred.
  • The storage cost is determined in the same way as in the provisioned compute tier.

(note: the above was borrowed directly from https://docs.microsoft.com/en-us/azure/sql-database/sql-database-serverless)
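
To make those bullet points a bit more concrete, here is a rough back-of-the-envelope calculation in PowerShell.  The unit prices are placeholders, not actual Azure rates, so plug in the numbers from the pricing calculator for your region before trusting the output.

#placeholder unit prices - check the Azure pricing calculator for real rates
$vCoreSecondPrice    = 0.000145   #dollars per vCore-second of compute
$storageGbMonthPrice = 0.115      #dollars per GB per month of storage

#assume the database is active 6 hours a day at an average of 2 billed vCores
#and is auto-paused the rest of the time during a 30-day month
$activeSeconds  = 6 * 3600 * 30
$avgBilledVCore = 2
$dataSizeGb     = 25

$computeCost = $activeSeconds * $avgBilledVCore * $vCoreSecondPrice
$storageCost = $dataSizeGb * $storageGbMonthPrice

"Compute: {0:C2}   Storage: {1:C2}   Total: {2:C2}" -f $computeCost, $storageCost, ($computeCost + $storageCost)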

Of course, some features don't work with auto-pausing.  If you use any of the following, sorry, but you won't be able to use the auto-pause feature:

  • Long-term backup retention
  • Data Sync
  • Geo-Replication

Summary

This new option further enhances the richness of the Azure offering.  If you need a SQL Server relational database and don't need the higher-level service tiers, serverless might be a really good fit for you.  Being able to pause the database automatically is icing on the cake that helps you save costs.  Take it for a spin and see what you think; just remember that it is still in public preview, so things could potentially shift.

© 2019, John Morehouse. All rights reserved.



Finding the Best Azure Datacenter

Published On: 2019-07-26 | By: John Morehouse

When dealing with cloud technology, there is a phrase that everybody should remember: Physics Always Wins.  There isn't any way to get around the speed of light (2.998 x 10^8 m/s) other than getting as close to your cloud resources as possible.  I believe Microsoft does a great job of placing data centers in strategic regions to get you to your resources as quickly as possible.  When it comes to latency, it's pretty simple: the further you are from the data center, the higher the latency; the closer you are, the lower the latency.
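
As a rough back-of-the-envelope check: light in fiber travels at roughly two-thirds of that speed, about 200,000 km/s, so distance alone puts a floor under your latency no matter how good the network is.  The numbers below are approximations, but they show the idea.

#approximate straight-line distance from Louisville, KY to the Virginia data centers
$distanceKm = 800

#light in fiber moves at roughly two-thirds the speed of light in a vacuum
$fiberSpeedKmPerSec = 200000

#theoretical minimum round trip, ignoring routing hops, switching, and congestion
$minRoundTripMs = (2 * $distanceKm / $fiberSpeedKmPerSec) * 1000
"Theoretical minimum round trip: {0:N1} ms" -f $minRoundTripMs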

Did you know there is a website that will show you the latency from your given IP address location to various Azure regions around the world? Welcome to AzureSpeed.com

AzureSpeed.com is created and maintained by Blair Chen (T) and does not appear to have any official affiliation with Microsoft.  Regardless, it is a useful tool to keep in your tool belt when working with Azure.

Closest Datacenters

Since we need to work around physics, AzureSpeed.com will tell us the closest Azure data center to you, based on your public IP address.  For example, I am in Kentucky, so my closest data center is almost always located in Virginia.  As you can see, the latency (the numbers on the right) for East US 2 is roughly half that of Central US.  If I were looking to put resources into Azure and my facility were located in Louisville, Kentucky, East US 2 would be the region of choice.

Azure Latency

The site also has what I think is a good visual representation, depicted below, of what the latency looks like up to 500 milliseconds.  It does not, however, identify on the moving visual which data center is represented by which line or color.  Beyond the physical limitation of the speed of light, there are other factors that can affect the actual latency: network congestion, down lines somewhere between your location and the data center, or even someone pulling the wrong cable in the data center itself.  OK, so that last one is probably unlikely, but it happens; we are all human.

Latency Test

The latency test will give you the average latency in milliseconds for a handful of regions.  In the chart shown below, you can see that at the moment I grabbed the screenshot, Central US (Iowa) had lower latency than Virginia did.  These are the values displayed in the Azure Latency visual.  While you usually want your Azure resources in the data center closest to you, if you have data consumers spread all over the world, this chart can help you determine the best place to put things.
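
If you want a quick, scriptable version of the same idea, you can time an HTTPS request to an endpoint in each region you care about.  The sketch below is PowerShell; the endpoint URLs are placeholders you would swap for public endpoints (storage accounts, for example) that actually exist in those regions, and a single request is only a crude approximation of the averages AzureSpeed.com reports.

#placeholder endpoints - replace with real public endpoints in the regions you care about
$endpoints = @{
    'East US 2'  = 'https://mystorageeastus2.blob.core.windows.net'
    'Central US' = 'https://mystoragecentralus.blob.core.windows.net'
}

foreach ($region in $endpoints.Keys) {
    #time a single HTTPS request; an error response (403, 404) still measures the round trip
    $elapsed = Measure-Command {
        try { Invoke-WebRequest -Uri $endpoints[$region] -UseBasicParsing -TimeoutSec 10 | Out-Null }
        catch { }
    }
    '{0,-12} {1,6:N0} ms' -f $region, $elapsed.TotalMilliseconds
}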

Summary

The Azure ecosystem is large and vast.  Even so, there are a multitude of tools available to make you successful: migration assistants, pricing calculators, and even latency tests like the one I've shown here.  If you are looking to make the leap to Azure, be sure to check out AzureSpeed.com and figure out your closest data center.  After all, physics always wins, but knowing is half the battle!

© 2019, John Morehouse. All rights reserved.

