Finding Cluster Log Errors

Published On: 2019-01-18 By: John Morehouse

Sometimes you know that a problem occurred, but the tools are not giving you the right information. If you ever look at Failover Cluster Manager for a Windows cluster, sometimes that can happen. The user interface won't show you any errors, but you KNOW there was an issue. This is when knowing how to use other tools to extract information from the cluster log becomes useful.

You can choose to use either PowerShell or a command at the command prompt. I tend to lean towards PowerShell; I find it easier to use, and it gives me the most flexibility.

The PowerShell cmdlet Get-ClusterLog is a quick and handy way to get information from a Windows cluster. This cmdlet will create a text log file for all nodes, or for a specific node if one is specified, within a failover cluster. You can even use the cmdlet to specify a certain time span, such as the last 15 minutes, which can be really handy if you know that the issue occurred within that time frame.
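
For example, a minimal call might look like this; a quick sketch, where the destination folder and node name are just placeholders for illustration:

# Dump the cluster log from every node into C:\Temp,
# limited to the last 15 minutes of activity
Get-ClusterLog -Destination "C:\Temp" -TimeSpan 15

# Or target a single node instead of the whole cluster
Get-ClusterLog -Node "SQLNODE1" -Destination "C:\Temp" -TimeSpan 15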

This is great and all; however, it can give you a lot of information to sift through. As you can see, the three log files shown below total roughly 600MB of text! That's a lot!

If you are looking through cluster logs, you are looking for specific messages, most likely error messages. Thankfully, the log categorizes each message, so those error messages can be located.

Since ~600MB is a lot of data to sift through manually, there are easier ways. We can use PowerShell again to extract all of the error messages. This allows us to efficiently locate error messages that might be pertinent to the issue at hand.

So, I wrote a quick and easy script that iterates through a folder of cluster log files and extracts any line that contains the pattern " ERR " out to another file. Notice that the pattern has a space at the beginning and at the end, so strings like "error" appearing inside other words are not returned.

$dir = "C:\users\john\Desktop\Temp\"
$files = Get-ChildItem $dir -Recurse -Include "*.log"

foreach ($file in $files){
$out = "ERR_" + $file.BaseName + ".txt"
select-string -path $file.FullName -Pattern " ERR " -AllMatches | out-file "$dir\$out"
}

Once run, we can look in the destination folder and see that the amount of data has been reduced significantly, from ~600MB down to ~31MB!

This was an easy script to write, and it helped to quickly narrow things down to just the error messages so that I could help resolve the issue the cluster was having. The effectiveness and flexibility of PowerShell continue to shine in situations like this.

If you haven’t started to learn PowerShell, you should. It’ll make your life easier in the long run.

Azure – Backing up to URL

Published On: 2019-01-04 By: John Morehouse

In a previous post, I talked about some of the available storage options that can be used to back up your databases directly to the cloud. Doing this is a really good way to get your backups off-site and replicated in a single motion, depending on the region and redundancy option selected, for minimal cost.

So how do you back up to URL, the cloud? I'll show you. Before we start, this assumes that you have a storage account in Azure set up and configured. If not, go do that first and then come back. You will also need SQL Server 2016 or higher in order to make this work.

Before you start backing up to Azure, you will need to decide whether you want to use block or page blobs for your storage. As I mentioned in my previous post, block blobs are recommended for both cost and performance. However, you can use either.

With storage in Azure, the method used to back up the database will actually determine which storage type is used. In either case, you must use a credential. If you use a credential with a Shared Access Signature, it will be a block blob. If you use a credential with a storage account access key, it will be a page blob.

If you look at the blade for the storage account in the Azure portal, you will see an option for Access Keys as well as one for the Shared Access Signature.

Page Blobs

Azure allows you to have two access keys, which lets you rotate keys as needed when using page blobs. You will need one of these keys in order to successfully back up using page blobs.

 

Like with any keys, you want to safeguard them. If someone has a key and knows the storage account name, they could gain access. In short, rotate your keys regularly.
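
If you prefer scripting over the portal, the keys can also be pulled with the Az PowerShell module. Here is a quick sketch; the resource group and account names are placeholders from this series:

# Retrieve both access keys for the storage account
$keys = Get-AzStorageAccountKey -ResourceGroupName "CreateStorageDemo" -Name "backupsdemo"

# key1 is the first entry; its Value becomes the SECRET for the credential
$keys[0].Value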

Once you have the key, you can create the CREDENTIAL.

CREATE CREDENTIAL [sqlbackups]
WITH IDENTITY = 'backupsdemo',
     SECRET = '<mystorageaccountaccesskey>';
GO

The identity value is the name of your storage account. The secret is the key that you obtained from the portal. Once the credential has been created, you can then back up your database.

BACKUP DATABASE ScratchDemo
TO URL = 'https://backupsdemo.blob.core.windows.net/sqlbackups/ScratchDemo_pageblob.bak'
     WITH CREDENTIAL = 'sqlbackups';
GO

Block Blobs

With block blobs, using a Shared Access Signature (SAS) is a little different. When you configure the signature, you will specify a time frame in which access will be allowed. You can also restrict access to an IP address or range. Once the signature has been created, you'll see a bunch of options like what is shown below:

Keep in mind that when generating the shared access signature, it utilizes one of the access keys; you can specify either key1 or key2 for this process. If you regenerate the key that was used, you will need to regenerate the SAS token and then update the credential. Once the SAS token is generated, a connection string, the token, and various URL endpoints will be displayed, as shown below.

In order to use the signature for backups, we need to copy the SAS token. We will use this token to create the corresponding credential. One thing to note: the SAS token copied from the portal will begin with a '?', which SQL Server does not expect. When you create the credential, remove the '?' from the token.
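
The token can also be generated from PowerShell rather than the portal. Here is a rough sketch with the Az module, reusing the $keys value retrieved earlier; the permissions and expiry are assumptions for illustration:

# Build a storage context from the account name and one of its access keys
$ctx = New-AzStorageContext -StorageAccountName "backupsdemo" -StorageAccountKey $keys[0].Value

# Generate an account-level SAS token scoped to blob storage
$sas = New-AzStorageAccountSASToken -Service Blob -ResourceType Container,Object `
    -Permission "rwdl" -ExpiryTime (Get-Date).AddYears(1) -Context $ctx

# Strip the leading '?' (if present) before using the token as the credential SECRET
$sas.TrimStart('?')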

CREATE CREDENTIAL [https://backupsdemo.blob.core.windows.net/sqlbackups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE', 
SECRET = '<SAS_TOKEN_GOES_HERE>';
GO

The credential name needs to be the URL that points to your storage account and the subsequent container. The secret is the SAS token that we gathered from the portal after generating the signature.

BACKUP DATABASE ScratchDemo
TO URL = 'https://backupsdemo.blob.core.windows.net/sqlbackups/ScratchDemo_blockblob.bak'
GO

Note that it is completely possible to have credentials for both types of storage (block and page) created at the same time. The page blob credential can be named anything; however, the block blob credential must be named the URL of the storage endpoint, including the container name.

Verification

Once the database has been backed up, you can see the files in the Azure portal.
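
You can also verify the files without leaving PowerShell; a quick sketch, reusing the storage context from above and the container created earlier:

# List the backups sitting in the container
Get-AzStorageBlob -Container "sqlbackups" -Context $ctx |
    Select-Object Name, Length, LastModified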

Note that I have these stored in Cool storage, so unless I want to pay a penalty for early deletion, they will sit there for at least 30 days. In a future post, I'll show you how you can remove the files using PowerShell. If you are using Ola Hallengren's maintenance solution (which we recommend), it fully supports backing up to URL.

Summary

Ensuring your backups are safe and sound is a primary tenet of being a DBA. Backing up to URL is a great way to accomplish this. For a minimal cost, your backups can be off-site or even redundant across the country. Keep in mind, however, that even with Azure, things fail. If and when your internet connectivity drops, make sure that you have a backup plan to handle temporary on-site backups until connectivity is restored.

Happy backing up!

Azure – Creating a storage account

Published On: 2018-12-28 By: John Morehouse

Azure offers a lot of features that enable IT professionals to really enhance their environment. One feature that I really like about Azure is storage accounts. Disk is relatively cheap, and that continues to hold true in the cloud. For less than $100 per month, you could get up to 5TB of storage, including redundancy to another Azure region.

Once you have created an Azure account, you will need to go to the Storage Accounts blade from the left-hand Resource menu in the Portal. If you don't see Storage Accounts listed in the favorites already, click on All Services at the top and filter for "storage accounts". Do not select the classic storage account; you want the new and improved storage account. Click on Storage Accounts, then click Add.

On the following screen, the subscription should auto-populate; however, if you have multiple subscriptions, you can change it to the one that you want.

The resource group is a container for all assets within Azure. I tend to think of this as a bowl of things. Things like virtual machines, storage accounts, virtual networks, etc. that are related all go into the same bowl. If you want to delete all of those resources, you simply empty the bowl and everything will be deleted for you. In this example, I'll create a new resource group called "CreateStorageDemo".

Then give the storage account a name. This name must be globally unique because it becomes part of the endpoint that is used to manipulate the storage, so you'll need to choose wisely. In this case, I have supplied a name of "prodsqlbackupdemo".

Next, choose the region for the storage. Usually you want to choose a region that is closest to you to help reduce latency. If I'm in Kentucky and I choose a region in Europe, the latency across the pond is going to be greater than if I were to choose the East US 2 region. You can see the regions for Azure via this map. The East US 2 region is on the east coast of the United States.

For now, select Standard storage. This will give you the most versatility in what you can store; Premium storage currently only allows page blobs. Depending on your needs, Premium storage may or may not meet them.

Leave the account kind set to StorageV2, which is just general-purpose storage. Like the Standard storage selection, this will give you more versatility in the long run.

The replication option determines whether your data is replicated somewhere else. Here's a synopsis of what these options mean:
  • LRS – Locally redundant storage means that your data is redundant within the data center it sits in. There can be multiple data centers within a region, so you don't know exactly where it is, but it'll be redundant within that building. If the building goes away, your data will go away along with it.
  • ZRS – Zone redundant storage gives you greater redundancy, as your data is replicated across data centers within a region. This would allow your data to survive in the event that your initial data center went away unexpectedly.
  • GRS – Geographically redundant storage provides redundancy across a large distance. Think redundancy from Atlanta, GA to Seattle, WA. This spans many miles and allows for redundancy in the event an entire region is affected by some outage, like a hurricane on the east coast. The data is not readable on the secondary.
  • RA-GRS – Read-access geographically redundant storage gives you the ability to read your data from the secondary while being geographically redundant.
The more redundant your storage is, the greater the cost, but if your data is critical, it's worth it. For this example, because I'm just showing you how to spin up an account, I'll select LRS, locally-redundant storage.

The final option is the access tier: Hot or Cool. Each tier has its own requirements. If you need to access the data frequently, select Hot; it is more expensive than Cool with similar latency, but it offers higher availability. If you can leave your data alone for at least 30 days, you can use the Cool tier. The cost is cheaper; however, you can get hit with a penalty if you delete the file(s) before 30 days have passed. Here is a chart provided by Microsoft that shows the differences:
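
If you would rather script the whole thing than click through the portal, here is a rough sketch using the Az PowerShell module with the names and options chosen above (the location string and access tier are assumptions):

# Create the resource group ("the bowl") that will hold the storage account
New-AzResourceGroup -Name "CreateStorageDemo" -Location "EastUS2"

# Create a general-purpose v2 account with locally-redundant storage
New-AzStorageAccount -ResourceGroupName "CreateStorageDemo" `
    -Name "prodsqlbackupdemo" `
    -Location "EastUS2" `
    -SkuName "Standard_LRS" `
    -Kind "StorageV2" `
    -AccessTier "Hot"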

Summary

As you can see, setting up a storage account for any purpose is fairly straightforward and simple. With a couple of mouse clicks, you can easily have storage readily available to upload files to, whether they are family pictures or backups of your production SQL Server databases. Either way, it's a great solution for getting things out of your local data center if needed. Keep in mind that Microsoft is continually pushing updates to the portal, so don't worry if things change slightly over time. Enjoy!

Azure Portal Enhancements – Switching Accounts

Published On: 2018-12-21 By: John Morehouse

More and more, I am impressed with the Azure Portal. Microsoft continues to make enhancements to the user interface (UI) and add new features. One of these new features that I've recently become aware of is the ability to easily switch accounts.

As a consultant, I work with clients that have a presence in Azure.  They can grant me access to their cloud assets in two ways:

  1. They can add one of my email accounts to their subscriptions.
  2. They can grant me an email account within their domain and then add that email to their subscriptions.

Most of my clients select the second option. This means that I can have multiple email accounts that I have to use in order to sign into the portal. With a password manager such as 1Password, this is usually not a big deal; it's more of an annoyance than a headache.

Within the past month or so, Microsoft has updated the portal to allow me to easily switch accounts. Previously, you had to log out of the portal and then log back in.

While I haven’t tested it, I’m assuming that these entries are cached. Once you’ve logged into the portal with an account, it will appear in the drop-down list as shown above.

This really makes using the Azure Portal that much easier, especially for consultants that might have to manage multiple identities.

Still haven’t looked at Azure?  You get $200 worth of credits to try out resources for 30 days.  Sign up now!
