Configuring SSRS when ReportServer databases are in an AG

Published On: 2017-06-28

Running SSRS with the ReportServer and ReportServerTempDB databases in an Availability Group has been supported for a while now. If, however, you want to use scheduled delivery of reports, you need to do some extra work to make this function correctly.

https://www.flickr.com/photos/gotcredit/33371980600/in/


The first thing you need to do when setting up the SSRS instance is to fail the AG over from replica to replica so that the correct permissions get set up within the msdb database on each node (or manually set up the permissions for SSRS within the msdb database). The easy solution is to make the SSRS service account a member of the db_owner role in msdb. This is needed because the permission that the SSRS Configuration Manager grants the SSRS account apparently doesn't include the ability to add a category to the msdb database, which it needs to do. db_owner was the easy fix in our case, and the risk of SSRS doing something it shouldn't in msdb is minimal.
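If you'd rather grant the role membership by hand rather than failing the AG around, something along these lines, run against each replica, is one way to do it. The server name SQLNODE1 and the service account CONTOSO\ssrs_svc are placeholders; substitute your own replica names and SSRS service account.

```powershell
# Run once per AG replica (repeat with -S SQLNODE2, and so on).
# CONTOSO\ssrs_svc is a placeholder for the SSRS service account.
sqlcmd -S SQLNODE1 -d msdb -Q "IF USER_ID('CONTOSO\ssrs_svc') IS NULL CREATE USER [CONTOSO\ssrs_svc] FOR LOGIN [CONTOSO\ssrs_svc]; ALTER ROLE db_owner ADD MEMBER [CONTOSO\ssrs_svc];"
```

Because msdb is not part of the AG, this has to be run on every replica individually.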

That’ll allow SSRS to create all the needed jobs on each node of the Availability Group.

The next thing you need to do is configure the Availability Group to restart SSRS after the AG fails over. This needs to happen because SSRS doesn't recreate the jobs when it reconnects to the AG; it only does this at SSRS startup. If you are running SSRS in a farm (which you should be for high availability) you only need to restart SSRS on one node of the farm.

This can be easily done with a three line batch file, which you configure as a generic resource in failover cluster manager within the resource group which hosts the availability group. Personally I put this script in c:\scripts with some other stuff.

sc \\ssrs-server stop ReportServer > c:\scripts\restart_ssrs.txt
sc \\ssrs-server start ReportServer >> c:\scripts\restart_ssrs.txt
pause

The first line stops SSRS, the second starts it. The third line pauses the batch file so that it doesn't complete and exit. If it did, failover clustering would see that as a failure, and a failure would trigger a failover (or, if you disable that, a failed resource). And since I like my clusters all green and happy, I pause the batch file so it sits there forever. The > c:\scripts\restart_ssrs.txt redirects the output to a text file for logging.
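Registering the batch file as a clustered resource can also be scripted instead of clicking through Failover Cluster Manager. This is a sketch using the FailoverClusters PowerShell module; the resource name "Restart SSRS", the group name "AG-Group", and the script path are assumptions to replace with your own.

```powershell
# Create a Generic Application resource in the group that hosts the AG,
# pointing it at the restart script (names here are placeholders).
Add-ClusterResource -Name "Restart SSRS" -ResourceType "Generic Application" -Group "AG-Group"
Get-ClusterResource "Restart SSRS" | Set-ClusterParameter -Name CommandLine -Value "c:\scripts\restart_ssrs.bat"
Start-ClusterResource "Restart SSRS"
```

Because the resource lives in the same group as the availability group, it starts (and therefore runs the script) on whichever node the AG lands on after a failover.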

Yes, I could have done this with PowerShell, but it wouldn't have gotten me anything except more complex code to check whether the service is down and then attempt to start it. SC works just fine for this.

The last thing you need to do is make the computer accounts for all the member nodes of the availability group members of the local Administrators group on the SSRS server (or otherwise grant these accounts rights to restart services). This is needed so that the service can be restarted. It needs to be the computer accounts of the member nodes, not the computer account of the cluster, as the batch file runs under the context of the computer.
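On the SSRS server that can look like the following; CONTOSO and the node names are placeholders for your own domain and cluster nodes.

```powershell
# Run on the SSRS server; one line per cluster node computer account.
# The trailing $ is what designates a computer account.
net localgroup Administrators "CONTOSO\SQLNODE1$" /add
net localgroup Administrators "CONTOSO\SQLNODE2$" /add
```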

After that, when an AG failover occurs your SSRS will restart within a minute or two and create any missing jobs.

Now the downside here is that you’ll have a bunch of jobs failing on the secondary replicas, but I’ll take that over a non-working configuration.

Denny

The post Configuring SSRS when ReportServer databases are in an AG appeared first on SQL Server with Mr. Denny.

Azure Internal Load Balancers, Availability Groups, and Multiple Front End IPs

Published On: 2017-06-21

Recently Microsoft Azure introduced the ability to have multiple front end IP addresses on an Internal Load Balancer. This allows Failover Cluster Instances to have multiple instances per cluster, and Availability Group deployments to have multiple Availability Groups per deployment.

https://www.flickr.com/photos/torkildr/3462607995/

But with this improvement comes the need to be very careful about how our probe ports are setup on things. Otherwise things aren’t going to work exactly as we expect them to.  And we’ve seen this on a couple of different deployments with customers now.

Much of this problem comes from the fact that all the sample code that’s out there and talks about how to setup an FCI or AG assumes a single front end IP and a single probe, as that’s all that was available for years.

Thankfully solving this is actually pretty straightforward. We need a separate probe configured in the Load Balancer for each front end IP address that we configure in the load balancer. That means that each IP that we have configured in Failover Cluster Manager needs its own probe as well. If you don't set up a separate probe for each clustered IP address, then the only way to ensure that everything works as expected is to keep all of those clustered IP addresses on the same node of the Windows cluster.

What I like to do is something like this.

Name      IP              Probe Port
SQL01     192.168.10.10   60000
SQL01AG1  192.168.10.11   60001
SQL01AG2  192.168.10.12   60002
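Creating the extra probes and tying them to the right front end IPs can be scripted as well. This is a sketch using the Azure CLI; the resource group, load balancer, backend pool, and frontend IP names are assumptions, and the probe port follows the layout above.

```powershell
# One health probe per front end IP (MyRG, MyILB, etc. are placeholders).
az network lb probe create -g MyRG --lb-name MyILB -n probe-ag1 --protocol tcp --port 60001
# Tie the AG listener's load balancing rule to its own front end IP and probe.
# Floating IP (Direct Server Return) is required for AG listeners behind an ILB.
az network lb rule create -g MyRG --lb-name MyILB -n rule-ag1 --protocol tcp --frontend-port 1433 --backend-port 1433 --frontend-ip-name fe-ag1 --backend-pool-name MyPool --probe-name probe-ag1 --floating-ip true
```

Repeat the pair of commands for each clustered IP, giving each its own probe port.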

The reason we have to do this is pretty basic; you just have to remember how the probe ports work. The probe port that you configure in Windows isn't listening on the clustered IP address. It's listening on 0.0.0.0, meaning all IP addresses. Also, the Internal Load Balancer doesn't connect to the probe port on the clustered IP, but instead on the cluster node's IP address.

We can see this if we run netstat -a -n on the node that's hosting the Availability Group replica. In this case the IP of the VM is 10.0.0.8 and the probe for the AG is listening on port 59999. The SQL Server Availability Group is listening on 10.0.0.14, and the cluster root of the Windows Cluster is listening on 10.0.0.13 with its probe on 59998.

Before we reconfigured this cluster in this way, if the cluster root and the AG were running on the same node then everything was fine. However, if either one moved to a different node of the cluster then connectivity to SQL became "interesting". SSMS would only draw half the folders it is supposed to show, the application became impossible for users to use, etc. Basically everything broke and fell apart.

Once we realized the problem, the fix was pretty straightforward: change the probe configuration for the cluster root (10.0.0.13 in this case) to another port, configure the Internal Load Balancer with another probe, then configure the load balancing rules for the cluster root to use the new probe. Once all that was done, everything started working as expected.
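Moving the cluster root's probe onto its own port looks roughly like this from PowerShell on a cluster node; the resource and network names, the IP, and the port are assumptions matching the example above, so check yours with Get-ClusterResource and Get-ClusterNetwork first.

```powershell
# Point the cluster root's IP resource at its own probe port (59998 here).
# "Cluster IP Address" and "Cluster Network 1" are the usual default names.
Get-ClusterResource "Cluster IP Address" | Set-ClusterParameter -Multiple @{
    "Address"    = "10.0.0.13"
    "ProbePort"  = 59998
    "SubnetMask" = "255.255.255.255"
    "Network"    = "Cluster Network 1"
    "EnableDhcp" = 0
}
# The IP resource has to be cycled for the new probe port to take effect.
Stop-ClusterResource "Cluster IP Address"
Start-ClusterResource "Cluster IP Address"
```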

These sorts of configuration problems can creep up on you. This environment was six months old when the problem first showed up, because until then the cluster root and the AG had always happened to reside on the same node. But recently that changed: the third node of the cluster (we have an extra machine in the config so that we are using a majority node set quorum configuration) lost network, then got it back. This caused a cluster vote, and this machine became the cluster manager, so the cluster root (10.0.0.13 and its network name) moved to this node. That left the probe port open on two machines, and all hell broke loose as half the SQL connections were being directed to a server that didn't even have SQL installed on it, much less have the Availability Group listener running on it.

The probe is very important to configure for everything that you want to load balance; it just needs to be configured correctly.

Denny

The post Azure Internal Load Balancers, Availability Groups, and Multiple Front End IPs appeared first on SQL Server with Mr. Denny.

Making WordPress authenticate off of Azure Active Directory

Published On: 2017-06-14

https://www.flickr.com/photos/120360673@N04/13856199984/

Yep, you read that title correctly. You can use Azure Active Directory to handle authentication to your self-hosted WordPress website.

It's shockingly easy to set up as well. You'll need this WordPress plugin to handle the actual authentication. It's not in the WordPress gallery; instead it's on GitHub, so you'll have to download it from there and install it.

Once it's installed you'll need to log into the Azure portal and register a new application with Azure Active Directory. Then copy the keys from Azure Active Directory into the settings page of the plugin.

Now don't worry, there are instructions on the GitHub page for the plugin that'll walk you through setting it all up. It frankly took me all of about 5 minutes to get it set up and working.

Now if your WordPress is configured to prompt you to do a math problem when you log in, you'll need to disable the Protect feature in the JetPack Security settings.

The benefits of using Azure Active Directory for your authentication are enormous: robust two-factor authentication, corporate password policies, and granting access via AD group membership, as well as the fact that your login process is now backed by the worldwide Azure Active Directory infrastructure.

If you have WordPress and have access to Azure Active Directory I would HIGHLY recommend setting up this authentication process.

Denny

The post Making WordPress authenticate off of Azure Active Directory appeared first on SQL Server with Mr. Denny.

Azure Import Export Services – Notes from the field

Published On: 2017-06-07

One of my clients recently had the need to upload tens of terabytes of data into Azure Blob Storage. This gave us the perfect opportunity to use the Azure Import/Export Service, which allows you to ship hard drives to Microsoft with your data on them; the data is then copied up to Azure Blob Storage, and you can move it around from there as needed. All of this is done using the Azure Import/Export Tool, a command line tool which has a few quirks.

The biggest quirk that we ran into getting the Azure Import/Export Tool working was that it doesn't support quotes in the parameters unless there are spaces in the folder or file names. So the fix here: don't use spaces, and don't use quotes. For example, at first I was trying to use the following in my command line.

/logdir:"C:\UploadFiles\upload\"

But what I needed to use was actually this.

/logdir:C:\UploadFiles\upload\

That's a pretty small difference, but an important one. And the error message that the tool gives doesn't say that you have invalid characters in the logdir parameter; it just tells you that you have invalid characters in the path or file name. That means any of the paths or file names, and you have to specify several of them, including the journal file (/j), the log folder (/logdir), the drive configuration layout for the drives it's writing to (/InitialDriveSet), and the folders to pull onto those drives (/DataSet).

Another annoying thing was that WAImportExport.exe didn't like having /DataSet at the right end of the command line; it only liked it at the left hand side. Now, this was before I figured out the double quotes issue, and that may have been part of it, but with double quotes on all the parameters and with the DataSet parameter on the right hand side, it complained that there was no DataSet parameter provided.
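Putting those lessons together, a working invocation ended up shaped roughly like this. The journal file, session id, folder paths, and storage account key are all placeholders for your own values, and /DataSet is kept toward the front of the line.

```powershell
# No quotes around paths (no spaces in them), /DataSet kept on the left.
# session1 and [StorageAccountKey] are placeholders for your own values.
WAImportExport.exe PrepImport /DataSet:C:\UploadFiles\dataset.csv /InitialDriveSet:C:\UploadFiles\drives.csv /j:C:\UploadFiles\journal.jrn /id:session1 /logdir:C:\UploadFiles\upload\ /sk:[StorageAccountKey]
```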

When configuring the DataSet CSV file, you need to put the container name in all lowercase.

Overall I was pretty impressed with how the process worked. The CSV files were pretty easy to put together. The DataSet file that you provide just tells the application what folders to move where. In this example I'm moving the files from C:\something to the "something" container in my blob storage account (you can use a network share instead of a local folder).

BasePath,DstBlobPathOrPrefix,BlobType,Disposition,MetadataFile,PropertiesFile
"C:\something\","something/",BlockBlob,rename,"None",None

In the InitialDriveSet parameter you tell the application which drives attached to your computer it's using to ship the data.

DriveLetter,FormatOption,SilentOrPromptOnFormat,Encryption,ExistingBitLockerKey
F,AlreadyFormatted,PromptOnFormat,AlreadyEncrypted,[YourBitLockerKey]

In my case the drive was already formatted and already encrypted with BitLocker.

The application has some quirks to it, like I said earlier in the post. But once those got figured out, it was pretty easy.

Denny

The post Azure Import Export Services – Notes from the field appeared first on SQL Server with Mr. Denny.
