Exchange Public Folder migration code

Published On: 2019-01-21

This last week John Morehouse and I did a significant office migration with one of our clients. As part of the migration, we decided to move their public folders from Exchange 2016 to Office 365 so that their public folders were hosted in the same place as their mailboxes, allowing us to decommission Exchange, as the public folders were the last thing left on the servers.

Microsoft has documentation on the migration, but it is rather lengthy and takes a long time to go through. We only had about 400 MB of data to move, most of which was contacts and calendars. Instead of going through that rather lengthy process to move such a small amount of data, we opted to export the data through Outlook and reimport it into Office 365. This process required that we reset the permissions on the data after the migration.

The first thing we needed to do was export the current permissions from the on-prem Exchange 2016 server. The export of the permissions was done via the Get-PublicFolderClientPermission PowerShell cmdlet. We combined this with Format-Table and Out-File, like so.

Get-PublicFolderClientPermission "\" | Select-Object Identity, User, AccessRights | Format-Table -AutoSize | Out-File C:\temp\rights.csv -Width 500

This produced a fixed width text file that we could work with. We then opened the file in Notepad++ and did some Find/Replace operations to turn our fixed width file into a file we could more easily work with.

First, we searched for any run of three spaces and replaced it with a {pipe} (a couple of our values contained two spaces, which is why we used three). We then searched for {pipe}{space}{space} and replaced it with {pipe}, and then searched for {pipe}{space} and replaced that with {pipe}. Finally, we ran a search and replace of {pipe}{pipe} to {pipe}, repeating it until there were 0 results found in the file. This left us with a pipe-delimited file that we could work with.
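
If you would rather script that cleanup than click through Notepad++, a rough PowerShell equivalent might look like the sketch below. It assumes, as above, that no real value contains three or more consecutive spaces, and that the export lives at C:\temp\rights.csv; the output file name matches the one the import script further down reads.

# Collapse runs of three or more spaces into a single pipe, trim stray pipes
# and whitespace from the ends of each line, and drop blank lines.
# (The table header and dashed separator lines still need to be deleted by hand.)
Get-Content C:\temp\rights.csv |
    ForEach-Object { ($_ -replace ' {3,}', '|').Trim(' |') } |
    Where-Object { $_ } |
    Set-Content C:\temp\rights2.csv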

We then told Exchange to use the Office 365 version of the public folders instead of the local version. We did this with the Set-OrganizationConfig cmdlet. Keep in mind that this cmdlet will delete all of the data in your Exchange server's Public Folder database, so make sure that you export your data from the public folders and pull down your permissions BEFORE you run it.

Set-OrganizationConfig -RemotePublicFolderMailboxes $Null -PublicFoldersEnabled Local

After this was all done, we connected a PowerShell window to Office 365 and processed the text file to put the permissions back with a short PowerShell script.
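
At the time, connecting a PowerShell window to Exchange Online typically looked something like this; treat it as a minimal sketch, since the basic-auth remote PowerShell endpoint shown here has since been superseded by the Exchange Online PowerShell module.

# Prompt for Office 365 admin credentials and open a remote session to Exchange Online
$cred = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri "https://outlook.office365.com/powershell-liveid/" `
    -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session -DisableNameChecking

With the session in place, the permission script itself was: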

$a = Get-Content C:\temp\rights2.csv
$a | ForEach-Object {
    $data = $_
    $arr = $data.Split("|")

    # Strip the braces (and any space after commas) from the AccessRights column
    $perm = $arr[2].Replace("{", "").Replace("}", "").Replace(", ", ",")
    $perma = $perm.Split(",")

    # The cmdlet wants a collection rather than a plain string array
    $permb = [System.Collections.ArrayList]$perma

    # Echo the current line so we know where we are if an error occurs
    $data

    Add-PublicFolderClientPermission -Identity $arr[0] -User $arr[1] -AccessRights $permb
}

This script told us what it was working on (in case there was an error), then it did the work using the Add-PublicFolderClientPermission cmdlet. The biggest trick with this script was converting the "normal" array named $perma into the [System.Collections.ArrayList] named $permb and then using that.
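
In isolation, that conversion is just a cast; a quick illustration (the access rights shown here are made-up example values):

$perma = "ReadItems", "EditOwnedItems"          # a plain [string[]] array
$permb = [System.Collections.ArrayList]$perma   # the collection type the cmdlet was happier with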

This script ran for a few hours, as Add-PublicFolderClientPermission isn't very fast, but the result was that the permissions were back where they should have been, the public folders were hosted in Office 365 instead of Exchange, and we were one step closer to being able to get rid of the Exchange server. And most importantly, the users didn't report a single problem after the migration was done.

Denny


Finding Cluster Log Errors

Published On: 2019-01-18

Sometimes you know that a problem occurred, but the tools are not giving you the right information.  If you ever look at Failover Cluster Manager for a Windows cluster, that can happen.  The user interface won't show you any errors, but you KNOW there was an issue.  This is when knowing how to use other tools to extract information from the cluster log becomes useful.

You can choose to use either PowerShell or a command at the command prompt.  I tend to lean towards PowerShell; I find it easier to use, and it gives me the most flexibility.

The PowerShell cmdlet Get-ClusterLog is a quick and handy way to get information from a Windows cluster. This cmdlet will create a text log file for all nodes, or for a specific node if one is specified, within a failover cluster.  You can even use the cmdlet to specify a certain time span, like the last 15 minutes, which can be really handy if you know that the issue occurred within that time frame.
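
For example, something like the following (the destination folder and node name are placeholders, and -TimeSpan is in minutes):

# Generate cluster.log files from every node and copy them to C:\Temp,
# limited to the last 15 minutes of activity
Get-ClusterLog -Destination "C:\Temp" -TimeSpan 15

# Or pull the log from a single node only
Get-ClusterLog -Node "SQLNODE1" -Destination "C:\Temp"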

This is great and all; however, it can give you a lot of information to sift through.  In our case, the three log files generated came to roughly 600 MB worth of text!  That's a lot!

If you are looking through cluster logs, you are looking for specific messages, most likely error messages.  Thankfully, the log categorizes each message and those error messages can be located.

Since ~600 MB is a lot of data to manually sift through, there are easier ways.  We can use PowerShell again to extract all of the error messages.  This allows us to efficiently locate the messages that might be pertinent to the issue at hand.

So, I wrote a quick and easy script that iterates through a folder of cluster log files and extracts any line that has the pattern " ERR " in it out to another file.  Notice that the pattern has a space at the beginning and at the end so that lines that merely contain strings like "error" are not returned.

# Folder that holds the cluster log files gathered by Get-ClusterLog
$dir = "C:\Users\John\Desktop\Temp"
$files = Get-ChildItem $dir -Recurse -Include "*.log"

foreach ($file in $files) {
    # Write every " ERR " line from each log to an ERR_<logname>.txt file
    $out = "ERR_" + $file.BaseName + ".txt"
    Select-String -Path $file.FullName -Pattern " ERR " -AllMatches | Out-File (Join-Path $dir $out)
}

Once run, we can look in the destination folder and see that the amount of data has been reduced significantly, from roughly 600 MB down to about 31 MB!

This was an easy script to write, and it helped narrow things down to just the error messages quickly so that I could help resolve the issue the cluster was having.  The effectiveness and flexibility of PowerShell continue to shine in situations like this.

If you haven't started to learn PowerShell, you should.  It'll make your life easier in the long run.

© 2019, John Morehouse. All rights reserved.

My SQL Saturday Chicago Precon–Managing and Architecting Azure Data Platform

Published On: 2019-01-16

After the MVP Summit in March, I’m headed to Chicago to speak at SQL Saturday Chicago, and on Friday March 22nd, I’ll be delivering an all-day training session on the Azure Data Platform. The term data platform is somewhat of a Microsoft marketing term, but we will talk about a wide variety of topics that will help you get up to speed on Azure.

All of the morning and some of the afternoon will be spent talking about the core infrastructure of Azure. You'll learn about topics like:

• Networking
• Storage
• Virtual Machines

While these are topics normally outside the scope of the DBA, in the cloud you will have to at least understand them. Want to build an Availability Group in Azure? You'll need to build an internal load balancer and map probe ports into your VMs. Remember how you normally complain to the SAN team about your lack of IOPS? In the cloud, you can fix that yourself. You'll also learn about what's different about managing SQL Server in the Azure environment.
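
To give one concrete taste of that load balancer work, the cluster side usually comes down to pointing the listener's IP resource at the load balancer's health probe port. A rough sketch, where the resource name, address, network name, and port are all placeholders for your environment:

# Run on a cluster node: tell the AG listener's cluster IP resource to answer
# the internal load balancer's health probe (values below are examples only)
Get-ClusterResource "AG Listener IP" | Set-ClusterParameter -Multiple @{
    "Address"    = "10.0.0.10"         # the ILB front-end IP
    "ProbePort"  = 59999               # must match the ILB health probe port
    "SubnetMask" = "255.255.255.255"
    "Network"    = "Cluster Network 1"
    "EnableDhcp" = 0
}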

In the afternoon, we'll spend our time talking about platform as a service (PaaS) offerings from Microsoft. While we will spend most of that time on Azure SQL Database and Azure SQL Managed Instance, I'll also spend some time talking about other offerings like Cosmos DB, and when it is appropriate to use them.

It will be a packed day, so put your learning hat on. You can register at Eventbrite; there are five discounted tickets remaining.

When is the Right Time to Look at New Services?

Published On: 2019-01-14

Microsoft Azure is rolling out new features at a fantastic pace.  But when is the right time to evaluate those new features?  It might be right when a feature is released, or it might be later in its lifecycle.  The basic answer to when to look at using new services is: it depends.

If the company has some problem that needs to be solved that we can't address today using available technology, or if the solution we have built today could be made better, then a new feature or new service might be worth looking into.

If there’s nothing that needs to be solved, then the new feature isn’t helping you do anything. The feature is just shiny and new.

What it comes down to is this: can the new feature potentially solve a problem that you or the company is having?  If it can't solve the problem, then the new feature isn't going to help you.  Now, this is going to mean some research into the new feature to see whether it's the right solution or not.  But if the feature being researched turns out not to be a solution to the problem (or a better solution than what you have today), then it's time to move on to another solution.

All too often, companies decide that they are going to use a particular solution to a problem no matter what the right solution might be, even if the solution they have settled on costs thousands or even millions of dollars a year.

Selecting a solution to a problem gets more complicated when office politics are involved.  All too often, someone from upper management will decide that some product needs to be included in the solution.  This isn't a new problem, either.

Back in the early 2000s, I was tasked with building a new knowledge base for a department at the company I worked at.  XML was getting popular and was being mentioned in all the magazines.  The knowledge base was supposed to be a normal relational database that people could look information up in.  A senior manager wanted to use XML instead of a relational database.  I refused, because XML wouldn't perform well, XML wouldn't scale well, and it made no sense to build a database as a single XML file (which is what he wanted).  He insisted we use XML, and I asked him whether he wanted to use XML or he wanted the application to scale and perform well.  It took a while, but eventually he saw reason and we used SQL Server for the relational data of the application.  And, shockingly, the application was used successfully by thousands of employees on a daily basis.

What it came to show is that applications should be built to be successful, not to use some shiny bit of new technology.

Denny

