It’s getting all cloudy in here

Not all cloud providers are the same. Some are flexible and will grow and shrink with your needs and business (Azure, Amazon, Google Cloud), and some will not (IBM, Rackspace, Cloudflare, and other bare metal "clouds"). The whole goal of using a cloud provider is to be able to scale up and scale down as needed, preferably without having to do anything past the initial setup.

For example, let's say that we have an IoT application that typically gets 100,000 new messages per second uploaded to it. At that rate any cloud (or "cloud") provider will do. Now say that a feature in Time magazine is written about our product, and our IoT product sales shoot through the roof, so instead of 100,000 new messages per second, we are now getting between 10,000,000 and 100,000,000 messages per second. How will your cloud handle this? If you're in one of the public clouds like Amazon, Azure, or Google Cloud, then your resources should magically scale. Boxes on a bare metal cloud will simply stop responding until someone in IT realizes they need to scale up, provisions a bunch of machines, and configures those machines. And that's assuming your cloud provider even has an IoT framework in place to handle these messages.

Now odds are you don't have a wildly successful IoT application. But you've probably got a website that customers hit to access your company in some way. Maybe they place orders on your website. What would happen if a massive amount of web traffic started coming in with no notice, and IT doesn't hear about it until the site crashes? Would you want your IT department deploying and configuring new servers (bare metal), or would you want the cloud to handle this by automatically scaling the web tier out so that you can handle the additional requests?
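To make the "scale the web tier out" idea concrete: target-tracking autoscalers generally size the fleet in proportion to load, roughly desired = ceil(current instances × actual per-instance load ÷ target per-instance load), clamped between a minimum and maximum. Here's a minimal sketch of that math in Python; the request rates, target, and size limits are made-up numbers for illustration, not any provider's actual defaults or API:

```python
import math

def desired_capacity(current_instances: int, per_instance_metric: float,
                     target_metric: float, min_size: int = 2,
                     max_size: int = 100) -> int:
    """Target-tracking style scaling: keep per-instance load near the target.

    desired = ceil(current * actual / target), clamped to [min_size, max_size].
    """
    desired = math.ceil(current_instances * per_instance_metric / target_metric)
    return max(min_size, min(max_size, desired))

# Normal day: 10 instances each handling 10,000 req/sec, target is 10,000.
print(desired_capacity(10, 10_000, 10_000))     # fleet stays at 10

# Traffic spikes 100x with no notice: each box now sees 1,000,000 req/sec.
print(desired_capacity(10, 1_000_000, 10_000))  # clamped at the max of 100
```

The clamp matters in both directions: it keeps a runaway spike from provisioning an unbounded number of instances, and it keeps a quiet night from scaling the tier below a safe floor.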

I can tell you what I want for our customers: I want the web tier scaling automatically so that we can keep taking orders, rather than having the website unavailable for hours (or days) depending on how quickly the bare metal provider can respond to the need for new hardware, and on the IT department's ability to spin up new services on those new resources.

If you're using some bare metal cloud provider and thinking that you are getting the uptime you were promised, you probably aren't, unless you have an architect in place to make sure that you've got HA and DR built into your platform. That's the biggest thing you aren't getting with bare metal cloud providers (beyond auto-scale): any HA/DR. If you think you are getting some HA/DR, you probably aren't (at least not what I'd call HA/DR) without paying for a lot of extras. (The same applies if you are doing IaaS in the cloud; I'm talking about PaaS services in Azure, Amazon, or Google Cloud.)

What this all boils down to is that "cloud" has become a marketing word rather than a word that means what it says anymore. Companies will use "cloud" to describe anything from "VMs on someone else's hardware" all the way through "auto-scaling Platform as a Service." And the closer you are to "VMs on someone else's hardware," the further you are from the true nature of cloud platforms.

Denny

The post It’s getting all cloudy in here appeared first on SQL Server with Mr. Denny.

