Becoming a cloud DBA (II)

Cost-driven cloud movements

One of the advantages of the cloud is that it offers a build-or-buy decision for your database infrastructure. For some companies, stuck with legacy systems and outdated hardware, the prospect of a greenfield deployment seems very appealing. Let’s get rid of the old junk, let’s enter the rocky road of transformation, possibly under the supervision of a third party – for advice, or for the eventual blame game.

It’s true that you can get into the cloud like this and save money, but the same forces of mismanagement that created the ‘old junk’ are still present when moving to the cloud. And mismanagement in the cloud can be a costly ‘learning experience’. Or, looking at it the other way around: the better you are in control, the less appealing the cloud is money-wise. Dropbox, for example, started out on Amazon but decided it was better off building its own infrastructure, saving $75 million over two years. That’s the tongue-in-cheek observation, a paradox of sorts: the better you are on-premises, the less the cloud has to offer. And the more the cloud has to offer, the less you can take advantage of it.

Furthermore, the law of conservation of misery dictates the presence of at least some disadvantages, and the cloud is no exception. Clouds can go down for several hours in high-profile incidents, calling the promised uptimes into question. Or they can suffer from capacity problems, as some reported that “Azure seems to be full”. But apart from the Single Point of Failure problem, some databases will run into problems in the cloud and are moved back to on-premises machines, a move called repatriation. The reasons for such a repatriation can be cost, technical, or both. As an example of the latter: the cloud can be prone to the Noisy Neighbor effect, and certain workloads won’t cope well with that. Or the network latency to the cloud is prohibitive for certain applications.
Surely this can be remedied with a more expensive tier in the cloud, but the on-premises datacenter comes into scope again once the ROI of these solutions is calculated.
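To make that trade-off concrete, here is a minimal back-of-the-envelope sketch in Python of such an ROI comparison. Every figure in it (the monthly fee of a premium low-latency tier, the amortized on-premises hardware and running costs) is a made-up placeholder for illustration, not a quote from any vendor.

```python
# Hypothetical break-even comparison: a premium cloud tier versus
# repatriating a latency-sensitive workload to an on-premises machine.
# All figures are illustrative placeholders, not real price quotes.

MONTHS = 36                        # evaluation horizon: three years

cloud_monthly = 4_200.0            # assumed fee for a premium/low-latency tier
onprem_hardware = 60_000.0         # one-off purchase, amortized over the horizon
onprem_monthly_running = 1_100.0   # power, rack space, licences, admin time

cloud_total = cloud_monthly * MONTHS
onprem_total = onprem_hardware + onprem_monthly_running * MONTHS

print(f"Cloud premium tier over {MONTHS} months : {cloud_total:12,.0f}")
print(f"On-premises over {MONTHS} months        : {onprem_total:12,.0f}")

# Break-even: after how many months does repatriation start paying off?
monthly_saving = cloud_monthly - onprem_monthly_running
if monthly_saving > 0:
    breakeven = onprem_hardware / monthly_saving
    print(f"Repatriation breaks even after ~{breakeven:.1f} months")
else:
    print("The premium tier is cheaper than running it yourself")
```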

Gartner puts it this way: “What remains on-premises are business processes that are mission-critical and require greater oversight and more detailed levels of control than is available via cloud infrastructure and hosted models.”

So, what’s the DBA’s role here?

If money is the main motivator, then TCO calculations are on your task list. For you as a DBA it revolves around the data placement strategy, and there’s a whole information war about the TCO costs involved. “Microsoft’s cost comparisons are misleading”, says this article from Amazon (“Fact-checking the truth on TCO for running Windows workloads in the cloud”). Or have a look at Amazon’s TCO calculator: simply enter the number of vCPUs and the amount of memory, and the cost comparison is made for you. The virtual-to-physical CPU ratio, however, is omitted entirely, and that is a vital element. Be prepared to face a lot of TCO gaming out there, so be sure to understand the revenue model of your cloud vendor.
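To illustrate why that ratio matters, here is a minimal sketch, with made-up numbers, of the adjustment a vCPU-only calculator skips: translating your own on-premises oversubscription ratio into the number of cloud vCPUs you would actually need to compare against. The prices and ratios are assumptions for illustration only.

```python
# Hypothetical illustration of the element the vCPU-based TCO calculators
# omit: the virtual-to-physical CPU ratio. All numbers are made up.

physical_cores_onprem = 64      # physical cores in the on-premises hosts
onprem_vcpus_per_core = 4.0     # your own VM oversubscription ratio
cloud_vcpus_per_core = 2.0      # assumption: one cloud vCPU ~ one hyperthread

# What the calculator asks for: "how many vCPUs do you run today?"
onprem_vcpus = int(physical_cores_onprem * onprem_vcpus_per_core)            # 256

# What would roughly match the same *physical* capacity in the cloud
cloud_vcpus_equivalent = int(physical_cores_onprem * cloud_vcpus_per_core)   # 128

price_per_vcpu_month = 35.0     # hypothetical list price per vCPU per month

naive_quote = onprem_vcpus * price_per_vcpu_month
ratio_adjusted_quote = cloud_vcpus_equivalent * price_per_vcpu_month

print(f"Naive quote (vCPU count only) : {naive_quote:10,.2f} per month")
print(f"Ratio-adjusted quote          : {ratio_adjusted_quote:10,.2f} per month")
```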

You need to be able to measure how much bang you actually get for your buck in the cloud. As I showed in one of my previous posts, a lot of DBAs are not impressed with cloud performance. Therefore, your second task here will be benchmarking the cloud. This can be a science of its own, but it may suffice to capture and replay a workload with the open-source toolset WorkloadTools, of which I’ll post some of my test results in a later post. Be advised that, apart from measuring the capacity of the cloud, latency may also play a role in your application. If you need to go deeper into benchmarking than a replay, you’re in for some serious reading, such as this one.
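Since the WorkloadTools results are for a later post, here is a minimal, hypothetical sketch of the latency side of the story: timing a trivial round trip against an on-premises and a cloud SQL endpoint. The connection strings are placeholders, and the sketch assumes the pyodbc package and a SQL Server ODBC driver are installed.

```python
# Minimal latency probe: time a trivial round trip to a SQL endpoint.
# Connection strings are placeholders; assumes pyodbc + an ODBC driver.
import time
import statistics
import pyodbc

def roundtrip_ms(conn_str: str, samples: int = 50) -> float:
    """Median round-trip time of a no-op query, in milliseconds."""
    conn = pyodbc.connect(conn_str, timeout=10)
    cursor = conn.cursor()
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        cursor.execute("SELECT 1;").fetchone()
        timings.append((time.perf_counter() - start) * 1000)
    conn.close()
    return statistics.median(timings)

endpoints = {
    "on-premises": "DRIVER={ODBC Driver 18 for SQL Server};SERVER=onprem-sql01;"
                   "Trusted_Connection=yes;TrustServerCertificate=yes",
    "cloud":       "DRIVER={ODBC Driver 18 for SQL Server};"
                   "SERVER=myserver.database.windows.net;UID=dba;PWD=...;Encrypt=yes",
}

for name, conn_str in endpoints.items():
    print(f"{name:12s}: {roundtrip_ms(conn_str):6.1f} ms median round trip")
```

A replay tells you whether the cloud tier can handle the volume; a probe like this tells you whether a chatty application will notice the extra distance at all.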

Conclusion

“The Cloud is just someone else’s computer” – and this surely rings true in this scenario. Money is the main motivator, so making TCO calculations and benchmarks is on your task list. As most companies will have a hybrid cloud/on-premises environment, you therefore need to be able to advise where to place the workloads. And be advised that capacity planning is different too. In the on-premises scenario, you can calculate your capacity requirements every few years and add some healthy overcapacity into the mix. Compare that to the cloud, where you’ll have to revise the capacity every few months, depending on the volatility of your environment. You may want to automate the reporting of resource consumption in the cloud, so hone your skills in PowerShell, Power BI, Bash, or any favorite scripting tool. Tasks to be automated are on their way to you.
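As a taste of that automation (Python here, but PowerShell or Bash would do just as well), here is a minimal sketch that turns a cost-and-usage export into a per-resource monthly report. The file name and column names are assumptions; adapt them to whatever export your cloud vendor offers.

```python
# Minimal sketch: turn a cloud cost/usage CSV export into a per-resource
# monthly report. Column names (Date, ResourceName, Cost) are assumptions;
# adapt them to the export format of your cloud vendor.
import csv
from collections import defaultdict

def monthly_cost_per_resource(path: str) -> dict:
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            month = row["Date"][:7]                  # e.g. "2020-04"
            totals[(month, row["ResourceName"])] += float(row["Cost"])
    return totals

if __name__ == "__main__":
    totals = monthly_cost_per_resource("usage-export.csv")
    # Biggest spenders first
    for (month, resource), cost in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{month}  {resource:40s} {cost:10,.2f}")
```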
