Now Is the Time for the Public Cloud


The cloud is not new. While AWS gets much of the credit for kickstarting the cloud “revolution”, companies like Salesforce have been delivering mission-critical hosted software for nearly 20 years. We have also been delivering our own cloud services, giving our clients unique capabilities without the large investments that were once required to operate effectively within a so-called hyperscale provider like AWS. Everyone from AWS to Salesforce to Rhythmic falls under the broad umbrella we now simply call the cloud. So, when we talk about shifting focus to the cloud, it can be a bit confusing: where were we all this time, if not the cloud?

The cloud is frequently summarized as “someone else’s server”, reflecting the notion that the cloud is just a new label for old-fashioned outsourcing, whether that means running data centers or shifting on-premises software somewhere else. Traditionally, we have thought of infrastructure and systems in distinct layers: well-built data centers underpinning networks, networks underpinning servers, and so on up to the application consumed by the end user. Through this lens, moving to the cloud is merely a decision to outsource more of those layers than you might have in the past.

As the cloud has matured, this view has become increasingly simplistic, masking a paradigm shift in which small teams can compose systems out of infrastructure, service and application components dynamically. Creating infrastructure, and entire systems, is now a matter of writing code. The layers are gone and everything intermingles freely, without prescriptive rules. Hardware, software and even data centers are synthesized together, and different combinations can be tested in real time to quickly find the optimal pattern for the problem at hand.
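To make “infrastructure as code” concrete, here is a minimal sketch in plain Python. It is a toy model, not any real cloud provider’s SDK: once an environment is a function of code, a dev copy and a prod copy are just two calls with different parameters.

```python
from dataclasses import dataclass, field

# Toy illustration only: resources are plain values, so whole
# environments can be stamped out, thrown away, and recreated at will.

@dataclass(frozen=True)
class Network:
    cidr: str

@dataclass(frozen=True)
class Server:
    name: str
    network: Network
    size: str = "small"

@dataclass
class Environment:
    name: str
    resources: list = field(default_factory=list)

    def add(self, resource):
        self.resources.append(resource)
        return resource

def build_environment(name: str, server_count: int) -> Environment:
    """Compose a network plus N servers into one environment."""
    env = Environment(name)
    net = env.add(Network(cidr="10.0.0.0/16"))
    for i in range(server_count):
        env.add(Server(name=f"{name}-web-{i}", network=net))
    return env

dev = build_environment("dev", server_count=2)    # small copy for testing
prod = build_environment("prod", server_count=6)  # larger copy for traffic
```

In a real deployment the same shape appears in tools like Terraform, Pulumi or AWS CloudFormation, where the “function” is a template or program and the provider reconciles the declared resources for you.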

Virtualization was an important step in this journey, making infrastructure malleable and adaptable, but not quite composable. It was compatible with the skills, systems and practices established over the past 30 years of modern IT: we could bend the rules a bit and apply our traditional ways of thinking to decide what the best practices should be. Of course, once people got a taste of what was possible when the rules were relaxed, they did what people do best: they got rid of all of the rules and waited to see what happened. Many of these ideas were incubated at some of the largest tech companies, like Facebook, Google, Amazon and Netflix. The results were spectacular improvements in both quality and speed of development, two forces that had been diametrically opposed in the past.

There were consequences to getting rid of those rules, to be sure, and the investment required to completely abandon the skills, systems and practices established over 30 years of modern IT was substantial. Some early adopters were able to innovate their way past the challenges and deliver software to their customers at warp speed, but many others got bogged down in cost and complexity. Not only did our ways of thinking need to evolve, but our systems and processes needed to be rebuilt to support a new way of doing pretty much everything. We needed to reach the tipping point where people, processes and systems were all ready.

That tipping point has been reached. Early adopters have joined major cloud providers in becoming champions for the cloud, contributing substantial open-source projects that help reduce the amount of investment companies need to make. The workforce has rapidly “skilled up” as experienced engineers get their hands dirty and a younger generation that is unencumbered by an old way of thinking enters the workforce. And the design patterns and experiences needed to properly and effectively compose complex applications have matured.

The Time is Now

So, the time for the cloud is most certainly now for pretty much all of us, but not just as a new place to apply our old ways of thinking. To reap the benefits, we must embrace the differences and adapt. The tipping point came as the result of four highly leveraged factors, each of which can deliver a 10x return over the long term. Together, these factors make being in the cloud an obvious choice, even a matter of survival.

  • Open Source Ecosystem. Early adopters of the cloud, from large online systems like Facebook and Netflix to small companies and even students, have created a massive body of work: millions of lines of code. Thousands of wheels needed to be reinvented in the cloud; tooling needed to be built; and new frameworks for security, cost management and automation needed to be created. While this work will never be done, the maturity and capability of open-source systems like Kubernetes, Elasticsearch and MongoDB are putting entrenched enterprise solutions out of business. Thousands of smaller projects provide a deep toolbox that teams can draw on to solve complex problems quickly. We’ll be sharing many of these with you on our blog and in future issues of the newsletter.
  • Workforce Pipeline. Infrastructure skills have always been in short supply. But as the tech workforce gets sharply younger and the lines between software and infrastructure engineering blur, a massive pipeline of talent is emerging. They’re enthusiastic, skilled up and not particularly interested in building things the old way. As the workforce continues to drive hard toward the cloud, this creates increasing risk for companies that are content to stay put: not only will their systems remain complex and static, but the teams required to support them will stretch and fade.
  • Composable Infrastructure. Software development has been agile for a long time now. Good software development, anyway. But infrastructure lagged behind, requiring upfront planning and investment. Virtualization and software-defined services helped speed up the process and expanded what DevOps teams could do, but it was a far cry from infrastructure being fully defined on demand. Today, low-level infrastructure services like compute and storage can be mixed freely with open-source and cloud-managed services and composed into complete systems.
  • Evolving Infrastructure. Infrastructure has tended to be static once deployed, as has the way software interacts with it. While infrastructure continues to evolve in the marketplace, those new capabilities rarely fit into existing deployments. That’s why concepts like “re-platforming” and “next generation” are as common today as they’ve ever been. The cloud breaks free from this way of thinking, allowing the underlying infrastructure to evolve incrementally and expose new capabilities to already-deployed systems. Complex operations like major database upgrades become seamless processes that you can easily test and revert. This is significant because it allows initial infrastructure investments to appreciate in value over time rather than depreciate.
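The test-and-revert pattern in that last point can be sketched abstractly. This toy Python function (hypothetical names, no real cloud or database APIs) captures the blue/green shape of such an upgrade: build the upgraded copy alongside the current one, test it, and either cut over or simply keep serving from the original.

```python
# Toy blue/green sketch: all helpers here are hypothetical stand-ins.

def blue_green_upgrade(current, build_upgraded, passes_tests):
    """Return whichever version should serve traffic."""
    candidate = build_upgraded(current)   # stand up the "green" copy
    if passes_tests(candidate):
        return candidate                  # cut over to the upgrade
    return current                        # "revert" is keeping "blue"

# Example: a major database version upgrade modeled as plain data.
db = {"engine": "postgres", "version": 12}

serving = blue_green_upgrade(
    db,
    build_upgraded=lambda d: {**d, "version": 13},
    passes_tests=lambda d: d["version"] == 13,
)
# serving is now the version-13 copy; the version-12 original is untouched
```

Because the upgraded copy is built next to the original rather than in place, a failed test costs nothing: traffic simply never moves, which is what makes these operations safe to attempt repeatedly.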

You may notice we’ve left off many of the cloud benefits that providers like AWS champion in their marketing and sales pitches. It’s not because those benefits aren’t real. We just don’t see them as leveraged enough to justify the risks and opportunity costs that come with such a dramatic change. For example, AWS aggressively highlights its managed services as a way to “free your time”, reducing operational overhead and letting you focus on differentiation. Sure, that’s true. But the return on that time is linear, only nudging total cost of ownership. On the other hand, automating infrastructure and putting it in the hands of your teams, so that they can spin up complex environments for just a few minutes of testing, is leveraged, radically shifting total cost of ownership. For many companies, the factors above can deliver a 10x advantage over the long term. As a result, the public cloud is all but required to build a high-performing, high-output team.
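To make the linear-versus-leveraged distinction concrete, here is a back-of-the-envelope comparison in Python. Every number below is a hypothetical assumption chosen purely for illustration, not data from AWS or any real deployment.

```python
# Hypothetical numbers throughout: this only illustrates the shape
# of the two savings curves, not actual cloud economics.

# "Linear" saving: a managed service removes a fixed slice of ops work.
ops_hours_saved = 20                       # assumed hours/month of patching avoided
ops_rate = 100                             # assumed fully loaded $/hour
linear_saving = ops_hours_saved * ops_rate # a flat amount, month after month

# "Leveraged" saving: short-lived, automated test environments replace
# an always-on shared staging stack.
always_on_cost = 5_000                     # assumed $/month for a 24x7 staging stack
test_runs = 200                            # assumed environment spin-ups per month
minutes_per_run = 30
hourly_env_cost = 2                        # assumed $/hour per ephemeral environment
ephemeral_cost = test_runs * (minutes_per_run / 60) * hourly_env_cost
leveraged_saving = always_on_cost - ephemeral_cost
```

Under these assumptions the managed service saves $2,000 a month no matter what, while the ephemeral-environment pattern replaces a $5,000 fixed cost with about $200 of usage, and, more importantly, removes the queueing and contention of a shared stack, which is where the compounding return comes from.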

© Copyright 2020 Rhythmic Technologies, Inc. All Rights Reserved.