Now Is the Time for the Public Cloud – Issue #6

Take Note

Rhythmic is drawing its 10th year in business to a close, and those 10 years have seen incredible changes in an industry that has profoundly transformed the world around us. Our mission always has been to harness change for our clients, giving them a competitive edge. As cloud adoption and maturity have accelerated, two parallel worlds have begun to take shape: one defined by automation, infrastructure on demand, and a rich ecosystem of open source and provider services that accelerate innovation; the other increasingly defined by the need to run and support the systems of the past, justifying its continued relevance by that need.

The worlds are being driven apart by strong forces that enable companies to realize enormous returns on investment: the increasing maturity and differentiation of the “hyperscale” cloud providers (AWS, Azure, Google and Alibaba); a rich open source ecosystem built atop these providers; and a powerful pipeline of cloud skills in the workforce. Many companies, us included, have adopted multi-cloud strategies that focus on normalization, allowing workloads to move seamlessly between traditional IT environments, on-premise cloud platforms like OpenStack and public clouds such as AWS. Pick whichever fits your use case best and we normalize the differences to eliminate any friction. This approach still has a place in the world, but it cuts off the leverage, making even a modest return on investment in the cloud hard to come by. By embracing the driving forces instead, we can deliver an enormous competitive edge and generate opportunity for our clients.

So, we’re excited to announce a new chapter in the life of Rhythmic. We have begun a journey to refocus our team, our services and our expertise solely around building and running systems in the public cloud. This journey will take years, and while many of our clients are already living in the cloud, we also know that others are not ready to begin their own journey yet. We are not leaving them behind. But we know if we are to continue delivering our clients a competitive edge, we must skate to where the puck is going. We are making a clear choice about how we invest in people, process and systems.

You’ll notice we’ve changed the format of our newsletter this month, after a short break to prepare for this new chapter. While we hope it will remain an informative way to keep in touch, our focus will be on telling the stories of the journey, helping you avoid the inevitable challenges while maximizing the opportunities, and sharing the state of this fast-moving industry. We are eager to get started.

-Cris


Why Now

The cloud is not new. While AWS gets much of the credit for kickstarting the cloud “revolution”, companies like Salesforce have been delivering mission-critical hosted software for nearly 20 years. We also have been delivering our own cloud services, giving our clients unique capabilities without the once-substantial investment required to operate effectively within a so-called hyperscale cloud provider like AWS. Everyone from AWS to Salesforce to Rhythmic falls under the broad umbrella we now simply call cloud. So, when we talk about shifting focus to the cloud, it can be a bit confusing: where were we all this time if not the cloud?

The cloud is frequently summarized as “someone else’s server”, reflecting the notion that the cloud is just a new label for old-fashioned outsourcing, whether that means running data centers or shifting on-premise software somewhere else. Traditionally, we have thought of infrastructure and systems in distinct layers, with well-built data centers underpinning well-built networks, networks underpinning well-built servers, and so on up to the application consumed by the end user. Through this lens, moving to the cloud is merely deciding to outsource more of those layers than you might have in the past.

As the cloud has matured, this view has grown increasingly simplistic, masking a paradigm shift in which small teams can compose systems dynamically out of infrastructure, service and application components. Creating infrastructure and entire systems is a matter of writing code. Layers are gone and everything intermingles freely, without prescriptive rules. Hardware, software and even data centers are all synthesized together, and different combinations can be experimented with in real time to quickly find the optimal pattern for the problem at hand.

Virtualization was an important step in this journey, making infrastructure plastic and adaptable, but not quite composable. It was compatible with the skills, systems and practices established over the past 30 years of modern IT. We could bend the rules a bit and apply our traditional ways of thinking to determine what the best practices should be. Of course, once people got a taste of what was possible when the rules were relaxed, they did what people do best: got rid of all of the rules and waited to see what happened. Many of these ideas were incubated at some of the largest tech companies, like Facebook, Google, Amazon and Netflix. The results were spectacular improvements in both quality and speed of development, two forces that historically have been diametrically opposed.

There were consequences to getting rid of those rules, to be sure, and the investment required to completely abandon the skills, systems and practices established in 30 years of modern IT was substantial. Some early adopters were able to innovate their way past the challenges and deliver software to their customers at warp speed, but many others got bogged down in cost and complexity. Not only did our ways of thinking need to evolve, but our systems and processes needed to be rebuilt to support a new way of doing pretty much everything. We needed to reach the tipping point where people, process and systems were all ready.

That tipping point has been reached. Early adopters have joined major cloud providers in becoming champions for the cloud, contributing substantial open-source projects that help reduce the amount of investment companies need to make. The workforce has rapidly “skilled up” as experienced engineers get their hands dirty and a younger generation that is unencumbered by an old way of thinking enters the workforce. And the design patterns and experiences needed to properly and effectively compose complex applications have matured.

So, the time for the cloud is most certainly now for pretty much all of us, but not just as a new place to apply our old ways of thinking. To reap the benefits we must embrace the differences and adapt. The tipping point came as a result of four highly leveraged factors, each of which can deliver a 10x return over the long term. Together, these factors make the decision to be in the cloud an obvious one, a matter of survival.

  • Open Source Ecosystem. Early adopters of the cloud, from large online systems like Facebook and Netflix to small companies and even students, have created a massive body of work: millions of lines of code. Thousands of wheels needed to be reinvented in the cloud; tooling needed to be built; and new frameworks for security, cost management and automation needed to be created. While this is a process that never will be done, the maturity and capability of open-source systems like Kubernetes, Elasticsearch and MongoDB are putting entrenched enterprise solutions out of business. Thousands of smaller projects provide a significant toolbox that teams can draw on to solve complex problems quickly. We’ll be sharing many of these with you in our blog and future issues of the newsletter.
  • Workforce Pipeline. Infrastructure skills always have been in short supply. But as the tech workforce gets sharply younger and the lines between software and infrastructure engineering blur, a massive pipeline of talent is emerging. They’re enthusiastic, skilled up and not particularly interested in building things the old way. As the workforce continues to drive hard toward the cloud, companies content to stay put face increasing risk: not only will their systems remain complex and static, but the teams required to support them will stretch thin and fade.
  • Composable Infrastructure. For a long time now, software development has been agile. Good software development, anyway. But infrastructure lagged behind, requiring upfront planning and investment. Virtualization and software-defined services sped up the process and expanded what DevOps teams could do, but they were still a far cry from infrastructure fully defined on demand. Today, low-level infrastructure services like compute and storage can be mixed freely with open source and cloud-managed services, all composed in code.
  • Evolving Infrastructure. Infrastructure has tended to be static once deployed, as has the way software interacts with it. While infrastructure continues to evolve in the marketplace, those new capabilities rarely fit into existing deployments. That’s why concepts like “re-platforming” and “next generation” are as common today as they’ve ever been. The cloud breaks free from this way of thinking, allowing the underlying infrastructure to evolve incrementally and expose new capabilities to already deployed systems. Complex operations like major database upgrades become seamless processes that you can easily test and revert. This is significant because it allows initial infrastructure investments to appreciate in value over time rather than depreciate.

You may notice we’ve left off many of the cloud benefits that providers like AWS champion in their marketing and sales pitches. It’s not because those benefits aren’t real. We just don’t see them as leveraged enough to justify the risks and opportunity cost that come with such a dramatic change. For example, AWS aggressively highlights its managed services as a way to “free your time”, reducing operational overhead and letting you focus on differentiation. Sure, that’s true. But the return on that time is linear, nudging total cost of ownership only slightly. On the other hand, automating infrastructure and then putting it in the hands of your teams, so that they can spin up complex environments for just a few minutes of testing, is leveraged, radically shifting total cost of ownership. For many companies, the factors above can deliver a 10x advantage over the long term. As a result, the public cloud is all but required to build a high-performing, high-output team.


From the Blog

Setting Password Policies via CloudFormation

Ideally, you’d like to create as much as possible in your AWS account with CloudFormation templates. Especially when you use multiple accounts for security or billing purposes, walking through a series of manual steps every time you create an account is tedious and error-prone.

One of the first things you set up in any new account is an IAM password policy. Unfortunately, IAM password policies are one of those things you still can’t set up in CloudFormation natively. You can create a Custom Resource for this, which effectively lets you extend CloudFormation via Lambda functions.

Custom Resources can get you into trouble, creating resources that are difficult to track and manage throughout the lifecycle of your stack. However, sometimes they’re the only available option. We have a simple security stack we create with every account, which defines an IAM password policy and sets a few initial security groups. To set the password policy, we use a simple Python/Boto-based Lambda, whose key function uses Boto to create or update the policy based on your parameters.
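
In rough outline (a minimal sketch with illustrative property names and defaults, not the post’s exact code), the Lambda looks something like this:

    import boto3
    import cfnresponse  # AWS-provided helper, available when the code is defined inline in the template

    iam = boto3.client("iam")

    def set_password_policy(props):
        # update_account_password_policy creates the account policy if none
        # exists and replaces it if one does, covering both Create and Update
        iam.update_account_password_policy(
            MinimumPasswordLength=int(props.get("MinimumPasswordLength", 14)),
            RequireSymbols=True,
            RequireNumbers=True,
            RequireUppercaseCharacters=True,
            RequireLowercaseCharacters=True,
            MaxPasswordAge=int(props.get("MaxPasswordAge", 90)),
            PasswordReusePrevention=int(props.get("PasswordReusePrevention", 24)),
        )

    def handler(event, context):
        try:
            if event["RequestType"] in ("Create", "Update"):
                set_password_policy(event.get("ResourceProperties", {}))
            # on Delete we leave the policy in place rather than tearing it down
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
        except Exception:
            cfnresponse.send(event, context, cfnresponse.FAILED, {})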

Read More


What’s New

How we saved over $1,000 by building CloudForecast.io with Serverless and AWS Lambda (CloudForecast)

Serverless setups are the next step in the technological evolution from the old days of running dedicated hardware to power a platform. Running virtual machines allowed companies to make better use of on-premise hardware and cloud resources. Serverless technologies go a step further in saving costs by allowing companies to pay only for what they need to run their environments, when they need it. -Kevin

Announcing Lambda Support for PowerShell Core (AWS)

PowerShell took the Windows world by storm a little over 10 years ago, and since that time has spread to Linux and macOS and become the de facto language of choice for Windows administrators. With PowerShell Core support now available on AWS Lambda, this powerful compute paradigm is open to an even wider audience of system and network administrators. Routine instance health checks, regular scans for vulnerable security groups, and batch processes triggered by S3 object events are just some of the tasks that can now be automated by non-developers, thanks to PowerShell’s easy syntax and extensive community support. -Steven

Kubeaudit helps you audit your Kubernetes clusters against common security controls (GitHub)

Kubernetes allows you to quickly create not only Docker containers but fully functional applications through Helm charts and other open source plugins. That speed makes it difficult to tie in your security tools, like vulnerability scanners and audit tools, creating a blind spot. Kubeaudit reviews your running Kubernetes clusters and makes actionable recommendations to address common security issues. -Cris


Security Tip

22 Most Under-Used AWS Security Metrics (Threat Stack)

CloudWatch metrics on infrequently changing, security-related configurations are simple to set up. Unlike most security events, which are more often than not false positives, these are high-quality signals that are always worth investigating.

We recommend everyone set up metric filters to alert on changes to account and IAM configuration at a minimum. It can also be helpful to set up metrics for changes to VPC configs, security groups, and ACLs when they’re made outside of your team’s business hours. -Cris
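
As a rough illustration (the log group name, metric namespace and alarm wiring below are placeholders you’d adapt to your account), that baseline can be set up with a few boto3 calls:

    import boto3

    logs = boto3.client("logs")
    cloudwatch = boto3.client("cloudwatch")

    # Turn IAM write activity recorded by CloudTrail into a metric
    logs.put_metric_filter(
        logGroupName="CloudTrail/DefaultLogGroup",  # placeholder: your CloudTrail log group
        filterName="IAMConfigChanges",
        filterPattern='{ ($.eventSource = "iam.amazonaws.com") && '
                      '(($.eventName = "Create*") || ($.eventName = "Delete*") || '
                      '($.eventName = "Update*") || ($.eventName = "Put*")) }',
        metricTransformations=[{
            "metricName": "IAMConfigChanges",
            "metricNamespace": "Security",
            "metricValue": "1",
        }],
    )

    # Alarm whenever any matching change lands in a 5-minute window
    cloudwatch.put_metric_alarm(
        AlarmName="iam-config-change",
        Namespace="Security",
        MetricName="IAMConfigChanges",
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=0,
        ComparisonOperator="GreaterThanThreshold",
        TreatMissingData="notBreaching",
        # AlarmActions=["arn:aws:sns:..."],  # placeholder: your notification topic
    )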

© Copyright 2019 Rhythmic Technologies, Inc. All Rights Reserved.