Designing hardware and designing software have always been two separate fields, inextricably linked yet practiced independently of each other. Software builders needed to plan in sufficient detail to forecast their needs, coordinate with the hardware team to make sure resources were available and costs were correctly estimated, and then hope when launch time came that the plans were correct. This model held a monopoly over software development for more than 50 years. Like most monopolies, it wasn’t a good one: it was a necessary evil in a world where infrastructure was expensive and time consuming to acquire and provision.

As cloud platforms have matured, the concept of composable infrastructure has developed. Small and large pieces of infrastructure can be summoned–and subsequently dispensed with–on command. We’re long past the era when creating simple virtual machines on demand was the frontier; now we can create large-scale networks, complex databases, machine learning and big data clusters, and even voice recognition and translation services. Infrastructure and software have merged, and planning has been replaced by experimentation and iteration. This issue explores how composable infrastructure is rapidly accelerating the speed of software development and considers challenges like maintaining quality.

– Cris

Building Modern Software in the Cloud Is a Balancing Act

For 20 years, the model in IT has been to build layers of abstraction, one on top of the other. We evolved from super servers to commodity servers, from virtual machines to containers, and now all the way to “Serverless” infrastructure. Until very recently, each layer was intended to supersede the others, freeing you from needing to think about the layers underneath. But, that freedom was often limiting. Sometimes, abstraction masks the best solution.

Cloud providers have continued to expand their offerings, and today you have access to both on-demand, high-performance bare metal servers and serverless options—the extreme edges of the infrastructure landscape. In addition to being available in real time, with no ongoing commitment, these services are all interoperable. Applications can freely mix serverless functions, instances, and physical hardware, on demand and in real time, all over the world.

Coupled with a rich ecosystem of open source libraries and platforms, this flexibility makes applications as much about assembly as creation. In fact, building software has grown into an art of composition. The notion of “configuration as code” has evolved into “configuration instead of code”, causing a paradigm shift in how software is built. Creating infrastructure in parallel with code eliminates the need to make infrastructure decisions in advance, allowing teams to build high-quality software more quickly than ever. The ecosystem and platform flexibility have empowered countless small startups to unseat well-entrenched incumbents.
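
To make that concrete, here is a minimal sketch of infrastructure created and discarded in step with the code that uses it. It assumes Python with boto3 and AWS credentials already configured; the table name and attributes are purely illustrative.

    import boto3

    dynamodb = boto3.resource("dynamodb")

    # Summon a piece of infrastructure on demand, with no capacity planning up front.
    table = dynamodb.create_table(
        TableName="experiment-results",  # hypothetical name
        KeySchema=[{"AttributeName": "run_id", "KeyType": "HASH"}],
        AttributeDefinitions=[{"AttributeName": "run_id", "AttributeType": "S"}],
        BillingMode="PAY_PER_REQUEST",
    )
    table.wait_until_exists()

    # Use it exactly like any other part of the application.
    table.put_item(Item={"run_id": "trial-1", "latency_ms": 42})

    # Dispense with it when the experiment is over.
    table.delete()

The point is not the specific service; it is that the decision to use a database at all can be made, tested, and reversed in minutes rather than planned months in advance.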

Composition is not without challenges, though. Having an infinite menu of choices is often paralyzing, and making the wrong choice can cost a boatload and set you up for failure down the road. In other words, the complexity of building software is higher than ever. Until very recently, our industry had whole companies that were “Java shops” and “.NET shops”. Today, that sounds like madness. Developers are expected to know multiple languages and frameworks, while also understanding the underlying infrastructure.

The result of all of this is a new playbook, one where you do not necessarily need to think and plan before you build, where you can tinker and test before deciding on a design. But, we’re still human, and freedom must be balanced with discipline to avoid chaos. Quality requires thoughtfulness, and speed is the enemy of both. Our challenge now is to learn what that balance looks like, understanding how to take risks while also being prepared to walk away, start over, and try again.

As we seek that balance, one of the most important steps is recognizing that we need to recalibrate what we must master in order to build great software. Understanding the nuance and syntax of a particular programming language is far less valuable today than it was a decade ago. However, understanding how a piece of code will ultimately be run—its performance characteristics, its security implications, its deployment model—is crucial. Developers are no longer building just their little cog in the grand machine. Operators are no longer running black boxes that they can afford not to understand. Instead, we are building and running systems that we understand from code through to execution, for the first time since the earliest days of computing.

From the Blog

Understanding GP2 Volume Performance

Performance in EBS is widely misunderstood, leading many to write EBS off as a choice between “slow” and “expensive”. While there is some truth to that, I often see EBS implementations that fail to take advantage of cheap or even free options to boost performance. This article focuses specifically on GP2 volumes, which are SSD-backed, low cost, and the default in many regions. As a result, they are the most widely deployed volume type by far. Let’s take a look at the documented performance characteristics of a GP2 volume: continue reading
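
As a back-of-the-envelope illustration of why volume size matters, here is a small Python sketch using the commonly documented GP2 figures: a baseline of 3 IOPS per GiB with a 100 IOPS floor, a 3,000 IOPS burst ceiling, and an I/O credit bucket that starts at 5.4 million credits. Treat those numbers as assumptions and confirm them against the current EBS documentation.

    # Rough model of GP2 baseline and burst behavior.
    BASELINE_IOPS_PER_GIB = 3
    BASELINE_FLOOR = 100
    BURST_IOPS = 3000
    CREDIT_BUCKET = 5_400_000  # I/O credits a volume starts with

    def baseline_iops(size_gib):
        """Baseline IOPS a GP2 volume of this size earns continuously."""
        return max(BASELINE_FLOOR, BASELINE_IOPS_PER_GIB * size_gib)

    def burst_minutes(size_gib):
        """How long a full credit bucket sustains the 3,000 IOPS burst ceiling."""
        base = baseline_iops(size_gib)
        if base >= BURST_IOPS:
            return float("inf")  # large volumes never need to burst
        # Credits drain at the gap between the burst rate and the refill rate.
        return CREDIT_BUCKET / (BURST_IOPS - base) / 60

    for size in (8, 100, 500, 1000):
        print(f"{size} GiB: baseline {baseline_iops(size)} IOPS, "
              f"bursts for {burst_minutes(size):.0f} minutes")

Under those assumptions, a tiny 8 GiB volume can burst for roughly half an hour before falling back to 100 IOPS, while a 1,000 GiB volume sits at the burst ceiling permanently, which is often the cheapest performance lever available.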

What’s New

How to Create DevOps Feedback Loops (Beyond20)

Implementing DevOps requires rethinking meeting patterns, accountability, and training to increase coordination and maintain flow. This article gives several patterns for creating feedback loops and fostering accountability in a new DevOps team. – Steven

I Gots to Get Organized (AWS Advent)

Multi-account strategies are essential to empowering developers to build safely. Service Control Policies allow you to set guard rails, such as blocking expensive services or requiring security settings. This blog gives an excellent overview of organizing your accounts and applying SCPs. tl;dr, it’s easy. Do it. – Cris

fx: Command-line tool and terminal JSON viewer (GitHub)

Parsing JSON on the command line can be maddening, even with common tools like jq. fx is a thoughtful tool that streamlines many common JSON-handling use cases. – Cris

Security Tip

20 Developers and Kubernetes Experts Reveal the Biggest Mistakes People Make During the Transition to Kubernetes (Threat Stack)

Even though Kubernetes is a production-ready, mature platform, teams often underestimate the complexity of running highly available, secure applications on top of Kubernetes. It is incredibly easy to get a Kubernetes cluster up and an application running in it, but “up” and “production ready” are very different states. Service health checks, monitoring, deployment strategies, networking, and container security all need to be thought through, and there are not yet well-established best practices and patterns to lean on.
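
As one small example of that gap, liveness and readiness are separate questions that the application itself has to answer. Below is a minimal, hypothetical Python sketch of the kind of endpoints a cluster’s probes could poll; the paths and port are arbitrary choices for illustration, not a standard.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    dependencies_ready = True  # in a real service, check databases, queues, caches

    class HealthHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/healthz":
                # Liveness: the process is up and able to answer at all.
                self.send_response(200)
            elif self.path == "/readyz":
                # Readiness: only report ready once downstream dependencies are usable.
                self.send_response(200 if dependencies_ready else 503)
            else:
                self.send_response(404)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()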

Modern infrastructure is very resilient, and Kubernetes is unlikely to improve availability and performance unless the application is built to leverage its strengths. Specifically, Kubernetes is best for applications committed to a loosely coupled microservices architecture. And, it is best adopted by teams who understand that Kubernetes is not a shortcut, but rather an opportunity to improve application performance and availability once it is properly understood.

– Cris