Helm v3 – What You Need to Know
Helm bills itself as “The package manager for Kubernetes”. Version 3 of the tool brings a host of changes, removals, and additions. Fortunately, Helm charts and the CLI work much the same as before, although if you’re managing a Helm deploy pipeline there are some breaking changes. Here are a few that we’d like to highlight.
What is Helm?
Helm was originally created by a company called Deis and released at the end of 2015. It was modeled after Homebrew, the package manager for macOS. Deis open-sourced the project, and the company was later acquired by Microsoft. In the past few years, Helm has risen alongside Kubernetes to a place of prominence in the IT-ops world as the de facto package manager for Kubernetes, and its hub currently houses almost a thousand charts. The biggest advantage Helm provides its users is the ability to template Kubernetes manifests and then deploy or roll them back with a single command. This means one can define a series of Kubernetes resources in the abstract – perhaps a database, a backend server, and a load balancer – and then, when a concrete deployment is needed, specify the details as variables (in a file usually called values.yml) so the same application can be deployed identically in as many places and as many times as necessary. Helm also provides a standardized format for specifying these applications, which helps diverse teams of engineers collaborate.
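As a sketch of how this templating works (the chart layout and all names below are hypothetical, not taken from any real chart), a values.yml might look like:

```yaml
# values.yml – hypothetical values for an illustrative chart
replicaCount: 3
image:
  repository: example/backend
  tag: "1.4.2"
```

A template in the chart then references these values, and Helm fills them in at render time:

```yaml
# templates/deployment.yaml – Helm substitutes .Values when rendering
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-backend
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Supplying a different values file to the same chart yields an identical deployment elsewhere, with only the variables changed.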
What’s changed in Helm 3?
- No tiller!
- More Secure
- Not managing state
Tiller, which was included in v2, is nautically named after the lever which the helmsman uses to steer a boat. It was introduced to help teams working on a shared cluster manage releases before Kubernetes introduced role-based access controls (RBAC). Tiller didn’t actually originate in the Helm project but was added when the codebase for a project called Kubernetes Deployment Manager was merged with Helm.
F.1 – With a tiller, the helmsman pushes the tiller to port to turn the boat to starboard. F.2 – With a steering wheel, the helmsman turns the wheel in the direction he wants to turn. [Wikipedia]
By AvelKeltia – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=14656604
Helm is broken down into two major components: a client, which the end user interacts with, and a component which talks to the Kubernetes API. From version 2 to 3, the Helm client stayed much the same. What changed was the part of Helm that actually talks to Kubernetes. In Helm v2 this was Tiller, which did the heavy lifting of turning a chart into something Kubernetes could run as an application, and of tracking what happened to that application. This application tracking, also known as the application “state”, became easier once Kubernetes introduced Custom Resource Definitions (CRDs). A CRD is really just a way of storing custom objects in Kubernetes, much like you’d store records in a database. The Helm team took advantage of this and created a CRD for releases, which means Helm no longer has to track state itself – it can simply store it in Kubernetes.

But if Tiller isn’t tracking state, what is the advantage of having a service running in-cluster? There really didn’t seem to be one, so the Helm team moved chart rendering to the client side, so that it happens on your computer. This allowed for the complete removal of Tiller.
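A quick way to see the client-side rendering in action is `helm template`, which expands a chart into plain Kubernetes manifests entirely on your machine (the release and chart names below are illustrative):

```shell
# Render the chart locally – no in-cluster component involved
helm template myrelease ./mychart -f values.yml

# Install it; in v3 the release state is stored as objects in the cluster
helm install myrelease ./mychart -f values.yml

# Inspect the release history that Tiller used to track
helm history myrelease
```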
Architecture Comparison, taken from https://github.com/helm/helm/blob/release-2.14/docs/architecture.md and https://github.com/helm/helm-www/blob/master/content/docs/topics/architecture.md
What was wrong with Tiller? Besides adding unnecessary complexity to the project, Tiller posed a security risk. Most alarmingly, by default Tiller exposes an unauthenticated gRPC endpoint inside your cluster. That means that if one of the services in your cluster gets compromised, Tiller can be as well, and since Tiller has the keys to the Kubernetes kingdom, an attacker gains control of your cluster too. There is also no way to give users role-based permissions to Tiller: Tiller has all the control, and you either have access to Tiller or you don’t. There are ways to mitigate these risks, but if you started early with version 3, you can rest well knowing that it successfully passed a third-party security audit! At Rhythmic, we value being able to sleep at night knowing that no naughty parties are going to come breaking down our software doors, and so we have been using v3 since it was in beta.
For a better summary of the security risks associated with Tiller see:
Security Audit Report: https://github.com/helm/community/blob/master/security-audit/HLM-01-report.pdf
What’s stayed the same?
- The charts!
- The CLI (for the most part)
Fortunately, all the good stuff that makes us love Helm has stayed the same. It still helps developers create, manage, and release Kubernetes applications. As an end user of Helm very little has changed – so little, in fact, that v2 charts are still installable. The nice people at Helm have even developed a plugin to help with the conversion. When you’re installing a chart with version 3, the only difference you’ll notice is that you must specify a name for the release (there is a --generate-name flag, but names should convey information – not be random). If you can’t do any of these things with version 3, check that your Kubernetes administrator has given you the requisite access. In v2, all you needed was access to Tiller to do almost anything in your cluster, but in v3 the individual user must be given these permissions. This may seem like a hassle, but it really takes advantage of Kubernetes’ built-in RBAC to delegate permissions appropriately.
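Concretely, the naming change looks like this (the release and chart names are illustrative):

```shell
# Helm v2 – the release name was an optional flag:
#   helm install stable/nginx --name my-nginx
# Helm v3 – the release name is a required positional argument:
helm install my-nginx stable/nginx

# Or let Helm generate a name (discouraged – names should convey information):
helm install stable/nginx --generate-name
```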
- Terraform provider support.
Many awesome third-party libraries are still working to support Helm v3. Terraform’s Helm provider is nearly ready for version 3 at the time of this writing. helmfile, another tool for managing Helm releases as code, is ready for version 3. The Pulumi Helm provider appears to be as well, if you believe the issue linked below. I couldn’t find any docs validating that they are in fact ready for version 3, so if you get them up and running, maybe submit a docs PR to the repos!
Watch the issues or contribute on GitHub: