Kube2iam lets you attach IAM roles to running pods in your Kubernetes cluster. Kube2iam is a bit confusing to install at first but is actually trivially easy to use once you understand how it works. This post shows how to get Kube2iam up and running in EKS, first using Helm (to focus on the EKS-specific parts) and then without Helm for completeness.
Kube2iam with Helm
Helm is a template engine gone wild for Kubernetes, giving a reasonable way for you to maintain your Kubernetes configs in version control without having them be hopelessly out of sync with what’s actually running. We won’t get into Helm too much here, but suffice to say, it’s easy enough to set up and install. If you’re running Kubernetes for any sort of meaningful project, you should be using something like Helm.
To install Kube2iam via Helm, you first need to create a values.yml file that sets various variables. Fortunately, not much needs to change:
# Define your region. IAM is global, but actually attaching the role to your running pod requires knowing the region that pod runs in.
# Replace the account number, and optionally add a role prefix to restrict the roles your pods can assume
# These settings are all that is required for EKS (unless you've decided to install Calico, but if you did, you hopefully know what you're doing)
# If your cluster is running RBAC (it is, right?), leave this alone
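Stitched together, those settings map onto a values.yml roughly like the following. This is a sketch: the key names follow the stable/kube2iam chart at the time of writing, and the account ID, region, and default role name are placeholders, so check the chart's own values.yaml before relying on it.

```yaml
aws:
  region: us-east-1                 # placeholder: the region your cluster runs in

extraArgs:
  # Placeholder account ID; append a path prefix to restrict
  # which roles your pods can assume.
  base-role-arn: arn:aws:iam::123456789012:role/
  default-role: kube2iam-default    # assumed by pods with no annotation

host:
  iptables: true                    # intercept metadata API calls on the node
  interface: eni+                   # interface pattern used by the AWS VPC CNI on EKS

rbac:
  create: true                      # leave enabled on an RBAC cluster
```

With that in place, `helm install stable/kube2iam --name kube2iam --namespace kube-system -f values.yml` (Helm 2 syntax, current when this chart was in the stable repo) brings up the DaemonSet.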
Now that Kube2iam is configured, you’ll need to configure your pods to ask for the role to be applied. To do so, set the iam.amazonaws.com/role annotation in the pod template’s metadata. With most Helm charts, you simply need to add something like this to your values.yml:
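For example, assuming the chart exposes a podAnnotations value (most do, though some charts name it differently), with a hypothetical role name:

```yaml
podAnnotations:
  iam.amazonaws.com/role: my-app-role   # hypothetical role name
```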
When you apply the annotation to the deployment’s pod template, Kubernetes automatically adds it to the pods created underneath the deployment. The same holds for replica sets and daemon sets.
Note that you do not need to specify the full ARN of the role, just the role name. The base ARN you specified in the Kube2iam chart is automatically prefixed. While you do not need to specify a base ARN and can use the full role ARN instead, this makes it more difficult to port your configs from account to account. That’s no fun.
Kube2iam the hard way
Helm creates all of these resources for you, saving you from having to delve into the details of Kube2iam all that much. But if you want to create the resources manually, it’s fairly simple. First off, create your RBAC roles and bindings:
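The manifests below are adapted from the kube2iam README: a ServiceAccount for the DaemonSet, plus a ClusterRole that can read pods and namespaces, and its binding.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube2iam
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube2iam
rules:
  - apiGroups: [""]
    resources: ["namespaces", "pods"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube2iam
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube2iam
subjects:
  - kind: ServiceAccount
    name: kube2iam
    namespace: kube-system
```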
Be sure to set the base ARN and region to match your needs, and update the version number to be the most current stable release.
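The DaemonSet itself looks roughly like this, again adapted from the kube2iam README. The image tag, account ID, and region are placeholders; kube2iam has no region flag of its own, so here the region is passed through the standard AWS SDK environment variable.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube2iam
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: kube2iam
  template:
    metadata:
      labels:
        name: kube2iam
    spec:
      serviceAccountName: kube2iam
      hostNetwork: true                # required to intercept metadata API traffic
      containers:
        - name: kube2iam
          image: jtblin/kube2iam:0.10.9   # placeholder: use the current stable tag
          args:
            - --base-role-arn=arn:aws:iam::123456789012:role/   # placeholder account ID
            - --iptables=true
            - --host-ip=$(HOST_IP)
            - --host-interface=eni+    # the AWS VPC CNI's interface pattern on EKS
            - --node=$(NODE_NAME)
          env:
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: AWS_DEFAULT_REGION
              value: us-east-1         # placeholder: the region your cluster runs in
          ports:
            - containerPort: 8181
              hostPort: 8181
              name: http
          securityContext:
            privileged: true           # needed to manage the node's iptables rules
```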
Example: External DNS
External DNS lets you create and update Route53 entries automatically for your ingresses and services. As you can imagine, you would not want just any running pod in your cluster to have Route53 access, so this is a perfect use case for Kube2iam: set your pods to use a role that has limited Route53 access.
Here’s a sample policy document we’d like to apply:
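The policy below mirrors the one in external-dns’s own AWS tutorial: it can change record sets in any hosted zone, and list zones and records. You can scope the hostedzone resource down further if only specific zones should be writable.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["route53:ChangeResourceRecordSets"],
      "Resource": ["arn:aws:route53:::hostedzone/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["route53:ListHostedZones", "route53:ListResourceRecordSets"],
      "Resource": ["*"]
    }
  ]
}
```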
We’ll attach that to a role called k8s-eks-cluster-r53-access. Prefixing your roles like this can be helpful to ensure that generically named roles do not inadvertently get used outside of your Kubernetes cluster. Depending on how you manage and govern roles, this may or may not be necessary for you.
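One easy-to-miss detail: kube2iam assumes roles via the worker nodes’ instance role, so k8s-eks-cluster-r53-access needs a trust relationship that permits this. A sketch, where the principal ARN is a placeholder for your cluster’s actual node instance role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/eks-node-instance-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```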
Here’s the corresponding external-dns deployment, which references the role we created:
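A sketch of that deployment follows; the image tag and domain filter are placeholders. Note that the iam.amazonaws.com/role annotation goes on the pod template, not the deployment’s own metadata:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
      annotations:
        iam.amazonaws.com/role: k8s-eks-cluster-r53-access
    spec:
      containers:
        - name: external-dns
          image: registry.opensource.zalan.do/teapot/external-dns:v0.5.9   # placeholder tag
          args:
            - --source=service
            - --source=ingress
            - --domain-filter=example.com   # placeholder domain
            - --provider=aws
```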
Note that we have omitted the RBAC configuration to keep this post focused on Kube2iam. You’ll need to create the appropriate ClusterRole, ClusterRoleBinding, and ServiceAccount objects if using RBAC. Alternatively, you can automate this with Helm using the following values.yml:
# Generally you want to explicitly set sources so that you can define DNS entries via both services and ingresses. Be mindful of certificate issues if overloading one ingress with names from multiple services.
# This will prevent external-dns from modifying zones you don't want it touching. Usually good to set this.
# Link in your AWS role
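In values.yml form, those comments correspond to something like this (the key names assume the stable external-dns chart of the era, and the domain is a placeholder):

```yaml
sources:
  - service
  - ingress

domainFilters:
  - example.com   # placeholder: the only zones external-dns may touch

podAnnotations:
  iam.amazonaws.com/role: k8s-eks-cluster-r53-access

rbac:
  create: true
```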
As you can see, it is not difficult to quickly get Kube2iam running, and it provides enormous benefit.