Setting Up Kube2iam in EKS: A Step-by-Step Guide
Kube2iam lets you attach IAM roles to running pods in your Kubernetes cluster. It is a bit confusing to install at first, but it is actually trivially easy to use once you understand how it works. This post shows how to get Kube2iam up and running in EKS, first using Helm (to focus on the EKS-specific parts) and then without Helm for completeness.
Kube2iam with Helm
Helm is a template engine gone wild for Kubernetes, giving you a reasonable way to maintain your Kubernetes configs in version control without having them be hopelessly out of sync with what’s actually running. We won’t get into Helm too much here, but suffice it to say, it’s easy enough to set up and install. If you’re running Kubernetes for any sort of meaningful project, you should be using something like Helm.
To install Kube2iam via Helm, you first need to create a values.yml file that sets a handful of values. Fortunately, not much needs to change:
# Define your region. IAM is global, but actually attaching the role to your
# running pod requires knowing the region that pod runs in.
aws:
  region: "us-east-1"

# Replace the account number and add an optional role prefix to restrict the
# roles your pods can assume
extraArgs:
  base-role-arn: arn:aws:iam::012345678910:role/

# These settings are all that is required for EKS (unless you've decided to
# install Calico, but if you did, you hopefully know what you're doing)
host:
  iptables: true
  interface: eni+

# If your cluster is running RBAC (it is, right?), leave this alone
rbac:
  create: true
Now, use helm to “install” your chart:
helm install --name CLUSTERNAME-kube2iam -f values.yml \
  --kube-context=arn:aws:eks:us-east-1:012345678910:cluster/eks-cluster \
  stable/kube2iam
It’s as simple as that.
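If you want to confirm that the chart actually put a Kube2iam pod on every node, a quick check looks something like this (the app=kube2iam label is an assumption about the chart's default labels):

kubectl --kube-context=arn:aws:eks:us-east-1:012345678910:cluster/eks-cluster \
  get daemonset,pods -l app=kube2iam

You should see one kube2iam pod per worker node.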
Now that kube2iam is configured, you’ll need to configure your pods to ask for the role to be applied. To do so, set the iam.amazonaws.com/role annotation in the pod template’s metadata. With most Helm charts, you simply need to add something like this to your values.yml:
deployment:
  annotations:
    iam.amazonaws.com/role: "k8s-eks-cluster-iam-role"
Other charts may use this syntax:
podAnnotations:
  iam.amazonaws.com/role: "k8s-eks-cluster-iam-role"
When the annotation is applied to the deployment’s pod template, every pod created underneath the deployment picks it up automatically. The same behavior applies to replica sets and daemon sets.
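If you’re not using a chart at all, the same annotation goes directly on the pod template in your manifest. A minimal sketch (the name and image here are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      annotations:
        # Kube2iam reads this annotation when the pod calls the AWS APIs
        iam.amazonaws.com/role: "k8s-eks-cluster-iam-role"
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0.0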
Note that you do not need to specify the full ARN of the role, just the role name; the base ARN you configured in the Kube2iam chart is prefixed automatically. You can skip the base ARN and use full role ARNs in your annotations instead, but that makes it harder to port your configs from account to account. That’s no fun.
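For example, with the base-role-arn from the values.yml above, these two annotation values resolve to the same role; only the first survives a copy to another account unchanged:

iam.amazonaws.com/role: "k8s-eks-cluster-iam-role"
# the same thing, fully qualified:
# iam.amazonaws.com/role: "arn:aws:iam::012345678910:role/k8s-eks-cluster-iam-role"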
Kube2iam the hard way
Helm creates all of the resources for you, saving you from having to delve into the details of Kube2iam all that much. But if you want to create the resources manually, it’s fairly simple. First, create your RBAC role and binding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks-cluster-kube2iam
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  - pods
  verbs:
  - list
  - watch
  - get
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-cluster-kube2iam
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: eks-cluster-kube2iam
subjects:
- kind: ServiceAccount
  name: eks-cluster-kube2iam
  namespace: default
And, create a ServiceAccount to run as:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-cluster-kube2iam
  namespace: default
Finally, create a DaemonSet (Kube2iam needs to run on every node in your cluster):
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: eks-cluster-kube2iam
  namespace: default
  labels:
    app: kube2iam
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: kube2iam
  template:
    metadata:
      labels:
        app: kube2iam
    spec:
      containers:
      - args:
        - --host-interface=eni+
        - --node=$(NODE_NAME)
        - --host-ip=$(HOST_IP)
        - --iptables=true
        - --base-role-arn=arn:aws:iam::012345678910:role/
        - --app-port=8181
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: AWS_DEFAULT_REGION
          value: us-east-1
        image: jtblin/kube2iam:0.10.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8181
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1
        name: kube2iam
        ports:
        - containerPort: 8181
          hostPort: 8181
          protocol: TCP
        securityContext:
          privileged: true
      dnsPolicy: ClusterFirst
      hostNetwork: true
      serviceAccount: eks-cluster-kube2iam
Be sure to set the base ARN and region to match your needs, and update the version number to be the most current stable release.
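Once the DaemonSet is up, you can sanity-check it from any pod that carries a role annotation by hitting the EC2 metadata credentials endpoint, which Kube2iam intercepts. The pod name below is a placeholder, and this assumes the container image ships with curl:

kubectl exec -it some-annotated-pod -- \
  curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

If everything is wired up, the response is the role name from the pod’s annotation rather than your node’s instance profile.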
Example: External DNS
External DNS lets you create and update Route53 entries automatically for your ingresses and services. As you can imagine, you would not want every running pod in your cluster to have Route53 access, so this is a perfect use case for Kube2iam: give the External DNS pods a role that grants only the limited Route53 access they need.
Here’s a sample policy document we’d like to apply:
{ "Version": "2012-10-17", "Statement": [ { "Sid": "Route53UpdateZones", "Effect": "Allow", "Action": "route53:ChangeResourceRecordSets", "Resource": "arn:aws:route53:::zonename/*" }, { "Sid": "Route53ListZones", "Effect": "Allow", "Action": [ "route53:ListResourceRecordSets", "route53:ListHostedZones" ], "Resource": "*" } ] }
We’ll attach that to a role called k8s-eks-cluster-r53-access. Prefixing your roles like this can be helpful to ensure that generically named roles do not inadvertently get used outside of your Kubernetes cluster. Depending on how you manage and govern roles, this may or may not be necessary for you.
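One prerequisite the policy alone doesn’t cover: Kube2iam assumes roles on behalf of your pods from the worker nodes, so k8s-eks-cluster-r53-access needs a trust policy that allows your node instance role to assume it (and the node role itself needs permission to call sts:AssumeRole). The node role ARN below is a placeholder for whatever your EKS worker nodes actually use:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::012345678910:role/eks-cluster-worker-node-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}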
Here’s the corresponding external-dns deployment, which references the role we created:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: external-dns
  name: example-external-dns
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      annotations:
        iam.amazonaws.com/role: k8s-eks-cluster-r53-access
      labels:
        app: external-dns
    spec:
      containers:
      - args:
        - --log-level=info
        - --domain-filter=zone1.com
        - --domain-filter=otherzone.net
        - --policy=upsert-only
        - --provider=aws
        - --registry=txt
        - --source=ingress
        - --source=service
        env:
        - name: AWS_DEFAULT_REGION
          value: us-east-1
        image: registry.opensource.zalan.do/teapot/external-dns:v0.5.9
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 7979
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: external-dns
        ports:
        - containerPort: 7979
          protocol: TCP
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: example-external-dns
      terminationGracePeriodSeconds: 30
And the related service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: external-dns
  name: example-external-dns
  namespace: default
spec:
  ports:
  - name: http
    port: 7979
    protocol: TCP
    targetPort: 7979
  selector:
    app: external-dns
  sessionAffinity: None
  type: ClusterIP
Note that we have omitted the RBAC configuration to keep this post focused on Kube2iam. You’ll need to create the appropriate ClusterRole, ClusterRoleBinding, and ServiceAccount objects if you are using RBAC. Alternatively, you can let Helm create them for you.
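If you install External DNS from its Helm chart, a values.yml along these lines should cover both the RBAC objects and the Kube2iam annotation. The keys below are a sketch based on the stable/external-dns chart’s documented values, so double-check them against the chart version you’re using:

provider: aws
aws:
  region: us-east-1
domainFilters:
  - zone1.com
  - otherzone.net
policy: upsert-only
sources:
  - ingress
  - service
podAnnotations:
  iam.amazonaws.com/role: "k8s-eks-cluster-r53-access"
rbac:
  create: true

As you can see, it is not difficult to quickly get Kube2iam running, and it provides enormous benefit.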