Kubernetes, EKS, IRSA and Terraform

By Andreas Spak
Published on 2023-01-14 (Last modified: 2023-08-10)

...

When I first started working with Kubernetes on EKS for production environments, I spent quite a long time trying to understand the relationship between EKS (the platform on which Kubernetes runs in AWS), IAM and Kubernetes RBAC. When you install a controller in Kubernetes that needs to interact with AWS services, such as EC2, that controller somehow has to obtain sufficient IAM permissions to perform whatever actions it needs on the respective EC2 instances. There are basically two ways to grant these permissions: either by attaching policies to the node's instance role, or by associating an IAM role with a Kubernetes service account. The first approach results in all pods running on a specific node having the same permissions, which could be fine, depending on context. The latter approach, associating roles with service accounts, or IRSA (IAM Roles for Service Accounts), offers the option to assign different policies to different service accounts. This way, one can achieve more fine-grained control over what IAM permissions each pod has.

This article will make an effort to explain, on a higher level, the relationship between IAM policies and roles, and Kubernetes service accounts. In the examples, I'm using Terraform and the AWS EKS Terraform module, which is something that most guides and articles I have found on this subject do not cover.


Let's dig in...

To demonstrate IRSA and Terraform, I will use the AWS Load Balancer Controller as an example. It makes a good example, since it clearly needs to interact with multiple AWS services.


The OIDC provider

First of all, the AWS documentation tells us to create an IAM OIDC provider for our cluster. In Kubernetes, pods authenticate with the Kubernetes API server using tokens. Since EKS hosts a public OIDC discovery endpoint, these tokens can be validated by AWS and exchanged for temporary IAM role credentials, which a pod can use to interact with AWS services. This is a simplified version, but it is enough to grasp the concepts, and why each EKS cluster needs an OIDC provider attached.
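
For reference, this is roughly what creating the provider by hand could look like in Terraform. This is an illustrative sketch only, assuming a cluster resource named aws_eks_cluster.this and the hashicorp/tls provider; the resource names are hypothetical.

# Illustrative sketch of a hand-made IAM OIDC provider for an EKS cluster.
# Assumes a cluster resource named aws_eks_cluster.this and the
# hashicorp/tls provider.

# Fetch the TLS certificate of the cluster's OIDC issuer, so we can pin
# its thumbprint on the provider.
data "tls_certificate" "eks" {
  url = aws_eks_cluster.this.identity[0].oidc[0].issuer
}

# Register the cluster's OIDC issuer as an IAM identity provider, with
# the global STS endpoint as the allowed audience.
resource "aws_iam_openid_connect_provider" "eks" {
  url             = aws_eks_cluster.this.identity[0].oidc[0].issuer
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.eks.certificates[0].sha1_fingerprint]
}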

Luckily, we don't have to do any of this ourselves: enabling the OIDC provider in EKS is fairly easy using the AWS EKS Terraform module.

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = var.cluster_name
  version         = "19.0"
  ...

  enable_irsa    = true
  create_kms_key = true
  kms_key_owners = ["arn:aws:iam::<account id>:role/<owner role>"]
}
  • enable_irsa - Create an IAM OIDC provider for the cluster, with the global STS endpoint as audience. This is what makes IRSA possible.
  • create_kms_key - Create a KMS key, used to encrypt Kubernetes secrets in the cluster.
  • kms_key_owners - List of IAM users or roles with full key permissions.


IRSA Terraform modules

Instead of creating your own custom IAM policies, which can be a daunting task, we can use a Terraform sub-module, available for some common controllers and add-ons. Let's look at how the AWS Load Balancer Controller IRSA sub-module is used.

module "load_balancer_controller_irsa_role" {
  source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"

  role_name                              = "load-balancer-controller"
  attach_load_balancer_controller_policy = true

  oidc_providers = {
    ex = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["kube-system:aws-load-balancer-controller"]
    }
  }
}

This module is pretty self-explanatory. Note that we pull in the OIDC provider ARN from the EKS module. Also, make sure that namespace_service_accounts contains a list of matching namespace and service account names. When provisioned, you will have a role named "load-balancer-controller" with the proper policies attached to it.
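
To understand what the sub-module does for us, it essentially builds an IAM role whose trust policy only lets the cluster's OIDC provider assume it, and only on behalf of the given service account. A rough, hand-rolled sketch of that trust relationship could look something like the following (resource names are illustrative, and the sub-module also attaches the actual load balancer controller policy, which is omitted here):

# Trust policy: only tokens issued by the cluster's OIDC provider for the
# aws-load-balancer-controller service account may assume this role.
data "aws_iam_policy_document" "lbc_assume_role" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [module.eks.oidc_provider_arn]
    }

    condition {
      test     = "StringEquals"
      variable = "${module.eks.oidc_provider}:sub"
      values   = ["system:serviceaccount:kube-system:aws-load-balancer-controller"]
    }
  }
}

resource "aws_iam_role" "lbc" {
  name               = "load-balancer-controller"
  assume_role_policy = data.aws_iam_policy_document.lbc_assume_role.json
}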


Note that you want to output the ARN of the IAM role created by this module. For example:

output "cluster_oidc_provider_arn" {
  description = "The ARN of the IAM role created."
  value       = module.load_balancer_controller_irsa_role.iam_role_arn
}

This will create a Terraform output with the ARN of the AWS Load Balancer Controller IRSA role created by the IRSA module. We need to add this ARN to the service account annotations later. That's it for the Terraform part; let's move on to installing and configuring the controller in Kubernetes.


Create a service account

Firstly, let's create a service account. I'm doing this "manually", outside of the Helm chart I use to install the controller in the next step.

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/name: aws-load-balancer-controller
  name: aws-load-balancer-controller
  namespace: kube-system


Install the controller

You can use the following Helm commands to install the AWS Load Balancer Controller in your cluster:

$ helm repo add eks https://aws.github.io/eks-charts
$ helm repo update
$ helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
    -n kube-system \
    --set clusterName=<cluster name> \
    --set serviceAccount.create=false \
    --set serviceAccount.name=aws-load-balancer-controller
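
If you prefer to keep everything in Terraform, the chart could also be installed with the Helm provider instead. A minimal sketch, assuming a configured "helm" provider pointing at the cluster:

# Install the AWS Load Balancer Controller chart from Terraform.
# Equivalent to the helm CLI commands above.
resource "helm_release" "aws_load_balancer_controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"

  set {
    name  = "clusterName"
    value = var.cluster_name
  }

  # We create the service account ourselves, so tell the chart not to.
  set {
    name  = "serviceAccount.create"
    value = "false"
  }

  set {
    name  = "serviceAccount.name"
    value = "aws-load-balancer-controller"
  }
}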


Annotate the service account with the role ARN

This is an important part. In order to map a service account to an AWS IAM role, we need to make Kubernetes aware of the role. We do this by annotating the service account in Kubernetes. Since the service account is already created, we need to update it. You could of course have added this annotation directly in the YAML file when we created the service account, but then again, you probably want to automate this process in a CI/CD tool.

$ kubectl annotate serviceaccount aws-load-balancer-controller --overwrite -n kube-system eks.amazonaws.com/role-arn=<role-arn>

Of course, a better approach would be to store the role ARN in AWS Secrets Manager or Parameter Store, and fetch the value in the CI/CD process, but for simplicity, I have cut away all the fancy bits and pieces in this article.
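
Alternatively, if you also manage Kubernetes resources with Terraform, the annotation could be set when the service account is created, using the Kubernetes provider, which removes the separate kubectl step entirely. A minimal sketch, assuming a configured "kubernetes" provider:

# Create the service account with the IRSA role ARN annotation in place,
# instead of applying YAML and annotating it afterwards.
resource "kubernetes_service_account" "aws_load_balancer_controller" {
  metadata {
    name      = "aws-load-balancer-controller"
    namespace = "kube-system"

    labels = {
      "app.kubernetes.io/component" = "controller"
      "app.kubernetes.io/name"      = "aws-load-balancer-controller"
    }

    annotations = {
      "eks.amazonaws.com/role-arn" = module.load_balancer_controller_irsa_role.iam_role_arn
    }
  }
}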


Summary

This has been a brief and pragmatic introduction to Kubernetes, Terraform and IRSA. I have explained how to enable the EKS OIDC provider in Terraform and how to utilize the IRSA Terraform sub-modules to create IAM roles, with the proper policies attached, for Kubernetes controllers that need to access AWS services. I have also given an example of how to install the actual controller, as well as how to annotate the Kubernetes service account with the correct IAM role ARN.




About the author



Andreas Spak

Andreas is a DevOps and AWS specialist at Spak Consultants. He is an evangelist for building self-service technologies for cloud platforms, in order to offer clients a better experience on their cloud journey. Andreas also has a strong focus on automation and container technologies.
