How to set up AWS Load Balancing with Terraform

By Mateo Spak Jenbergsen
Published on 2021-11-01 (Last modified: 2023-08-09)

...

For cost efficiency in AWS, it is important to let one load balancer serve multiple targets, such as multiple EC2 instances or a cluster of services in ECS. In this article we will look at how to use one load balancer for multiple applications running in ECS. We will use Terraform to provision the infrastructure to AWS, so if you don't already have Terraform set up with AWS, please have a look at Terraform with AWS first.

 

Our use case

In our case, we run a microservice architecture with three different apps in ECS that we want to place behind one load balancer. For this article I will create an admin, a client and an API application that all use the same Application Load Balancer (ALB). Here is a quick summary of the steps we are going to take to accomplish our goal.

  1. First we create a single Application Load Balancer.
  2. Secondly, we'll create a target group for each of our applications.
  3. Then we'll create an HTTP listener that redirects to HTTPS and an HTTPS listener, and attach two listener rules to the HTTPS listener.
  4. Lastly, we'll attach our TLS certificate validations.

 

Different types of load balancers

There are three different types of load balancers we can choose from:

  • Classic Load Balancer
  • Application Load Balancer
  • Network Load Balancer

For this article I will show how to set up an Application Load Balancer (ALB). An Application Load Balancer, or ALB, in AWS, is a single point of contact that distributes incoming application traffic across multiple targets, for example services running in ECS. One important task for a load balancer in a modern application architecture is to terminate TLS, i.e. to handle your TLS certificate and decrypt incoming traffic on port 443, before forwarding it as plain HTTP to your application.

ALBs work very well for container-based applications and microservices, e.g. Amazon ECS and Docker. An ALB supports several helpful features, such as load balancing based on the path or hostname of the URL. Another reason we use an ALB is that it can front multiple applications with a single load balancer. If we used e.g. a Classic Load Balancer, we would need one load balancer per application.

 

How to create a Load Balancer

First, we create a file, which we'll call alb.tf, and add the following content.

resource "aws_lb" "my_lb" {
  name               = "my-lb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb-sg.id]
  subnets            = module.vpc.public_subnets

  enable_deletion_protection = false
}

 

This sets up our application load balancer. It uses the following arguments,

  • name - sets the name of our load balancer.
  • internal - if set to true, the ALB gets only private IP addresses and is reachable only from within the VPC. We want the ALB to be accessible from outside our VPC, so we set this to false.
  • load_balancer_type - the type of load balancer we want to use. 
  • security_groups - a list of security group IDs to assign to the load balancer. We'll create these in the next step.
  • subnets - a list of subnet IDs to attach to the load balancer.
  • enable_deletion_protection - if set to true, Terraform will refuse to destroy the load balancer.

 

Security group

We now want to add our custom security group rules to our ALB. We'll create a file called security_groups.tf, and add the following.

resource "aws_security_group" "alb-sg" {
  name   = "alb-sg"
  vpc_id = module.vpc.vpc_id
  description = "Allow inbound traffic from port 80 and 443, to the ALB"
 
  ingress {
   protocol         = "tcp"
   from_port        = 80
   to_port          = 80
   cidr_blocks      = ["0.0.0.0/0"]
   ipv6_cidr_blocks = ["::/0"]
  }
 
  ingress {
   protocol         = "tcp"
   from_port        = 443
   to_port          = 443
   cidr_blocks      = ["0.0.0.0/0"]
   ipv6_cidr_blocks = ["::/0"]
  }
 
  egress {
   protocol         = "-1"
   from_port        = 0
   to_port          = 0
   cidr_blocks      = ["0.0.0.0/0"]
   ipv6_cidr_blocks = ["::/0"]
  }
}

 

This allows incoming traffic only on ports 80 and 443; all outgoing traffic is allowed. I won't explain each of these arguments here, since the Terraform aws_security_group documentation explains all of the arguments used.

 

Target groups

The next step is to add our target groups. A target group is used to route requests to registered targets.

resource "aws_alb_target_group" "admin_app_lb_tg" {
  name        = "admin-app-tg"
  port        = 80
  protocol    = "HTTP"
  vpc_id      = module.vpc.vpc_id
  target_type = "ip"

  health_check {
    healthy_threshold   = "3"
    interval            = "30"
    protocol            = "HTTP"
    matcher             = "200"
    timeout             = "5"
    path                = "/"
    unhealthy_threshold = "2"
  }

  depends_on = [aws_lb.my_lb]
}

 

We need one target group per application we want to reach through our load balancer. These are the arguments we use.

  • name - is the name of our target group. If not defined, Terraform will assign a random, unique name.
  • port - must be set to whatever port our application is running on. Is required when target_type is set to ip.
  • protocol - the protocol to use when routing traffic.
  • vpc_id - the id of our vpc.
  • target_type - the type of target we use when registering targets. We want it to be ip since we register our applications by IP address.

We also define our health check in our target group. 

  • healthy_threshold - the number of consecutive health check successes required before an unhealthy target is considered healthy. 
  • interval - the number of seconds between each health check.
  • protocol - the protocol used to connect to the target. 
  • matcher - the HTTP response code(s) required from the target to consider it healthy, e.g. 200.
  • timeout - the amount of time, in seconds, during which no response means a failed health check.
  • path - is the destination for the health check request. This is set inside of our application.
  • unhealthy_threshold - the number of consecutive health check failures before considering the target unhealthy.

Lastly, we use the depends_on argument, which creates an explicit dependency between our ALB and our target group. This ensures that the target group is only created after the ALB; if the ALB fails to create, Terraform won't create the target group. I only present one target group here, using my admin app as the example, but we have to create one for each application we want to attach to our ALB.
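
Instead of copying the block once per application, the three target groups can also be stamped out with for_each. A minimal sketch, assuming the hypothetical app names admin-app, client-app and api-app (the rest of this article uses separate resources, so treat this as an optional variation):

# Sketch: one target group per app, using for_each over hypothetical app names.
resource "aws_alb_target_group" "app_lb_tg" {
  for_each = toset(["admin-app", "client-app", "api-app"])

  name        = "${each.key}-tg"
  port        = 80
  protocol    = "HTTP"
  vpc_id      = module.vpc.vpc_id
  target_type = "ip"

  health_check {
    healthy_threshold   = 3
    interval            = 30
    protocol            = "HTTP"
    matcher             = "200"
    timeout             = 5
    path                = "/"
    unhealthy_threshold = 2
  }

  depends_on = [aws_lb.my_lb]
}

The individual groups are then referenced as, e.g., aws_alb_target_group.app_lb_tg["client-app"].arn.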

 

Load Balancer Listeners

Now we have to create two listeners: one for HTTP and one for HTTPS. We do this because we want to redirect all incoming HTTP traffic to HTTPS.

resource "aws_alb_listener" "listener_http" {
  load_balancer_arn = aws_lb.my_lb.id
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "redirect"

    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}

resource "aws_alb_listener" "listener_https" {
  load_balancer_arn = aws_lb.my_lb.id
  port              = 443
  protocol          = "HTTPS"

  ssl_policy = "ELBSecurityPolicy-2016-08"

  certificate_arn = <admin-app-tls-certificate-arn>

  default_action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.admin_app_lb_tg.arn
  }
}

The first ALB listener's only task is to redirect traffic to port 443. It has the following arguments.

  • load_balancer_arn - is the arn of the load balancer.
  • port - is the port on which the load balancer is listening. 
  • protocol - is the protocol for connections from clients to the load balancer.

In default_action we set type to redirect. Lastly, we set the port, protocol and status_code for the redirect.

We then set up our listener for HTTPS requests, which will route requests to our target group. It uses the following arguments.

  • load_balancer_arn - is the arn of the load balancer.
  • port - is the port on which the load balancer is listening.
  • protocol - is the protocol for connections from clients to the load balancer.
  • ssl_policy - is the name of the SSL policy we want the listener to use. This is required since we use HTTPS on port 443.
  • certificate_arn - is the default SSL server certificate the listener should use. This argument is required. I've chosen to use my admin application's certificate for this.

In default_action we tell it to forward to our target group using its arn. The listener requires one default target to route incoming traffic to, so I chose to use my admin-application as default.

 

Listener rules

Now we need to set up listener rules for our other two applications, client and API, and attach them to our HTTPS listener. 

resource "aws_lb_listener_rule" "client-app" {
  listener_arn = aws_alb_listener.listener_https.arn

  action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.client_lb_tg.arn
  }

  condition {
    host_header {
      values = ["my-client-app-hostname"] # E.g. "my-client-app.com"
    }
  }
}

 

I only show how to do this for my client application, so you also need to do this for the API application. The listener rule block uses the following arguments.

  • listener_arn - is the arn of the listener to which to attach the rule.

Action block,

  • type - the type of routing action.
  • target_group_arn - is the arn of the target group to which to route the incoming traffic. 

Condition block,

Here we set the host_header condition for our rule. This is a list of hostnames to match against the Host header of incoming requests; matching requests are routed to the target group in the action block.
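
The rule for the API application follows the same pattern. A sketch, assuming a hypothetical api_lb_tg target group and hostname:

resource "aws_lb_listener_rule" "api-app" {
  listener_arn = aws_alb_listener.listener_https.arn

  action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.api_lb_tg.arn
  }

  condition {
    host_header {
      values = ["my-api-app-hostname"] # E.g. "api.my-app.com"
    }
  }
}

A priority argument can optionally be set on each rule; without it, AWS assigns the next available priority in the order the rules are created.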

 

Listener certificate

Since we want our applications to use TLS encryption, we need certificates. AWS offers a service called AWS Certificate Manager (ACM) to help with this process. I will not go into depth on how this works, but you can read more about it in the AWS ACM documentation. Terraform does not allow us to add a TLS certificate directly to a listener rule, the way we did in the HTTPS listener for our admin app. Therefore we need to create a listener_certificate resource. But before we can use our certificate, we need to represent a successful validation of our ACM certificate. We do this by adding the following to our code.

resource "aws_acm_certificate_validation" "client-app_tls_cert_arn" {
  certificate_arn = <client-app-tls-certificate-arn>
}

 

The last step is to attach the TLS certificate to our listener rule. 

resource "aws_lb_listener_certificate" "client-app_tls_certificate" {
  listener_arn    = aws_alb_listener.listener_https.arn
  certificate_arn = aws_acm_certificate_validation.client-app_tls_cert_arn.certificate_arn
}
  • listener_arn - the arn of the listener to which to attach the certificate. 
  • certificate_arn - the arn of the certificate to attach to the listener. 

I only show one example of this, for my client app, but you need to do this for each listener rule you want to use with TLS validation. 
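
For the API application, the same pair of resources would look like this. A sketch; as with the client app, the certificate ARN placeholder is something you have to supply yourself:

resource "aws_acm_certificate_validation" "api-app_tls_cert_arn" {
  certificate_arn = <api-app-tls-certificate-arn>
}

resource "aws_lb_listener_certificate" "api-app_tls_certificate" {
  listener_arn    = aws_alb_listener.listener_https.arn
  certificate_arn = aws_acm_certificate_validation.api-app_tls_cert_arn.certificate_arn
}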

 

Summary

Now we've successfully created an Application Load Balancer that fronts our three applications. It balances the traffic load and redirects all HTTP traffic to HTTPS.
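
To point DNS records at the new load balancer, it can be handy to expose its DNS name as a Terraform output. A small optional addition to alb.tf, assuming the aws_lb.my_lb resource from earlier:

output "alb_dns_name" {
  description = "DNS name of the application load balancer"
  value       = aws_lb.my_lb.dns_name
}

After terraform apply, the DNS name is printed and can be used in, for example, a Route 53 alias record for each of the three hostnames.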

 

 




About the author



Mateo Spak Jenbergsen

Mateo is a DevOps engineer at Spak Consultants, with a strong focus on AWS, Terraform and container technologies. He has a strong passion for pushing the limits of building software on cloud platforms.
