EKS Auto Mode with Terraform
Amazon EKS Auto Mode fully automates Kubernetes cluster management for compute, storage, and networking on AWS. It also comes with the Karpenter autoscaler pre-installed, with a default NodePool that provisions amd64 (x86_64) instances as required.
Why opt for EKS Auto Mode
An EKS cluster requires many add-ons and features to be installed for full functionality, and it is challenging to maintain the versions and compatibility of each of these components. With EKS Auto Mode, all of these are managed by AWS, making the cluster much easier to manage and upgrade.
| Add-ons/features managed by AWS in EKS Auto Mode | |
|---|---|
| VPC CNI | Load Balancer Controller |
| EKS Pod Identity | kube-proxy |
| Ingress controller (recommend using Gateway API instead) | Karpenter autoscaler with default NodePool |
EKS Auto Cluster build
With EKS Auto Mode, creating a cluster is much less complicated and has minimal requirements. Terraform code for this example can be found in my repo aws-eks-terraform -> EKS-Cluster-auto-mode
- VPC with two subnets, and tags on each subnet
- EKS Auto Mode cluster build and access grant (API authentication)
Step 1. Create VPC
The VPC must be created with at least two subnets, and with the following subnet tags so that LoadBalancers are created in the correct subnets.
public_subnet_tags = {
"kubernetes.io/role/elb" = "1" # Tag for external LoadBalancer
}
private_subnet_tags = {
"kubernetes.io/role/internal-elb" = "1" # Tag for internal LoadBalancer
}
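Putting it together, the VPC can be created with the community terraform-aws-modules/vpc/aws module (referenced later in this example as module.vpc). This is a minimal sketch; the name, CIDR ranges, and AZs are illustrative assumptions:
vpc.tf
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  # Illustrative values; adjust to your environment
  name = "eks-auto-demo"
  cidr = "10.0.0.0/16"

  azs             = ["eu-west-1a", "eu-west-1b"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.11.0/24", "10.0.12.0/24"]

  # NAT gateway so nodes in private subnets can reach the internet
  enable_nat_gateway = true
  single_nat_gateway = true

  # Tag for external LoadBalancers
  public_subnet_tags = {
    "kubernetes.io/role/elb" = "1"
  }
  # Tag for internal LoadBalancers
  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = "1"
  }
}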
Step 2. Provision the EKS Auto cluster
Once the VPC is created you are ready to provision the EKS cluster. First you need to create the necessary IAM roles, then create the EKS cluster.
IAM Roles
You must create IAM roles and trust policies before creating the EKS cluster: a Node IAM role with the AmazonEKSWorkerNodeMinimalPolicy and AmazonEC2ContainerRegistryPullOnly policies attached, and a Cluster IAM role with the AmazonEKSClusterPolicy, AmazonEKSComputePolicy, AmazonEKSLoadBalancingPolicy, AmazonEKSBlockStoragePolicy, and AmazonEKSNetworkingPolicy policies attached.
iam.tf
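# Node role: assumed by the EC2 instances that EKS Auto Mode (Karpenter) launches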
resource "aws_iam_role" "node" {
name = "eks-auto-node-example"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = ["sts:AssumeRole"]
Effect = "Allow"
Principal = {
Service = "ec2.amazonaws.com"
}
},
]
})
}
resource "aws_iam_role_policy_attachment" "node_AmazonEKSWorkerNodeMinimalPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodeMinimalPolicy"
role = aws_iam_role.node.name
}
resource "aws_iam_role_policy_attachment" "node_AmazonEC2ContainerRegistryPullOnly" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly"
role = aws_iam_role.node.name
}
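# Cluster role: assumed by the EKS control plane; Auto Mode also requires sts:TagSession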
resource "aws_iam_role" "cluster" {
name = "eks-cluster-eks-auto-demo"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"sts:AssumeRole",
"sts:TagSession"
]
Effect = "Allow"
Principal = {
Service = "eks.amazonaws.com"
}
},
]
})
}
resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.cluster.name
}
resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSComputePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSComputePolicy"
role = aws_iam_role.cluster.name
}
resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSBlockStoragePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSBlockStoragePolicy"
role = aws_iam_role.cluster.name
}
resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSLoadBalancingPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSLoadBalancingPolicy"
role = aws_iam_role.cluster.name
}
resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSNetworkingPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSNetworkingPolicy"
role = aws_iam_role.cluster.name
}
EKS Cluster
When creating an EKS Auto Mode cluster, the compute_config, kubernetes_network_config (elastic_load_balancing), and storage_config (block_storage) blocks must all be enabled. This instructs AWS to configure Karpenter, VPC CNI, the Load Balancer Controller, and the EBS CSI driver.
eks-auto.tf
resource "aws_eks_cluster" "eks-auto-demo" {
name = "eks-auto-demo"
access_config {
authentication_mode = "API"
}
role_arn = aws_iam_role.cluster.arn
version = "1.31"
bootstrap_self_managed_addons = false
compute_config {
enabled = true
node_pools = ["general-purpose"]
node_role_arn = aws_iam_role.node.arn
}
kubernetes_network_config {
elastic_load_balancing {
enabled = true
}
}
storage_config {
block_storage {
enabled = true
}
}
vpc_config {
endpoint_private_access = true
endpoint_public_access = true
subnet_ids = module.vpc.private_subnets
}
# Ensure that IAM Role permissions are created before and deleted
# after EKS Cluster handling. Otherwise, EKS will not be able to
# properly delete EKS managed EC2 infrastructure such as Security Groups.
depends_on = [
aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy,
aws_iam_role_policy_attachment.cluster_AmazonEKSComputePolicy,
aws_iam_role_policy_attachment.cluster_AmazonEKSBlockStoragePolicy,
aws_iam_role_policy_attachment.cluster_AmazonEKSLoadBalancingPolicy,
aws_iam_role_policy_attachment.cluster_AmazonEKSNetworkingPolicy,
]
}
EKS Access entry
By default, the creator of an EKS cluster does not have cluster admin permissions. You must add the necessary users with aws_eks_access_entry and assign them an access policy for the cluster. In this case I am granting the full AmazonEKSClusterAdminPolicy to the user creating the cluster.
access_entry.tf
data "aws_caller_identity" "current" {}
resource "aws_eks_access_entry" "aws_eks_access_entry" {
cluster_name = aws_eks_cluster.eks-auto-demo.name
principal_arn = data.aws_caller_identity.current.arn
type = "STANDARD"
}
resource "aws_eks_access_policy_association" "aws_eks_access_policy_association" {
cluster_name = aws_eks_cluster.eks-auto-demo.name
policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
principal_arn = data.aws_caller_identity.current.arn
access_scope {
type = "cluster"
# namespaces = ["example-namespace"]
}
}
Apply Terraform code
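The standard Terraform workflow applies here:
# Initialise providers/modules, review the plan, then create the resources
terraform init
terraform plan
terraform apply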
Connect to EKS cluster and validate
aws eks --profile labs --region eu-west-1 update-kubeconfig --name eks-auto-demo
kubectl cluster-info
kubectl get nodepool,ingressclass,nodes
You will see the default NodePool, but no nodes will be created until an application is deployed. The Ingress controller is also installed, but you must create an IngressClass before you can make use of it.
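For example, a minimal IngressClass pointing at the EKS Auto Mode ALB controller can be applied inline; the class name alb here is an arbitrary choice:
# Create an IngressClass handled by the built-in EKS Auto Mode controller
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
spec:
  controller: eks.amazonaws.com/alb
EOF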
Deploy a sample app and you will notice Karpenter spinning up a new node, after which the app gets scheduled onto it.
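A quick way to test this (the image and replica count are arbitrary):
# Create a simple deployment, then watch Karpenter provision a node for it
kubectl create deployment nginx --image=nginx --replicas=2
kubectl get nodes -w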
Why should you NOT use EKS default Ingress
EKS Auto Mode comes with an Ingress controller, however it is better to run a Gateway API implementation such as Envoy Gateway. By default, when you create an Ingress resource, the AWS Ingress controller creates the necessary TargetGroup and configures it to forward traffic directly to your Pod's port. Health checks for the TargetGroup are traditional load balancer checks and are much slower than Kubernetes probes. This can result in connection drops during releases and scale-up or scale-down events, as the TargetGroup takes longer to detect changes to the pods.
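If you go the Gateway API route instead, Envoy Gateway can be installed with Helm. This sketch follows the upstream install instructions; the version shown is an assumption, so check the Envoy Gateway releases for the current one:
# Install Envoy Gateway (includes the Gateway API CRDs) into its own namespace
helm install eg oci://docker.io/envoyproxy/gateway-helm \
  --version v1.2.1 \
  -n envoy-gateway-system --create-namespace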