Kops AWS Infra Automation


This example project will help you create a kops cluster across multiple availability zones, limited to a single region.

This assumes you have the AWS CLI installed and an IAM user configured.

The IAM user to create the Kubernetes cluster must have the following permissions:

  • AmazonEC2FullAccess
  • AmazonRoute53FullAccess
  • AmazonS3FullAccess
  • IAMFullAccess
  • AmazonVPCFullAccess
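If you prefer to grant these from the command line, the managed policies above can be attached in a loop. A minimal sketch, assuming an IAM user named kops-admin (the user name is an assumption, not from this project; the script prints each command so you can review it before running it for real):

```shell
# Hypothetical IAM user name -- replace with your own.
USER_NAME=kops-admin

for policy in AmazonEC2FullAccess AmazonRoute53FullAccess \
              AmazonS3FullAccess IAMFullAccess AmazonVPCFullAccess; do
  # Print the attach command; drop the leading "echo" to actually run it.
  echo aws iam attach-user-policy \
    --user-name "$USER_NAME" \
    --policy-arn "arn:aws:iam::aws:policy/$policy"
done
```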


  1. Install Terraform (note: you need version 0.11.7)
  2. Install kops (we are using kops 1.8.1 for now)

For Mac

brew update && brew install kops


Or, with curl:

curl -Lo kops
chmod +x ./kops
sudo mv ./kops /usr/local/bin/

For Linux

wget -O kops
chmod +x ./kops
sudo mv ./kops /usr/local/bin/

  3. Install kubectl

For Mac

curl -LO
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

For Ubuntu

curl -LO
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
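Before moving on, it is worth confirming that all three tools ended up on your PATH (the version pins above, Terraform 0.11.7 and kops 1.8.1, still apply). A small sketch:

```shell
# Report which of the required tools are installed.
missing=0
for tool in terraform kops kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT FOUND"
    missing=$((missing + 1))
  fi
done
echo "missing tools: $missing"
```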

Getting started

Replace the domain_name default with your public hosted zone name:

vim example/
variable "domain_name" {
  default = ""
}

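Instead of editing the default in place, you can also supply the zone name through a terraform.tfvars file in the same directory. A sketch with a made-up domain (the value is a placeholder, not from this project):

```hcl
# example/terraform.tfvars
domain_name = "example.com"
```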
Edit the cluster details (node_asg_desired, instance_key_name, etc.):

vim example/

**** Edit the module according to your infra name ****

module "staging" {
  source                    = "../module"
  kubernetes_version        = "1.8.11"
  sg_allow_ssh              = "${}"
  sg_allow_http_s           = "${}"
  cluster_name              = "staging"
  cluster_fqdn              = "staging.${}"
  route53_zone_id           = "${}"
  kops_s3_bucket_arn        = "${aws_s3_bucket.kops.arn}"
  kops_s3_bucket_id         = "${}"
  vpc_id                    = "${}"
  instance_key_name         = "${var.key_name}"
  node_asg_desired          = 3
  node_asg_min              = 3
  node_asg_max              = 3
  master_instance_type      = "t2.medium"
  node_instance_type        = "m4.xlarge"
  internet_gateway_id       = "${}"
  public_subnet_cidr_blocks = ["${local.staging_public_subnet_cidr_blocks}"]
  kops_dns_mode             = "private"
}

If you want to force a single master (this can be used when a master per AZ is not required, or when running in a region with only two AZs):

vim module/ 

**** Set force_single_master to true if you want a single master ****

variable "force_single_master" {
  default = true
}

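For context, a flag like this is typically consumed with a Terraform 0.11-style conditional when sizing the master group. The sketch below is illustrative only; the variable and local names are assumptions, not this module's actual code:

```hcl
variable "availability_zones" {
  type    = "list"
  default = ["us-east-1a", "us-east-1b", "us-east-1c"]
}

# Hypothetical: run one master when forced, else one per AZ.
locals {
  master_count = "${var.force_single_master ? 1 : length(var.availability_zones)}"
}
```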
All good now. Run “terraform plan” to see if you get any errors; if everything is clean, just run “terraform apply” to build the cluster.

cd example
terraform plan

(The output will look something like this:)
  + module.staging.null_resource.delete_tf_files
      id:                                                 <computed>

Plan: 6 to add, 0 to change, 1 to destroy.


Point kubectl at the new master ELB, read from the Terraform state:

MASTER_ELB_CLUSTER1=$(terraform state show module.staging.aws_elb.master | grep dns_name | cut -f2 -d= | xargs)
kubectl config set-cluster --insecure-skip-tls-verify=true --server=https://$MASTER_ELB_CLUSTER1
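To see what the grep/cut/xargs pipeline is doing, here it is run against a single sample line of `terraform state show` output (the hostname below is made up):

```shell
# A made-up state line in the "key = value" format Terraform prints.
line='dns_name = internal-api-staging-123456.us-east-1.elb.amazonaws.com'

# cut -f2 -d= keeps everything after '='; xargs trims the whitespace.
MASTER_ELB_CLUSTER1=$(echo "$line" | grep dns_name | cut -f2 -d= | xargs)
echo "$MASTER_ELB_CLUSTER1"
```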

And then test:

kubectl cluster-info
Kubernetes master is running at
KubeDNS is running at

kubectl get nodes
NAME   STATUS    ROLES     AGE       VERSION
       Ready     master    9m        v1.8.11
       Ready     node      3m        v1.8.11
       Ready     node      27s       v1.8.11
       Ready     node      2m        v1.8.11

Credits: Original code is taken from here.