Creating the EKS Cluster
Follow the instructions below to define the EKS cluster resource requirements and then create the cluster based on your specifications.
- For integration with Anzo Version 5.1.5 or earlier, Kubernetes version 1.17 is required.
- For integration with Anzo Version 5.1.6 or later, Kubernetes versions 1.17, 1.18, and 1.19 are supported.
See Amazon EKS Kubernetes versions in the EKS documentation for details about the available cluster versions.
- Define the EKS Cluster Requirements
- (Optional) Define the IAM Role for K8s Service Accounts Requirements
- Create the EKS Cluster
Define the EKS Cluster Requirements
The first step in creating the K8s cluster is to define the infrastructure specifications. The k8s_cluster.conf file in the eksctl/conf.d directory is a sample cluster configuration file that you can use as a template, or you can edit the file directly. The contents of k8s_cluster.conf are shown below. Descriptions of the cluster parameters follow the contents.
# AWS Configuration parameters
REGION="<region>"
AvailabilityZones="<zones>"
TAGS="<tags>"
# Networking configuration
VPC_ID="<vpc-id>"
VPC_CIDR="<vpc-cidr>"
NAT_SUBNET_CIDRS="<nat-subnet-cidr>"
PUBLIC_SUBNET_CIDRS="<public-subnet-cidr>"
PRIVATE_SUBNET_CIDRS="<private-subnet-cidr>"
VPC_NAT_MODE="<nat-mode>"
WARM_IP_TARGET="<warm-ip-target>"
PUBLIC_ACCESS_CIDRS="<public-access-cidrs>"
ALLOW_NETWORK_CIDRS="<allow-network-cidrs>"
# EKS control plane configuration
CLUSTER_NAME="<name>"
CLUSTER_VERSION="<version>"
ENABLE_PRIVATE_ACCESS=<resources-vpc-config endpointPrivateAccess>
ENABLE_PUBLIC_ACCESS=<resources-vpc-config endpointPublicAccess>
CNI_VERSION="<cni-version>"
# Logging types: ["api","audit","authenticator","controllerManager","scheduler"]
ENABLE_LOGGING_TYPES="<logging-types>"
DISABLE_LOGGING_TYPES="<logging-types>"
# Common parameters
WAIT_DURATION=<wait-duration>
WAIT_INTERVAL=<wait-interval>
STACK_CREATION_TIMEOUT="<timeout>"
REGION
The AWS region for the EKS cluster. For example, us-east-1.
AvailabilityZones
A space-separated list of the Availability Zones in which you want to make the EKS cluster highly available. To ensure that the AWS EKS service can maintain high availability, you can list up to three Availability Zones. For example, us-east-1a us-east-1b.
TAGS
A comma-separated list of any labels that you want to add to the EKS cluster resources. Tags are optional key/value pairs that you define for categorizing resources.
VPC_ID
The ID of the VPC to provision the cluster into. Typically this value is the ID for the VPC that Anzo is deployed in. For example, vpc-0dd06b24c819ec3e5.
If you want eksctl to create a new VPC, you can leave this value blank. However, after deploying the EKS cluster, you must configure the new VPC to make it routable from the Anzo VPC.
VPC_CIDR
The CIDR block to use for the VPC. For example, 10.107.0.0/16.
Supply this value even if VPC_ID is not set and a new VPC will be created.
NAT_SUBNET_CIDRS
A space-separated list of the CIDR blocks for the public subnets that will be used by the NAT gateway. For example, 10.107.0.0/24 10.107.5.0/24.
The number of CIDR blocks should equal the number of specified AvailabilityZones if you want the NAT gateway to be highly available.
PUBLIC_SUBNET_CIDRS
A space-separated list of the CIDR blocks for the public subnets. For example, 10.107.1.0/24 10.107.2.0/24.
PRIVATE_SUBNET_CIDRS
A space-separated list of the CIDR blocks for the private subnets. For example, 10.107.3.0/24 10.107.4.0/24.
VPC_NAT_MODE
The NAT mode for the VPC. Valid values are "HighlyAvailable", "Single", or "Disable". Cambridge Semantics recommends that you set this value to HighlyAvailable.
WARM_IP_TARGET
Specifies the "warm pool" or number of free IP addresses to keep available for pod assignment on each node so that there is less time spent waiting for IP addresses to be assigned when a pod is scheduled. Cambridge Semantics recommends that you set this value to 8.
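This setting corresponds to the WARM_IP_TARGET environment variable on the VPC CNI plugin's aws-node DaemonSet, and the configuration value is applied for you during cluster creation. For context only, the equivalent manual change on a running cluster would look like the following sketch (illustrative, not a required step; it requires kubectl access to the cluster):

```shell
# Set the CNI warm pool size on a running cluster. The deployment
# scripts apply WARM_IP_TARGET for you; this is shown only for context.
kubectl set env daemonset/aws-node -n kube-system WARM_IP_TARGET=8

# Inspect the environment variables on the aws-node container to verify
kubectl get daemonset aws-node -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].env}'
```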
PUBLIC_ACCESS_CIDRS
A comma-separated list of the CIDR blocks that can access the K8s API server over the public endpoint.
ALLOW_NETWORK_CIDRS
A comma-separated list of the CIDR blocks that can access the K8s API over port 443.
CLUSTER_NAME
Name to give the EKS cluster. For example, csi-k8s-cluster.
CLUSTER_VERSION
Kubernetes version of the EKS cluster.
- For integration with Anzo Version 5.1.5 or earlier, Kubernetes version 1.17 is required.
- For integration with Anzo Version 5.1.6 or later, Kubernetes versions 1.17, 1.18, and 1.19 are supported.
See Amazon EKS Kubernetes versions in the EKS documentation for details about the available cluster versions.
ENABLE_PRIVATE_ACCESS
Indicates whether to enable private (VPC-only) access to the EKS cluster endpoint. This parameter accepts a "true" or "false" value and maps to the EKS --resources-vpc-config endpointPrivateAccess option. The default value in k8s_cluster.conf is true.
ENABLE_PUBLIC_ACCESS
Indicates whether to enable public access to the EKS cluster endpoint. This parameter accepts a "true" or "false" value and maps to the EKS --resources-vpc-config endpointPublicAccess option. The default value in k8s_cluster.conf is false.
CNI_VERSION
An optional property that specifies the version of the VPC CNI plugin to use for pod networking.
ENABLE_LOGGING_TYPES
A comma-separated list of the logging types to enable for the cluster. Valid values are api, audit, authenticator, controllerManager, and scheduler. For information about the types, see Amazon EKS Control Plane Logging in the EKS documentation. The default value in k8s_cluster.conf is api,audit for Kubernetes API logging and Audit logs, which provide a record of the users, administrators, or system components that have affected the cluster.
DISABLE_LOGGING_TYPES
A comma-separated list of the logging types to disable for the cluster. Valid values are api, audit, authenticator, controllerManager, and scheduler. The default value in k8s_cluster.conf is controllerManager,scheduler, which disables the Kubernetes Controller Manager daemon as well as the Kubernetes Scheduler.
WAIT_DURATION
The number of seconds to wait before timing out during cluster resource creation. For example, 1200 means the creation of a resource will time out if it is not finished in 20 minutes.
WAIT_INTERVAL
The number of seconds to wait before polling for resource state information. The default value in k8s_cluster.conf is 10 seconds.
STACK_CREATION_TIMEOUT
The number of minutes to wait for EKS cluster state changes, such as creation or update completion, before timing out. For example, 30m.
Example Cluster Configuration File
An example completed k8s_cluster.conf file is shown below.
# AWS Configuration parameters
REGION="us-east-1"
AvailabilityZones="us-east-1a us-east-1b"
TAGS="Description=EKS Cluster"
# Networking configuration
VPC_ID="vpc-0dd06b24c819ec3e5"
VPC_CIDR="10.107.0.0/16"
NAT_SUBNET_CIDRS="10.107.0.0/24 10.107.5.0/24"
PUBLIC_SUBNET_CIDRS="10.107.1.0/24 10.107.2.0/24"
PRIVATE_SUBNET_CIDRS="10.107.3.0/24 10.107.4.0/24"
VPC_NAT_MODE="HighlyAvailable"
WARM_IP_TARGET="8"
PUBLIC_ACCESS_CIDRS="1.2.3.4/32,1.1.1.1/32"
ALLOW_NETWORK_CIDRS="10.108.0.0/16 10.109.0.0/16"
# EKS control plane configuration
CLUSTER_NAME="csi-k8s-cluster"
CLUSTER_VERSION="1.17"
ENABLE_PRIVATE_ACCESS=True
ENABLE_PUBLIC_ACCESS=False
CNI_VERSION="1.7.5"
# Logging types: ["api","audit","authenticator","controllerManager","scheduler"]
ENABLE_LOGGING_TYPES="api,audit"
DISABLE_LOGGING_TYPES="controllerManager,scheduler"
# Common parameters
WAIT_DURATION=1200
WAIT_INTERVAL=10
STACK_CREATION_TIMEOUT="30m"
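Because the configuration file consists of plain shell variable assignments, one way to sanity-check a completed file before running the create script is to source it and confirm that key values are set. The sketch below is illustrative: the list of variables it checks is an assumption for this example, not the create script's actual validation logic.

```shell
#!/bin/bash
# check_conf: source a cluster configuration file and report any required
# variables that are missing. The set of variables checked here is an
# illustrative assumption; consult create_k8s.sh for its real validation.
check_conf() {
    local conf_file="$1" missing=0 var
    # shellcheck source=/dev/null
    source "$conf_file"
    for var in REGION AvailabilityZones VPC_CIDR CLUSTER_NAME CLUSTER_VERSION; do
        if [ -z "${!var}" ]; then
            echo "MISSING: $var"
            missing=1
        fi
    done
    if [ "$missing" -eq 0 ]; then
        echo "OK: all required variables are set"
    fi
}
```

For example, `check_conf conf.d/k8s_cluster.conf` prints a MISSING line for each unset variable, or an OK message when all of the checked values are present.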
(Optional) Define the IAM Role for K8s Service Accounts Requirements
For fine-grained permission management of the applications that run in the EKS cluster, you can associate an IAM role with a Kubernetes (K8s) Service Account. The Service Account can then be used to grant permissions to the pods in the cluster so that the container applications can use an AWS SDK or AWS CLI to make API requests to AWS services like S3 or Amazon RDS. For details, see IAM Roles for Service Accounts in the Amazon EKS documentation.
If you want to create a new IAM role with associated K8s Service Accounts during EKS cluster creation, you can define the Service Account requirements in the iam_serviceaccounts.yaml file in the conf.d directory. When you create the cluster, a prompt asks whether you want to update IAM properties for the cluster. Responding y (yes) creates the accounts based on the specifications in iam_serviceaccounts.yaml. The contents of the file are shown below. Descriptions of the parameters follow the contents.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: <eks-cluster-name>
  region: <cluster-region>
iam:
  withOIDC: true
  serviceAccounts:
  - metadata:
      name: <service-account-name>
      namespace: <namespace>
      labels: {<label-name>: "<value>"}
    attachPolicyARNs:
    - "<arn>"
    tags:
      <tag-name>: "<value>"
  - metadata:
      name: <service-account-name>
      namespace: <namespace>
      labels: {<label-name>: "<value>"}
    attachPolicyARNs:
    - "<arn>"
    tags:
      <tag-name>: "<value>"
    wellKnownPolicies:
      <policy>: <enable-policy>
    roleName: <role-name>
    roleOnly: <role-only>
apiVersion
The version of the schema for this object.
kind
The schema for this object.
name
The name of the EKS cluster (CLUSTER_NAME) to create the Service Accounts for. For example, csi-k8s-cluster.
region
The region that the EKS cluster is deployed in (REGION). For example, us-east-1.
withOIDC
Indicates whether to enable the IAM OpenID Connect Provider (OIDC) as well as IRSA for the Amazon CNI plugin. This value must be true. Amazon requires OIDC to use IAM roles for Service Accounts.
serviceAccounts
There are multiple - metadata sequences under serviceAccounts:
- metadata:
    name: <service-account-name>
    namespace: <namespace>
    labels: {<label-name>: "<value>"}
  attachPolicyARNs:
  - "<arn>"
  tags:
    <tag-name>: "<value>"
Each sequence supplies the metadata for one Service Account. You can include any number of metadata sequences to create multiple Service Accounts.
name
The name to use for the Service Account.
namespace
The namespace to create the Service Account in. If the namespace you specify does not exist, a new namespace is created. If namespace is not specified, default is used.
labels
An optional list of labels to add to the Service Account.
attachPolicyARNs
A list of the Amazon Resource Names (ARN) for the IAM policies to attach to the Service Account.
tags
An optional list of tags to add to the Service Account.
wellKnownPolicies
A list of any common AWS IAM policies that you want to attach to the Service Accounts, such as imageBuilder, autoScaler, awsLoadBalancerController, or certManager. For a complete list of the supported well-known policies, see the eksctl Config File Schema.
roleName
The name for the new Service Account IAM Role.
roleOnly
Indicates whether to annotate the Service Accounts with the ARN of the new IAM Role (eks.amazonaws.com/role-arn). Cambridge Semantics recommends that you set this value to true.
Example IAM Role for Service Accounts Configuration File
An example completed iam_serviceaccounts.yaml file is shown below. This example creates a role called S3ReadRole with one Service Account that gives AnzoGraph containers read-only access to Amazon S3.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: csi-k8s-cluster
  region: us-east-1
iam:
  withOIDC: true
  serviceAccounts:
  - metadata:
      name: s3-reader
      namespace: anzograph
      labels: {app: "database"}
    attachPolicyARNs:
    - "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
    tags:
      Team: "AnzoGraph Deployment"
    wellKnownPolicies:
      autoScaler: true
    roleName: S3ReadRole
    roleOnly: true
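After the Service Accounts are created, you can verify them from the workstation. The commands below are an illustrative sketch using the names from the example above (csi-k8s-cluster, s3-reader, anzograph); they require live access to the AWS account and cluster, so no output is shown here.

```shell
# List the IAM service accounts that eksctl manages for the cluster
# (cluster name and region are taken from the example above).
eksctl get iamserviceaccount --cluster csi-k8s-cluster --region us-east-1

# Inspect the annotation that links the K8s Service Account to the IAM role
kubectl get serviceaccount s3-reader -n anzograph \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
```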
Create the EKS Cluster
After defining the cluster requirements, run the create_k8s.sh script in the eksctl directory to create the cluster.
The create_k8s.sh script references the files in the eksctl/reference directory. If you customized the directory structure on the workstation, ensure that the reference directory is available at the same level as create_k8s.sh before creating the cluster.
Run the script with the following command. The arguments are described below.
./create_k8s.sh -c <config_file_name> [ -d <config_file_directory> ] [ -f | --force ] [ -h | --help ]
-c <config_file_name>
This is a required argument that specifies the name of the configuration file that supplies the cluster requirements. For example, -c k8s_cluster.conf.
-d <config_file_directory>
This is an optional argument that specifies the path and directory name for the configuration file specified with the -c argument. If you are using the original eksctl directory file structure and the configuration file is in the conf.d directory, you do not need to specify the -d argument. If you created a separate directory structure for different Anzo environments, include the -d option. For example, -d /eksctl/env1/conf.
-f | --force
This is an optional argument that controls whether the script prompts for confirmation before proceeding with each stage involved in creating the cluster. If -f (--force) is specified, the script assumes the answer is "yes" to all prompts and does not display them.
-h | --help
This argument is an optional flag that you can specify to display the help from the create_k8s.sh script.
For example, the following command runs the create_k8s.sh script, using k8s_cluster.conf as input. Since k8s_cluster.conf is in the conf.d directory, the -d argument is omitted:
./create_k8s.sh -c k8s_cluster.conf
The script validates that the required software packages, such as the aws-cli, eksctl, and kubectl, are installed and that the versions are compatible with the script. It also displays an overview of the deployment details based on the values in the configuration file.
The script then prompts you to proceed with deploying each component of the EKS cluster infrastructure. Type y (yes) and press Enter to proceed with each step in creating the specified network, cluster, Internet gateway, NAT gateway, route table, and security group resources. All resources are created according to the specifications in the configuration file. Once the cluster resources are deployed, the script asks whether you would like to update IAM properties for the cluster. Continue to Configuring Cluster IAM Properties below for background information and details on configuring IAM properties.
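Once the cluster resources are deployed, you can confirm from the workstation that the new control plane is reachable. The commands below are illustrative and assume the cluster name and region from the example configuration file; they require live AWS credentials.

```shell
# Merge the new cluster's credentials into your kubeconfig
# (cluster name and region are from the example configuration).
aws eks update-kubeconfig --name csi-k8s-cluster --region us-east-1

# Confirm the control plane is reachable
kubectl cluster-info

# Node groups are added in a later step, so this list may be empty for now
kubectl get nodes
```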
Configuring Cluster IAM Properties
At the final stage of EKS cluster creation, the last few prompts are related to IAM properties.
First, you are asked about IAM roles for K8s Service Accounts. If you want to create Service Accounts, as described in (Optional) Define the IAM Role for K8s Service Accounts Requirements, answer y (yes) to the prompt Do you want to update IAM properties for cluster? Service Accounts will be created according to the specifications in iam_serviceaccounts.yaml. If you do not want to create Service Accounts, answer n (no).
The last prompt is related to IAM identity mapping for the EKS cluster. Only the IAM entity that created the cluster has system:masters permission for the cluster and its K8s services. To grant additional AWS users or roles the ability to interact with the cluster, IAM identity mapping must be performed by adding the aws-auth ConfigMap to the EKS cluster configuration (see Managing Users or IAM Roles for your Cluster in the Amazon EKS documentation).
To aid you in updating the ConfigMap so that additional users can access the cluster, the create_k8s.sh script includes prompts that ask for the required ConfigMap information. If you want to update the ConfigMap, answer y (yes) to the Do you want to add IAM users to control access to cluster prompt. The script prompts for the following values, which are used to update mapRoles and/or mapUsers in the aws-auth ConfigMap:
- Account ID: The AWS account ID where the EKS cluster is deployed.
- User Name: The username within Kubernetes to map to the IAM role. For example, admin.
- RBAC Group: The Kubernetes group to map the IAM role to. For example, system:masters.
- Service Name: This value must be emr-containers.
- Namespace: The namespace to create RBAC resources in.
- User or Role ARN: The Amazon Resource Name for the IAM role or user to create. For example, arn:aws:iam::105333188789:role/admin.
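For reference, the identity mapping that the script performs can also be done manually with eksctl. The sketch below uses the example values from the list above and edits the aws-auth ConfigMap for you; it illustrates the underlying operation and is not a required step if you answered the script's prompts.

```shell
# Map an existing IAM role to a Kubernetes user and RBAC group
# (values are taken from the prompt examples above).
eksctl create iamidentitymapping \
  --cluster csi-k8s-cluster \
  --region us-east-1 \
  --arn arn:aws:iam::105333188789:role/admin \
  --username admin \
  --group system:masters

# Review the resulting aws-auth mappings
eksctl get iamidentitymapping --cluster csi-k8s-cluster --region us-east-1
```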
When cluster creation is complete, proceed to Creating the Required Node Groups to add the required node groups to the cluster.