Creating the Required Node Pools

This topic provides instructions for creating the three types of required node pools:

  • The Operator node pool for running the AnzoGraph, Anzo Agent with Anzo Unstructured (AU), and Elasticsearch operator pods.
  • The AnzoGraph node pool for running AnzoGraph application pods.
  • The Dynamic node pool for running Anzo Agent with AU and Elasticsearch application pods.

For more information about the node pools, see Node Pool Requirements.

Define the Node Pool Requirements

Before creating the node pools, define the infrastructure requirements for each type of pool. The nodepool_*.conf files in the az/conf.d directory are sample configuration files that you can copy and use as templates, or you can edit the files directly:

  • nodepool_operator.conf defines the requirements for the Operator node pool.
  • nodepool_anzograph.conf defines the requirements for the AnzoGraph node pool.
  • nodepool_dynamic.conf defines the requirements for the Dynamic node pool.

Each type of node pool configuration file contains the following parameters. Descriptions of the parameters and guidance on specifying the appropriate values for each type of node pool are provided below.

NODEPOOL_NAME="<name>"
KUBERNETES_VERSION="<kubernetes-version>"
DOMAIN="<domain>"
KIND="<kind>"
MACHINE_TYPE="<node-vm-size>"
LOCATION=${LOCATION:-"<location>"}
RESOURCE_GROUP=${RESOURCE_GROUP:-"<resource-group>"}
VNET_NAME=${VNET_NAME:-"<vnet-name>"}
SUBNET_NAME="<name>"
SUBNET_CIDR="<address-prefix>"
K8S_CLUSTER_NAME=${K8S_CLUSTER_NAME:-"<cluster-name>"}
NODE_TAINTS="<node-taints>"
MAX_PODS_PER_NODE=<max-pods>
MAX_NODES=<max-count>
MIN_NODES=<min-count>
NUM_NODES=<node-count>
DISK_SIZE="<node-osdisk-size>"
OS_TYPE="<os-type>"
PRIORITY="<priority>"
ENABLE_CLUSTER_AUTOSCALER=<enable-cluster-autoscaler>
LABELS="<nodepool-labels>"
MODE="<mode>"
NODE_OSDISK_TYPE="<node-osdisk-type>"
PPG="<name>"
PPG_TYPE=${PPG_TYPE:-"<type>"}
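
Conceptually, most of these parameters correspond to flags on the Azure CLI command that adds a node pool to an AKS cluster. The sketch below is illustrative only — the actual mapping is internal to create_nodepools.sh, and all of the values shown are hypothetical samples. It assembles the command as a string for review rather than executing it:

```shell
# Illustrative only: a rough mapping from the configuration parameters to
# 'az aks nodepool add' flags. The real mapping is internal to
# create_nodepools.sh; all values here are hypothetical samples.
NODEPOOL_NAME="csianzograph"   # AKS pool names allow lowercase alphanumerics only
K8S_CLUSTER_NAME="k8s-cluster"
RESOURCE_GROUP="aks-resource-group"
MACHINE_TYPE="Standard_D16s_v3"
NODE_TAINTS="cambridgesemantics.com/dedicated=anzograph:NoSchedule"
LABELS="cambridgesemantics.com/node-purpose=anzograph"

# Assemble the command as a string for review instead of executing it.
CMD="az aks nodepool add \
  --name ${NODEPOOL_NAME} \
  --cluster-name ${K8S_CLUSTER_NAME} \
  --resource-group ${RESOURCE_GROUP} \
  --node-vm-size ${MACHINE_TYPE} \
  --node-taints ${NODE_TAINTS} \
  --labels ${LABELS} \
  --mode User"

echo "${CMD}"
```

In practice you run create_nodepools.sh (described later in this topic) rather than issuing the command yourself; the sketch is only meant to show where each configuration value ends up.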

NODEPOOL_NAME

The name to give the node pool.

Node Pool Type   Sample NODEPOOL_NAME Value
Operator         csi-operator
AnzoGraph        csi-anzograph
Dynamic          csi-dynamic

KUBERNETES_VERSION

The version of Kubernetes to use for creating the node pool. This value must match the AKS cluster version (K8S_CLUSTER_VERSION). For example, 1.24.

DOMAIN

The name of the domain that hosts the node pool. This is typically the name or acronym for the organization, such as csi.

KIND

This parameter classifies the node pool in terms of kernel tuning and the type of pods that the node pool will host.

Node Pool Type   Required KIND Value
Operator         operator
AnzoGraph        anzograph
Dynamic          dynamic

MACHINE_TYPE

The Virtual Machine Type to use for the nodes in the node pool.

Node Pool Type   Sample MACHINE_TYPE Value
Operator         Standard_DS2_v2
AnzoGraph        Standard_D16s_v3
Dynamic          Standard_D8s_v3

For more guidance on determining the instance types to use for nodes in the required node pools, see Compute Resource Planning.

LOCATION

The Region code for the location of the AKS cluster. For example, eastus.

RESOURCE_GROUP

The name of the Azure Resource Group to allocate the node pool's resources to. You can specify the name of an existing group, or you can specify a new name if you want the K8s scripts to create a new Resource Group for the node pool.

VNET_NAME

The name of the Virtual Network that the AKS cluster was deployed in.

SUBNET_NAME

The name of the subnetwork to create.

SUBNET_CIDR

The IP address prefix to use when creating the subnetwork.

K8S_CLUSTER_NAME

The name of the AKS cluster.

NODE_TAINTS

This parameter applies a taint to the nodes in the pool so that the scheduler avoids or prevents using them to host certain pods. When a pod is scheduled for deployment, the scheduler compares the pod's tolerations against this taint to determine whether the pod belongs in this pool. A pod that does not have a toleration matching this taint is rejected from the pool. The table below lists the recommended values. The NoSchedule effect means a matching toleration is required; pods without the appropriate toleration will not be allowed in the pool.

Node Pool Type   Recommended NODE_TAINTS Value
Operator         cambridgesemantics.com/dedicated=operator:NoSchedule
AnzoGraph        cambridgesemantics.com/dedicated=anzograph:NoSchedule
Dynamic          cambridgesemantics.com/dedicated=dynamic:NoSchedule
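
To make the mechanics concrete, the fragment below shows the pod-spec toleration that matches the recommended AnzoGraph taint. Pods that Anzo deploys are expected to carry the appropriate tolerations already; this snippet (written to a scratch file purely for illustration) is not something you need to apply yourself:

```shell
# Illustration only: the pod-spec toleration that matches the recommended
# AnzoGraph taint. Because the taint effect is NoSchedule, pods without a
# compatible toleration are not scheduled onto the pool.
cat <<'EOF' > anzograph-toleration.yaml
tolerations:
  - key: "cambridgesemantics.com/dedicated"
    operator: "Equal"
    value: "anzograph"
    effect: "NoSchedule"
EOF
```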

MAX_PODS_PER_NODE

The maximum number of pods that can be hosted on a node in the node pool. In addition to Anzo application pods, this limit also needs to account for K8s service pods and helper pods. Cambridge Semantics recommends that you set this value to at least 16 for all node pool types.

MAX_NODES

The maximum number of nodes that can be deployed in the node pool.

Node Pool Type   Sample MAX_NODES Value
Operator         8
AnzoGraph        16
Dynamic          32

MIN_NODES

The minimum number of nodes to remain deployed in the node pool at all times. If the cluster autoscaler is enabled for the node pool, you can set this value to 1 (the lowest value allowed by AKS). The autoscaler will automatically provision additional nodes if multiple pods are scheduled for deployment.
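
If you need to change the autoscaler bounds on a pool that already exists, AKS supports this through az aks nodepool update. The sketch below uses hypothetical names and prints the command for review rather than executing it:

```shell
# Sketch only: adjust cluster autoscaler bounds on an existing node pool.
# All names here are hypothetical samples; review before running manually.
UPDATE_CMD="az aks nodepool update \
  --name csianzograph \
  --cluster-name k8s-cluster \
  --resource-group aks-resource-group \
  --update-cluster-autoscaler \
  --min-count 1 \
  --max-count 16"

echo "${UPDATE_CMD}"
```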

NUM_NODES

The number of nodes to deploy when this node pool is created. This value must be at least 1 because AKS requires at least one node in the pool to be deployed when the pool is created.

DISK_SIZE

The size in GB of the OS disk for each node in the node pool.

Node Pool Type   Sample DISK_SIZE Value
Operator         50
AnzoGraph        100
Dynamic          100

OS_TYPE

The operating system to use for the nodes in the node pool. Specify Linux for each type of node pool.

PRIORITY

Specifies the priority level of the VMs for the nodes in the node pool. Valid values are Regular (dedicated) or Spot (low-priority or preemptible).

ENABLE_CLUSTER_AUTOSCALER

Indicates whether to enable the cluster autoscaler for the node pool.

LABELS

A space-separated list (in key=value format) of labels that define the type of pods that can be placed on the nodes in this node pool. One label, cambridgesemantics.com/node-purpose, is required for each type of node pool. The node-purpose label indicates that the purpose of the nodes in the pool is to host operator, anzograph, or dynamic pods. The table below lists the required label for each node pool.

Node Pool Type   Required LABELS Value
Operator         cambridgesemantics.com/node-purpose=operator
AnzoGraph        cambridgesemantics.com/node-purpose=anzograph
Dynamic          cambridgesemantics.com/node-purpose=dynamic

For information about using labels in Kubernetes clusters, see Labels and Selectors in the Kubernetes documentation.
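
The node-purpose label is what a pod's nodeSelector (or node affinity) matches against. The fragment below (written to a scratch file purely for illustration) shows a minimal nodeSelector that steers a pod onto the Dynamic pool; pods deployed by Anzo are expected to carry the appropriate selectors already:

```shell
# Illustration only: a nodeSelector that targets nodes carrying the required
# node-purpose label for the Dynamic node pool.
cat <<'EOF' > dynamic-nodeselector.yaml
nodeSelector:
  cambridgesemantics.com/node-purpose: "dynamic"
EOF
```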

MODE

The mode for the node pool. The mode defines the node pool's primary function: System node pools host critical system pods, and User node pools host application pods. For the Operator, AnzoGraph, and Dynamic node pools, set the mode to User. For more information, see System and User Node Pools in the Azure AKS documentation.

NODE_OSDISK_TYPE

The type of OS disk to use for machines in the node pool. The options are Ephemeral or Managed.

PPG

This optional parameter specifies the name of the Proximity Placement Group (PPG) to use for the node pool. For information about using proximity placement groups, see Use Proximity Placement Groups in the Azure AKS documentation.

PPG_TYPE

If using a Proximity Placement Group (PPG), this parameter specifies the type of PPG to use. The only valid value is Standard.
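
If you plan to set PPG, the placement group itself is created with the Azure CLI. The sketch below uses hypothetical names and prints the command for review rather than executing it:

```shell
# Sketch only: create a Standard proximity placement group that the PPG
# parameter can then reference. Names here are hypothetical samples.
PPG_CMD="az ppg create \
  --name testppg \
  --resource-group aks-resource-group \
  --location eastus \
  --type Standard"

echo "${PPG_CMD}"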

Example Configuration Files

Example completed configuration files for each type of node pool are shown below.

Operator Node Pool

The example below shows a configured nodepool_operator.conf file.

NODEPOOL_NAME="csi-operator"
KUBERNETES_VERSION="1.24"
DOMAIN="csi"
KIND="operator"
MACHINE_TYPE="Standard_DS2_v2"
LOCATION=${LOCATION:-"eastus"}
RESOURCE_GROUP=${RESOURCE_GROUP:-"aks-resource-group"}
VNET_NAME=${VNET_NAME:-"anzo-vnet"}
SUBNET_NAME="k8s-subnet"
SUBNET_CIDR="20.20.2.0/19"
K8S_CLUSTER_NAME=${K8S_CLUSTER_NAME:-"k8s-cluster"}
NODE_TAINTS="cambridgesemantics.com/dedicated=operator:NoSchedule"
MAX_PODS_PER_NODE=16
MAX_NODES=8
MIN_NODES=1
NUM_NODES=1
DISK_SIZE="50"
OS_TYPE="Linux"
PRIORITY="Regular"
ENABLE_CLUSTER_AUTOSCALER=true
LABELS="cambridgesemantics.com/node-purpose=operator"
MODE="User"
NODE_OSDISK_TYPE="Managed"
#PPG="testppg"
#PPG_TYPE=${PPG_TYPE:-"standard"}

AnzoGraph Node Pool

The example below shows a configured nodepool_anzograph.conf file.

NODEPOOL_NAME="csi-anzograph"
KUBERNETES_VERSION="1.24"
DOMAIN="csi"
KIND="anzograph"
MACHINE_TYPE="Standard_D16s_v3"
LOCATION=${LOCATION:-"eastus"}
RESOURCE_GROUP=${RESOURCE_GROUP:-"aks-resource-group"}
VNET_NAME=${VNET_NAME:-"anzo-vnet"}
SUBNET_NAME="k8s-subnet"
SUBNET_CIDR="20.20.2.0/19"
K8S_CLUSTER_NAME=${K8S_CLUSTER_NAME:-"k8s-cluster"}
NODE_TAINTS="cambridgesemantics.com/dedicated=anzograph:NoSchedule"
MAX_PODS_PER_NODE=16
MAX_NODES=16
MIN_NODES=1
NUM_NODES=1
DISK_SIZE="100"
OS_TYPE="Linux"
PRIORITY="Regular"
ENABLE_CLUSTER_AUTOSCALER=true
LABELS="cambridgesemantics.com/node-purpose=anzograph"
MODE="User"
NODE_OSDISK_TYPE="Managed"
#PPG="testppg"
#PPG_TYPE=${PPG_TYPE:-"standard"}

Dynamic Node Pool

The example below shows a configured nodepool_dynamic.conf file.

NODEPOOL_NAME="csi-dynamic"
KUBERNETES_VERSION="1.24"
DOMAIN="csi"
KIND="dynamic"
MACHINE_TYPE="Standard_D8s_v3"
LOCATION=${LOCATION:-"eastus"}
RESOURCE_GROUP=${RESOURCE_GROUP:-"aks-resource-group"}
VNET_NAME=${VNET_NAME:-"anzo-vnet"}
SUBNET_NAME="k8s-subnet"
SUBNET_CIDR="20.20.2.0/19"
K8S_CLUSTER_NAME=${K8S_CLUSTER_NAME:-"k8s-cluster"}
NODE_TAINTS="cambridgesemantics.com/dedicated=dynamic:NoSchedule"
MAX_PODS_PER_NODE=16
MAX_NODES=32
MIN_NODES=1
NUM_NODES=1
DISK_SIZE="100"
OS_TYPE="Linux"
PRIORITY="Regular"
ENABLE_CLUSTER_AUTOSCALER=true
LABELS="cambridgesemantics.com/node-purpose=dynamic"
MODE="User"
NODE_OSDISK_TYPE="Managed"
#PPG="testppg"
#PPG_TYPE=${PPG_TYPE:-"standard"}

Create the Node Pools

After defining the requirements for the node pools, run the create_nodepools.sh script in the az directory to create each type of node pool. Run the script once for each type of pool.

The create_nodepools.sh script references the files in the az/reference directory. If you customized the directory structure on the workstation, ensure that the reference directory is available at the same level as create_nodepools.sh before creating the node pools.

Run the script with the following command. The arguments are described below.

./create_nodepools.sh -c <config_file_name> [ -d <config_file_directory> ] [ -f | --force ] [ -h | --help ]
Argument Description
-c <config_file_name> Required. The name of the configuration file (nodepool_operator.conf, nodepool_anzograph.conf, or nodepool_dynamic.conf) that supplies the node pool requirements. For example, -c nodepool_dynamic.conf.
-d <config_file_directory> Optional. The path and directory name for the configuration file specified with -c. If you are using the original az directory file structure and the configuration file is in the conf.d directory, you can omit -d. If you created a separate directory structure for different Anzo environments, include -d. For example, -d /az/env1/conf.
-f | --force Optional. Suppresses the confirmation prompt for each stage of node pool creation; the script assumes the answer is "yes" to all prompts.
-h | --help Optional. Displays the help for the create_nodepools.sh script.

For example, the following command runs the create_nodepools script, using nodepool_operator.conf as input to the script. Since nodepool_operator.conf is in the conf.d directory, the -d argument is excluded:

./create_nodepools.sh -c nodepool_operator.conf

The script validates that the required software packages, such as the Azure CLI and kubectl, are installed and that the versions are compatible with the script. It also displays an overview of the node pool deployment details based on the values in the specified configuration file.

The script then prompts you to proceed with deploying each component of the node pool. Type y and press Enter to proceed with the configuration.

Once the Operator, AnzoGraph, and Dynamic node pools are created, the next step is to create a Cloud Location in Anzo so that Anzo can connect to the AKS cluster and deploy applications. See Connecting to a Cloud Location.