Setting Up a Workstation

This topic provides the requirements and instructions for configuring a workstation that you will use to create and manage the AKS infrastructure. The workstation must be able to connect to the Azure API, and it needs the required Azure and Kubernetes (K8s) software packages as well as the deployment scripts and configuration files supplied by Cambridge Semantics. You will use this workstation to provision and manage the K8s cluster and node pools.

You can use the Anzo server as the workstation if the network routing and security policies permit the Anzo server to access the Azure and K8s APIs. When deciding whether to use the Anzo server as the K8s workstation, consider whether Anzo might be migrated to a different server or Virtual Network in the future.
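
As an optional sanity check, you can confirm that the workstation has outbound HTTPS access to the Azure Resource Manager endpoint before you install any tooling. This is only a reachability test; it does not validate credentials or permissions:

    # Optional reachability check for the Azure Resource Manager endpoint.
    # Any HTTP status line (including a 4xx response) confirms connectivity.
    curl -sI https://management.azure.com/ | head -n 1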

Review the Requirements and Install the Software

Component          Requirement
Operating System   The workstation's operating system must be RHEL/CentOS 7.8 or higher.
Networking         The workstation should be in the same Virtual Network (VNet) as the AKS cluster. If it is not in the same VNet, make sure that it is on a network that is routable from the cluster's VNet.
Software           • Python 3 is required.
                   • Kubectl: Cambridge Semantics recommends that you use the same kubectl version as the AKS cluster version. For instructions, see Install Kubectl below.
                   • Azure CLI: Version 2.5.1 or later is required. For installation instructions, see Install Azure CLI below.
CSI AZ Package     Cambridge Semantics provides az scripts and configuration files for provisioning the AKS cluster and node pools. Download the files to the workstation. See Download the Cluster Creation Scripts and Configuration Files below for more information about the az package.
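
After you install the software listed above, you can confirm the versions from a terminal on the workstation. These are standard version checks; the exact output depends on the versions you installed:

    python3 --version         # Python 3 is required
    kubectl version --client  # should match the AKS cluster version
    az version                # Azure CLI 2.5.1 or later is required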

Install Kubectl

Follow the instructions below to install kubectl on your workstation. Cambridge Semantics recommends that you install the same version of kubectl as the K8s cluster API. For more information, see Install and Set Up kubectl on Linux in the Kubernetes documentation.

  1. Run the following cURL command to download the kubectl binary:
    curl -LO https://dl.k8s.io/release/<version>/bin/linux/amd64/kubectl

    Where <version> is the version of kubectl to install. For example, the following command downloads version 1.19.12:

    curl -LO https://dl.k8s.io/release/v1.19.12/bin/linux/amd64/kubectl
  2. Run the following command to make the binary executable:
    chmod +x ./kubectl
  3. Run the following command to move the binary to a directory in your PATH:
    sudo mv ./kubectl /usr/local/bin/kubectl
  4. To confirm that the binary is installed and that you can run kubectl commands, run the following command to display the client version:
    kubectl version --client

    The command returns client version information similar to the following:

    Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.12", GitCommit:"f3abc15296f3a3f54e4ee42e830c61047b13895f", 
    GitTreeState:"clean", BuildDate:"2021-06-16T13:21:12Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
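
If the AKS cluster already exists, you can look up its Kubernetes version with the Azure CLI (installed in the next section) and then download the matching kubectl release. The resource group and cluster names below are placeholders:

    # Display the Kubernetes version of an existing AKS cluster.
    # Replace <resource-group> and <cluster-name> with your own values.
    az aks show --resource-group <resource-group> --name <cluster-name> \
      --query kubernetesVersion --output tsv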

Install Azure CLI

Follow the instructions below to install the Azure CLI on your workstation. These instructions follow the steps in Install the Azure CLI on Linux in the Microsoft Azure CLI documentation.

  1. Run the following command to import the Microsoft repository key:
    sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
  2. Run the following command to create the local azure-cli repository information:
    echo -e "[azure-cli]
    name=Azure CLI
    baseurl=https://packages.microsoft.com/yumrepos/azure-cli
    enabled=1
    gpgcheck=1
    gpgkey=https://packages.microsoft.com/keys/microsoft.asc" | sudo tee /etc/yum.repos.d/azure-cli.repo
    
  3. Run the following command to install the CLI:
    sudo yum install azure-cli
  4. To confirm that the CLI was installed, run the following command to display the CLI version:
    az version
  5. Finally, run the following command to log in to Azure. Follow the prompts to complete the login:
    az login --use-device-code
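
If your account has access to more than one subscription, you may also want to confirm which subscription is active and, if necessary, select the subscription in which the AKS cluster will be created. The subscription ID below is a placeholder:

    # List the subscriptions that are available to the logged-in account
    az account list --output table

    # Set the subscription to use for the AKS deployment.
    # Replace <subscription-id> with your subscription ID or name.
    az account set --subscription <subscription-id>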

Download the Cluster Creation Scripts and Configuration Files

The Cambridge Semantics GitHub repository, k8s-genesis (https://github.com/cambridgesemantics/k8s-genesis.git), includes all of the files that are needed to manage the configuration, creation, and deletion of the AKS cluster and node pools.

You can clone the repository to any location on the workstation or download the k8s-genesis package as a ZIP file, copy the file to the workstation, and extract the contents. The k8s-genesis directory includes three subdirectories (one for each supported Cloud Service Provider), the license information, and a readme file:

k8s-genesis
├── aws
├── azure
├── gcp
├── LICENSE
└── README.md
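
For example, if Git is available on the workstation, you can clone the repository and change to the directory that contains the AKS files. Downloading and extracting the ZIP file produces the same directory structure:

    # Clone the Cambridge Semantics k8s-genesis repository
    git clone https://github.com/cambridgesemantics/k8s-genesis.git

    # Change to the directory that contains the AKS scripts and configuration files
    cd k8s-genesis/azure/k8s/az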

Navigate to the azure/k8s/az directory. The az directory contains all of the AKS cluster and node pool configuration files; you can remove all of the other directories from the workstation. The az files and subdirectories are shown below:

az
├── common.sh
├── conf.d
│   ├── k8s_cluster.conf
│   ├── nodepool_anzograph.conf
│   ├── nodepool_common.conf
│   ├── nodepool.conf
│   ├── nodepool_dynamic.conf
│   └── nodepool_operator.conf
├── create_k8s.sh
├── create_nodepools.sh
├── delete_k8s.sh
├── delete_nodepools.sh
├── exec_samples
│   ├── rbac_aad_group.yaml
│   └── rbac_aad_user.yaml
├── permissions
│   ├── aks_admin_role.json
│   └── cluster_developer_role.json
├── README.md
├── reference
│   ├── nodepool_anzograph_tuner.yaml
│   └── nodepool_dynamic_tuner.yaml
└── sample_use_cases
    ├── 10_useExistingResources
    │   └── k8s_cluster.conf
    ├── 11_useProximityPlacementGroups
    │   └── k8s_cluster.conf
    ├── 1_azureManagedIdentity_private_cluster
    │   └── k8s_cluster.conf
    ├── 2_createServicePrincipal_public_cluster
    │   └── k8s_cluster.conf
    ├── 3_useServicePrincipal
    │   └── k8s_cluster.conf
    ├── 4_userManagedAAD
    │   └── k8s_cluster.conf
    ├── 5_azureManagedAAD
    │   └── k8s_cluster.conf
    ├── 6_attachACR
    │   └── k8s_cluster.conf
    ├── 7_clusterAutoscalerSupport
    │   └── k8s_cluster.conf
    ├── 8_MonitoringEnabled
    │   └── k8s_cluster.conf
    └── 9_RBACSupport
        └── k8s_cluster.conf

The following list gives an overview of the files. Subsequent topics describe the files in more detail.

  • The common.sh script is used by the create and delete cluster and node pool scripts.
  • The conf.d directory contains the configuration files that are used to supply the specifications to follow when creating the K8s cluster and node pools:
    • k8s_cluster.conf: Supplies the specifications for the AKS cluster.
    • nodepool_anzograph.conf: Supplies the specifications for the AnzoGraph node pool.
    • nodepool_common.conf: Supplies the specifications for a Common node pool. The Common node pool is not required for AKS deployments, and this configuration file is typically not used.
    • nodepool.conf: This file is supplied as a reference; it contains the superset of node pool parameters.
    • nodepool_dynamic.conf: Supplies the specifications for the Dynamic node pool.
    • nodepool_operator.conf: Supplies the specifications for the Operator node pool.
  • The create_k8s.sh script is used to deploy the AKS cluster, and the k8s_cluster.conf file in the conf.d directory is the configuration file that is input to the create_k8s.sh script.
  • The create_nodepools.sh script is used to deploy the required node pools in the AKS cluster. The nodepool_*.conf files in the conf.d directory are the configuration files that are input to the create_nodepools.sh script.
  • The delete_k8s.sh script is used to delete the AKS cluster.
  • The delete_nodepools.sh script is used to remove node pools from the AKS cluster.
  • The exec_samples and permissions directories contain role definitions and scripts for creating the custom roles that are needed to grant access to the Azure users and groups who will create or use the AKS cluster.
  • The reference directory contains crucial files that are referenced by the cluster and node pool creation scripts. The files in the directory should not be edited, and the reference directory must exist on the workstation at the same level as the create*.sh and delete*.sh scripts.
  • The sample_use_cases directory contains sample AKS cluster configuration files that you can refer to or use as a template for configuring your AKS cluster, depending on your use case. There are several files in the directory because there is an example for each type of AKS-supported identity and authentication management option. You can combine settings from different sample files to configure your cluster, but you can choose only one authentication mode. For example, you cannot enable Service Principals with Azure Active Directory. A brief example of copying a sample file into the conf.d directory follows this list.
    • The k8s_cluster.conf file in the 1_azureManagedIdentity_private_cluster directory is a sample file for a use case where you want to deploy the AKS cluster into a private Virtual Network and let Azure handle identity creation and management. Using an Azure managed identity is recommended.
    • The k8s_cluster.conf file in the 2_createServicePrincipal_public_cluster directory is a sample file for a use case where you want to create a new Service Principal to deploy a public AKS cluster. Access to the cluster is limited to certain IP ranges. Managing Service Principals adds more complexity than using an Azure managed identity.
    • The k8s_cluster.conf file in the 3_useServicePrincipal directory is a sample file for a use case that is similar to the 2_createServicePrincipal_public_cluster use case above but uses an existing Service Principal.
    • The k8s_cluster.conf file in the 4_userManagedAAD directory is a sample file for a use case where you want to deploy an AKS cluster that connects to your user-managed Azure Active Directory (AAD) server for identity management. You supply the AAD client and server applications and the AAD tenant.
    • The k8s_cluster.conf file in the 5_azureManagedAAD directory is a sample file for a use case where you want to deploy an AKS cluster that connects to an Azure-managed Azure Active Directory (AAD) server for identity management. In this case, the AKS resource provider manages the client and server AAD applications.
    • The k8s_cluster.conf file in the 6_attachACR directory is a sample file for a use case where you want to deploy an AKS cluster that retrieves images from a private Azure Container Registry.
    • The k8s_cluster.conf file in the 7_clusterAutoscalerSupport directory is a sample file for a use case where you want to deploy an AKS cluster that employs the Cluster Autoscaler service. The autoscaler automatically adds nodes to the node pool when demand increases and then deprovisions the nodes when demand decreases.
    • The k8s_cluster.conf file in the 8_MonitoringEnabled directory is a sample file for a use case where you want to deploy an AKS cluster with cluster monitoring enabled.
    • The k8s_cluster.conf file in the 9_RBACSupport directory is a sample file for a use case where you want to deploy an AKS cluster with Azure Role-Based Access Control (RBAC). Enabling RBAC allows you to use Azure AD users, groups, or service principals as subjects in Kubernetes RBAC.
    • The k8s_cluster.conf file in the 10_useExistingResources directory is a sample file for a use case where you want to deploy the AKS cluster into existing resources, such as an existing Virtual Network with existing resource groups and subnetworks.
    • The k8s_cluster.conf file in the 11_useProximityPlacementGroups directory is a sample file for a use case where you want to use proximity placement groups for reduced latency. A proximity placement group is a logical grouping used to make sure Azure compute resources are physically located close to each other.
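
One way to start from a sample is to copy it over the default k8s_cluster.conf file in the conf.d directory and then edit the copy. The example below uses the recommended Azure managed identity sample; adjust the source path for your use case:

    # Keep a backup of the default cluster configuration file (optional)
    cp conf.d/k8s_cluster.conf conf.d/k8s_cluster.conf.orig

    # Replace the default configuration with the managed identity sample, then edit it
    cp sample_use_cases/1_azureManagedIdentity_private_cluster/k8s_cluster.conf conf.d/k8s_cluster.conf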

Once the workstation is configured, see Planning the Anzo and AKS Network Architecture to review information about the network architecture that the az scripts create. Then see Creating and Assigning IAM Roles for instructions on creating the IAM roles that are needed to grant permissions to create and use the AKS cluster.