Setting Up a Workstation

This topic provides the requirements and instructions for configuring the workstation that you will use to create and manage the GKE infrastructure. The workstation must be able to connect to the Google Cloud API, and it needs the required Google Cloud and Kubernetes (K8s) software packages as well as the deployment scripts and configuration files supplied by Cambridge Semantics. You will use this workstation to provision the K8s cluster and node pools.

You can use the Anzo server as the workstation if the network routing and security policies permit the Anzo server to access the Google Cloud and K8s APIs. When deciding whether to use the Anzo server as the K8s workstation, consider whether Anzo may be migrated to a different server or VPC in the future; if it is, the new host would need to be configured as the workstation.

Workstation Requirements and Software Installation

The requirements for the K8s workstation are listed below.

  • Operating System: RHEL/CentOS 7.8 or higher.
  • Networking: The workstation should be in the same VPC network as the GKE cluster. If it is not in the same VPC, make sure that it is on a network that is routable from the cluster's VPC. A quick connectivity check is shown after this list.
  • Software:
    • kubectl: Versions 1.17 – 1.19 are supported. Cambridge Semantics recommends that you use the same kubectl version as the GKE cluster version. For instructions, see Install Kubectl below.
    • Google Cloud SDK: Required. For installation instructions, see Install the Google Cloud SDK below.
  • CSI gcloud Package: Cambridge Semantics provides gcloud scripts and configuration files to use for provisioning the GKE cluster and node pools. Download the files to the workstation. See Cluster Creation Scripts and Configuration Files below for more information about the gcloud package.
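
As a quick check of the Networking requirement (a minimal sketch; any HTTP response indicates that the API endpoint is reachable from the workstation), you can confirm that the workstation can reach the Google Cloud APIs:

    curl -sI https://container.googleapis.com/ | head -n 1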

Install Kubectl

Follow the instructions below to install kubectl on your workstation. Cambridge Semantics recommends that you install the same version of kubectl as the K8s cluster API. For more information, see Install and Set Up kubectl on Linux in the Kubernetes documentation.
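
If you have not created the cluster yet and want to see which GKE cluster versions are available, so that you can match the kubectl version to the cluster version you plan to deploy, the Cloud SDK (installed in the next section) can list them. The zone below is a placeholder:

    gcloud container get-server-config --zone us-central1-a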

  1. Run the following cURL command to download the kubectl binary:
    curl -LO https://dl.k8s.io/release/<version>/bin/linux/amd64/kubectl

    Where <version> is the version of kubectl to install. For example, the following command downloads version 1.17.17:

    curl -LO https://dl.k8s.io/release/v1.17.17/bin/linux/amd64/kubectl
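
    Optionally, you can validate the downloaded binary against its published SHA-256 checksum, following the pattern in the Kubernetes documentation (match the version to the binary you downloaded):

    curl -LO https://dl.k8s.io/release/v1.17.17/bin/linux/amd64/kubectl.sha256
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check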
  2. Run the following command to make the binary executable:
    chmod +x ./kubectl
  3. Run the following command to move the binary to your PATH:
    sudo mv ./kubectl /usr/local/bin/kubectl
  4. To confirm that the binary is installed and that you can run kubectl commands, run the following command to display the client version:
    kubectl version --client

    The command returns the following information:

    Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.17", GitCommit:"f3abc15296f3a3f54e4ee42e830c61047b13895f", 
    GitTreeState:"clean", BuildDate:"2021-01-13T13:21:12Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

Install the Google Cloud SDK

Follow the instructions below to install the Google Cloud SDK on your workstation.

  1. Run the following command to configure access to the Google Cloud repository:
    sudo tee -a /etc/yum.repos.d/google-cloud-sdk.repo << EOM
    [google-cloud-sdk]
    name=Google Cloud SDK
    baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
           https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOM
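
    To confirm that the repository definition was registered, you can list it by ID (a quick check; the output format varies by yum version):

    yum repolist google-cloud-sdk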
  2. Run the following command to install google-cloud-sdk:
    sudo yum install google-cloud-sdk

    The following packages are installed:

    google-cloud-sdk-app-engine-grpc
    google-cloud-sdk-pubsub-emulator 
    google-cloud-sdk-app-engine-go 
    google-cloud-sdk-cloud-build-local 
    google-cloud-sdk-datastore-emulator 
    google-cloud-sdk-app-engine-python 
    google-cloud-sdk-cbt 
    google-cloud-sdk-bigtable-emulator 
    google-cloud-sdk-datalab 
    google-cloud-sdk-app-engine-java
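
    To verify the installation, display the installed SDK version. Note that provisioning resources later also requires the workstation to be authorized for Google Cloud; gcloud auth login (for a user account) or gcloud auth activate-service-account (for a service account) are the usual approaches, depending on your environment:

    gcloud version
    gcloud auth login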
  3. Next, configure the default project and the default compute zone or region settings for the Cloud SDK:
    1. Run the following command to set the default project for the GKE cluster:
      gcloud config set project <project_ID>

      Where <project_ID> is the Project ID for the project in which the GKE cluster will be provisioned.
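
      For example, with a placeholder project ID:

      gcloud config set project anzo-gke-project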

    2. If you work with zonal clusters, run the following command to set the default compute zone for the GKE cluster:
      gcloud config set compute/zone <compute_zone>

      Where <compute_zone> is the default compute zone for the GKE cluster. For example:

      gcloud config set compute/zone us-central1-a
    3. If you work with regional clusters, run the following command to set the default region for the GKE cluster:
      gcloud config set compute/region <compute_region>

      Where <compute_region> is the default region for the GKE cluster. For example:

      gcloud config set compute/region us-east1
    4. To make sure that you are using the latest version of the Cloud SDK, run the following command to check for updates:
      gcloud components update
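
      To review the resulting defaults, you can list the active configuration:

      gcloud config list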

Cluster Creation Scripts and Configuration Files

Cambridge Semantics provides a package of files that enable users to manage the configuration, creation, and deletion of the GKE cluster and node pools. The top-level directory is called gcloud. Place the directory in any location on the workstation. The files and directory structure are shown below:

gcloud
├── conf.d
│   ├── k8s_cluster.conf
│   ├── nodepool.conf
│   ├── nodepool_anzograph.conf
│   ├── nodepool_anzograph.tuner.conf
│   ├── nodepool_common.conf
│   ├── nodepool_dynamic.conf
│   ├── nodepool_dynamic.tuner.conf
│   └── nodepool_operator.conf
├── common.sh
├── create_k8s.sh
├── create_nodepools.sh
├── delete_k8s.sh
└── delete_nodepools.sh

The list below gives an overview of the files that are included in the gcloud package. Subsequent topics describe the files in more detail.

  • The conf.d directory contains the configuration files that supply the specifications to follow when creating the K8s cluster and node pools.
    • k8s_cluster.conf: Supplies the specifications for the GKE cluster.
    • nodepool.conf: This file is supplied as a reference. It contains the superset of node pool parameters.
    • nodepool_anzograph.conf: Supplies the specifications for the AnzoGraph node pool.
    • nodepool_anzograph.tuner.conf: Supplies the kernel-level tuning and security policies to apply to AnzoGraph runtime environments.
    • nodepool_common.conf: Supplies the specifications for a Common node pool. The Common node pool is not required for GKE deployments, and this configuration file is typically not used.
    • nodepool_dynamic.conf: Supplies the specifications for the Dynamic node pool.
    • nodepool_dynamic.tuner.conf: Supplies the kernel-level tuning and security policies to apply to Dynamic runtime environments.
    • nodepool_operator.conf: Supplies the specifications for the Operator node pool.
  • The common.sh script is used by the create*.sh and delete*.sh scripts.
  • The create_k8s.sh script is used to deploy the GKE cluster. For a rough illustration of the kind of gcloud commands the create and delete scripts run, see the sketch after this list.
  • The create_nodepools.sh script is used to deploy node pools in the GKE cluster.
  • The delete_k8s.sh script is used to delete the GKE cluster.
  • The delete_nodepools.sh script is used to remove node pools from the GKE cluster.
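
As the package name suggests, the create and delete scripts drive the gcloud CLI. The sketch below is an illustration only and not the scripts' actual invocations: the cluster name, node pool name, machine types, and node counts are placeholders, and the real specifications come from the conf.d files.

    gcloud container clusters create anzo-k8s-cluster --machine-type e2-standard-4 --num-nodes 1
    gcloud container node-pools create anzograph --cluster anzo-k8s-cluster \
        --machine-type n1-standard-16 --num-nodes 1 \
        --enable-autoscaling --min-nodes 0 --max-nodes 4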

Once the workstation is configured, see Planning the Anzo and GKE Network Architecture to review information about the network architecture that the gcloud scripts create. Then see Creating and Assigning IAM Roles for instructions on creating the IAM roles that are needed for assigning permissions to create and use the GKE cluster.
