Setting Up a Workstation

This topic provides the requirements and instructions for configuring a workstation to use for creating and managing the GKE infrastructure. The workstation is used to connect to the Google Cloud API and provision the K8s cluster and node pools. It therefore needs network access to the Google Cloud API, the required Google Cloud and Kubernetes (K8s) software packages, and the deployment scripts and configuration files supplied by Cambridge Semantics.

You can use the Anzo server as the workstation if the network routing and security policies permit the Anzo server to access the Google Cloud and K8s APIs. When deciding whether to use the Anzo server as the K8s workstation, consider whether Anzo may be migrated to a different server or VPC in the future.

Review the Requirements and Install the Software

The table below lists the requirements for the K8s workstation.

Component            Requirement
Operating System     RHEL/CentOS 7.8 or higher.
Networking           The workstation should be in the same VPC network as the GKE cluster. If it
                     is not in the same VPC, make sure that it is on a network that is routable
                     from the cluster's VPC.
Software             • Kubectl: Versions 1.17 – 1.19 are supported. Cambridge Semantics
                       recommends that you use the same kubectl version as the GKE cluster
                       version. For instructions, see Install Kubectl below.
                     • Google Cloud SDK: Required on the workstation. For installation
                       instructions, see Install the Google Cloud SDK below.
CSI GCLOUD Package   Cambridge Semantics provides gcloud scripts and configuration files to use
                     for provisioning the GKE cluster and node pools. Download the files to the
                     workstation. See Download the Cluster Creation Scripts and Configuration
                     Files for more information about the gcloud package.
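
Before you install the software, you can optionally confirm that the workstation meets the operating system requirement and can reach the hosts used later in this topic. The commands below are a minimal sketch that assumes the workstation has outbound HTTPS access; adjust them for your environment.

  # Check the operating system release (expect RHEL/CentOS 7.8 or higher)
  cat /etc/redhat-release

  # Confirm that the Kubernetes release host and the Google Cloud package repository are reachable
  curl -sI https://dl.k8s.io | head -n 1
  curl -sI https://packages.cloud.google.com | head -n 1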

Install Kubectl

Follow the instructions below to install kubectl on your workstation. Cambridge Semantics recommends that you install the same version of kubectl as the K8s cluster API. For more information, see Install and Set Up kubectl on Linux in the Kubernetes documentation.

  1. Run the following cURL command to download the kubectl binary:
    curl -LO https://dl.k8s.io/release/<version>/bin/linux/amd64/kubectl

    Where <version> is the version of kubectl to install, including the leading "v". For example, the following command downloads version 1.19.12:

    curl -LO https://dl.k8s.io/release/v1.19.12/bin/linux/amd64/kubectl
  2. Run the following command to make the binary executable:
    chmod +x ./kubectl
  3. Run the following command to move the binary to your PATH:
    sudo mv ./kubectl /usr/local/bin/kubectl
  4. To confirm that the binary is installed and that you can run kubectl commands, run the following command to display the client version:
    kubectl version --client

    The command returns client version information similar to the following:

    Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.12", GitCommit:"f3abc15296f3a3f54e4ee42e830c61047b13895f", 
    GitTreeState:"clean", BuildDate:"2021-06-16T13:21:12Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
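
Optionally, you can verify the integrity of the installed binary against the SHA-256 checksum that is published for each release at the same dl.k8s.io path. The commands below are a minimal sketch that assumes version v1.19.12 was installed to /usr/local/bin/kubectl as shown above; substitute the version you downloaded.

  # Download the published checksum for the release (v1.19.12 in this example)
  curl -LO https://dl.k8s.io/release/v1.19.12/bin/linux/amd64/kubectl.sha256

  # Verify the installed binary against the checksum; the command should report "OK"
  echo "$(cat kubectl.sha256)  /usr/local/bin/kubectl" | sha256sum --check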

Install the Google Cloud SDK

Follow the instructions below to install the Google Cloud SDK on your workstation.

  1. Run the following command to configure access to the Google Cloud repository:
    sudo tee -a /etc/yum.repos.d/google-cloud-sdk.repo << EOM
    [google-cloud-sdk]
    name=Google Cloud SDK
    baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
           https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOM
  2. Run the following command to install google-cloud-sdk:
    sudo yum install google-cloud-sdk

    The following packages are installed:

    google-cloud-sdk-app-engine-grpc
    google-cloud-sdk-pubsub-emulator 
    google-cloud-sdk-app-engine-go 
    google-cloud-sdk-cloud-build-local 
    google-cloud-sdk-datastore-emulator 
    google-cloud-sdk-app-engine-python 
    google-cloud-sdk-cbt 
    google-cloud-sdk-bigtable-emulator 
    google-cloud-sdk-datalab 
    google-cloud-sdk-app-engine-java
  3. Next, configure the default project and compute location (zone or region) settings for the Cloud SDK:
    1. Run the following command to set the default project for the GKE cluster:
      gcloud config set project <project_ID>

      Where <project_ID> is the Project ID for the project in which the GKE cluster will be provisioned.

    2. If you work with zonal clusters, run the following command to set the default compute zone for the GKE cluster:
      gcloud config set compute/zone <compute_zone>

      Where <compute_zone> is the default compute zone for the GKE cluster. For example:

      gcloud config set compute/zone us-central1-a
    3. If you work with regional clusters, run the following command to set the default region for the GKE cluster:
      gcloud config set compute/region <compute_region>

      Where <compute_region> is the default region for the GKE cluster. For example:

      gcloud config set compute/region us-east1
    4. To make sure that you are using the latest version of the Cloud SDK, run the following command to check for updates:
      gcloud components update
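
After completing these steps, you can confirm the active settings and make sure that the SDK is authorized to call the Google Cloud API. The commands below are a minimal sketch; depending on your environment, you may authenticate with your own user account or with a service account that has permission to create GKE resources in the target project.

  # Review the active account, project, and compute zone/region settings
  gcloud config list

  # If the workstation is not yet authorized, log in with an appropriate account
  gcloud auth login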

Download the Cluster Creation Scripts and Configuration Files

The Cambridge Semantics GitHub repository, k8s-genesis (https://github.com/cambridgesemantics/k8s-genesis.git), includes all of the files that are needed to manage the configuration, creation, and deletion of the GKE cluster and node pools.

You can clone the repository to any location on the workstation or download the k8s-genesis package as a ZIP file, copy the file to the workstation, and extract the contents. The k8s-genesis directory includes three subdirectories (one for each supported Cloud Service Provider), the license information, and a readme file:

k8s-genesis
├── aws
├── azure
├── gcp
├── LICENSE
└── README.md
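
For example, you can clone the repository directly on the workstation (this sketch assumes that git is installed and that the workstation can reach github.com):

  git clone https://github.com/cambridgesemantics/k8s-genesis.git
  ls k8s-genesis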

Navigate to the gcp/k8s/gcloud directory. The gcloud directory contains all of the GKE cluster and node pool configuration files; you can remove all other directories from the workstation. The gcloud files and subdirectories are shown below:

gcloud
├── common.sh
├── conf.d
│   ├── k8s_cluster.conf
│   ├── nodepool_anzograph.conf
│   ├── nodepool_anzograph_tuner.yaml
│   ├── nodepool_common.conf
│   ├── nodepool.conf
│   ├── nodepool_dynamic.conf
│   ├── nodepool_dynamic_tuner.yaml
│   └── nodepool_operator.conf
├── create_k8s.sh
├── create_nodepools.sh
├── delete_k8s.sh
├── delete_nodepools.sh
├── gcloud_cli_common.sh
├── README.md
└── sample_use_cases
    ├── 1_usePrivateEndpoint_private_cluster
    │   └── k8s_cluster.conf
    ├── 2_public_cluster
    │   └── k8s_cluster.conf
    ├── 3_useAuthorizedNetworks
    │   └── k8s_cluster.conf
    └── 4_providePublicEndpointAccess
        └── k8s_cluster.conf

The following list gives an overview of the files. Subsequent topics describe the files in more detail.

  • The common.sh and gcloud_cli_common.sh scripts are used by the create*.sh and delete*.sh scripts when the GKE cluster and node pools are created or deleted.
  • The conf.d directory contains the configuration files that supply the specifications to follow when creating the K8s cluster and node pools.
    • k8s_cluster.conf: Supplies the specifications for the GKE cluster.
    • nodepool_anzograph.conf: Supplies the specifications for the AnzoGraph node pool.
    • nodepool_anzograph_tuner.yaml: Supplies the kernel-level tuning and security policies to apply to AnzoGraph runtime environments.
    • nodepool_common.conf: Supplies the specifications for a Common node pool. The Common node pool is not required for GKE deployments, and this configuration file is typically not used.
    • nodepool.conf: This file is supplied as a reference. It contains the superset of node pool parameters.
    • nodepool_dynamic.conf: Supplies the specifications for the Dynamic node pool.
    • nodepool_dynamic_tuner.yaml: Supplies the kernel-level tuning and security policies to apply to Dynamic runtime environments.
    • nodepool_operator.conf: Supplies the specifications for the Operator node pool.
  • The create_k8s.sh script is used to deploy the GKE cluster.
  • The create_nodepools.sh script is used to deploy node pools in the GKE cluster.
  • The delete_k8s.sh script is used to delete the GKE cluster.
  • The delete_nodepools.sh script is used to remove node pools from the GKE cluster.
  • The sample_use_cases directory contains sample GKE cluster configuration files that you can refer to or use as a template for configuring your GKE cluster, depending on your use case (see the example that follows this list):
    • The k8s_cluster.conf file in the 1_usePrivateEndpoint_private_cluster directory is a sample file for a use case where you want to deploy the GKE cluster in an existing network that does not have public internet access.
    • The k8s_cluster.conf file in the 2_public_cluster directory is a sample file for a use case where you want to deploy the GKE cluster into a new network with public internet access.
    • The k8s_cluster.conf file in the 3_useAuthorizedNetworks directory is a sample file for a use case where you want to deploy the GKE cluster into a private network with master authorized networks.
    • The k8s_cluster.conf file in the 4_providePublicEndpointAccess directory is a sample file for a use case where you want to deploy the GKE cluster into a private network that has public endpoint access enabled.
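
For example, to start from one of the samples, you might copy it over the default cluster configuration file and then edit it. The commands below are a minimal sketch that uses the 2_public_cluster sample; the parameters to review in k8s_cluster.conf are described in the subsequent topics.

  cd k8s-genesis/gcp/k8s/gcloud

  # Use the public cluster sample as the starting point for the cluster configuration
  cp sample_use_cases/2_public_cluster/k8s_cluster.conf conf.d/k8s_cluster.conf

  # Review and edit the configuration before running create_k8s.sh
  vi conf.d/k8s_cluster.conf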

Once the workstation is configured, see Planning the Anzo and GKE Network Architecture for information about the network architecture that the gcloud scripts create, and see Creating and Assigning IAM Roles for instructions on creating the IAM roles that are needed to grant permissions for creating and using the GKE cluster.

Related Topics