terraform-provider-google/website/docs/r/container_node_pool.html.markdown
Darren Haken 2b1b668953 #1300 Supporting regional clusters for node pools (#1320)
This PR also switched us to using the beta API in all cases, which had a side effect worth noting; the note is included here for posterity.

=====
The problem is, we add a GPU, and as per the docs, GKE adds a taint to
the node pool saying "don't schedule here unless you tolerate GPUs",
which is pretty sensible.

Terraform doesn't know about that, because it didn't ask for the taint
to be added. So after apply, on refresh, it sees the state of the world
(1 taint) and the state of the config (0 taints) and wants to set the
world equal to the config. This introduces a diff, which makes the test
fail - tests fail if there's a diff after they run.

Taints are a beta feature, though. :) And since the config doesn't
contain any taints, terraform didn't see any beta features in that node
pool ... so it used to send the request to the v1 API. And since the v1
API didn't return anything about taints (since they're a beta feature),
terraform happily checked the state of the world (0 taints I know about)
vs the config (0 taints), and all was well.

This PR makes every node pool refresh request hit the beta API. So now
terraform finds out about the taints (which were always there) and the
test fails (which it always should have done).

The solution is probably to write a little bit of code which suppresses
the diff on any taint whose key is 'nvidia.com/gpu', but only if GPUs are
enabled. I think that's something that can be done.
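
Until then, one user-side way to keep the config and the refreshed state in agreement (distinct from the provider-side diff suppression suggested above) is to declare the GKE-added taint explicitly. This is only a sketch: it assumes the beta `taint` field under `node_config` is exposed by the provider, and the resource names are placeholders.

```hcl
resource "google_container_node_pool" "gpu" {
  name       = "gpu-pool"
  zone       = "us-central1-a"
  cluster    = "${google_container_cluster.primary.name}"
  node_count = 1

  node_config {
    guest_accelerator {
      type  = "nvidia-tesla-k80"
      count = 1
    }

    # GKE adds this taint to GPU node pools automatically; declaring it here
    # means refresh sees the same taint in config and state, so no diff appears.
    # (Assumes the beta `taint` field is available in the provider schema.)
    taint {
      key    = "nvidia.com/gpu"
      value  = "present"
      effect = "NO_SCHEDULE"
    }
  }
}
```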

---
layout: "google"
page_title: "Google: google_container_node_pool"
sidebar_current: "docs-google-container-node-pool"
description: |-
  Manages a GKE NodePool resource.
---

# google_container_node_pool

Manages a Node Pool resource within GKE. For more information see the official documentation and API.

## Example usage

### Standard usage

```hcl
resource "google_container_node_pool" "np" {
  name       = "my-node-pool"
  zone       = "us-central1-a"
  cluster    = "${google_container_cluster.primary.name}"
  node_count = 3
}

resource "google_container_cluster" "primary" {
  name               = "marcellus-wallace"
  zone               = "us-central1-a"
  initial_node_count = 3

  additional_zones = [
    "us-central1-b",
    "us-central1-c",
  ]

  master_auth {
    username = "mr.yoda"
    password = "adoy.rm"
  }

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]

    guest_accelerator {
      type  = "nvidia-tesla-k80"
      count = 1
    }
  }
}
```

### Usage with an empty default pool

```hcl
resource "google_container_node_pool" "np" {
  name       = "my-node-pool"
  zone       = "us-central1-a"
  cluster    = "${google_container_cluster.primary.name}"
  node_count = 1

  node_config {
    preemptible  = true
    machine_type = "n1-standard-1"

    oauth_scopes = [
      "compute-rw",
      "storage-ro",
      "logging-write",
      "monitoring",
    ]
  }
}

resource "google_container_cluster" "primary" {
  name = "marcellus-wallace"
  zone = "us-central1-a"

  lifecycle {
    ignore_changes = ["node_pool"]
  }

  node_pool {
    name = "default-pool"
  }
}
```

### Usage with a regional cluster

```hcl
resource "google_container_cluster" "regional" {
  name   = "marcellus-wallace"
  region = "us-central1"
}

resource "google_container_node_pool" "regional-np" {
  name       = "my-node-pool"
  region     = "us-central1"
  cluster    = "${google_container_cluster.regional.name}"
  node_count = 1
}
```

## Argument Reference

* `zone` - (Optional) The zone in which the cluster resides.

* `region` - (Optional) The region in which the cluster resides (for regional clusters).

* `cluster` - (Required) The cluster to create the node pool for. The cluster must be present in the `zone` provided for zonal clusters.

**Note:** You must provide a `region` for regional clusters and a `zone` for zonal clusters.


* `autoscaling` - (Optional) Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage. Structure is documented below.

* `initial_node_count` - (Optional) The initial node count for the pool. Changing this will force recreation of the resource.

* `management` - (Optional) Node management configuration, wherein auto-repair and auto-upgrade are configured. Structure is documented below.

* `name` - (Optional) The name of the node pool. If left blank, Terraform will auto-generate a unique name.

* `name_prefix` - (Deprecated, Optional) Creates a unique name for the node pool beginning with the specified prefix. Conflicts with `name`.

* `node_config` - (Optional) The node configuration of the pool. See `google_container_cluster` for schema.

* `node_count` - (Optional) The number of nodes per instance group. This field can be used to update the number of nodes per instance group, but should not be used alongside `autoscaling`.

* `project` - (Optional) The ID of the project in which to create the node pool. If blank, the provider-configured project will be used.

* `version` - (Optional) The Kubernetes version for the nodes in this pool. Note that if this field and `auto_upgrade` are both specified, they will fight each other for what the node version should be, so setting both is highly discouraged.

The `autoscaling` block supports:

* `min_node_count` - (Required) Minimum number of nodes in the NodePool. Must be >= 1 and <= `max_node_count`.

* `max_node_count` - (Required) Maximum number of nodes in the NodePool. Must be >= `min_node_count`.
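
An illustrative sketch of a pool that delegates sizing to the autoscaler (the pool name and the `google_container_cluster.primary` reference are placeholders borrowed from the examples above); note that `node_count` is omitted, per the guidance above:

```hcl
resource "google_container_node_pool" "autoscaled" {
  name               = "autoscaled-pool"
  zone               = "us-central1-a"
  cluster            = "${google_container_cluster.primary.name}"
  initial_node_count = 1

  # The autoscaler keeps the pool between one and five nodes; node_count is
  # deliberately not set, since it would conflict with autoscaling.
  autoscaling {
    min_node_count = 1
    max_node_count = 5
  }
}
```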

The `management` block supports:

* `auto_repair` - (Optional) Whether the nodes will be automatically repaired.

* `auto_upgrade` - (Optional) Whether the nodes will be automatically upgraded.
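
A similar sketch with node management enabled (placeholder names again; if `auto_upgrade` is on, leave `version` unset, as noted above):

```hcl
resource "google_container_node_pool" "managed" {
  name       = "managed-pool"
  zone       = "us-central1-a"
  cluster    = "${google_container_cluster.primary.name}"
  node_count = 1

  # Let GKE repair unhealthy nodes and roll out node upgrades automatically.
  management {
    auto_repair  = true
    auto_upgrade = true
  }
}
```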

## Import

Node pools can be imported using the zone, cluster and name, e.g.

```
$ terraform import google_container_node_pool.mainpool us-east1-a/my-cluster/main-pool
```
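
For node pools in regional clusters, the region should take the place of the zone in the import ID. This is a sketch based on the regional support added in this change, not a confirmed import format:

```
$ terraform import google_container_node_pool.mainpool us-east1/my-cluster/main-pool
```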