Fix a panic in our test caused by a ListPolicy being nil. I assume, but
cannot verify, that this is an API change: the API may now send back a
nil ListPolicy when a default is used.
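A minimal sketch of the kind of guard involved, assuming the flattener receives the v1 API's ListPolicy type (the function name and shape are illustrative, not the provider's exact code):

```go
package google

import (
	cloudresourcemanager "google.golang.org/api/cloudresourcemanager/v1"
)

// flattenListPolicy is an illustrative sketch, not the provider's exact code.
func flattenListPolicy(lp *cloudresourcemanager.ListPolicy) []map[string]interface{} {
	// The API may return a nil ListPolicy when a default is in effect,
	// so guard before dereferencing.
	if lp == nil {
		return nil
	}
	return []map[string]interface{}{{
		"allowed_values": lp.AllowedValues,
		"denied_values":  lp.DeniedValues,
	}}
}
```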
Add the `enable_flow_logs` field to our subnetwork resource, so we can
specify whether [flow logs][1] should be enabled in Terraform configs.
Note that this behavior isn't explicitly documented yet, but it has made
it into the beta API client.
[1]: https://cloud.google.com/vpc/docs/using-flow-logs
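For illustration, a sketch of what the schema addition might look like, assuming the standard helper/schema package; the Optional/Computed choice is an assumption, and the surrounding resource definition is elided:

```go
package google

import "github.com/hashicorp/terraform/helper/schema"

// subnetworkSchemaAdditions sketches only the new field.
func subnetworkSchemaAdditions() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"enable_flow_logs": {
			Type:     schema.TypeBool,
			Optional: true,
			// Computed so existing configs that omit the field don't
			// produce a permanent diff against the API's default.
			Computed: true,
		},
	}
}
```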
This PR also switched us to using the beta API in all cases, which had a side effect worth noting; the note is included here for posterity.
=====
The problem is, we add a GPU, and as per the docs, GKE adds a taint to
the node pool saying "don't schedule here unless you tolerate GPUs",
which is pretty sensible.
Terraform doesn't know about that, because it didn't ask for the taint
to be added. So after apply, on refresh, it sees the state of the world
(1 taint) and the state of the config (0 taints) and wants to set the
world equal to the config. This introduces a diff, which makes the test
fail - tests fail if there's a diff after they run.
Taints are a beta feature, though. :) And since the config doesn't
contain any taints, Terraform didn't see any beta features in that node
pool ... so it used to send the request to the v1 API. And since the v1
API didn't return anything about taints (since they're a beta feature),
Terraform happily checked the state of the world (0 taints I know about)
vs the config (0 taints), and all was well.
This PR makes every node pool refresh request hit the beta API. So now
Terraform finds out about the taints (which were always there) and the
test fails (which it always should have done).
The solution is probably to write a little bit of code that suppresses
the diff report for any taint with the key 'nvidia.com/gpu', but only
if GPUs are enabled. I think that's something that can be done.
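A rough sketch of that idea as a DiffSuppressFunc, purely illustrative: the attribute path, the accelerator check, and the matching logic are all assumptions, not the provider's actual code.

```go
package google

import (
	"strings"

	"github.com/hashicorp/terraform/helper/schema"
)

// suppressGpuTaintDiff hides the diff for the taint GKE injects
// automatically, but only when the node pool actually has GPUs attached.
func suppressGpuTaintDiff(k, old, new string, d *schema.ResourceData) bool {
	// Only consider suppressing when accelerators (GPUs) are configured;
	// the attribute path here is hypothetical.
	v, ok := d.GetOk("node_config.0.guest_accelerator")
	if !ok || len(v.([]interface{})) == 0 {
		return false
	}
	// Suppress the state-side entry for the auto-added GPU taint: it shows
	// up in state (old) but not in config (new).
	return strings.Contains(k, "taint") && old == "nvidia.com/gpu" && new == ""
}
```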
This PR does a few things to the User-Agent header:
1. It puts Terraform/(version) first, since that's the order the RFC
expects.
2. It removes the GOOS and GOARCH data; I could be convinced to put it
back in, but I don't see what value it's providing.
3. It moves directly to consuming the version package (which is what the
comment above the previously-called function recommended).
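A sketch of the assembled header, assuming a version package that exposes a Version string (as hashicorp/terraform's version package does):

```go
package google

import (
	"fmt"

	"github.com/hashicorp/terraform/version"
)

// userAgent builds the header value with Terraform/(version) as the first
// product token, per RFC 7231's "decreasing order of significance" rule.
func userAgent() string {
	// Consume the version package directly instead of calling through a
	// wrapper function.
	return fmt.Sprintf("Terraform/%s", version.Version)
}
```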
This simply adds the specification for operation timeouts and sets
sane defaults. In testing against specific regions, creation of SQL
databases fluctuated between 7 and 14 minutes against us-east1. As
such, a 15m creation threshold is recommended. Update and Delete
operations will adhere to 10m timeouts.
This follows a similar format to #1309.
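A sketch of the timeout specification, assuming helper/schema's ResourceTimeout support; the wrapper function exists only so the sketch is self-contained:

```go
package google

import (
	"time"

	"github.com/hashicorp/terraform/helper/schema"
)

// resourceSqlDatabaseTimeouts returns the timeout defaults described above.
func resourceSqlDatabaseTimeouts() *schema.ResourceTimeout {
	return &schema.ResourceTimeout{
		// Creation fluctuated between 7 and 14 minutes in us-east1, so a
		// 15m default leaves some headroom.
		Create: schema.DefaultTimeout(15 * time.Minute),
		Update: schema.DefaultTimeout(10 * time.Minute),
		Delete: schema.DefaultTimeout(10 * time.Minute),
	}
}
```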
* escape the folder name (in case of spaces, etc.); see the sketch after this list
* add test case for folder with space
* add missing args
* make separate tests for each folder test, get folder name length under API limits
* further abstract out the resource name to prevent test collisions
* work around multiple results returning for a given query by looping over the return
* split test cases into separate funcs
* adding google folder data source with get by id, search by fields and lookup organization functionality
* removing search functionality
* creating folders for each test and updating documentation with default values
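A sketch combining the escaping and the result-loop workaround mentioned above; the query shape and the client wiring are assumptions based on the Cloud Resource Manager v2beta1 API:

```go
package google

import (
	"fmt"
	"strings"

	rmV2Beta1 "google.golang.org/api/cloudresourcemanager/v2beta1"
)

// searchFolderByName escapes the display name and loops over all results,
// since a search query can return more than one folder.
func searchFolderByName(svc *rmV2Beta1.FoldersService, parent, displayName string) (*rmV2Beta1.Folder, error) {
	// Escape embedded quotes so names with spaces or quotes still form a
	// valid query string.
	escaped := strings.Replace(displayName, `"`, `\"`, -1)
	req := &rmV2Beta1.SearchFoldersRequest{
		Query: fmt.Sprintf(`lifecycleState=ACTIVE AND parent=%s AND displayName="%s"`, parent, escaped),
	}
	resp, err := svc.Search(req).Do()
	if err != nil {
		return nil, err
	}
	// Work around multiple results by matching the display name exactly
	// rather than trusting the first entry.
	for _, folder := range resp.Folders {
		if folder.DisplayName == displayName {
			return folder, nil
		}
	}
	return nil, fmt.Errorf("folder not found: %s", displayName)
}
```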
* Add support for regional GKE clusters in google_container_cluster:
* implement operation wait for v1beta1 api
* implement container clusters get for regional clusters
* implement container clusters delete for regional clusters
* implement container clusters update for regional clusters
* simplify logic by using generic 'location' instead of 'zone' and 'region' (see the sketch after this list)
* implement a method to generate the update function and refactor
* rebase and fix
* reorder container_operation fns
* cleanup
* add import support and docs
* additional locations cleanup
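A sketch of the generic-'location' idea: the v1beta1 API addresses zonal and regional clusters through the same locations-based name, so a single code path can serve both. The wrapper function is illustrative:

```go
package google

import (
	"fmt"

	containerBeta "google.golang.org/api/container/v1beta1"
)

// getCluster reads a cluster by its location, which may be either a zone or
// a region; no zone-vs-region branching is needed.
func getCluster(svc *containerBeta.Service, project, location, name string) (*containerBeta.Cluster, error) {
	fullName := fmt.Sprintf("projects/%s/locations/%s/clusters/%s", project, location, name)
	cluster, err := svc.Projects.Locations.Clusters.Get(fullName).Do()
	if err != nil {
		return nil, fmt.Errorf("error reading cluster %q: %s", name, err)
	}
	return cluster, nil
}
```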
* Updates the default GKE legacy ABAC setting to false (sketched after this list)
* Updates docs for container_cluster
* Update test comments
* Format fix
* Adds ImportState test step to default legacy ABAC test
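A sketch of the changed default, with the field shape assumed:

```go
package google

import "github.com/hashicorp/terraform/helper/schema"

// legacyAbacField sketches the changed default; the field previously
// defaulted to true.
func legacyAbacField() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeBool,
		Optional: true,
		// New clusters get legacy ABAC disabled unless the config
		// explicitly opts in.
		Default: false,
	}
}
```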
* Add time partitioning field to google_bigquery_table resource (sketched after this list)
* Fix flattening of the time partitioning field in the google_bigquery_table resource
* Add resource bigquery table time partitioning field test
* Move resource bigquery table time partitioning field test to basic
* Add step to check that all the fields match
* Mark resource bigquery table time partitioning field as ForceNew
* Add time partitioning field test to testAccBigQueryTable config
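A sketch of what the field might look like; the sub-fields shown (type, expiration_ms) and their constraints are assumptions:

```go
package google

import "github.com/hashicorp/terraform/helper/schema"

// timePartitioningField sketches the new block; ForceNew because
// partitioning can't be changed on an existing table.
func timePartitioningField() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeList,
		Optional: true,
		ForceNew: true,
		MaxItems: 1,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"type": {
					Type:     schema.TypeString,
					Required: true,
					ForceNew: true,
				},
				"expiration_ms": {
					Type:     schema.TypeInt,
					Optional: true,
					ForceNew: true,
				},
			},
		},
	}
}
```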
* Updated google.golang.org/api/container/v1beta1
* Added support for private_cluster and master_ipv4_cidr (sketched after this list)
This is to implement #1174. See
https://groups.google.com/forum/#!topic/google-cloud-sdk-announce/GGW3SQSANIc
* Added simple test for private_cluster and master_ipv4_cidr
* Review replies
* Added some documentation for private_cluster
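A sketch of the two new fields as named in this PR; the types, defaults, and ForceNew choices are assumptions:

```go
package google

import "github.com/hashicorp/terraform/helper/schema"

// privateClusterFields sketches the two new cluster settings.
func privateClusterFields() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"private_cluster": {
			Type:     schema.TypeBool,
			Optional: true,
			ForceNew: true,
			Default:  false,
		},
		// CIDR of the hosted master network, e.g. "172.16.0.0/28"; the
		// API requires it when private_cluster is enabled.
		"master_ipv4_cidr": {
			Type:     schema.TypeString,
			Optional: true,
			ForceNew: true,
		},
	}
}
```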
This updates the organization policy tests to be run sequentially,
instead of in parallel, as they share a resource that they're modifying.
It also updates them to use a separate organization from the one all our
other tests run in, which prevents other tests from failing because
they're run in parallel with the organization policy changing under
them.
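A sketch of the serialization approach, with illustrative names: one top-level test runs the cases as non-parallel subtests.

```go
package google

import "testing"

// TestAccOrganizationPolicy groups the org policy cases under one top-level
// test and runs them as non-parallel subtests, so they never race on the
// shared organization.
func TestAccOrganizationPolicy(t *testing.T) {
	testCases := map[string]func(t *testing.T){
		"boolean": testAccOrganizationPolicy_boolean,
		"list":    testAccOrganizationPolicy_list,
	}

	for name, tc := range testCases {
		tc := tc
		t.Run(name, func(t *testing.T) {
			// No t.Parallel() call here, so the cases run one at a time.
			tc(t)
		})
	}
}

// The real bodies would drive resource.Test steps; empty stubs keep the
// sketch self-contained.
func testAccOrganizationPolicy_boolean(t *testing.T) {}
func testAccOrganizationPolicy_list(t *testing.T)    {}
```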
* add util for handling wrapped/raw google api errors (sketched after this list)
* add 404 error handling to shared iam files
* fix errwrap calls in iam files
* fix import
* remove newlines, clear ID for removed state in iam binding
* move setid calls back
* Revert "move setid calls back"
This reverts commit 0c7b2dbf92aff33dac8c5beb95568c2bc86dd7de.
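A sketch of the wrapped/raw error util and the 404 handling it enables, assuming hashicorp/errwrap and the googleapi error type; the function names are illustrative:

```go
package google

import (
	"log"

	"github.com/hashicorp/errwrap"
	"google.golang.org/api/googleapi"
)

// isGoogleApiErrorWithCode pulls a *googleapi.Error out of err whether it
// arrives raw or wrapped by errwrap, then compares the HTTP status code.
func isGoogleApiErrorWithCode(err error, code int) bool {
	gerr, ok := errwrap.GetType(err, &googleapi.Error{}).(*googleapi.Error)
	return ok && gerr != nil && gerr.Code == code
}

// handleNotFound shows the 404 case: clear the ID (via the supplied
// callback) so Terraform treats the resource as removed instead of failing.
func handleNotFound(err error, clearId func()) error {
	if isGoogleApiErrorWithCode(err, 404) {
		log.Printf("[WARN] Removing resource from state because it no longer exists")
		clearId()
		return nil
	}
	return err
}
```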
* add update support for pod security policy
* update test
* add comment about updates
PR #1217 mistakenly updated the Set logic when flattening backends,
which caused some cascading errors and wasn't strictly necessary to
resolve the issue at hand. This backs out those changes, and instead
makes the smallest possible change to resolve the initial error, by
separating the logic for flattening regional backends from the logic for
flattening global backends.
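A sketch of the separation, with the field handling purely illustrative: regional backend services get their own flattener instead of sharing the global one.

```go
package google

import compute "google.golang.org/api/compute/v1"

// flattenRegionBackends handles only regional backend services, so changes
// to it can't cascade into the global flattener (and vice versa).
func flattenRegionBackends(backends []*compute.Backend) []map[string]interface{} {
	result := make([]map[string]interface{}, 0, len(backends))
	for _, b := range backends {
		result = append(result, map[string]interface{}{
			"group":       b.Group,
			"description": b.Description,
			// Global backend services read additional balancing-mode
			// fields here; the regional flattener omits them.
		})
	}
	return result
}
```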
We had several calls to `d.Set` that returned errors we weren't
catching, which turning on the panic flag for the tests surfaced. This
PR addresses them, and fixes the one test that was not safe to run in
parallel because it relied on a hardcoded name being unique.
This is largely just removing calls to `d.Set` for fields that don't
exist in the Schema, fixing how Sets get set, correcting typos, and
converting types.
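A sketch of the pattern being fixed, with illustrative field names: check the error each `d.Set` call returns instead of discarding it.

```go
package google

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

// setFields demonstrates catching the error each d.Set call returns
// instead of discarding it.
func setFields(d *schema.ResourceData, name string, tags []string) error {
	if err := d.Set("name", name); err != nil {
		return fmt.Errorf("error setting name: %s", err)
	}
	// For TypeSet fields, the value must be something the field's schema
	// can coerce into a set, or d.Set returns an error.
	if err := d.Set("tags", tags); err != nil {
		return fmt.Errorf("error setting tags: %s", err)
	}
	return nil
}
```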