* Add basic update for `google_kms_crypto_key` resource
Prior to this commit, any change to `rotation_period` would
force a new resource, as no `Update` was defined for the resource.
This commit introduces a basic `Update` that calls the
`Patch` service method. It only modifies `rotation_period` and
`next_rotation_time` at the moment, but this reflects what is
"allowed" on https://console.cloud.google.com/security/kms.
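A minimal sketch of that update, assuming the vendored cloudkms/v1
client and the provider's usual plumbing (the `clientKms` field and the
`next_rotation_time` schema field are assumptions here):

```go
func resourceKmsCryptoKeyUpdate(d *schema.ResourceData, meta interface{}) error {
	config := meta.(*Config)

	// Only the two mutable fields are sent; the update mask scopes the Patch.
	key := &cloudkms.CryptoKey{
		RotationPeriod:   d.Get("rotation_period").(string),
		NextRotationTime: d.Get("next_rotation_time").(string), // hypothetical schema field
	}

	_, err := config.clientKms.Projects.Locations.KeyRings.CryptoKeys.
		Patch(d.Id(), key).
		UpdateMask("rotation_period,next_rotation_time").
		Do()
	if err != nil {
		return err
	}

	return resourceKmsCryptoKeyRead(d, meta)
}
```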
* Remove unused `Purpose` value in `CryptoKey`
We are only patching `rotation_period` and `next_rotation_time`,
so that value will not be affected.
* nit: format `Patch` operation to be in a single line
* Extend `TestAccKmsCryptoKey_rotation` test steps
- Test change in rotation period
- Test removal of rotation period
* Do not parse `NextRotationTime` if it is not set
* remove ForceNew: false
When creating a trigger, using the project defined in the schema means
we enforce that the repo must be in that same project. We should instead
look at the project defined in the `trigger_template` data, falling back
to the schema's project if it isn't set.
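A sketch of that resolution order (the `trigger_template` project field
name is an assumption):

```go
// Prefer the project set on the trigger_template block; fall back to the
// schema/provider-level project only when it isn't set.
project, err := getProject(d, config)
if err != nil {
	return err
}
if tt, ok := d.GetOk("trigger_template.0.project"); ok && tt.(string) != "" {
	project = tt.(string)
}
```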
Closes: #1555
When enabling services, after the waiter returns, list the enabled
services and ensure the ones we enabled are in there. If not, retry.
This may not always resolve #1393, but it should help. Unfortunately,
the real answer is probably either:
1. For us to try and get the API updated to only return the waiter when
the service will consistently be available. I don't know how feasible
this is, but I'm willing to open a ticket.
2. For us to build retries into ~all our resources to retry for a set
amount of time when a service not enabled error is returned. This would
greatly slow down the provider in the case of the service legitimately
not being enabled, but is how other providers handle this class of
problem.
Unfortunately, due to the eventual consistency at play, this is a hard
issue to reproduce and prove, though it matches my experience: while
testing this patch, one of the tests failed with the error that the
serviceusage API hadn't been enabled, but only on step 4 of the test,
when calls had already succeeded. That suggests eventual consistency to
me. Regardless, this patch shouldn't _hurt_, should mostly be an
imperceptible change to users, and should make instances like #1393
less likely.
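A sketch of the post-waiter verification using the SDK's retry helper;
`listCurrentlyEnabledServices` and `toEnable` are hypothetical names for
the list helper and the set of services just enabled:

```go
err := resource.Retry(10*time.Minute, func() *resource.RetryError {
	// Ask the API which services it currently reports as enabled.
	enabled, err := listCurrentlyEnabledServices(project, config)
	if err != nil {
		return resource.NonRetryableError(err)
	}
	for _, s := range toEnable {
		if _, ok := enabled[s]; !ok {
			// Eventual consistency: the waiter finished, but the service
			// isn't visible yet. Retry until it shows up or we time out.
			return resource.RetryableError(
				fmt.Errorf("service %q not yet listed as enabled", s))
		}
	}
	return nil
})
```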
* vendor service usage api
* use serviceusage api instead of servicemanagement for project services (sketched below)
* add bigquery-json to test
* add import for project service
* add serviceusage_operation.go
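For posterity, enabling a single service through serviceusage looks
roughly like this; the client field and waiter signature are assumptions:

```go
// Service names take the form "projects/{project}/services/{service}".
name := fmt.Sprintf("projects/%s/services/%s", project, service)
op, err := config.clientServiceUsage.Services.Enable(
	name, &serviceusage.EnableServiceRequest{}).Do()
if err != nil {
	return err
}
// Wait on op with the waiter added in serviceusage_operation.go.
return serviceUsageOperationWait(config, op, "Enabling "+service) // hypothetical waiter
```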
When reading a project, both App Engine and Billing would fail if
_neither_ the default project the provider was configured with nor the
project being targeted had the App Engine Admin or Billing APIs
(respectively) enabled. We didn't catch this because our source project
obviously has both enabled.
This fixes the matter by checking the error returned from each of those,
and if it looks like an API not enabled error, logging it at warning
level instead of returning it as an error, which will let the read
proceed as usual.
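A sketch of the check, assuming the failure surfaces as a
*googleapi.Error; the message matching is an assumption about the error
shape:

```go
// Detect an "API not enabled" failure and downgrade it to a log line so
// the read can proceed; everything else is still a hard error.
if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 403 &&
	strings.Contains(gerr.Message, "not enabled") {
	log.Printf("[WARN] API not enabled when reading project, skipping: %s", gerr.Message)
} else if err != nil {
	return err
}
```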
IAP has no reasonable path to support: PATCH is broken, and IAP must be
configured with an OAuth2 client ID and secret that belong to the
project the app is associated with. There's no programmatic way to
create Clients. But we create the project and the app at the same time,
and we can't update because PATCH is broken. So this just drops IAP. It
also forces all our updates to ForceNew, because we can't update.
Also, this adds more test coverage and docs, and fixes import by not
relying on the config for setting App Engine info in state.
* Revert "Merge pull request #1434 from terraform-providers/paddy_revert_beta"
This reverts commit 118cd71201, reversing
changes made to d59fcbbc59.
* add ConvertSelfLinkToV1 calls to places where beta links are stored
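For reference, such a helper can be as small as a version rewrite on the
link; a sketch:

```go
import "regexp"

// ConvertSelfLinkToV1 normalizes a compute self link to the v1 API,
// regardless of which API version (e.g. beta) produced it.
func ConvertSelfLinkToV1(link string) string {
	reg := regexp.MustCompile("/compute/[a-zA-Z0-9]*/projects/")
	return reg.ReplaceAllString(link, "/compute/v1/projects/")
}
```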
Fix a panic in our test caused by a `ListPolicy` being nil. I assume,
but cannot verify, that this is an API change: it may now send back a
nil `ListPolicy` if a default is used.
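A minimal sketch of the guard:

```go
// Treat a missing ListPolicy as an empty one before dereferencing it,
// so a default policy can't panic us.
listPolicy := policy.ListPolicy
if listPolicy == nil {
	listPolicy = &cloudresourcemanager.ListPolicy{}
}
```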
Add the `enable_flow_logs` field to our subnetwork resource, so we can
specify whether [flow logs][1] should be enabled in Terraform configs.
Note that this behavior isn't explicitly documented yet, but it has made
it into the beta API client.
[1]: https://cloud.google.com/vpc/docs/using-flow-logs
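On the schema side this is just a boolean mapped onto the beta API's
`EnableFlowLogs`; a sketch (the Optional/Computed combination is an
assumption):

```go
"enable_flow_logs": {
	Type:     schema.TypeBool,
	Optional: true,
	Computed: true,
},
```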
This PR also switched us to using the beta API in all cases, and that had a side effect worth noting; the note is included here for posterity.
=====
The problem is, we add a GPU, and as per the docs, GKE adds a taint to
the node pool saying "don't schedule here unless you tolerate GPUs",
which is pretty sensible.
Terraform doesn't know about that, because it didn't ask for the taint
to be added. So after apply, on refresh, it sees the state of the world
(1 taint) and the state of the config (0 taints) and wants to set the
world equal to the config. This introduces a diff, which makes the test
fail - tests fail if there's a diff after they run.
Taints are a beta feature, though. :) And since the config doesn't
contain any taints, terraform didn't see any beta features in that node
pool ... so it used to send the request to the v1 API. And since the v1
API didn't return anything about taints (since they're a beta feature),
terraform happily checked the state of the world (0 taints I know about)
vs the config (0 taints), and all was well.
This PR makes every node pool refresh request hit the beta API. So now
terraform finds out about the taints (which were always there) and the
test fails (which it always should have done).
The solution is probably to write a little bit of code which suppresses
the report of the diff of any taint with value 'nvidia.com/gpu', but
only if GPUs are enabled. I think that's something that can be done.
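A hypothetical shape for that suppression; the key matching and the GPU
check are assumptions, not a final implementation:

```go
func suppressGpuTaintDiff(k, old, new string, d *schema.ResourceData) bool {
	// Only consider suppressing when the config actually requests GPUs.
	accels, ok := d.GetOk("node_config.0.guest_accelerator")
	if !ok || len(accels.([]interface{})) == 0 {
		return false
	}
	// Hide only the taint GKE injects for GPU nodes; all other taint
	// diffs surface normally.
	return strings.HasSuffix(k, ".key") && old == "nvidia.com/gpu" && new == ""
}
```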
This PR does a few things to the User-Agent header:
1. It puts Terraform/(version) first, since that's the way the RFC
expects it.
2. It removes the goos and goarch data; although I could be convinced to
put it back in, I don't see what value it's providing.
3. It moves directly to consuming the version package (which is what the
comment above the function previously being called recommends). A sketch
follows the list.
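The resulting header is roughly this shape, assuming terraform core's
version package exposes a `Version` string:

```go
// Product tokens in decreasing significance, per the RFC; no GOOS/GOARCH.
userAgent := fmt.Sprintf("Terraform/%s", version.Version)
```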
This simply adds the specification for operation timeouts and sets
sane defaults. In testing against specific regions, creation of SQL
databases would fluctuate between 7 and 14 minutes against us-east1. As
such, a 15m creation timeout is recommended for this. Update and
Delete operations will adhere to 10m timeouts.
This follows a similar format as #1309.
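Concretely, the defaults land in the resource's timeout block, along
these lines:

```go
Timeouts: &schema.ResourceTimeout{
	Create: schema.DefaultTimeout(15 * time.Minute),
	Update: schema.DefaultTimeout(10 * time.Minute),
	Delete: schema.DefaultTimeout(10 * time.Minute),
},
```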
* escape the folder name (in case of spaces, etc)
* add test case for folder with space
* add missing args
* make separate tests for each folder test, get folder name length under API limits
* further abstract out the resource name to prevent test collisions
* workaround multiple results returning for a given query by looping over the return (see the sketch after this list)
* split test cases into separate funcs
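A hypothetical sketch of the escaped query plus the exact-match loop,
assuming the Cloud Resource Manager v2beta1 search surface (client field
and variable names are assumptions):

```go
// Quoting the display name in the query keeps spaces and similar safe.
query := fmt.Sprintf("lifecycleState=ACTIVE AND displayName=\"%s\"", displayName)
resp, err := config.clientResourceManagerV2Beta1.Folders.Search(
	&resourceManagerV2Beta1.SearchFoldersRequest{Query: query}).Do()
if err != nil {
	return err
}
// The API can return multiple results for a query, so loop and keep
// only the exact display-name match.
for _, folder := range resp.Folders {
	if folder.DisplayName == displayName {
		d.SetId(folder.Name)
		break
	}
}
```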
* adding google folder data source with get-by-id, search-by-fields, and organization lookup functionality
* removing search functionality
* creating folders for each test and updating documentation with default values
* Add support for regional GKE clusters in google_container_cluster:
* implement operation wait for v1beta1 api
* implement container clusters get for regional clusters (a lookup sketch follows this list)
* implement container clusters delete for regional cluster
* implement container clusters update for regional cluster
* simplify logic by using generic 'location' instead of 'zone' and 'region'
* implement a method to generate the update function and refactor
* rebase and fix
* reorder container_operation fns
* cleanup
* add import support and docs
* additional locations cleanup
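A sketch of the location-generic lookup against v1beta1, which accepts a
"projects/*/locations/*/clusters/*" name and so covers both zonal and
regional clusters (helper and client field names are hypothetical):

```go
func getCluster(config *Config, project, location, name string) (*containerBeta.Cluster, error) {
	// 'location' is either a zone or a region; the API path is the same.
	fqn := fmt.Sprintf("projects/%s/locations/%s/clusters/%s", project, location, name)
	return config.clientContainerBeta.Projects.Locations.Clusters.Get(fqn).Do()
}
```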
* Updates the default GKE legacy ABAC setting to false (see the sketch after this list)
* Updates docs for container_cluster
* Update test comments
* Format fix
* Adds ImportState test step to default legacy ABAC test
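The substantive change is just the default on the cluster's schema
field; sketched here with the rest of the schema omitted:

```go
"enable_legacy_abac": {
	Type:     schema.TypeBool,
	Optional: true,
	Default:  false, // previously true
},
```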
* Add time partitioning field to google_bigquery_table resource
* Fix flattening of the time partitioning field in the google_bigquery_table resource (a flatten sketch follows this list)
* Add resource bigquery table time partitioning field test
* Move resource bigquery table time partitioning field test to basic
* Add step to check that all the fields match
* Mark resource bigquery table time partitioning field as ForceNew
* Add time partitioning field test to testAccBigQueryTable config
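A sketch of the flatten side, assuming the google.golang.org/api/bigquery/v2
types; the helper name is hypothetical:

```go
func flattenTimePartitioning(tp *bigquery.TimePartitioning) []map[string]interface{} {
	if tp == nil {
		return nil
	}
	result := map[string]interface{}{"type": tp.Type}
	if tp.ExpirationMs != 0 {
		result["expiration_ms"] = tp.ExpirationMs
	}
	return []map[string]interface{}{result}
}
```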
* Updated google.golang.org/api/container/v1beta1
* Added support for private_cluster and master_ipv4_cidr (schema sketch after this list)
This is to implement #1174. See
https://groups.google.com/forum/#!topic/google-cloud-sdk-announce/GGW3SQSANIc
* Added simple test for private_cluster and master_ipv4_cidr
* Review replies
* Added some documentation for private_cluster
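A hypothetical sketch of the schema side; `master_ipv4_cidr_block` as
the full field name and the /28 validation are assumptions based on the
API's requirements:

```go
"private_cluster": {
	Type:     schema.TypeBool,
	Optional: true,
	ForceNew: true,
	Default:  false,
},
"master_ipv4_cidr_block": {
	Type:         schema.TypeString,
	Optional:     true,
	ForceNew:     true,
	// The API requires the master range to be exactly a /28.
	ValidateFunc: validation.CIDRNetwork(28, 28),
},
```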