I also did a bit of cleanup while I was here and noticed a few things in these files that I thought could be improved (wording changes, removing tests that aren't strictly necessary, etc.). Take a look and make sure I didn't actually remove anything important!
So, every time this test fails we leave behind 10 CPUs' worth of instances. This is a big contributor to our quota problem, since we're at 70!! right now. We still need to fix the test, but let's also reduce this count to 1 so we only leave behind 1/10th of the zombie instances.
Changing from a new-style to a legacy health check also did not work, so I have no clue what the issue is here. Observing the instances, they show as stuck in the "Instance is being verified" state for up to several days.
We had a failure in the last CI run while waiting for this resource to be created, so let's increase the timeout.
```
------- Stdout: -------
=== RUN TestAccDataSourceRegionInstanceGroup
--- FAIL: TestAccDataSourceRegionInstanceGroup (631.53s)
testing.go:518: Step 0 error: Error applying: 1 error(s) occurred:
* google_compute_region_instance_group_manager.foo: 1 error(s) occurred:
* google_compute_region_instance_group_manager.foo: timeout while waiting for state to become 'created' (last state: 'creating', timeout: 5m0s)
testing.go:579: Error destroying resource! WARNING: Dangling resources
may exist. The full state and error is shown below.
Error: Error refreshing: 1 error(s) occurred:
* google_compute_region_instance_group_manager.foo: 1 error(s) occurred:
* google_compute_region_instance_group_manager.foo: google_compute_region_instance_group_manager.foo: timeout while waiting for state to become 'created' (last state: 'creating', timeout: 5m0s)
State:
FAIL
```
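Presumably the fix is just to raise the 5m create timeout that shows up in the log. A minimal sketch of what that looks like with the SDK's `ResourceTimeout` support; the 10 minute value, the function body, and the omitted fields are illustrative, not the actual change:
```go
package google

import (
	"time"

	"github.com/hashicorp/terraform/helper/schema"
)

// Sketch only: the real resource defines many more fields. This just
// shows where the create timeout would be raised from the 5m seen in
// the log above.
func resourceComputeRegionInstanceGroupManager() *schema.Resource {
	return &schema.Resource{
		// Create/Read/Update/Delete funcs and the Schema map omitted.
		Timeouts: &schema.ResourceTimeout{
			Create: schema.DefaultTimeout(10 * time.Minute),
			Delete: schema.DefaultTimeout(10 * time.Minute),
		},
	}
}
```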
The Hadoop test passes for me locally but fails in CI, so this should give some extra debugging information (my user account doesn't have permission to read the dataproc logs from the failing test).
WIP for now because I want to make sure this method of reading the logs works.
TestAccComputeDisk_timeout is expected to fail 27.77% of the time because acctest.RandString can start with a number, making an invalid `name`. Let's fix that :)
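For what it's worth, 27.77% is consistent with the first character being drawn from a 36-character set containing 10 digits (10/36 ≈ 27.8%). A minimal sketch of the usual fix, prefixing the random suffix with a fixed string so the GCE name can never start with a digit (the helper name is made up):
```go
package google

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/acctest"
)

// Hypothetical helper: build the disk name from a fixed prefix plus a
// random suffix, so the first character is always a letter and the name
// satisfies GCE's RFC1035-style naming rules.
func testAccComputeDiskName() string {
	return fmt.Sprintf("tf-test-disk-%s", acctest.RandString(10))
}
```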
The ID that is created for a nodepool has 3 parts, in the form
<location>/<cluster>/<nodepool>. When we use the newer 4 part import
format, the ID is set to that 4 part import string, and the nodepool
name is then parsed out of the ID incorrectly when checking whether the
nodepool exists.
When using the 4 part import form we need to set the correct ID so that
Terraform can parse the name from the ID.
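Roughly the shape of the fix, assuming the 4 part form is `<project>/<location>/<cluster>/<nodepool>`; the function and field names here are approximations of the provider code, not the exact change:
```go
package google

import (
	"fmt"
	"strings"

	"github.com/hashicorp/terraform/helper/schema"
)

// Sketch: when the 4 part import ID is used, rewrite the ID back to the
// canonical 3 part <location>/<cluster>/<nodepool> form so later code
// that splits the ID finds the nodepool name in the expected position.
func resourceContainerNodePoolStateImporter(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
	parts := strings.Split(d.Id(), "/")
	switch len(parts) {
	case 3:
		// already <location>/<cluster>/<nodepool>; nothing to do
	case 4:
		// <project>/<location>/<cluster>/<nodepool> -> record the project
		// separately and drop it from the ID
		d.Set("project", parts[0])
		d.SetId(fmt.Sprintf("%s/%s/%s", parts[1], parts[2], parts[3]))
	default:
		return nil, fmt.Errorf("invalid container node pool import ID %q", d.Id())
	}
	return []*schema.ResourceData{d}, nil
}
```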
Closes: #1846
* Adding resource_attached_disk
This is a resource which will allow attaching an arbitrary compute disk
to a compute instance. This will enable dynamic numbers of disks to
be associated by using `count`.
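A sketch of what the new resource's schema might look like; the field names are my guesses at the interface, not the final one:
```go
package google

import "github.com/hashicorp/terraform/helper/schema"

// Sketch: the resource only needs to know which disk to attach to which
// instance; both are ForceNew since "changing" an attachment really means
// detaching and re-attaching.
func resourceComputeAttachedDisk() *schema.Resource {
	return &schema.Resource{
		// Create/Read/Delete funcs omitted from this sketch.
		Schema: map[string]*schema.Schema{
			"disk": {
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
			},
			"instance": {
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
			},
		},
	}
}
```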
* undelete/update recently soft-deleted custom roles (see the sketch after this list)
* remove my TODO statements
* check values on soft-delete-recreate for custom role tests
* final fixes to make sure delete works; return read() when updating to 'create'
* check for non-404 errors for custom role get
* add warnings to custom roles docs
Fixes #1494.
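A rough sketch of the undelete-on-create flow those commits describe, using the `google.golang.org/api/iam/v1` client for project-level roles; this is a simplified standalone helper, not the provider's actual code:
```go
package google

import (
	"fmt"

	"google.golang.org/api/googleapi"
	iam "google.golang.org/api/iam/v1"
)

// Sketch: before creating a custom role, check whether a role with the
// same name already exists in a soft-deleted state. If it does, undelete
// it and tell the caller to fall through to update + read instead of a
// plain create.
func undeleteIfSoftDeleted(service *iam.Service, roleName string) (bool, error) {
	existing, err := service.Projects.Roles.Get(roleName).Do()
	if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 {
		// Role doesn't exist at all: proceed with a normal create.
		return false, nil
	}
	if err != nil {
		// Any non-404 error from Get is a real failure, not "role missing"
		// (one of the commits above adds exactly this distinction).
		return false, err
	}
	if !existing.Deleted {
		return false, nil
	}
	if _, err := service.Projects.Roles.Undelete(roleName, &iam.UndeleteRoleRequest{}).Do(); err != nil {
		return false, fmt.Errorf("error undeleting custom role %s: %s", roleName, err)
	}
	return true, nil
}
```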
* Add import support for `google_logging_organization_sink`, `google_logging_folder_sink`, `google_logging_billing_account_sink`.
Using `StateFunc` over `DiffSuppressFunc` should only affect tests; for some reason `TestAccLoggingFolderSink_folderAcceptsFullFolderPath` expected a `folder` value of `folders/{{id}}` rather than `{{id}}` when only `DiffSuppressFunc` was used, whereas in real use `DiffSuppressFunc` should be sufficient.
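Roughly the difference between the two approaches for the sink's `folder` field; the normalization direction below (stripping the `folders/` prefix before storing) is inferred from the test behaviour described above, and the snippets are illustrative rather than the provider's exact schema:
```go
package google

import (
	"strings"

	"github.com/hashicorp/terraform/helper/schema"
)

// A StateFunc rewrites the value before it is written to state, so tests
// asserting on the stored attribute see the normalized form ({{id}}).
var folderWithStateFunc = &schema.Schema{
	Type:     schema.TypeString,
	Required: true,
	ForceNew: true,
	StateFunc: func(v interface{}) string {
		return strings.TrimPrefix(v.(string), "folders/")
	},
}

// A DiffSuppressFunc leaves the stored value alone and only hides the
// diff between the two spellings, which is why the test saw
// folders/{{id}} in state when only this was used.
var folderWithDiffSuppress = &schema.Schema{
	Type:     schema.TypeString,
	Required: true,
	ForceNew: true,
	DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool {
		return strings.TrimPrefix(old, "folders/") == strings.TrimPrefix(new, "folders/")
	},
}
```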
* fix service account key data source name
* switch id to name
* update docs
* doc format
* fixes for validation and tests
* last fixes for service account key data source