Fix Dana's comments, one of which exposed a bug: we weren't actually
fixing the diff. Because resolveImage can return multiple formats, and
only returns a self_link if a self_link is passed in, more often than
not we were diffing some other format against a self_link supplied in
the config. We just had a bug in our CustomizeDiff function that
concealed this.
I added the resolvedImageSelfLink helper function to turn the output
from resolveImage into a self_link, which fixes the problem. It also has
the knock-on effect of fixing #2067 at the same time, which is nice. I
decided to add another function instead of modifying resolveImage to
always return a self_link because I don't want to deal with the
backwards-compatibility problems of changing how we store a bunch of
things in state this close to 1.19.0. And honestly, we should probably
just always be storing the self_link, _anyway_.
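For reference, here's a minimal sketch of the shape of that helper; the formats handled and the base URL below are illustrative assumptions, not the full implementation:

```go
// Sketch only: expand resolveImage's possible return formats into a
// full self_link so the diff always compares self_links.
func resolvedImageSelfLink(project, name string) (string, error) {
	const base = "https://www.googleapis.com/compute/v1/"
	switch {
	case strings.HasPrefix(name, "https://"):
		return name, nil // already a self_link
	case strings.HasPrefix(name, "projects/"):
		return base + name, nil
	case strings.HasPrefix(name, "global/images/"):
		return fmt.Sprintf("%sprojects/%s/%s", base, project, name), nil
	default:
		return "", fmt.Errorf("could not expand image %q into a self_link", name)
	}
}
```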
We don't need quite so many `GetOk`s, since the client library ignores any fields that are set to the zero value for their type. I left the few that involved error handling or values that had to be set before others, but this should at least make the code a bit nicer to look at.
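For illustration, this is the kind of simplification that allows (field names here are hypothetical):

```go
// Before: every optional field guarded by GetOk.
if v, ok := d.GetOk("description"); ok {
	disk.Description = v.(string)
}

// After: the client library drops zero-value fields from the request,
// so an unconditional Get is equivalent for simple fields like this.
disk.Description = d.Get("description").(string)
```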
Tests are passing except the ones that were already failing in CI.
When reading or deleting the google_project_service resource, first
retrieve the project. If it's not found, or if it's pending deletion,
remove the resource from state, since the service is functionally disabled.
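Roughly, the guard looks like this (a simplified sketch, assuming the provider's existing `isGoogleApiErrorWithCode` helper; error handling trimmed):

```go
// Sketch of the read/delete-time check described above.
project, err := config.clientResourceManager.Projects.Get(projectID).Do()
if err != nil {
	if isGoogleApiErrorWithCode(err, 404) {
		d.SetId("") // project is gone, so the service is effectively gone too
		return nil
	}
	return err
}
if project.LifecycleState == "DELETE_REQUESTED" {
	d.SetId("") // pending deletion: the service is functionally disabled
	return nil
}
```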
Also, add a test to confirm the behaviour.
This should resolve #1292.
I'm working on Stackdriver monitoring in another branch, which required updating `google.golang.org/genproto`, which required updating `google.golang.org/grpc`, which required updating `github.com/golang/protobuf`, and so on. This PR updates all of the Google-provided deps to their latest versions. In addition, it includes:
- A change in config.go to reflect an updated type name
- Five files changed by `make fmt`
Tested with `make build`, `make test`, and `make testacc`.
I also did a bit of cleanup while I was here, fixing things I thought could be improved in these files (wording changes, removing tests that aren't really necessary, etc.). Take a look and make sure I didn't actually remove anything important!
So, every time this test fails we leave behind 10 CPUs' worth of instances. This is a big contributor to our quota problem, because we're at 70 (!) right now. We still need to fix the test, but let's also reduce the count to 1 so each failure leaves behind a tenth as many zombie instances.
Changing from a new to a legacy health check also didn't work, so I have no clue what the issue is here. Observing the instances, they show as stuck in the "Instance is being verified" state for days at a time.
We had a failure in the last CI run while waiting for this resource to be created, so let's increase the timeout.
```
------- Stdout: -------
=== RUN TestAccDataSourceRegionInstanceGroup
--- FAIL: TestAccDataSourceRegionInstanceGroup (631.53s)
testing.go:518: Step 0 error: Error applying: 1 error(s) occurred:
* google_compute_region_instance_group_manager.foo: 1 error(s) occurred:
* google_compute_region_instance_group_manager.foo: timeout while waiting for state to become 'created' (last state: 'creating', timeout: 5m0s)
testing.go:579: Error destroying resource! WARNING: Dangling resources
may exist. The full state and error is shown below.
Error: Error refreshing: 1 error(s) occurred:
* google_compute_region_instance_group_manager.foo: 1 error(s) occurred:
* google_compute_region_instance_group_manager.foo: google_compute_region_instance_group_manager.foo: timeout while waiting for state to become 'created' (last state: 'creating', timeout: 5m0s)
State:
FAIL
```
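For reference, the fix is just a bump to the create timeout, sketched below assuming the resource uses the SDK's standard `Timeouts` support. The new duration is an illustrative pick, not the exact value in the change:

```go
// Sketch: raise the create timeout past the 5m0s that the log above
// shows timing out. The 15m figure is illustrative.
Timeouts: &schema.ResourceTimeout{
	Create: schema.DefaultTimeout(15 * time.Minute),
},
```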
The hadoop test passes for me locally but fails in CI, so this should give some extra debugging information (my user account doesn't have permission to read the Dataproc logs from the failing test).
WIP for now because I want to make sure this method of reading the logs works.
TestAccComputeDisk_timeout is expected to fail 27.77% of the time: acctest.RandString draws from a lowercase-alphanumeric character set, so roughly 10 times out of 36 the string starts with a digit, which makes the `name` invalid. Let's fix that :)
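The fix is the usual one: anchor the random suffix behind a fixed prefix so the name always starts with a letter. Something like this, where the prefix text is an illustrative choice:

```go
// Sketch: GCE names must start with a letter, so put the random part
// behind a fixed prefix. "tf-test-" is illustrative, not necessarily
// the exact string used in the change.
name := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
```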
The ID created for a node pool has three parts, in the form
<location>/<cluster>/<nodepool>. When we use the newer 4-part import
format, the ID is stored as that 4-part string, and the node pool's name
is then parsed incorrectly out of the ID when checking whether the node
pool exists.
When using the 4-part import format, we need to set the ID back to the
correct 3-part form so that Terraform can parse the name from it.
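A sketch of that fix, assuming the extra leading part in the 4-part format is the project:

```go
// Sketch: after a 4-part import like
// <project>/<location>/<cluster>/<nodepool>, reset the stored ID to
// the 3-part form the rest of the resource code expects.
parts := strings.Split(d.Id(), "/")
if len(parts) == 4 {
	d.SetId(strings.Join(parts[1:], "/")) // keep <location>/<cluster>/<nodepool>
}
```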
Closes: #1846