Match the bigtable_instance resource to the API structure, deprecating
the inlined config parameters. This is a complicated deprecation,
because we don't want to ForceNew when a config merely moves from the
deprecated fields to the new, undeprecated ones, but we do want to
ForceNew when the values themselves change.
This is achieved using a CustomizeDiff.
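Roughly, the shape of that CustomizeDiff is something like the following sketch, assuming the standard helper/schema signature; the field name used here is illustrative, not necessarily the real schema:

```go
package google

import (
	"github.com/hashicorp/terraform/helper/schema"
)

// A simplified sketch of the idea, not the exact implementation: only force
// recreation when the new-style field actually changes value-to-value, and
// let a config that is merely migrating off the deprecated inline field
// (old value empty in the new field) through without a ForceNew.
// The field name "cluster.0.zone" is illustrative.
func bigtableInstanceCustomizeDiffSketch(diff *schema.ResourceDiff, meta interface{}) error {
	oldV, newV := diff.GetChange("cluster.0.zone")
	if oldV.(string) != "" && oldV.(string) != newV.(string) {
		if err := diff.ForceNew("cluster.0.zone"); err != nil {
			return err
		}
	}
	return nil
}
```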
Update (Regional) Instance Group Managers to deprecate the beta fields
(rolling_update_policy, versions, and auto_healing_policy). Also, rename
RESTART to REPLACE for Instance Group Manager's update_strategy, to be a
bit clearer. This required a DiffSuppress to make sure RESTART and
REPLACE are considered equivalent. In 2.0.0, we should remove RESTART
and drop the DiffSuppressFunc. Sadly, we have no way to do a deprecation
message for RESTART. We'll have to put it in the CHANGELOG. I've already
removed it from the website.
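For reference, the suppression is just the obvious thing; a minimal sketch, assuming the standard helper/schema DiffSuppressFunc signature:

```go
package google

import (
	"github.com/hashicorp/terraform/helper/schema"
)

// A minimal sketch of the suppression: treat the legacy RESTART value and the
// new REPLACE value as equivalent so existing configs don't see a spurious
// diff on update_strategy.
func updateStrategyDiffSuppressSketch(k, old, new string, d *schema.ResourceData) bool {
	return (old == "RESTART" && new == "REPLACE") || (old == "REPLACE" && new == "RESTART")
}
```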
Deprecate the app_engine sub-block of google_project, and create a
google_app_engine_application resource instead. Also, add some tests for
its behaviour, as well as some documentation for it.
Note that this is largely an implementation of the ideas discussed in
#2118, except we're not using CustomizeDiff to reject deletions without
our special flag set, because CustomizeDiff apparently doesn't run on
Delete. Who knew? This leaves us rejecting the deletion at apply time,
which is less than ideal, but the only other option I see is to silently
not delete the resource, and that's... not ideal, either.
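Since CustomizeDiff never runs on destroy, the check has to live in the Delete function itself. A rough sketch of that shape, with a purely hypothetical `allow_destroy` flag standing in for the special flag (the real name and mechanism may differ):

```go
package google

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

// A rough sketch of rejecting the deletion at apply time. The "allow_destroy"
// flag is hypothetical and only illustrates the shape; the point is that the
// check must happen here, because CustomizeDiff doesn't run on Delete.
func appEngineApplicationDeleteSketch(d *schema.ResourceData, meta interface{}) error {
	if !d.Get("allow_destroy").(bool) {
		return fmt.Errorf("App Engine applications cannot be destroyed; set the flag if you really want this resource removed from state")
	}
	// With the flag set, just drop the resource from state rather than
	// attempting an API deletion.
	d.SetId("")
	return nil
}
```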
This also stops the app_engine sub-block on google_project from forcing
new when it's removed, and sets it to computed, so users can safely move
from using the sub-block to using the resource without state surgery or
deleting their entire project. This does mean it's now impossible to
delete an App Engine application via the sub-block, but that was
effectively the situation before too; we just papered over it by
recreating the whole project, and people Were Not Fans of that, so I'm
considering this an acceptable casualty.
Google Apps is a former name.
Also, I mention the primary domain and domain aliases: it seems I have
to use the primary domain, because otherwise GCP replaces the value with
the primary domain, and every `terraform plan` then reports a diff for it.
I used a domain alias for the `members` argument of the
`google_project_iam_member` resource. `terraform apply` succeeded, but
when I later modified other resources, I got a diff on the
`google_project_iam_member` resource.
* Adding resource_attached_disk
This is a resource which will allow attaching an arbitrary compute disk
to a compute instance. This will enable dynamic numbers of disks to
be associated by using `count`.
* undelete-update recently soft-deleted custom roles
* remove my TODO statements
* check values on soft-delete-recreate for custom role tests
* final fixes to make sure delete works; return read() when updating to 'create'
* check for non-404 errors for custom role get
* add warnings to custom roles docs
Fixes #1494.
* Add import support for `google_logging_organization_sink`, `google_logging_folder_sink`, `google_logging_billing_account_sink`.
Using `StateFunc` over `DiffSuppressFunc` should only affect tests; for some reason `TestAccLoggingFolderSink_folderAcceptsFullFolderPath` expected a `folder` value of `folders/{{id}}` rather than `{{id}}` when only a `DiffSuppressFunc` was used, even though in real use a `DiffSuppressFunc` should be sufficient.
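For context, the `StateFunc` in question is just a normalizer along these lines; a sketch, where the choice to strip (rather than keep) the `folders/` prefix as the canonical form is an assumption:

```go
package google

import "strings"

// A sketch of the kind of StateFunc involved: normalize whatever the user
// writes ("folders/123" or "123") to one canonical form before it is stored
// in state, so the two spellings never produce a diff.
func folderStateFuncSketch(v interface{}) string {
	return strings.TrimPrefix(v.(string), "folders/")
}
```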
@michaelharo suggested this would be a good best practice for this particular resource to prevent users from accidentally deleting a bunch of encryption keys on a -/+ and losing data, and I agree.
This commit adds a quick documentation blurb saying what actually happens if those crypto key versions are destroyed and also updates the snippet to showcase a lifecycle hook.
* fix service account key data source name
* switch id to name
* update docs
* doc format
* fixes for validation and tests
* last fixes for service account key data source
Fixes #1702.
@chrisst I'm putting you as a reviewer, but no rush. Feel free to ask as many questions as you have! Also feel free to offer suggestions 😃 (or just say it's perfect as-is, that works too)
In testing an upcoming `google_compute_region_disk` resource, I had to make these changes. Checking them in separately so that when the magician runs, these changes will already be a part of TF.
Make it clear that regional backend services are only for internal load
balancing, and fix the default for protocol. It's not HTTP, as the API
docs claim, but is TCP instead.
This was done as its own resource as suggested in slack, since we don't have the option of making all fields Computed in google_compute_instance. There's precedent in the aws provider for this sort of thing (see ami_copy, ami_from_instance).
When I started working on this I assumed I could do it in the compute_instance resource and so I went ahead and reordered the schema to make it easier to work with in the future. Now it's not quite relevant, but I left it in as its own commit that can be looked at separately from the other changes.
Fixes #1582.
An instance is an abstract container of clusters; it's the cluster that
has the nodes and holds the data, so the number of nodes and the
location apply to the cluster.
Added a node config `disk_type`, which can be either `pd-standard` or
`pd-ssd`; if left blank, Google Cloud will default to `pd-standard`.
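Roughly, that field looks something like this in the schema; a sketch based on the description above, where the exact flags and placement are assumptions:

```go
package google

import (
	"github.com/hashicorp/terraform/helper/schema"
	"github.com/hashicorp/terraform/helper/validation"
)

// A sketch of the disk_type field: optional, restricted to the two allowed
// values, and left Computed so the API's own default (pd-standard) applies
// when it is unset. Exact flags are assumptions.
var diskTypeSchemaSketch = &schema.Schema{
	Type:         schema.TypeString,
	Optional:     true,
	Computed:     true,
	ValidateFunc: validation.StringInSlice([]string{"pd-standard", "pd-ssd"}, false),
}
```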
Closes: #1656
## What
As with https://github.com/terraform-providers/terraform-provider-google/pull/1282, make the `resource_container_node_pool` importer accept the `{project}/{zone}/{cluster}/{name}` format so you can specify the project the node pool actually belongs to.
## Why
Sometimes I want to import a container node pool that lives in a different project from the default SA's. Currently there is no way to specify the project the target node pool belongs to, so Terraform tries to retrieve the node pool from the SA's project and then fails with the `You cannot import non-existent resources using Terraform import.` error.
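A rough sketch of the parsing this implies; illustrative only (function name and fallback behaviour are assumptions, not the provider's actual import code):

```go
package google

import (
	"fmt"
	"strings"
)

// Accept either {project}/{zone}/{cluster}/{name} or the existing
// {zone}/{cluster}/{name} form; when the project part is absent, it would
// fall back to the provider/default configuration.
func parseNodePoolImportID(id string) (project, zone, cluster, name string, err error) {
	parts := strings.Split(id, "/")
	switch len(parts) {
	case 4:
		return parts[0], parts[1], parts[2], parts[3], nil
	case 3:
		return "", parts[0], parts[1], parts[2], nil
	default:
		return "", "", "", "", fmt.Errorf("invalid import id %q, expected {project}/{zone}/{cluster}/{name} or {zone}/{cluster}/{name}", id)
	}
}
```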
As discussed in #1326, we're not going to remove name_prefix for
compute_ssl_certificate: it makes the common use case a good amount more
ergonomic, and the only cost is that the autogenerated code is harder to
maintain; we've decided the benefits outweigh the costs in this
circumstance.
* Added a link to the console page where you can download a file
* Removed instructions on how to get to that page, since now you can just click on the link
* Added caveat for application default credentials
@sethvargo @theacodes @kimcam let me know if this seems reasonable / you have any suggestions!
If you try to specify `ip_version` together with an IP address, you will get this error:
google_compute_global_forwarding_rule.your-rule: Error creating Global Forwarding Rule: googleapi: Error 400: Invalid value for field 'resource.ipVersion': 'IPV4'. Both IP Version and IP Address cannot be specified., invalid
* Allow using in repo configuration for cloudbuild trigger
Cloud Build triggers have a complex build configuration that can be
defined via the API. When using the console, the more typical way of
doing this is to define the configuration within the repository and
point the trigger at the file that contains it.
This can be supported by sending the filename parameter instead of the
build parameter; however, only one of the two can be sent (see the
schema sketch below).
* Acceptance testing for cloudbuild trigger with filename
Ensure that when a cloudbuild repo trigger is created with a filename,
that filename is what actually ends up in the cloud.
* Don't specify "by default" in cloudbuild-trigger.
The docs shouldn't say that "cloudbuild.yaml" is used by default. There
is no default from the API, but the console suggests using this value.
Just say it's the typical value in the documentation.
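For reference, the "only one can be sent" constraint mentioned above maps naturally onto `ConflictsWith`; a sketch, with the field names and nesting assumed and the nested build schema omitted:

```go
package google

import (
	"github.com/hashicorp/terraform/helper/schema"
)

// A sketch of how "only one of filename or build can be sent" might be
// expressed in the schema; the actual field layout is an assumption.
var cloudBuildTriggerFieldsSketch = map[string]*schema.Schema{
	"filename": {
		Type:          schema.TypeString,
		Optional:      true,
		ConflictsWith: []string{"build"},
	},
	"build": {
		Type:          schema.TypeList,
		Optional:      true,
		MaxItems:      1,
		ConflictsWith: []string{"filename"},
		// Nested build config (steps, images, ...) omitted for brevity.
		Elem: &schema.Resource{Schema: map[string]*schema.Schema{}},
	},
}
```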