* Vendor cloud logging api
* Add logging sink support
* Fix typo
* Simplify how the filter is set
* Rename typ, typName to resourceType, resourceId
* Handle notFoundError
* Use # instead of // for hcl comments
* Cleanup test code
* Change testAccCheckLoggingProjectSink to take a provided api object
* Fix whitespace change after merge conflict
* Fix bug with CSEK where the key stored in state might be associated with the wrong disk
* preserve original order of attached disks
* use the disk index to figure out the raw key
* Add preemptible as an option to node config
* Check for preemptible in test matching functions
* Move flattenClusterNodeConfig to node_config
* Handle bools properly when comparing in cluster and node pool tests
* Use a supported image_type in cluster tests
* Support BigQuery views in Terraform
* Add tests for Table with view, and fix existing Table test
* Remove dead code
* run gofmt
* Address comments
* Address review comments and add support for use_legacy_sql
* Force transmission/storage of UseLegacySQL
* Trying to fix tests
* add tests for useLegacySQL
Cloud DNS requires every managed zone to have an NS record at all times.
This means if people want to manage their own NS records, we need to add
their new record and remove the old one in the same call. It also means
we can't delete NS records, as we wouldn't know what to replace it with.
So deletion of NS records short-circuits. For the case of `terraform
destroy`, this prevents the error. It does mean that if the user
explicitly tries to remove the NS records from their zone, the removal
silently does nothing, but that's unavoidable unless we want to A)
restore a default value (and it looks like the default values change
from zone to zone, which is arguably just as unexpected) or B) make the
(arguably more reasonable) `terraform destroy` case impossible.
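The short-circuit described above can be sketched as follows. This is a minimal illustration, not the provider's actual code; the `deleteRecordSet` helper and its signature are hypothetical:

```go
package main

import "fmt"

// deleteRecordSet is a hypothetical sketch: deleting an NS record set
// short-circuits, because Cloud DNS requires every managed zone to carry
// an NS record set at all times and we'd have nothing to replace it with.
func deleteRecordSet(recordType string) (skipped bool, err error) {
	if recordType == "NS" {
		// Short-circuit: silently do nothing so `terraform destroy` succeeds.
		return true, nil
	}
	// ... issue the real deletion change against the Cloud DNS API here ...
	return false, nil
}

func main() {
	skipped, _ := deleteRecordSet("NS")
	fmt.Println(skipped) // NS deletions are skipped
}
```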
Storage bucket ACLs inherited the behaviour of only updating the fields
that were set in the config file. Terraform should track all the fields
in the resource, whether the user has specified a value for them or not,
and correct any drift that may occur.
This manifested as unexpected behaviour reported in #50, and this PR
restores the expected behaviour.
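The fix amounts to setting every tracked field from the API response during Read, whether or not the user configured it, so drift can be detected and corrected. A minimal sketch; the `acl` struct and `readACL` helper here are hypothetical stand-ins for the real resource code:

```go
package main

import "fmt"

// acl is a simplified stand-in for the API's ACL representation.
type acl struct{ roleEntities []string }

// readACL sets state unconditionally from the API response, even for
// fields the user never specified in their config, so any server-side
// drift shows up in the next plan.
func readACL(api acl, state map[string]interface{}) {
	state["role_entity"] = api.roleEntities
}

func main() {
	state := map[string]interface{}{}
	readACL(acl{roleEntities: []string{"OWNER:user-foo"}}, state)
	fmt.Println(state["role_entity"])
}
```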
* Vendor runtimeconfig
* Add support for RuntimeConfig config and variable resources
This allows users to create/manage Google RuntimeConfig resources and
variables. More information here:
https://cloud.google.com/deployment-manager/runtime-configurator/
Closes #236
* Fix typo
* Use top-level declaration rather than init()
* Cleanup testing-related code by using ConflictsWith
Also adds better comments around how update works
* govendor fetch cloud.google.com/go/bigtable
* Vendor the rest of the stuff.
* Add support for instance_type to google_bigtable_instance.
* Revendored some packages.
* Removed bad packages from vendor.json
Import tests for compute_instance_template fail without this change as
they expect a value of true for automatic_restart. As this value was
removed, we're no longer setting it (and therefore it looks like it has
a value of false, which is different from the default).
* Fix bug where scheduling.automatic_restart false is never used
* Remove deprecated automatic_restart value in favor of scheduling.automatic_restart
* Remove deprecated on_host_maintenance
* Correct bad var name
* Re-add removed schema values and marked as Removed
* Fix var to snake case
* Migrate empty scheduling blocks in compute_instance_template
* Shorten error message
* Use only one return value instead of two
* Mark google_sql_database.{charset,collation} as computed instead of having defaults.
This change is required to avoid the following scenario:
When upgrading from a previous version of the Google provider, TF will change
the charset/collation of existing (TF-managed) databases to utf8/utf8_general_ci
(if the user hasn't added different config values before running TF apply),
potentially overriding any non-default settings that the user may have applied
through the Cloud SQL admin API. This violates POLA (the principle of least
astonishment).
* Remove charset/collation defaults from the documentation, too.
* Add links to MySQL's and PostgreSQL's documentation about supported charset and collation values.
* Use version 5.7's docs instead of 5.6, since that's the most up to date version of MySQL that we support.
* Add a note that only UTF8 / en_US.UTF8 are currently supported for Cloud SQL PostgreSQL databases.
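The difference between `Computed: true` and a hard `Default` can be illustrated with a small sketch. The `schemaField` type and `resolve` function below are simplified stand-ins for the real schema machinery, not the Terraform SDK's actual API:

```go
package main

import "fmt"

// schemaField is a toy model of an optional schema attribute.
type schemaField struct {
	Computed bool
	Default  interface{}
}

// resolve mimics how a value is chosen for an optional field: a
// configured value always wins; otherwise a Computed field keeps
// whatever the server reports, while a Default would clobber it.
func resolve(f schemaField, configured, apiValue string) string {
	if configured != "" {
		return configured
	}
	if f.Computed {
		return apiValue // preserve server-side settings
	}
	if f.Default != nil {
		return f.Default.(string) // would override server-side settings
	}
	return ""
}

func main() {
	charset := schemaField{Computed: true}
	// An existing database with a non-default charset keeps it:
	fmt.Println(resolve(charset, "", "latin1"))
}
```

With a `Default` of `utf8` instead, the same unset field would plan a change from `latin1` to `utf8`, which is exactly the surprise this commit avoids.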
* Add versioned Beta support to google_compute_firewall.
* Add Beta support for deny to google_compute_firewall.
* Remove extra line
* make fmt
* Add missing ForceNew fields.
* Respond to review comments testing functionality + reducing network GET to v1
* Make google_compute_global_address a versioned resource with Beta support.
* Added Beta support for ip_version in google_compute_global_address.
* Move checks to TestCheckFuncs, add a regression test for IPV4 on v1 resources.
* Consolidated TestCheckFuncs to a single function.
* Add missing return statement.
* Fix IPV4 test
* Clarified comment.
Prior to this change it was possible for Terraform to error during plan / apply with the following:
Error 404: The resource "node pool \"foo\" not found"
* Add versioned Beta support to google_compute_global_forwarding_rule.
* Add Beta support for ip_version in google_compute_global_forwarding_rule.
* Temporary commit with compute_shared_operation.go changes.
* Added a test to see if v1 GFR is still IPV4, moved to a TestCheckFunc
* This API returns the original self links, but let's make sure we don't diff.
* Add support for auto_healing_policies to google_compute_instance_group_manager.
* Add a test for self link stability when a v1 resource uses a versioned resource.
* Add a comment about what the stable self link test does.
* make fmt
* Fixed formatting on new tests.
* Address review comments.
* Fix make vet
* Fix disk type 'Malformed URL' error
The API expects the disk type to be a SelfLink URL, but the disk type
name was being used (e.g. “pd-ssd”).
* Add ACC Tests for boot disk type
* Fix acceptance test & fmt test config
The Instance data does not contain the actual disk type, just "PERSISTENT". This commit uses the computeClient to pull the disk data from the API, allowing checking of the disk type.
Also fmt'd the test configuration.
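The 'Malformed URL' fix above boils down to expanding a bare disk type name like `pd-ssd` into a full self link before sending it to the API. A sketch of that expansion; the project and zone values are placeholders:

```go
package main

import "fmt"

// diskTypeSelfLink builds the self-link URL the Compute API expects,
// instead of passing the bare type name (e.g. "pd-ssd") directly.
func diskTypeSelfLink(project, zone, diskType string) string {
	return fmt.Sprintf(
		"https://www.googleapis.com/compute/v1/projects/%s/zones/%s/diskTypes/%s",
		project, zone, diskType)
}

func main() {
	fmt.Println(diskTypeSelfLink("my-project", "us-central1-a", "pd-ssd"))
}
```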
* Add support node config for GKE node pool
* Review fixes:
- Set max items in node config schema
- Fill missing node config fields
- Put test helpers above the test vars
* Update checks in node pool tests
* Fix node pool check match
We don't need to set the ID to "" in read-modify-write helpers, because
once they're done, we read anyways to update state based on the changes.
And that read checks if the binding/member still exists, and does the
SetId("") if it doesn't.
This way, we stick with state only getting set based on the API state,
not by what we think the state will be.
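The pattern can be sketched like this: only the follow-up read, driven by actual API state, ever clears the ID. The `resource` type below is a simplified stand-in, not the provider's real schema.ResourceData:

```go
package main

import "fmt"

// resource is a toy stand-in for Terraform's resource state.
type resource struct{ id string }

// read clears the ID only when the binding is truly gone server-side,
// mirroring the SetId("") call in the real Read function.
func (r *resource) read(policy map[string][]string, role string) {
	if _, ok := policy[role]; !ok {
		r.id = "" // state follows the API, not our expectations
	}
}

func main() {
	policy := map[string][]string{"roles/viewer": {"user:a@example.com"}}
	r := &resource{id: "roles/viewer"}
	r.read(policy, "roles/viewer")
	fmt.Println(r.id) // still set: the binding exists server-side
}
```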
Tests need to have unique names. Whoooops.
Also, the Elem property accepts an interface{}, which means we actually
need to repeat the struct type there.
* Vendor GCP Compute Beta client library.
* Refactor resource_compute_instance_group_manager for multi version support (#129)
* Refactor resource_compute_instance_group_manager for multi version support.
* Minor changes based on review.
* Removed type-specific API version conversion functions.
* Add support for Beta operations.
* Add v0beta support to google_compute_instance_group_manager.
* Renamed Key to Feature, added comments & updated some parameter names.
* Fix code and tests for version finder to match fields that don't have a change.
* Store non-v1 resources' self links as v1 so that dependent single-version resources don't see diffs.
* Fix weird change to vendor.json from merge.
* Add a note that Convert loses ForceSendFields, fix failing test.
* Moved nil type to a switch case in compute_shared_operation.go.
* Move base api version declaration above schema.
* Refactor compute_operation.go to duplicate less code.
* Determine what scope type an Operation is from its Operation object.
* Inlined operation type switch statement into if/else methods.
Use the new projectIamPolicyReadModifyWrite helper to manage the RMW
loop for our policy member resource.
Handle the case of having a binding server-side that doesn't have the
member we expect more elegantly.
We were repeating that logic a lot, so this helper just reads a policy,
calls the passed modify function on the policy, then writes the policy
back and takes care of the optimistic concurrency logic for the caller.
So now all the caller has to do is the unique part, which is the modify
function.
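A minimal sketch of that read-modify-write helper; the `policy` type and function signatures are simplified, and the real helper also retries on 409 conflicts using the policy's etag for optimistic concurrency:

```go
package main

import "fmt"

// policy is a toy model of a project IAM policy.
type policy struct {
	etag     string
	bindings map[string][]string
}

// readModifyWrite reads the policy, applies the caller's modify function,
// and writes it back. The caller supplies only the unique part: modify.
func readModifyWrite(read func() *policy, modify func(*policy), write func(*policy) error) error {
	p := read()
	modify(p)
	return write(p) // a 409 here would trigger a re-read and retry
}

func main() {
	store := &policy{etag: "abc", bindings: map[string][]string{}}
	err := readModifyWrite(
		func() *policy { return store },
		func(p *policy) { p.bindings["roles/editor"] = []string{"user:b@example.com"} },
		func(p *policy) error { return nil },
	)
	fmt.Println(err == nil, len(store.bindings))
}
```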
* Revert "Add additional fingerprint error to check for when updating metadata (#221)"
This reverts commit 4c8f62edf6.
* Revert "Fix bug where range variable is improperly dereferenced (#217)"
This reverts commit 8f75c1c9a5.
* Revert "Add support for google_compute_project_metadata_item (#176)"
This reverts commit 236c0f5d24.
* Add support for google_compute_project_metadata_item
This allows terraform users to manage single key/value items within the
project metadata map, rather than the entire map itself.
* Update CHANGELOG.md
* Add details about import
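Conceptually, the metadata-item resource touches a single key in the project-wide metadata map and leaves every other key alone. A toy sketch with plain maps, not the provider's real API calls:

```go
package main

import "fmt"

// setMetadataItem updates one key, leaving all other entries untouched,
// unlike the whole-map google_compute_project_metadata resource.
func setMetadataItem(metadata map[string]string, key, value string) {
	metadata[key] = value
}

// deleteMetadataItem removes only the managed key.
func deleteMetadataItem(metadata map[string]string, key string) {
	delete(metadata, key)
}

func main() {
	md := map[string]string{"ssh-keys": "...", "foo": "old"}
	setMetadataItem(md, "foo", "new")
	fmt.Println(md["foo"], len(md))
}
```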
* Add charset and collation to google_sql_database.
* Add documentation for charset, collation attributes.
* Extend the existing acceptance test to also cover charset and collation.
* Charset and collation always have a value present. Also inline.
* Move charset and collation to optional arguments.
* Add charset and collection to the example.
* Document charset and collation defaults.
* Keep TestAccGoogleSqlDatabase_basic as is, add TestAccGoogleSqlDatabase_update.
* Add import support for google_compute_image.
* Added comment explaining why we set the create_timeout in the import state method
* Don't ForceNew for create_timeout field
* Update image name in import documentation
* update compute instance docs to use new boot and scratch disk attributes, document attached_disk
* Update compute instance tests to mostly use new boot and scratch disk attributes
* Fix encryption test by setting values in state from what was there before
* Allow unlinking of billing account. Closes #133
* Add acceptance test for unlinking the billing account.
* Just apply the resource definition without the billing account instead of setting an empty billing account.
* Used standard validation functions where possible, added a GCP name validation function.
* Add tests for GCP name, factor out a ValidateRegexp function.
* make fmt
This change adds another option for supplying authentication credentials
to acceptance tests: If GOOGLE_USE_DEFAULT_CREDENTIALS is set, the default
credentials are used. When run from a compute engine instance, the compute
engine default credentials are used. When run from the user's workstation,
the user's credentials are used, if the user has authenticated with the
GCloud CLI beforehand.
Adds the google_project_iam_member resource, which just ensures that a
single member has a single role.
google_project_iam_member should not be used to grant permissions to a
role controlled by google_project_iam_binding or to a policy controlled
by google_project_iam_policy, as they'll fight for control.
Changing the role is ForceNew, because the role is part of the ID.
Make reads go through to the Binding functions, not the Policy
functions. That's embarrassing.
Add a resource that manages just a single binding within a Google
project's IAM Policy.
Note that this resource should not be used when
google_project_iam_policy is used, or they will fight over which is
correct.
This also required wrapping the error returned from setProjectIamPolicy,
as we need to test to see if it's a 409 error and retry, which can't be
done if we just use fmt.Errorf.
* Add scratch_disk property to google_compute_instance
* docs for scratch_disk
* limit scope of scratchDisks array by using bool, test formatting
* add slash back to disk check
* Add boot_disk property to google_compute_instance
* docs for boot_disk
* limit scope of bootDisk, use bool instead
* test formatting
* make device_name forcenew, add sha256 encryption key
* compute_disk: update image in test
* disk_image: add default type, make size computed
* compute_disk: wait on disk size operations to complete before moving on
* update docs on the image
* Add compute_backend_service import support
* Fixed the nit
* Made example names a bit more intuitive
* Use underscores wherever possible instead of dashes