* Add support for Kubernetes alpha features
* Add tests for support of Kubernetes alpha features
* Fix dodgy copy and paste operations
* Add documentation
* initial work on adding IAP support for backend services
* readback of IAP
* flatten IAP + static set id
* expandIap function
* removed enabled flag/state rework
Removed the enabled flag for IAP
IAP is now enabled when the client id and secret are set
IAP now correctly disables when IAP stanza is removed
The client secret is now correctly hashed and compared against the secret hash stored on the server (sketched below)
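A minimal sketch of that enable/disable logic, assuming the iap stanza carries an OAuth2 client id and secret (the struct and field names here are illustrative, not the provider's actual identifiers):

```go
package main

import "fmt"

// iapConfig mirrors the iap stanza described above; field names are
// assumptions, not the provider's actual schema.
type iapConfig struct {
	OAuth2ClientID     string
	OAuth2ClientSecret string
}

// expandIap treats IAP as enabled exactly when both the client id and
// secret are set, so removing the iap stanza disables IAP.
func expandIap(cfg *iapConfig) bool {
	return cfg != nil && cfg.OAuth2ClientID != "" && cfg.OAuth2ClientSecret != ""
}

func main() {
	fmt.Println(expandIap(&iapConfig{OAuth2ClientID: "id", OAuth2ClientSecret: "secret"})) // true
	fmt.Println(expandIap(nil))                                                            // false
}
```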
* Tests for IAP
* added comments, fixed tabs.
* testing for IAP disabled
Rename all ID fields to {resource_noun}_id instead of removing them
outright. This means people can still get at the info.
Leave project's id deleted. It has been marked as Removed for months.
I'm fine with cleaning it up before 1.0.0.
Also, update website docs.
Right now we can't create a subscription on a topic in a different GCP
project, since the provider assumes the topic lives in the
subscription's project and always builds the full topic name string
projects/{project}/topics/{topic} from the received topic property.
Using a regexp we now validate whether the string is already in the
projects/{project}/topics/{topic} format; if so, we take it directly
instead of wrapping it again. The original functionality is maintained,
but it's now possible to specify a different project for the topic.
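A sketch of that wrapping logic; the helper name and the exact regexp are illustrative:

```go
package main

import (
	"fmt"
	"regexp"
)

// fullTopicName matches a topic that is already fully qualified as
// projects/{project}/topics/{topic}.
var fullTopicName = regexp.MustCompile(`^projects/[^/]+/topics/[^/]+$`)

// computedTopicName wraps a bare topic name in the caller's project,
// but leaves an already fully-qualified name untouched, which is what
// allows a subscription to point at a topic in another project.
func computedTopicName(project, topic string) string {
	if fullTopicName.MatchString(topic) {
		return topic
	}
	return fmt.Sprintf("projects/%s/topics/%s", project, topic)
}

func main() {
	fmt.Println(computedTopicName("my-proj", "my-topic"))
	fmt.Println(computedTopicName("my-proj", "projects/other-proj/topics/my-topic"))
}
```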
* Instantiate the cloudkms client
* Implement Create and Read for the kms key ring resource
* Expose the kms key ring resource
* Create acceptance test for creating a KeyRing, fix read to use KeyRing ID
* Add cloudkms library to vendor
* Address style comments
* Use fully-qualified keyring name in read operation
* Remove call to SetId during read operation
* Set ID as entire resource string
* Spin up a new project for acceptance test
* Use Getenv for billing and org environment variables
* Add test and logs around removal from state
* Add comments
* Fixes formatting
* Log warning instead of info
* Use a single line for cloudkms client actions
* Add resource import test
* Add ability to import resource, update helper functions to use keyRingId struct
* Use shorter terraform ID for easier import
* Update import test to use the same config as the basic test
* Update KeyRing name regex to be consistent with API docs
* Add documentation page for resource
* Add KeyRing documentation to sidebar
* Adds unit tests around parsing the KeyRing import id
* Allow the project in the id to be autopopulated from the provider config (sketched after this list)
* Throw an error on import if the provider project is not set when using the location/name id format
* Consistent variable names
* Use tabs in resource config instead of spaces
* Remove "-x" suffix for docs
* Set project attribute on import if different from the project config
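A sketch of the import-id parsing described in the list above; keyRingId is named after the struct mentioned there, but its fields and the error wording are assumptions:

```go
package main

import (
	"fmt"
	"strings"
)

// keyRingId is a sketch of the struct mentioned above; field names are
// assumptions.
type keyRingId struct {
	Project, Location, Name string
}

// parseKeyRingId accepts either {project}/{location}/{name} or
// {location}/{name}; in the short form the project must come from the
// provider config, otherwise import fails.
func parseKeyRingId(id, providerProject string) (*keyRingId, error) {
	parts := strings.Split(id, "/")
	switch len(parts) {
	case 3:
		return &keyRingId{Project: parts[0], Location: parts[1], Name: parts[2]}, nil
	case 2:
		if providerProject == "" {
			return nil, fmt.Errorf("provider project must be set when using the {location}/{name} id format")
		}
		return &keyRingId{Project: providerProject, Location: parts[0], Name: parts[1]}, nil
	default:
		return nil, fmt.Errorf("invalid KeyRing id %q", id)
	}
}

func main() {
	kr, _ := parseKeyRingId("my-proj/us-central1/my-ring", "")
	fmt.Printf("%+v\n", kr)
}
```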
* Initial support for google service account keys
* Add vendor for vault and encryption
* Add change for PR comment
* Add doc and improvements for public key management
* adding waiter for compatibility with issue google/google-api-go-client#234
* improvement
* Add test with pgp_key
* Update docs and formatting
* Remove test checking if public_key exists
* Add link on doc
* Correct PR
* Make google_service_account resource importable
* Add google_service_account testcase with default project
* Mark google_service_account.project as computed to ensure the project id is always stored in the state, whether or not it is defined in the configuration. Add corresponding test cases
* Inline variables with single usage
* Replace tabs with spaces in configuration strings
* Ensure service account is not recreated when the default project is explicitly added to the configuration
* camelcase
* disk cleanup
* fix attached disk test
* allow disk sources from name or url
* parse disk source better on read
* update docs
* fix boot disk source url
* Reorder fields in schema for style consistency
* Add reusable ZonalFieldValue
* Fix import and read state from API for compute route
* Generate network link without calling the API
This reverts commit 8ab9d96d25 and revives
the original commit that adds t.Parallel to all acceptance tests. It
turns out test failures were unrelated to this change (rather, they were
related to quota issues).
This reverts commit 42de44592f. It appears
there might be thread-safety issues, as panics have started occurring
when parallelism is ramped up. Reverting for now while investigating.
`compute_instance`'s StateVersion was set to 2. Then we released a
migration to v3, but never updated the StateVersion to 3, meaning the
migration was never run. When we added the migration for disks, we
bumped to 4, bypassing 3 altogether. In theory, this is fine, and is
expected; after all, some people may have state in version 0 and need to
upgrade all the way to 4, so our schema migration function is supposed
to support this.
Unfortunately, for migrations to v2, v3, and v4 of our schema, the
migration _returned_ after each migration, instead of falling through.
This meant that (in this case), version 2 would see it needs to be
version 4, run the state migration to version 3, then _return_, setting
its StateVersion to _4_, which means the migration from 3->4 got skipped
entirely.
This PR bumps the version to 5, and adds a migration from 4->5 such that
if there are still disks in state after 4, re-run 4. This will fix
things for people that upgraded to 1.0.0 and had their StateVersion
updated without the migration running.
I also updated the tests @danawillow wrote to start from state version 2
instead of state version 3, as the state would never be in version 3.
I also duplicated those tests, but started them from state version 4
(assuming the migration hadn't run) and verifying that the migration
from 4->5 would correct that.
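In simplified form, the return-instead-of-fall-through bug (and the fix of always falling through) looks roughly like this; the migrate* helpers are stand-ins for the real migration functions:

```go
package main

import "fmt"

func migrateState(version int, state map[string]string) map[string]string {
	switch version {
	case 2:
		state = migrateV2toV3(state)
		// BUG: returning here stamps the state as the *target* version
		// even though the 3->4 (and later) migrations never ran.
		// return state
		fallthrough
	case 3:
		state = migrateV3toV4(state)
		fallthrough
	case 4:
		state = migrateV4toV5(state) // re-runs the disk migration if needed
	}
	return state
}

func migrateV2toV3(s map[string]string) map[string]string { s["v"] = "3"; return s }
func migrateV3toV4(s map[string]string) map[string]string { s["v"] = "4"; return s }
func migrateV4toV5(s map[string]string) map[string]string { s["v"] = "5"; return s }

func main() {
	fmt.Println(migrateState(2, map[string]string{"v": "2"}))
}
```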
* Add state migration from disk to boot_disk/scratch_disk/attached_disk
* get rid of test for now
* update schema version
* add tests for migration
* fix travis errors
* actually fix travis errors
* fix logic when project is set, also remove some log statements
* add tests for reading based on encryption key and image
* use as much of the image URL as we can for matching on image
* read project from config if it wasn't set in the attribute
* update resolveImage call
+ Make the org_id optional when creating a project. Closes #131
+ Mark org_id as computed to allow for GCP automatically assigning the org.
+ Add an acceptance test for project creation without an organization.
+ Skip TestAccGoogleProject_createWithoutOrg if GOOGLE_ORG is set.
+ Add a folder_id to the google_project resource, optionally
specifying the ID of the GCP folder in which the GCP project should
live.
+ Document how one can provision a project into a folder, and added a
sample configuration to create a project into an existing folder.
* Skip test without org if service account is used
* Support folders/* or id only for the folder id field
The `predefined_acl` test for `storage_object_acl` was failing. This is
because we removed the state-setting portion of the `predefined_acl`
field from `storage_bucket_acl`, and due to what I can only assume is a
copy/paste error, `storage_object_acl` was calling the Read function of
`storage_bucket_acl` instead of its own when using `predefined_acl`.
Updating to use `storage_object_acl`'s Read function makes the tests
pass.
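The fix amounts to wiring the resource to its own Read function; a sketch against the helper/schema API of the time, with stubbed CRUD functions:

```go
package main

import "github.com/hashicorp/terraform/helper/schema"

// Stubs standing in for the real CRUD functions; only the wiring is
// the point here.
func resourceStorageObjectAclCreate(d *schema.ResourceData, meta interface{}) error { return nil }
func resourceStorageObjectAclRead(d *schema.ResourceData, meta interface{}) error   { return nil }
func resourceStorageObjectAclUpdate(d *schema.ResourceData, meta interface{}) error { return nil }
func resourceStorageObjectAclDelete(d *schema.ResourceData, meta interface{}) error { return nil }

func resourceStorageObjectAcl() *schema.Resource {
	return &schema.Resource{
		Create: resourceStorageObjectAclCreate,
		Read:   resourceStorageObjectAclRead, // was mistakenly the bucket ACL's Read
		Update: resourceStorageObjectAclUpdate,
		Delete: resourceStorageObjectAclDelete,
	}
}

func main() { _ = resourceStorageObjectAcl() }
```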
Because we were instantiating a client outside of resource.TestCase, it
was being instantiated even for unit tests, which have no credentials,
causing the unit tests to fail. Sadly, this is the only way I could
figure out how to get a client inside resource.TestCase; it's
unfortunate, but it works.
When GCS buckets are created, they're created with a set of default
ACLs:
* `OWNER:project-owners-{project_number}`
* `OWNER:project-editors-{project_number}`
* `READER:project-viewers-{project_number}`
Normally, this would be fine, or a minor inconvenience. Terraform could
either delete them itself, or the first apply of a user would overwrite
them.
However, trying to remove the `OWNER:project-owners-{project_number}`
ACL yields an API error that the bucket owner must maintain OWNER access
to the bucket. This breaks things like `terraform destroy`, but also
means any config without that line in it will fail to apply, not just
overwrite the value.
To make matters worse, trying to *add* the
`OWNER:project-owners-{project_number}` ACL to any bucket that already
has it _also_ yields the same error about not being able to remove it.
To get around this, the storage_bucket_acl resource has been updated to
largely ignore _just this_ ACL. It will not try to add it if it already
exists, and will not try to remove it at all. This does mean that
Terraform is incapable of removing this ACL from a bucket, but I'm not
sure it's possible to do that with the API anyway.
Tests were also updated to keep the default ACLs as part of the config,
and to change the email addresses to addresses we actually own. I tried
changing to non-existent hashicorp.com email addresses, but was
rejected; only email addresses that are backed by actual Google accounts
can be used, sadly.
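A sketch of the "ignore just this ACL" behaviour; the role:entity string layout is a simplification of the real storage API objects:

```go
package main

import (
	"fmt"
	"strings"
)

// isProjectOwnerAcl reports whether an ACL entry is the
// OWNER:project-owners-{project_number} entry that GCS refuses to let
// us add or remove.
func isProjectOwnerAcl(role, entity string) bool {
	return role == "OWNER" && strings.HasPrefix(entity, "project-owners-")
}

// filterDefaultOwnerAcl drops just that ACL from a planned add/remove
// set, so Terraform never tries to touch it.
func filterDefaultOwnerAcl(acls []string) []string {
	var kept []string
	for _, a := range acls {
		parts := strings.SplitN(a, ":", 2)
		if len(parts) == 2 && isProjectOwnerAcl(parts[0], parts[1]) {
			continue
		}
		kept = append(kept, a)
	}
	return kept
}

func main() {
	fmt.Println(filterDefaultOwnerAcl([]string{
		"OWNER:project-owners-123456",
		"READER:project-viewers-123456",
	}))
}
```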
* Vendor cloud logging api
* Add logging sink support
* Remove typo
* Set Filter more simply
* Rename typ, typName to resourceType, resourceId
* Handle notFoundError
* Use # instead of // for hcl comments
* Cleanup test code
* Change testAccCheckLoggingProjectSink to take a provided api object
* Fix whitespace change after merge conflict
* Fix bug with CSEK where the key stored in state might be associated with the wrong disk
* preserve original order of attached disks
* use the disk index to figure out the raw key
* Add preemptible as an option to node config
* Check for preemptible in test matching functions
* Move flattenClusterNodeConfig to node_config
* Handle bools properly when comparing in cluster and node pool tests
* Use a supported image_type in cluster tests
* Support views in Terraform BigQuery
* Add tests for Table with view, and fix existing Table test
* Remove dead code
* run gofmt
* Address comments
* Address review comments and add support for use_legacy_sql
* Force transmission/storage of UseLegacySQL
* Trying to fix tests
* add tests for useLegacySQL
Cloud DNS requires every managed zone to have an NS record at all times.
This means if people want to manage their own NS records, we need to add
their new record and remove the old one in the same call. It also means
we can't delete NS records, as we wouldn't know what to replace it with.
So deletion of NS records short-circuits. For the case of `terraform
destroy`, this prevents the error. It does mean that if the user
explicitly tries to remove the NS record from their zone, it silently
does nothing, but that's unavoidable unless we want to either A) restore
a default value (and it looks like the default values change from zone
to zone, which is arguably just as unexpected), or B) make the (arguably
more reasonable) `terraform destroy` case impossible.
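A sketch of the delete short-circuit; the function shape and log wording are illustrative, not the provider's exact code:

```go
package main

import "fmt"

// deleteRecordSet silently skips NS records, since Cloud DNS requires
// a managed zone to always have one; everything else would go through
// the real deletion path.
func deleteRecordSet(recordType, name string) error {
	if recordType == "NS" {
		fmt.Printf("[WARN] skipping deletion of NS record %s\n", name)
		return nil
	}
	// ... otherwise issue the real Change with a Deletions entry ...
	return nil
}

func main() {
	_ = deleteRecordSet("NS", "example.com.")
}
```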
Storage bucket ACLs inherited the behaviour of only updating the fields
that were set in the config file. Terraform should track all the fields
in the resource, whether the user has specified a value for them or not,
and correct any drift that may occur.
This manifested as unexpected behaviour in issue #50, and this PR
restores the expected behaviour.
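A sketch of the principle, with a plain map standing in for Terraform state:

```go
package main

import "fmt"

// readBucketAcl always writes back what the API reports for every
// tracked field, regardless of what the user configured, so a later
// plan can detect and correct drift.
func readBucketAcl(apiRoleEntities []string, state map[string]interface{}) {
	state["role_entity"] = apiRoleEntities
}

func main() {
	state := map[string]interface{}{"role_entity": []string{"stale"}}
	readBucketAcl([]string{"OWNER:user-foo@example.com"}, state)
	fmt.Println(state)
}
```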
* Vendor runtimeconfig
* Add support for RuntimeConfig config and variable resources
This allows users to create/manage Google RuntimeConfig resources and
variables. More information here:
https://cloud.google.com/deployment-manager/runtime-configurator/
Closes #236
* Remove typo
* Use top-level declaration rather than init()
* Cleanup testing-related code by using ConflictsWith
Also adds better comments around how update works
* govendor fetch cloud.google.com/go/bigtable
* Vendor the rest of the stuff.
* Add support for instance_type to google_bigtable_instance.
* Revendored some packages.
* Removed bad packages from vendor.json
Import tests for compute_instance_template fail without this change as
they expect a value of true for automatic_restart. As this value was
removed, we're no longer setting it (and therefore it looks like it has
a value of false, which is different from the default).
* Fix bug where scheduling.automatic_restart false is never used
* Remove deprecated automatic_restart value in favor of scheduling.automatic_restart
* Remove deprecated on_host_maintenance
* Correct bad var name
* Re-add removed schema values and mark them as Removed
* Fix var to snake case
* Migrate empty scheduling blocks in compute_instance_template (see the sketch after this list)
* Shorten error message
* Use only one return value instead of two
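A sketch of the empty-scheduling-block migration referenced above; the flatmap-style keys and the default shown are assumptions, not the provider's exact values:

```go
package main

import "fmt"

// migrateEmptySchedulingBlock backfills a scheduling block into state
// when none exists, so the deprecated top-level automatic_restart can
// be dropped safely.
func migrateEmptySchedulingBlock(state map[string]string) map[string]string {
	if state["scheduling.#"] == "" || state["scheduling.#"] == "0" {
		state["scheduling.#"] = "1"
		state["scheduling.0.automatic_restart"] = "true" // assumed API default
	}
	return state
}

func main() {
	fmt.Println(migrateEmptySchedulingBlock(map[string]string{}))
}
```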
* Mark google_sql_database.{charset,collation} as computed instead of having defaults.
This change is required to avoid the following scenario:
When upgrading from a previous version of the Google provider, TF will change
the charset/collation of existing (TF-managed) databases to utf8/utf8_general_ci
(if the user hasn't added different config values before running TF apply),
potentially overriding any non-default settings that the user may have
applied through the Cloud SQL admin API. This violates POLA (see the
sketch after this list).
* Remove charset/collation defaults from the documentation, too.
* Add links to MySQL's and PostgreSQL's documentation about supported charset and collation values.
* Use version 5.7's docs instead of 5.6, since that's the most up to date version of MySQL that we support.
* Add a note that only UTF8 / en_US.UTF8 are currently supported for Cloud SQL PostgreSQL databases.
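A sketch of the schema change itself: Computed instead of a hard default, so Terraform adopts whatever the server chose (the function name is hypothetical):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

// databaseCharsetSchema shows the shape of the change: charset and
// collation become Optional+Computed rather than carrying defaults.
func databaseCharsetSchema() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"charset": {
			Type:     schema.TypeString,
			Optional: true,
			Computed: true, // was Default: "utf8"
		},
		"collation": {
			Type:     schema.TypeString,
			Optional: true,
			Computed: true, // was Default: "utf8_general_ci"
		},
	}
}

func main() {
	fmt.Println(len(databaseCharsetSchema()))
}
```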
* Add versioned Beta support to google_compute_firewall.
* Add Beta support for deny to google_compute_firewall.
* Remove extra line
* make fmt
* Add missing ForceNew fields.
* Respond to review comments testing functionality + reducing network GET to v1
* Make google_compute_global_address a versioned resource with Beta support.
* Added Beta support for ip_version in google_compute_global_address.
* Move checks to TestCheckFuncs, add a regression test for IPV4 on v1 resources.
* Consolidated TestCheckFuncs to a single function.
* Add missing return statement.
* Fix IPV4 test
* Clarified comment.
Prior to this change it was possible for Terraform to error during plan / apply with the following:
Error 404: The resource "node pool \"foo\" not found"
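A sketch of the usual provider pattern for turning that 404 into removal from state instead of a failed plan/apply; removeFromState stands in for d.SetId(""):

```go
package main

import (
	"fmt"

	"google.golang.org/api/googleapi"
)

// handleNotFound drops the resource from state on a 404 and swallows
// the error; anything else is passed through.
func handleNotFound(err error, removeFromState func()) error {
	if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 {
		removeFromState()
		return nil
	}
	return err
}

func main() {
	err := &googleapi.Error{Code: 404}
	fmt.Println(handleNotFound(err, func() { fmt.Println("removed from state") }))
}
```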
* Add versioned Beta support to google_compute_global_forwarding_rule.
* Add Beta support for ip_version in google_compute_global_forwarding_rule.
* Temporary commit with compute_shared_operation.go changes.
* Added a test to see if v1 GFR is still IPV4, moved to a TestCheckFunc
* This API returns the original self links, but let's make sure we don't diff.
* Add support for auto_healing_policies to google_compute_instance_group_manager.
* Add a test for self link stability when a v1 resource uses a versioned resource.
* Add a comment about what the stable self link test does.
* make fmt
* Fixed formatting on new tests.
* Address review comments.
* Fix make vet
* Fix disk type ‘Malformed URL’ error
The API expects the disk type to be a SelfLink URL, but the disk type
name was being used (e.g. “pd-ssd”).
* Add ACC Tests for boot disk type
* Fix acceptance test & fmt test config
The Instance data does not contain the actual disk type, just "PERSISTENT". This commit uses the computeClient to pull the disk data from the API, allowing checking of the disk type.
Also fmt'd the test configuration.
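A sketch of the self-link construction implied by the fix; the URL layout follows the GCE v1 API:

```go
package main

import "fmt"

// diskTypeSelfLink builds the full disk-type self link the API
// expects, rather than passing a bare name like "pd-ssd".
func diskTypeSelfLink(project, zone, diskType string) string {
	return fmt.Sprintf(
		"https://www.googleapis.com/compute/v1/projects/%s/zones/%s/diskTypes/%s",
		project, zone, diskType)
}

func main() {
	fmt.Println(diskTypeSelfLink("my-proj", "us-central1-a", "pd-ssd"))
}
```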
* Add node config support for GKE node pool
* Review fixes:
- Set max items in node config schema
- Fill missing node config fields
- Put test helpers above test vars
* Update checks in node pool tests
* Fix node pool check match
We don't need to set the ID to "" in read-modify-write helpers, because
once they're done, we read anyway to update state based on the changes.
And that read checks if the binding/member still exists, and does the
SetId("") if it doesn't.
This way, we stick with state only getting set based on the API state,
not by what we think the state will be.
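A sketch of that division of labour, with plain maps and callbacks standing in for the real policy objects and d.SetId:

```go
package main

import "fmt"

// addBinding only mutates the policy; it never clears the ID itself.
func addBinding(policy map[string][]string, role, member string) {
	policy[role] = append(policy[role], member)
	// No SetId("") here -- the follow-up Read decides that.
}

// readBinding clears the ID only when the API says the binding is gone,
// so state reflects API state rather than our prediction of it.
func readBinding(policy map[string][]string, role string, setId func(string)) {
	if len(policy[role]) == 0 {
		setId("") // binding no longer exists; remove it from state
	}
}

func main() {
	p := map[string][]string{}
	addBinding(p, "roles/viewer", "user:foo@example.com")
	readBinding(p, "roles/editor", func(id string) { fmt.Println("cleared id") })
}
```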
Tests need to have unique names. Whoooops.
Also, the Elem property accepts an interface I guess, which means we
actually need the struct type repetition there.