As part of a test for the organization policy functionality, we apply a
set of constraints to our organization. That test can get stuck due to
quota issues, or it can run in parallel with other tests. The policy
currently limits the projects images can be used from to the project
running the test, but many of our tests use images from the
debian-cloud project. This updates the policy to also allow debian-cloud
images, so that even if the policy doesn't get cleaned up properly or
the test runs in parallel with other tests, those tests still stay
within the policy.
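For illustration, the relaxed policy might look roughly like this (a sketch only; the org ID, project ID, and the assumption that the test uses the trusted-image-projects constraint are all placeholders):

```hcl
# Hypothetical sketch of the relaxed policy; IDs are placeholders.
resource "google_organization_policy" "trusted_images" {
  org_id     = "123456789"
  constraint = "constraints/compute.trustedImageProjects"

  list_policy {
    allow {
      # Allow images from the project running the test and from
      # debian-cloud, which many other tests rely on.
      values = [
        "projects/my-test-project",
        "projects/debian-cloud",
      ]
    }
  }
}
```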
The real fix for this is to set up a separate org for testing, so we're
not modifying the test environment out from under running tests, but
that'll take a bit more time, so this is the stopgap until that can happen.
Managed zone tests are failing because we're attempting to use the naked
domain as the managed zone when it's already managed by GCP. Making a
subdomain the managed zone instead avoids this problem.
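A rough sketch of the adjusted test config (the zone name and domain are placeholders):

```hcl
resource "google_dns_managed_zone" "test" {
  name        = "tf-test-zone"
  # Use a subdomain rather than the naked domain, which GCP already manages.
  dns_name    = "tf-test.example.com."
  description = "Managed zone for acceptance tests"
}
```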
* move setid calls back
* add support for pod security policy
* pod security policy docs
* Revert "move setid calls back"
This reverts commit 0c7b2dbf92aff33dac8c5beb95568c2bc86dd7de.
* cleanup
* remove comments about disabling update
* add extra wait for storage bucket object deletion
* make timeout for object deletion 5 minutes, make it succeed 3 times
* delete the cluster before deleting the bucket
* deprecate delete_autogen_bucket
* improve deprecation message
Exposes the existing `google_compute_backend_service` resource as a data source.
This addresses #149.
This allows, for instance, collecting a backend service's self_link and
using it from another workspace/tfstate, sharing most of the
load balancer definition.
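For example (a sketch; the backend service name is a placeholder):

```hcl
# Look up an existing backend service created elsewhere.
data "google_compute_backend_service" "shared" {
  name = "prod-backend-service"
}

# Reuse its self_link, e.g. when wiring up a URL map in another workspace.
output "backend_self_link" {
  value = "${data.google_compute_backend_service.shared.self_link}"
}
```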
* add import helpers for generated code
* Updates to backend bucket and transport.go from MM
* add generated http(s)_health_check resources
* name is required; transport import style
* update docs with new fields/timeouts
* fixes
* Support `distributionPolicy` when creating regional instance group managers.
* Better match the API structure of distributionPolicy.
* Switch to "distribution_policy_zones".
This approach lets us simply accept a list of zones to use, while
providing a deprecation path for implementing the distribution policy
field more holistically later, avoiding backwards-incompatible changes
(see the sketch after this list).
* fix typo
* use slice instead of Set for flattenDP
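As referenced above, a sketch of the new field (names, zones, and the minimal template are placeholders):

```hcl
resource "google_compute_instance_template" "app" {
  name_prefix  = "app-"
  machine_type = "n1-standard-1"

  disk {
    source_image = "debian-cloud/debian-9"
  }

  network_interface {
    network = "default"
  }
}

resource "google_compute_region_instance_group_manager" "app" {
  name               = "app-rigm"
  region             = "us-central1"
  base_instance_name = "app"
  instance_template  = "${google_compute_instance_template.app.self_link}"
  target_size        = 3

  # Pin the manager to an explicit list of zones instead of letting the
  # API choose them.
  distribution_policy_zones = ["us-central1-a", "us-central1-b", "us-central1-f"]
}
```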
* vendor container/v1beta1
* revendor container/v1beta1
* add beta scaffolding for gke resources
* fix json unmarshal error
* fix issues with trying to convert interface instead of struct
* same fixes but for node pool
* move setid calls back
* Expose first ip address on sql db instance (see the sketch after this list).
Signed-off-by: Desmond Pompa Alarcon Rawls <captaingrover@gmail.com>
* Use the ip_address key on the first map in ip_address list.
Signed-off-by: Genevieve LEsperance <glesperance@pivotal.io>
* Only run the first_ip_address test check if there is an ip address.
Signed-off-by: Desmond Pompa Alarcon Rawls <captaingrover@gmail.com>
* Add first_ip_address to sql db instance scheme.
Signed-off-by: Genevieve LEsperance <glesperance@pivotal.io>
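As referenced above, a sketch of how the new attribute can be consumed (instance name, version, and tier are placeholders):

```hcl
resource "google_sql_database_instance" "master" {
  name             = "master-instance"
  database_version = "MYSQL_5_7"

  settings {
    tier = "db-f1-micro"
  }
}

# The first entry in the instance's ip_address list, exposed directly
# for convenience.
output "sql_first_ip" {
  value = "${google_sql_database_instance.master.first_ip_address}"
}
```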
The GCP backend apparently lowercases the values, no matter what you
enter, so we consider uppercase and lowercase values to be equivalent.
This fixes #862.
* add json omitted fields back when converting
* for testing: don't use json in convert
* try a combination of structs and mapstructure libraries
* Revert "try a combination of structs and mapstructure libraries"
This reverts commit eab11aa95d3abb74b240988e5c99d6e9525db96c.
* Revert "for testing: don't use json in convert"
This reverts commit 96af067b29dd147fcedb55995ebc8a17c6a9d1b2.
* Add Set method to TerraformResourceData and ResourceDataMock
* Add Id() and SetId() to ResourceDataMock and TerraformResourceData
* Keep only the name when reading a region or zone field, to handle APIs that return a self_link
* Remove bad test in testAccContainerCluster_withIPAllocationPolicy
One step expected the test to fail if the subnetwork defines
secondary ip ranges that the cluster doesn't use. However, that is
perfectly fine and no error is expected (see the sketch after this list).
* Revert "Remove bad test in testAccContainerCluster_withIPAllocationPolicy"
This reverts commit af2f369907181a107cfc0ed9fa2ff0e288f02f66.
* Fail if use_ip_aliases is true and no range names are provided
* make fmt
* don't introduce a new field for now; wait until we want to support new features in the allocation policy
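As referenced above, roughly the shape being exercised (a sketch; names and CIDR ranges are placeholders):

```hcl
resource "google_compute_network" "net" {
  name                    = "gke-network"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "gke" {
  name          = "gke-subnet"
  network       = "${google_compute_network.net.self_link}"
  region        = "us-central1"
  ip_cidr_range = "10.0.0.0/16"

  secondary_ip_range {
    range_name    = "pods"
    ip_cidr_range = "10.4.0.0/14"
  }

  secondary_ip_range {
    range_name    = "services"
    ip_cidr_range = "10.8.0.0/20"
  }
}

resource "google_container_cluster" "primary" {
  name               = "ip-alias-cluster"
  zone               = "us-central1-a"
  initial_node_count = 1
  network            = "${google_compute_network.net.name}"
  subnetwork         = "${google_compute_subnetwork.gke.name}"

  # With IP aliases in use, both secondary range names must be set.
  ip_allocation_policy {
    cluster_secondary_range_name  = "pods"
    services_secondary_range_name = "services"
  }
}
```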
* Storage Default Object ACL resource
* Fixed the doc
* Renamed the resource id. Log change
* Complying with go vet
* Changes for review
* link to default object acl docs in sidebar
* Support for GCS notifications
* docs for storage notification
* docs for storage notification
* Clarified the doc
* Doc modifications
* Addressing requested changes from review
* Addressing requested changes from review
* Using ImportStatePassthrough
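A sketch of what a GCS notification might look like (names are placeholders; granting the GCS service account permission to publish to the topic is elided):

```hcl
resource "google_pubsub_topic" "bucket_events" {
  name = "bucket-events"
}

resource "google_storage_notification" "events" {
  # Placeholder name of an existing bucket.
  bucket         = "my-notification-bucket"
  topic          = "${google_pubsub_topic.bucket_events.id}"
  payload_format = "JSON_API_V1"

  # Only publish for newly created or deleted objects.
  event_types = ["OBJECT_FINALIZE", "OBJECT_DELETE"]
}
```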
* Storage Default Object ACL resource
* Fixed the doc
* Renamed the resource id. Log change
* Complying with go vet
* Changes for review
* link to default object acl docs in sidebar
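A minimal sketch of the default object ACL resource (bucket name and entities are placeholders):

```hcl
resource "google_storage_bucket" "image_store" {
  name = "image-store-bucket"
}

resource "google_storage_default_object_acl" "image_store" {
  bucket = "${google_storage_bucket.image_store.name}"

  # Each entry is "ROLE:entity"; newly created objects inherit these.
  role_entity = [
    "OWNER:user-admin@example.com",
    "READER:allUsers",
  ]
}
```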
* Import google_compute_shared_vpc_host_project/google_compute_shared_vpc_service_project resources.
* Incorporate testing of resource import into main acceptance tests.
* Add update support for compute instance fields that require the machine to be stopped
* add warnings in docs about stopping the instance before updating
* add allow_stopping_for_update field
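For instance (a sketch; the machine type change stands in for any update that now requires a stop/start cycle, and the names are placeholders):

```hcl
resource "google_compute_instance" "worker" {
  name         = "worker-1"
  zone         = "us-central1-a"
  machine_type = "n1-standard-2" # changing this requires stopping the instance

  # Explicitly allow Terraform to stop and restart the instance to
  # apply such updates.
  allow_stopping_for_update = true

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  network_interface {
    network = "default"
  }
}
```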
* Update sqladmin api
Pull in updates to the generated sqladmin api and update callers for
the change in the StorageAutoResize setting
* Add support for availability_type setting
Allow specifying ZONAL or REGIONAL, enabling a PostgreSQL HA
setup (see the sketch after this list).
* vendor: update sqladmin/v1beta4
* Test setting AvailabilityType for PostgreSQL
Add tests that cover the creation of a Postgres database with
AvailabilityType set to REGIONAL, and correct some small issues that
were preventing compilation.
* Fix breaking change w/ disk_autoresize in cloudsql
95e5582766
The cloudsql admin client changed the way it handles StorageAutoResize
as a parameter, in order to be more explicit about when the server has
omitted the field. This changed the type from bool to *bool, and
we need to modify provider code so that we supply the right value to the
api client.
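As referenced above, a sketch of the new availability_type setting for a PostgreSQL instance (names and tier are placeholders):

```hcl
resource "google_sql_database_instance" "postgres" {
  name             = "postgres-ha-instance"
  region           = "us-central1"
  database_version = "POSTGRES_9_6"

  settings {
    tier = "db-custom-1-3840"

    # REGIONAL gives a highly available, multi-zone configuration;
    # ZONAL is the default single-zone setup.
    availability_type = "REGIONAL"
  }
}
```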
* skip guest accelerators if count is 0.
Instances in instance groups on Google will fail to provision when a
guest accelerator block is sent, despite it requesting 0 GPUs. This came
up for me when trying to provision a similar instance group in all
available regions, asking for GPUs only in those that support them by
parameterizing the `count` and setting it to 0 elsewhere (see the
sketch after this list).
This might be a violation of some Terraform principles. For example,
testing locally with this change, `terraform` did not recognize that my
infra needed to be re-deployed (from its point of view, I assume, because
the inputs hadn't changed). Additionally, there may be valid reasons for
creating an instance template with 0 GPUs that can later be tuned upwards.
* Add guest accelerator skip test for instances.
* do not leave empty pointers to guest accelerators.
* attempt to clear guest accelerator diff
* conditionally customize diff for guest accels
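As referenced above, the kind of configuration that motivated skipping zero-count accelerators (a sketch; the variable and names are placeholders):

```hcl
variable "gpu_count" {
  default = 0
}

resource "google_compute_instance_template" "worker" {
  name_prefix  = "worker-"
  machine_type = "n1-standard-4"

  disk {
    source_image = "debian-cloud/debian-9"
  }

  network_interface {
    network = "default"
  }

  # With count set to 0 (e.g. in regions without GPUs), the accelerator
  # block is now skipped entirely instead of being sent to the API.
  guest_accelerator {
    type  = "nvidia-tesla-k80"
    count = "${var.gpu_count}"
  }
}
```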
* read boot disk initialization param from API
* make fmt
* Mark the initialize_params list as computed to support boot source
* Ensure private family test follows naming pattern
* Improve docs
* Add import support to google_dns_record_set
* Add import test to NS record
* Minimize diff change
* Improve docs
* Make error message more helpful
* Add note about trailing dot at the end of the record name
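For example (a sketch; the zone name, record name, and address are placeholders):

```hcl
resource "google_dns_record_set" "frontend" {
  # Placeholder name of an existing managed zone.
  managed_zone = "tf-test-zone"

  # Record names are fully qualified and must keep the trailing dot.
  name    = "www.tf-test.example.com."
  type    = "A"
  ttl     = 300
  rrdatas = ["203.0.113.10"]
}
```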
Add support for Google Dataflow jobs
Note: A dataflow job exists when it is in a nonterminal state, and does not exist if it
is in a terminal state (or a non-running state which can only transition into terminal
states). See doc for more detail.
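A sketch of what such a job might look like (the template path, bucket, and names are placeholders):

```hcl
resource "google_dataflow_job" "word_count" {
  name              = "tf-wordcount"
  template_gcs_path = "gs://my-bucket/templates/word-count"
  temp_gcs_location = "gs://my-bucket/tmp"

  parameters = {
    inputFile = "gs://my-bucket/input.txt"
    output    = "gs://my-bucket/output"
  }

  # Once the job reaches a terminal state, Terraform treats it as no
  # longer existing, per the note above.
}
```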
* Initial commit
* Adding google_cloudfunction_function resource
* Some FMT updates
* Working Cloud Function Create/Delete/Get
Create is limited to gs:// source now.
* Fixed tests import
* Terraform is now able to apply and destroy the function
* Fully working Basic test
* Added:
1. Allowed region check
2. readTimeout helper
* Found better solution for conflicting values
* Adding description
* Adding full basic test
* Added Update functionality
* Made a few more params optional
* Added test for Labels
* Added update tests
* Added storage_* members and made function source deploy from storage bucket object
* Adding comments
* Adding tests for PubSub
* Adding tests for Bucket
* Adding Data provider
* Fixing a bug which allowed an error to be missed
* Amending Operation retrieval
* Fixing vet errors and vendoring cloudfunctions/v1
* Fixing according to comments
* Fixing according to comments round #2
* Fixing tabs to space
* Fixing tabs to space and some comments #3
* Reworked Update to include labels in a single update call with the other fields
* Adding back default values. Consider the scenario where a user creates a function with values set for "timeout" or "available_memory_mb" and then removes those attributes. terraform plan then reports:
No changes. Infrastructure is up-to-date.
That is wrong; by adding default constants we avoid this error (see the sketch after this list).
* Fixed MixedCase and more tabs
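As referenced above, a basic function sourced from a bucket object might look roughly like this (a sketch; field names follow the released resource docs and should be treated as assumptions relative to this PR's intermediate commits, and all names are placeholders):

```hcl
resource "google_cloudfunctions_function" "hello" {
  name        = "hello-http"
  description = "Example HTTP-triggered function"
  region      = "us-central1"

  # Source is deployed from an existing storage bucket object.
  source_archive_bucket = "my-functions-bucket"
  source_archive_object = "hello.zip"

  entry_point  = "helloHttp"
  trigger_http = true

  # Defaults apply when these are omitted, so removing them later does
  # not silently leave the previous values in place.
  available_memory_mb = 128
  timeout             = 60
}
```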