* Remove bad test in testAccContainerCluster_withIPAllocationPolicy
One step expected the test to fail if the subnetwork defines
secondary IP ranges that the cluster doesn't use. However, that is
perfectly fine and no error is expected.
* Revert "Remove bad test in testAccContainerCluster_withIPAllocationPolicy"
This reverts commit af2f369907181a107cfc0ed9fa2ff0e288f02f66.
* Fail if use_ip_aliases is true and no range names are provided
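Roughly, the check this adds looks like the following minimal Go sketch; the helper name and argument list here are hypothetical, and the real validation lives in the container cluster resource code:

```go
package main

import "fmt"

// validateIPAllocationPolicy is a hypothetical helper illustrating the rule:
// when use_ip_aliases is true, at least one secondary range name must be set.
func validateIPAllocationPolicy(useIPAliases bool, clusterRangeName, servicesRangeName string) error {
	if useIPAliases && clusterRangeName == "" && servicesRangeName == "" {
		return fmt.Errorf("clusters with use_ip_aliases enabled must specify a secondary range name")
	}
	return nil
}

func main() {
	fmt.Println(validateIPAllocationPolicy(true, "", ""))
}
```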
* make fmt
* Don't introduce a new field for now; wait until we want to support new features in the allocation policy
* Storage Default Object ACL resource
* Fixed the doc
* Renamed the resource id. Log change
* Complying with go vet
* Changes for review
* link to default object acl docs in sidebar
* Support for GCS notifications
* docs for storage notification
* Clarified the doc
* Doc modifications
* Addressing requested changes from review
* Using ImportStatePassthrough
* Import google_compute_shared_vpc_host_project/google_compute_shared_vpc_service_project resources.
* Incorporate testing of resource import into main acceptance tests.
* Add update support for compute instance fields that require the machine to be stopped
* add warnings in docs about stopping the instance before updating
* add allow_stopping_for_update field
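A minimal sketch of the guard this introduces, assuming a hypothetical helper around the allow_stopping_for_update flag (the real update logic also stops and restarts the instance, which is omitted here):

```go
package main

import (
	"errors"
	"fmt"
)

// checkStoppableUpdate is a hypothetical illustration of the new behavior:
// fields that require the machine to be stopped (e.g. machine_type) may only
// be updated when allow_stopping_for_update is set to true.
func checkStoppableUpdate(fieldChanged string, allowStopping bool) error {
	if !allowStopping {
		return errors.New("changing " + fieldChanged + " requires stopping the instance; set allow_stopping_for_update = true")
	}
	// stop instance, apply change, restart instance (omitted)
	return nil
}

func main() {
	fmt.Println(checkStoppableUpdate("machine_type", false))
}
```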
* Update sqladmin API
Pull in updates to the generated sqladmin API and update callers for
the change in the StorageAutoResize setting
* Add support for availability_type setting
Allow specifying ZONAL or REGIONAL to enable a PostgreSQL HA
setup.
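Roughly how the new setting maps onto the Cloud SQL admin API; this is a sketch assuming the AvailabilityType field on Settings in google.golang.org/api/sqladmin/v1beta4, not the provider's exact wiring:

```go
package main

import (
	"fmt"

	sqladmin "google.golang.org/api/sqladmin/v1beta4"
)

func main() {
	// "REGIONAL" enables high availability for PostgreSQL; "ZONAL" is the
	// single-zone default.
	settings := &sqladmin.Settings{
		Tier:             "db-custom-1-3840", // illustrative tier
		AvailabilityType: "REGIONAL",
	}
	fmt.Println(settings.AvailabilityType)
}
```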
* vendor: update sqladmin/v1beta4
* Test setting AvailabilityType for PostgreSQL
Add tests that cover the creation of a Postgres database with
AvailabilityType set to REGIONAL, and correct some small issues that
were preventing compilation.
* Fix breaking change w/ disk_autoresize in cloudsql
95e5582766
The Cloud SQL admin client changed the way it handles StorageAutoResize
as a parameter, in order to be more explicit about when the server has
omitted the field. This changed the type from bool to *bool, so the
provider code needs to supply the right value to the API client.
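A minimal sketch of the pointer handling this requires, assuming googleapi.Bool from google.golang.org/api/googleapi; the surrounding provider plumbing is omitted:

```go
package main

import (
	"fmt"

	"google.golang.org/api/googleapi"
	sqladmin "google.golang.org/api/sqladmin/v1beta4"
)

func main() {
	// StorageAutoResize is now *bool so that an unset value can be
	// distinguished from an explicit false.
	diskAutoresize := true // e.g. d.Get("disk_autoresize").(bool) in the provider
	settings := &sqladmin.Settings{
		StorageAutoResize: googleapi.Bool(diskAutoresize),
	}
	fmt.Println(*settings.StorageAutoResize)
}
```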
* skip guest accelerators if count is 0.
Instances in instance groups on Google Cloud will fail to provision
even when 0 GPUs are requested. This came up when trying to provision
a similar instance group in all available regions, but only asking for
GPUs in the regions that support them, by parameterizing the `count`
and setting it to 0.
This might be a violation of some Terraform principles. For example,
testing locally with this change, `terraform` did not recognize that
the infrastructure indeed needed to be redeployed (from its point of
view, presumably because the inputs hadn't changed). Additionally,
there may be valid reasons for creating an instance template with
0 GPUs that can later be tuned upwards. A sketch of the skip logic
follows below.
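A sketch of the skip logic, using types from google.golang.org/api/compute/v1; the expander below is a simplified stand-in for the provider's actual function:

```go
package main

import (
	"fmt"

	compute "google.golang.org/api/compute/v1"
)

// expandGuestAccelerators drops accelerator blocks whose count is 0 so that
// a parameterized count of 0 does not produce an invalid instance template.
func expandGuestAccelerators(configs []*compute.AcceleratorConfig) []*compute.AcceleratorConfig {
	out := make([]*compute.AcceleratorConfig, 0, len(configs))
	for _, c := range configs {
		if c.AcceleratorCount == 0 {
			continue
		}
		out = append(out, c)
	}
	return out
}

func main() {
	accels := []*compute.AcceleratorConfig{
		{AcceleratorType: "nvidia-tesla-k80", AcceleratorCount: 0},
		{AcceleratorType: "nvidia-tesla-k80", AcceleratorCount: 1},
	}
	fmt.Println(len(expandGuestAccelerators(accels))) // 1
}
```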
* Add guest accelerator skip test for instances.
* do not leave empty pointers to guest accelerators.
* attempt to clear guest accelerator diff
* conditionally customize diff for guest accels
* read boot disk initialization param from API
* make fmt
* Mark the initialize_params list as computed to support boot source
* Ensure private family test follows naming pattern
* Improve docs
* Add import support to google_dns_record_set
* Add import test to NS record
* Minimize diff change
* Improve docs
* Make error message more helpful
* Add note about trailing dot at the end of the record name
* Add support for Google Dataflow jobs
Note: A Dataflow job exists when it is in a nonterminal state, and does not exist if it
is in a terminal state (or a non-running state that can only transition into terminal
states). See the docs for more detail.
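A sketch of the existence rule from the note above; the terminal-state set is taken from the Dataflow API job states, and the exact set the provider checks may differ slightly:

```go
package main

import "fmt"

// terminalDataflowStates lists job states from which a Dataflow job can never
// return to a running state; a job in one of these states is treated as gone.
var terminalDataflowStates = map[string]bool{
	"JOB_STATE_DONE":      true,
	"JOB_STATE_FAILED":    true,
	"JOB_STATE_CANCELLED": true,
	"JOB_STATE_UPDATED":   true,
	"JOB_STATE_DRAINED":   true,
}

// jobExists mirrors the note above: a job "exists" only while its state is
// nonterminal.
func jobExists(state string) bool {
	return !terminalDataflowStates[state]
}

func main() {
	fmt.Println(jobExists("JOB_STATE_RUNNING"), jobExists("JOB_STATE_DONE")) // true false
}
```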
* Initial commit
* Adding google_cloudfunctions_function resource
* Some FMT updates
* Working Cloud Function Create/Delete/Get
Create is limited to gs:// sources for now.
* Fixed tests import
* Terraform is now able to apply and destroy the function
* Fully working Basic test
* Added:
1. Allowed region check
2. readTimeout helper
* Found better solution for conflicting values
* Adding description
* Adding full basic test
* Added Update functionality
* Made a few more params optional
* Added test for Labels
* Added update tests
* Added storage_* members and made the function source deploy from a storage bucket object
* Adding comments
* Adding tests for PubSub
* Adding tests for Bucket
* Adding Data provider
* Fixing a bug that allowed errors to be missed
* Amending Operation retrieval
* Fixing vet errors and vendoring cloudfunctions/v1
* Fixing according to comments
* Fixing according to comments round #2
* Fixing tabs to spaces
* Fixing tabs to spaces and some comments #3
* Reworked update to include labels in a single update call along with the other fields
* Adding back default values. Consider the scenario where a user creates a function with values set for "timeout" or "available_memory_mb" and then removes those attributes. `terraform plan` then reports:
No changes. Infrastructure is up-to-date.
This is wrong; adding default constants avoids it (see the sketch below).
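A minimal sketch of the fix, assuming the standard helper/schema package; the default values shown are illustrative, not necessarily the ones the resource uses:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

func main() {
	// With a Default set, removing the attribute from the config produces a
	// diff back to the default instead of "No changes."
	timeout := &schema.Schema{
		Type:     schema.TypeInt,
		Optional: true,
		Default:  60, // illustrative value
	}
	availableMemoryMB := &schema.Schema{
		Type:     schema.TypeInt,
		Optional: true,
		Default:  256, // illustrative value
	}
	fmt.Println(timeout.Default, availableMemoryMB.Default)
}
```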
* Fixed MixedCase and more tabs
* Add internalIpOnly support for Dataproc clusters
* Add internal_ip_only to dataproc cluster docs
* Add default/basic dataproc internal ip test case
* Add test for dataproc internal_ip_only=true
* fixup cluster_config.gce_cluster_config to include .0.
* Remove redundant depends_on
* Add a random suffix (%s) to the network and subnetwork names
* Use variable for subnet CIDR and reference via source_ranges
* Add depends_on back to dataproc cluster test
* Fix cluster attribute refs (.0. again)
* Add 'google_organization' data source.
* Use 'GetResourceNameFromSelfLink'.
* Remove 'resourcemanager_helpers'.
* Use 'ConflictsWith' in schema.
* Add 'organization' argument and make 'name' an output-only attribute.
* Add 'google_billing_account' data source.
* Use 'GetResourceNameFromSelfLink'.
* Use 'ConflictsWith' in schema.
* Use pagination for List() API call.
* Add ability to filter by 'open' attribute.
* Don't use 'ForceNew' for data sources.
* Add 'billing_account' argument and make 'name' an output-only attribute.
* Correct error message.
* Add google_kubernetes_cluster datasource
Add documentation for google_kubernetes_cluster datasource
Rename datasource to google_container_cluster
To be consistent with the equivalent resource.
Rename datasource in docs.
google_kubernetes_cluster -> google_container_cluster.
Also add reference in google.erb file.
WIP
Datasource read needs to set an ID, then call resource read func
Add additional cluster attributes to datasource schema
* Generate datasource schema from resource
Datasource documentation also updated.
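A rough sketch of what a schema-from-resource helper like datasourceSchemaFromResourceSchema does; this is simplified and the real helper handles nested resources and more edge cases:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

// datasourceSchemaFromResourceSchema converts a resource schema into a
// data source schema: every field becomes read-only (Computed) and loses its
// Required/Optional/Default/ForceNew behavior. Simplified sketch.
func datasourceSchemaFromResourceSchema(rs map[string]*schema.Schema) map[string]*schema.Schema {
	ds := make(map[string]*schema.Schema, len(rs))
	for k, v := range rs {
		ds[k] = &schema.Schema{
			Type:     v.Type,
			Elem:     v.Elem,
			Computed: true,
		}
	}
	return ds
}

func main() {
	rs := map[string]*schema.Schema{
		"name": {Type: schema.TypeString, Required: true},
	}
	fmt.Println(datasourceSchemaFromResourceSchema(rs)["name"].Computed) // true
}
```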
* add test for datasourceSchemaFromResourceSchema
* Code review changes
* Add IAM support for pubsub topic
* Fix resource name
* Add update test for iam_policy resource
* Standardize policy conversion function
* Standardize policy conversion function across all resources