This was done as its own resource as suggested in Slack, since we don't have the option of making all fields Computed in google_compute_instance. There's precedent in the aws provider for this sort of thing (see ami_copy, ami_from_instance).
When I started working on this I assumed I could do it in the compute_instance resource, so I went ahead and reordered the schema to make it easier to work with in the future. That's no longer quite relevant, but I left it in as its own commit that can be reviewed separately from the other changes.
Fixes #1582.
* adding google folder data source with get by id, search by fields and lookup organization functionality
* removing search functionality
* creating folders for each test and updating documentation with default values
Exposes the existing `google_compute_backend_service` resource as a data source.
This addresses #149.
This allows, for instance, collecting a backend service's self_link and
using it from another workspace/tfstate, sharing most of the
load balancer definition.
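The self_link-sharing use case described above might look like the following sketch (the data source name matches the resource, but the specific backend service name and output wiring here are illustrative assumptions, not taken from this changelog):

```hcl
# Hypothetical usage: look up an existing backend service by name and
# export its self_link so another workspace/tfstate can consume it.
data "google_compute_backend_service" "shared" {
  name = "my-backend-service"
}

output "backend_service_self_link" {
  value = "${data.google_compute_backend_service.shared.self_link}"
}
```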
* Storage Default Object ACL resource
* Fixed the doc
* Renamed the resource id. Log change
* Complying with go vet
* Changes for review
* link to default object acl docs in sidebar
* Support for GCS notifications
* docs for storage notification
* docs for storage notification
* Clarified the doc
* Doc modifications
* Addressing requested changes from review
* Addressing requested changes from review
* Using ImportStatePassthrough
Add support for Google Dataflow jobs
Note: A dataflow job exists when it is in a nonterminal state, and does not exist if it
is in a terminal state (or a non-running state which can only transition into terminal
states). See doc for more detail.
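A minimal sketch of launching such a job from a GCS template (the argument names follow the usual Dataflow job schema but are assumptions here, not confirmed by this changelog):

```hcl
# Hypothetical sketch: run a Dataflow job from a template stored in GCS.
# Once the job reaches a terminal state, Terraform treats it as no
# longer existing, per the note above.
resource "google_dataflow_job" "big_data" {
  name              = "my-dataflow-job"
  template_gcs_path = "gs://my-bucket/templates/my_template"
  temp_gcs_location = "gs://my-bucket/tmp_dir"
}
```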
* Initial commit
* Adding google_cloudfunction_function resource
* Some FMT updates
* Working Cloud Function Create/Delete/Get
Create is limited to gs:// source now.
* Fixed tests import
* Terraform now is able to apply and destroy function
* Fully working Basic test
* Added:
1. Allowed region check
2. readTimeout helper
* Found better solution for conflicting values
* Adding description
* Adding full basic test
* Added Update functionality
* Made a few more params optional
* Added test for Labels
* Added update tests
* Added storage_* members and made function source deploy from storage bucket object
* Adding comments
* Adding tests for PubSub
* Adding tests for Bucket
* Adding Data provider
* Fixing a bug that allowed an error to be missed
* Amending Operation retrieval
* Fixing vet errors and vendoring cloudfunctions/v1
* Fixing according to comments
* Fixing according to comments round #2
* Fixing tabs to space
* Fixing tabs to space and some comments #3
* Reworked update to include labels in the same update call as other fields
* Adding back default values. Consider the scenario where a user creates a function with explicit values for "timeout" or "available_memory_mb" and then removes those attributes. Terraform plan then reports:
No changes. Infrastructure is up-to-date.
This is wrong. Adding default constants avoids this error.
* Fixed MixedCase and more tabs
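The default-value scenario discussed above can be sketched as follows (the attribute names mirror the usual Cloud Functions schema, but this config is an illustrative assumption, not taken from the changelog):

```hcl
# Hypothetical sketch: the user first applies this config, then deletes
# the available_memory_mb and timeout lines.
resource "google_cloudfunctions_function" "helper" {
  name                = "helper"
  available_memory_mb = 512
  timeout             = 120
  # ... source and trigger configuration ...
}
# Without schema defaults, the follow-up plan reports "No changes" even
# though the attributes were removed; with default constants, the diff
# back to the defaults is surfaced correctly.
```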
* Add 'google_organization' data source.
* Use 'GetResourceNameFromSelfLink'.
* Remove 'resourcemanager_helpers'.
* Use 'ConflictsWith' in schema.
* Add 'organization' argument and make 'name' an output-only attribute.
* Add 'google_billing_account' data source.
* Use 'GetResourceNameFromSelfLink'.
* Use 'ConflictsWith' in schema.
* Use pagination for List() API call.
* Add ability to filter by 'open' attribute.
* Don't use 'ForceNew' for data sources.
* Add 'billing_account' argument and make 'name' an output-only attribute.
* Correct error message.
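The two lookup modes mentioned above (by ID argument vs. by filter) might be used like this sketch; the argument names follow the described schema but the concrete values are assumptions:

```hcl
# Hypothetical sketch: the ID argument and the filter arguments conflict,
# so only one lookup style may be used per data source block.
data "google_billing_account" "by_id" {
  billing_account = "000000-111111-222222"
}

data "google_billing_account" "by_name" {
  display_name = "My Billing Account"
  open         = true
}
```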
* Add google_kubernetes_cluster datasource
Add documentation for google_kubernetes_cluster datasource
Rename datasource to google_container_cluster
To be consistent with the equivalent resource.
Rename datasource in docs.
google_kubernetes_cluster -> google_container_cluster.
Also add reference in google.erb file.
WIP
Datasource read needs to set an ID, then call resource read func
Add additional cluster attributes to datasource schema
* Generate datasource schema from resource
Datasource documentation also updated.
* add test for datasourceSchemaFromResourceSchema
* Code review changes
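Reading an existing cluster through the new data source might look like this sketch (cluster name, zone, and the exported attribute are illustrative assumptions):

```hcl
# Hypothetical usage: read an existing GKE cluster and reuse one of its
# attributes elsewhere. The schema is generated from the resource, so
# the same attributes should be available read-only.
data "google_container_cluster" "primary" {
  name = "my-cluster"
  zone = "us-central1-a"
}

output "cluster_endpoint" {
  value = "${data.google_container_cluster.primary.endpoint}"
}
```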
* Add IAM support for pubsub topic
* Fix resource name
* Add update test for iam_policy resource
* Standardize policy conversion function
* Standardize policy conversion function all resources
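A minimal sketch of the new topic-level IAM support (the binding resource name follows the provider's usual `*_iam_binding` convention; the topic, role, and member values are assumptions):

```hcl
# Hypothetical sketch: grant a role on a Pub/Sub topic via an IAM binding.
resource "google_pubsub_topic_iam_binding" "publishers" {
  topic   = "example-topic"
  role    = "roles/pubsub.publisher"
  members = ["user:jane@example.com"]
}
```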
* Create google_kms_secret datasource
* Create google_kms_secret datasource documentation
* Remove duplicated code
* Create acceptance test
* Fix indentation
* Add documentation to sidebar
* Update Cloud SDK link in docs
* Oxford comma
* Rename variable to make it clear which resource is under test
* Update test to use utils from provider_test
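Decrypting a KMS-encrypted secret through the new data source might look like this sketch (the attribute names follow the usual `google_kms_secret` schema; the key reference and ciphertext are placeholder assumptions):

```hcl
# Hypothetical sketch: decrypt a base64 ciphertext with a KMS crypto key
# and consume the resulting plaintext attribute elsewhere.
data "google_kms_secret" "db_password" {
  crypto_key = "${google_kms_crypto_key.my_key.id}"
  ciphertext = "CiQA..."  # placeholder base64 ciphertext
}

resource "google_sql_user" "app" {
  name     = "app"
  password = "${data.google_kms_secret.db_password.plaintext}"
}
```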
* Add new data source: compute region instance group manager's groups.
* Add documentation for wait_for_instances and for the timeout mechanism in resourceComputeRegionInstanceGroupManagerCreate.
Add consistency for IAM imports.
- Adds imports for projects, folders, crypto keys, organizations, and key rings.
- Anything else with IAM can implement a simple method and begin working immediately.
- Adds tests for all the IAM imports.
- Adds import documentation for IAM resources.