From 4678bee19456386c25bf17266ff976cd4ebe559c Mon Sep 17 00:00:00 2001 From: Paddy Date: Wed, 22 Aug 2018 17:16:22 -0700 Subject: [PATCH 01/31] Use differentiated issue templates. I largely stole these from github.com/terraform-providers/terraform-provider-aws, but with a few changes: * I called out how we assign users to issues, to make it more transparent to users. * I added a "success story" issue that will be automatically closed and labeled, to keep track of those stories. * I added a `[issue-type:TYPE]` string to each, so hashibot can detect and act on them. --- .github/ISSUE_TEMPLATE/bug.md | 77 +++++++++++++++++++++++ .github/ISSUE_TEMPLATE/bug_report.md | 77 +++++++++++++++++++++++ .github/ISSUE_TEMPLATE/enhancement.md | 45 +++++++++++++ .github/ISSUE_TEMPLATE/feature_request.md | 44 +++++++++++++ .github/ISSUE_TEMPLATE/question.md | 17 +++++ .github/ISSUE_TEMPLATE/success-story.md | 26 ++++++++ 6 files changed, 286 insertions(+) create mode 100644 .github/ISSUE_TEMPLATE/bug.md create mode 100644 .github/ISSUE_TEMPLATE/bug_report.md create mode 100644 .github/ISSUE_TEMPLATE/enhancement.md create mode 100644 .github/ISSUE_TEMPLATE/feature_request.md create mode 100644 .github/ISSUE_TEMPLATE/question.md create mode 100644 .github/ISSUE_TEMPLATE/success-story.md diff --git a/.github/ISSUE_TEMPLATE/bug.md b/.github/ISSUE_TEMPLATE/bug.md new file mode 100644 index 00000000..ca9424df --- /dev/null +++ b/.github/ISSUE_TEMPLATE/bug.md @@ -0,0 +1,77 @@ +--- +name: Bug +about: For when something is there, but doesn't work how it should. + +--- + + + + +### Community Note + +* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request +* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request +* If you are interested in working on this issue or have submitted a pull request, please leave a comment +* If the issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If the issue is assigned to a user, that user is claiming responsibility for the issue. If the issue is assigned to "hashibot", a community member has claimed the issue already. + + + +### Terraform Version + + + +### Affected Resource(s) + + + +* google_XXXXX + +### Terraform Configuration Files + + + +```tf +# Copy-paste your Terraform configurations here - for large Terraform configs, +# please use a service like Dropbox and share a link to the ZIP file. For +# security, you can also encrypt the files using our GPG public key: https://www.hashicorp.com/security +``` + +### Debug Output + + + +### Panic Output + + + +### Expected Behavior + + + +### Actual Behavior + + + +### Steps to Reproduce + + + +1. `terraform apply` + +### Important Factoids + + + +### References + + + +* #0000 diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md new file mode 100644 index 00000000..86050f12 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -0,0 +1,77 @@ +--- +name: Bug report +about: For when something is there, but doesn't work how it should. 
+ +--- + + + + +### Community Note + +* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request +* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request +* If you are interested in working on this issue or have submitted a pull request, please leave a comment +* If the issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If the issue is assigned to a user, that user is claiming responsibility for the issue. If the issue is assigned to "hashibot", a community member has claimed the issue already. + + + +### Terraform Version + + + +### Affected Resource(s) + + + +* google_XXXXX + +### Terraform Configuration Files + + + +```tf +# Copy-paste your Terraform configurations here - for large Terraform configs, +# please use a service like Dropbox and share a link to the ZIP file. For +# security, you can also encrypt the files using our GPG public key: https://www.hashicorp.com/security +``` + +### Debug Output + + + +### Panic Output + + + +### Expected Behavior + + + +### Actual Behavior + + + +### Steps to Reproduce + + + +1. `terraform apply` + +### Important Factoids + + + +### References + + + +* #0000 diff --git a/.github/ISSUE_TEMPLATE/enhancement.md b/.github/ISSUE_TEMPLATE/enhancement.md new file mode 100644 index 00000000..30279323 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/enhancement.md @@ -0,0 +1,45 @@ +--- +name: Enhancement +about: For when something (a resource, field, etc.) is missing, but should be added. + +--- + + + + +### Community Note + +* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request +* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request +* If you are interested in working on this issue or have submitted a pull request, please leave a commentIf the issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If the issue is assigned to a user, that user is claiming responsibility for the issue. If the issue is assigned to "hashibot", a community member has claimed the issue already. + + + +### Description + + + +### New or Affected Resource(s) + + + +* google_XXXXX + +### Potential Terraform Configuration + + + +```tf +# Propose what you think the configuration to take advantage of this feature should look like. +# We may not use it verbatim, but it's helpful in understanding your intent. +``` + +### References + + + +* #0000 diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md new file mode 100644 index 00000000..4bbbfffb --- /dev/null +++ b/.github/ISSUE_TEMPLATE/feature_request.md @@ -0,0 +1,44 @@ +--- +name: Feature request +about: For when something (a resource, field, etc.) is missing, but should be added. 
+ +--- + + + +### Community Note + +* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request +* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request +* If you are interested in working on this issue or have submitted a pull request, please leave a commentIf the issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If the issue is assigned to a user, that user is claiming responsibility for the issue. If the issue is assigned to "hashibot", a community member has claimed the issue already. + + + +### Description + + + +### New or Affected Resource(s) + + + +* google_XXXXX + +### Potential Terraform Configuration + + + +```tf +# Propose what you think the configuration to take advantage of this feature should look like. +# We may not use it verbatim, but it's helpful in understanding your intent. +``` + +### References + + + +* #0000 diff --git a/.github/ISSUE_TEMPLATE/question.md b/.github/ISSUE_TEMPLATE/question.md new file mode 100644 index 00000000..7322f1e6 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/question.md @@ -0,0 +1,17 @@ +--- +name: Question +about: If you have a question, please check out our community resources! + +--- + +--- + +Issues on GitHub are intended to be related to bugs or feature requests with provider codebase, +so we recommend using our other community resources instead of asking here 👍. + +--- + +If you have a support request or question please submit them to one of these resources: + +* [Terraform community resources](https://www.terraform.io/docs/extend/community/index.html) +* [HashiCorp support](https://support.hashicorp.com) (Terraform Enterprise customers) diff --git a/.github/ISSUE_TEMPLATE/success-story.md b/.github/ISSUE_TEMPLATE/success-story.md new file mode 100644 index 00000000..0d18e985 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/success-story.md @@ -0,0 +1,26 @@ +--- +name: Success story +about: Tell us about how the provider worked out well for you and things you love + about it. + +--- + + + + +**Company**: +**Project**: + +## How the Google Provider Helped + +## Things I Really Enjoyed About Using the Provider From b69754ae38e42483c584147b4e3ff4f22ed0b282 Mon Sep 17 00:00:00 2001 From: Paddy Date: Wed, 22 Aug 2018 17:27:35 -0700 Subject: [PATCH 02/31] Delete feature_request.md --- .github/ISSUE_TEMPLATE/feature_request.md | 44 ----------------------- 1 file changed, 44 deletions(-) delete mode 100644 .github/ISSUE_TEMPLATE/feature_request.md diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md deleted file mode 100644 index 4bbbfffb..00000000 --- a/.github/ISSUE_TEMPLATE/feature_request.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -name: Feature request -about: For when something (a resource, field, etc.) is missing, but should be added. 
- ---- - - - -### Community Note - -* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request -* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request -* If you are interested in working on this issue or have submitted a pull request, please leave a commentIf the issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If the issue is assigned to a user, that user is claiming responsibility for the issue. If the issue is assigned to "hashibot", a community member has claimed the issue already. - - - -### Description - - - -### New or Affected Resource(s) - - - -* google_XXXXX - -### Potential Terraform Configuration - - - -```tf -# Propose what you think the configuration to take advantage of this feature should look like. -# We may not use it verbatim, but it's helpful in understanding your intent. -``` - -### References - - - -* #0000 From 2e76b3c1da274805557c7471fe8371badc264136 Mon Sep 17 00:00:00 2001 From: Paddy Date: Wed, 22 Aug 2018 17:27:47 -0700 Subject: [PATCH 03/31] Delete bug_report.md --- .github/ISSUE_TEMPLATE/bug_report.md | 77 ---------------------------- 1 file changed, 77 deletions(-) delete mode 100644 .github/ISSUE_TEMPLATE/bug_report.md diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md deleted file mode 100644 index 86050f12..00000000 --- a/.github/ISSUE_TEMPLATE/bug_report.md +++ /dev/null @@ -1,77 +0,0 @@ ---- -name: Bug report -about: For when something is there, but doesn't work how it should. - ---- - - - - -### Community Note - -* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request -* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request -* If you are interested in working on this issue or have submitted a pull request, please leave a comment -* If the issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If the issue is assigned to a user, that user is claiming responsibility for the issue. If the issue is assigned to "hashibot", a community member has claimed the issue already. - - - -### Terraform Version - - - -### Affected Resource(s) - - - -* google_XXXXX - -### Terraform Configuration Files - - - -```tf -# Copy-paste your Terraform configurations here - for large Terraform configs, -# please use a service like Dropbox and share a link to the ZIP file. For -# security, you can also encrypt the files using our GPG public key: https://www.hashicorp.com/security -``` - -### Debug Output - - - -### Panic Output - - - -### Expected Behavior - - - -### Actual Behavior - - - -### Steps to Reproduce - - - -1. 
`terraform apply` - -### Important Factoids - - - -### References - - - -* #0000 From 83406c3c5ff764521ad2a45cdb3aa87aace2ae12 Mon Sep 17 00:00:00 2001 From: emily Date: Mon, 27 Aug 2018 16:22:06 -0700 Subject: [PATCH 04/31] fix possible error from nil check (#1942) --- google/resource_compute_instance.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/google/resource_compute_instance.go b/google/resource_compute_instance.go index 97a46154..18fdc052 100644 --- a/google/resource_compute_instance.go +++ b/google/resource_compute_instance.go @@ -1379,7 +1379,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err if d.HasChange("service_account.0.email") || scopesChange { sa := d.Get("service_account").([]interface{}) req := &compute.InstancesSetServiceAccountRequest{ForceSendFields: []string{"email"}} - if len(sa) > 0 { + if len(sa) > 0 && sa[0] != nil { saMap := sa[0].(map[string]interface{}) req.Email = saMap["email"].(string) req.Scopes = canonicalizeServiceScopes(convertStringSet(saMap["scopes"].(*schema.Set))) From f1f0bc97e29dec2088929428609aa2e3b5680d29 Mon Sep 17 00:00:00 2001 From: The Magician Date: Mon, 27 Aug 2018 16:35:47 -0700 Subject: [PATCH 05/31] Autogenerate HealthCheck resource (#1941) --- google/provider_compute_gen.go | 1 + google/resource_compute_health_check.go | 1276 +++++++++++------ .../docs/r/compute_health_check.html.markdown | 252 +++- 3 files changed, 1036 insertions(+), 493 deletions(-) diff --git a/google/provider_compute_gen.go b/google/provider_compute_gen.go index 50baf00e..b5407b92 100644 --- a/google/provider_compute_gen.go +++ b/google/provider_compute_gen.go @@ -26,6 +26,7 @@ var GeneratedComputeResourcesMap = map[string]*schema.Resource{ "google_compute_global_address": resourceComputeGlobalAddress(), "google_compute_http_health_check": resourceComputeHttpHealthCheck(), "google_compute_https_health_check": resourceComputeHttpsHealthCheck(), + "google_compute_health_check": resourceComputeHealthCheck(), "google_compute_region_autoscaler": resourceComputeRegionAutoscaler(), "google_compute_region_disk": resourceComputeRegionDisk(), "google_compute_route": resourceComputeRoute(), diff --git a/google/resource_compute_health_check.go b/google/resource_compute_health_check.go index d143d05d..eaaeac49 100644 --- a/google/resource_compute_health_check.go +++ b/google/resource_compute_health_check.go @@ -1,188 +1,214 @@ +// ---------------------------------------------------------------------------- +// +// *** AUTO GENERATED CODE *** AUTO GENERATED CODE *** +// +// ---------------------------------------------------------------------------- +// +// This file is automatically generated by Magic Modules and manual +// changes will be clobbered when the file is regenerated. +// +// Please read more about how to change this file in +// .github/CONTRIBUTING.md. 
+// +// ---------------------------------------------------------------------------- + package google import ( "fmt" "log" + "reflect" + "strconv" + "time" "github.com/hashicorp/terraform/helper/schema" - "google.golang.org/api/compute/v1" + "github.com/hashicorp/terraform/helper/validation" + compute "google.golang.org/api/compute/v1" ) func resourceComputeHealthCheck() *schema.Resource { return &schema.Resource{ Create: resourceComputeHealthCheckCreate, Read: resourceComputeHealthCheckRead, - Delete: resourceComputeHealthCheckDelete, Update: resourceComputeHealthCheckUpdate, + Delete: resourceComputeHealthCheckDelete, + Importer: &schema.ResourceImporter{ - State: schema.ImportStatePassthrough, + State: resourceComputeHealthCheckImport, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(240 * time.Second), + Update: schema.DefaultTimeout(240 * time.Second), + Delete: schema.DefaultTimeout(240 * time.Second), }, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - - "check_interval_sec": &schema.Schema{ + "check_interval_sec": { Type: schema.TypeInt, Optional: true, Default: 5, }, - - "description": &schema.Schema{ + "description": { Type: schema.TypeString, Optional: true, }, - - "healthy_threshold": &schema.Schema{ + "healthy_threshold": { Type: schema.TypeInt, Optional: true, Default: 2, }, - - "tcp_health_check": &schema.Schema{ - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - ConflictsWith: []string{"ssl_health_check", "http_health_check", "https_health_check"}, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "port": &schema.Schema{ - Type: schema.TypeInt, - Optional: true, - Default: 80, - }, - "proxy_header": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Default: "NONE", - }, - "request": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - }, - "response": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - }, - }, - }, + "timeout_sec": { + Type: schema.TypeInt, + Optional: true, + Default: 5, }, - - "ssl_health_check": &schema.Schema{ - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - ConflictsWith: []string{"tcp_health_check", "http_health_check", "https_health_check"}, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "port": &schema.Schema{ - Type: schema.TypeInt, - Optional: true, - Default: 443, - }, - "proxy_header": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Default: "NONE", - }, - "request": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - }, - "response": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - }, - }, - }, + "unhealthy_threshold": { + Type: schema.TypeInt, + Optional: true, + Default: 2, }, - - "http_health_check": &schema.Schema{ - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - ConflictsWith: []string{"tcp_health_check", "ssl_health_check", "https_health_check"}, + "http_health_check": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "host": &schema.Schema{ + "host": { Type: schema.TypeString, Optional: true, }, - "port": &schema.Schema{ - Type: schema.TypeInt, - Optional: true, - Default: 80, - }, - "proxy_header": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Default: "NONE", - }, - "request_path": &schema.Schema{ + "request_path": { Type: schema.TypeString, Optional: true, Default: "/", }, - }, - }, - }, - - 
"https_health_check": &schema.Schema{ - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - ConflictsWith: []string{"tcp_health_check", "ssl_health_check", "http_health_check"}, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "host": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - }, - "port": &schema.Schema{ + "port": { Type: schema.TypeInt, Optional: true, - Default: 443, + Default: 80, }, - "proxy_header": &schema.Schema{ + "proxy_header": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice([]string{"NONE", "PROXY_V1", ""}, false), + Default: "NONE", + }, + }, + }, + ConflictsWith: []string{"https_health_check", "tcp_health_check", "ssl_health_check"}, + }, + "https_health_check": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "host": { Type: schema.TypeString, Optional: true, - Default: "NONE", }, - "request_path": &schema.Schema{ + "request_path": { Type: schema.TypeString, Optional: true, Default: "/", }, + "port": { + Type: schema.TypeInt, + Optional: true, + Default: 443, + }, + "proxy_header": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice([]string{"NONE", "PROXY_V1", ""}, false), + Default: "NONE", + }, }, }, + ConflictsWith: []string{"http_health_check", "tcp_health_check", "ssl_health_check"}, }, - - "project": &schema.Schema{ + "tcp_health_check": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "request": { + Type: schema.TypeString, + Optional: true, + }, + "response": { + Type: schema.TypeString, + Optional: true, + }, + "port": { + Type: schema.TypeInt, + Optional: true, + Default: 80, + }, + "proxy_header": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice([]string{"NONE", "PROXY_V1", ""}, false), + Default: "NONE", + }, + }, + }, + ConflictsWith: []string{"http_health_check", "https_health_check", "ssl_health_check"}, + }, + "ssl_health_check": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "request": { + Type: schema.TypeString, + Optional: true, + }, + "response": { + Type: schema.TypeString, + Optional: true, + }, + "port": { + Type: schema.TypeInt, + Optional: true, + Default: 443, + }, + "proxy_header": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice([]string{"NONE", "PROXY_V1", ""}, false), + Default: "NONE", + }, + }, + }, + ConflictsWith: []string{"http_health_check", "https_health_check", "tcp_health_check"}, + }, + "creation_timestamp": { + Type: schema.TypeString, + Computed: true, + }, + "type": { + Type: schema.TypeString, + Computed: true, + }, + "project": { Type: schema.TypeString, Optional: true, + Computed: true, ForceNew: true, - Computed: true, }, - - "self_link": &schema.Schema{ + "self_link": { Type: schema.TypeString, Computed: true, }, - - "timeout_sec": &schema.Schema{ - Type: schema.TypeInt, - Optional: true, - Default: 5, - }, - - "unhealthy_threshold": &schema.Schema{ - Type: schema.TypeInt, - Optional: true, - Default: 2, - }, }, } } @@ -190,369 +216,771 @@ func resourceComputeHealthCheck() *schema.Resource { func resourceComputeHealthCheckCreate(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) - project, err := getProject(d, config) + obj := make(map[string]interface{}) + checkIntervalSecProp, err 
:= expandComputeHealthCheckCheckIntervalSec(d.Get("check_interval_sec"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("check_interval_sec"); !isEmptyValue(reflect.ValueOf(checkIntervalSecProp)) && (ok || !reflect.DeepEqual(v, checkIntervalSecProp)) { + obj["checkIntervalSec"] = checkIntervalSecProp + } + descriptionProp, err := expandComputeHealthCheckDescription(d.Get("description"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("description"); !isEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { + obj["description"] = descriptionProp + } + healthyThresholdProp, err := expandComputeHealthCheckHealthyThreshold(d.Get("healthy_threshold"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("healthy_threshold"); !isEmptyValue(reflect.ValueOf(healthyThresholdProp)) && (ok || !reflect.DeepEqual(v, healthyThresholdProp)) { + obj["healthyThreshold"] = healthyThresholdProp + } + nameProp, err := expandComputeHealthCheckName(d.Get("name"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("name"); !isEmptyValue(reflect.ValueOf(nameProp)) && (ok || !reflect.DeepEqual(v, nameProp)) { + obj["name"] = nameProp + } + timeoutSecProp, err := expandComputeHealthCheckTimeoutSec(d.Get("timeout_sec"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("timeout_sec"); !isEmptyValue(reflect.ValueOf(timeoutSecProp)) && (ok || !reflect.DeepEqual(v, timeoutSecProp)) { + obj["timeoutSec"] = timeoutSecProp + } + unhealthyThresholdProp, err := expandComputeHealthCheckUnhealthyThreshold(d.Get("unhealthy_threshold"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("unhealthy_threshold"); !isEmptyValue(reflect.ValueOf(unhealthyThresholdProp)) && (ok || !reflect.DeepEqual(v, unhealthyThresholdProp)) { + obj["unhealthyThreshold"] = unhealthyThresholdProp + } + httpHealthCheckProp, err := expandComputeHealthCheckHttpHealthCheck(d.Get("http_health_check"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("http_health_check"); !isEmptyValue(reflect.ValueOf(httpHealthCheckProp)) && (ok || !reflect.DeepEqual(v, httpHealthCheckProp)) { + obj["httpHealthCheck"] = httpHealthCheckProp + } + httpsHealthCheckProp, err := expandComputeHealthCheckHttpsHealthCheck(d.Get("https_health_check"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("https_health_check"); !isEmptyValue(reflect.ValueOf(httpsHealthCheckProp)) && (ok || !reflect.DeepEqual(v, httpsHealthCheckProp)) { + obj["httpsHealthCheck"] = httpsHealthCheckProp + } + tcpHealthCheckProp, err := expandComputeHealthCheckTcpHealthCheck(d.Get("tcp_health_check"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("tcp_health_check"); !isEmptyValue(reflect.ValueOf(tcpHealthCheckProp)) && (ok || !reflect.DeepEqual(v, tcpHealthCheckProp)) { + obj["tcpHealthCheck"] = tcpHealthCheckProp + } + sslHealthCheckProp, err := expandComputeHealthCheckSslHealthCheck(d.Get("ssl_health_check"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("ssl_health_check"); !isEmptyValue(reflect.ValueOf(sslHealthCheckProp)) && (ok || !reflect.DeepEqual(v, sslHealthCheckProp)) { + obj["sslHealthCheck"] = sslHealthCheckProp + } + + obj, err = resourceComputeHealthCheckEncoder(d, meta, obj) if err != nil { return err } - // Build the parameter - hchk := &compute.HealthCheck{ - Name: 
d.Get("name").(string), - } - // Optional things - if v, ok := d.GetOk("description"); ok { - hchk.Description = v.(string) - } - if v, ok := d.GetOk("check_interval_sec"); ok { - hchk.CheckIntervalSec = int64(v.(int)) - } - if v, ok := d.GetOk("healthy_threshold"); ok { - hchk.HealthyThreshold = int64(v.(int)) - } - if v, ok := d.GetOk("timeout_sec"); ok { - hchk.TimeoutSec = int64(v.(int)) - } - if v, ok := d.GetOk("unhealthy_threshold"); ok { - hchk.UnhealthyThreshold = int64(v.(int)) + url, err := replaceVars(d, config, "https://www.googleapis.com/compute/v1/projects/{{project}}/global/healthChecks") + if err != nil { + return err } - if v, ok := d.GetOk("tcp_health_check"); ok { - hchk.Type = "TCP" - tcpcheck := v.([]interface{})[0].(map[string]interface{}) - tcpHealthCheck := &compute.TCPHealthCheck{} - if val, ok := tcpcheck["port"]; ok { - tcpHealthCheck.Port = int64(val.(int)) - } - if val, ok := tcpcheck["proxy_header"]; ok { - tcpHealthCheck.ProxyHeader = val.(string) - } - if val, ok := tcpcheck["request"]; ok { - tcpHealthCheck.Request = val.(string) - } - if val, ok := tcpcheck["response"]; ok { - tcpHealthCheck.Response = val.(string) - } - hchk.TcpHealthCheck = tcpHealthCheck - } - - if v, ok := d.GetOk("ssl_health_check"); ok { - hchk.Type = "SSL" - sslcheck := v.([]interface{})[0].(map[string]interface{}) - sslHealthCheck := &compute.SSLHealthCheck{} - if val, ok := sslcheck["port"]; ok { - sslHealthCheck.Port = int64(val.(int)) - } - if val, ok := sslcheck["proxy_header"]; ok { - sslHealthCheck.ProxyHeader = val.(string) - } - if val, ok := sslcheck["request"]; ok { - sslHealthCheck.Request = val.(string) - } - if val, ok := sslcheck["response"]; ok { - sslHealthCheck.Response = val.(string) - } - hchk.SslHealthCheck = sslHealthCheck - } - - if v, ok := d.GetOk("http_health_check"); ok { - hchk.Type = "HTTP" - httpcheck := v.([]interface{})[0].(map[string]interface{}) - httpHealthCheck := &compute.HTTPHealthCheck{} - if val, ok := httpcheck["host"]; ok { - httpHealthCheck.Host = val.(string) - } - if val, ok := httpcheck["port"]; ok { - httpHealthCheck.Port = int64(val.(int)) - } - if val, ok := httpcheck["proxy_header"]; ok { - httpHealthCheck.ProxyHeader = val.(string) - } - if val, ok := httpcheck["request_path"]; ok { - httpHealthCheck.RequestPath = val.(string) - } - hchk.HttpHealthCheck = httpHealthCheck - } - - if v, ok := d.GetOk("https_health_check"); ok { - hchk.Type = "HTTPS" - httpscheck := v.([]interface{})[0].(map[string]interface{}) - httpsHealthCheck := &compute.HTTPSHealthCheck{} - if val, ok := httpscheck["host"]; ok { - httpsHealthCheck.Host = val.(string) - } - if val, ok := httpscheck["port"]; ok { - httpsHealthCheck.Port = int64(val.(int)) - } - if val, ok := httpscheck["proxy_header"]; ok { - httpsHealthCheck.ProxyHeader = val.(string) - } - if val, ok := httpscheck["request_path"]; ok { - httpsHealthCheck.RequestPath = val.(string) - } - hchk.HttpsHealthCheck = httpsHealthCheck - } - - log.Printf("[DEBUG] HealthCheck insert request: %#v", hchk) - op, err := config.clientCompute.HealthChecks.Insert( - project, hchk).Do() + log.Printf("[DEBUG] Creating new HealthCheck: %#v", obj) + res, err := sendRequest(config, "POST", url, obj) if err != nil { return fmt.Errorf("Error creating HealthCheck: %s", err) } - // It probably maybe worked, so store the ID now - d.SetId(hchk.Name) - - err = computeOperationWait(config.clientCompute, op, project, "Creating Health Check") + // Store the ID now + id, err := replaceVars(d, config, "{{name}}") if err != nil { 
- return err + return fmt.Errorf("Error constructing id: %s", err) } - - return resourceComputeHealthCheckRead(d, meta) -} - -func resourceComputeHealthCheckUpdate(d *schema.ResourceData, meta interface{}) error { - config := meta.(*Config) + d.SetId(id) project, err := getProject(d, config) if err != nil { return err } - - // Build the parameter - hchk := &compute.HealthCheck{ - Name: d.Get("name").(string), - } - - nullFields := make([]string, 0, 3) - - // Optional things - if v, ok := d.GetOk("description"); ok { - hchk.Description = v.(string) - } - if v, ok := d.GetOk("check_interval_sec"); ok { - hchk.CheckIntervalSec = int64(v.(int)) - } - if v, ok := d.GetOk("healthy_threshold"); ok { - hchk.HealthyThreshold = int64(v.(int)) - } - if v, ok := d.GetOk("timeout_sec"); ok { - hchk.TimeoutSec = int64(v.(int)) - } - if v, ok := d.GetOk("unhealthy_threshold"); ok { - hchk.UnhealthyThreshold = int64(v.(int)) - } - if v, ok := d.GetOk("tcp_health_check"); ok { - hchk.Type = "TCP" - tcpcheck := v.([]interface{})[0].(map[string]interface{}) - tcpHealthCheck := &compute.TCPHealthCheck{} - if val, ok := tcpcheck["port"]; ok { - tcpHealthCheck.Port = int64(val.(int)) - } - if val, ok := tcpcheck["proxy_header"]; ok { - tcpHealthCheck.ProxyHeader = val.(string) - } - if val, ok := tcpcheck["request"]; ok { - tcpHealthCheck.Request = val.(string) - } - if val, ok := tcpcheck["response"]; ok { - tcpHealthCheck.Response = val.(string) - } - hchk.TcpHealthCheck = tcpHealthCheck - } else { - nullFields = append(nullFields, "TcpHealthCheck") - } - if v, ok := d.GetOk("ssl_health_check"); ok { - hchk.Type = "SSL" - sslcheck := v.([]interface{})[0].(map[string]interface{}) - sslHealthCheck := &compute.SSLHealthCheck{} - if val, ok := sslcheck["port"]; ok { - sslHealthCheck.Port = int64(val.(int)) - } - if val, ok := sslcheck["proxy_header"]; ok { - sslHealthCheck.ProxyHeader = val.(string) - } - if val, ok := sslcheck["request"]; ok { - sslHealthCheck.Request = val.(string) - } - if val, ok := sslcheck["response"]; ok { - sslHealthCheck.Response = val.(string) - } - hchk.SslHealthCheck = sslHealthCheck - } else { - nullFields = append(nullFields, "SslHealthCheck") - } - if v, ok := d.GetOk("http_health_check"); ok { - hchk.Type = "HTTP" - httpcheck := v.([]interface{})[0].(map[string]interface{}) - httpHealthCheck := &compute.HTTPHealthCheck{} - if val, ok := httpcheck["host"]; ok { - httpHealthCheck.Host = val.(string) - } - if val, ok := httpcheck["port"]; ok { - httpHealthCheck.Port = int64(val.(int)) - } - if val, ok := httpcheck["proxy_header"]; ok { - httpHealthCheck.ProxyHeader = val.(string) - } - if val, ok := httpcheck["request_path"]; ok { - httpHealthCheck.RequestPath = val.(string) - } - hchk.HttpHealthCheck = httpHealthCheck - } else { - nullFields = append(nullFields, "HttpHealthCheck") - } - - if v, ok := d.GetOk("https_health_check"); ok { - hchk.Type = "HTTPS" - httpscheck := v.([]interface{})[0].(map[string]interface{}) - httpsHealthCheck := &compute.HTTPSHealthCheck{} - if val, ok := httpscheck["host"]; ok { - httpsHealthCheck.Host = val.(string) - } - if val, ok := httpscheck["port"]; ok { - httpsHealthCheck.Port = int64(val.(int)) - } - if val, ok := httpscheck["proxy_header"]; ok { - httpsHealthCheck.ProxyHeader = val.(string) - } - if val, ok := httpscheck["request_path"]; ok { - httpsHealthCheck.RequestPath = val.(string) - } - hchk.HttpsHealthCheck = httpsHealthCheck - } else { - nullFields = append(nullFields, "HttpsHealthCheck") - } - - hchk.NullFields = nullFields - - 
log.Printf("[DEBUG] HealthCheck patch request: %#v", hchk) - op, err := config.clientCompute.HealthChecks.Patch( - project, hchk.Name, hchk).Do() - if err != nil { - return fmt.Errorf("Error patching HealthCheck: %s", err) - } - - // It probably maybe worked, so store the ID now - d.SetId(hchk.Name) - - err = computeOperationWait(config.clientCompute, op, project, "Updating Health Check") + op := &compute.Operation{} + err = Convert(res, op) if err != nil { return err } + waitErr := computeOperationWaitTime( + config.clientCompute, op, project, "Creating HealthCheck", + int(d.Timeout(schema.TimeoutCreate).Minutes())) + + if waitErr != nil { + // The resource didn't actually create + d.SetId("") + return fmt.Errorf("Error waiting to create HealthCheck: %s", waitErr) + } + + log.Printf("[DEBUG] Finished creating HealthCheck %q: %#v", d.Id(), res) + return resourceComputeHealthCheckRead(d, meta) } func resourceComputeHealthCheckRead(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) - project, err := getProject(d, config) + url, err := replaceVars(d, config, "https://www.googleapis.com/compute/v1/projects/{{project}}/global/healthChecks/{{name}}") if err != nil { return err } - hchk, err := config.clientCompute.HealthChecks.Get( - project, d.Id()).Do() + res, err := sendRequest(config, "GET", url, nil) if err != nil { - return handleNotFoundError(err, d, fmt.Sprintf("Health Check %q", d.Get("name").(string))) + return handleNotFoundError(err, d, fmt.Sprintf("ComputeHealthCheck %q", d.Id())) } - d.Set("check_interval_sec", hchk.CheckIntervalSec) - d.Set("healthy_threshold", hchk.HealthyThreshold) - d.Set("timeout_sec", hchk.TimeoutSec) - d.Set("unhealthy_threshold", hchk.UnhealthyThreshold) - d.Set("tcp_health_check", flattenTcpHealthCheck(hchk.TcpHealthCheck)) - d.Set("ssl_health_check", flattenSslHealthCheck(hchk.SslHealthCheck)) - d.Set("http_health_check", flattenHttpHealthCheck(hchk.HttpHealthCheck)) - d.Set("https_health_check", flattenHttpsHealthCheck(hchk.HttpsHealthCheck)) - d.Set("self_link", hchk.SelfLink) - d.Set("name", hchk.Name) - d.Set("description", hchk.Description) - d.Set("project", project) + if err := d.Set("check_interval_sec", flattenComputeHealthCheckCheckIntervalSec(res["checkIntervalSec"])); err != nil { + return fmt.Errorf("Error reading HealthCheck: %s", err) + } + if err := d.Set("creation_timestamp", flattenComputeHealthCheckCreationTimestamp(res["creationTimestamp"])); err != nil { + return fmt.Errorf("Error reading HealthCheck: %s", err) + } + if err := d.Set("description", flattenComputeHealthCheckDescription(res["description"])); err != nil { + return fmt.Errorf("Error reading HealthCheck: %s", err) + } + if err := d.Set("healthy_threshold", flattenComputeHealthCheckHealthyThreshold(res["healthyThreshold"])); err != nil { + return fmt.Errorf("Error reading HealthCheck: %s", err) + } + if err := d.Set("name", flattenComputeHealthCheckName(res["name"])); err != nil { + return fmt.Errorf("Error reading HealthCheck: %s", err) + } + if err := d.Set("timeout_sec", flattenComputeHealthCheckTimeoutSec(res["timeoutSec"])); err != nil { + return fmt.Errorf("Error reading HealthCheck: %s", err) + } + if err := d.Set("unhealthy_threshold", flattenComputeHealthCheckUnhealthyThreshold(res["unhealthyThreshold"])); err != nil { + return fmt.Errorf("Error reading HealthCheck: %s", err) + } + if err := d.Set("type", flattenComputeHealthCheckType(res["type"])); err != nil { + return fmt.Errorf("Error reading HealthCheck: %s", err) + } + if err := 
d.Set("http_health_check", flattenComputeHealthCheckHttpHealthCheck(res["httpHealthCheck"])); err != nil { + return fmt.Errorf("Error reading HealthCheck: %s", err) + } + if err := d.Set("https_health_check", flattenComputeHealthCheckHttpsHealthCheck(res["httpsHealthCheck"])); err != nil { + return fmt.Errorf("Error reading HealthCheck: %s", err) + } + if err := d.Set("tcp_health_check", flattenComputeHealthCheckTcpHealthCheck(res["tcpHealthCheck"])); err != nil { + return fmt.Errorf("Error reading HealthCheck: %s", err) + } + if err := d.Set("ssl_health_check", flattenComputeHealthCheckSslHealthCheck(res["sslHealthCheck"])); err != nil { + return fmt.Errorf("Error reading HealthCheck: %s", err) + } + if err := d.Set("self_link", ConvertSelfLinkToV1(res["selfLink"].(string))); err != nil { + return fmt.Errorf("Error reading HealthCheck: %s", err) + } + project, err := getProject(d, config) + if err != nil { + return err + } + if err := d.Set("project", project); err != nil { + return fmt.Errorf("Error reading HealthCheck: %s", err) + } return nil } +func resourceComputeHealthCheckUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + obj := make(map[string]interface{}) + checkIntervalSecProp, err := expandComputeHealthCheckCheckIntervalSec(d.Get("check_interval_sec"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("check_interval_sec"); !isEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, checkIntervalSecProp)) { + obj["checkIntervalSec"] = checkIntervalSecProp + } + descriptionProp, err := expandComputeHealthCheckDescription(d.Get("description"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("description"); !isEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { + obj["description"] = descriptionProp + } + healthyThresholdProp, err := expandComputeHealthCheckHealthyThreshold(d.Get("healthy_threshold"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("healthy_threshold"); !isEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, healthyThresholdProp)) { + obj["healthyThreshold"] = healthyThresholdProp + } + nameProp, err := expandComputeHealthCheckName(d.Get("name"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("name"); !isEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, nameProp)) { + obj["name"] = nameProp + } + timeoutSecProp, err := expandComputeHealthCheckTimeoutSec(d.Get("timeout_sec"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("timeout_sec"); !isEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, timeoutSecProp)) { + obj["timeoutSec"] = timeoutSecProp + } + unhealthyThresholdProp, err := expandComputeHealthCheckUnhealthyThreshold(d.Get("unhealthy_threshold"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("unhealthy_threshold"); !isEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, unhealthyThresholdProp)) { + obj["unhealthyThreshold"] = unhealthyThresholdProp + } + httpHealthCheckProp, err := expandComputeHealthCheckHttpHealthCheck(d.Get("http_health_check"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("http_health_check"); !isEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, httpHealthCheckProp)) { + obj["httpHealthCheck"] = httpHealthCheckProp + } + httpsHealthCheckProp, err := 
expandComputeHealthCheckHttpsHealthCheck(d.Get("https_health_check"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("https_health_check"); !isEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, httpsHealthCheckProp)) { + obj["httpsHealthCheck"] = httpsHealthCheckProp + } + tcpHealthCheckProp, err := expandComputeHealthCheckTcpHealthCheck(d.Get("tcp_health_check"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("tcp_health_check"); !isEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, tcpHealthCheckProp)) { + obj["tcpHealthCheck"] = tcpHealthCheckProp + } + sslHealthCheckProp, err := expandComputeHealthCheckSslHealthCheck(d.Get("ssl_health_check"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("ssl_health_check"); !isEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, sslHealthCheckProp)) { + obj["sslHealthCheck"] = sslHealthCheckProp + } + + obj, err = resourceComputeHealthCheckEncoder(d, meta, obj) + + url, err := replaceVars(d, config, "https://www.googleapis.com/compute/v1/projects/{{project}}/global/healthChecks/{{name}}") + if err != nil { + return err + } + + log.Printf("[DEBUG] Updating HealthCheck %q: %#v", d.Id(), obj) + res, err := sendRequest(config, "PUT", url, obj) + + if err != nil { + return fmt.Errorf("Error updating HealthCheck %q: %s", d.Id(), err) + } + + project, err := getProject(d, config) + if err != nil { + return err + } + op := &compute.Operation{} + err = Convert(res, op) + if err != nil { + return err + } + + err = computeOperationWaitTime( + config.clientCompute, op, project, "Updating HealthCheck", + int(d.Timeout(schema.TimeoutUpdate).Minutes())) + + if err != nil { + return err + } + + return resourceComputeHealthCheckRead(d, meta) +} + func resourceComputeHealthCheckDelete(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) + url, err := replaceVars(d, config, "https://www.googleapis.com/compute/v1/projects/{{project}}/global/healthChecks/{{name}}") + if err != nil { + return err + } + + var obj map[string]interface{} + log.Printf("[DEBUG] Deleting HealthCheck %q", d.Id()) + res, err := sendRequest(config, "DELETE", url, obj) + if err != nil { + return handleNotFoundError(err, d, "HealthCheck") + } + project, err := getProject(d, config) if err != nil { return err } - - // Delete the HealthCheck - op, err := config.clientCompute.HealthChecks.Delete( - project, d.Id()).Do() - if err != nil { - return fmt.Errorf("Error deleting HealthCheck: %s", err) - } - - err = computeOperationWait(config.clientCompute, op, project, "Deleting Health Check") + op := &compute.Operation{} + err = Convert(res, op) if err != nil { return err } - d.SetId("") + err = computeOperationWaitTime( + config.clientCompute, op, project, "Deleting HealthCheck", + int(d.Timeout(schema.TimeoutDelete).Minutes())) + + if err != nil { + return err + } + + log.Printf("[DEBUG] Finished deleting HealthCheck %q: %#v", d.Id(), res) return nil } -func flattenTcpHealthCheck(hchk *compute.TCPHealthCheck) []map[string]interface{} { - if hchk == nil { - return nil - } +func resourceComputeHealthCheckImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + config := meta.(*Config) + parseImportId([]string{"projects/(?P[^/]+)/global/healthChecks/(?P[^/]+)", "(?P[^/]+)/(?P[^/]+)", "(?P[^/]+)"}, d, config) - result := make([]map[string]interface{}, 0, 1) - data := make(map[string]interface{}) - data["port"] = hchk.Port - 
data["proxy_header"] = hchk.ProxyHeader - data["request"] = hchk.Request - data["response"] = hchk.Response - result = append(result, data) - return result + // Replace import id for the resource id + id, err := replaceVars(d, config, "{{name}}") + if err != nil { + return nil, fmt.Errorf("Error constructing id: %s", err) + } + d.SetId(id) + + return []*schema.ResourceData{d}, nil } -func flattenSslHealthCheck(hchk *compute.SSLHealthCheck) []map[string]interface{} { - if hchk == nil { - return nil +func flattenComputeHealthCheckCheckIntervalSec(v interface{}) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if intVal, err := strconv.ParseInt(strVal, 10, 64); err == nil { + return intVal + } // let terraform core handle it if we can't convert the string to an int. } - - result := make([]map[string]interface{}, 0, 1) - data := make(map[string]interface{}) - data["port"] = hchk.Port - data["proxy_header"] = hchk.ProxyHeader - data["request"] = hchk.Request - data["response"] = hchk.Response - result = append(result, data) - return result + return v } -func flattenHttpHealthCheck(hchk *compute.HTTPHealthCheck) []map[string]interface{} { - if hchk == nil { - return nil - } - - result := make([]map[string]interface{}, 0, 1) - data := make(map[string]interface{}) - data["host"] = hchk.Host - data["port"] = hchk.Port - data["proxy_header"] = hchk.ProxyHeader - data["request_path"] = hchk.RequestPath - result = append(result, data) - return result +func flattenComputeHealthCheckCreationTimestamp(v interface{}) interface{} { + return v } -func flattenHttpsHealthCheck(hchk *compute.HTTPSHealthCheck) []map[string]interface{} { - if hchk == nil { +func flattenComputeHealthCheckDescription(v interface{}) interface{} { + return v +} + +func flattenComputeHealthCheckHealthyThreshold(v interface{}) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if intVal, err := strconv.ParseInt(strVal, 10, 64); err == nil { + return intVal + } // let terraform core handle it if we can't convert the string to an int. + } + return v +} + +func flattenComputeHealthCheckName(v interface{}) interface{} { + return v +} + +func flattenComputeHealthCheckTimeoutSec(v interface{}) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if intVal, err := strconv.ParseInt(strVal, 10, 64); err == nil { + return intVal + } // let terraform core handle it if we can't convert the string to an int. + } + return v +} + +func flattenComputeHealthCheckUnhealthyThreshold(v interface{}) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if intVal, err := strconv.ParseInt(strVal, 10, 64); err == nil { + return intVal + } // let terraform core handle it if we can't convert the string to an int. 
+ } + return v +} + +func flattenComputeHealthCheckType(v interface{}) interface{} { + return v +} + +func flattenComputeHealthCheckHttpHealthCheck(v interface{}) interface{} { + if v == nil { return nil } - - result := make([]map[string]interface{}, 0, 1) - data := make(map[string]interface{}) - data["host"] = hchk.Host - data["port"] = hchk.Port - data["proxy_header"] = hchk.ProxyHeader - data["request_path"] = hchk.RequestPath - result = append(result, data) - return result + original := v.(map[string]interface{}) + transformed := make(map[string]interface{}) + transformed["host"] = + flattenComputeHealthCheckHttpHealthCheckHost(original["host"]) + transformed["request_path"] = + flattenComputeHealthCheckHttpHealthCheckRequestPath(original["requestPath"]) + transformed["port"] = + flattenComputeHealthCheckHttpHealthCheckPort(original["port"]) + transformed["proxy_header"] = + flattenComputeHealthCheckHttpHealthCheckProxyHeader(original["proxyHeader"]) + return []interface{}{transformed} +} +func flattenComputeHealthCheckHttpHealthCheckHost(v interface{}) interface{} { + return v +} + +func flattenComputeHealthCheckHttpHealthCheckRequestPath(v interface{}) interface{} { + return v +} + +func flattenComputeHealthCheckHttpHealthCheckPort(v interface{}) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if intVal, err := strconv.ParseInt(strVal, 10, 64); err == nil { + return intVal + } // let terraform core handle it if we can't convert the string to an int. + } + return v +} + +func flattenComputeHealthCheckHttpHealthCheckProxyHeader(v interface{}) interface{} { + return v +} + +func flattenComputeHealthCheckHttpsHealthCheck(v interface{}) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + transformed := make(map[string]interface{}) + transformed["host"] = + flattenComputeHealthCheckHttpsHealthCheckHost(original["host"]) + transformed["request_path"] = + flattenComputeHealthCheckHttpsHealthCheckRequestPath(original["requestPath"]) + transformed["port"] = + flattenComputeHealthCheckHttpsHealthCheckPort(original["port"]) + transformed["proxy_header"] = + flattenComputeHealthCheckHttpsHealthCheckProxyHeader(original["proxyHeader"]) + return []interface{}{transformed} +} +func flattenComputeHealthCheckHttpsHealthCheckHost(v interface{}) interface{} { + return v +} + +func flattenComputeHealthCheckHttpsHealthCheckRequestPath(v interface{}) interface{} { + return v +} + +func flattenComputeHealthCheckHttpsHealthCheckPort(v interface{}) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if intVal, err := strconv.ParseInt(strVal, 10, 64); err == nil { + return intVal + } // let terraform core handle it if we can't convert the string to an int. 
+ } + return v +} + +func flattenComputeHealthCheckHttpsHealthCheckProxyHeader(v interface{}) interface{} { + return v +} + +func flattenComputeHealthCheckTcpHealthCheck(v interface{}) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + transformed := make(map[string]interface{}) + transformed["request"] = + flattenComputeHealthCheckTcpHealthCheckRequest(original["request"]) + transformed["response"] = + flattenComputeHealthCheckTcpHealthCheckResponse(original["response"]) + transformed["port"] = + flattenComputeHealthCheckTcpHealthCheckPort(original["port"]) + transformed["proxy_header"] = + flattenComputeHealthCheckTcpHealthCheckProxyHeader(original["proxyHeader"]) + return []interface{}{transformed} +} +func flattenComputeHealthCheckTcpHealthCheckRequest(v interface{}) interface{} { + return v +} + +func flattenComputeHealthCheckTcpHealthCheckResponse(v interface{}) interface{} { + return v +} + +func flattenComputeHealthCheckTcpHealthCheckPort(v interface{}) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if intVal, err := strconv.ParseInt(strVal, 10, 64); err == nil { + return intVal + } // let terraform core handle it if we can't convert the string to an int. + } + return v +} + +func flattenComputeHealthCheckTcpHealthCheckProxyHeader(v interface{}) interface{} { + return v +} + +func flattenComputeHealthCheckSslHealthCheck(v interface{}) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + transformed := make(map[string]interface{}) + transformed["request"] = + flattenComputeHealthCheckSslHealthCheckRequest(original["request"]) + transformed["response"] = + flattenComputeHealthCheckSslHealthCheckResponse(original["response"]) + transformed["port"] = + flattenComputeHealthCheckSslHealthCheckPort(original["port"]) + transformed["proxy_header"] = + flattenComputeHealthCheckSslHealthCheckProxyHeader(original["proxyHeader"]) + return []interface{}{transformed} +} +func flattenComputeHealthCheckSslHealthCheckRequest(v interface{}) interface{} { + return v +} + +func flattenComputeHealthCheckSslHealthCheckResponse(v interface{}) interface{} { + return v +} + +func flattenComputeHealthCheckSslHealthCheckPort(v interface{}) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if intVal, err := strconv.ParseInt(strVal, 10, 64); err == nil { + return intVal + } // let terraform core handle it if we can't convert the string to an int. 
+ } + return v +} + +func flattenComputeHealthCheckSslHealthCheckProxyHeader(v interface{}) interface{} { + return v +} + +func expandComputeHealthCheckCheckIntervalSec(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckDescription(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckHealthyThreshold(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckName(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckTimeoutSec(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckUnhealthyThreshold(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckHttpHealthCheck(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedHost, err := expandComputeHealthCheckHttpHealthCheckHost(original["host"], d, config) + if err != nil { + return nil, err + } + transformed["host"] = transformedHost + transformedRequestPath, err := expandComputeHealthCheckHttpHealthCheckRequestPath(original["request_path"], d, config) + if err != nil { + return nil, err + } + transformed["requestPath"] = transformedRequestPath + transformedPort, err := expandComputeHealthCheckHttpHealthCheckPort(original["port"], d, config) + if err != nil { + return nil, err + } + transformed["port"] = transformedPort + transformedProxyHeader, err := expandComputeHealthCheckHttpHealthCheckProxyHeader(original["proxy_header"], d, config) + if err != nil { + return nil, err + } + transformed["proxyHeader"] = transformedProxyHeader + return transformed, nil +} + +func expandComputeHealthCheckHttpHealthCheckHost(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckHttpHealthCheckRequestPath(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckHttpHealthCheckPort(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckHttpHealthCheckProxyHeader(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckHttpsHealthCheck(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedHost, err := expandComputeHealthCheckHttpsHealthCheckHost(original["host"], d, config) + if err != nil { + return nil, err + } + transformed["host"] = transformedHost + transformedRequestPath, err := expandComputeHealthCheckHttpsHealthCheckRequestPath(original["request_path"], d, config) + if err != nil { + return nil, err + } + transformed["requestPath"] = transformedRequestPath + transformedPort, err := expandComputeHealthCheckHttpsHealthCheckPort(original["port"], d, config) + if err != nil { + return nil, err + } + transformed["port"] 
= transformedPort + transformedProxyHeader, err := expandComputeHealthCheckHttpsHealthCheckProxyHeader(original["proxy_header"], d, config) + if err != nil { + return nil, err + } + transformed["proxyHeader"] = transformedProxyHeader + return transformed, nil +} + +func expandComputeHealthCheckHttpsHealthCheckHost(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckHttpsHealthCheckRequestPath(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckHttpsHealthCheckPort(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckHttpsHealthCheckProxyHeader(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckTcpHealthCheck(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedRequest, err := expandComputeHealthCheckTcpHealthCheckRequest(original["request"], d, config) + if err != nil { + return nil, err + } + transformed["request"] = transformedRequest + transformedResponse, err := expandComputeHealthCheckTcpHealthCheckResponse(original["response"], d, config) + if err != nil { + return nil, err + } + transformed["response"] = transformedResponse + transformedPort, err := expandComputeHealthCheckTcpHealthCheckPort(original["port"], d, config) + if err != nil { + return nil, err + } + transformed["port"] = transformedPort + transformedProxyHeader, err := expandComputeHealthCheckTcpHealthCheckProxyHeader(original["proxy_header"], d, config) + if err != nil { + return nil, err + } + transformed["proxyHeader"] = transformedProxyHeader + return transformed, nil +} + +func expandComputeHealthCheckTcpHealthCheckRequest(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckTcpHealthCheckResponse(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckTcpHealthCheckPort(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckTcpHealthCheckProxyHeader(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckSslHealthCheck(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedRequest, err := expandComputeHealthCheckSslHealthCheckRequest(original["request"], d, config) + if err != nil { + return nil, err + } + transformed["request"] = transformedRequest + transformedResponse, err := expandComputeHealthCheckSslHealthCheckResponse(original["response"], d, config) + if err != nil { + return nil, err + } + transformed["response"] = transformedResponse + transformedPort, err := expandComputeHealthCheckSslHealthCheckPort(original["port"], d, config) + if err != nil { + return nil, err + } + transformed["port"] = transformedPort + transformedProxyHeader, err := 
expandComputeHealthCheckSslHealthCheckProxyHeader(original["proxy_header"], d, config) + if err != nil { + return nil, err + } + transformed["proxyHeader"] = transformedProxyHeader + return transformed, nil +} + +func expandComputeHealthCheckSslHealthCheckRequest(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckSslHealthCheckResponse(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckSslHealthCheckPort(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandComputeHealthCheckSslHealthCheckProxyHeader(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func resourceComputeHealthCheckEncoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { + if _, ok := d.GetOk("http_health_check"); ok { + obj["type"] = "HTTP" + return obj, nil + } + if _, ok := d.GetOk("https_health_check"); ok { + obj["type"] = "HTTPS" + return obj, nil + } + if _, ok := d.GetOk("tcp_health_check"); ok { + obj["type"] = "TCP" + return obj, nil + } + if _, ok := d.GetOk("ssl_health_check"); ok { + obj["type"] = "SSL" + return obj, nil + } + + return nil, fmt.Errorf("error in HealthCheck %s: No health check block specified.", d.Get("name").(string)) } diff --git a/website/docs/r/compute_health_check.html.markdown b/website/docs/r/compute_health_check.html.markdown index 26df46a0..ec8167f8 100644 --- a/website/docs/r/compute_health_check.html.markdown +++ b/website/docs/r/compute_health_check.html.markdown @@ -1,32 +1,56 @@ --- +# ---------------------------------------------------------------------------- +# +# *** AUTO GENERATED CODE *** AUTO GENERATED CODE *** +# +# ---------------------------------------------------------------------------- +# +# This file is automatically generated by Magic Modules and manual +# changes will be clobbered when the file is regenerated. +# +# Please read more about how to change this file in +# .github/CONTRIBUTING.md. +# +# ---------------------------------------------------------------------------- layout: "google" page_title: "Google: google_compute_health_check" sidebar_current: "docs-google-compute-health-check" description: |- - Manages a Health Check within GCE. + Health Checks determine whether instances are responsive and able to do work. --- # google\_compute\_health\_check -Manages a health check within GCE. This is used to monitor instances -behind load balancers. Timeouts or HTTP errors cause the instance to be -removed from the pool. For more information, see [the official -documentation](https://cloud.google.com/compute/docs/load-balancing/health-checks) -and -[API](https://cloud.google.com/compute/docs/reference/latest/healthChecks). +Health Checks determine whether instances are responsive and able to do work. +They are an important part of a comprehensive load balancing configuration, +as they enable monitoring instances behind load balancers. + +Health Checks poll instances at a specified interval. Instances that +do not respond successfully to some number of probes in a row are marked +as unhealthy. No new connections are sent to unhealthy instances, +though existing connections will continue. The health check will +continue to poll unhealthy instances. 
If an instance later responds +successfully to some number of consecutive probes, it is marked +healthy again and can receive new connections. + +To get more information about HealthCheck, see: + +* [API documentation](https://cloud.google.com/compute/docs/reference/rest/latest/healthChecks) +* How-to Guides + * [Official Documentation](https://cloud.google.com/load-balancing/docs/health-checks) ## Example Usage -```tf -resource "google_compute_health_check" "default" { - name = "internal-service-health-check" +```hcl +resource "google_compute_health_check" "internal-health-check" { + name = "internal-service-health-check" - timeout_sec = 1 - check_interval_sec = 1 + timeout_sec = 1 + check_interval_sec = 1 - tcp_health_check { - port = "80" - } + tcp_health_check { + port = "80" + } } ``` @@ -34,100 +58,190 @@ resource "google_compute_health_check" "default" { The following arguments are supported: -* `name` - (Required) A unique name for the resource, required by GCE. - Changing this forces a new resource to be created. + +* `name` - + (Required) + Name of the resource. Provided by the client when the resource is + created. The name must be 1-63 characters long, and comply with + RFC1035. Specifically, the name must be 1-63 characters long and + match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?` which means + the first character must be a lowercase letter, and all following + characters must be a dash, lowercase letter, or digit, except the + last character, which cannot be a dash. + - - - -* `check_interval_sec` - (Optional) The number of seconds between each poll of - the instance instance (default 5). -* `description` - (Optional) Textual description field. +* `check_interval_sec` - + (Optional) + How often (in seconds) to send a health check. The default value is 5 + seconds. -* `healthy_threshold` - (Optional) Consecutive successes required (default 2). +* `description` - + (Optional) + An optional description of this resource. Provide this property when + you create the resource. -* `http_health_check` - (Optional) An HTTP Health Check. Only one kind of Health Check can be added. - Structure is documented below. +* `healthy_threshold` - + (Optional) + A so-far unhealthy instance will be marked healthy after this many + consecutive successes. The default value is 2. -* `https_health_check` - (Optional) An HTTPS Health Check. Only one kind of Health Check can be added. - Structure is documented below. +* `timeout_sec` - + (Optional) + How long (in seconds) to wait before claiming failure. + The default value is 5 seconds. It is invalid for timeoutSec to have + greater value than checkIntervalSec. -* `ssl_health_check` - (Optional) An SSL Health Check. Only one kind of Health Check can be added. - Structure is documented below. +* `unhealthy_threshold` - + (Optional) + A so-far healthy instance will be marked unhealthy after this many + consecutive failures. The default value is 2. -* `tcp_health_check` - (Optional) A TCP Health Check. Only one kind of Health Check can be added. - Structure is documented below. +* `http_health_check` - + (Optional) + A nested object resource Structure is documented below. -* `project` - (Optional) The project in which the resource belongs. If it - is not provided, the provider project is used. +* `https_health_check` - + (Optional) + A nested object resource Structure is documented below. -* `timeout_sec` - (Optional) The number of seconds to wait before declaring - failure (default 5). 
+* `tcp_health_check` - + (Optional) + A nested object resource Structure is documented below. -* `unhealthy_threshold` - (Optional) Consecutive failures required (default 2). +* `ssl_health_check` - + (Optional) + A nested object resource Structure is documented below. +* `project` - (Optional) The ID of the project in which the resource belongs. + If it is not provided, the provider project is used. The `http_health_check` block supports: -* `host` - (Optional) HTTP host header field (default instance's public ip). +* `host` - + (Optional) + The value of the host header in the HTTP health check request. + If left empty (default value), the public IP on behalf of which this health + check is performed will be used. -* `port` - (Optional) TCP port to connect to (default 80). +* `request_path` - + (Optional) + The request path of the HTTP health check request. + The default value is /. -* `proxy_header` - (Optional) Type of proxy header to append before sending - data to the backend, either NONE or PROXY_V1 (default NONE). - -* `request_path` - (Optional) URL path to query (default /). +* `port` - + (Optional) + The TCP port number for the HTTP health check request. + The default value is 80. +* `proxy_header` - + (Optional) + Specifies the type of proxy header to append before sending data to the + backend, either NONE or PROXY_V1. The default is NONE. The `https_health_check` block supports: -* `host` - (Optional) HTTPS host header field (default instance's public ip). +* `host` - + (Optional) + The value of the host header in the HTTPS health check request. + If left empty (default value), the public IP on behalf of which this health + check is performed will be used. -* `port` - (Optional) TCP port to connect to (default 443). +* `request_path` - + (Optional) + The request path of the HTTPS health check request. + The default value is /. -* `proxy_header` - (Optional) Type of proxy header to append before sending - data to the backend, either NONE or PROXY_V1 (default NONE). - -* `request_path` - (Optional) URL path to query (default /). - - -The `ssl_health_check` block supports: - -* `port` - (Optional) TCP port to connect to (default 443). - -* `proxy_header` - (Optional) Type of proxy header to append before sending - data to the backend, either NONE or PROXY_V1 (default NONE). - -* `request` - (Optional) Application data to send once the SSL connection has - been established (default ""). - -* `response` - (Optional) The response that indicates health (default "") +* `port` - + (Optional) + The TCP port number for the HTTPS health check request. + The default value is 443. +* `proxy_header` - + (Optional) + Specifies the type of proxy header to append before sending data to the + backend, either NONE or PROXY_V1. The default is NONE. The `tcp_health_check` block supports: -* `port` - (Optional) TCP port to connect to (default 80). +* `request` - + (Optional) + The application data to send once the TCP connection has been + established (default value is empty). If both request and response are + empty, the connection establishment alone will indicate health. The request + data can only be ASCII. -* `proxy_header` - (Optional) Type of proxy header to append before sending - data to the backend, either NONE or PROXY_V1 (default NONE). +* `response` - + (Optional) + The bytes to match against the beginning of the response data. If left empty + (the default value), any response will indicate health. The response data + can only be ASCII. 
-* `request` - (Optional) Application data to send once the TCP connection has - been established (default ""). +* `port` - + (Optional) + The TCP port number for the TCP health check request. + The default value is 443. -* `response` - (Optional) The response that indicates health (default "") +* `proxy_header` - + (Optional) + Specifies the type of proxy header to append before sending data to the + backend, either NONE or PROXY_V1. The default is NONE. +The `ssl_health_check` block supports: + +* `request` - + (Optional) + The application data to send once the SSL connection has been + established (default value is empty). If both request and response are + empty, the connection establishment alone will indicate health. The request + data can only be ASCII. + +* `response` - + (Optional) + The bytes to match against the beginning of the response data. If left empty + (the default value), any response will indicate health. The response data + can only be ASCII. + +* `port` - + (Optional) + The TCP port number for the SSL health check request. + The default value is 443. + +* `proxy_header` - + (Optional) + Specifies the type of proxy header to append before sending data to the + backend, either NONE or PROXY_V1. The default is NONE. ## Attributes Reference -In addition to the arguments listed above, the following computed attributes are -exported: +In addition to the arguments listed above, the following computed attributes are exported: + +* `creation_timestamp` - + Creation timestamp in RFC3339 text format. + +* `type` - + The type of the health check. One of HTTP, HTTPS, TCP, or SSL. * `self_link` - The URI of the created resource. + +## Timeouts + +This resource provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - Default is 4 minutes. +- `update` - Default is 4 minutes. +- `delete` - Default is 4 minutes. + ## Import -Health checks can be imported using the `name`, e.g. 
+HealthCheck can be imported using any of these accepted formats: ``` -$ terraform import google_compute_health_check.default internal-service-health-check +$ terraform import google_compute_health_check.default projects/{{project}}/global/healthChecks/{{name}} +$ terraform import google_compute_health_check.default {{project}}/{{name}} +$ terraform import google_compute_health_check.default {{name}} ``` From 10ec7b2ca9a5d191f2eb14b577f2202e2349f6f8 Mon Sep 17 00:00:00 2001 From: emily Date: Tue, 28 Aug 2018 11:37:07 -0700 Subject: [PATCH 06/31] Use beta API location for google_container_engine_versions (#1939) * use beta API location for data source * doc fixes * use getLocation * add note to docs about required locations --- ...source_google_container_engine_versions.go | 22 ++++++++++++++---- ...e_google_container_engine_versions_test.go | 23 +++++++++++++++++++ ...le_container_engine_versions.html.markdown | 10 ++++++-- 3 files changed, 48 insertions(+), 7 deletions(-) diff --git a/google/data_source_google_container_engine_versions.go b/google/data_source_google_container_engine_versions.go index b0385e70..6cd4bd94 100644 --- a/google/data_source_google_container_engine_versions.go +++ b/google/data_source_google_container_engine_versions.go @@ -19,6 +19,11 @@ func dataSourceGoogleContainerEngineVersions() *schema.Resource { Type: schema.TypeString, Optional: true, }, + "region": { + Type: schema.TypeString, + Optional: true, + ConflictsWith: []string{"zone"}, + }, "default_cluster_version": { Type: schema.TypeString, Computed: true, @@ -53,12 +58,16 @@ func dataSourceGoogleContainerEngineVersionsRead(d *schema.ResourceData, meta in return err } - zone, err := getZone(d, meta.(*Config)) + location, err := getLocation(d, config) if err != nil { return err } + if len(location) == 0 { + return fmt.Errorf("Cannot determine location: set zone or region in this data source or at provider-level") + } - resp, err := config.clientContainer.Projects.Zones.GetServerconfig(project, zone).Do() + location = fmt.Sprintf("projects/%s/locations/%s", project, location) + resp, err := config.clientContainerBeta.Projects.Locations.GetServerConfig(location).Do() if err != nil { return fmt.Errorf("Error retrieving available container cluster versions: %s", err.Error()) } @@ -66,10 +75,13 @@ func dataSourceGoogleContainerEngineVersionsRead(d *schema.ResourceData, meta in d.Set("valid_master_versions", resp.ValidMasterVersions) d.Set("default_cluster_version", resp.DefaultClusterVersion) d.Set("valid_node_versions", resp.ValidNodeVersions) - d.Set("latest_master_version", resp.ValidMasterVersions[0]) - d.Set("latest_node_version", resp.ValidNodeVersions[0]) + if len(resp.ValidMasterVersions) > 0 { + d.Set("latest_master_version", resp.ValidMasterVersions[0]) + } + if len(resp.ValidNodeVersions) > 0 { + d.Set("latest_node_version", resp.ValidNodeVersions[0]) + } d.SetId(time.Now().UTC().String()) - return nil } diff --git a/google/data_source_google_container_engine_versions_test.go b/google/data_source_google_container_engine_versions_test.go index 455db568..4a6cef8b 100644 --- a/google/data_source_google_container_engine_versions_test.go +++ b/google/data_source_google_container_engine_versions_test.go @@ -27,6 +27,23 @@ func TestAccContainerEngineVersions_basic(t *testing.T) { }) } +func TestAccContainerEngineVersions_regional(t *testing.T) { + t.Parallel() + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + 
Config: testAccCheckGoogleContainerEngineVersionsRegionalConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleContainerEngineVersionsMeta("data.google_container_engine_versions.versions"), + ), + }, + }, + }) +} + func testAccCheckGoogleContainerEngineVersionsMeta(n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -102,3 +119,9 @@ data "google_container_engine_versions" "versions" { zone = "us-central1-b" } ` + +var testAccCheckGoogleContainerEngineVersionsRegionalConfig = ` +data "google_container_engine_versions" "versions" { + region = "us-central1" +} +` diff --git a/website/docs/d/google_container_engine_versions.html.markdown b/website/docs/d/google_container_engine_versions.html.markdown index 95275b71..c32e1013 100644 --- a/website/docs/d/google_container_engine_versions.html.markdown +++ b/website/docs/d/google_container_engine_versions.html.markdown @@ -8,7 +8,7 @@ description: |- # google\_container\_engine\_versions -Provides access to available Google Container Engine versions in a zone for a given project. +Provides access to available Google Container Engine versions in a zone or region for a given project. ```hcl data "google_container_engine_versions" "central1b" { @@ -32,7 +32,13 @@ resource "google_container_cluster" "foo" { The following arguments are supported: -* `zone` (required) - Zone to list available cluster versions for. Should match the zone the cluster will be deployed in. +* `zone` (optional) - Zone to list available cluster versions for. Should match the zone the cluster will be deployed in. + If not specified, the provider-level zone is used. One of zone, region, or provider-level zone is required. + +* `region` (optional) - Region to list available cluster versions for. Should match the region the cluster will be deployed in. + For regional clusters, this value must be specified and cannot be inferred from provider-level region. One of zone, + region, or provider-level zone is required. + * `project` (optional) - ID of the project to list available cluster versions for. Should match the project the cluster will be deployed to. Defaults to the project that the provider is authenticated with. From 2afafa0573290e4af6f2f0b6f1192d6f5865cd8b Mon Sep 17 00:00:00 2001 From: The Magician Date: Tue, 28 Aug 2018 16:48:00 -0700 Subject: [PATCH 07/31] Magic Modules changes. 
(#1951) --- google/resource_compute_firewall.go | 27 ++++++ google/resource_compute_firewall_test.go | 95 +++++++++++++++++-- website/docs/r/compute_firewall.html.markdown | 6 ++ 3 files changed, 119 insertions(+), 9 deletions(-) diff --git a/google/resource_compute_firewall.go b/google/resource_compute_firewall.go index 9a782f9b..35273aab 100644 --- a/google/resource_compute_firewall.go +++ b/google/resource_compute_firewall.go @@ -145,6 +145,10 @@ func resourceComputeFirewall() *schema.Resource { Type: schema.TypeBool, Optional: true, }, + "enable_logging": { + Type: schema.TypeBool, + Optional: true, + }, "priority": { Type: schema.TypeInt, Optional: true, @@ -254,6 +258,12 @@ func resourceComputeFirewallCreate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("disabled"); ok || !reflect.DeepEqual(v, disabledProp) { obj["disabled"] = disabledProp } + enableLoggingProp, err := expandComputeFirewallEnableLogging(d.Get("enable_logging"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("enable_logging"); ok || !reflect.DeepEqual(v, enableLoggingProp) { + obj["enableLogging"] = enableLoggingProp + } nameProp, err := expandComputeFirewallName(d.Get("name"), d, config) if err != nil { return err @@ -380,6 +390,9 @@ func resourceComputeFirewallRead(d *schema.ResourceData, meta interface{}) error if err := d.Set("disabled", flattenComputeFirewallDisabled(res["disabled"])); err != nil { return fmt.Errorf("Error reading Firewall: %s", err) } + if err := d.Set("enable_logging", flattenComputeFirewallEnableLogging(res["enableLogging"])); err != nil { + return fmt.Errorf("Error reading Firewall: %s", err) + } if err := d.Set("name", flattenComputeFirewallName(res["name"])); err != nil { return fmt.Errorf("Error reading Firewall: %s", err) } @@ -458,6 +471,12 @@ func resourceComputeFirewallUpdate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("disabled"); ok || !reflect.DeepEqual(v, disabledProp) { obj["disabled"] = disabledProp } + enableLoggingProp, err := expandComputeFirewallEnableLogging(d.Get("enable_logging"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("enable_logging"); ok || !reflect.DeepEqual(v, enableLoggingProp) { + obj["enableLogging"] = enableLoggingProp + } nameProp, err := expandComputeFirewallName(d.Get("name"), d, config) if err != nil { return err @@ -660,6 +679,10 @@ func flattenComputeFirewallDisabled(v interface{}) interface{} { return v } +func flattenComputeFirewallEnableLogging(v interface{}) interface{} { + return v +} + func flattenComputeFirewallName(v interface{}) interface{} { return v } @@ -795,6 +818,10 @@ func expandComputeFirewallDisabled(v interface{}, d *schema.ResourceData, config return v, nil } +func expandComputeFirewallEnableLogging(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { + return v, nil +} + func expandComputeFirewallName(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) { return v, nil } diff --git a/google/resource_compute_firewall_test.go b/google/resource_compute_firewall_test.go index 40a3932b..3b4fb766 100644 --- a/google/resource_compute_firewall_test.go +++ b/google/resource_compute_firewall_test.go @@ -283,6 +283,48 @@ func TestAccComputeFirewall_disabled(t *testing.T) { }) } +func TestAccComputeFirewall_enableLogging(t *testing.T) { + t.Parallel() + + var firewall computeBeta.Firewall + networkName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) + 
firewallName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeFirewallDestroy, + Steps: []resource.TestStep{ + { + Config: testAccComputeFirewall_enableLogging(networkName, firewallName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeBetaFirewallExists("google_compute_firewall.foobar", &firewall), + testAccCheckComputeFirewallLoggingEnabled(&firewall, false), + ), + }, + { + ResourceName: "google_compute_firewall.foobar", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccComputeFirewall_enableLogging(networkName, firewallName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeBetaFirewallExists("google_compute_firewall.foobar", &firewall), + testAccCheckComputeFirewallLoggingEnabled(&firewall, true), + ), + }, + { + Config: testAccComputeFirewall_enableLogging(networkName, firewallName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeBetaFirewallExists("google_compute_firewall.foobar", &firewall), + testAccCheckComputeFirewallLoggingEnabled(&firewall, false), + ), + }, + }, + }) +} + func testAccCheckComputeFirewallDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) @@ -330,15 +372,6 @@ func testAccCheckComputeFirewallExists(n string, firewall *compute.Firewall) res } } -func testAccCheckComputeFirewallHasPriority(firewall *compute.Firewall, priority int) resource.TestCheckFunc { - return func(s *terraform.State) error { - if firewall.Priority != int64(priority) { - return fmt.Errorf("Priority for firewall does not match: expected %d, found %d", priority, firewall.Priority) - } - return nil - } -} - func testAccCheckComputeBetaFirewallExists(n string, firewall *computeBeta.Firewall) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -368,6 +401,15 @@ func testAccCheckComputeBetaFirewallExists(n string, firewall *computeBeta.Firew } } +func testAccCheckComputeFirewallHasPriority(firewall *compute.Firewall, priority int) resource.TestCheckFunc { + return func(s *terraform.State) error { + if firewall.Priority != int64(priority) { + return fmt.Errorf("Priority for firewall does not match: expected %d, found %d", priority, firewall.Priority) + } + return nil + } +} + func testAccCheckComputeFirewallPorts( firewall *compute.Firewall, ports string) resource.TestCheckFunc { return func(s *terraform.State) error { @@ -444,6 +486,15 @@ func testAccCheckComputeFirewallApiVersion(firewall *compute.Firewall) resource. 
} } +func testAccCheckComputeFirewallLoggingEnabled(firewall *computeBeta.Firewall, enabled bool) resource.TestCheckFunc { + return func(s *terraform.State) error { + if firewall == nil || firewall.EnableLogging != enabled { + return fmt.Errorf("expected firewall enable_logging to be %t, got %t", enabled, firewall.EnableLogging) + } + return nil + } +} + func testAccComputeFirewall_basic(network, firewall string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { @@ -618,3 +669,29 @@ func testAccComputeFirewall_disabled(network, firewall string) string { disabled = true }`, network, firewall) } + +func testAccComputeFirewall_enableLogging(network, firewall string, enableLogging bool) string { + enableLoggingCfg := "" + if enableLogging { + enableLoggingCfg = "enable_logging= true" + } + return fmt.Sprintf(` + resource "google_compute_network" "foobar" { + name = "%s" + auto_create_subnetworks = false + ipv4_range = "10.0.0.0/16" + } + + resource "google_compute_firewall" "foobar" { + name = "firewall-test-%s" + description = "Resource created for Terraform acceptance testing" + network = "${google_compute_network.foobar.name}" + source_tags = ["foo"] + + allow { + protocol = "icmp" + } + + %s + }`, network, firewall, enableLoggingCfg) +} diff --git a/website/docs/r/compute_firewall.html.markdown b/website/docs/r/compute_firewall.html.markdown index 677890c7..7ee53fc9 100644 --- a/website/docs/r/compute_firewall.html.markdown +++ b/website/docs/r/compute_firewall.html.markdown @@ -124,6 +124,12 @@ The following arguments are supported: not enforced and the network behaves as if it did not exist. If this is unspecified, the firewall rule will be enabled. +* `enable_logging` - + (Optional) + This field denotes whether to enable logging for a particular + firewall rule. If logging is enabled, logs will be exported to + Stackdriver. + * `priority` - (Optional) Priority for this rule. 
This is an integer between 0 and 65535, both From 27ade47e5a153f76a7a274a40f057fcd3097b7a8 Mon Sep 17 00:00:00 2001 From: emily Date: Tue, 28 Aug 2018 16:51:37 -0700 Subject: [PATCH 08/31] Fixes for importing sql database instances (#1956) * fix importing for sql resources * fmt * fix id for sql db import, tests * change test names to be more specific --- google/import_sql_database_instance_test.go | 61 ------------------- google/import_sql_database_test.go | 33 ---------- google/resource_sql_database.go | 22 ++++++- google/resource_sql_database_instance.go | 19 +++++- google/resource_sql_database_instance_test.go | 35 ++++++++--- google/resource_sql_database_test.go | 51 +++++++++++++--- website/docs/r/sql_database.html.markdown | 9 ++- .../r/sql_database_instance.html.markdown | 9 ++- 8 files changed, 119 insertions(+), 120 deletions(-) delete mode 100644 google/import_sql_database_instance_test.go delete mode 100644 google/import_sql_database_test.go diff --git a/google/import_sql_database_instance_test.go b/google/import_sql_database_instance_test.go deleted file mode 100644 index 625a3707..00000000 --- a/google/import_sql_database_instance_test.go +++ /dev/null @@ -1,61 +0,0 @@ -package google - -import ( - "fmt" - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -// Test importing a first generation database -func TestAccSqlDatabaseInstance_importBasic(t *testing.T) { - t.Parallel() - - resourceName := "google_sql_database_instance.instance" - databaseID := acctest.RandInt() - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccSqlDatabaseInstanceDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: fmt.Sprintf( - testGoogleSqlDatabaseInstance_basic, databaseID), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} - -// Test importing a second generation database -func TestAccSqlDatabaseInstance_importBasic3(t *testing.T) { - t.Parallel() - - resourceName := "google_sql_database_instance.instance" - databaseID := acctest.RandInt() - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccSqlDatabaseInstanceDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: fmt.Sprintf( - testGoogleSqlDatabaseInstance_basic3, databaseID), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/google/import_sql_database_test.go b/google/import_sql_database_test.go deleted file mode 100644 index a5c1c61d..00000000 --- a/google/import_sql_database_test.go +++ /dev/null @@ -1,33 +0,0 @@ -package google - -import ( - "fmt" - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccSqlDatabase_importBasic(t *testing.T) { - t.Parallel() - - resourceName := "google_sql_database.database" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccSqlDatabaseInstanceDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: fmt.Sprintf( - testGoogleSqlDatabase_basic, acctest.RandString(10), acctest.RandString(10)), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} 
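As a usage sketch of what this change enables (the project, instance, and database names below are placeholders), `google_sql_database` imports in any of the newly accepted formats are expected to resolve to the same resource:

```
$ terraform import google_sql_database.database projects/my-project/instances/my-instance/databases/my-db
$ terraform import google_sql_database.database my-instance/my-db
$ terraform import google_sql_database.database my-instance:my-db
```
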
diff --git a/google/resource_sql_database.go b/google/resource_sql_database.go index 49c4f4d3..3839690c 100644 --- a/google/resource_sql_database.go +++ b/google/resource_sql_database.go @@ -17,7 +17,7 @@ func resourceSqlDatabase() *schema.Resource { Update: resourceSqlDatabaseUpdate, Delete: resourceSqlDatabaseDelete, Importer: &schema.ResourceImporter{ - State: schema.ImportStatePassthrough, + State: resourceSqlDatabaseImport, }, Schema: map[string]*schema.Schema{ @@ -211,3 +211,23 @@ func resourceSqlDatabaseDelete(d *schema.ResourceData, meta interface{}) error { return nil } + +func resourceSqlDatabaseImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + config := meta.(*Config) + parseImportId([]string{ + "projects/(?P[^/]+)/instances/(?P[^/]+)/databases/(?P[^/]+)", + "instances/(?P[^/]+)/databases/(?P[^/]+)", + "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", + "(?P[^/]+)/(?P[^/]+)", + "(?P[^/]+):(?P[^/]+)", + }, d, config) + + // Replace import id for the resource id + id, err := replaceVars(d, config, "{{instance}}:{{name}}") + if err != nil { + return nil, fmt.Errorf("Error constructing id: %s", err) + } + d.SetId(id) + + return []*schema.ResourceData{d}, nil +} diff --git a/google/resource_sql_database_instance.go b/google/resource_sql_database_instance.go index 3f019ea4..5770e92f 100644 --- a/google/resource_sql_database_instance.go +++ b/google/resource_sql_database_instance.go @@ -40,7 +40,7 @@ func resourceSqlDatabaseInstance() *schema.Resource { Update: resourceSqlDatabaseInstanceUpdate, Delete: resourceSqlDatabaseInstanceDelete, Importer: &schema.ResourceImporter{ - State: schema.ImportStatePassthrough, + State: resourceSqlDatabaseInstanceImport, }, Timeouts: &schema.ResourceTimeout{ @@ -1105,6 +1105,23 @@ func resourceSqlDatabaseInstanceDelete(d *schema.ResourceData, meta interface{}) return nil } +func resourceSqlDatabaseInstanceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + config := meta.(*Config) + parseImportId([]string{ + "projects/(?P[^/]+)/instances/(?P[^/]+)", + "(?P[^/]+)/(?P[^/]+)", + "(?P[^/]+)"}, d, config) + + // Replace import id for the resource id + id, err := replaceVars(d, config, "{{name}}") + if err != nil { + return nil, fmt.Errorf("Error constructing id: %s", err) + } + d.SetId(id) + + return []*schema.ResourceData{d}, nil +} + func flattenSettings(settings *sqladmin.Settings) []map[string]interface{} { data := map[string]interface{}{ "version": settings.SettingsVersion, diff --git a/google/resource_sql_database_instance_test.go b/google/resource_sql_database_instance_test.go index 468a4424..87c0f03b 100644 --- a/google/resource_sql_database_instance_test.go +++ b/google/resource_sql_database_instance_test.go @@ -154,11 +154,13 @@ func testSweepDatabases(region string) error { return nil } -func TestAccSqlDatabaseInstance_basic(t *testing.T) { +func TestAccSqlDatabaseInstance_basicFirstGen(t *testing.T) { t.Parallel() var instance sqladmin.DatabaseInstance - databaseID := acctest.RandInt() + instanceID := acctest.RandInt() + instanceName := fmt.Sprintf("tf-lw-%d", instanceID) + resourceName := "google_sql_database_instance.instance" resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -167,19 +169,34 @@ func TestAccSqlDatabaseInstance_basic(t *testing.T) { Steps: []resource.TestStep{ resource.TestStep{ Config: fmt.Sprintf( - testGoogleSqlDatabaseInstance_basic, databaseID), + testGoogleSqlDatabaseInstance_basic, instanceID), Check: resource.ComposeTestCheckFunc( - 
testAccCheckGoogleSqlDatabaseInstanceExists( - "google_sql_database_instance.instance", &instance), - testAccCheckGoogleSqlDatabaseInstanceEquals( - "google_sql_database_instance.instance", &instance), + testAccCheckGoogleSqlDatabaseInstanceExists(resourceName, &instance), + testAccCheckGoogleSqlDatabaseInstanceEquals(resourceName, &instance), ), }, + resource.TestStep{ + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + resource.TestStep{ + ResourceName: resourceName, + ImportStateId: fmt.Sprintf("projects/%s/instances/%s", getTestProjectFromEnv(), instanceName), + ImportState: true, + ImportStateVerify: true, + }, + resource.TestStep{ + ResourceName: resourceName, + ImportStateId: fmt.Sprintf("%s/%s", getTestProjectFromEnv(), instanceName), + ImportState: true, + ImportStateVerify: true, + }, }, }) } -func TestAccSqlDatabaseInstance_basic2(t *testing.T) { +func TestAccSqlDatabaseInstance_basicInferredName(t *testing.T) { t.Parallel() var instance sqladmin.DatabaseInstance @@ -202,7 +219,7 @@ func TestAccSqlDatabaseInstance_basic2(t *testing.T) { }) } -func TestAccSqlDatabaseInstance_basic3(t *testing.T) { +func TestAccSqlDatabaseInstance_basicSecondGen(t *testing.T) { t.Parallel() var instance sqladmin.DatabaseInstance diff --git a/google/resource_sql_database_test.go b/google/resource_sql_database_test.go index f419be2a..96a62013 100644 --- a/google/resource_sql_database_test.go +++ b/google/resource_sql_database_test.go @@ -16,21 +16,52 @@ func TestAccSqlDatabase_basic(t *testing.T) { var database sqladmin.Database + resourceName := "google_sql_database.database" + instanceName := fmt.Sprintf("sqldatabasetest%s", acctest.RandString(10)) + dbName := fmt.Sprintf("sqldatabasetest%s", acctest.RandString(10)) + resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccSqlDatabaseDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: fmt.Sprintf( - testGoogleSqlDatabase_basic, acctest.RandString(10), acctest.RandString(10)), + Config: fmt.Sprintf(testGoogleSqlDatabase_basic, instanceName, dbName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleSqlDatabaseExists( - "google_sql_database.database", &database), - testAccCheckGoogleSqlDatabaseEquals( - "google_sql_database.database", &database), + testAccCheckGoogleSqlDatabaseExists(resourceName, &database), + testAccCheckGoogleSqlDatabaseEquals(resourceName, &database), ), }, + resource.TestStep{ + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + resource.TestStep{ + ResourceName: resourceName, + ImportStateId: fmt.Sprintf("%s/%s", instanceName, dbName), + ImportState: true, + ImportStateVerify: true, + }, + + resource.TestStep{ + ResourceName: resourceName, + ImportStateId: fmt.Sprintf("instances/%s/databases/%s", instanceName, dbName), + ImportState: true, + ImportStateVerify: true, + }, + resource.TestStep{ + ResourceName: resourceName, + ImportStateId: fmt.Sprintf("%s/%s/%s", getTestProjectFromEnv(), instanceName, dbName), + ImportState: true, + ImportStateVerify: true, + }, + resource.TestStep{ + ResourceName: resourceName, + ImportStateId: fmt.Sprintf("projects/%s/instances/%s/databases/%s", getTestProjectFromEnv(), instanceName, dbName), + ImportState: true, + ImportStateVerify: true, + }, }, }) } @@ -151,7 +182,7 @@ func testAccSqlDatabaseDestroy(s *terraform.State) error { var testGoogleSqlDatabase_basic = ` resource "google_sql_database_instance" "instance" { - name = 
"sqldatabasetest%s" + name = "%s" region = "us-central" settings { tier = "D0" @@ -159,13 +190,13 @@ resource "google_sql_database_instance" "instance" { } resource "google_sql_database" "database" { - name = "sqldatabasetest%s" + name = "%s" instance = "${google_sql_database_instance.instance.name}" } ` var testGoogleSqlDatabase_latin1 = ` resource "google_sql_database_instance" "instance" { - name = "sqldatabasetest%s" + name = "%s" region = "us-central" settings { tier = "D0" @@ -173,7 +204,7 @@ resource "google_sql_database_instance" "instance" { } resource "google_sql_database" "database" { - name = "sqldatabasetest%s" + name = "%s" instance = "${google_sql_database_instance.instance.name}" charset = "latin1" collation = "latin1_swedish_ci" diff --git a/website/docs/r/sql_database.html.markdown b/website/docs/r/sql_database.html.markdown index 619cadb2..d461bfb1 100644 --- a/website/docs/r/sql_database.html.markdown +++ b/website/docs/r/sql_database.html.markdown @@ -67,8 +67,13 @@ exported: ## Import -SQL databases can be imported using the `instance` and `name`, e.g. +SQL databases can be imported using one of any of these accepted formats: ``` -$ terraform import google_sql_database.database master-instance:users-db +$ terraform import google_sql_database.database projects/{{project}}/instances/{{instance}}/databases/{{name}} +$ terraform import google_sql_database.database {{project}}/{{instance}}/{{name}} +$ terraform import google_sql_database.database instances/{{name}}/databases/{{name}} +$ terraform import google_sql_database.database {{instance}}/{{name}} +$ terraform import google_sql_database.database {{name}} + ``` diff --git a/website/docs/r/sql_database_instance.html.markdown b/website/docs/r/sql_database_instance.html.markdown index 1839aea9..4ecfac70 100644 --- a/website/docs/r/sql_database_instance.html.markdown +++ b/website/docs/r/sql_database_instance.html.markdown @@ -313,8 +313,11 @@ when the resource is configured with a `count`. ## Import -Database instances can be imported using the `name`, e.g. 
+Database instances can be imported using one of any of these accepted formats: ``` -$ terraform import google_sql_database_instance.master master-instance -``` +$ terraform import google_sql_database_instance.master projects/{{project}}/instances/{{name}} +$ terraform import google_sql_database_instance.master {{project}}/{{name}} +$ terraform import google_sql_database_instance.master {{name}} + +``` \ No newline at end of file From ea6b4ff881bbada656cdfb13429bcdad2fdcd56b Mon Sep 17 00:00:00 2001 From: Chris Stephens Date: Tue, 28 Aug 2018 16:58:27 -0700 Subject: [PATCH 09/31] updating examples in documentation for debian-9 --- website/docs/r/compute_backend_service.html.markdown | 2 +- website/docs/r/compute_instance.html.markdown | 2 +- website/docs/r/compute_instance_from_template.html.markdown | 2 +- website/docs/r/compute_instance_template.html.markdown | 2 +- website/docs/r/compute_region_backend_service.html.markdown | 2 +- website/docs/r/dns_record_set.markdown | 2 +- website/docs/r/logging_project_sink.html.markdown | 2 +- 7 files changed, 7 insertions(+), 7 deletions(-) diff --git a/website/docs/r/compute_backend_service.html.markdown b/website/docs/r/compute_backend_service.html.markdown index 3d439aec..aed8d16a 100644 --- a/website/docs/r/compute_backend_service.html.markdown +++ b/website/docs/r/compute_backend_service.html.markdown @@ -49,7 +49,7 @@ resource "google_compute_instance_template" "webserver" { } disk { - source_image = "debian-cloud/debian-8" + source_image = "debian-cloud/debian-9" auto_delete = true boot = true } diff --git a/website/docs/r/compute_instance.html.markdown b/website/docs/r/compute_instance.html.markdown index e988ec91..8ce954e9 100644 --- a/website/docs/r/compute_instance.html.markdown +++ b/website/docs/r/compute_instance.html.markdown @@ -26,7 +26,7 @@ resource "google_compute_instance" "default" { boot_disk { initialize_params { - image = "debian-cloud/debian-8" + image = "debian-cloud/debian-9" } } diff --git a/website/docs/r/compute_instance_from_template.html.markdown b/website/docs/r/compute_instance_from_template.html.markdown index 85d08619..15c96d38 100644 --- a/website/docs/r/compute_instance_from_template.html.markdown +++ b/website/docs/r/compute_instance_from_template.html.markdown @@ -26,7 +26,7 @@ resource "google_compute_instance_template" "tpl" { machine_type = "n1-standard-1" disk { - source_image = "debian-cloud/debian-8" + source_image = "debian-cloud/debian-9" auto_delete = true disk_size_gb = 100 boot = true diff --git a/website/docs/r/compute_instance_template.html.markdown b/website/docs/r/compute_instance_template.html.markdown index ef50d420..b55211107 100644 --- a/website/docs/r/compute_instance_template.html.markdown +++ b/website/docs/r/compute_instance_template.html.markdown @@ -38,7 +38,7 @@ resource "google_compute_instance_template" "default" { // Create a new boot disk from an image disk { - source_image = "debian-cloud/debian-8" + source_image = "debian-cloud/debian-9" auto_delete = true boot = true } diff --git a/website/docs/r/compute_region_backend_service.html.markdown b/website/docs/r/compute_region_backend_service.html.markdown index 3c4126d2..fb8f3d87 100644 --- a/website/docs/r/compute_region_backend_service.html.markdown +++ b/website/docs/r/compute_region_backend_service.html.markdown @@ -49,7 +49,7 @@ resource "google_compute_instance_template" "foobar" { } disk { - source_image = "debian-cloud/debian-8" + source_image = "debian-cloud/debian-9" auto_delete = true boot = true } diff --git 
a/website/docs/r/dns_record_set.markdown b/website/docs/r/dns_record_set.markdown index 0916f62d..d37d8b5f 100644 --- a/website/docs/r/dns_record_set.markdown +++ b/website/docs/r/dns_record_set.markdown @@ -39,7 +39,7 @@ resource "google_compute_instance" "frontend" { boot_disk { initialize_params { - image = "debian-cloud/debian-8" + image = "debian-cloud/debian-9" } } diff --git a/website/docs/r/logging_project_sink.html.markdown b/website/docs/r/logging_project_sink.html.markdown index f9137b87..7ab405cc 100644 --- a/website/docs/r/logging_project_sink.html.markdown +++ b/website/docs/r/logging_project_sink.html.markdown @@ -48,7 +48,7 @@ resource "google_compute_instance" "my-logged-instance" { boot_disk { initialize_params { - image = "debian-cloud/debian-8" + image = "debian-cloud/debian-9" } } From ca0317037be9c1ac07ead0258085c284526096e8 Mon Sep 17 00:00:00 2001 From: Riley Karson Date: Wed, 29 Aug 2018 14:19:09 -0700 Subject: [PATCH 10/31] Update CHANGELOG.md --- CHANGELOG.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index c6fbb725..59820e71 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -3,6 +3,9 @@ BACKWARDS INCOMPATIBILITIES: * compute: instance templates used to not set any disks in the template in state unless they were in the config, as well. It also only stored the image name in state. Both of these were bugs, and have been fixed. They should not cause any disruption. If you were interpolating an image name from a disk in an instance template, you'll need to update your config to strip out everything before the last `/`. If you imported an instance template, and did not add all the disks in the template to your config, you'll see a diff; add those disks to your config, and it will go away. Those are the only two instances where this change should effect you. We apologise for the inconvenience. [GH-1916] +IMPROVEMENTS: +* compute: `google_compute_health_check` is autogenerated, exposing the `type` attribute and accepting more import formats. [GH-1941] + ## 1.17.1 (August 22, 2018) BUG FIXES: From f946490db27cf37970583448fd4da57ee5348209 Mon Sep 17 00:00:00 2001 From: Paddy Carver Date: Fri, 31 Aug 2018 10:13:52 -0700 Subject: [PATCH 11/31] Update logging vendor. Update the vendored version of our logging helper, so we get requests printed. 
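For illustration, the vendored helper only reformats lines that are themselves complete JSON documents, so HTTP headers and other non-JSON lines in a dump pass through untouched. A minimal, self-contained sketch of that behavior (the sample response dump below is made up):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strings"
)

// Same idea as the vendored prettyPrintJsonLines: indent any line that is
// valid JSON on its own, and leave every other line unchanged.
func prettyPrintJSONLines(b []byte) string {
	parts := strings.Split(string(b), "\n")
	for i, p := range parts {
		if line := []byte(p); json.Valid(line) {
			var out bytes.Buffer
			json.Indent(&out, line, "", " ")
			parts[i] = out.String()
		}
	}
	return strings.Join(parts, "\n")
}

func main() {
	// Hypothetical API response dump: headers plus a one-line JSON body.
	dump := "HTTP/1.1 200 OK\nContent-Type: application/json\n\n{\"kind\":\"compute#firewall\",\"disabled\":false}"
	fmt.Println(prettyPrintJSONLines([]byte(dump)))
}
```

Running that prints the header lines unchanged and the JSON body indented, which is the readability improvement the debug logs gain from this vendor update.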
--- .../terraform/helper/logging/transport.go | 21 +++++++++++++++++-- vendor/vendor.json | 6 +++--- 2 files changed, 22 insertions(+), 5 deletions(-) diff --git a/vendor/github.com/hashicorp/terraform/helper/logging/transport.go b/vendor/github.com/hashicorp/terraform/helper/logging/transport.go index 44779248..bddabe64 100644 --- a/vendor/github.com/hashicorp/terraform/helper/logging/transport.go +++ b/vendor/github.com/hashicorp/terraform/helper/logging/transport.go @@ -1,9 +1,12 @@ package logging import ( + "bytes" + "encoding/json" "log" "net/http" "net/http/httputil" + "strings" ) type transport struct { @@ -15,7 +18,7 @@ func (t *transport) RoundTrip(req *http.Request) (*http.Response, error) { if IsDebugOrHigher() { reqData, err := httputil.DumpRequestOut(req, true) if err == nil { - log.Printf("[DEBUG] "+logReqMsg, t.name, string(reqData)) + log.Printf("[DEBUG] "+logReqMsg, t.name, prettyPrintJsonLines(reqData)) } else { log.Printf("[ERROR] %s API Request error: %#v", t.name, err) } @@ -29,7 +32,7 @@ func (t *transport) RoundTrip(req *http.Request) (*http.Response, error) { if IsDebugOrHigher() { respData, err := httputil.DumpResponse(resp, true) if err == nil { - log.Printf("[DEBUG] "+logRespMsg, t.name, string(respData)) + log.Printf("[DEBUG] "+logRespMsg, t.name, prettyPrintJsonLines(respData)) } else { log.Printf("[ERROR] %s API Response error: %#v", t.name, err) } @@ -42,6 +45,20 @@ func NewTransport(name string, t http.RoundTripper) *transport { return &transport{name, t} } +// prettyPrintJsonLines iterates through a []byte line-by-line, +// transforming any lines that are complete json into pretty-printed json. +func prettyPrintJsonLines(b []byte) string { + parts := strings.Split(string(b), "\n") + for i, p := range parts { + if b := []byte(p); json.Valid(b) { + var out bytes.Buffer + json.Indent(&out, b, "", " ") + parts[i] = out.String() + } + } + return strings.Join(parts, "\n") +} + const logReqMsg = `%s API Request Details: ---[ REQUEST ]--------------------------------------- %s diff --git a/vendor/vendor.json b/vendor/vendor.json index 675ec13d..3296411b 100644 --- a/vendor/vendor.json +++ b/vendor/vendor.json @@ -633,10 +633,10 @@ "versionExact": "v0.11.2" }, { - "checksumSHA1": "BAXV9ruAyno3aFgwYI2/wWzB2Gc=", + "checksumSHA1": "j8XqkwLh2W3r3i6wnCRmve07BgI=", "path": "github.com/hashicorp/terraform/helper/logging", - "revision": "41e50bd32a8825a84535e353c3674af8ce799161", - "revisionTime": "2018-04-10T16:50:42Z", + "revision": "6dfc4d748de9cda23835bc5704307ed45e839622", + "revisionTime": "2018-08-15T22:00:39Z", "version": "v0.11.2", "versionExact": "v0.11.2" }, From 9065b5a62476ce91e21e69db7a52d8cb4f4f9e13 Mon Sep 17 00:00:00 2001 From: Nathan McKinley Date: Wed, 5 Sep 2018 09:52:06 -0700 Subject: [PATCH 12/31] Addition of create_subnetwork and other fields relevant for Alias IPs (#1921) * Addition of create_subnetwork and use_ip_aliases. 
* add fields for [cluster|services]_ipv4_cidr_block and subnetwork_name --- google/resource_container_cluster.go | 94 +++++++-- google/resource_container_cluster_test.go | 190 ++++++++++++------ .../docs/r/container_cluster.html.markdown | 17 ++ 3 files changed, 222 insertions(+), 79 deletions(-) diff --git a/google/resource_container_cluster.go b/google/resource_container_cluster.go index f841178d..31856d25 100644 --- a/google/resource_container_cluster.go +++ b/google/resource_container_cluster.go @@ -42,6 +42,10 @@ var ( }, }, } + + ipAllocationSubnetFields = []string{"ip_allocation_policy.0.create_subnetwork", "ip_allocation_policy.0.subnetwork_name"} + ipAllocationCidrBlockFields = []string{"ip_allocation_policy.0.cluster_ipv4_cidr_block", "ip_allocation_policy.0.services_ipv4_cidr_block"} + ipAllocationRangeFields = []string{"ip_allocation_policy.0.cluster_secondary_range_name", "ip_allocation_policy.0.services_secondary_range_name"} ) func resourceContainerCluster() *schema.Resource { @@ -433,15 +437,52 @@ func resourceContainerCluster() *schema.Resource { MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ + // GKE creates subnetwork automatically + "create_subnetwork": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + ConflictsWith: append(ipAllocationCidrBlockFields, ipAllocationRangeFields...), + }, + "subnetwork_name": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: append(ipAllocationCidrBlockFields, ipAllocationRangeFields...), + }, + + // GKE creates/deletes secondary ranges in VPC + "cluster_ipv4_cidr_block": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: append(ipAllocationSubnetFields, ipAllocationRangeFields...), + DiffSuppressFunc: cidrOrSizeDiffSuppress, + }, + "services_ipv4_cidr_block": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: append(ipAllocationSubnetFields, ipAllocationRangeFields...), + DiffSuppressFunc: cidrOrSizeDiffSuppress, + }, + + // User manages secondary ranges manually "cluster_secondary_range_name": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: append(ipAllocationSubnetFields, ipAllocationCidrBlockFields...), }, "services_secondary_range_name": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: append(ipAllocationSubnetFields, ipAllocationCidrBlockFields...), }, }, }, @@ -475,6 +516,11 @@ func resourceContainerCluster() *schema.Resource { } } +func cidrOrSizeDiffSuppress(k, old, new string, d *schema.ResourceData) bool { + // If the user specified a size and the API returned a full cidr block, suppress. 
+ return strings.HasPrefix(new, "/") && strings.HasSuffix(old, new) +} + func resourceContainerClusterCreate(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) @@ -1409,24 +1455,24 @@ func expandClusterAddonsConfig(configured interface{}) *containerBeta.AddonsConf } func expandIPAllocationPolicy(configured interface{}) (*containerBeta.IPAllocationPolicy, error) { - ap := &containerBeta.IPAllocationPolicy{} l := configured.([]interface{}) - if len(l) > 0 { - if config, ok := l[0].(map[string]interface{}); ok { - ap.UseIpAliases = true - if v, ok := config["cluster_secondary_range_name"]; ok { - ap.ClusterSecondaryRangeName = v.(string) - } - - if v, ok := config["services_secondary_range_name"]; ok { - ap.ServicesSecondaryRangeName = v.(string) - } - } else { - return nil, fmt.Errorf("clusters using IP aliases must specify secondary ranges") - } + if len(l) == 0 { + return &containerBeta.IPAllocationPolicy{}, nil } + config := l[0].(map[string]interface{}) - return ap, nil + return &containerBeta.IPAllocationPolicy{ + UseIpAliases: true, + + CreateSubnetwork: config["create_subnetwork"].(bool), + SubnetworkName: config["subnetwork_name"].(string), + + ClusterIpv4CidrBlock: config["cluster_ipv4_cidr_block"].(string), + ServicesIpv4CidrBlock: config["services_ipv4_cidr_block"].(string), + + ClusterSecondaryRangeName: config["cluster_secondary_range_name"].(string), + ServicesSecondaryRangeName: config["services_secondary_range_name"].(string), + }, nil } func expandMaintenancePolicy(configured interface{}) *containerBeta.MaintenancePolicy { @@ -1583,6 +1629,12 @@ func flattenIPAllocationPolicy(c *containerBeta.IPAllocationPolicy) []map[string } return []map[string]interface{}{ { + "create_subnetwork": c.CreateSubnetwork, + "subnetwork_name": c.SubnetworkName, + + "cluster_ipv4_cidr_block": c.ClusterIpv4CidrBlock, + "services_ipv4_cidr_block": c.ServicesIpv4CidrBlock, + "cluster_secondary_range_name": c.ClusterSecondaryRangeName, "services_secondary_range_name": c.ServicesSecondaryRangeName, }, diff --git a/google/resource_container_cluster_test.go b/google/resource_container_cluster_test.go index 3566fb9f..2cc4615d 100644 --- a/google/resource_container_cluster_test.go +++ b/google/resource_container_cluster_test.go @@ -1093,7 +1093,7 @@ func TestAccContainerCluster_withMaintenanceWindow(t *testing.T) { }) } -func TestAccContainerCluster_withIPAllocationPolicy(t *testing.T) { +func TestAccContainerCluster_withIPAllocationPolicy_existingSecondaryRanges(t *testing.T) { t.Parallel() cluster := fmt.Sprintf("cluster-test-%s", acctest.RandString(10)) @@ -1103,23 +1103,7 @@ func TestAccContainerCluster_withIPAllocationPolicy(t *testing.T) { CheckDestroy: testAccCheckContainerClusterDestroy, Steps: []resource.TestStep{ { - Config: testAccContainerCluster_withIPAllocationPolicy( - cluster, - map[string]string{ - "pods": "10.1.0.0/16", - "services": "10.2.0.0/20", - }, - map[string]string{ - "cluster_secondary_range_name": "pods", - "services_secondary_range_name": "services", - }, - ), - Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr("google_container_cluster.with_ip_allocation_policy", - "ip_allocation_policy.0.cluster_secondary_range_name", "pods"), - resource.TestCheckResourceAttr("google_container_cluster.with_ip_allocation_policy", - "ip_allocation_policy.0.services_secondary_range_name", "services"), - ), + Config: testAccContainerCluster_withIPAllocationPolicy_existingSecondaryRanges(cluster), }, { ResourceName: 
"google_container_cluster.with_ip_allocation_policy", @@ -1127,29 +1111,71 @@ func TestAccContainerCluster_withIPAllocationPolicy(t *testing.T) { ImportState: true, ImportStateVerify: true, }, + }, + }) +} + +func TestAccContainerCluster_withIPAllocationPolicy_specificIPRanges(t *testing.T) { + t.Parallel() + + cluster := fmt.Sprintf("cluster-test-%s", acctest.RandString(10)) + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckContainerClusterDestroy, + Steps: []resource.TestStep{ { - Config: testAccContainerCluster_withIPAllocationPolicy( - cluster, - map[string]string{ - "pods": "10.1.0.0/16", - "services": "10.2.0.0/20", - }, - map[string]string{}, - ), - ExpectError: regexp.MustCompile("clusters using IP aliases must specify secondary ranges"), + Config: testAccContainerCluster_withIPAllocationPolicy_specificIPRanges(cluster), }, { - Config: testAccContainerCluster_withIPAllocationPolicy( - cluster, - map[string]string{ - "pods": "10.1.0.0/16", - }, - map[string]string{ - "cluster_secondary_range_name": "pods", - "services_secondary_range_name": "services", - }, - ), - ExpectError: regexp.MustCompile("secondary range \"services\" does not exist in network"), + ResourceName: "google_container_cluster.with_ip_allocation_policy", + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccContainerCluster_withIPAllocationPolicy_specificSizes(t *testing.T) { + t.Parallel() + + cluster := fmt.Sprintf("cluster-test-%s", acctest.RandString(10)) + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckContainerClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccContainerCluster_withIPAllocationPolicy_specificSizes(cluster), + }, + { + ResourceName: "google_container_cluster.with_ip_allocation_policy", + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccContainerCluster_withIPAllocationPolicy_createSubnetwork(t *testing.T) { + t.Parallel() + + cluster := fmt.Sprintf("cluster-test-%s", acctest.RandString(10)) + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckContainerClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccContainerCluster_withIPAllocationPolicy_createSubnetwork(cluster), + }, + { + ResourceName: "google_container_cluster.with_ip_allocation_policy", + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, }, }, }) @@ -2233,23 +2259,7 @@ resource "google_container_cluster" "with_maintenance_window" { }`, clusterName, maintenancePolicy) } -func testAccContainerCluster_withIPAllocationPolicy(cluster string, ranges, policy map[string]string) string { - - var secondaryRanges bytes.Buffer - for rangeName, cidr := range ranges { - secondaryRanges.WriteString(fmt.Sprintf(` - secondary_ip_range { - range_name = "%s" - ip_cidr_range = "%s" - }`, rangeName, cidr)) - } - - var ipAllocationPolicy bytes.Buffer - for key, value := range policy { - ipAllocationPolicy.WriteString(fmt.Sprintf(` - %s = "%s"`, key, value)) - } - +func testAccContainerCluster_withIPAllocationPolicy_existingSecondaryRanges(cluster string) string { return fmt.Sprintf(` resource "google_compute_network" "container_network" { name = "container-net-%s" @@ -2262,7 +2272,14 @@ 
resource "google_compute_subnetwork" "container_subnetwork" { ip_cidr_range = "10.0.0.0/24" region = "us-central1" - %s + secondary_ip_range { + range_name = "pods" + ip_cidr_range = "10.1.0.0/16" + } + secondary_ip_range { + range_name = "services" + ip_cidr_range = "10.2.0.0/20" + } } resource "google_container_cluster" "with_ip_allocation_policy" { @@ -2274,9 +2291,66 @@ resource "google_container_cluster" "with_ip_allocation_policy" { initial_node_count = 1 ip_allocation_policy { - %s + cluster_secondary_range_name = "pods" + services_secondary_range_name = "services" } -}`, acctest.RandString(10), secondaryRanges.String(), cluster, ipAllocationPolicy.String()) +}`, cluster, cluster) +} + +func testAccContainerCluster_withIPAllocationPolicy_specificIPRanges(cluster string) string { + return fmt.Sprintf(` +resource "google_container_cluster" "with_ip_allocation_policy" { + name = "%s" + zone = "us-central1-a" + + initial_node_count = 1 + ip_allocation_policy { + cluster_ipv4_cidr_block = "10.90.0.0/19" + services_ipv4_cidr_block = "10.40.0.0/19" + } +}`, cluster) +} + +func testAccContainerCluster_withIPAllocationPolicy_specificSizes(cluster string) string { + return fmt.Sprintf(` +resource "google_compute_network" "container_network" { + name = "container-net-%s" + auto_create_subnetworks = false +} + +resource "google_compute_subnetwork" "container_subnetwork" { + name = "${google_compute_network.container_network.name}" + network = "${google_compute_network.container_network.name}" + ip_cidr_range = "10.0.0.0/24" + region = "us-central1" +} + +resource "google_container_cluster" "with_ip_allocation_policy" { + name = "%s" + zone = "us-central1-a" + + network = "${google_compute_network.container_network.name}" + subnetwork = "${google_compute_subnetwork.container_subnetwork.name}" + + initial_node_count = 1 + ip_allocation_policy { + cluster_ipv4_cidr_block = "/16" + services_ipv4_cidr_block = "/22" + } +}`, cluster, cluster) +} + +func testAccContainerCluster_withIPAllocationPolicy_createSubnetwork(cluster string) string { + return fmt.Sprintf(` +resource "google_container_cluster" "with_ip_allocation_policy" { + name = "%s" + zone = "us-central1-a" + + initial_node_count = 1 + ip_allocation_policy { + create_subnetwork = true + } +}`, cluster) } func testAccContainerCluster_withPodSecurityPolicy(clusterName string, enabled bool) string { diff --git a/website/docs/r/container_cluster.html.markdown b/website/docs/r/container_cluster.html.markdown index e0c11093..a8742b57 100644 --- a/website/docs/r/container_cluster.html.markdown +++ b/website/docs/r/container_cluster.html.markdown @@ -238,6 +238,23 @@ The `ip_allocation_policy` block supports: ClusterIPs. This must be an existing secondary range associated with the cluster subnetwork. +* `cluster_ipv4_cidr_block` - (Optional) The IP address range for the cluster pod IPs. + Set to blank to have a range chosen with the default size. Set to /netmask (e.g. /14) + to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14) + from the RFC-1918 private networks (e.g. 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to + pick a specific range to use. + +* `services_ipv4_cidr_block` - (Optional) The IP address range of the services IPs in this cluster. + Set to blank to have a range chosen with the default size. Set to /netmask (e.g. /14) + to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14) + from the RFC-1918 private networks (e.g. 
10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to + pick a specific range to use. + +* `create_subnetwork`- (Optional) Whether a new subnetwork will be created automatically for the cluster. + +* `subnetwork_name` - (Optional) A custom subnetwork name to be used if create_subnetwork is true. + If this field is empty, then an automatic name will be chosen for the new subnetwork. + The `master_auth` block supports: * `password` - (Required) The password to use for HTTP basic authentication when accessing From 37534f542da4241d1eee16737f0a65c97b35395f Mon Sep 17 00:00:00 2001 From: Dana Hoffman Date: Wed, 5 Sep 2018 09:53:02 -0700 Subject: [PATCH 13/31] Update CHANGELOG.md --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 59820e71..913a851f 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -5,6 +5,7 @@ BACKWARDS INCOMPATIBILITIES: IMPROVEMENTS: * compute: `google_compute_health_check` is autogenerated, exposing the `type` attribute and accepting more import formats. [GH-1941] +* container: Addition of create_subnetwork and other fields relevant for Alias IPs [GH-1921] ## 1.17.1 (August 22, 2018) From 96cb1fbc1632ce56f10fcd496ad91e77f23f3662 Mon Sep 17 00:00:00 2001 From: Seth Vargo Date: Wed, 5 Sep 2018 12:59:41 -0400 Subject: [PATCH 14/31] Encourage users to set lifecycle hooks on kms_crypto_key resources (#1896) @michaelharo suggested this would be a good best practice for this particular resource to prevent users from accidentally deleting a bunch of encryption keys on a -/+ and losing data, and I agree. This commit adds a quick documentation blurb saying what actually happens if those crypto key versions are destroyed and also updates the snippet to showcase a lifecycle hook. --- website/docs/r/google_kms_crypto_key.html.markdown | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/website/docs/r/google_kms_crypto_key.html.markdown b/website/docs/r/google_kms_crypto_key.html.markdown index 3dc49166..ac07e13b 100644 --- a/website/docs/r/google_kms_crypto_key.html.markdown +++ b/website/docs/r/google_kms_crypto_key.html.markdown @@ -16,8 +16,13 @@ and A CryptoKey is an interface to key material which can be used to encrypt and decrypt data. A CryptoKey belongs to a Google Cloud KMS KeyRing. -~> Note: CryptoKeys cannot be deleted from Google Cloud Platform. Destroying a Terraform-managed CryptoKey will remove it -from state and delete all CryptoKeyVersions, rendering the key unusable, but **will not delete the resource on the server**. +~> Note: CryptoKeys cannot be deleted from Google Cloud Platform. Destroying a +Terraform-managed CryptoKey will remove it from state and delete all +CryptoKeyVersions, rendering the key unusable, but **will not delete the +resource on the server**. When Terraform destroys these keys, any data +previously encrypted with these keys will be irrecoverable. For this reason, it +is strongly recommended that you add lifecycle hooks to the resource to prevent +accidental destruction. ## Example Usage @@ -32,6 +37,10 @@ resource "google_kms_crypto_key" "my_crypto_key" { name = "my-crypto-key" key_ring = "${google_kms_key_ring.my_key_ring.self_link}" rotation_period = "100000s" + + lifecycle { + prevent_destroy = true + } } ``` From c8ba3c0b3f82c0ea337e17536be096aeff44c677 Mon Sep 17 00:00:00 2001 From: Riley Karson Date: Wed, 5 Sep 2018 10:31:35 -0700 Subject: [PATCH 15/31] Cleaned up google_project_usage_export_bucket code and docs. 
(#1922) --- google/resource_usage_export_bucket.go | 26 ++++++++++++++---- .../docs/r/usage_export_bucket.html.markdown | 27 ++++++++++++------- 2 files changed, 39 insertions(+), 14 deletions(-) diff --git a/google/resource_usage_export_bucket.go b/google/resource_usage_export_bucket.go index 730ec600..7e87e426 100644 --- a/google/resource_usage_export_bucket.go +++ b/google/resource_usage_export_bucket.go @@ -14,7 +14,7 @@ func resourceProjectUsageBucket() *schema.Resource { Read: resourceProjectUsageBucketRead, Delete: resourceProjectUsageBucketDelete, Importer: &schema.ResourceImporter{ - State: schema.ImportStatePassthrough, + State: resourceProjectUsageBucketImportState, }, Schema: map[string]*schema.Schema{ @@ -40,7 +40,11 @@ func resourceProjectUsageBucket() *schema.Resource { func resourceProjectUsageBucketRead(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) - project := d.Id() + + project, err := getProject(d, config) + if err != nil { + return err + } p, err := config.clientCompute.Projects.Get(project).Do() if err != nil { @@ -60,6 +64,7 @@ func resourceProjectUsageBucketRead(d *schema.ResourceData, meta interface{}) er func resourceProjectUsageBucketCreate(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) + project, err := getProject(d, config) if err != nil { return err @@ -86,14 +91,19 @@ func resourceProjectUsageBucketCreate(d *schema.ResourceData, meta interface{}) func resourceProjectUsageBucketDelete(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) - project := d.Id() + + project, err := getProject(d, config) + if err != nil { + return err + } op, err := config.clientCompute.Projects.SetUsageExportBucket(project, nil).Do() if err != nil { return err } - d.SetId(project) - err = computeOperationWait(config.clientCompute, op, project, "Setting usage export bucket.") + + err = computeOperationWait(config.clientCompute, op, project, + "Setting usage export bucket to nil, automatically disabling usage export.") if err != nil { return err } @@ -101,3 +111,9 @@ func resourceProjectUsageBucketDelete(d *schema.ResourceData, meta interface{}) return nil } + +func resourceProjectUsageBucketImportState(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + project := d.Id() + d.Set("project", project) + return []*schema.ResourceData{d}, nil +} diff --git a/website/docs/r/usage_export_bucket.html.markdown b/website/docs/r/usage_export_bucket.html.markdown index 85abcea6..8f187ef6 100644 --- a/website/docs/r/usage_export_bucket.html.markdown +++ b/website/docs/r/usage_export_bucket.html.markdown @@ -3,7 +3,7 @@ layout: "google" page_title: "Google: google_project_usage_export_bucket" sidebar_current: "docs-google-project-usage-export-bucket" description: |- - Creates a dataset resource for Google BigQuery. + Manages a project's usage export bucket. --- # google_project_usage_export_bucket @@ -16,23 +16,32 @@ For more information see the [Docs](https://cloud.google.com/compute/docs/usage- and for further details, the [API Documentation](https://cloud.google.com/compute/docs/reference/rest/beta/projects/setUsageExportBucket). +~> **Note:** You should specify only one of these per project. If there are two or more +they will fight over which bucket the reports should be stored in. It is +safe to have multiple resources with the same backing bucket. 
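+
+For example, two different projects can safely export to the same backing
+bucket (the project and bucket names below are illustrative):
+
+```hcl
+resource "google_project_usage_export_bucket" "project_a" {
+  project     = "illustrative-project-a"
+  bucket_name = "shared-usage-reports"
+  prefix      = "project-a"
+}
+
+resource "google_project_usage_export_bucket" "project_b" {
+  project     = "illustrative-project-b"
+  bucket_name = "shared-usage-reports"
+  prefix      = "project-b"
+}
+```
+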
## Example Usage ```hcl -resource "google_project_usage_export_bucket" "export" { - project = "foo" - bucket_name = "bar" +resource "google_project_usage_export_bucket" "usage_export" { + project = "development-project" + bucket_name = "usage-tracking-bucket" } ``` ## Argument Reference -* `project`: (Required) The project to set the export bucket on. * `bucket_name`: (Required) The bucket to store reports in. + +- - - + * `prefix`: (Optional) A prefix for the reports, for instance, the project name. -## Note +* `project`: (Optional) The project to set the export bucket on. If it is not provided, the provider project is used. -You should specify only one of these per project. If there are two or more -they will fight over which bucket the reports should be stored in. It is -safe to have multiple resources with the same backing bucket. +## Import + +A project's Usage Export Bucket can be imported using this format: + +``` +$ terraform import google_project_usage_export_bucket.usage_export {{project}} +``` From abcd2179de03a216bdf23cb43f955892f491031d Mon Sep 17 00:00:00 2001 From: Chris Stephens Date: Wed, 5 Sep 2018 14:12:31 -0700 Subject: [PATCH 16/31] Updating the GCP dns api client. --- .../google.golang.org/api/dns/v1/dns-api.json | 2077 +++++++++++------ .../google.golang.org/api/dns/v1/dns-gen.go | 1830 ++++++++++++++- vendor/vendor.json | 6 +- 3 files changed, 3178 insertions(+), 735 deletions(-) diff --git a/vendor/google.golang.org/api/dns/v1/dns-api.json b/vendor/google.golang.org/api/dns/v1/dns-api.json index 38bdda5b..ab721983 100644 --- a/vendor/google.golang.org/api/dns/v1/dns-api.json +++ b/vendor/google.golang.org/api/dns/v1/dns-api.json @@ -1,708 +1,1401 @@ { - "kind": "discovery#restDescription", - "etag": "\"tbys6C40o18GZwyMen5GMkdK-3s/RqBsQyB2YZT-ZAkK7pcLByI9SZs\"", - "discoveryVersion": "v1", - "id": "dns:v1", - "name": "dns", - "version": "v1", - "revision": "20161110", - "title": "Google Cloud DNS API", - "description": "Configures and serves authoritative DNS records.", - "ownerDomain": "google.com", - "ownerName": "Google", - "icons": { - "x16": "https://www.gstatic.com/images/branding/product/1x/googleg_16dp.png", - "x32": "https://www.gstatic.com/images/branding/product/1x/googleg_32dp.png" - }, - "documentationLink": "https://developers.google.com/cloud-dns", - "protocol": "rest", - "baseUrl": "https://www.googleapis.com/dns/v1/projects/", - "basePath": "/dns/v1/projects/", - "rootUrl": "https://www.googleapis.com/", - "servicePath": "dns/v1/projects/", - "batchPath": "batch", - "parameters": { - "alt": { - "type": "string", - "description": "Data format for the response.", - "default": "json", - "enum": [ - "json" - ], - "enumDescriptions": [ - "Responses with Content-Type of application/json" - ], - "location": "query" - }, - "fields": { - "type": "string", - "description": "Selector specifying which fields to include in a partial response.", - "location": "query" - }, - "key": { - "type": "string", - "description": "API key. Your API key identifies your project and provides you with API access, quota, and reports. 
Required unless you provide an OAuth 2.0 token.", - "location": "query" - }, - "oauth_token": { - "type": "string", - "description": "OAuth 2.0 token for the current user.", - "location": "query" - }, - "prettyPrint": { - "type": "boolean", - "description": "Returns response with indentations and line breaks.", - "default": "true", - "location": "query" - }, - "quotaUser": { - "type": "string", - "description": "Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters. Overrides userIp if both are provided.", - "location": "query" - }, - "userIp": { - "type": "string", - "description": "IP address of the site where the request originates. Use this if you want to enforce per-user limits.", - "location": "query" - } - }, - "auth": { - "oauth2": { - "scopes": { - "https://www.googleapis.com/auth/cloud-platform": { - "description": "View and manage your data across Google Cloud Platform services" - }, - "https://www.googleapis.com/auth/cloud-platform.read-only": { - "description": "View your data across Google Cloud Platform services" - }, - "https://www.googleapis.com/auth/ndev.clouddns.readonly": { - "description": "View your DNS records hosted by Google Cloud DNS" - }, - "https://www.googleapis.com/auth/ndev.clouddns.readwrite": { - "description": "View and manage your DNS records hosted by Google Cloud DNS" + "auth": { + "oauth2": { + "scopes": { + "https://www.googleapis.com/auth/cloud-platform": { + "description": "View and manage your data across Google Cloud Platform services" + }, + "https://www.googleapis.com/auth/cloud-platform.read-only": { + "description": "View your data across Google Cloud Platform services" + }, + "https://www.googleapis.com/auth/ndev.clouddns.readonly": { + "description": "View your DNS records hosted by Google Cloud DNS" + }, + "https://www.googleapis.com/auth/ndev.clouddns.readwrite": { + "description": "View and manage your DNS records hosted by Google Cloud DNS" + } + } } - } - } - }, - "schemas": { - "Change": { - "id": "Change", - "type": "object", - "description": "An atomic update to a collection of ResourceRecordSets.", - "properties": { - "additions": { - "type": "array", - "description": "Which ResourceRecordSets to add?", - "items": { - "$ref": "ResourceRecordSet" - } - }, - "deletions": { - "type": "array", - "description": "Which ResourceRecordSets to remove? Must match existing data exactly.", - "items": { - "$ref": "ResourceRecordSet" - } - }, - "id": { - "type": "string", - "description": "Unique identifier for the resource; defined by the server (output only)." - }, - "kind": { - "type": "string", - "description": "Identifies what kind of resource this is. Value: the fixed string \"dns#change\".", - "default": "dns#change" - }, - "startTime": { - "type": "string", - "description": "The time that this operation was started by the server (output only). This is in RFC3339 text format." 
- }, - "status": { - "type": "string", - "description": "Status of the operation (output only).", - "enum": [ - "done", - "pending" - ], - "enumDescriptions": [ - "", - "" - ] - } - } }, - "ChangesListResponse": { - "id": "ChangesListResponse", - "type": "object", - "description": "The response to a request to enumerate Changes to a ResourceRecordSets collection.", - "properties": { + "basePath": "/dns/v1/projects/", + "baseUrl": "https://www.googleapis.com/dns/v1/projects/", + "batchPath": "batch/dns/v1", + "description": "Configures and serves authoritative DNS records.", + "discoveryVersion": "v1", + "documentationLink": "https://developers.google.com/cloud-dns", + "etag": "\"J3WqvAcMk4eQjJXvfSI4Yr8VouA/EQMOijfSxjBH7q8fB_7QzVLzbgs\"", + "icons": { + "x16": "https://www.gstatic.com/images/branding/product/1x/googleg_16dp.png", + "x32": "https://www.gstatic.com/images/branding/product/1x/googleg_32dp.png" + }, + "id": "dns:v1", + "kind": "discovery#restDescription", + "name": "dns", + "ownerDomain": "google.com", + "ownerName": "Google", + "parameters": { + "alt": { + "default": "json", + "description": "Data format for the response.", + "enum": [ + "json" + ], + "enumDescriptions": [ + "Responses with Content-Type of application/json" + ], + "location": "query", + "type": "string" + }, + "fields": { + "description": "Selector specifying which fields to include in a partial response.", + "location": "query", + "type": "string" + }, + "key": { + "description": "API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.", + "location": "query", + "type": "string" + }, + "oauth_token": { + "description": "OAuth 2.0 token for the current user.", + "location": "query", + "type": "string" + }, + "prettyPrint": { + "default": "true", + "description": "Returns response with indentations and line breaks.", + "location": "query", + "type": "boolean" + }, + "quotaUser": { + "description": "An opaque string that represents a user for quota purposes. Must not exceed 40 characters.", + "location": "query", + "type": "string" + }, + "userIp": { + "description": "Deprecated. Please use quotaUser instead.", + "location": "query", + "type": "string" + } + }, + "protocol": "rest", + "resources": { "changes": { - "type": "array", - "description": "The requested changes.", - "items": { - "$ref": "Change" - } + "methods": { + "create": { + "description": "Atomically update the ResourceRecordSet collection.", + "httpMethod": "POST", + "id": "dns.changes.create", + "parameterOrder": [ + "project", + "managedZone" + ], + "parameters": { + "clientOperationId": { + "description": "For mutating operation requests only. An optional identifier specified by the client. Must be unique for operation resources in the Operations collection.", + "location": "query", + "type": "string" + }, + "managedZone": { + "description": "Identifies the managed zone addressed by this request. 
Can be the managed zone name or id.", + "location": "path", + "required": true, + "type": "string" + }, + "project": { + "description": "Identifies the project addressed by this request.", + "location": "path", + "required": true, + "type": "string" + } + }, + "path": "{project}/managedZones/{managedZone}/changes", + "request": { + "$ref": "Change" + }, + "response": { + "$ref": "Change" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + ] + }, + "get": { + "description": "Fetch the representation of an existing Change.", + "httpMethod": "GET", + "id": "dns.changes.get", + "parameterOrder": [ + "project", + "managedZone", + "changeId" + ], + "parameters": { + "changeId": { + "description": "The identifier of the requested change, from a previous ResourceRecordSetsChangeResponse.", + "location": "path", + "required": true, + "type": "string" + }, + "clientOperationId": { + "description": "For mutating operation requests only. An optional identifier specified by the client. Must be unique for operation resources in the Operations collection.", + "location": "query", + "type": "string" + }, + "managedZone": { + "description": "Identifies the managed zone addressed by this request. Can be the managed zone name or id.", + "location": "path", + "required": true, + "type": "string" + }, + "project": { + "description": "Identifies the project addressed by this request.", + "location": "path", + "required": true, + "type": "string" + } + }, + "path": "{project}/managedZones/{managedZone}/changes/{changeId}", + "response": { + "$ref": "Change" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/ndev.clouddns.readonly", + "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + ] + }, + "list": { + "description": "Enumerate Changes to a ResourceRecordSet collection.", + "httpMethod": "GET", + "id": "dns.changes.list", + "parameterOrder": [ + "project", + "managedZone" + ], + "parameters": { + "managedZone": { + "description": "Identifies the managed zone addressed by this request. Can be the managed zone name or id.", + "location": "path", + "required": true, + "type": "string" + }, + "maxResults": { + "description": "Optional. Maximum number of results to be returned. If unspecified, the server will decide how many results to return.", + "format": "int32", + "location": "query", + "type": "integer" + }, + "pageToken": { + "description": "Optional. A tag returned by a previous list request that was truncated. Use this parameter to continue a previous list request.", + "location": "query", + "type": "string" + }, + "project": { + "description": "Identifies the project addressed by this request.", + "location": "path", + "required": true, + "type": "string" + }, + "sortBy": { + "default": "changeSequence", + "description": "Sorting criterion. 
The only supported value is change sequence.", + "enum": [ + "changeSequence" + ], + "enumDescriptions": [ + "" + ], + "location": "query", + "type": "string" + }, + "sortOrder": { + "description": "Sorting order direction: 'ascending' or 'descending'.", + "location": "query", + "type": "string" + } + }, + "path": "{project}/managedZones/{managedZone}/changes", + "response": { + "$ref": "ChangesListResponse" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/ndev.clouddns.readonly", + "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + ] + } + } }, - "kind": { - "type": "string", - "description": "Type of resource.", - "default": "dns#changesListResponse" + "dnsKeys": { + "methods": { + "get": { + "description": "Fetch the representation of an existing DnsKey.", + "httpMethod": "GET", + "id": "dns.dnsKeys.get", + "parameterOrder": [ + "project", + "managedZone", + "dnsKeyId" + ], + "parameters": { + "clientOperationId": { + "description": "For mutating operation requests only. An optional identifier specified by the client. Must be unique for operation resources in the Operations collection.", + "location": "query", + "type": "string" + }, + "digestType": { + "description": "An optional comma-separated list of digest types to compute and display for key signing keys. If omitted, the recommended digest type will be computed and displayed.", + "location": "query", + "type": "string" + }, + "dnsKeyId": { + "description": "The identifier of the requested DnsKey.", + "location": "path", + "required": true, + "type": "string" + }, + "managedZone": { + "description": "Identifies the managed zone addressed by this request. Can be the managed zone name or id.", + "location": "path", + "required": true, + "type": "string" + }, + "project": { + "description": "Identifies the project addressed by this request.", + "location": "path", + "required": true, + "type": "string" + } + }, + "path": "{project}/managedZones/{managedZone}/dnsKeys/{dnsKeyId}", + "response": { + "$ref": "DnsKey" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/ndev.clouddns.readonly", + "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + ] + }, + "list": { + "description": "Enumerate DnsKeys to a ResourceRecordSet collection.", + "httpMethod": "GET", + "id": "dns.dnsKeys.list", + "parameterOrder": [ + "project", + "managedZone" + ], + "parameters": { + "digestType": { + "description": "An optional comma-separated list of digest types to compute and display for key signing keys. If omitted, the recommended digest type will be computed and displayed.", + "location": "query", + "type": "string" + }, + "managedZone": { + "description": "Identifies the managed zone addressed by this request. Can be the managed zone name or id.", + "location": "path", + "required": true, + "type": "string" + }, + "maxResults": { + "description": "Optional. Maximum number of results to be returned. If unspecified, the server will decide how many results to return.", + "format": "int32", + "location": "query", + "type": "integer" + }, + "pageToken": { + "description": "Optional. A tag returned by a previous list request that was truncated. 
Use this parameter to continue a previous list request.", + "location": "query", + "type": "string" + }, + "project": { + "description": "Identifies the project addressed by this request.", + "location": "path", + "required": true, + "type": "string" + } + }, + "path": "{project}/managedZones/{managedZone}/dnsKeys", + "response": { + "$ref": "DnsKeysListResponse" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/ndev.clouddns.readonly", + "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + ] + } + } }, - "nextPageToken": { - "type": "string", - "description": "The presence of this field indicates that there exist more results following your last page of results in pagination order. To fetch them, make another list request using this value as your pagination token.\n\nIn this way you can retrieve the complete contents of even very large collections one page at a time. However, if the contents of the collection change between the first and last paginated list request, the set of all elements returned will be an inconsistent view of the collection. There is no way to retrieve a \"snapshot\" of collections larger than the maximum page size." - } - } - }, - "ManagedZone": { - "id": "ManagedZone", - "type": "object", - "description": "A zone is a subtree of the DNS namespace under one administrative responsibility. A ManagedZone is a resource that represents a DNS zone hosted by the Cloud DNS service.", - "properties": { - "creationTime": { - "type": "string", - "description": "The time that this resource was created on the server. This is in RFC3339 text format. Output only." - }, - "description": { - "type": "string", - "description": "A mutable string of at most 1024 characters associated with this resource for the user's convenience. Has no effect on the managed zone's function." - }, - "dnsName": { - "type": "string", - "description": "The DNS name of this managed zone, for instance \"example.com.\"." - }, - "id": { - "type": "string", - "description": "Unique identifier for the resource; defined by the server (output only)", - "format": "uint64" - }, - "kind": { - "type": "string", - "description": "Identifies what kind of resource this is. Value: the fixed string \"dns#managedZone\".", - "default": "dns#managedZone" - }, - "name": { - "type": "string", - "description": "User assigned name for this resource. Must be unique within the project. The name must be 1-63 characters long, must begin with a letter, end with a letter or digit, and only contain lowercase letters, digits or dashes." - }, - "nameServerSet": { - "type": "string", - "description": "Optionally specifies the NameServerSet for this ManagedZone. A NameServerSet is a set of DNS name servers that all host the same ManagedZones. Most users will leave this field unset." 
- }, - "nameServers": { - "type": "array", - "description": "Delegate your managed_zone to these virtual name servers; defined by the server (output only)", - "items": { - "type": "string" - } - } - } - }, - "ManagedZonesListResponse": { - "id": "ManagedZonesListResponse", - "type": "object", - "properties": { - "kind": { - "type": "string", - "description": "Type of resource.", - "default": "dns#managedZonesListResponse" + "managedZoneOperations": { + "methods": { + "get": { + "description": "Fetch the representation of an existing Operation.", + "httpMethod": "GET", + "id": "dns.managedZoneOperations.get", + "parameterOrder": [ + "project", + "managedZone", + "operation" + ], + "parameters": { + "clientOperationId": { + "description": "For mutating operation requests only. An optional identifier specified by the client. Must be unique for operation resources in the Operations collection.", + "location": "query", + "type": "string" + }, + "managedZone": { + "description": "Identifies the managed zone addressed by this request.", + "location": "path", + "required": true, + "type": "string" + }, + "operation": { + "description": "Identifies the operation addressed by this request.", + "location": "path", + "required": true, + "type": "string" + }, + "project": { + "description": "Identifies the project addressed by this request.", + "location": "path", + "required": true, + "type": "string" + } + }, + "path": "{project}/managedZones/{managedZone}/operations/{operation}", + "response": { + "$ref": "Operation" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/ndev.clouddns.readonly", + "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + ] + }, + "list": { + "description": "Enumerate Operations for the given ManagedZone.", + "httpMethod": "GET", + "id": "dns.managedZoneOperations.list", + "parameterOrder": [ + "project", + "managedZone" + ], + "parameters": { + "managedZone": { + "description": "Identifies the managed zone addressed by this request.", + "location": "path", + "required": true, + "type": "string" + }, + "maxResults": { + "description": "Optional. Maximum number of results to be returned. If unspecified, the server will decide how many results to return.", + "format": "int32", + "location": "query", + "type": "integer" + }, + "pageToken": { + "description": "Optional. A tag returned by a previous list request that was truncated. Use this parameter to continue a previous list request.", + "location": "query", + "type": "string" + }, + "project": { + "description": "Identifies the project addressed by this request.", + "location": "path", + "required": true, + "type": "string" + }, + "sortBy": { + "default": "startTime", + "description": "Sorting criterion. 
The only supported values are START_TIME and ID.", + "enum": [ + "id", + "startTime" + ], + "enumDescriptions": [ + "", + "" + ], + "location": "query", + "type": "string" + } + }, + "path": "{project}/managedZones/{managedZone}/operations", + "response": { + "$ref": "ManagedZoneOperationsListResponse" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/ndev.clouddns.readonly", + "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + ] + } + } }, "managedZones": { - "type": "array", - "description": "The managed zone resources.", - "items": { - "$ref": "ManagedZone" - } + "methods": { + "create": { + "description": "Create a new ManagedZone.", + "httpMethod": "POST", + "id": "dns.managedZones.create", + "parameterOrder": [ + "project" + ], + "parameters": { + "clientOperationId": { + "description": "For mutating operation requests only. An optional identifier specified by the client. Must be unique for operation resources in the Operations collection.", + "location": "query", + "type": "string" + }, + "project": { + "description": "Identifies the project addressed by this request.", + "location": "path", + "required": true, + "type": "string" + } + }, + "path": "{project}/managedZones", + "request": { + "$ref": "ManagedZone" + }, + "response": { + "$ref": "ManagedZone" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + ] + }, + "delete": { + "description": "Delete a previously created ManagedZone.", + "httpMethod": "DELETE", + "id": "dns.managedZones.delete", + "parameterOrder": [ + "project", + "managedZone" + ], + "parameters": { + "clientOperationId": { + "description": "For mutating operation requests only. An optional identifier specified by the client. Must be unique for operation resources in the Operations collection.", + "location": "query", + "type": "string" + }, + "managedZone": { + "description": "Identifies the managed zone addressed by this request. Can be the managed zone name or id.", + "location": "path", + "required": true, + "type": "string" + }, + "project": { + "description": "Identifies the project addressed by this request.", + "location": "path", + "required": true, + "type": "string" + } + }, + "path": "{project}/managedZones/{managedZone}", + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + ] + }, + "get": { + "description": "Fetch the representation of an existing ManagedZone.", + "httpMethod": "GET", + "id": "dns.managedZones.get", + "parameterOrder": [ + "project", + "managedZone" + ], + "parameters": { + "clientOperationId": { + "description": "For mutating operation requests only. An optional identifier specified by the client. Must be unique for operation resources in the Operations collection.", + "location": "query", + "type": "string" + }, + "managedZone": { + "description": "Identifies the managed zone addressed by this request. 
Can be the managed zone name or id.", + "location": "path", + "required": true, + "type": "string" + }, + "project": { + "description": "Identifies the project addressed by this request.", + "location": "path", + "required": true, + "type": "string" + } + }, + "path": "{project}/managedZones/{managedZone}", + "response": { + "$ref": "ManagedZone" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/ndev.clouddns.readonly", + "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + ] + }, + "list": { + "description": "Enumerate ManagedZones that have been created but not yet deleted.", + "httpMethod": "GET", + "id": "dns.managedZones.list", + "parameterOrder": [ + "project" + ], + "parameters": { + "dnsName": { + "description": "Restricts the list to return only zones with this domain name.", + "location": "query", + "type": "string" + }, + "maxResults": { + "description": "Optional. Maximum number of results to be returned. If unspecified, the server will decide how many results to return.", + "format": "int32", + "location": "query", + "type": "integer" + }, + "pageToken": { + "description": "Optional. A tag returned by a previous list request that was truncated. Use this parameter to continue a previous list request.", + "location": "query", + "type": "string" + }, + "project": { + "description": "Identifies the project addressed by this request.", + "location": "path", + "required": true, + "type": "string" + } + }, + "path": "{project}/managedZones", + "response": { + "$ref": "ManagedZonesListResponse" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/ndev.clouddns.readonly", + "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + ] + }, + "patch": { + "description": "Apply a partial update to an existing ManagedZone.", + "httpMethod": "PATCH", + "id": "dns.managedZones.patch", + "parameterOrder": [ + "project", + "managedZone" + ], + "parameters": { + "clientOperationId": { + "description": "For mutating operation requests only. An optional identifier specified by the client. Must be unique for operation resources in the Operations collection.", + "location": "query", + "type": "string" + }, + "managedZone": { + "description": "Identifies the managed zone addressed by this request. Can be the managed zone name or id.", + "location": "path", + "required": true, + "type": "string" + }, + "project": { + "description": "Identifies the project addressed by this request.", + "location": "path", + "required": true, + "type": "string" + } + }, + "path": "{project}/managedZones/{managedZone}", + "request": { + "$ref": "ManagedZone" + }, + "response": { + "$ref": "Operation" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + ] + }, + "update": { + "description": "Update an existing ManagedZone.", + "httpMethod": "PUT", + "id": "dns.managedZones.update", + "parameterOrder": [ + "project", + "managedZone" + ], + "parameters": { + "clientOperationId": { + "description": "For mutating operation requests only. An optional identifier specified by the client. Must be unique for operation resources in the Operations collection.", + "location": "query", + "type": "string" + }, + "managedZone": { + "description": "Identifies the managed zone addressed by this request. 
Can be the managed zone name or id.", + "location": "path", + "required": true, + "type": "string" + }, + "project": { + "description": "Identifies the project addressed by this request.", + "location": "path", + "required": true, + "type": "string" + } + }, + "path": "{project}/managedZones/{managedZone}", + "request": { + "$ref": "ManagedZone" + }, + "response": { + "$ref": "Operation" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + ] + } + } }, - "nextPageToken": { - "type": "string", - "description": "The presence of this field indicates that there exist more results following your last page of results in pagination order. To fetch them, make another list request using this value as your page token.\n\nIn this way you can retrieve the complete contents of even very large collections one page at a time. However, if the contents of the collection change between the first and last paginated list request, the set of all elements returned will be an inconsistent view of the collection. There is no way to retrieve a consistent snapshot of a collection larger than the maximum page size." + "projects": { + "methods": { + "get": { + "description": "Fetch the representation of an existing Project.", + "httpMethod": "GET", + "id": "dns.projects.get", + "parameterOrder": [ + "project" + ], + "parameters": { + "clientOperationId": { + "description": "For mutating operation requests only. An optional identifier specified by the client. Must be unique for operation resources in the Operations collection.", + "location": "query", + "type": "string" + }, + "project": { + "description": "Identifies the project addressed by this request.", + "location": "path", + "required": true, + "type": "string" + } + }, + "path": "{project}", + "response": { + "$ref": "Project" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/ndev.clouddns.readonly", + "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + ] + } + } + }, + "resourceRecordSets": { + "methods": { + "list": { + "description": "Enumerate ResourceRecordSets that have been created but not yet deleted.", + "httpMethod": "GET", + "id": "dns.resourceRecordSets.list", + "parameterOrder": [ + "project", + "managedZone" + ], + "parameters": { + "managedZone": { + "description": "Identifies the managed zone addressed by this request. Can be the managed zone name or id.", + "location": "path", + "required": true, + "type": "string" + }, + "maxResults": { + "description": "Optional. Maximum number of results to be returned. If unspecified, the server will decide how many results to return.", + "format": "int32", + "location": "query", + "type": "integer" + }, + "name": { + "description": "Restricts the list to return only records with this fully qualified domain name.", + "location": "query", + "type": "string" + }, + "pageToken": { + "description": "Optional. A tag returned by a previous list request that was truncated. Use this parameter to continue a previous list request.", + "location": "query", + "type": "string" + }, + "project": { + "description": "Identifies the project addressed by this request.", + "location": "path", + "required": true, + "type": "string" + }, + "type": { + "description": "Restricts the list to return only records of this type. 
If present, the \"name\" parameter must also be present.", + "location": "query", + "type": "string" + } + }, + "path": "{project}/managedZones/{managedZone}/rrsets", + "response": { + "$ref": "ResourceRecordSetsListResponse" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/ndev.clouddns.readonly", + "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + ] + } + } } - } }, - "Project": { - "id": "Project", - "type": "object", - "description": "A project resource. The project is a top level container for resources including Cloud DNS ManagedZones. Projects can be created only in the APIs console.", - "properties": { - "id": { - "type": "string", - "description": "User assigned unique identifier for the resource (output only)." + "revision": "20180808", + "rootUrl": "https://www.googleapis.com/", + "schemas": { + "Change": { + "description": "An atomic update to a collection of ResourceRecordSets.", + "id": "Change", + "properties": { + "additions": { + "description": "Which ResourceRecordSets to add?", + "items": { + "$ref": "ResourceRecordSet" + }, + "type": "array" + }, + "deletions": { + "description": "Which ResourceRecordSets to remove? Must match existing data exactly.", + "items": { + "$ref": "ResourceRecordSet" + }, + "type": "array" + }, + "id": { + "description": "Unique identifier for the resource; defined by the server (output only).", + "type": "string" + }, + "isServing": { + "description": "If the DNS queries for the zone will be served.", + "type": "boolean" + }, + "kind": { + "default": "dns#change", + "description": "Identifies what kind of resource this is. Value: the fixed string \"dns#change\".", + "type": "string" + }, + "startTime": { + "description": "The time that this operation was started by the server (output only). This is in RFC3339 text format.", + "type": "string" + }, + "status": { + "description": "Status of the operation (output only).", + "enum": [ + "done", + "pending" + ], + "enumDescriptions": [ + "", + "" + ], + "type": "string" + } + }, + "type": "object" }, - "kind": { - "type": "string", - "description": "Identifies what kind of resource this is. Value: the fixed string \"dns#project\".", - "default": "dns#project" + "ChangesListResponse": { + "description": "The response to a request to enumerate Changes to a ResourceRecordSets collection.", + "id": "ChangesListResponse", + "properties": { + "changes": { + "description": "The requested changes.", + "items": { + "$ref": "Change" + }, + "type": "array" + }, + "header": { + "$ref": "ResponseHeader" + }, + "kind": { + "default": "dns#changesListResponse", + "description": "Type of resource.", + "type": "string" + }, + "nextPageToken": { + "description": "The presence of this field indicates that there exist more results following your last page of results in pagination order. To fetch them, make another list request using this value as your pagination token.\n\nIn this way you can retrieve the complete contents of even very large collections one page at a time. However, if the contents of the collection change between the first and last paginated list request, the set of all elements returned will be an inconsistent view of the collection. 
There is no way to retrieve a \"snapshot\" of collections larger than the maximum page size.", + "type": "string" + } + }, + "type": "object" }, - "number": { - "type": "string", - "description": "Unique numeric identifier for the resource; defined by the server (output only).", - "format": "uint64" + "DnsKey": { + "description": "A DNSSEC key pair.", + "id": "DnsKey", + "properties": { + "algorithm": { + "description": "String mnemonic specifying the DNSSEC algorithm of this key. Immutable after creation time.", + "enum": [ + "ecdsap256sha256", + "ecdsap384sha384", + "rsasha1", + "rsasha256", + "rsasha512" + ], + "enumDescriptions": [ + "", + "", + "", + "", + "" + ], + "type": "string" + }, + "creationTime": { + "description": "The time that this resource was created in the control plane. This is in RFC3339 text format. Output only.", + "type": "string" + }, + "description": { + "description": "A mutable string of at most 1024 characters associated with this resource for the user's convenience. Has no effect on the resource's function.", + "type": "string" + }, + "digests": { + "description": "Cryptographic hashes of the DNSKEY resource record associated with this DnsKey. These digests are needed to construct a DS record that points at this DNS key. Output only.", + "items": { + "$ref": "DnsKeyDigest" + }, + "type": "array" + }, + "id": { + "description": "Unique identifier for the resource; defined by the server (output only).", + "type": "string" + }, + "isActive": { + "description": "Active keys will be used to sign subsequent changes to the ManagedZone. Inactive keys will still be present as DNSKEY Resource Records for the use of resolvers validating existing signatures.", + "type": "boolean" + }, + "keyLength": { + "description": "Length of the key in bits. Specified at creation time then immutable.", + "format": "uint32", + "type": "integer" + }, + "keyTag": { + "description": "The key tag is a non-cryptographic hash of the a DNSKEY resource record associated with this DnsKey. The key tag can be used to identify a DNSKEY more quickly (but it is not a unique identifier). In particular, the key tag is used in a parent zone's DS record to point at the DNSKEY in this child ManagedZone. The key tag is a number in the range [0, 65535] and the algorithm to calculate it is specified in RFC4034 Appendix B. Output only.", + "format": "int32", + "type": "integer" + }, + "kind": { + "default": "dns#dnsKey", + "description": "Identifies what kind of resource this is. Value: the fixed string \"dns#dnsKey\".", + "type": "string" + }, + "publicKey": { + "description": "Base64 encoded public half of this key. Output only.", + "type": "string" + }, + "type": { + "description": "One of \"KEY_SIGNING\" or \"ZONE_SIGNING\". Keys of type KEY_SIGNING have the Secure Entry Point flag set and, when active, will be used to sign only resource record sets of type DNSKEY. Otherwise, the Secure Entry Point flag will be cleared and this key will be used to sign only resource record sets of other types. Immutable after creation time.", + "enum": [ + "keySigning", + "zoneSigning" + ], + "enumDescriptions": [ + "", + "" + ], + "type": "string" + } + }, + "type": "object" }, - "quota": { - "$ref": "Quota", - "description": "Quotas assigned to this project (output only)." + "DnsKeyDigest": { + "id": "DnsKeyDigest", + "properties": { + "digest": { + "description": "The base-16 encoded bytes of this digest. 
Suitable for use in a DS resource record.", + "type": "string" + }, + "type": { + "description": "Specifies the algorithm used to calculate this digest.", + "enum": [ + "sha1", + "sha256", + "sha384" + ], + "enumDescriptions": [ + "", + "", + "" + ], + "type": "string" + } + }, + "type": "object" + }, + "DnsKeySpec": { + "description": "Parameters for DnsKey key generation. Used for generating initial keys for a new ManagedZone and as default when adding a new DnsKey.", + "id": "DnsKeySpec", + "properties": { + "algorithm": { + "description": "String mnemonic specifying the DNSSEC algorithm of this key.", + "enum": [ + "ecdsap256sha256", + "ecdsap384sha384", + "rsasha1", + "rsasha256", + "rsasha512" + ], + "enumDescriptions": [ + "", + "", + "", + "", + "" + ], + "type": "string" + }, + "keyLength": { + "description": "Length of the keys in bits.", + "format": "uint32", + "type": "integer" + }, + "keyType": { + "description": "One of \"KEY_SIGNING\" or \"ZONE_SIGNING\". Keys of type KEY_SIGNING have the Secure Entry Point flag set and, when active, will be used to sign only resource record sets of type DNSKEY. Otherwise, the Secure Entry Point flag will be cleared and this key will be used to sign only resource record sets of other types.", + "enum": [ + "keySigning", + "zoneSigning" + ], + "enumDescriptions": [ + "", + "" + ], + "type": "string" + }, + "kind": { + "default": "dns#dnsKeySpec", + "description": "Identifies what kind of resource this is. Value: the fixed string \"dns#dnsKeySpec\".", + "type": "string" + } + }, + "type": "object" + }, + "DnsKeysListResponse": { + "description": "The response to a request to enumerate DnsKeys in a ManagedZone.", + "id": "DnsKeysListResponse", + "properties": { + "dnsKeys": { + "description": "The requested resources.", + "items": { + "$ref": "DnsKey" + }, + "type": "array" + }, + "header": { + "$ref": "ResponseHeader" + }, + "kind": { + "default": "dns#dnsKeysListResponse", + "description": "Type of resource.", + "type": "string" + }, + "nextPageToken": { + "description": "The presence of this field indicates that there exist more results following your last page of results in pagination order. To fetch them, make another list request using this value as your pagination token.\n\nIn this way you can retrieve the complete contents of even very large collections one page at a time. However, if the contents of the collection change between the first and last paginated list request, the set of all elements returned will be an inconsistent view of the collection. There is no way to retrieve a \"snapshot\" of collections larger than the maximum page size.", + "type": "string" + } + }, + "type": "object" + }, + "ManagedZone": { + "description": "A zone is a subtree of the DNS namespace under one administrative responsibility. A ManagedZone is a resource that represents a DNS zone hosted by the Cloud DNS service.", + "id": "ManagedZone", + "properties": { + "creationTime": { + "description": "The time that this resource was created on the server. This is in RFC3339 text format. Output only.", + "type": "string" + }, + "description": { + "description": "A mutable string of at most 1024 characters associated with this resource for the user's convenience. Has no effect on the managed zone's function.", + "type": "string" + }, + "dnsName": { + "description": "The DNS name of this managed zone, for instance \"example.com.\".", + "type": "string" + }, + "dnssecConfig": { + "$ref": "ManagedZoneDnsSecConfig", + "description": "DNSSEC configuration." 
+ }, + "id": { + "description": "Unique identifier for the resource; defined by the server (output only)", + "format": "uint64", + "type": "string" + }, + "kind": { + "default": "dns#managedZone", + "description": "Identifies what kind of resource this is. Value: the fixed string \"dns#managedZone\".", + "type": "string" + }, + "labels": { + "additionalProperties": { + "type": "string" + }, + "description": "User labels.", + "type": "object" + }, + "name": { + "description": "User assigned name for this resource. Must be unique within the project. The name must be 1-63 characters long, must begin with a letter, end with a letter or digit, and only contain lowercase letters, digits or dashes.", + "type": "string" + }, + "nameServerSet": { + "description": "Optionally specifies the NameServerSet for this ManagedZone. A NameServerSet is a set of DNS name servers that all host the same ManagedZones. Most users will leave this field unset.", + "type": "string" + }, + "nameServers": { + "description": "Delegate your managed_zone to these virtual name servers; defined by the server (output only)", + "items": { + "type": "string" + }, + "type": "array" + } + }, + "type": "object" + }, + "ManagedZoneDnsSecConfig": { + "id": "ManagedZoneDnsSecConfig", + "properties": { + "defaultKeySpecs": { + "description": "Specifies parameters that will be used for generating initial DnsKeys for this ManagedZone. Output only while state is not OFF.", + "items": { + "$ref": "DnsKeySpec" + }, + "type": "array" + }, + "kind": { + "default": "dns#managedZoneDnsSecConfig", + "description": "Identifies what kind of resource this is. Value: the fixed string \"dns#managedZoneDnsSecConfig\".", + "type": "string" + }, + "nonExistence": { + "description": "Specifies the mechanism used to provide authenticated denial-of-existence responses. Output only while state is not OFF.", + "enum": [ + "nsec", + "nsec3" + ], + "enumDescriptions": [ + "", + "" + ], + "type": "string" + }, + "state": { + "description": "Specifies whether DNSSEC is enabled, and what mode it is in.", + "enum": [ + "off", + "on", + "transfer" + ], + "enumDescriptions": [ + "", + "", + "" + ], + "type": "string" + } + }, + "type": "object" + }, + "ManagedZoneOperationsListResponse": { + "id": "ManagedZoneOperationsListResponse", + "properties": { + "header": { + "$ref": "ResponseHeader" + }, + "kind": { + "default": "dns#managedZoneOperationsListResponse", + "description": "Type of resource.", + "type": "string" + }, + "nextPageToken": { + "description": "The presence of this field indicates that there exist more results following your last page of results in pagination order. To fetch them, make another list request using this value as your page token.\n\nIn this way you can retrieve the complete contents of even very large collections one page at a time. However, if the contents of the collection change between the first and last paginated list request, the set of all elements returned will be an inconsistent view of the collection. 
There is no way to retrieve a consistent snapshot of a collection larger than the maximum page size.", + "type": "string" + }, + "operations": { + "description": "The operation resources.", + "items": { + "$ref": "Operation" + }, + "type": "array" + } + }, + "type": "object" + }, + "ManagedZonesListResponse": { + "id": "ManagedZonesListResponse", + "properties": { + "header": { + "$ref": "ResponseHeader" + }, + "kind": { + "default": "dns#managedZonesListResponse", + "description": "Type of resource.", + "type": "string" + }, + "managedZones": { + "description": "The managed zone resources.", + "items": { + "$ref": "ManagedZone" + }, + "type": "array" + }, + "nextPageToken": { + "description": "The presence of this field indicates that there exist more results following your last page of results in pagination order. To fetch them, make another list request using this value as your page token.\n\nIn this way you can retrieve the complete contents of even very large collections one page at a time. However, if the contents of the collection change between the first and last paginated list request, the set of all elements returned will be an inconsistent view of the collection. There is no way to retrieve a consistent snapshot of a collection larger than the maximum page size.", + "type": "string" + } + }, + "type": "object" + }, + "Operation": { + "description": "An operation represents a successful mutation performed on a Cloud DNS resource. Operations provide: - An audit log of server resource mutations. - A way to recover/retry API calls in the case where the response is never received by the caller. Use the caller specified client_operation_id.", + "id": "Operation", + "properties": { + "dnsKeyContext": { + "$ref": "OperationDnsKeyContext", + "description": "Only populated if the operation targeted a DnsKey (output only)." + }, + "id": { + "description": "Unique identifier for the resource. This is the client_operation_id if the client specified it when the mutation was initiated, otherwise, it is generated by the server. The name must be 1-63 characters long and match the regular expression [-a-z0-9]? (output only)", + "type": "string" + }, + "kind": { + "default": "dns#operation", + "description": "Identifies what kind of resource this is. Value: the fixed string \"dns#operation\".", + "type": "string" + }, + "startTime": { + "description": "The time that this operation was started by the server. This is in RFC3339 text format (output only).", + "type": "string" + }, + "status": { + "description": "Status of the operation. Can be one of the following: \"PENDING\" or \"DONE\" (output only).", + "enum": [ + "done", + "pending" + ], + "enumDescriptions": [ + "", + "" + ], + "type": "string" + }, + "type": { + "description": "Type of the operation. Operations include insert, update, and delete (output only).", + "type": "string" + }, + "user": { + "description": "User who requested the operation, for example: user@example.com. cloud-dns-system for operations automatically done by the system. (output only)", + "type": "string" + }, + "zoneContext": { + "$ref": "OperationManagedZoneContext", + "description": "Only populated if the operation targeted a ManagedZone (output only)." + } + }, + "type": "object" + }, + "OperationDnsKeyContext": { + "id": "OperationDnsKeyContext", + "properties": { + "newValue": { + "$ref": "DnsKey", + "description": "The post-operation DnsKey resource." + }, + "oldValue": { + "$ref": "DnsKey", + "description": "The pre-operation DnsKey resource." 
+ } + }, + "type": "object" + }, + "OperationManagedZoneContext": { + "id": "OperationManagedZoneContext", + "properties": { + "newValue": { + "$ref": "ManagedZone", + "description": "The post-operation ManagedZone resource." + }, + "oldValue": { + "$ref": "ManagedZone", + "description": "The pre-operation ManagedZone resource." + } + }, + "type": "object" + }, + "Project": { + "description": "A project resource. The project is a top level container for resources including Cloud DNS ManagedZones. Projects can be created only in the APIs console.", + "id": "Project", + "properties": { + "id": { + "description": "User assigned unique identifier for the resource (output only).", + "type": "string" + }, + "kind": { + "default": "dns#project", + "description": "Identifies what kind of resource this is. Value: the fixed string \"dns#project\".", + "type": "string" + }, + "number": { + "description": "Unique numeric identifier for the resource; defined by the server (output only).", + "format": "uint64", + "type": "string" + }, + "quota": { + "$ref": "Quota", + "description": "Quotas assigned to this project (output only)." + } + }, + "type": "object" + }, + "Quota": { + "description": "Limits associated with a Project.", + "id": "Quota", + "properties": { + "dnsKeysPerManagedZone": { + "description": "Maximum allowed number of DnsKeys per ManagedZone.", + "format": "int32", + "type": "integer" + }, + "kind": { + "default": "dns#quota", + "description": "Identifies what kind of resource this is. Value: the fixed string \"dns#quota\".", + "type": "string" + }, + "managedZones": { + "description": "Maximum allowed number of managed zones in the project.", + "format": "int32", + "type": "integer" + }, + "resourceRecordsPerRrset": { + "description": "Maximum allowed number of ResourceRecords per ResourceRecordSet.", + "format": "int32", + "type": "integer" + }, + "rrsetAdditionsPerChange": { + "description": "Maximum allowed number of ResourceRecordSets to add per ChangesCreateRequest.", + "format": "int32", + "type": "integer" + }, + "rrsetDeletionsPerChange": { + "description": "Maximum allowed number of ResourceRecordSets to delete per ChangesCreateRequest.", + "format": "int32", + "type": "integer" + }, + "rrsetsPerManagedZone": { + "description": "Maximum allowed number of ResourceRecordSets per zone in the project.", + "format": "int32", + "type": "integer" + }, + "totalRrdataSizePerChange": { + "description": "Maximum allowed size for total rrdata in one ChangesCreateRequest in bytes.", + "format": "int32", + "type": "integer" + }, + "whitelistedKeySpecs": { + "description": "DNSSEC algorithm and key length types that can be used for DnsKeys.", + "items": { + "$ref": "DnsKeySpec" + }, + "type": "array" + } + }, + "type": "object" + }, + "ResourceRecordSet": { + "description": "A unit of data that will be returned by the DNS servers.", + "id": "ResourceRecordSet", + "properties": { + "kind": { + "default": "dns#resourceRecordSet", + "description": "Identifies what kind of resource this is. 
Value: the fixed string \"dns#resourceRecordSet\".", + "type": "string" + }, + "name": { + "description": "For example, www.example.com.", + "type": "string" + }, + "rrdatas": { + "description": "As defined in RFC 1035 (section 5) and RFC 1034 (section 3.6.1).", + "items": { + "type": "string" + }, + "type": "array" + }, + "signatureRrdatas": { + "description": "As defined in RFC 4034 (section 3.2).", + "items": { + "type": "string" + }, + "type": "array" + }, + "ttl": { + "description": "Number of seconds that this ResourceRecordSet can be cached by resolvers.", + "format": "int32", + "type": "integer" + }, + "type": { + "description": "The identifier of a supported record type, for example, A, AAAA, MX, TXT, and so on.", + "type": "string" + } + }, + "type": "object" + }, + "ResourceRecordSetsListResponse": { + "id": "ResourceRecordSetsListResponse", + "properties": { + "header": { + "$ref": "ResponseHeader" + }, + "kind": { + "default": "dns#resourceRecordSetsListResponse", + "description": "Type of resource.", + "type": "string" + }, + "nextPageToken": { + "description": "The presence of this field indicates that there exist more results following your last page of results in pagination order. To fetch them, make another list request using this value as your pagination token.\n\nIn this way you can retrieve the complete contents of even very large collections one page at a time. However, if the contents of the collection change between the first and last paginated list request, the set of all elements returned will be an inconsistent view of the collection. There is no way to retrieve a consistent snapshot of a collection larger than the maximum page size.", + "type": "string" + }, + "rrsets": { + "description": "The resource record set resources.", + "items": { + "$ref": "ResourceRecordSet" + }, + "type": "array" + } + }, + "type": "object" + }, + "ResponseHeader": { + "description": "Elements common to every response.", + "id": "ResponseHeader", + "properties": { + "operationId": { + "description": "For mutating operation requests that completed successfully. This is the client_operation_id if the client specified it, otherwise it is generated by the server (output only).", + "type": "string" + } + }, + "type": "object" } - } }, - "Quota": { - "id": "Quota", - "type": "object", - "description": "Limits associated with a Project.", - "properties": { - "kind": { - "type": "string", - "description": "Identifies what kind of resource this is. 
Value: the fixed string \"dns#quota\".", - "default": "dns#quota" - }, - "managedZones": { - "type": "integer", - "description": "Maximum allowed number of managed zones in the project.", - "format": "int32" - }, - "resourceRecordsPerRrset": { - "type": "integer", - "description": "Maximum allowed number of ResourceRecords per ResourceRecordSet.", - "format": "int32" - }, - "rrsetAdditionsPerChange": { - "type": "integer", - "description": "Maximum allowed number of ResourceRecordSets to add per ChangesCreateRequest.", - "format": "int32" - }, - "rrsetDeletionsPerChange": { - "type": "integer", - "description": "Maximum allowed number of ResourceRecordSets to delete per ChangesCreateRequest.", - "format": "int32" - }, - "rrsetsPerManagedZone": { - "type": "integer", - "description": "Maximum allowed number of ResourceRecordSets per zone in the project.", - "format": "int32" - }, - "totalRrdataSizePerChange": { - "type": "integer", - "description": "Maximum allowed size for total rrdata in one ChangesCreateRequest in bytes.", - "format": "int32" - } - } - }, - "ResourceRecordSet": { - "id": "ResourceRecordSet", - "type": "object", - "description": "A unit of data that will be returned by the DNS servers.", - "properties": { - "kind": { - "type": "string", - "description": "Identifies what kind of resource this is. Value: the fixed string \"dns#resourceRecordSet\".", - "default": "dns#resourceRecordSet" - }, - "name": { - "type": "string", - "description": "For example, www.example.com." - }, - "rrdatas": { - "type": "array", - "description": "As defined in RFC 1035 (section 5) and RFC 1034 (section 3.6.1).", - "items": { - "type": "string" - } - }, - "ttl": { - "type": "integer", - "description": "Number of seconds that this ResourceRecordSet can be cached by resolvers.", - "format": "int32" - }, - "type": { - "type": "string", - "description": "The identifier of a supported record type, for example, A, AAAA, MX, TXT, and so on." - } - } - }, - "ResourceRecordSetsListResponse": { - "id": "ResourceRecordSetsListResponse", - "type": "object", - "properties": { - "kind": { - "type": "string", - "description": "Type of resource.", - "default": "dns#resourceRecordSetsListResponse" - }, - "nextPageToken": { - "type": "string", - "description": "The presence of this field indicates that there exist more results following your last page of results in pagination order. To fetch them, make another list request using this value as your pagination token.\n\nIn this way you can retrieve the complete contents of even very large collections one page at a time. However, if the contents of the collection change between the first and last paginated list request, the set of all elements returned will be an inconsistent view of the collection. There is no way to retrieve a consistent snapshot of a collection larger than the maximum page size." - }, - "rrsets": { - "type": "array", - "description": "The resource record set resources.", - "items": { - "$ref": "ResourceRecordSet" - } - } - } - } - }, - "resources": { - "changes": { - "methods": { - "create": { - "id": "dns.changes.create", - "path": "{project}/managedZones/{managedZone}/changes", - "httpMethod": "POST", - "description": "Atomically update the ResourceRecordSet collection.", - "parameters": { - "managedZone": { - "type": "string", - "description": "Identifies the managed zone addressed by this request. 
Can be the managed zone name or id.", - "required": true, - "location": "path" - }, - "project": { - "type": "string", - "description": "Identifies the project addressed by this request.", - "required": true, - "location": "path" - } - }, - "parameterOrder": [ - "project", - "managedZone" - ], - "request": { - "$ref": "Change" - }, - "response": { - "$ref": "Change" - }, - "scopes": [ - "https://www.googleapis.com/auth/cloud-platform", - "https://www.googleapis.com/auth/ndev.clouddns.readwrite" - ] - }, - "get": { - "id": "dns.changes.get", - "path": "{project}/managedZones/{managedZone}/changes/{changeId}", - "httpMethod": "GET", - "description": "Fetch the representation of an existing Change.", - "parameters": { - "changeId": { - "type": "string", - "description": "The identifier of the requested change, from a previous ResourceRecordSetsChangeResponse.", - "required": true, - "location": "path" - }, - "managedZone": { - "type": "string", - "description": "Identifies the managed zone addressed by this request. Can be the managed zone name or id.", - "required": true, - "location": "path" - }, - "project": { - "type": "string", - "description": "Identifies the project addressed by this request.", - "required": true, - "location": "path" - } - }, - "parameterOrder": [ - "project", - "managedZone", - "changeId" - ], - "response": { - "$ref": "Change" - }, - "scopes": [ - "https://www.googleapis.com/auth/cloud-platform", - "https://www.googleapis.com/auth/cloud-platform.read-only", - "https://www.googleapis.com/auth/ndev.clouddns.readonly", - "https://www.googleapis.com/auth/ndev.clouddns.readwrite" - ] - }, - "list": { - "id": "dns.changes.list", - "path": "{project}/managedZones/{managedZone}/changes", - "httpMethod": "GET", - "description": "Enumerate Changes to a ResourceRecordSet collection.", - "parameters": { - "managedZone": { - "type": "string", - "description": "Identifies the managed zone addressed by this request. Can be the managed zone name or id.", - "required": true, - "location": "path" - }, - "maxResults": { - "type": "integer", - "description": "Optional. Maximum number of results to be returned. If unspecified, the server will decide how many results to return.", - "format": "int32", - "location": "query" - }, - "pageToken": { - "type": "string", - "description": "Optional. A tag returned by a previous list request that was truncated. Use this parameter to continue a previous list request.", - "location": "query" - }, - "project": { - "type": "string", - "description": "Identifies the project addressed by this request.", - "required": true, - "location": "path" - }, - "sortBy": { - "type": "string", - "description": "Sorting criterion. 
The only supported value is change sequence.", - "default": "changeSequence", - "enum": [ - "changeSequence" - ], - "enumDescriptions": [ - "" - ], - "location": "query" - }, - "sortOrder": { - "type": "string", - "description": "Sorting order direction: 'ascending' or 'descending'.", - "location": "query" - } - }, - "parameterOrder": [ - "project", - "managedZone" - ], - "response": { - "$ref": "ChangesListResponse" - }, - "scopes": [ - "https://www.googleapis.com/auth/cloud-platform", - "https://www.googleapis.com/auth/cloud-platform.read-only", - "https://www.googleapis.com/auth/ndev.clouddns.readonly", - "https://www.googleapis.com/auth/ndev.clouddns.readwrite" - ] - } - } - }, - "managedZones": { - "methods": { - "create": { - "id": "dns.managedZones.create", - "path": "{project}/managedZones", - "httpMethod": "POST", - "description": "Create a new ManagedZone.", - "parameters": { - "project": { - "type": "string", - "description": "Identifies the project addressed by this request.", - "required": true, - "location": "path" - } - }, - "parameterOrder": [ - "project" - ], - "request": { - "$ref": "ManagedZone" - }, - "response": { - "$ref": "ManagedZone" - }, - "scopes": [ - "https://www.googleapis.com/auth/cloud-platform", - "https://www.googleapis.com/auth/ndev.clouddns.readwrite" - ] - }, - "delete": { - "id": "dns.managedZones.delete", - "path": "{project}/managedZones/{managedZone}", - "httpMethod": "DELETE", - "description": "Delete a previously created ManagedZone.", - "parameters": { - "managedZone": { - "type": "string", - "description": "Identifies the managed zone addressed by this request. Can be the managed zone name or id.", - "required": true, - "location": "path" - }, - "project": { - "type": "string", - "description": "Identifies the project addressed by this request.", - "required": true, - "location": "path" - } - }, - "parameterOrder": [ - "project", - "managedZone" - ], - "scopes": [ - "https://www.googleapis.com/auth/cloud-platform", - "https://www.googleapis.com/auth/ndev.clouddns.readwrite" - ] - }, - "get": { - "id": "dns.managedZones.get", - "path": "{project}/managedZones/{managedZone}", - "httpMethod": "GET", - "description": "Fetch the representation of an existing ManagedZone.", - "parameters": { - "managedZone": { - "type": "string", - "description": "Identifies the managed zone addressed by this request. Can be the managed zone name or id.", - "required": true, - "location": "path" - }, - "project": { - "type": "string", - "description": "Identifies the project addressed by this request.", - "required": true, - "location": "path" - } - }, - "parameterOrder": [ - "project", - "managedZone" - ], - "response": { - "$ref": "ManagedZone" - }, - "scopes": [ - "https://www.googleapis.com/auth/cloud-platform", - "https://www.googleapis.com/auth/cloud-platform.read-only", - "https://www.googleapis.com/auth/ndev.clouddns.readonly", - "https://www.googleapis.com/auth/ndev.clouddns.readwrite" - ] - }, - "list": { - "id": "dns.managedZones.list", - "path": "{project}/managedZones", - "httpMethod": "GET", - "description": "Enumerate ManagedZones that have been created but not yet deleted.", - "parameters": { - "dnsName": { - "type": "string", - "description": "Restricts the list to return only zones with this domain name.", - "location": "query" - }, - "maxResults": { - "type": "integer", - "description": "Optional. Maximum number of results to be returned. 
If unspecified, the server will decide how many results to return.", - "format": "int32", - "location": "query" - }, - "pageToken": { - "type": "string", - "description": "Optional. A tag returned by a previous list request that was truncated. Use this parameter to continue a previous list request.", - "location": "query" - }, - "project": { - "type": "string", - "description": "Identifies the project addressed by this request.", - "required": true, - "location": "path" - } - }, - "parameterOrder": [ - "project" - ], - "response": { - "$ref": "ManagedZonesListResponse" - }, - "scopes": [ - "https://www.googleapis.com/auth/cloud-platform", - "https://www.googleapis.com/auth/cloud-platform.read-only", - "https://www.googleapis.com/auth/ndev.clouddns.readonly", - "https://www.googleapis.com/auth/ndev.clouddns.readwrite" - ] - } - } - }, - "projects": { - "methods": { - "get": { - "id": "dns.projects.get", - "path": "{project}", - "httpMethod": "GET", - "description": "Fetch the representation of an existing Project.", - "parameters": { - "project": { - "type": "string", - "description": "Identifies the project addressed by this request.", - "required": true, - "location": "path" - } - }, - "parameterOrder": [ - "project" - ], - "response": { - "$ref": "Project" - }, - "scopes": [ - "https://www.googleapis.com/auth/cloud-platform", - "https://www.googleapis.com/auth/cloud-platform.read-only", - "https://www.googleapis.com/auth/ndev.clouddns.readonly", - "https://www.googleapis.com/auth/ndev.clouddns.readwrite" - ] - } - } - }, - "resourceRecordSets": { - "methods": { - "list": { - "id": "dns.resourceRecordSets.list", - "path": "{project}/managedZones/{managedZone}/rrsets", - "httpMethod": "GET", - "description": "Enumerate ResourceRecordSets that have been created but not yet deleted.", - "parameters": { - "managedZone": { - "type": "string", - "description": "Identifies the managed zone addressed by this request. Can be the managed zone name or id.", - "required": true, - "location": "path" - }, - "maxResults": { - "type": "integer", - "description": "Optional. Maximum number of results to be returned. If unspecified, the server will decide how many results to return.", - "format": "int32", - "location": "query" - }, - "name": { - "type": "string", - "description": "Restricts the list to return only records with this fully qualified domain name.", - "location": "query" - }, - "pageToken": { - "type": "string", - "description": "Optional. A tag returned by a previous list request that was truncated. Use this parameter to continue a previous list request.", - "location": "query" - }, - "project": { - "type": "string", - "description": "Identifies the project addressed by this request.", - "required": true, - "location": "path" - }, - "type": { - "type": "string", - "description": "Restricts the list to return only records of this type. 
If present, the \"name\" parameter must also be present.", - "location": "query" - } - }, - "parameterOrder": [ - "project", - "managedZone" - ], - "response": { - "$ref": "ResourceRecordSetsListResponse" - }, - "scopes": [ - "https://www.googleapis.com/auth/cloud-platform", - "https://www.googleapis.com/auth/cloud-platform.read-only", - "https://www.googleapis.com/auth/ndev.clouddns.readonly", - "https://www.googleapis.com/auth/ndev.clouddns.readwrite" - ] - } - } - } - } -} + "servicePath": "dns/v1/projects/", + "title": "Google Cloud DNS API", + "version": "v1" +} \ No newline at end of file diff --git a/vendor/google.golang.org/api/dns/v1/dns-gen.go b/vendor/google.golang.org/api/dns/v1/dns-gen.go index fa66410d..629e1ebf 100644 --- a/vendor/google.golang.org/api/dns/v1/dns-gen.go +++ b/vendor/google.golang.org/api/dns/v1/dns-gen.go @@ -66,6 +66,8 @@ func New(client *http.Client) (*Service, error) { } s := &Service{client: client, BasePath: basePath} s.Changes = NewChangesService(s) + s.DnsKeys = NewDnsKeysService(s) + s.ManagedZoneOperations = NewManagedZoneOperationsService(s) s.ManagedZones = NewManagedZonesService(s) s.Projects = NewProjectsService(s) s.ResourceRecordSets = NewResourceRecordSetsService(s) @@ -79,6 +81,10 @@ type Service struct { Changes *ChangesService + DnsKeys *DnsKeysService + + ManagedZoneOperations *ManagedZoneOperationsService + ManagedZones *ManagedZonesService Projects *ProjectsService @@ -102,6 +108,24 @@ type ChangesService struct { s *Service } +func NewDnsKeysService(s *Service) *DnsKeysService { + rs := &DnsKeysService{s: s} + return rs +} + +type DnsKeysService struct { + s *Service +} + +func NewManagedZoneOperationsService(s *Service) *ManagedZoneOperationsService { + rs := &ManagedZoneOperationsService{s: s} + return rs +} + +type ManagedZoneOperationsService struct { + s *Service +} + func NewManagedZonesService(s *Service) *ManagedZonesService { rs := &ManagedZonesService{s: s} return rs @@ -142,6 +166,9 @@ type Change struct { // only). Id string `json:"id,omitempty"` + // IsServing: If the DNS queries for the zone will be served. + IsServing bool `json:"isServing,omitempty"` + // Kind: Identifies what kind of resource this is. Value: the fixed // string "dns#change". Kind string `json:"kind,omitempty"` @@ -179,8 +206,8 @@ type Change struct { } func (s *Change) MarshalJSON() ([]byte, error) { - type noMethod Change - raw := noMethod(*s) + type NoMethod Change + raw := NoMethod(*s) return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) } @@ -190,6 +217,8 @@ type ChangesListResponse struct { // Changes: The requested changes. Changes []*Change `json:"changes,omitempty"` + Header *ResponseHeader `json:"header,omitempty"` + // Kind: Type of resource. Kind string `json:"kind,omitempty"` @@ -228,8 +257,246 @@ type ChangesListResponse struct { } func (s *ChangesListResponse) MarshalJSON() ([]byte, error) { - type noMethod ChangesListResponse - raw := noMethod(*s) + type NoMethod ChangesListResponse + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// DnsKey: A DNSSEC key pair. +type DnsKey struct { + // Algorithm: String mnemonic specifying the DNSSEC algorithm of this + // key. Immutable after creation time. + // + // Possible values: + // "ecdsap256sha256" + // "ecdsap384sha384" + // "rsasha1" + // "rsasha256" + // "rsasha512" + Algorithm string `json:"algorithm,omitempty"` + + // CreationTime: The time that this resource was created in the control + // plane. 
This is in RFC3339 text format. Output only. + CreationTime string `json:"creationTime,omitempty"` + + // Description: A mutable string of at most 1024 characters associated + // with this resource for the user's convenience. Has no effect on the + // resource's function. + Description string `json:"description,omitempty"` + + // Digests: Cryptographic hashes of the DNSKEY resource record + // associated with this DnsKey. These digests are needed to construct a + // DS record that points at this DNS key. Output only. + Digests []*DnsKeyDigest `json:"digests,omitempty"` + + // Id: Unique identifier for the resource; defined by the server (output + // only). + Id string `json:"id,omitempty"` + + // IsActive: Active keys will be used to sign subsequent changes to the + // ManagedZone. Inactive keys will still be present as DNSKEY Resource + // Records for the use of resolvers validating existing signatures. + IsActive bool `json:"isActive,omitempty"` + + // KeyLength: Length of the key in bits. Specified at creation time then + // immutable. + KeyLength int64 `json:"keyLength,omitempty"` + + // KeyTag: The key tag is a non-cryptographic hash of the a DNSKEY + // resource record associated with this DnsKey. The key tag can be used + // to identify a DNSKEY more quickly (but it is not a unique + // identifier). In particular, the key tag is used in a parent zone's DS + // record to point at the DNSKEY in this child ManagedZone. The key tag + // is a number in the range [0, 65535] and the algorithm to calculate it + // is specified in RFC4034 Appendix B. Output only. + KeyTag int64 `json:"keyTag,omitempty"` + + // Kind: Identifies what kind of resource this is. Value: the fixed + // string "dns#dnsKey". + Kind string `json:"kind,omitempty"` + + // PublicKey: Base64 encoded public half of this key. Output only. + PublicKey string `json:"publicKey,omitempty"` + + // Type: One of "KEY_SIGNING" or "ZONE_SIGNING". Keys of type + // KEY_SIGNING have the Secure Entry Point flag set and, when active, + // will be used to sign only resource record sets of type DNSKEY. + // Otherwise, the Secure Entry Point flag will be cleared and this key + // will be used to sign only resource record sets of other types. + // Immutable after creation time. + // + // Possible values: + // "keySigning" + // "zoneSigning" + Type string `json:"type,omitempty"` + + // ServerResponse contains the HTTP response code and headers from the + // server. + googleapi.ServerResponse `json:"-"` + + // ForceSendFields is a list of field names (e.g. "Algorithm") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Algorithm") to include in + // API requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. 
+ NullFields []string `json:"-"` +} + +func (s *DnsKey) MarshalJSON() ([]byte, error) { + type NoMethod DnsKey + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +type DnsKeyDigest struct { + // Digest: The base-16 encoded bytes of this digest. Suitable for use in + // a DS resource record. + Digest string `json:"digest,omitempty"` + + // Type: Specifies the algorithm used to calculate this digest. + // + // Possible values: + // "sha1" + // "sha256" + // "sha384" + Type string `json:"type,omitempty"` + + // ForceSendFields is a list of field names (e.g. "Digest") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Digest") to include in API + // requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *DnsKeyDigest) MarshalJSON() ([]byte, error) { + type NoMethod DnsKeyDigest + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// DnsKeySpec: Parameters for DnsKey key generation. Used for generating +// initial keys for a new ManagedZone and as default when adding a new +// DnsKey. +type DnsKeySpec struct { + // Algorithm: String mnemonic specifying the DNSSEC algorithm of this + // key. + // + // Possible values: + // "ecdsap256sha256" + // "ecdsap384sha384" + // "rsasha1" + // "rsasha256" + // "rsasha512" + Algorithm string `json:"algorithm,omitempty"` + + // KeyLength: Length of the keys in bits. + KeyLength int64 `json:"keyLength,omitempty"` + + // KeyType: One of "KEY_SIGNING" or "ZONE_SIGNING". Keys of type + // KEY_SIGNING have the Secure Entry Point flag set and, when active, + // will be used to sign only resource record sets of type DNSKEY. + // Otherwise, the Secure Entry Point flag will be cleared and this key + // will be used to sign only resource record sets of other types. + // + // Possible values: + // "keySigning" + // "zoneSigning" + KeyType string `json:"keyType,omitempty"` + + // Kind: Identifies what kind of resource this is. Value: the fixed + // string "dns#dnsKeySpec". + Kind string `json:"kind,omitempty"` + + // ForceSendFields is a list of field names (e.g. "Algorithm") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Algorithm") to include in + // API requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. 
It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *DnsKeySpec) MarshalJSON() ([]byte, error) { + type NoMethod DnsKeySpec + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// DnsKeysListResponse: The response to a request to enumerate DnsKeys +// in a ManagedZone. +type DnsKeysListResponse struct { + // DnsKeys: The requested resources. + DnsKeys []*DnsKey `json:"dnsKeys,omitempty"` + + Header *ResponseHeader `json:"header,omitempty"` + + // Kind: Type of resource. + Kind string `json:"kind,omitempty"` + + // NextPageToken: The presence of this field indicates that there exist + // more results following your last page of results in pagination order. + // To fetch them, make another list request using this value as your + // pagination token. + // + // In this way you can retrieve the complete contents of even very large + // collections one page at a time. However, if the contents of the + // collection change between the first and last paginated list request, + // the set of all elements returned will be an inconsistent view of the + // collection. There is no way to retrieve a "snapshot" of collections + // larger than the maximum page size. + NextPageToken string `json:"nextPageToken,omitempty"` + + // ServerResponse contains the HTTP response code and headers from the + // server. + googleapi.ServerResponse `json:"-"` + + // ForceSendFields is a list of field names (e.g. "DnsKeys") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "DnsKeys") to include in + // API requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *DnsKeysListResponse) MarshalJSON() ([]byte, error) { + type NoMethod DnsKeysListResponse + raw := NoMethod(*s) return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) } @@ -250,6 +517,9 @@ type ManagedZone struct { // "example.com.". DnsName string `json:"dnsName,omitempty"` + // DnssecConfig: DNSSEC configuration. + DnssecConfig *ManagedZoneDnsSecConfig `json:"dnssecConfig,omitempty"` + // Id: Unique identifier for the resource; defined by the server (output // only) Id uint64 `json:"id,omitempty,string"` @@ -258,6 +528,9 @@ type ManagedZone struct { // string "dns#managedZone". Kind string `json:"kind,omitempty"` + // Labels: User labels. + Labels map[string]string `json:"labels,omitempty"` + // Name: User assigned name for this resource. Must be unique within the // project. 
The name must be 1-63 characters long, must begin with a // letter, end with a letter or digit, and only contain lowercase @@ -295,12 +568,113 @@ type ManagedZone struct { } func (s *ManagedZone) MarshalJSON() ([]byte, error) { - type noMethod ManagedZone - raw := noMethod(*s) + type NoMethod ManagedZone + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +type ManagedZoneDnsSecConfig struct { + // DefaultKeySpecs: Specifies parameters that will be used for + // generating initial DnsKeys for this ManagedZone. Output only while + // state is not OFF. + DefaultKeySpecs []*DnsKeySpec `json:"defaultKeySpecs,omitempty"` + + // Kind: Identifies what kind of resource this is. Value: the fixed + // string "dns#managedZoneDnsSecConfig". + Kind string `json:"kind,omitempty"` + + // NonExistence: Specifies the mechanism used to provide authenticated + // denial-of-existence responses. Output only while state is not OFF. + // + // Possible values: + // "nsec" + // "nsec3" + NonExistence string `json:"nonExistence,omitempty"` + + // State: Specifies whether DNSSEC is enabled, and what mode it is in. + // + // Possible values: + // "off" + // "on" + // "transfer" + State string `json:"state,omitempty"` + + // ForceSendFields is a list of field names (e.g. "DefaultKeySpecs") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "DefaultKeySpecs") to + // include in API requests with the JSON null value. By default, fields + // with empty values are omitted from API requests. However, any field + // with an empty value appearing in NullFields will be sent to the + // server as null. It is an error if a field in this list has a + // non-empty value. This may be used to include null fields in Patch + // requests. + NullFields []string `json:"-"` +} + +func (s *ManagedZoneDnsSecConfig) MarshalJSON() ([]byte, error) { + type NoMethod ManagedZoneDnsSecConfig + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +type ManagedZoneOperationsListResponse struct { + Header *ResponseHeader `json:"header,omitempty"` + + // Kind: Type of resource. + Kind string `json:"kind,omitempty"` + + // NextPageToken: The presence of this field indicates that there exist + // more results following your last page of results in pagination order. + // To fetch them, make another list request using this value as your + // page token. + // + // In this way you can retrieve the complete contents of even very large + // collections one page at a time. However, if the contents of the + // collection change between the first and last paginated list request, + // the set of all elements returned will be an inconsistent view of the + // collection. There is no way to retrieve a consistent snapshot of a + // collection larger than the maximum page size. + NextPageToken string `json:"nextPageToken,omitempty"` + + // Operations: The operation resources. + Operations []*Operation `json:"operations,omitempty"` + + // ServerResponse contains the HTTP response code and headers from the + // server. 
+ googleapi.ServerResponse `json:"-"` + + // ForceSendFields is a list of field names (e.g. "Header") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Header") to include in API + // requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *ManagedZoneOperationsListResponse) MarshalJSON() ([]byte, error) { + type NoMethod ManagedZoneOperationsListResponse + raw := NoMethod(*s) return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) } type ManagedZonesListResponse struct { + Header *ResponseHeader `json:"header,omitempty"` + // Kind: Type of resource. Kind string `json:"kind,omitempty"` @@ -324,7 +698,7 @@ type ManagedZonesListResponse struct { // server. googleapi.ServerResponse `json:"-"` - // ForceSendFields is a list of field names (e.g. "Kind") to + // ForceSendFields is a list of field names (e.g. "Header") to // unconditionally include in API requests. By default, fields with // empty values are omitted from API requests. However, any non-pointer, // non-interface field appearing in ForceSendFields will be sent to the @@ -332,7 +706,7 @@ type ManagedZonesListResponse struct { // used to include empty fields in Patch requests. ForceSendFields []string `json:"-"` - // NullFields is a list of field names (e.g. "Kind") to include in API + // NullFields is a list of field names (e.g. "Header") to include in API // requests with the JSON null value. By default, fields with empty // values are omitted from API requests. However, any field with an // empty value appearing in NullFields will be sent to the server as @@ -342,8 +716,141 @@ type ManagedZonesListResponse struct { } func (s *ManagedZonesListResponse) MarshalJSON() ([]byte, error) { - type noMethod ManagedZonesListResponse - raw := noMethod(*s) + type NoMethod ManagedZonesListResponse + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// Operation: An operation represents a successful mutation performed on +// a Cloud DNS resource. Operations provide: - An audit log of server +// resource mutations. - A way to recover/retry API calls in the case +// where the response is never received by the caller. Use the caller +// specified client_operation_id. +type Operation struct { + // DnsKeyContext: Only populated if the operation targeted a DnsKey + // (output only). + DnsKeyContext *OperationDnsKeyContext `json:"dnsKeyContext,omitempty"` + + // Id: Unique identifier for the resource. This is the + // client_operation_id if the client specified it when the mutation was + // initiated, otherwise, it is generated by the server. The name must be + // 1-63 characters long and match the regular expression [-a-z0-9]? + // (output only) + Id string `json:"id,omitempty"` + + // Kind: Identifies what kind of resource this is. Value: the fixed + // string "dns#operation". 
+ Kind string `json:"kind,omitempty"` + + // StartTime: The time that this operation was started by the server. + // This is in RFC3339 text format (output only). + StartTime string `json:"startTime,omitempty"` + + // Status: Status of the operation. Can be one of the following: + // "PENDING" or "DONE" (output only). + // + // Possible values: + // "done" + // "pending" + Status string `json:"status,omitempty"` + + // Type: Type of the operation. Operations include insert, update, and + // delete (output only). + Type string `json:"type,omitempty"` + + // User: User who requested the operation, for example: + // user@example.com. cloud-dns-system for operations automatically done + // by the system. (output only) + User string `json:"user,omitempty"` + + // ZoneContext: Only populated if the operation targeted a ManagedZone + // (output only). + ZoneContext *OperationManagedZoneContext `json:"zoneContext,omitempty"` + + // ServerResponse contains the HTTP response code and headers from the + // server. + googleapi.ServerResponse `json:"-"` + + // ForceSendFields is a list of field names (e.g. "DnsKeyContext") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "DnsKeyContext") to include + // in API requests with the JSON null value. By default, fields with + // empty values are omitted from API requests. However, any field with + // an empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *Operation) MarshalJSON() ([]byte, error) { + type NoMethod Operation + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +type OperationDnsKeyContext struct { + // NewValue: The post-operation DnsKey resource. + NewValue *DnsKey `json:"newValue,omitempty"` + + // OldValue: The pre-operation DnsKey resource. + OldValue *DnsKey `json:"oldValue,omitempty"` + + // ForceSendFields is a list of field names (e.g. "NewValue") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "NewValue") to include in + // API requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. 
+ NullFields []string `json:"-"` +} + +func (s *OperationDnsKeyContext) MarshalJSON() ([]byte, error) { + type NoMethod OperationDnsKeyContext + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +type OperationManagedZoneContext struct { + // NewValue: The post-operation ManagedZone resource. + NewValue *ManagedZone `json:"newValue,omitempty"` + + // OldValue: The pre-operation ManagedZone resource. + OldValue *ManagedZone `json:"oldValue,omitempty"` + + // ForceSendFields is a list of field names (e.g. "NewValue") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "NewValue") to include in + // API requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *OperationManagedZoneContext) MarshalJSON() ([]byte, error) { + type NoMethod OperationManagedZoneContext + raw := NoMethod(*s) return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) } @@ -387,13 +894,17 @@ type Project struct { } func (s *Project) MarshalJSON() ([]byte, error) { - type noMethod Project - raw := noMethod(*s) + type NoMethod Project + raw := NoMethod(*s) return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) } // Quota: Limits associated with a Project. type Quota struct { + // DnsKeysPerManagedZone: Maximum allowed number of DnsKeys per + // ManagedZone. + DnsKeysPerManagedZone int64 `json:"dnsKeysPerManagedZone,omitempty"` + // Kind: Identifies what kind of resource this is. Value: the fixed // string "dns#quota". Kind string `json:"kind,omitempty"` @@ -421,26 +932,32 @@ type Quota struct { // one ChangesCreateRequest in bytes. TotalRrdataSizePerChange int64 `json:"totalRrdataSizePerChange,omitempty"` - // ForceSendFields is a list of field names (e.g. "Kind") to - // unconditionally include in API requests. By default, fields with - // empty values are omitted from API requests. However, any non-pointer, - // non-interface field appearing in ForceSendFields will be sent to the - // server regardless of whether the field is empty or not. This may be - // used to include empty fields in Patch requests. + // WhitelistedKeySpecs: DNSSEC algorithm and key length types that can + // be used for DnsKeys. + WhitelistedKeySpecs []*DnsKeySpec `json:"whitelistedKeySpecs,omitempty"` + + // ForceSendFields is a list of field names (e.g. + // "DnsKeysPerManagedZone") to unconditionally include in API requests. + // By default, fields with empty values are omitted from API requests. + // However, any non-pointer, non-interface field appearing in + // ForceSendFields will be sent to the server regardless of whether the + // field is empty or not. This may be used to include empty fields in + // Patch requests. ForceSendFields []string `json:"-"` - // NullFields is a list of field names (e.g. "Kind") to include in API - // requests with the JSON null value. 
By default, fields with empty - // values are omitted from API requests. However, any field with an - // empty value appearing in NullFields will be sent to the server as - // null. It is an error if a field in this list has a non-empty value. - // This may be used to include null fields in Patch requests. + // NullFields is a list of field names (e.g. "DnsKeysPerManagedZone") to + // include in API requests with the JSON null value. By default, fields + // with empty values are omitted from API requests. However, any field + // with an empty value appearing in NullFields will be sent to the + // server as null. It is an error if a field in this list has a + // non-empty value. This may be used to include null fields in Patch + // requests. NullFields []string `json:"-"` } func (s *Quota) MarshalJSON() ([]byte, error) { - type noMethod Quota - raw := noMethod(*s) + type NoMethod Quota + raw := NoMethod(*s) return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) } @@ -458,6 +975,9 @@ type ResourceRecordSet struct { // 3.6.1). Rrdatas []string `json:"rrdatas,omitempty"` + // SignatureRrdatas: As defined in RFC 4034 (section 3.2). + SignatureRrdatas []string `json:"signatureRrdatas,omitempty"` + // Ttl: Number of seconds that this ResourceRecordSet can be cached by // resolvers. Ttl int64 `json:"ttl,omitempty"` @@ -484,12 +1004,14 @@ type ResourceRecordSet struct { } func (s *ResourceRecordSet) MarshalJSON() ([]byte, error) { - type noMethod ResourceRecordSet - raw := noMethod(*s) + type NoMethod ResourceRecordSet + raw := NoMethod(*s) return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) } type ResourceRecordSetsListResponse struct { + Header *ResponseHeader `json:"header,omitempty"` + // Kind: Type of resource. Kind string `json:"kind,omitempty"` @@ -513,7 +1035,7 @@ type ResourceRecordSetsListResponse struct { // server. googleapi.ServerResponse `json:"-"` - // ForceSendFields is a list of field names (e.g. "Kind") to + // ForceSendFields is a list of field names (e.g. "Header") to // unconditionally include in API requests. By default, fields with // empty values are omitted from API requests. However, any non-pointer, // non-interface field appearing in ForceSendFields will be sent to the @@ -521,7 +1043,7 @@ type ResourceRecordSetsListResponse struct { // used to include empty fields in Patch requests. ForceSendFields []string `json:"-"` - // NullFields is a list of field names (e.g. "Kind") to include in API + // NullFields is a list of field names (e.g. "Header") to include in API // requests with the JSON null value. By default, fields with empty // values are omitted from API requests. However, any field with an // empty value appearing in NullFields will be sent to the server as @@ -531,8 +1053,38 @@ type ResourceRecordSetsListResponse struct { } func (s *ResourceRecordSetsListResponse) MarshalJSON() ([]byte, error) { - type noMethod ResourceRecordSetsListResponse - raw := noMethod(*s) + type NoMethod ResourceRecordSetsListResponse + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// ResponseHeader: Elements common to every response. +type ResponseHeader struct { + // OperationId: For mutating operation requests that completed + // successfully. This is the client_operation_id if the client specified + // it, otherwise it is generated by the server (output only). + OperationId string `json:"operationId,omitempty"` + + // ForceSendFields is a list of field names (e.g. 
"OperationId") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "OperationId") to include + // in API requests with the JSON null value. By default, fields with + // empty values are omitted from API requests. However, any field with + // an empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *ResponseHeader) MarshalJSON() ([]byte, error) { + type NoMethod ResponseHeader + raw := NoMethod(*s) return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) } @@ -557,6 +1109,15 @@ func (r *ChangesService) Create(project string, managedZone string, change *Chan return c } +// ClientOperationId sets the optional parameter "clientOperationId": +// For mutating operation requests only. An optional identifier +// specified by the client. Must be unique for operation resources in +// the Operations collection. +func (c *ChangesCreateCall) ClientOperationId(clientOperationId string) *ChangesCreateCall { + c.urlParams_.Set("clientOperationId", clientOperationId) + return c +} + // Fields allows partial responses to be retrieved. See // https://developers.google.com/gdata/docs/2.0/basics#PartialResponse // for more information. @@ -639,7 +1200,7 @@ func (c *ChangesCreateCall) Do(opts ...googleapi.CallOption) (*Change, error) { }, } target := &ret - if err := json.NewDecoder(res.Body).Decode(target); err != nil { + if err := gensupport.DecodeResponse(target, res); err != nil { return nil, err } return ret, nil @@ -652,6 +1213,11 @@ func (c *ChangesCreateCall) Do(opts ...googleapi.CallOption) (*Change, error) { // "managedZone" // ], // "parameters": { + // "clientOperationId": { + // "description": "For mutating operation requests only. An optional identifier specified by the client. Must be unique for operation resources in the Operations collection.", + // "location": "query", + // "type": "string" + // }, // "managedZone": { // "description": "Identifies the managed zone addressed by this request. Can be the managed zone name or id.", // "location": "path", @@ -702,6 +1268,15 @@ func (r *ChangesService) Get(project string, managedZone string, changeId string return c } +// ClientOperationId sets the optional parameter "clientOperationId": +// For mutating operation requests only. An optional identifier +// specified by the client. Must be unique for operation resources in +// the Operations collection. +func (c *ChangesGetCall) ClientOperationId(clientOperationId string) *ChangesGetCall { + c.urlParams_.Set("clientOperationId", clientOperationId) + return c +} + // Fields allows partial responses to be retrieved. See // https://developers.google.com/gdata/docs/2.0/basics#PartialResponse // for more information. 
@@ -793,7 +1368,7 @@ func (c *ChangesGetCall) Do(opts ...googleapi.CallOption) (*Change, error) { }, } target := &ret - if err := json.NewDecoder(res.Body).Decode(target); err != nil { + if err := gensupport.DecodeResponse(target, res); err != nil { return nil, err } return ret, nil @@ -813,6 +1388,11 @@ func (c *ChangesGetCall) Do(opts ...googleapi.CallOption) (*Change, error) { // "required": true, // "type": "string" // }, + // "clientOperationId": { + // "description": "For mutating operation requests only. An optional identifier specified by the client. Must be unique for operation resources in the Operations collection.", + // "location": "query", + // "type": "string" + // }, // "managedZone": { // "description": "Identifies the managed zone addressed by this request. Can be the managed zone name or id.", // "location": "path", @@ -983,7 +1563,7 @@ func (c *ChangesListCall) Do(opts ...googleapi.CallOption) (*ChangesListResponse }, } target := &ret - if err := json.NewDecoder(res.Body).Decode(target); err != nil { + if err := gensupport.DecodeResponse(target, res); err != nil { return nil, err } return ret, nil @@ -1072,6 +1652,804 @@ func (c *ChangesListCall) Pages(ctx context.Context, f func(*ChangesListResponse } } +// method id "dns.dnsKeys.get": + +type DnsKeysGetCall struct { + s *Service + project string + managedZone string + dnsKeyId string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// Get: Fetch the representation of an existing DnsKey. +func (r *DnsKeysService) Get(project string, managedZone string, dnsKeyId string) *DnsKeysGetCall { + c := &DnsKeysGetCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.project = project + c.managedZone = managedZone + c.dnsKeyId = dnsKeyId + return c +} + +// ClientOperationId sets the optional parameter "clientOperationId": +// For mutating operation requests only. An optional identifier +// specified by the client. Must be unique for operation resources in +// the Operations collection. +func (c *DnsKeysGetCall) ClientOperationId(clientOperationId string) *DnsKeysGetCall { + c.urlParams_.Set("clientOperationId", clientOperationId) + return c +} + +// DigestType sets the optional parameter "digestType": An optional +// comma-separated list of digest types to compute and display for key +// signing keys. If omitted, the recommended digest type will be +// computed and displayed. +func (c *DnsKeysGetCall) DigestType(digestType string) *DnsKeysGetCall { + c.urlParams_.Set("digestType", digestType) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *DnsKeysGetCall) Fields(s ...googleapi.Field) *DnsKeysGetCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *DnsKeysGetCall) IfNoneMatch(entityTag string) *DnsKeysGetCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. 
+func (c *DnsKeysGetCall) Context(ctx context.Context) *DnsKeysGetCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *DnsKeysGetCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *DnsKeysGetCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/managedZones/{managedZone}/dnsKeys/{dnsKeyId}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "project": c.project, + "managedZone": c.managedZone, + "dnsKeyId": c.dnsKeyId, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "dns.dnsKeys.get" call. +// Exactly one of *DnsKey or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *DnsKey.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *DnsKeysGetCall) Do(opts ...googleapi.CallOption) (*DnsKey, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &DnsKey{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Fetch the representation of an existing DnsKey.", + // "httpMethod": "GET", + // "id": "dns.dnsKeys.get", + // "parameterOrder": [ + // "project", + // "managedZone", + // "dnsKeyId" + // ], + // "parameters": { + // "clientOperationId": { + // "description": "For mutating operation requests only. An optional identifier specified by the client. Must be unique for operation resources in the Operations collection.", + // "location": "query", + // "type": "string" + // }, + // "digestType": { + // "description": "An optional comma-separated list of digest types to compute and display for key signing keys. If omitted, the recommended digest type will be computed and displayed.", + // "location": "query", + // "type": "string" + // }, + // "dnsKeyId": { + // "description": "The identifier of the requested DnsKey.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "managedZone": { + // "description": "Identifies the managed zone addressed by this request. 
Can be the managed zone name or id.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "project": { + // "description": "Identifies the project addressed by this request.", + // "location": "path", + // "required": true, + // "type": "string" + // } + // }, + // "path": "{project}/managedZones/{managedZone}/dnsKeys/{dnsKeyId}", + // "response": { + // "$ref": "DnsKey" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/cloud-platform.read-only", + // "https://www.googleapis.com/auth/ndev.clouddns.readonly", + // "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + // ] + // } + +} + +// method id "dns.dnsKeys.list": + +type DnsKeysListCall struct { + s *Service + project string + managedZone string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// List: Enumerate DnsKeys to a ResourceRecordSet collection. +func (r *DnsKeysService) List(project string, managedZone string) *DnsKeysListCall { + c := &DnsKeysListCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.project = project + c.managedZone = managedZone + return c +} + +// DigestType sets the optional parameter "digestType": An optional +// comma-separated list of digest types to compute and display for key +// signing keys. If omitted, the recommended digest type will be +// computed and displayed. +func (c *DnsKeysListCall) DigestType(digestType string) *DnsKeysListCall { + c.urlParams_.Set("digestType", digestType) + return c +} + +// MaxResults sets the optional parameter "maxResults": Maximum number +// of results to be returned. If unspecified, the server will decide how +// many results to return. +func (c *DnsKeysListCall) MaxResults(maxResults int64) *DnsKeysListCall { + c.urlParams_.Set("maxResults", fmt.Sprint(maxResults)) + return c +} + +// PageToken sets the optional parameter "pageToken": A tag returned by +// a previous list request that was truncated. Use this parameter to +// continue a previous list request. +func (c *DnsKeysListCall) PageToken(pageToken string) *DnsKeysListCall { + c.urlParams_.Set("pageToken", pageToken) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *DnsKeysListCall) Fields(s ...googleapi.Field) *DnsKeysListCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *DnsKeysListCall) IfNoneMatch(entityTag string) *DnsKeysListCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *DnsKeysListCall) Context(ctx context.Context) *DnsKeysListCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. 
+func (c *DnsKeysListCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *DnsKeysListCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/managedZones/{managedZone}/dnsKeys") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "project": c.project, + "managedZone": c.managedZone, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "dns.dnsKeys.list" call. +// Exactly one of *DnsKeysListResponse or error will be non-nil. Any +// non-2xx status code is an error. Response headers are in either +// *DnsKeysListResponse.ServerResponse.Header or (if a response was +// returned at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. +func (c *DnsKeysListCall) Do(opts ...googleapi.CallOption) (*DnsKeysListResponse, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &DnsKeysListResponse{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Enumerate DnsKeys to a ResourceRecordSet collection.", + // "httpMethod": "GET", + // "id": "dns.dnsKeys.list", + // "parameterOrder": [ + // "project", + // "managedZone" + // ], + // "parameters": { + // "digestType": { + // "description": "An optional comma-separated list of digest types to compute and display for key signing keys. If omitted, the recommended digest type will be computed and displayed.", + // "location": "query", + // "type": "string" + // }, + // "managedZone": { + // "description": "Identifies the managed zone addressed by this request. Can be the managed zone name or id.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "maxResults": { + // "description": "Optional. Maximum number of results to be returned. If unspecified, the server will decide how many results to return.", + // "format": "int32", + // "location": "query", + // "type": "integer" + // }, + // "pageToken": { + // "description": "Optional. A tag returned by a previous list request that was truncated. 
Use this parameter to continue a previous list request.", + // "location": "query", + // "type": "string" + // }, + // "project": { + // "description": "Identifies the project addressed by this request.", + // "location": "path", + // "required": true, + // "type": "string" + // } + // }, + // "path": "{project}/managedZones/{managedZone}/dnsKeys", + // "response": { + // "$ref": "DnsKeysListResponse" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/cloud-platform.read-only", + // "https://www.googleapis.com/auth/ndev.clouddns.readonly", + // "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + // ] + // } + +} + +// Pages invokes f for each page of results. +// A non-nil error returned from f will halt the iteration. +// The provided context supersedes any context provided to the Context method. +func (c *DnsKeysListCall) Pages(ctx context.Context, f func(*DnsKeysListResponse) error) error { + c.ctx_ = ctx + defer c.PageToken(c.urlParams_.Get("pageToken")) // reset paging to original point + for { + x, err := c.Do() + if err != nil { + return err + } + if err := f(x); err != nil { + return err + } + if x.NextPageToken == "" { + return nil + } + c.PageToken(x.NextPageToken) + } +} + +// method id "dns.managedZoneOperations.get": + +type ManagedZoneOperationsGetCall struct { + s *Service + project string + managedZone string + operation string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// Get: Fetch the representation of an existing Operation. +func (r *ManagedZoneOperationsService) Get(project string, managedZone string, operation string) *ManagedZoneOperationsGetCall { + c := &ManagedZoneOperationsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.project = project + c.managedZone = managedZone + c.operation = operation + return c +} + +// ClientOperationId sets the optional parameter "clientOperationId": +// For mutating operation requests only. An optional identifier +// specified by the client. Must be unique for operation resources in +// the Operations collection. +func (c *ManagedZoneOperationsGetCall) ClientOperationId(clientOperationId string) *ManagedZoneOperationsGetCall { + c.urlParams_.Set("clientOperationId", clientOperationId) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ManagedZoneOperationsGetCall) Fields(s ...googleapi.Field) *ManagedZoneOperationsGetCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *ManagedZoneOperationsGetCall) IfNoneMatch(entityTag string) *ManagedZoneOperationsGetCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ManagedZoneOperationsGetCall) Context(ctx context.Context) *ManagedZoneOperationsGetCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. 
+func (c *ManagedZoneOperationsGetCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ManagedZoneOperationsGetCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/managedZones/{managedZone}/operations/{operation}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "project": c.project, + "managedZone": c.managedZone, + "operation": c.operation, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "dns.managedZoneOperations.get" call. +// Exactly one of *Operation or error will be non-nil. Any non-2xx +// status code is an error. Response headers are in either +// *Operation.ServerResponse.Header or (if a response was returned at +// all) in error.(*googleapi.Error).Header. Use googleapi.IsNotModified +// to check whether the returned error was because +// http.StatusNotModified was returned. +func (c *ManagedZoneOperationsGetCall) Do(opts ...googleapi.CallOption) (*Operation, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Operation{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Fetch the representation of an existing Operation.", + // "httpMethod": "GET", + // "id": "dns.managedZoneOperations.get", + // "parameterOrder": [ + // "project", + // "managedZone", + // "operation" + // ], + // "parameters": { + // "clientOperationId": { + // "description": "For mutating operation requests only. An optional identifier specified by the client. 
Must be unique for operation resources in the Operations collection.", + // "location": "query", + // "type": "string" + // }, + // "managedZone": { + // "description": "Identifies the managed zone addressed by this request.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "operation": { + // "description": "Identifies the operation addressed by this request.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "project": { + // "description": "Identifies the project addressed by this request.", + // "location": "path", + // "required": true, + // "type": "string" + // } + // }, + // "path": "{project}/managedZones/{managedZone}/operations/{operation}", + // "response": { + // "$ref": "Operation" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/cloud-platform.read-only", + // "https://www.googleapis.com/auth/ndev.clouddns.readonly", + // "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + // ] + // } + +} + +// method id "dns.managedZoneOperations.list": + +type ManagedZoneOperationsListCall struct { + s *Service + project string + managedZone string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// List: Enumerate Operations for the given ManagedZone. +func (r *ManagedZoneOperationsService) List(project string, managedZone string) *ManagedZoneOperationsListCall { + c := &ManagedZoneOperationsListCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.project = project + c.managedZone = managedZone + return c +} + +// MaxResults sets the optional parameter "maxResults": Maximum number +// of results to be returned. If unspecified, the server will decide how +// many results to return. +func (c *ManagedZoneOperationsListCall) MaxResults(maxResults int64) *ManagedZoneOperationsListCall { + c.urlParams_.Set("maxResults", fmt.Sprint(maxResults)) + return c +} + +// PageToken sets the optional parameter "pageToken": A tag returned by +// a previous list request that was truncated. Use this parameter to +// continue a previous list request. +func (c *ManagedZoneOperationsListCall) PageToken(pageToken string) *ManagedZoneOperationsListCall { + c.urlParams_.Set("pageToken", pageToken) + return c +} + +// SortBy sets the optional parameter "sortBy": Sorting criterion. The +// only supported values are START_TIME and ID. +// +// Possible values: +// "id" +// "startTime" (default) +func (c *ManagedZoneOperationsListCall) SortBy(sortBy string) *ManagedZoneOperationsListCall { + c.urlParams_.Set("sortBy", sortBy) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ManagedZoneOperationsListCall) Fields(s ...googleapi.Field) *ManagedZoneOperationsListCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *ManagedZoneOperationsListCall) IfNoneMatch(entityTag string) *ManagedZoneOperationsListCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. 
Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ManagedZoneOperationsListCall) Context(ctx context.Context) *ManagedZoneOperationsListCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *ManagedZoneOperationsListCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ManagedZoneOperationsListCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/managedZones/{managedZone}/operations") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "project": c.project, + "managedZone": c.managedZone, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "dns.managedZoneOperations.list" call. +// Exactly one of *ManagedZoneOperationsListResponse or error will be +// non-nil. Any non-2xx status code is an error. Response headers are in +// either *ManagedZoneOperationsListResponse.ServerResponse.Header or +// (if a response was returned at all) in +// error.(*googleapi.Error).Header. Use googleapi.IsNotModified to check +// whether the returned error was because http.StatusNotModified was +// returned. +func (c *ManagedZoneOperationsListCall) Do(opts ...googleapi.CallOption) (*ManagedZoneOperationsListResponse, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &ManagedZoneOperationsListResponse{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Enumerate Operations for the given ManagedZone.", + // "httpMethod": "GET", + // "id": "dns.managedZoneOperations.list", + // "parameterOrder": [ + // "project", + // "managedZone" + // ], + // "parameters": { + // "managedZone": { + // "description": "Identifies the managed zone addressed by this request.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "maxResults": { + // "description": "Optional. Maximum number of results to be returned. If unspecified, the server will decide how many results to return.", + // "format": "int32", + // "location": "query", + // "type": "integer" + // }, + // "pageToken": { + // "description": "Optional. A tag returned by a previous list request that was truncated. 
Use this parameter to continue a previous list request.", + // "location": "query", + // "type": "string" + // }, + // "project": { + // "description": "Identifies the project addressed by this request.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "sortBy": { + // "default": "startTime", + // "description": "Sorting criterion. The only supported values are START_TIME and ID.", + // "enum": [ + // "id", + // "startTime" + // ], + // "enumDescriptions": [ + // "", + // "" + // ], + // "location": "query", + // "type": "string" + // } + // }, + // "path": "{project}/managedZones/{managedZone}/operations", + // "response": { + // "$ref": "ManagedZoneOperationsListResponse" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/cloud-platform.read-only", + // "https://www.googleapis.com/auth/ndev.clouddns.readonly", + // "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + // ] + // } + +} + +// Pages invokes f for each page of results. +// A non-nil error returned from f will halt the iteration. +// The provided context supersedes any context provided to the Context method. +func (c *ManagedZoneOperationsListCall) Pages(ctx context.Context, f func(*ManagedZoneOperationsListResponse) error) error { + c.ctx_ = ctx + defer c.PageToken(c.urlParams_.Get("pageToken")) // reset paging to original point + for { + x, err := c.Do() + if err != nil { + return err + } + if err := f(x); err != nil { + return err + } + if x.NextPageToken == "" { + return nil + } + c.PageToken(x.NextPageToken) + } +} + // method id "dns.managedZones.create": type ManagedZonesCreateCall struct { @@ -1091,6 +2469,15 @@ func (r *ManagedZonesService) Create(project string, managedzone *ManagedZone) * return c } +// ClientOperationId sets the optional parameter "clientOperationId": +// For mutating operation requests only. An optional identifier +// specified by the client. Must be unique for operation resources in +// the Operations collection. +func (c *ManagedZonesCreateCall) ClientOperationId(clientOperationId string) *ManagedZonesCreateCall { + c.urlParams_.Set("clientOperationId", clientOperationId) + return c +} + // Fields allows partial responses to be retrieved. See // https://developers.google.com/gdata/docs/2.0/basics#PartialResponse // for more information. @@ -1172,7 +2559,7 @@ func (c *ManagedZonesCreateCall) Do(opts ...googleapi.CallOption) (*ManagedZone, }, } target := &ret - if err := json.NewDecoder(res.Body).Decode(target); err != nil { + if err := gensupport.DecodeResponse(target, res); err != nil { return nil, err } return ret, nil @@ -1184,6 +2571,11 @@ func (c *ManagedZonesCreateCall) Do(opts ...googleapi.CallOption) (*ManagedZone, // "project" // ], // "parameters": { + // "clientOperationId": { + // "description": "For mutating operation requests only. An optional identifier specified by the client. Must be unique for operation resources in the Operations collection.", + // "location": "query", + // "type": "string" + // }, // "project": { // "description": "Identifies the project addressed by this request.", // "location": "path", @@ -1225,6 +2617,15 @@ func (r *ManagedZonesService) Delete(project string, managedZone string) *Manage return c } +// ClientOperationId sets the optional parameter "clientOperationId": +// For mutating operation requests only. An optional identifier +// specified by the client. Must be unique for operation resources in +// the Operations collection. 
+func (c *ManagedZonesDeleteCall) ClientOperationId(clientOperationId string) *ManagedZonesDeleteCall { + c.urlParams_.Set("clientOperationId", clientOperationId) + return c +} + // Fields allows partial responses to be retrieved. See // https://developers.google.com/gdata/docs/2.0/basics#PartialResponse // for more information. @@ -1290,6 +2691,11 @@ func (c *ManagedZonesDeleteCall) Do(opts ...googleapi.CallOption) error { // "managedZone" // ], // "parameters": { + // "clientOperationId": { + // "description": "For mutating operation requests only. An optional identifier specified by the client. Must be unique for operation resources in the Operations collection.", + // "location": "query", + // "type": "string" + // }, // "managedZone": { // "description": "Identifies the managed zone addressed by this request. Can be the managed zone name or id.", // "location": "path", @@ -1332,6 +2738,15 @@ func (r *ManagedZonesService) Get(project string, managedZone string) *ManagedZo return c } +// ClientOperationId sets the optional parameter "clientOperationId": +// For mutating operation requests only. An optional identifier +// specified by the client. Must be unique for operation resources in +// the Operations collection. +func (c *ManagedZonesGetCall) ClientOperationId(clientOperationId string) *ManagedZonesGetCall { + c.urlParams_.Set("clientOperationId", clientOperationId) + return c +} + // Fields allows partial responses to be retrieved. See // https://developers.google.com/gdata/docs/2.0/basics#PartialResponse // for more information. @@ -1422,7 +2837,7 @@ func (c *ManagedZonesGetCall) Do(opts ...googleapi.CallOption) (*ManagedZone, er }, } target := &ret - if err := json.NewDecoder(res.Body).Decode(target); err != nil { + if err := gensupport.DecodeResponse(target, res); err != nil { return nil, err } return ret, nil @@ -1435,6 +2850,11 @@ func (c *ManagedZonesGetCall) Do(opts ...googleapi.CallOption) (*ManagedZone, er // "managedZone" // ], // "parameters": { + // "clientOperationId": { + // "description": "For mutating operation requests only. An optional identifier specified by the client. Must be unique for operation resources in the Operations collection.", + // "location": "query", + // "type": "string" + // }, // "managedZone": { // "description": "Identifies the managed zone addressed by this request. Can be the managed zone name or id.", // "location": "path", @@ -1593,7 +3013,7 @@ func (c *ManagedZonesListCall) Do(opts ...googleapi.CallOption) (*ManagedZonesLi }, } target := &ret - if err := json.NewDecoder(res.Body).Decode(target); err != nil { + if err := gensupport.DecodeResponse(target, res); err != nil { return nil, err } return ret, nil @@ -1663,6 +3083,322 @@ func (c *ManagedZonesListCall) Pages(ctx context.Context, f func(*ManagedZonesLi } } +// method id "dns.managedZones.patch": + +type ManagedZonesPatchCall struct { + s *Service + project string + managedZone string + managedzone *ManagedZone + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Patch: Apply a partial update to an existing ManagedZone. +func (r *ManagedZonesService) Patch(project string, managedZone string, managedzone *ManagedZone) *ManagedZonesPatchCall { + c := &ManagedZonesPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.project = project + c.managedZone = managedZone + c.managedzone = managedzone + return c +} + +// ClientOperationId sets the optional parameter "clientOperationId": +// For mutating operation requests only. 
An optional identifier +// specified by the client. Must be unique for operation resources in +// the Operations collection. +func (c *ManagedZonesPatchCall) ClientOperationId(clientOperationId string) *ManagedZonesPatchCall { + c.urlParams_.Set("clientOperationId", clientOperationId) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ManagedZonesPatchCall) Fields(s ...googleapi.Field) *ManagedZonesPatchCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ManagedZonesPatchCall) Context(ctx context.Context) *ManagedZonesPatchCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *ManagedZonesPatchCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ManagedZonesPatchCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.managedzone) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/managedZones/{managedZone}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("PATCH", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "project": c.project, + "managedZone": c.managedZone, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "dns.managedZones.patch" call. +// Exactly one of *Operation or error will be non-nil. Any non-2xx +// status code is an error. Response headers are in either +// *Operation.ServerResponse.Header or (if a response was returned at +// all) in error.(*googleapi.Error).Header. Use googleapi.IsNotModified +// to check whether the returned error was because +// http.StatusNotModified was returned. +func (c *ManagedZonesPatchCall) Do(opts ...googleapi.CallOption) (*Operation, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Operation{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Apply a partial update to an existing ManagedZone.", + // "httpMethod": "PATCH", + // "id": "dns.managedZones.patch", + // "parameterOrder": [ + // "project", + // "managedZone" + // ], + // "parameters": { + // "clientOperationId": { + // "description": "For mutating operation requests only. An optional identifier specified by the client. 
Must be unique for operation resources in the Operations collection.", + // "location": "query", + // "type": "string" + // }, + // "managedZone": { + // "description": "Identifies the managed zone addressed by this request. Can be the managed zone name or id.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "project": { + // "description": "Identifies the project addressed by this request.", + // "location": "path", + // "required": true, + // "type": "string" + // } + // }, + // "path": "{project}/managedZones/{managedZone}", + // "request": { + // "$ref": "ManagedZone" + // }, + // "response": { + // "$ref": "Operation" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + // ] + // } + +} + +// method id "dns.managedZones.update": + +type ManagedZonesUpdateCall struct { + s *Service + project string + managedZone string + managedzone *ManagedZone + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Update: Update an existing ManagedZone. +func (r *ManagedZonesService) Update(project string, managedZone string, managedzone *ManagedZone) *ManagedZonesUpdateCall { + c := &ManagedZonesUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.project = project + c.managedZone = managedZone + c.managedzone = managedzone + return c +} + +// ClientOperationId sets the optional parameter "clientOperationId": +// For mutating operation requests only. An optional identifier +// specified by the client. Must be unique for operation resources in +// the Operations collection. +func (c *ManagedZonesUpdateCall) ClientOperationId(clientOperationId string) *ManagedZonesUpdateCall { + c.urlParams_.Set("clientOperationId", clientOperationId) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ManagedZonesUpdateCall) Fields(s ...googleapi.Field) *ManagedZonesUpdateCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ManagedZonesUpdateCall) Context(ctx context.Context) *ManagedZonesUpdateCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *ManagedZonesUpdateCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ManagedZonesUpdateCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.managedzone) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/managedZones/{managedZone}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("PUT", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "project": c.project, + "managedZone": c.managedZone, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "dns.managedZones.update" call. 
+// Exactly one of *Operation or error will be non-nil. Any non-2xx +// status code is an error. Response headers are in either +// *Operation.ServerResponse.Header or (if a response was returned at +// all) in error.(*googleapi.Error).Header. Use googleapi.IsNotModified +// to check whether the returned error was because +// http.StatusNotModified was returned. +func (c *ManagedZonesUpdateCall) Do(opts ...googleapi.CallOption) (*Operation, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Operation{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Update an existing ManagedZone.", + // "httpMethod": "PUT", + // "id": "dns.managedZones.update", + // "parameterOrder": [ + // "project", + // "managedZone" + // ], + // "parameters": { + // "clientOperationId": { + // "description": "For mutating operation requests only. An optional identifier specified by the client. Must be unique for operation resources in the Operations collection.", + // "location": "query", + // "type": "string" + // }, + // "managedZone": { + // "description": "Identifies the managed zone addressed by this request. Can be the managed zone name or id.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "project": { + // "description": "Identifies the project addressed by this request.", + // "location": "path", + // "required": true, + // "type": "string" + // } + // }, + // "path": "{project}/managedZones/{managedZone}", + // "request": { + // "$ref": "ManagedZone" + // }, + // "response": { + // "$ref": "Operation" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/ndev.clouddns.readwrite" + // ] + // } + +} + // method id "dns.projects.get": type ProjectsGetCall struct { @@ -1681,6 +3417,15 @@ func (r *ProjectsService) Get(project string) *ProjectsGetCall { return c } +// ClientOperationId sets the optional parameter "clientOperationId": +// For mutating operation requests only. An optional identifier +// specified by the client. Must be unique for operation resources in +// the Operations collection. +func (c *ProjectsGetCall) ClientOperationId(clientOperationId string) *ProjectsGetCall { + c.urlParams_.Set("clientOperationId", clientOperationId) + return c +} + // Fields allows partial responses to be retrieved. See // https://developers.google.com/gdata/docs/2.0/basics#PartialResponse // for more information. @@ -1770,7 +3515,7 @@ func (c *ProjectsGetCall) Do(opts ...googleapi.CallOption) (*Project, error) { }, } target := &ret - if err := json.NewDecoder(res.Body).Decode(target); err != nil { + if err := gensupport.DecodeResponse(target, res); err != nil { return nil, err } return ret, nil @@ -1782,6 +3527,11 @@ func (c *ProjectsGetCall) Do(opts ...googleapi.CallOption) (*Project, error) { // "project" // ], // "parameters": { + // "clientOperationId": { + // "description": "For mutating operation requests only. 
An optional identifier specified by the client. Must be unique for operation resources in the Operations collection.", + // "location": "query", + // "type": "string" + // }, // "project": { // "description": "Identifies the project addressed by this request.", // "location": "path", @@ -1945,7 +3695,7 @@ func (c *ResourceRecordSetsListCall) Do(opts ...googleapi.CallOption) (*Resource }, } target := &ret - if err := json.NewDecoder(res.Body).Decode(target); err != nil { + if err := gensupport.DecodeResponse(target, res); err != nil { return nil, err } return ret, nil diff --git a/vendor/vendor.json b/vendor/vendor.json index 3296411b..5f14858f 100644 --- a/vendor/vendor.json +++ b/vendor/vendor.json @@ -1346,10 +1346,10 @@ "revisionTime": "2017-07-18T13:06:16Z" }, { - "checksumSHA1": "JYl35km48fLrIx7YUtzcgd4J7Rk=", + "checksumSHA1": "UyrBKKpY9lX1LW5SpqJ9QKOAOjk=", "path": "google.golang.org/api/dns/v1", - "revision": "3cc2e591b550923a2c5f0ab5a803feda924d5823", - "revisionTime": "2016-11-27T23:54:21Z" + "revision": "0ad5a633fea1d4b64bf5e6a01e30d1fc466038e5", + "revisionTime": "2018-09-04T00:04:47Z" }, { "checksumSHA1": "nU4Iv1WFYka13VAT8ffBzgguGZ0=", From 404e7297f059bff9ccb4a37f5129588b3edd6042 Mon Sep 17 00:00:00 2001 From: Chris Stephens Date: Wed, 5 Sep 2018 14:22:13 -0700 Subject: [PATCH 17/31] Remove vendor code for UUIDs This doesn't appear to be used anywhere within this project. Additionally Hashicorp has their own [uuid library](https://github.com/hashicorp/go-uuid) which is used internally. --- vendor/github.com/satori/go.uuid/LICENSE | 20 - vendor/github.com/satori/go.uuid/README.md | 65 --- vendor/github.com/satori/go.uuid/uuid.go | 481 --------------------- vendor/vendor.json | 6 - 4 files changed, 572 deletions(-) delete mode 100644 vendor/github.com/satori/go.uuid/LICENSE delete mode 100644 vendor/github.com/satori/go.uuid/README.md delete mode 100644 vendor/github.com/satori/go.uuid/uuid.go diff --git a/vendor/github.com/satori/go.uuid/LICENSE b/vendor/github.com/satori/go.uuid/LICENSE deleted file mode 100644 index 488357b8..00000000 --- a/vendor/github.com/satori/go.uuid/LICENSE +++ /dev/null @@ -1,20 +0,0 @@ -Copyright (C) 2013-2016 by Maxim Bublis - -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal in the Software without restriction, including -without limitation the rights to use, copy, modify, merge, publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is furnished to do so, subject to -the following conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
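As a point of reference for the `hashicorp/go-uuid` library mentioned in the commit message above, here is a minimal sketch of generating a random UUID with it, assuming its exported `GenerateUUID` helper — an illustration of the suggested alternative, not part of this patch.

```go
package main

import (
	"fmt"
	"log"

	uuid "github.com/hashicorp/go-uuid"
)

func main() {
	// GenerateUUID returns a random, UUID-formatted string, reporting a
	// failure of the system's random source as an error rather than panicking.
	id, err := uuid.GenerateUUID()
	if err != nil {
		log.Fatalf("failed to generate UUID: %v", err)
	}
	fmt.Println(id)
}
```

Unlike the removed `satori/go.uuid` code below, whose `safeRandom` helper panics if `rand.Read` fails, `go-uuid` surfaces that failure as an error for the caller to handle.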
diff --git a/vendor/github.com/satori/go.uuid/README.md b/vendor/github.com/satori/go.uuid/README.md deleted file mode 100644 index b6aad1c8..00000000 --- a/vendor/github.com/satori/go.uuid/README.md +++ /dev/null @@ -1,65 +0,0 @@ -# UUID package for Go language - -[![Build Status](https://travis-ci.org/satori/go.uuid.png?branch=master)](https://travis-ci.org/satori/go.uuid) -[![Coverage Status](https://coveralls.io/repos/github/satori/go.uuid/badge.svg?branch=master)](https://coveralls.io/github/satori/go.uuid) -[![GoDoc](http://godoc.org/github.com/satori/go.uuid?status.png)](http://godoc.org/github.com/satori/go.uuid) - -This package provides pure Go implementation of Universally Unique Identifier (UUID). Supported both creation and parsing of UUIDs. - -With 100% test coverage and benchmarks out of box. - -Supported versions: -* Version 1, based on timestamp and MAC address (RFC 4122) -* Version 2, based on timestamp, MAC address and POSIX UID/GID (DCE 1.1) -* Version 3, based on MD5 hashing (RFC 4122) -* Version 4, based on random numbers (RFC 4122) -* Version 5, based on SHA-1 hashing (RFC 4122) - -## Installation - -Use the `go` command: - - $ go get github.com/satori/go.uuid - -## Requirements - -UUID package requires Go >= 1.2. - -## Example - -```go -package main - -import ( - "fmt" - "github.com/satori/go.uuid" -) - -func main() { - // Creating UUID Version 4 - u1 := uuid.NewV4() - fmt.Printf("UUIDv4: %s\n", u1) - - // Parsing UUID from string input - u2, err := uuid.FromString("6ba7b810-9dad-11d1-80b4-00c04fd430c8") - if err != nil { - fmt.Printf("Something gone wrong: %s", err) - } - fmt.Printf("Successfully parsed: %s", u2) -} -``` - -## Documentation - -[Documentation](http://godoc.org/github.com/satori/go.uuid) is hosted at GoDoc project. - -## Links -* [RFC 4122](http://tools.ietf.org/html/rfc4122) -* [DCE 1.1: Authentication and Security Services](http://pubs.opengroup.org/onlinepubs/9696989899/chap5.htm#tagcjh_08_02_01_01) - -## Copyright - -Copyright (C) 2013-2016 by Maxim Bublis . - -UUID package released under MIT License. -See [LICENSE](https://github.com/satori/go.uuid/blob/master/LICENSE) for details. diff --git a/vendor/github.com/satori/go.uuid/uuid.go b/vendor/github.com/satori/go.uuid/uuid.go deleted file mode 100644 index 295f3fc2..00000000 --- a/vendor/github.com/satori/go.uuid/uuid.go +++ /dev/null @@ -1,481 +0,0 @@ -// Copyright (C) 2013-2015 by Maxim Bublis -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- -// Package uuid provides implementation of Universally Unique Identifier (UUID). -// Supported versions are 1, 3, 4 and 5 (as specified in RFC 4122) and -// version 2 (as specified in DCE 1.1). -package uuid - -import ( - "bytes" - "crypto/md5" - "crypto/rand" - "crypto/sha1" - "database/sql/driver" - "encoding/binary" - "encoding/hex" - "fmt" - "hash" - "net" - "os" - "sync" - "time" -) - -// UUID layout variants. -const ( - VariantNCS = iota - VariantRFC4122 - VariantMicrosoft - VariantFuture -) - -// UUID DCE domains. -const ( - DomainPerson = iota - DomainGroup - DomainOrg -) - -// Difference in 100-nanosecond intervals between -// UUID epoch (October 15, 1582) and Unix epoch (January 1, 1970). -const epochStart = 122192928000000000 - -// Used in string method conversion -const dash byte = '-' - -// UUID v1/v2 storage. -var ( - storageMutex sync.Mutex - storageOnce sync.Once - epochFunc = unixTimeFunc - clockSequence uint16 - lastTime uint64 - hardwareAddr [6]byte - posixUID = uint32(os.Getuid()) - posixGID = uint32(os.Getgid()) -) - -// String parse helpers. -var ( - urnPrefix = []byte("urn:uuid:") - byteGroups = []int{8, 4, 4, 4, 12} -) - -func initClockSequence() { - buf := make([]byte, 2) - safeRandom(buf) - clockSequence = binary.BigEndian.Uint16(buf) -} - -func initHardwareAddr() { - interfaces, err := net.Interfaces() - if err == nil { - for _, iface := range interfaces { - if len(iface.HardwareAddr) >= 6 { - copy(hardwareAddr[:], iface.HardwareAddr) - return - } - } - } - - // Initialize hardwareAddr randomly in case - // of real network interfaces absence - safeRandom(hardwareAddr[:]) - - // Set multicast bit as recommended in RFC 4122 - hardwareAddr[0] |= 0x01 -} - -func initStorage() { - initClockSequence() - initHardwareAddr() -} - -func safeRandom(dest []byte) { - if _, err := rand.Read(dest); err != nil { - panic(err) - } -} - -// Returns difference in 100-nanosecond intervals between -// UUID epoch (October 15, 1582) and current time. -// This is default epoch calculation function. -func unixTimeFunc() uint64 { - return epochStart + uint64(time.Now().UnixNano()/100) -} - -// UUID representation compliant with specification -// described in RFC 4122. -type UUID [16]byte - -// NullUUID can be used with the standard sql package to represent a -// UUID value that can be NULL in the database -type NullUUID struct { - UUID UUID - Valid bool -} - -// The nil UUID is special form of UUID that is specified to have all -// 128 bits set to zero. -var Nil = UUID{} - -// Predefined namespace UUIDs. -var ( - NamespaceDNS, _ = FromString("6ba7b810-9dad-11d1-80b4-00c04fd430c8") - NamespaceURL, _ = FromString("6ba7b811-9dad-11d1-80b4-00c04fd430c8") - NamespaceOID, _ = FromString("6ba7b812-9dad-11d1-80b4-00c04fd430c8") - NamespaceX500, _ = FromString("6ba7b814-9dad-11d1-80b4-00c04fd430c8") -) - -// And returns result of binary AND of two UUIDs. -func And(u1 UUID, u2 UUID) UUID { - u := UUID{} - for i := 0; i < 16; i++ { - u[i] = u1[i] & u2[i] - } - return u -} - -// Or returns result of binary OR of two UUIDs. -func Or(u1 UUID, u2 UUID) UUID { - u := UUID{} - for i := 0; i < 16; i++ { - u[i] = u1[i] | u2[i] - } - return u -} - -// Equal returns true if u1 and u2 equals, otherwise returns false. -func Equal(u1 UUID, u2 UUID) bool { - return bytes.Equal(u1[:], u2[:]) -} - -// Version returns algorithm version used to generate UUID. -func (u UUID) Version() uint { - return uint(u[6] >> 4) -} - -// Variant returns UUID layout variant. 
-func (u UUID) Variant() uint { - switch { - case (u[8] & 0x80) == 0x00: - return VariantNCS - case (u[8]&0xc0)|0x80 == 0x80: - return VariantRFC4122 - case (u[8]&0xe0)|0xc0 == 0xc0: - return VariantMicrosoft - } - return VariantFuture -} - -// Bytes returns bytes slice representation of UUID. -func (u UUID) Bytes() []byte { - return u[:] -} - -// Returns canonical string representation of UUID: -// xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. -func (u UUID) String() string { - buf := make([]byte, 36) - - hex.Encode(buf[0:8], u[0:4]) - buf[8] = dash - hex.Encode(buf[9:13], u[4:6]) - buf[13] = dash - hex.Encode(buf[14:18], u[6:8]) - buf[18] = dash - hex.Encode(buf[19:23], u[8:10]) - buf[23] = dash - hex.Encode(buf[24:], u[10:]) - - return string(buf) -} - -// SetVersion sets version bits. -func (u *UUID) SetVersion(v byte) { - u[6] = (u[6] & 0x0f) | (v << 4) -} - -// SetVariant sets variant bits as described in RFC 4122. -func (u *UUID) SetVariant() { - u[8] = (u[8] & 0xbf) | 0x80 -} - -// MarshalText implements the encoding.TextMarshaler interface. -// The encoding is the same as returned by String. -func (u UUID) MarshalText() (text []byte, err error) { - text = []byte(u.String()) - return -} - -// UnmarshalText implements the encoding.TextUnmarshaler interface. -// Following formats are supported: -// "6ba7b810-9dad-11d1-80b4-00c04fd430c8", -// "{6ba7b810-9dad-11d1-80b4-00c04fd430c8}", -// "urn:uuid:6ba7b810-9dad-11d1-80b4-00c04fd430c8" -func (u *UUID) UnmarshalText(text []byte) (err error) { - if len(text) < 32 { - err = fmt.Errorf("uuid: UUID string too short: %s", text) - return - } - - t := text[:] - braced := false - - if bytes.Equal(t[:9], urnPrefix) { - t = t[9:] - } else if t[0] == '{' { - braced = true - t = t[1:] - } - - b := u[:] - - for i, byteGroup := range byteGroups { - if i > 0 { - if t[0] != '-' { - err = fmt.Errorf("uuid: invalid string format") - return - } - t = t[1:] - } - - if len(t) < byteGroup { - err = fmt.Errorf("uuid: UUID string too short: %s", text) - return - } - - if i == 4 && len(t) > byteGroup && - ((braced && t[byteGroup] != '}') || len(t[byteGroup:]) > 1 || !braced) { - err = fmt.Errorf("uuid: UUID string too long: %s", text) - return - } - - _, err = hex.Decode(b[:byteGroup/2], t[:byteGroup]) - if err != nil { - return - } - - t = t[byteGroup:] - b = b[byteGroup/2:] - } - - return -} - -// MarshalBinary implements the encoding.BinaryMarshaler interface. -func (u UUID) MarshalBinary() (data []byte, err error) { - data = u.Bytes() - return -} - -// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface. -// It will return error if the slice isn't 16 bytes long. -func (u *UUID) UnmarshalBinary(data []byte) (err error) { - if len(data) != 16 { - err = fmt.Errorf("uuid: UUID must be exactly 16 bytes long, got %d bytes", len(data)) - return - } - copy(u[:], data) - - return -} - -// Value implements the driver.Valuer interface. -func (u UUID) Value() (driver.Value, error) { - return u.String(), nil -} - -// Scan implements the sql.Scanner interface. -// A 16-byte slice is handled by UnmarshalBinary, while -// a longer byte slice or a string is handled by UnmarshalText. -func (u *UUID) Scan(src interface{}) error { - switch src := src.(type) { - case []byte: - if len(src) == 16 { - return u.UnmarshalBinary(src) - } - return u.UnmarshalText(src) - - case string: - return u.UnmarshalText([]byte(src)) - } - - return fmt.Errorf("uuid: cannot convert %T to UUID", src) -} - -// Value implements the driver.Valuer interface. 
-func (u NullUUID) Value() (driver.Value, error) { - if !u.Valid { - return nil, nil - } - // Delegate to UUID Value function - return u.UUID.Value() -} - -// Scan implements the sql.Scanner interface. -func (u *NullUUID) Scan(src interface{}) error { - if src == nil { - u.UUID, u.Valid = Nil, false - return nil - } - - // Delegate to UUID Scan function - u.Valid = true - return u.UUID.Scan(src) -} - -// FromBytes returns UUID converted from raw byte slice input. -// It will return error if the slice isn't 16 bytes long. -func FromBytes(input []byte) (u UUID, err error) { - err = u.UnmarshalBinary(input) - return -} - -// FromBytesOrNil returns UUID converted from raw byte slice input. -// Same behavior as FromBytes, but returns a Nil UUID on error. -func FromBytesOrNil(input []byte) UUID { - uuid, err := FromBytes(input) - if err != nil { - return Nil - } - return uuid -} - -// FromString returns UUID parsed from string input. -// Input is expected in a form accepted by UnmarshalText. -func FromString(input string) (u UUID, err error) { - err = u.UnmarshalText([]byte(input)) - return -} - -// FromStringOrNil returns UUID parsed from string input. -// Same behavior as FromString, but returns a Nil UUID on error. -func FromStringOrNil(input string) UUID { - uuid, err := FromString(input) - if err != nil { - return Nil - } - return uuid -} - -// Returns UUID v1/v2 storage state. -// Returns epoch timestamp, clock sequence, and hardware address. -func getStorage() (uint64, uint16, []byte) { - storageOnce.Do(initStorage) - - storageMutex.Lock() - defer storageMutex.Unlock() - - timeNow := epochFunc() - // Clock changed backwards since last UUID generation. - // Should increase clock sequence. - if timeNow <= lastTime { - clockSequence++ - } - lastTime = timeNow - - return timeNow, clockSequence, hardwareAddr[:] -} - -// NewV1 returns UUID based on current timestamp and MAC address. -func NewV1() UUID { - u := UUID{} - - timeNow, clockSeq, hardwareAddr := getStorage() - - binary.BigEndian.PutUint32(u[0:], uint32(timeNow)) - binary.BigEndian.PutUint16(u[4:], uint16(timeNow>>32)) - binary.BigEndian.PutUint16(u[6:], uint16(timeNow>>48)) - binary.BigEndian.PutUint16(u[8:], clockSeq) - - copy(u[10:], hardwareAddr) - - u.SetVersion(1) - u.SetVariant() - - return u -} - -// NewV2 returns DCE Security UUID based on POSIX UID/GID. -func NewV2(domain byte) UUID { - u := UUID{} - - timeNow, clockSeq, hardwareAddr := getStorage() - - switch domain { - case DomainPerson: - binary.BigEndian.PutUint32(u[0:], posixUID) - case DomainGroup: - binary.BigEndian.PutUint32(u[0:], posixGID) - } - - binary.BigEndian.PutUint16(u[4:], uint16(timeNow>>32)) - binary.BigEndian.PutUint16(u[6:], uint16(timeNow>>48)) - binary.BigEndian.PutUint16(u[8:], clockSeq) - u[9] = domain - - copy(u[10:], hardwareAddr) - - u.SetVersion(2) - u.SetVariant() - - return u -} - -// NewV3 returns UUID based on MD5 hash of namespace UUID and name. -func NewV3(ns UUID, name string) UUID { - u := newFromHash(md5.New(), ns, name) - u.SetVersion(3) - u.SetVariant() - - return u -} - -// NewV4 returns random generated UUID. -func NewV4() UUID { - u := UUID{} - safeRandom(u[:]) - u.SetVersion(4) - u.SetVariant() - - return u -} - -// NewV5 returns UUID based on SHA-1 hash of namespace UUID and name. -func NewV5(ns UUID, name string) UUID { - u := newFromHash(sha1.New(), ns, name) - u.SetVersion(5) - u.SetVariant() - - return u -} - -// Returns UUID based on hashing of namespace UUID and name. 
-func newFromHash(h hash.Hash, ns UUID, name string) UUID { - u := UUID{} - h.Write(ns[:]) - h.Write([]byte(name)) - copy(u[:], h.Sum(nil)) - - return u -} diff --git a/vendor/vendor.json b/vendor/vendor.json index 5f14858f..f9328fdf 100644 --- a/vendor/vendor.json +++ b/vendor/vendor.json @@ -968,12 +968,6 @@ "revision": "ec24b7f12fca9f78fbfcd62a0ea8bce14ade8792", "revisionTime": "2017-04-07T04:09:43Z" }, - { - "checksumSHA1": "zmC8/3V4ls53DJlNTKDZwPSC/dA=", - "path": "github.com/satori/go.uuid", - "revision": "b061729afc07e77a8aa4fad0a2fd840958f1942a", - "revisionTime": "2016-09-27T10:08:44Z" - }, { "checksumSHA1": "t/Hcc8jNXkH58QfnotLNtpLh+qc=", "path": "github.com/stoewer/go-strcase", From 70e55920d60371f662b0a2ff51cae7c0e0fbcf41 Mon Sep 17 00:00:00 2001 From: Chris Stephens Date: Wed, 5 Sep 2018 09:05:39 -0700 Subject: [PATCH 18/31] Improving billing logging sink docs --- website/docs/r/logging_billing_account_sink.html.markdown | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/website/docs/r/logging_billing_account_sink.html.markdown b/website/docs/r/logging_billing_account_sink.html.markdown index eafc88e7..92e902d0 100644 --- a/website/docs/r/logging_billing_account_sink.html.markdown +++ b/website/docs/r/logging_billing_account_sink.html.markdown @@ -12,8 +12,10 @@ Manages a billing account logging sink. For more information see [the official documentation](https://cloud.google.com/logging/docs/) and [Exporting Logs in the API](https://cloud.google.com/logging/docs/api/tasks/exporting-logs). -Note that you must have the "Logs Configuration Writer" IAM role (`roles/logging.configWriter`) -granted to the credentials used with terraform. +~> **Note** You must have the "Logs Configuration Writer" IAM role (`roles/logging.configWriter`) +[granted on the billing account](https://cloud.google.com/billing/reference/rest/v1/billingAccounts/getIamPolicy) to +the credentials used with Terraform. [IAM roles granted on a billing account](https://cloud.google.com/billing/docs/how-to/billing-access) are separate from the +typical IAM roles granted on a project. ## Example Usage From e2a7bf1cf797b7242ca9928c9e3b0714d49531b5 Mon Sep 17 00:00:00 2001 From: Paddy Date: Wed, 5 Sep 2018 15:20:00 -0700 Subject: [PATCH 19/31] Use Go v1.11.0 --- .travis.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.travis.yml b/.travis.yml index 88dc4eec..04c0d65c 100644 --- a/.travis.yml +++ b/.travis.yml @@ -4,7 +4,7 @@ services: - docker language: go go: -- 1.9.1 +- "1.11" install: # This script is used by the Travis build to install a cookie for From 010c0f3288820748f980d178024dfebfcbe64417 Mon Sep 17 00:00:00 2001 From: Paddy Carver Date: Wed, 5 Sep 2018 15:24:22 -0700 Subject: [PATCH 20/31] Update README and run fmt. 
--- README.md | 6 ++--- google/image_test.go | 12 +++++----- .../resource_binaryauthorization_attestor.go | 4 ++-- .../resource_compute_instance_migrate_test.go | 24 +++++++++---------- google/resource_compute_network.go | 2 +- google/resource_sql_database_instance.go | 2 +- google/validation_test.go | 6 ++--- 7 files changed, 28 insertions(+), 28 deletions(-) diff --git a/README.md b/README.md index 5d7d2103..dd0f5def 100644 --- a/README.md +++ b/README.md @@ -18,8 +18,8 @@ This provider plugin is maintained by: Requirements ------------ -- [Terraform](https://www.terraform.io/downloads.html) 0.10.x -- [Go](https://golang.org/doc/install) 1.9 (to build the provider plugin) +- [Terraform](https://www.terraform.io/downloads.html) 0.10+ +- [Go](https://golang.org/doc/install) 1.11.0 or higher Building The Provider --------------------- @@ -51,7 +51,7 @@ To upgrade to the latest stable version of the Google provider run `terraform in Developing the Provider --------------------------- -If you wish to work on the provider, you'll first need [Go](http://www.golang.org) installed on your machine (version 1.9+ is *required*). You'll also need to correctly setup a [GOPATH](http://golang.org/doc/code.html#GOPATH), as well as adding `$GOPATH/bin` to your `$PATH`. +If you wish to work on the provider, you'll first need [Go](http://www.golang.org) installed on your machine (version 1.11+ is *required*). You'll also need to correctly setup a [GOPATH](http://golang.org/doc/code.html#GOPATH), as well as adding `$GOPATH/bin` to your `$PATH`. To compile the provider, run `make build`. This will build the provider and put the provider binary in the `$GOPATH/bin` directory. diff --git a/google/image_test.go b/google/image_test.go index 6eb9cfe0..026d02e5 100644 --- a/google/image_test.go +++ b/google/image_test.go @@ -77,12 +77,12 @@ func testAccCheckComputeImageResolution(n string) resource.TestCheckFunc { "global/images/" + name: "global/images/" + name, "global/images/family/" + family: "global/images/family/" + family, - name: "global/images/" + name, - family: "global/images/family/" + family, - "family/" + family: "global/images/family/" + family, - project + "/" + name: "projects/" + project + "/global/images/" + name, - project + "/" + family: "projects/" + project + "/global/images/family/" + family, - link: link, + name: "global/images/" + name, + family: "global/images/family/" + family, + "family/" + family: "global/images/family/" + family, + project + "/" + name: "projects/" + project + "/global/images/" + name, + project + "/" + family: "projects/" + project + "/global/images/family/" + family, + link: link, } for input, expectation := range images { diff --git a/google/resource_binaryauthorization_attestor.go b/google/resource_binaryauthorization_attestor.go index 9b02f402..33a26b69 100644 --- a/google/resource_binaryauthorization_attestor.go +++ b/google/resource_binaryauthorization_attestor.go @@ -281,8 +281,8 @@ func flattenBinaryAuthorizationAttestorAttestationAuthorityNotePublicKeys(v inte for _, raw := range l { original := raw.(map[string]interface{}) transformed = append(transformed, map[string]interface{}{ - "comment": flattenBinaryAuthorizationAttestorAttestationAuthorityNotePublicKeysComment(original["comment"]), - "id": flattenBinaryAuthorizationAttestorAttestationAuthorityNotePublicKeysId(original["id"]), + "comment": flattenBinaryAuthorizationAttestorAttestationAuthorityNotePublicKeysComment(original["comment"]), + "id": 
flattenBinaryAuthorizationAttestorAttestationAuthorityNotePublicKeysId(original["id"]), "ascii_armored_pgp_public_key": flattenBinaryAuthorizationAttestorAttestationAuthorityNotePublicKeysAsciiArmoredPgpPublicKey(original["asciiArmoredPgpPublicKey"]), }) } diff --git a/google/resource_compute_instance_migrate_test.go b/google/resource_compute_instance_migrate_test.go index e0c8c80f..871e22b0 100644 --- a/google/resource_compute_instance_migrate_test.go +++ b/google/resource_compute_instance_migrate_test.go @@ -156,7 +156,7 @@ func TestAccComputeInstanceMigrateState_bootDisk(t *testing.T) { "disk.0.device_name": "persistent-disk-0", "disk.0.disk_encryption_key_raw": "encrypt-key", "disk.0.disk_encryption_key_sha256": "encrypt-key-sha", - "zone": zone, + "zone": zone, } expected := map[string]string{ "boot_disk.#": "1", @@ -223,7 +223,7 @@ func TestAccComputeInstanceMigrateState_v4FixBootDisk(t *testing.T) { "disk.0.device_name": "persistent-disk-0", "disk.0.disk_encryption_key_raw": "encrypt-key", "disk.0.disk_encryption_key_sha256": "encrypt-key-sha", - "zone": zone, + "zone": zone, } expected := map[string]string{ "boot_disk.#": "1", @@ -305,7 +305,7 @@ func TestAccComputeInstanceMigrateState_attachedDiskFromSource(t *testing.T) { "disk.0.device_name": "persistent-disk-1", "disk.0.disk_encryption_key_raw": "encrypt-key", "disk.0.disk_encryption_key_sha256": "encrypt-key-sha", - "zone": zone, + "zone": zone, } expected := map[string]string{ "boot_disk.#": "1", @@ -385,7 +385,7 @@ func TestAccComputeInstanceMigrateState_v4FixAttachedDiskFromSource(t *testing.T "disk.0.device_name": "persistent-disk-1", "disk.0.disk_encryption_key_raw": "encrypt-key", "disk.0.disk_encryption_key_sha256": "encrypt-key-sha", - "zone": zone, + "zone": zone, } expected := map[string]string{ "boot_disk.#": "1", @@ -452,7 +452,7 @@ func TestAccComputeInstanceMigrateState_attachedDiskFromEncryptionKey(t *testing "disk.0.image": "projects/debian-cloud/global/images/family/debian-9", "disk.0.disk_encryption_key_raw": "SGVsbG8gZnJvbSBHb29nbGUgQ2xvdWQgUGxhdGZvcm0=", "disk.0.disk_encryption_key_sha256": "esTuF7d4eatX4cnc4JsiEiaI+Rff78JgPhA/v1zxX9E=", - "zone": zone, + "zone": zone, } expected := map[string]string{ "boot_disk.#": "1", @@ -520,7 +520,7 @@ func TestAccComputeInstanceMigrateState_v4FixAttachedDiskFromEncryptionKey(t *te "disk.0.image": "projects/debian-cloud/global/images/family/debian-9", "disk.0.disk_encryption_key_raw": "SGVsbG8gZnJvbSBHb29nbGUgQ2xvdWQgUGxhdGZvcm0=", "disk.0.disk_encryption_key_sha256": "esTuF7d4eatX4cnc4JsiEiaI+Rff78JgPhA/v1zxX9E=", - "zone": zone, + "zone": zone, } expected := map[string]string{ "boot_disk.#": "1", @@ -600,8 +600,8 @@ func TestAccComputeInstanceMigrateState_attachedDiskFromAutoDeleteAndImage(t *te "attached_disk.0.device_name": "persistent-disk-2", "attached_disk.1.source": "https://www.googleapis.com/compute/v1/projects/" + config.Project + "/zones/" + zone + "/disks/" + instanceName + "-1", "attached_disk.1.device_name": "persistent-disk-1", - "zone": zone, - "create_timeout": "4", + "zone": zone, + "create_timeout": "4", } runInstanceMigrateTest(t, instanceName, "migrate disk to attached disk", 2 /* state version */, attributes, expected, config) @@ -672,7 +672,7 @@ func TestAccComputeInstanceMigrateState_v4FixAttachedDiskFromAutoDeleteAndImage( "attached_disk.0.device_name": "persistent-disk-2", "attached_disk.1.source": "https://www.googleapis.com/compute/v1/projects/" + config.Project + "/zones/" + zone + "/disks/" + instanceName + "-1", 
"attached_disk.1.device_name": "persistent-disk-1", - "zone": zone, + "zone": zone, } runInstanceMigrateTest(t, instanceName, "migrate disk to attached disk", 4 /* state version */, attributes, expected, config) @@ -735,8 +735,8 @@ func TestAccComputeInstanceMigrateState_scratchDisk(t *testing.T) { "boot_disk.#": "1", "scratch_disk.#": "1", "scratch_disk.0.interface": "SCSI", - "zone": zone, - "create_timeout": "4", + "zone": zone, + "create_timeout": "4", } runInstanceMigrateTest(t, instanceName, "migrate disk to scratch disk", 2 /* state version */, attributes, expected, config) @@ -799,7 +799,7 @@ func TestAccComputeInstanceMigrateState_v4FixScratchDisk(t *testing.T) { "boot_disk.#": "1", "scratch_disk.#": "1", "scratch_disk.0.interface": "SCSI", - "zone": zone, + "zone": zone, } runInstanceMigrateTest(t, instanceName, "migrate disk to scratch disk", 4 /* state version */, attributes, expected, config) diff --git a/google/resource_compute_network.go b/google/resource_compute_network.go index c615bc34..c30054e9 100644 --- a/google/resource_compute_network.go +++ b/google/resource_compute_network.go @@ -95,7 +95,7 @@ func resourceComputeNetworkCreate(d *schema.ResourceData, meta interface{}) erro // Build the network parameter network := &compute.Network{ - Name: d.Get("name").(string), + Name: d.Get("name").(string), AutoCreateSubnetworks: autoCreateSubnetworks, Description: d.Get("description").(string), } diff --git a/google/resource_sql_database_instance.go b/google/resource_sql_database_instance.go index 5770e92f..aa8890fc 100644 --- a/google/resource_sql_database_instance.go +++ b/google/resource_sql_database_instance.go @@ -1225,7 +1225,7 @@ func flattenAuthorizedNetworks(entries []*sqladmin.AclEntry) interface{} { func flattenLocationPreference(locationPreference *sqladmin.LocationPreference) interface{} { data := map[string]interface{}{ "follow_gae_application": locationPreference.FollowGaeApplication, - "zone": locationPreference.Zone, + "zone": locationPreference.Zone, } return []map[string]interface{}{data} diff --git a/google/validation_test.go b/google/validation_test.go index 967ee5f7..5a67a77d 100644 --- a/google/validation_test.go +++ b/google/validation_test.go @@ -247,15 +247,15 @@ func TestOrEmpty(t *testing.T) { ExpectValidationErrors bool }{ "accept empty value": { - Value: "", + Value: "", ExpectValidationErrors: false, }, "non empty value is accepted when valid": { - Value: "valid", + Value: "valid", ExpectValidationErrors: false, }, "non empty value is rejected if invalid": { - Value: "invalid", + Value: "invalid", ExpectValidationErrors: true, }, } From 6cf87b40147f4d4ad99e10181ac3c8375e6f1272 Mon Sep 17 00:00:00 2001 From: Paddy Date: Wed, 5 Sep 2018 15:34:09 -0700 Subject: [PATCH 21/31] Update CHANGELOG.md --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 913a851f..8b655161 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,6 +2,7 @@ BACKWARDS INCOMPATIBILITIES: * compute: instance templates used to not set any disks in the template in state unless they were in the config, as well. It also only stored the image name in state. Both of these were bugs, and have been fixed. They should not cause any disruption. If you were interpolating an image name from a disk in an instance template, you'll need to update your config to strip out everything before the last `/`. 
If you imported an instance template, and did not add all the disks in the template to your config, you'll see a diff; add those disks to your config, and it will go away. Those are the only two instances where this change should effect you. We apologise for the inconvenience. [GH-1916] +* provider: This is the first release tested against and built with Go 1.11, which required go fmt changes to the code. If you are building a custom version of this provider or running tests using the repository Make targets (e.g. make build) when using a previous version of Go, you will receive errors. You can use the underlying go commands (e.g. go build) to workaround the go fmt check in the Make targets until you are able to upgrade Go. IMPROVEMENTS: * compute: `google_compute_health_check` is autogenerated, exposing the `type` attribute and accepting more import formats. [GH-1941] From 526fb518247ce5d6b0589c2cec6aaff9f00ad218 Mon Sep 17 00:00:00 2001 From: David Alger Date: Wed, 5 Sep 2018 17:34:47 -0500 Subject: [PATCH 22/31] Correct google_compute_ssl_certificate import example (#1970) --- website/docs/r/compute_ssl_certificate.html.markdown | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/r/compute_ssl_certificate.html.markdown b/website/docs/r/compute_ssl_certificate.html.markdown index 0f3ee490..77213bce 100644 --- a/website/docs/r/compute_ssl_certificate.html.markdown +++ b/website/docs/r/compute_ssl_certificate.html.markdown @@ -126,5 +126,5 @@ exported: SSL certificate can be imported using the `name`, e.g. ``` -$ terraform import compute_ssl_certificate.html.foobar foobar +$ terraform import google_compute_ssl_certificate.default my-certificate ``` From fc197d4321ea9c0f1395c2c0e6e2946211270a2a Mon Sep 17 00:00:00 2001 From: Paddy Date: Wed, 5 Sep 2018 15:38:24 -0700 Subject: [PATCH 23/31] Update bug.md --- .github/ISSUE_TEMPLATE/bug.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/.github/ISSUE_TEMPLATE/bug.md b/.github/ISSUE_TEMPLATE/bug.md index ca9424df..d7f48126 100644 --- a/.github/ISSUE_TEMPLATE/bug.md +++ b/.github/ISSUE_TEMPLATE/bug.md @@ -12,7 +12,7 @@ about: For when something is there, but doesn't work how it should. * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment -* If the issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If the issue is assigned to a user, that user is claiming responsibility for the issue. If the issue is assigned to "hashibot", a community member has claimed the issue already. +* If an issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to "hashibot", a community member has claimed the issue already. @@ -34,6 +34,9 @@ about: For when something is there, but doesn't work how it should. 
# Copy-paste your Terraform configurations here - for large Terraform configs, # please use a service like Dropbox and share a link to the ZIP file. For # security, you can also encrypt the files using our GPG public key: https://www.hashicorp.com/security +# If reproducing the bug involves modifying the config file (e.g., apply a config, +# change a value, apply the config again, see the bug) then please include both the +# version of the config before the change, and the version of the config after the change. ``` ### Debug Output From ae3bf9a84fd447bacaa2ed7bd7ece9ed44191ec2 Mon Sep 17 00:00:00 2001 From: Paddy Date: Wed, 5 Sep 2018 15:38:51 -0700 Subject: [PATCH 24/31] Update enhancement.md --- .github/ISSUE_TEMPLATE/enhancement.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/ISSUE_TEMPLATE/enhancement.md b/.github/ISSUE_TEMPLATE/enhancement.md index 30279323..28908017 100644 --- a/.github/ISSUE_TEMPLATE/enhancement.md +++ b/.github/ISSUE_TEMPLATE/enhancement.md @@ -1,6 +1,6 @@ --- name: Enhancement -about: For when something (a resource, field, etc.) is missing, but should be added. +about: For when something (a resource, field, etc.) is missing, and should be added. --- From aa040a0b42717914df1fb40291a46bef4a499d4c Mon Sep 17 00:00:00 2001 From: Paddy Date: Wed, 5 Sep 2018 15:39:12 -0700 Subject: [PATCH 25/31] Update enhancement.md --- .github/ISSUE_TEMPLATE/enhancement.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/ISSUE_TEMPLATE/enhancement.md b/.github/ISSUE_TEMPLATE/enhancement.md index 28908017..f2aa62e4 100644 --- a/.github/ISSUE_TEMPLATE/enhancement.md +++ b/.github/ISSUE_TEMPLATE/enhancement.md @@ -11,7 +11,7 @@ about: For when something (a resource, field, etc.) is missing, and should be ad * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request -* If you are interested in working on this issue or have submitted a pull request, please leave a commentIf the issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If the issue is assigned to a user, that user is claiming responsibility for the issue. If the issue is assigned to "hashibot", a community member has claimed the issue already. +* If you are interested in working on this issue or have submitted a pull request, please leave a comment. If the issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If the issue is assigned to a user, that user is claiming responsibility for the issue. If the issue is assigned to "hashibot", a community member has claimed the issue already. 
From 88cfca000f642b90a3503afe0bd60204c43cd66a Mon Sep 17 00:00:00 2001 From: Paddy Date: Wed, 5 Sep 2018 15:42:23 -0700 Subject: [PATCH 26/31] Delete success-story.md --- .github/ISSUE_TEMPLATE/success-story.md | 26 ------------------------- 1 file changed, 26 deletions(-) delete mode 100644 .github/ISSUE_TEMPLATE/success-story.md diff --git a/.github/ISSUE_TEMPLATE/success-story.md b/.github/ISSUE_TEMPLATE/success-story.md deleted file mode 100644 index 0d18e985..00000000 --- a/.github/ISSUE_TEMPLATE/success-story.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -name: Success story -about: Tell us about how the provider worked out well for you and things you love - about it. - ---- - - - - -**Company**: -**Project**: - -## How the Google Provider Helped - -## Things I Really Enjoyed About Using the Provider From 3597b8ddf6334beeb1386d6446236f7d9eef346d Mon Sep 17 00:00:00 2001 From: Paddy Date: Wed, 5 Sep 2018 15:43:58 -0700 Subject: [PATCH 27/31] Update question.md --- .github/ISSUE_TEMPLATE/question.md | 1 + 1 file changed, 1 insertion(+) diff --git a/.github/ISSUE_TEMPLATE/question.md b/.github/ISSUE_TEMPLATE/question.md index 7322f1e6..60cba9f0 100644 --- a/.github/ISSUE_TEMPLATE/question.md +++ b/.github/ISSUE_TEMPLATE/question.md @@ -15,3 +15,4 @@ If you have a support request or question please submit them to one of these res * [Terraform community resources](https://www.terraform.io/docs/extend/community/index.html) * [HashiCorp support](https://support.hashicorp.com) (Terraform Enterprise customers) +* [Google Cloud Platform Slack](https://gcp-slack.appspot.com/) From 035a581b9f0c1db7c328b4ad043c57f6c5bfcc5d Mon Sep 17 00:00:00 2001 From: Riley Karson Date: Thu, 6 Sep 2018 08:14:55 -0700 Subject: [PATCH 28/31] Add import support for org, folder, billing logging sinks (#1860) Fixes #1494. * Add import support for `google_logging_organization_sink`, `google_logging_folder_sink`, `google_logging_billing_account_sink`. Using `StateFunc` over `DiffSuppressFunc` should only affect tests; for some reason `TestAccLoggingFolderSink_folderAcceptsFullFolderPath` expected a `folder` value of `folders/{{id}}` vs expecting `{{id}}` when only `DiffSuppressFunc` was used, when in real use `DiffSuppressFunc` should be sufficient. --- google/logging_utils.go | 2 +- google/resource_logging_billing_account_sink.go | 3 +++ ...esource_logging_billing_account_sink_test.go | 12 ++++++++++++ google/resource_logging_folder_sink.go | 14 ++++++++++---- google/resource_logging_folder_sink_test.go | 16 ++++++++++++++++ google/resource_logging_organization_sink.go | 13 ++++++++++--- .../resource_logging_organization_sink_test.go | 12 ++++++++++++ google/resource_logging_project_sink.go | 17 +---------------- google/resource_logging_sink.go | 13 +++++++++++++ .../logging_billing_account_sink.html.markdown | 8 ++++++++ .../docs/r/logging_folder_sink.html.markdown | 8 ++++++++ .../r/logging_organization_sink.html.markdown | 8 ++++++++ 12 files changed, 102 insertions(+), 24 deletions(-) diff --git a/google/logging_utils.go b/google/logging_utils.go index a94ab70e..577dc575 100644 --- a/google/logging_utils.go +++ b/google/logging_utils.go @@ -7,7 +7,7 @@ import ( // loggingSinkResourceTypes contains all the possible Stackdriver Logging resource types. Used to parse ids safely. 
var loggingSinkResourceTypes = []string{ - "billingAccount", + "billingAccounts", "folders", "organizations", "projects", diff --git a/google/resource_logging_billing_account_sink.go b/google/resource_logging_billing_account_sink.go index 53cbcb49..60b9d700 100644 --- a/google/resource_logging_billing_account_sink.go +++ b/google/resource_logging_billing_account_sink.go @@ -12,6 +12,9 @@ func resourceLoggingBillingAccountSink() *schema.Resource { Delete: resourceLoggingBillingAccountSinkDelete, Update: resourceLoggingBillingAccountSinkUpdate, Schema: resourceLoggingSinkSchema(), + Importer: &schema.ResourceImporter{ + State: resourceLoggingSinkImportState("billing_account"), + }, } schm.Schema["billing_account"] = &schema.Schema{ Type: schema.TypeString, diff --git a/google/resource_logging_billing_account_sink_test.go b/google/resource_logging_billing_account_sink_test.go index df284b13..4ce2e789 100644 --- a/google/resource_logging_billing_account_sink_test.go +++ b/google/resource_logging_billing_account_sink_test.go @@ -30,6 +30,10 @@ func TestAccLoggingBillingAccountSink_basic(t *testing.T) { testAccCheckLoggingBillingAccountSinkExists("google_logging_billing_account_sink.basic", &sink), testAccCheckLoggingBillingAccountSink(&sink, "google_logging_billing_account_sink.basic"), ), + }, { + ResourceName: "google_logging_billing_account_sink.basic", + ImportState: true, + ImportStateVerify: true, }, }, }) @@ -62,6 +66,10 @@ func TestAccLoggingBillingAccountSink_update(t *testing.T) { testAccCheckLoggingBillingAccountSinkExists("google_logging_billing_account_sink.update", &sinkAfter), testAccCheckLoggingBillingAccountSink(&sinkAfter, "google_logging_billing_account_sink.update"), ), + }, { + ResourceName: "google_logging_billing_account_sink.update", + ImportState: true, + ImportStateVerify: true, }, }, }) @@ -96,6 +104,10 @@ func TestAccLoggingBillingAccountSink_heredoc(t *testing.T) { testAccCheckLoggingBillingAccountSinkExists("google_logging_billing_account_sink.heredoc", &sink), testAccCheckLoggingBillingAccountSink(&sink, "google_logging_billing_account_sink.heredoc"), ), + }, { + ResourceName: "google_logging_billing_account_sink.heredoc", + ImportState: true, + ImportStateVerify: true, }, }, }) diff --git a/google/resource_logging_folder_sink.go b/google/resource_logging_folder_sink.go index f4798842..da34899c 100644 --- a/google/resource_logging_folder_sink.go +++ b/google/resource_logging_folder_sink.go @@ -2,6 +2,7 @@ package google import ( "fmt" + "strings" "github.com/hashicorp/terraform/helper/schema" ) @@ -13,12 +14,17 @@ func resourceLoggingFolderSink() *schema.Resource { Delete: resourceLoggingFolderSinkDelete, Update: resourceLoggingFolderSinkUpdate, Schema: resourceLoggingSinkSchema(), + Importer: &schema.ResourceImporter{ + State: resourceLoggingSinkImportState("folder"), + }, } schm.Schema["folder"] = &schema.Schema{ - Type: schema.TypeString, - Required: true, - ForceNew: true, - DiffSuppressFunc: optionalPrefixSuppress("folders/"), + Type: schema.TypeString, + Required: true, + ForceNew: true, + StateFunc: func(v interface{}) string { + return strings.Replace(v.(string), "folders/", "", 1) + }, } schm.Schema["include_children"] = &schema.Schema{ Type: schema.TypeBool, diff --git a/google/resource_logging_folder_sink_test.go b/google/resource_logging_folder_sink_test.go index e47fe2ce..490749ac 100644 --- a/google/resource_logging_folder_sink_test.go +++ b/google/resource_logging_folder_sink_test.go @@ -32,6 +32,10 @@ func TestAccLoggingFolderSink_basic(t 
*testing.T) { testAccCheckLoggingFolderSinkExists("google_logging_folder_sink.basic", &sink), testAccCheckLoggingFolderSink(&sink, "google_logging_folder_sink.basic"), ), + }, { + ResourceName: "google_logging_folder_sink.basic", + ImportState: true, + ImportStateVerify: true, }, }, }) @@ -58,6 +62,10 @@ func TestAccLoggingFolderSink_folderAcceptsFullFolderPath(t *testing.T) { testAccCheckLoggingFolderSinkExists("google_logging_folder_sink.basic", &sink), testAccCheckLoggingFolderSink(&sink, "google_logging_folder_sink.basic"), ), + }, { + ResourceName: "google_logging_folder_sink.basic", + ImportState: true, + ImportStateVerify: true, }, }, }) @@ -92,6 +100,10 @@ func TestAccLoggingFolderSink_update(t *testing.T) { testAccCheckLoggingFolderSinkExists("google_logging_folder_sink.basic", &sinkAfter), testAccCheckLoggingFolderSink(&sinkAfter, "google_logging_folder_sink.basic"), ), + }, { + ResourceName: "google_logging_folder_sink.basic", + ImportState: true, + ImportStateVerify: true, }, }, }) @@ -127,6 +139,10 @@ func TestAccLoggingFolderSink_heredoc(t *testing.T) { testAccCheckLoggingFolderSinkExists("google_logging_folder_sink.heredoc", &sink), testAccCheckLoggingFolderSink(&sink, "google_logging_folder_sink.heredoc"), ), + }, { + ResourceName: "google_logging_folder_sink.heredoc", + ImportState: true, + ImportStateVerify: true, }, }, }) diff --git a/google/resource_logging_organization_sink.go b/google/resource_logging_organization_sink.go index a1660ed5..9d539ded 100644 --- a/google/resource_logging_organization_sink.go +++ b/google/resource_logging_organization_sink.go @@ -2,6 +2,8 @@ package google import ( "fmt" + "strings" + "github.com/hashicorp/terraform/helper/schema" ) @@ -12,11 +14,16 @@ func resourceLoggingOrganizationSink() *schema.Resource { Delete: resourceLoggingOrganizationSinkDelete, Update: resourceLoggingOrganizationSinkUpdate, Schema: resourceLoggingSinkSchema(), + Importer: &schema.ResourceImporter{ + State: resourceLoggingSinkImportState("org_id"), + }, } schm.Schema["org_id"] = &schema.Schema{ - Type: schema.TypeString, - Required: true, - DiffSuppressFunc: optionalPrefixSuppress("organizations/"), + Type: schema.TypeString, + Required: true, + StateFunc: func(v interface{}) string { + return strings.Replace(v.(string), "organizations/", "", 1) + }, } schm.Schema["include_children"] = &schema.Schema{ Type: schema.TypeBool, diff --git a/google/resource_logging_organization_sink_test.go b/google/resource_logging_organization_sink_test.go index 22949e40..a5149cec 100644 --- a/google/resource_logging_organization_sink_test.go +++ b/google/resource_logging_organization_sink_test.go @@ -31,6 +31,10 @@ func TestAccLoggingOrganizationSink_basic(t *testing.T) { testAccCheckLoggingOrganizationSinkExists("google_logging_organization_sink.basic", &sink), testAccCheckLoggingOrganizationSink(&sink, "google_logging_organization_sink.basic"), ), + }, { + ResourceName: "google_logging_organization_sink.basic", + ImportState: true, + ImportStateVerify: true, }, }, }) @@ -63,6 +67,10 @@ func TestAccLoggingOrganizationSink_update(t *testing.T) { testAccCheckLoggingOrganizationSinkExists("google_logging_organization_sink.update", &sinkAfter), testAccCheckLoggingOrganizationSink(&sinkAfter, "google_logging_organization_sink.update"), ), + }, { + ResourceName: "google_logging_organization_sink.update", + ImportState: true, + ImportStateVerify: true, }, }, }) @@ -97,6 +105,10 @@ func TestAccLoggingOrganizationSink_heredoc(t *testing.T) { 
testAccCheckLoggingOrganizationSinkExists("google_logging_organization_sink.heredoc", &sink), testAccCheckLoggingOrganizationSink(&sink, "google_logging_organization_sink.heredoc"), ), + }, { + ResourceName: "google_logging_organization_sink.heredoc", + ImportState: true, + ImportStateVerify: true, }, }, }) diff --git a/google/resource_logging_project_sink.go b/google/resource_logging_project_sink.go index f91403d4..5e6d03da 100644 --- a/google/resource_logging_project_sink.go +++ b/google/resource_logging_project_sink.go @@ -16,7 +16,7 @@ func resourceLoggingProjectSink() *schema.Resource { Update: resourceLoggingProjectSinkUpdate, Schema: resourceLoggingSinkSchema(), Importer: &schema.ResourceImporter{ - State: resourceLoggingProjectSinkImportState, + State: resourceLoggingSinkImportState("project"), }, } schm.Schema["project"] = &schema.Schema{ @@ -103,18 +103,3 @@ func resourceLoggingProjectSinkDelete(d *schema.ResourceData, meta interface{}) d.SetId("") return nil } - -func resourceLoggingProjectSinkImportState(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { - config := meta.(*Config) - - loggingSinkId, err := parseLoggingSinkId(d.Id()) - if err != nil { - return nil, err - } - - if config.Project != loggingSinkId.resourceId { - d.Set("project", loggingSinkId.resourceId) - } - - return []*schema.ResourceData{d}, nil -} diff --git a/google/resource_logging_sink.go b/google/resource_logging_sink.go index 4f7e8987..4367a97f 100644 --- a/google/resource_logging_sink.go +++ b/google/resource_logging_sink.go @@ -69,3 +69,16 @@ func expandResourceLoggingSinkForUpdate(d *schema.ResourceData) *logging.LogSink } return &sink } + +func resourceLoggingSinkImportState(sinkType string) schema.StateFunc { + return func(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + loggingSinkId, err := parseLoggingSinkId(d.Id()) + if err != nil { + return nil, err + } + + d.Set(sinkType, loggingSinkId.resourceId) + + return []*schema.ResourceData{d}, nil + } +} diff --git a/website/docs/r/logging_billing_account_sink.html.markdown b/website/docs/r/logging_billing_account_sink.html.markdown index 92e902d0..66204280 100644 --- a/website/docs/r/logging_billing_account_sink.html.markdown +++ b/website/docs/r/logging_billing_account_sink.html.markdown @@ -69,3 +69,11 @@ exported: * `writer_identity` - The identity associated with this sink. This identity must be granted write access to the configured `destination`. + +## Import + +Billing account logging sinks can be imported using this format: + +``` +$ terraform import google_logging_billing_account_sink.my_sink billingAccounts/{{billing_account_id}}/sinks/{{sink_id}} +``` diff --git a/website/docs/r/logging_folder_sink.html.markdown b/website/docs/r/logging_folder_sink.html.markdown index a7fd4748..08484dba 100644 --- a/website/docs/r/logging_folder_sink.html.markdown +++ b/website/docs/r/logging_folder_sink.html.markdown @@ -79,3 +79,11 @@ exported: * `writer_identity` - The identity associated with this sink. This identity must be granted write access to the configured `destination`. 
+ +## Import + +Folder-level logging sinks can be imported using this format: + +``` +$ terraform import google_logging_folder_sink.my_sink folders/{{folder_id}}/sinks/{{sink_id}} +``` diff --git a/website/docs/r/logging_organization_sink.html.markdown b/website/docs/r/logging_organization_sink.html.markdown index 2d6a6a43..ee396edb 100644 --- a/website/docs/r/logging_organization_sink.html.markdown +++ b/website/docs/r/logging_organization_sink.html.markdown @@ -73,3 +73,11 @@ exported: * `writer_identity` - The identity associated with this sink. This identity must be granted write access to the configured `destination`. + +## Import + +Organization-level logging sinks can be imported using this format: + +``` +$ terraform import google_logging_organization_sink.my_sink organizations/{{organization_id}}/sinks/{{sink_id}} +``` From 3f906b3c8db90b16239f3b00086ad1f541b95522 Mon Sep 17 00:00:00 2001 From: Riley Karson Date: Thu, 6 Sep 2018 08:16:30 -0700 Subject: [PATCH 29/31] Update CHANGELOG.md --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 8b655161..a5df97a2 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -7,6 +7,7 @@ BACKWARDS INCOMPATIBILITIES: IMPROVEMENTS: * compute: `google_compute_health_check` is autogenerated, exposing the `type` attribute and accepting more import formats. [GH-1941] * container: Addition of create_subnetwork and other fields relevant for Alias IPs [GH-1921] +* logging: Add import support for `google_logging_organization_sink`, `google_logging_folder_sink`, `google_logging_billing_account_sink` [GH-1860] ## 1.17.1 (August 22, 2018) From 30773a784ddfad7ea25034fe8a14ff76be4ccd2c Mon Sep 17 00:00:00 2001 From: Chris Stephens Date: Thu, 30 Aug 2018 13:54:54 -0700 Subject: [PATCH 30/31] Setting a default update_mask for all log sinks [According to log sink documentation](https://cloud.google.com/logging/docs/reference/v2/rest/v2/sinks/update) the api will eventually throw an error if update mask isn't provided. The default specified here is what the api is currently using when an empty mask is passed. --- google/resource_logging_billing_account_sink.go | 4 +++- google/resource_logging_folder_sink.go | 3 ++- google/resource_logging_organization_sink.go | 3 ++- google/resource_logging_project_sink.go | 3 ++- google/resource_logging_sink.go | 3 +++ website/docs/r/logging_project_sink.html.markdown | 5 +++-- 6 files changed, 15 insertions(+), 6 deletions(-) diff --git a/google/resource_logging_billing_account_sink.go b/google/resource_logging_billing_account_sink.go index 60b9d700..ecde951e 100644 --- a/google/resource_logging_billing_account_sink.go +++ b/google/resource_logging_billing_account_sink.go @@ -2,6 +2,7 @@ package google import ( "fmt" + "github.com/hashicorp/terraform/helper/schema" ) @@ -58,7 +59,8 @@ func resourceLoggingBillingAccountSinkUpdate(d *schema.ResourceData, meta interf sink := expandResourceLoggingSinkForUpdate(d) // The API will reject any requests that don't explicitly set 'uniqueWriterIdentity' to true. - _, err := config.clientLogging.BillingAccounts.Sinks.Patch(d.Id(), sink).UniqueWriterIdentity(true).Do() + _, err := config.clientLogging.BillingAccounts.Sinks.Patch(d.Id(), sink). 
+ UpdateMask(defaultLogSinkUpdateMask).UniqueWriterIdentity(true).Do() if err != nil { return err } diff --git a/google/resource_logging_folder_sink.go b/google/resource_logging_folder_sink.go index da34899c..a4ecd3b4 100644 --- a/google/resource_logging_folder_sink.go +++ b/google/resource_logging_folder_sink.go @@ -77,7 +77,8 @@ func resourceLoggingFolderSinkUpdate(d *schema.ResourceData, meta interface{}) e sink.ForceSendFields = append(sink.ForceSendFields, "IncludeChildren") // The API will reject any requests that don't explicitly set 'uniqueWriterIdentity' to true. - _, err := config.clientLogging.Folders.Sinks.Patch(d.Id(), sink).UniqueWriterIdentity(true).Do() + _, err := config.clientLogging.Folders.Sinks.Patch(d.Id(), sink). + UpdateMask(defaultLogSinkUpdateMask).UniqueWriterIdentity(true).Do() if err != nil { return err } diff --git a/google/resource_logging_organization_sink.go b/google/resource_logging_organization_sink.go index 9d539ded..9063345f 100644 --- a/google/resource_logging_organization_sink.go +++ b/google/resource_logging_organization_sink.go @@ -77,7 +77,8 @@ func resourceLoggingOrganizationSinkUpdate(d *schema.ResourceData, meta interfac sink.ForceSendFields = append(sink.ForceSendFields, "IncludeChildren") // The API will reject any requests that don't explicitly set 'uniqueWriterIdentity' to true. - _, err := config.clientLogging.Organizations.Sinks.Patch(d.Id(), sink).UniqueWriterIdentity(true).Do() + _, err := config.clientLogging.Organizations.Sinks.Patch(d.Id(), sink). + UpdateMask(defaultLogSinkUpdateMask).UniqueWriterIdentity(true).Do() if err != nil { return err } diff --git a/google/resource_logging_project_sink.go b/google/resource_logging_project_sink.go index 5e6d03da..501fb899 100644 --- a/google/resource_logging_project_sink.go +++ b/google/resource_logging_project_sink.go @@ -84,7 +84,8 @@ func resourceLoggingProjectSinkUpdate(d *schema.ResourceData, meta interface{}) sink := expandResourceLoggingSinkForUpdate(d) uniqueWriterIdentity := d.Get("unique_writer_identity").(bool) - _, err := config.clientLogging.Projects.Sinks.Patch(d.Id(), sink).UniqueWriterIdentity(uniqueWriterIdentity).Do() + _, err := config.clientLogging.Projects.Sinks.Patch(d.Id(), sink). + UpdateMask(defaultLogSinkUpdateMask).UniqueWriterIdentity(uniqueWriterIdentity).Do() if err != nil { return err } diff --git a/google/resource_logging_sink.go b/google/resource_logging_sink.go index 4367a97f..45cd2bda 100644 --- a/google/resource_logging_sink.go +++ b/google/resource_logging_sink.go @@ -5,6 +5,9 @@ import ( "google.golang.org/api/logging/v2" ) +// Empty update masks will eventually cause updates to fail, currently empty masks default to this string +const defaultLogSinkUpdateMask = "destination,filter,includeChildren" + func resourceLoggingSinkSchema() map[string]*schema.Schema { return map[string]*schema.Schema{ "name": { diff --git a/website/docs/r/logging_project_sink.html.markdown b/website/docs/r/logging_project_sink.html.markdown index 7ab405cc..0ae1fe33 100644 --- a/website/docs/r/logging_project_sink.html.markdown +++ b/website/docs/r/logging_project_sink.html.markdown @@ -14,8 +14,9 @@ Manages a project-level logging sink. For more information see and [API](https://cloud.google.com/logging/docs/reference/v2/rest/). -Note that you must have the "Logs Configuration Writer" IAM role (`roles/logging.configWriter`) -granted to the credentials used with terraform. 
+~> **Note:** You must have [granted the "Logs Configuration Writer"](https://cloud.google.com/logging/docs/access-control) IAM role (`roles/logging.configWriter`) to the credentials used with terraform. + +~> **Note** You must [enable the Cloud Resource Manager API](https://console.cloud.google.com/apis/library/cloudresourcemanager.googleapis.com) ## Example Usage From 62cee9bceab2e908b19cb2d14bfc173616c7a5de Mon Sep 17 00:00:00 2001 From: Chris Stephens Date: Thu, 6 Sep 2018 10:11:13 -0700 Subject: [PATCH 31/31] Update CHANGELOG.md --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index a5df97a2..0b9df611 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -8,6 +8,7 @@ IMPROVEMENTS: * compute: `google_compute_health_check` is autogenerated, exposing the `type` attribute and accepting more import formats. [GH-1941] * container: Addition of create_subnetwork and other fields relevant for Alias IPs [GH-1921] * logging: Add import support for `google_logging_organization_sink`, `google_logging_folder_sink`, `google_logging_billing_account_sink` [GH-1860] +* logging: Sending a default update mask for all logging sinks to prevent future breakages [#991](https://github.com/terraform-providers/terraform-provider-google/issues/991) ## 1.17.1 (August 22, 2018)
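
The import IDs that PATCH 28 documents all share the shape `{{parent_type}}/{{parent_id}}/sinks/{{sink_name}}`, with the parent type drawn from the `loggingSinkResourceTypes` list (`billingAccounts`, `folders`, `organizations`, `projects`). The sketch below only illustrates that ID shape, using a hypothetical `splitSinkImportID` helper; the provider's real parsing lives in `google/logging_utils.go` (`parseLoggingSinkId`) and is not reproduced here.

```go
package main

import (
	"fmt"
	"strings"
)

// Parent resource types a sink import ID may start with, as listed in
// google/logging_utils.go in PATCH 28.
var loggingSinkResourceTypes = []string{"billingAccounts", "folders", "organizations", "projects"}

// splitSinkImportID is a hypothetical helper (not the provider's implementation)
// that splits an ID such as "folders/1234567/sinks/my-sink" into its parent
// type, parent ID, and sink name.
func splitSinkImportID(id string) (parentType, parentID, sinkName string, err error) {
	parts := strings.Split(id, "/")
	if len(parts) != 4 || parts[2] != "sinks" {
		return "", "", "", fmt.Errorf("unexpected sink import id: %q", id)
	}
	for _, t := range loggingSinkResourceTypes {
		if parts[0] == t {
			return parts[0], parts[1], parts[3], nil
		}
	}
	return "", "", "", fmt.Errorf("unknown sink parent type: %q", parts[0])
}

func main() {
	// Prints: organizations 1234567 my-sink <nil>
	fmt.Println(splitSinkImportID("organizations/1234567/sinks/my-sink"))
}
```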