This commit is contained in:
Ralf Waldvogel 2018-09-07 06:26:08 +02:00
commit 90980c1b24
62 changed files with 5103 additions and 2106 deletions

.github/ISSUE_TEMPLATE/bug.md (new file)

@ -0,0 +1,80 @@
---
name: Bug
about: For when something is there, but doesn't work how it should.
---
<!--- Please leave this line, it helps our automation: [issue-type:bug-report] --->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
* If an issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to "hashibot", a community member has claimed the issue already.
<!--- Thank you for keeping this note for the community --->
### Terraform Version
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* google_XXXXX
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```tf
# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key: https://www.hashicorp.com/security
# If reproducing the bug involves modifying the config file (e.g., apply a config,
# change a value, apply the config again, see the bug) then please include both the
# version of the config before the change, and the version of the config after the change.
```
### Debug Output
<!---
Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist.
To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html).
--->
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
### Expected Behavior
<!--- What should have happened? --->
### Actual Behavior
<!--- What actually happened? --->
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform apply`
### Important Factoids
<!--- Is there anything atypical about your accounts that we should know? For example: are you authenticating as a user instead of a service account? --->
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor documentation? For example:
--->
* #0000

.github/ISSUE_TEMPLATE/enhancement.md (new file)

@ -0,0 +1,45 @@
---
name: Enhancement
about: For when something (a resource, field, etc.) is missing, and should be added.
---
<!--- Please leave this line, it helps our automation: [issue-type:enhancement] --->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment. If the issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If the issue is assigned to a user, that user is claiming responsibility for the issue. If the issue is assigned to "hashibot", a community member has claimed the issue already.
<!--- Thank you for keeping this note for the community --->
### Description
<!--- Please leave a helpful description of the feature request here. Including use cases and why it would help you is a great way to convince maintainers to spend time on it. --->
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* google_XXXXX
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```tf
# Propose what you think the configuration to take advantage of this feature should look like.
# We may not use it verbatim, but it's helpful in understanding your intent.
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation?
--->
* #0000

.github/ISSUE_TEMPLATE/question.md (new file)

@ -0,0 +1,18 @@
---
name: Question
about: If you have a question, please check out our community resources!
---
---
Issues on GitHub are intended to be related to bugs or feature requests with the provider codebase,
so we recommend using our other community resources instead of asking here 👍.
---
If you have a support request or question, please submit it to one of these resources:
* [Terraform community resources](https://www.terraform.io/docs/extend/community/index.html)
* [HashiCorp support](https://support.hashicorp.com) (Terraform Enterprise customers)
* [Google Cloud Platform Slack](https://gcp-slack.appspot.com/)

View File

@ -4,7 +4,7 @@ services:
- docker
language: go
go:
- 1.9.1
- "1.11"
install:
# This script is used by the Travis build to install a cookie for

View File

@ -2,6 +2,13 @@
BACKWARDS INCOMPATIBILITIES:
* compute: instance templates used to not set any disks in the template in state unless they were also in the config. It also only stored the image name in state. Both of these were bugs, and have been fixed. They should not cause any disruption. If you were interpolating an image name from a disk in an instance template, you'll need to update your config to strip out everything before the last `/` (an illustrative config sketch follows this list). If you imported an instance template and did not add all the disks in the template to your config, you'll see a diff; add those disks to your config, and it will go away. Those are the only two cases where this change should affect you. We apologise for the inconvenience. [GH-1916]
* provider: This is the first release tested against and built with Go 1.11, which required go fmt changes to the code. If you are building a custom version of this provider or running tests using the repository Make targets (e.g. make build) when using a previous version of Go, you will receive errors. You can use the underlying go commands (e.g. go build) to workaround the go fmt check in the Make targets until you are able to upgrade Go.
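The compute entry above asks affected users to strip everything before the last `/` from an interpolated image value. Below is a minimal, hedged sketch of one way to do that in a pre-0.12 config; the resource name `default` and the `disk.0.source_image` attribute path are illustrative assumptions, not part of the release notes.
```tf
# Hypothetical example: the attribute now holds a full path such as
# "projects/debian-cloud/global/images/debian-9-stretch-v20180814",
# so keep only the segment after the last "/".
output "template_image_name" {
  value = "${element(split("/", google_compute_instance_template.default.disk.0.source_image), length(split("/", google_compute_instance_template.default.disk.0.source_image)) - 1)}"
}
```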
IMPROVEMENTS:
* compute: `google_compute_health_check` is autogenerated, exposing the `type` attribute and accepting more import formats. [GH-1941]
* container: Addition of `create_subnetwork` and other fields relevant for Alias IPs [GH-1921]
* logging: Add import support for `google_logging_organization_sink`, `google_logging_folder_sink`, `google_logging_billing_account_sink` [GH-1860]
* logging: Sending a default update mask for all logging sinks to prevent future breakages [#991](https://github.com/terraform-providers/terraform-provider-google/issues/991)
## 1.17.1 (August 22, 2018)

View File

@ -18,8 +18,8 @@ This provider plugin is maintained by:
Requirements
------------
- [Terraform](https://www.terraform.io/downloads.html) 0.10.x
- [Go](https://golang.org/doc/install) 1.9 (to build the provider plugin)
- [Terraform](https://www.terraform.io/downloads.html) 0.10+
- [Go](https://golang.org/doc/install) 1.11.0 or higher
Building The Provider
---------------------
@ -51,7 +51,7 @@ To upgrade to the latest stable version of the Google provider run `terraform in
Developing the Provider
---------------------------
If you wish to work on the provider, you'll first need [Go](http://www.golang.org) installed on your machine (version 1.9+ is *required*). You'll also need to correctly setup a [GOPATH](http://golang.org/doc/code.html#GOPATH), as well as adding `$GOPATH/bin` to your `$PATH`.
If you wish to work on the provider, you'll first need [Go](http://www.golang.org) installed on your machine (version 1.11+ is *required*). You'll also need to correctly setup a [GOPATH](http://golang.org/doc/code.html#GOPATH), as well as adding `$GOPATH/bin` to your `$PATH`.
To compile the provider, run `make build`. This will build the provider and put the provider binary in the `$GOPATH/bin` directory.

View File

@ -19,6 +19,11 @@ func dataSourceGoogleContainerEngineVersions() *schema.Resource {
Type: schema.TypeString,
Optional: true,
},
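// "region" is new here; it conflicts with "zone", and the read function below requires that a location be resolvable from one of the two or from the provider configuration.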
"region": {
Type: schema.TypeString,
Optional: true,
ConflictsWith: []string{"zone"},
},
"default_cluster_version": {
Type: schema.TypeString,
Computed: true,
@ -53,12 +58,16 @@ func dataSourceGoogleContainerEngineVersionsRead(d *schema.ResourceData, meta in
return err
}
zone, err := getZone(d, meta.(*Config))
location, err := getLocation(d, config)
if err != nil {
return err
}
if len(location) == 0 {
return fmt.Errorf("Cannot determine location: set zone or region in this data source or at provider-level")
}
resp, err := config.clientContainer.Projects.Zones.GetServerconfig(project, zone).Do()
location = fmt.Sprintf("projects/%s/locations/%s", project, location)
resp, err := config.clientContainerBeta.Projects.Locations.GetServerConfig(location).Do()
if err != nil {
return fmt.Errorf("Error retrieving available container cluster versions: %s", err.Error())
}
@ -66,10 +75,13 @@ func dataSourceGoogleContainerEngineVersionsRead(d *schema.ResourceData, meta in
d.Set("valid_master_versions", resp.ValidMasterVersions)
d.Set("default_cluster_version", resp.DefaultClusterVersion)
d.Set("valid_node_versions", resp.ValidNodeVersions)
if len(resp.ValidMasterVersions) > 0 {
d.Set("latest_master_version", resp.ValidMasterVersions[0])
}
if len(resp.ValidNodeVersions) > 0 {
d.Set("latest_node_version", resp.ValidNodeVersions[0])
}
d.SetId(time.Now().UTC().String())
return nil
}

View File

@ -27,6 +27,23 @@ func TestAccContainerEngineVersions_basic(t *testing.T) {
})
}
func TestAccContainerEngineVersions_regional(t *testing.T) {
t.Parallel()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccCheckGoogleContainerEngineVersionsRegionalConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckGoogleContainerEngineVersionsMeta("data.google_container_engine_versions.versions"),
),
},
},
})
}
func testAccCheckGoogleContainerEngineVersionsMeta(n string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
@ -102,3 +119,9 @@ data "google_container_engine_versions" "versions" {
zone = "us-central1-b"
}
`
var testAccCheckGoogleContainerEngineVersionsRegionalConfig = `
data "google_container_engine_versions" "versions" {
region = "us-central1"
}
`

View File

@ -1,61 +0,0 @@
package google
import (
"fmt"
"testing"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
)
// Test importing a first generation database
func TestAccSqlDatabaseInstance_importBasic(t *testing.T) {
t.Parallel()
resourceName := "google_sql_database_instance.instance"
databaseID := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccSqlDatabaseInstanceDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: fmt.Sprintf(
testGoogleSqlDatabaseInstance_basic, databaseID),
},
resource.TestStep{
ResourceName: resourceName,
ImportState: true,
ImportStateVerify: true,
},
},
})
}
// Test importing a second generation database
func TestAccSqlDatabaseInstance_importBasic3(t *testing.T) {
t.Parallel()
resourceName := "google_sql_database_instance.instance"
databaseID := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccSqlDatabaseInstanceDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: fmt.Sprintf(
testGoogleSqlDatabaseInstance_basic3, databaseID),
},
resource.TestStep{
ResourceName: resourceName,
ImportState: true,
ImportStateVerify: true,
},
},
})
}

View File

@ -1,33 +0,0 @@
package google
import (
"fmt"
"testing"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccSqlDatabase_importBasic(t *testing.T) {
t.Parallel()
resourceName := "google_sql_database.database"
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccSqlDatabaseInstanceDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: fmt.Sprintf(
testGoogleSqlDatabase_basic, acctest.RandString(10), acctest.RandString(10)),
},
resource.TestStep{
ResourceName: resourceName,
ImportState: true,
ImportStateVerify: true,
},
},
})
}

View File

@ -7,7 +7,7 @@ import (
// loggingSinkResourceTypes contains all the possible Stackdriver Logging resource types. Used to parse ids safely.
var loggingSinkResourceTypes = []string{
"billingAccount",
"billingAccounts",
"folders",
"organizations",
"projects",

View File

@ -26,6 +26,7 @@ var GeneratedComputeResourcesMap = map[string]*schema.Resource{
"google_compute_global_address": resourceComputeGlobalAddress(),
"google_compute_http_health_check": resourceComputeHttpHealthCheck(),
"google_compute_https_health_check": resourceComputeHttpsHealthCheck(),
"google_compute_health_check": resourceComputeHealthCheck(),
"google_compute_region_autoscaler": resourceComputeRegionAutoscaler(),
"google_compute_region_disk": resourceComputeRegionDisk(),
"google_compute_route": resourceComputeRoute(),

View File

@ -145,6 +145,10 @@ func resourceComputeFirewall() *schema.Resource {
Type: schema.TypeBool,
Optional: true,
},
"enable_logging": {
Type: schema.TypeBool,
Optional: true,
},
"priority": {
Type: schema.TypeInt,
Optional: true,
@ -254,6 +258,12 @@ func resourceComputeFirewallCreate(d *schema.ResourceData, meta interface{}) err
} else if v, ok := d.GetOkExists("disabled"); ok || !reflect.DeepEqual(v, disabledProp) {
obj["disabled"] = disabledProp
}
enableLoggingProp, err := expandComputeFirewallEnableLogging(d.Get("enable_logging"), d, config)
if err != nil {
return err
} else if v, ok := d.GetOkExists("enable_logging"); ok || !reflect.DeepEqual(v, enableLoggingProp) {
obj["enableLogging"] = enableLoggingProp
}
nameProp, err := expandComputeFirewallName(d.Get("name"), d, config)
if err != nil {
return err
@ -380,6 +390,9 @@ func resourceComputeFirewallRead(d *schema.ResourceData, meta interface{}) error
if err := d.Set("disabled", flattenComputeFirewallDisabled(res["disabled"])); err != nil {
return fmt.Errorf("Error reading Firewall: %s", err)
}
if err := d.Set("enable_logging", flattenComputeFirewallEnableLogging(res["enableLogging"])); err != nil {
return fmt.Errorf("Error reading Firewall: %s", err)
}
if err := d.Set("name", flattenComputeFirewallName(res["name"])); err != nil {
return fmt.Errorf("Error reading Firewall: %s", err)
}
@ -458,6 +471,12 @@ func resourceComputeFirewallUpdate(d *schema.ResourceData, meta interface{}) err
} else if v, ok := d.GetOkExists("disabled"); ok || !reflect.DeepEqual(v, disabledProp) {
obj["disabled"] = disabledProp
}
enableLoggingProp, err := expandComputeFirewallEnableLogging(d.Get("enable_logging"), d, config)
if err != nil {
return err
} else if v, ok := d.GetOkExists("enable_logging"); ok || !reflect.DeepEqual(v, enableLoggingProp) {
obj["enableLogging"] = enableLoggingProp
}
nameProp, err := expandComputeFirewallName(d.Get("name"), d, config)
if err != nil {
return err
@ -660,6 +679,10 @@ func flattenComputeFirewallDisabled(v interface{}) interface{} {
return v
}
func flattenComputeFirewallEnableLogging(v interface{}) interface{} {
return v
}
func flattenComputeFirewallName(v interface{}) interface{} {
return v
}
@ -795,6 +818,10 @@ func expandComputeFirewallDisabled(v interface{}, d *schema.ResourceData, config
return v, nil
}
func expandComputeFirewallEnableLogging(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) {
return v, nil
}
func expandComputeFirewallName(v interface{}, d *schema.ResourceData, config *Config) (interface{}, error) {
return v, nil
}

View File

@ -283,6 +283,48 @@ func TestAccComputeFirewall_disabled(t *testing.T) {
})
}
func TestAccComputeFirewall_enableLogging(t *testing.T) {
t.Parallel()
var firewall computeBeta.Firewall
networkName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10))
firewallName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckComputeFirewallDestroy,
Steps: []resource.TestStep{
{
Config: testAccComputeFirewall_enableLogging(networkName, firewallName, false),
Check: resource.ComposeTestCheckFunc(
testAccCheckComputeBetaFirewallExists("google_compute_firewall.foobar", &firewall),
testAccCheckComputeFirewallLoggingEnabled(&firewall, false),
),
},
{
ResourceName: "google_compute_firewall.foobar",
ImportState: true,
ImportStateVerify: true,
},
{
Config: testAccComputeFirewall_enableLogging(networkName, firewallName, true),
Check: resource.ComposeTestCheckFunc(
testAccCheckComputeBetaFirewallExists("google_compute_firewall.foobar", &firewall),
testAccCheckComputeFirewallLoggingEnabled(&firewall, true),
),
},
{
Config: testAccComputeFirewall_enableLogging(networkName, firewallName, false),
Check: resource.ComposeTestCheckFunc(
testAccCheckComputeBetaFirewallExists("google_compute_firewall.foobar", &firewall),
testAccCheckComputeFirewallLoggingEnabled(&firewall, false),
),
},
},
})
}
func testAccCheckComputeFirewallDestroy(s *terraform.State) error {
config := testAccProvider.Meta().(*Config)
@ -330,15 +372,6 @@ func testAccCheckComputeFirewallExists(n string, firewall *compute.Firewall) res
}
}
func testAccCheckComputeFirewallHasPriority(firewall *compute.Firewall, priority int) resource.TestCheckFunc {
return func(s *terraform.State) error {
if firewall.Priority != int64(priority) {
return fmt.Errorf("Priority for firewall does not match: expected %d, found %d", priority, firewall.Priority)
}
return nil
}
}
func testAccCheckComputeBetaFirewallExists(n string, firewall *computeBeta.Firewall) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
@ -368,6 +401,15 @@ func testAccCheckComputeBetaFirewallExists(n string, firewall *computeBeta.Firew
}
}
func testAccCheckComputeFirewallHasPriority(firewall *compute.Firewall, priority int) resource.TestCheckFunc {
return func(s *terraform.State) error {
if firewall.Priority != int64(priority) {
return fmt.Errorf("Priority for firewall does not match: expected %d, found %d", priority, firewall.Priority)
}
return nil
}
}
func testAccCheckComputeFirewallPorts(
firewall *compute.Firewall, ports string) resource.TestCheckFunc {
return func(s *terraform.State) error {
@ -444,6 +486,15 @@ func testAccCheckComputeFirewallApiVersion(firewall *compute.Firewall) resource.
}
}
func testAccCheckComputeFirewallLoggingEnabled(firewall *computeBeta.Firewall, enabled bool) resource.TestCheckFunc {
return func(s *terraform.State) error {
if firewall == nil || firewall.EnableLogging != enabled {
return fmt.Errorf("expected firewall enable_logging to be %t, got %t", enabled, firewall.EnableLogging)
}
return nil
}
}
func testAccComputeFirewall_basic(network, firewall string) string {
return fmt.Sprintf(`
resource "google_compute_network" "foobar" {
@ -618,3 +669,29 @@ func testAccComputeFirewall_disabled(network, firewall string) string {
disabled = true
}`, network, firewall)
}
func testAccComputeFirewall_enableLogging(network, firewall string, enableLogging bool) string {
enableLoggingCfg := ""
if enableLogging {
enableLoggingCfg = "enable_logging = true"
}
return fmt.Sprintf(`
resource "google_compute_network" "foobar" {
name = "%s"
auto_create_subnetworks = false
ipv4_range = "10.0.0.0/16"
}
resource "google_compute_firewall" "foobar" {
name = "firewall-test-%s"
description = "Resource created for Terraform acceptance testing"
network = "${google_compute_network.foobar.name}"
source_tags = ["foo"]
allow {
protocol = "icmp"
}
%s
}`, network, firewall, enableLoggingCfg)
}

File diff suppressed because it is too large.

View File

@ -1379,7 +1379,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err
if d.HasChange("service_account.0.email") || scopesChange {
sa := d.Get("service_account").([]interface{})
req := &compute.InstancesSetServiceAccountRequest{ForceSendFields: []string{"email"}}
if len(sa) > 0 {
if len(sa) > 0 && sa[0] != nil {
saMap := sa[0].(map[string]interface{})
req.Email = saMap["email"].(string)
req.Scopes = canonicalizeServiceScopes(convertStringSet(saMap["scopes"].(*schema.Set)))

View File

@ -42,6 +42,10 @@ var (
},
},
}
ipAllocationSubnetFields = []string{"ip_allocation_policy.0.create_subnetwork", "ip_allocation_policy.0.subnetwork_name"}
ipAllocationCidrBlockFields = []string{"ip_allocation_policy.0.cluster_ipv4_cidr_block", "ip_allocation_policy.0.services_ipv4_cidr_block"}
ipAllocationRangeFields = []string{"ip_allocation_policy.0.cluster_secondary_range_name", "ip_allocation_policy.0.services_secondary_range_name"}
)
func resourceContainerCluster() *schema.Resource {
@ -433,15 +437,52 @@ func resourceContainerCluster() *schema.Resource {
MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"cluster_secondary_range_name": {
// GKE creates subnetwork automatically
"create_subnetwork": {
Type: schema.TypeBool,
Optional: true,
ForceNew: true,
ConflictsWith: append(ipAllocationCidrBlockFields, ipAllocationRangeFields...),
},
"subnetwork_name": {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
ConflictsWith: append(ipAllocationCidrBlockFields, ipAllocationRangeFields...),
},
// GKE creates/deletes secondary ranges in VPC
"cluster_ipv4_cidr_block": {
Type: schema.TypeString,
Optional: true,
Computed: true,
ForceNew: true,
ConflictsWith: append(ipAllocationSubnetFields, ipAllocationRangeFields...),
DiffSuppressFunc: cidrOrSizeDiffSuppress,
},
"services_ipv4_cidr_block": {
Type: schema.TypeString,
Optional: true,
Computed: true,
ForceNew: true,
ConflictsWith: append(ipAllocationSubnetFields, ipAllocationRangeFields...),
DiffSuppressFunc: cidrOrSizeDiffSuppress,
},
// User manages secondary ranges manually
"cluster_secondary_range_name": {
Type: schema.TypeString,
Optional: true,
Computed: true,
ForceNew: true,
ConflictsWith: append(ipAllocationSubnetFields, ipAllocationCidrBlockFields...),
},
"services_secondary_range_name": {
Type: schema.TypeString,
Optional: true,
Computed: true,
ForceNew: true,
ConflictsWith: append(ipAllocationSubnetFields, ipAllocationCidrBlockFields...),
},
},
},
@ -475,6 +516,11 @@ func resourceContainerCluster() *schema.Resource {
}
}
func cidrOrSizeDiffSuppress(k, old, new string, d *schema.ResourceData) bool {
// If the user specified a size and the API returned a full cidr block, suppress.
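// For example (illustrative values): an API-returned old value of "10.0.0.0/16" against a configured new value of "/16" is treated as equal.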
return strings.HasPrefix(new, "/") && strings.HasSuffix(old, new)
}
func resourceContainerClusterCreate(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
@ -1409,24 +1455,24 @@ func expandClusterAddonsConfig(configured interface{}) *containerBeta.AddonsConf
}
func expandIPAllocationPolicy(configured interface{}) (*containerBeta.IPAllocationPolicy, error) {
ap := &containerBeta.IPAllocationPolicy{}
l := configured.([]interface{})
if len(l) > 0 {
if config, ok := l[0].(map[string]interface{}); ok {
ap.UseIpAliases = true
if v, ok := config["cluster_secondary_range_name"]; ok {
ap.ClusterSecondaryRangeName = v.(string)
if len(l) == 0 {
return &containerBeta.IPAllocationPolicy{}, nil
}
config := l[0].(map[string]interface{})
if v, ok := config["services_secondary_range_name"]; ok {
ap.ServicesSecondaryRangeName = v.(string)
}
} else {
return nil, fmt.Errorf("clusters using IP aliases must specify secondary ranges")
}
}
return &containerBeta.IPAllocationPolicy{
UseIpAliases: true,
return ap, nil
CreateSubnetwork: config["create_subnetwork"].(bool),
SubnetworkName: config["subnetwork_name"].(string),
ClusterIpv4CidrBlock: config["cluster_ipv4_cidr_block"].(string),
ServicesIpv4CidrBlock: config["services_ipv4_cidr_block"].(string),
ClusterSecondaryRangeName: config["cluster_secondary_range_name"].(string),
ServicesSecondaryRangeName: config["services_secondary_range_name"].(string),
}, nil
}
func expandMaintenancePolicy(configured interface{}) *containerBeta.MaintenancePolicy {
@ -1583,6 +1629,12 @@ func flattenIPAllocationPolicy(c *containerBeta.IPAllocationPolicy) []map[string
}
return []map[string]interface{}{
{
"create_subnetwork": c.CreateSubnetwork,
"subnetwork_name": c.SubnetworkName,
"cluster_ipv4_cidr_block": c.ClusterIpv4CidrBlock,
"services_ipv4_cidr_block": c.ServicesIpv4CidrBlock,
"cluster_secondary_range_name": c.ClusterSecondaryRangeName,
"services_secondary_range_name": c.ServicesSecondaryRangeName,
},

View File

@ -1093,7 +1093,7 @@ func TestAccContainerCluster_withMaintenanceWindow(t *testing.T) {
})
}
func TestAccContainerCluster_withIPAllocationPolicy(t *testing.T) {
func TestAccContainerCluster_withIPAllocationPolicy_existingSecondaryRanges(t *testing.T) {
t.Parallel()
cluster := fmt.Sprintf("cluster-test-%s", acctest.RandString(10))
@ -1103,23 +1103,7 @@ func TestAccContainerCluster_withIPAllocationPolicy(t *testing.T) {
CheckDestroy: testAccCheckContainerClusterDestroy,
Steps: []resource.TestStep{
{
Config: testAccContainerCluster_withIPAllocationPolicy(
cluster,
map[string]string{
"pods": "10.1.0.0/16",
"services": "10.2.0.0/20",
},
map[string]string{
"cluster_secondary_range_name": "pods",
"services_secondary_range_name": "services",
},
),
Check: resource.ComposeTestCheckFunc(
resource.TestCheckResourceAttr("google_container_cluster.with_ip_allocation_policy",
"ip_allocation_policy.0.cluster_secondary_range_name", "pods"),
resource.TestCheckResourceAttr("google_container_cluster.with_ip_allocation_policy",
"ip_allocation_policy.0.services_secondary_range_name", "services"),
),
Config: testAccContainerCluster_withIPAllocationPolicy_existingSecondaryRanges(cluster),
},
{
ResourceName: "google_container_cluster.with_ip_allocation_policy",
@ -1127,29 +1111,71 @@ func TestAccContainerCluster_withIPAllocationPolicy(t *testing.T) {
ImportState: true,
ImportStateVerify: true,
},
{
Config: testAccContainerCluster_withIPAllocationPolicy(
cluster,
map[string]string{
"pods": "10.1.0.0/16",
"services": "10.2.0.0/20",
},
map[string]string{},
),
ExpectError: regexp.MustCompile("clusters using IP aliases must specify secondary ranges"),
})
}
func TestAccContainerCluster_withIPAllocationPolicy_specificIPRanges(t *testing.T) {
t.Parallel()
cluster := fmt.Sprintf("cluster-test-%s", acctest.RandString(10))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckContainerClusterDestroy,
Steps: []resource.TestStep{
{
Config: testAccContainerCluster_withIPAllocationPolicy_specificIPRanges(cluster),
},
{
Config: testAccContainerCluster_withIPAllocationPolicy(
cluster,
map[string]string{
"pods": "10.1.0.0/16",
ResourceName: "google_container_cluster.with_ip_allocation_policy",
ImportStateIdPrefix: "us-central1-a/",
ImportState: true,
ImportStateVerify: true,
},
map[string]string{
"cluster_secondary_range_name": "pods",
"services_secondary_range_name": "services",
},
),
ExpectError: regexp.MustCompile("secondary range \"services\" does not exist in network"),
})
}
func TestAccContainerCluster_withIPAllocationPolicy_specificSizes(t *testing.T) {
t.Parallel()
cluster := fmt.Sprintf("cluster-test-%s", acctest.RandString(10))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckContainerClusterDestroy,
Steps: []resource.TestStep{
{
Config: testAccContainerCluster_withIPAllocationPolicy_specificSizes(cluster),
},
{
ResourceName: "google_container_cluster.with_ip_allocation_policy",
ImportStateIdPrefix: "us-central1-a/",
ImportState: true,
ImportStateVerify: true,
},
},
})
}
func TestAccContainerCluster_withIPAllocationPolicy_createSubnetwork(t *testing.T) {
t.Parallel()
cluster := fmt.Sprintf("cluster-test-%s", acctest.RandString(10))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckContainerClusterDestroy,
Steps: []resource.TestStep{
{
Config: testAccContainerCluster_withIPAllocationPolicy_createSubnetwork(cluster),
},
{
ResourceName: "google_container_cluster.with_ip_allocation_policy",
ImportStateIdPrefix: "us-central1-a/",
ImportState: true,
ImportStateVerify: true,
},
},
})
@ -2233,23 +2259,7 @@ resource "google_container_cluster" "with_maintenance_window" {
}`, clusterName, maintenancePolicy)
}
func testAccContainerCluster_withIPAllocationPolicy(cluster string, ranges, policy map[string]string) string {
var secondaryRanges bytes.Buffer
for rangeName, cidr := range ranges {
secondaryRanges.WriteString(fmt.Sprintf(`
secondary_ip_range {
range_name = "%s"
ip_cidr_range = "%s"
}`, rangeName, cidr))
}
var ipAllocationPolicy bytes.Buffer
for key, value := range policy {
ipAllocationPolicy.WriteString(fmt.Sprintf(`
%s = "%s"`, key, value))
}
func testAccContainerCluster_withIPAllocationPolicy_existingSecondaryRanges(cluster string) string {
return fmt.Sprintf(`
resource "google_compute_network" "container_network" {
name = "container-net-%s"
@ -2262,7 +2272,14 @@ resource "google_compute_subnetwork" "container_subnetwork" {
ip_cidr_range = "10.0.0.0/24"
region = "us-central1"
%s
secondary_ip_range {
range_name = "pods"
ip_cidr_range = "10.1.0.0/16"
}
secondary_ip_range {
range_name = "services"
ip_cidr_range = "10.2.0.0/20"
}
}
resource "google_container_cluster" "with_ip_allocation_policy" {
@ -2274,9 +2291,66 @@ resource "google_container_cluster" "with_ip_allocation_policy" {
initial_node_count = 1
ip_allocation_policy {
%s
cluster_secondary_range_name = "pods"
services_secondary_range_name = "services"
}
}`, acctest.RandString(10), secondaryRanges.String(), cluster, ipAllocationPolicy.String())
}`, cluster, cluster)
}
func testAccContainerCluster_withIPAllocationPolicy_specificIPRanges(cluster string) string {
return fmt.Sprintf(`
resource "google_container_cluster" "with_ip_allocation_policy" {
name = "%s"
zone = "us-central1-a"
initial_node_count = 1
ip_allocation_policy {
cluster_ipv4_cidr_block = "10.90.0.0/19"
services_ipv4_cidr_block = "10.40.0.0/19"
}
}`, cluster)
}
func testAccContainerCluster_withIPAllocationPolicy_specificSizes(cluster string) string {
return fmt.Sprintf(`
resource "google_compute_network" "container_network" {
name = "container-net-%s"
auto_create_subnetworks = false
}
resource "google_compute_subnetwork" "container_subnetwork" {
name = "${google_compute_network.container_network.name}"
network = "${google_compute_network.container_network.name}"
ip_cidr_range = "10.0.0.0/24"
region = "us-central1"
}
resource "google_container_cluster" "with_ip_allocation_policy" {
name = "%s"
zone = "us-central1-a"
network = "${google_compute_network.container_network.name}"
subnetwork = "${google_compute_subnetwork.container_subnetwork.name}"
initial_node_count = 1
ip_allocation_policy {
cluster_ipv4_cidr_block = "/16"
services_ipv4_cidr_block = "/22"
}
}`, cluster, cluster)
}
func testAccContainerCluster_withIPAllocationPolicy_createSubnetwork(cluster string) string {
return fmt.Sprintf(`
resource "google_container_cluster" "with_ip_allocation_policy" {
name = "%s"
zone = "us-central1-a"
initial_node_count = 1
ip_allocation_policy {
create_subnetwork = true
}
}`, cluster)
}
func testAccContainerCluster_withPodSecurityPolicy(clusterName string, enabled bool) string {

View File

@ -2,6 +2,7 @@ package google
import (
"fmt"
"github.com/hashicorp/terraform/helper/schema"
)
@ -12,6 +13,9 @@ func resourceLoggingBillingAccountSink() *schema.Resource {
Delete: resourceLoggingBillingAccountSinkDelete,
Update: resourceLoggingBillingAccountSinkUpdate,
Schema: resourceLoggingSinkSchema(),
Importer: &schema.ResourceImporter{
State: resourceLoggingSinkImportState("billing_account"),
},
}
schm.Schema["billing_account"] = &schema.Schema{
Type: schema.TypeString,
@ -55,7 +59,8 @@ func resourceLoggingBillingAccountSinkUpdate(d *schema.ResourceData, meta interf
sink := expandResourceLoggingSinkForUpdate(d)
// The API will reject any requests that don't explicitly set 'uniqueWriterIdentity' to true.
_, err := config.clientLogging.BillingAccounts.Sinks.Patch(d.Id(), sink).UniqueWriterIdentity(true).Do()
_, err := config.clientLogging.BillingAccounts.Sinks.Patch(d.Id(), sink).
UpdateMask(defaultLogSinkUpdateMask).UniqueWriterIdentity(true).Do()
if err != nil {
return err
}

View File

@ -30,6 +30,10 @@ func TestAccLoggingBillingAccountSink_basic(t *testing.T) {
testAccCheckLoggingBillingAccountSinkExists("google_logging_billing_account_sink.basic", &sink),
testAccCheckLoggingBillingAccountSink(&sink, "google_logging_billing_account_sink.basic"),
),
}, {
ResourceName: "google_logging_billing_account_sink.basic",
ImportState: true,
ImportStateVerify: true,
},
},
})
@ -62,6 +66,10 @@ func TestAccLoggingBillingAccountSink_update(t *testing.T) {
testAccCheckLoggingBillingAccountSinkExists("google_logging_billing_account_sink.update", &sinkAfter),
testAccCheckLoggingBillingAccountSink(&sinkAfter, "google_logging_billing_account_sink.update"),
),
}, {
ResourceName: "google_logging_billing_account_sink.update",
ImportState: true,
ImportStateVerify: true,
},
},
})
@ -96,6 +104,10 @@ func TestAccLoggingBillingAccountSink_heredoc(t *testing.T) {
testAccCheckLoggingBillingAccountSinkExists("google_logging_billing_account_sink.heredoc", &sink),
testAccCheckLoggingBillingAccountSink(&sink, "google_logging_billing_account_sink.heredoc"),
),
}, {
ResourceName: "google_logging_billing_account_sink.heredoc",
ImportState: true,
ImportStateVerify: true,
},
},
})

View File

@ -2,6 +2,7 @@ package google
import (
"fmt"
"strings"
"github.com/hashicorp/terraform/helper/schema"
)
@ -13,12 +14,17 @@ func resourceLoggingFolderSink() *schema.Resource {
Delete: resourceLoggingFolderSinkDelete,
Update: resourceLoggingFolderSinkUpdate,
Schema: resourceLoggingSinkSchema(),
Importer: &schema.ResourceImporter{
State: resourceLoggingSinkImportState("folder"),
},
}
schm.Schema["folder"] = &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
DiffSuppressFunc: optionalPrefixSuppress("folders/"),
StateFunc: func(v interface{}) string {
return strings.Replace(v.(string), "folders/", "", 1)
},
}
schm.Schema["include_children"] = &schema.Schema{
Type: schema.TypeBool,
@ -71,7 +77,8 @@ func resourceLoggingFolderSinkUpdate(d *schema.ResourceData, meta interface{}) e
sink.ForceSendFields = append(sink.ForceSendFields, "IncludeChildren")
// The API will reject any requests that don't explicitly set 'uniqueWriterIdentity' to true.
_, err := config.clientLogging.Folders.Sinks.Patch(d.Id(), sink).UniqueWriterIdentity(true).Do()
_, err := config.clientLogging.Folders.Sinks.Patch(d.Id(), sink).
UpdateMask(defaultLogSinkUpdateMask).UniqueWriterIdentity(true).Do()
if err != nil {
return err
}

View File

@ -32,6 +32,10 @@ func TestAccLoggingFolderSink_basic(t *testing.T) {
testAccCheckLoggingFolderSinkExists("google_logging_folder_sink.basic", &sink),
testAccCheckLoggingFolderSink(&sink, "google_logging_folder_sink.basic"),
),
}, {
ResourceName: "google_logging_folder_sink.basic",
ImportState: true,
ImportStateVerify: true,
},
},
})
@ -58,6 +62,10 @@ func TestAccLoggingFolderSink_folderAcceptsFullFolderPath(t *testing.T) {
testAccCheckLoggingFolderSinkExists("google_logging_folder_sink.basic", &sink),
testAccCheckLoggingFolderSink(&sink, "google_logging_folder_sink.basic"),
),
}, {
ResourceName: "google_logging_folder_sink.basic",
ImportState: true,
ImportStateVerify: true,
},
},
})
@ -92,6 +100,10 @@ func TestAccLoggingFolderSink_update(t *testing.T) {
testAccCheckLoggingFolderSinkExists("google_logging_folder_sink.basic", &sinkAfter),
testAccCheckLoggingFolderSink(&sinkAfter, "google_logging_folder_sink.basic"),
),
}, {
ResourceName: "google_logging_folder_sink.basic",
ImportState: true,
ImportStateVerify: true,
},
},
})
@ -127,6 +139,10 @@ func TestAccLoggingFolderSink_heredoc(t *testing.T) {
testAccCheckLoggingFolderSinkExists("google_logging_folder_sink.heredoc", &sink),
testAccCheckLoggingFolderSink(&sink, "google_logging_folder_sink.heredoc"),
),
}, {
ResourceName: "google_logging_folder_sink.heredoc",
ImportState: true,
ImportStateVerify: true,
},
},
})

View File

@ -2,6 +2,8 @@ package google
import (
"fmt"
"strings"
"github.com/hashicorp/terraform/helper/schema"
)
@ -12,11 +14,16 @@ func resourceLoggingOrganizationSink() *schema.Resource {
Delete: resourceLoggingOrganizationSinkDelete,
Update: resourceLoggingOrganizationSinkUpdate,
Schema: resourceLoggingSinkSchema(),
Importer: &schema.ResourceImporter{
State: resourceLoggingSinkImportState("org_id"),
},
}
schm.Schema["org_id"] = &schema.Schema{
Type: schema.TypeString,
Required: true,
DiffSuppressFunc: optionalPrefixSuppress("organizations/"),
StateFunc: func(v interface{}) string {
return strings.Replace(v.(string), "organizations/", "", 1)
},
}
schm.Schema["include_children"] = &schema.Schema{
Type: schema.TypeBool,
@ -70,7 +77,8 @@ func resourceLoggingOrganizationSinkUpdate(d *schema.ResourceData, meta interfac
sink.ForceSendFields = append(sink.ForceSendFields, "IncludeChildren")
// The API will reject any requests that don't explicitly set 'uniqueWriterIdentity' to true.
_, err := config.clientLogging.Organizations.Sinks.Patch(d.Id(), sink).UniqueWriterIdentity(true).Do()
_, err := config.clientLogging.Organizations.Sinks.Patch(d.Id(), sink).
UpdateMask(defaultLogSinkUpdateMask).UniqueWriterIdentity(true).Do()
if err != nil {
return err
}

View File

@ -31,6 +31,10 @@ func TestAccLoggingOrganizationSink_basic(t *testing.T) {
testAccCheckLoggingOrganizationSinkExists("google_logging_organization_sink.basic", &sink),
testAccCheckLoggingOrganizationSink(&sink, "google_logging_organization_sink.basic"),
),
}, {
ResourceName: "google_logging_organization_sink.basic",
ImportState: true,
ImportStateVerify: true,
},
},
})
@ -63,6 +67,10 @@ func TestAccLoggingOrganizationSink_update(t *testing.T) {
testAccCheckLoggingOrganizationSinkExists("google_logging_organization_sink.update", &sinkAfter),
testAccCheckLoggingOrganizationSink(&sinkAfter, "google_logging_organization_sink.update"),
),
}, {
ResourceName: "google_logging_organization_sink.update",
ImportState: true,
ImportStateVerify: true,
},
},
})
@ -97,6 +105,10 @@ func TestAccLoggingOrganizationSink_heredoc(t *testing.T) {
testAccCheckLoggingOrganizationSinkExists("google_logging_organization_sink.heredoc", &sink),
testAccCheckLoggingOrganizationSink(&sink, "google_logging_organization_sink.heredoc"),
),
}, {
ResourceName: "google_logging_organization_sink.heredoc",
ImportState: true,
ImportStateVerify: true,
},
},
})

View File

@ -16,7 +16,7 @@ func resourceLoggingProjectSink() *schema.Resource {
Update: resourceLoggingProjectSinkUpdate,
Schema: resourceLoggingSinkSchema(),
Importer: &schema.ResourceImporter{
State: resourceLoggingProjectSinkImportState,
State: resourceLoggingSinkImportState("project"),
},
}
schm.Schema["project"] = &schema.Schema{
@ -84,7 +84,8 @@ func resourceLoggingProjectSinkUpdate(d *schema.ResourceData, meta interface{})
sink := expandResourceLoggingSinkForUpdate(d)
uniqueWriterIdentity := d.Get("unique_writer_identity").(bool)
_, err := config.clientLogging.Projects.Sinks.Patch(d.Id(), sink).UniqueWriterIdentity(uniqueWriterIdentity).Do()
_, err := config.clientLogging.Projects.Sinks.Patch(d.Id(), sink).
UpdateMask(defaultLogSinkUpdateMask).UniqueWriterIdentity(uniqueWriterIdentity).Do()
if err != nil {
return err
}
@ -103,18 +104,3 @@ func resourceLoggingProjectSinkDelete(d *schema.ResourceData, meta interface{})
d.SetId("")
return nil
}
func resourceLoggingProjectSinkImportState(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
config := meta.(*Config)
loggingSinkId, err := parseLoggingSinkId(d.Id())
if err != nil {
return nil, err
}
if config.Project != loggingSinkId.resourceId {
d.Set("project", loggingSinkId.resourceId)
}
return []*schema.ResourceData{d}, nil
}

View File

@ -5,6 +5,9 @@ import (
"google.golang.org/api/logging/v2"
)
// Empty update masks will eventually cause updates to fail, currently empty masks default to this string
const defaultLogSinkUpdateMask = "destination,filter,includeChildren"
func resourceLoggingSinkSchema() map[string]*schema.Schema {
return map[string]*schema.Schema{
"name": {
@ -69,3 +72,16 @@ func expandResourceLoggingSinkForUpdate(d *schema.ResourceData) *logging.LogSink
}
return &sink
}
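// resourceLoggingSinkImportState returns an import StateFunc that parses the sink ID and writes the parent resource ID back into the given schema field (e.g. "project", "folder", "org_id", or "billing_account", matching the callers in this change).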
func resourceLoggingSinkImportState(sinkType string) schema.StateFunc {
return func(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
loggingSinkId, err := parseLoggingSinkId(d.Id())
if err != nil {
return nil, err
}
d.Set(sinkType, loggingSinkId.resourceId)
return []*schema.ResourceData{d}, nil
}
}

View File

@ -17,7 +17,7 @@ func resourceSqlDatabase() *schema.Resource {
Update: resourceSqlDatabaseUpdate,
Delete: resourceSqlDatabaseDelete,
Importer: &schema.ResourceImporter{
State: schema.ImportStatePassthrough,
State: resourceSqlDatabaseImport,
},
Schema: map[string]*schema.Schema{
@ -211,3 +211,23 @@ func resourceSqlDatabaseDelete(d *schema.ResourceData, meta interface{}) error {
return nil
}
func resourceSqlDatabaseImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
config := meta.(*Config)
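// Accept several import ID formats, e.g. (illustrative values) "my-instance:my-db", "my-project/my-instance/my-db", or the full "projects/{project}/instances/{instance}/databases/{name}" path.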
parseImportId([]string{
"projects/(?P<project>[^/]+)/instances/(?P<instance>[^/]+)/databases/(?P<name>[^/]+)",
"instances/(?P<instance>[^/]+)/databases/(?P<name>[^/]+)",
"(?P<project>[^/]+)/(?P<instance>[^/]+)/(?P<name>[^/]+)",
"(?P<instance>[^/]+)/(?P<name>[^/]+)",
"(?P<instance>[^/]+):(?P<name>[^/]+)",
}, d, config)
// Replace import id for the resource id
id, err := replaceVars(d, config, "{{instance}}:{{name}}")
if err != nil {
return nil, fmt.Errorf("Error constructing id: %s", err)
}
d.SetId(id)
return []*schema.ResourceData{d}, nil
}

View File

@ -40,7 +40,7 @@ func resourceSqlDatabaseInstance() *schema.Resource {
Update: resourceSqlDatabaseInstanceUpdate,
Delete: resourceSqlDatabaseInstanceDelete,
Importer: &schema.ResourceImporter{
State: schema.ImportStatePassthrough,
State: resourceSqlDatabaseInstanceImport,
},
Timeouts: &schema.ResourceTimeout{
@ -1105,6 +1105,23 @@ func resourceSqlDatabaseInstanceDelete(d *schema.ResourceData, meta interface{})
return nil
}
func resourceSqlDatabaseInstanceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
config := meta.(*Config)
parseImportId([]string{
"projects/(?P<project>[^/]+)/instances/(?P<name>[^/]+)",
"(?P<project>[^/]+)/(?P<name>[^/]+)",
"(?P<name>[^/]+)"}, d, config)
// Replace import id for the resource id
id, err := replaceVars(d, config, "{{name}}")
if err != nil {
return nil, fmt.Errorf("Error constructing id: %s", err)
}
d.SetId(id)
return []*schema.ResourceData{d}, nil
}
func flattenSettings(settings *sqladmin.Settings) []map[string]interface{} {
data := map[string]interface{}{
"version": settings.SettingsVersion,

View File

@ -154,11 +154,13 @@ func testSweepDatabases(region string) error {
return nil
}
func TestAccSqlDatabaseInstance_basic(t *testing.T) {
func TestAccSqlDatabaseInstance_basicFirstGen(t *testing.T) {
t.Parallel()
var instance sqladmin.DatabaseInstance
databaseID := acctest.RandInt()
instanceID := acctest.RandInt()
instanceName := fmt.Sprintf("tf-lw-%d", instanceID)
resourceName := "google_sql_database_instance.instance"
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
@ -167,19 +169,34 @@ func TestAccSqlDatabaseInstance_basic(t *testing.T) {
Steps: []resource.TestStep{
resource.TestStep{
Config: fmt.Sprintf(
testGoogleSqlDatabaseInstance_basic, databaseID),
testGoogleSqlDatabaseInstance_basic, instanceID),
Check: resource.ComposeTestCheckFunc(
testAccCheckGoogleSqlDatabaseInstanceExists(
"google_sql_database_instance.instance", &instance),
testAccCheckGoogleSqlDatabaseInstanceEquals(
"google_sql_database_instance.instance", &instance),
testAccCheckGoogleSqlDatabaseInstanceExists(resourceName, &instance),
testAccCheckGoogleSqlDatabaseInstanceEquals(resourceName, &instance),
),
},
resource.TestStep{
ResourceName: resourceName,
ImportState: true,
ImportStateVerify: true,
},
resource.TestStep{
ResourceName: resourceName,
ImportStateId: fmt.Sprintf("projects/%s/instances/%s", getTestProjectFromEnv(), instanceName),
ImportState: true,
ImportStateVerify: true,
},
resource.TestStep{
ResourceName: resourceName,
ImportStateId: fmt.Sprintf("%s/%s", getTestProjectFromEnv(), instanceName),
ImportState: true,
ImportStateVerify: true,
},
},
})
}
func TestAccSqlDatabaseInstance_basic2(t *testing.T) {
func TestAccSqlDatabaseInstance_basicInferredName(t *testing.T) {
t.Parallel()
var instance sqladmin.DatabaseInstance
@ -202,7 +219,7 @@ func TestAccSqlDatabaseInstance_basic2(t *testing.T) {
})
}
func TestAccSqlDatabaseInstance_basic3(t *testing.T) {
func TestAccSqlDatabaseInstance_basicSecondGen(t *testing.T) {
t.Parallel()
var instance sqladmin.DatabaseInstance

View File

@ -16,21 +16,52 @@ func TestAccSqlDatabase_basic(t *testing.T) {
var database sqladmin.Database
resourceName := "google_sql_database.database"
instanceName := fmt.Sprintf("sqldatabasetest%s", acctest.RandString(10))
dbName := fmt.Sprintf("sqldatabasetest%s", acctest.RandString(10))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccSqlDatabaseDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: fmt.Sprintf(
testGoogleSqlDatabase_basic, acctest.RandString(10), acctest.RandString(10)),
Config: fmt.Sprintf(testGoogleSqlDatabase_basic, instanceName, dbName),
Check: resource.ComposeTestCheckFunc(
testAccCheckGoogleSqlDatabaseExists(
"google_sql_database.database", &database),
testAccCheckGoogleSqlDatabaseEquals(
"google_sql_database.database", &database),
testAccCheckGoogleSqlDatabaseExists(resourceName, &database),
testAccCheckGoogleSqlDatabaseEquals(resourceName, &database),
),
},
resource.TestStep{
ResourceName: resourceName,
ImportState: true,
ImportStateVerify: true,
},
resource.TestStep{
ResourceName: resourceName,
ImportStateId: fmt.Sprintf("%s/%s", instanceName, dbName),
ImportState: true,
ImportStateVerify: true,
},
resource.TestStep{
ResourceName: resourceName,
ImportStateId: fmt.Sprintf("instances/%s/databases/%s", instanceName, dbName),
ImportState: true,
ImportStateVerify: true,
},
resource.TestStep{
ResourceName: resourceName,
ImportStateId: fmt.Sprintf("%s/%s/%s", getTestProjectFromEnv(), instanceName, dbName),
ImportState: true,
ImportStateVerify: true,
},
resource.TestStep{
ResourceName: resourceName,
ImportStateId: fmt.Sprintf("projects/%s/instances/%s/databases/%s", getTestProjectFromEnv(), instanceName, dbName),
ImportState: true,
ImportStateVerify: true,
},
},
})
}
@ -151,7 +182,7 @@ func testAccSqlDatabaseDestroy(s *terraform.State) error {
var testGoogleSqlDatabase_basic = `
resource "google_sql_database_instance" "instance" {
name = "sqldatabasetest%s"
name = "%s"
region = "us-central"
settings {
tier = "D0"
@ -159,13 +190,13 @@ resource "google_sql_database_instance" "instance" {
}
resource "google_sql_database" "database" {
name = "sqldatabasetest%s"
name = "%s"
instance = "${google_sql_database_instance.instance.name}"
}
`
var testGoogleSqlDatabase_latin1 = `
resource "google_sql_database_instance" "instance" {
name = "sqldatabasetest%s"
name = "%s"
region = "us-central"
settings {
tier = "D0"
@ -173,7 +204,7 @@ resource "google_sql_database_instance" "instance" {
}
resource "google_sql_database" "database" {
name = "sqldatabasetest%s"
name = "%s"
instance = "${google_sql_database_instance.instance.name}"
charset = "latin1"
collation = "latin1_swedish_ci"

View File

@ -14,7 +14,7 @@ func resourceProjectUsageBucket() *schema.Resource {
Read: resourceProjectUsageBucketRead,
Delete: resourceProjectUsageBucketDelete,
Importer: &schema.ResourceImporter{
State: schema.ImportStatePassthrough,
State: resourceProjectUsageBucketImportState,
},
Schema: map[string]*schema.Schema{
@ -40,7 +40,11 @@ func resourceProjectUsageBucket() *schema.Resource {
func resourceProjectUsageBucketRead(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
project := d.Id()
project, err := getProject(d, config)
if err != nil {
return err
}
p, err := config.clientCompute.Projects.Get(project).Do()
if err != nil {
@ -60,6 +64,7 @@ func resourceProjectUsageBucketRead(d *schema.ResourceData, meta interface{}) er
func resourceProjectUsageBucketCreate(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
project, err := getProject(d, config)
if err != nil {
return err
@ -86,14 +91,19 @@ func resourceProjectUsageBucketCreate(d *schema.ResourceData, meta interface{})
func resourceProjectUsageBucketDelete(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
project := d.Id()
project, err := getProject(d, config)
if err != nil {
return err
}
op, err := config.clientCompute.Projects.SetUsageExportBucket(project, nil).Do()
if err != nil {
return err
}
d.SetId(project)
err = computeOperationWait(config.clientCompute, op, project, "Setting usage export bucket.")
err = computeOperationWait(config.clientCompute, op, project,
"Setting usage export bucket to nil, automatically disabling usage export.")
if err != nil {
return err
}
@ -101,3 +111,9 @@ func resourceProjectUsageBucketDelete(d *schema.ResourceData, meta interface{})
return nil
}
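// resourceProjectUsageBucketImportState treats the import ID as the project and stores it in state so the read function can resolve the project normally.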
func resourceProjectUsageBucketImportState(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
project := d.Id()
d.Set("project", project)
return []*schema.ResourceData{d}, nil
}

View File

@ -1,9 +1,12 @@
package logging
import (
"bytes"
"encoding/json"
"log"
"net/http"
"net/http/httputil"
"strings"
)
type transport struct {
@ -15,7 +18,7 @@ func (t *transport) RoundTrip(req *http.Request) (*http.Response, error) {
if IsDebugOrHigher() {
reqData, err := httputil.DumpRequestOut(req, true)
if err == nil {
log.Printf("[DEBUG] "+logReqMsg, t.name, string(reqData))
log.Printf("[DEBUG] "+logReqMsg, t.name, prettyPrintJsonLines(reqData))
} else {
log.Printf("[ERROR] %s API Request error: %#v", t.name, err)
}
@ -29,7 +32,7 @@ func (t *transport) RoundTrip(req *http.Request) (*http.Response, error) {
if IsDebugOrHigher() {
respData, err := httputil.DumpResponse(resp, true)
if err == nil {
log.Printf("[DEBUG] "+logRespMsg, t.name, string(respData))
log.Printf("[DEBUG] "+logRespMsg, t.name, prettyPrintJsonLines(respData))
} else {
log.Printf("[ERROR] %s API Response error: %#v", t.name, err)
}
@ -42,6 +45,20 @@ func NewTransport(name string, t http.RoundTripper) *transport {
return &transport{name, t}
}
// prettyPrintJsonLines iterates through a []byte line-by-line,
// transforming any lines that are complete json into pretty-printed json.
func prettyPrintJsonLines(b []byte) string {
parts := strings.Split(string(b), "\n")
for i, p := range parts {
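// Only lines that are themselves complete JSON documents are re-indented; other lines (HTTP headers, blank lines, etc.) pass through unchanged.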
if b := []byte(p); json.Valid(b) {
var out bytes.Buffer
json.Indent(&out, b, "", " ")
parts[i] = out.String()
}
}
return strings.Join(parts, "\n")
}
const logReqMsg = `%s API Request Details:
---[ REQUEST ]---------------------------------------
%s

View File

@ -1,20 +0,0 @@
Copyright (C) 2013-2016 by Maxim Bublis <b@codemonkey.ru>
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

View File

@ -1,65 +0,0 @@
# UUID package for Go language
[![Build Status](https://travis-ci.org/satori/go.uuid.png?branch=master)](https://travis-ci.org/satori/go.uuid)
[![Coverage Status](https://coveralls.io/repos/github/satori/go.uuid/badge.svg?branch=master)](https://coveralls.io/github/satori/go.uuid)
[![GoDoc](http://godoc.org/github.com/satori/go.uuid?status.png)](http://godoc.org/github.com/satori/go.uuid)
This package provides pure Go implementation of Universally Unique Identifier (UUID). Supported both creation and parsing of UUIDs.
With 100% test coverage and benchmarks out of box.
Supported versions:
* Version 1, based on timestamp and MAC address (RFC 4122)
* Version 2, based on timestamp, MAC address and POSIX UID/GID (DCE 1.1)
* Version 3, based on MD5 hashing (RFC 4122)
* Version 4, based on random numbers (RFC 4122)
* Version 5, based on SHA-1 hashing (RFC 4122)
## Installation
Use the `go` command:
$ go get github.com/satori/go.uuid
## Requirements
UUID package requires Go >= 1.2.
## Example
```go
package main
import (
"fmt"
"github.com/satori/go.uuid"
)
func main() {
// Creating UUID Version 4
u1 := uuid.NewV4()
fmt.Printf("UUIDv4: %s\n", u1)
// Parsing UUID from string input
u2, err := uuid.FromString("6ba7b810-9dad-11d1-80b4-00c04fd430c8")
if err != nil {
fmt.Printf("Something gone wrong: %s", err)
}
fmt.Printf("Successfully parsed: %s", u2)
}
```
## Documentation
[Documentation](http://godoc.org/github.com/satori/go.uuid) is hosted at GoDoc project.
## Links
* [RFC 4122](http://tools.ietf.org/html/rfc4122)
* [DCE 1.1: Authentication and Security Services](http://pubs.opengroup.org/onlinepubs/9696989899/chap5.htm#tagcjh_08_02_01_01)
## Copyright
Copyright (C) 2013-2016 by Maxim Bublis <b@codemonkey.ru>.
UUID package released under MIT License.
See [LICENSE](https://github.com/satori/go.uuid/blob/master/LICENSE) for details.

View File

@ -1,481 +0,0 @@
// Copyright (C) 2013-2015 by Maxim Bublis <b@codemonkey.ru>
//
// Permission is hereby granted, free of charge, to any person obtaining
// a copy of this software and associated documentation files (the
// "Software"), to deal in the Software without restriction, including
// without limitation the rights to use, copy, modify, merge, publish,
// distribute, sublicense, and/or sell copies of the Software, and to
// permit persons to whom the Software is furnished to do so, subject to
// the following conditions:
//
// The above copyright notice and this permission notice shall be
// included in all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
// Package uuid provides an implementation of Universally Unique Identifier (UUID).
// Supported versions are 1, 3, 4 and 5 (as specified in RFC 4122) and
// version 2 (as specified in DCE 1.1).
package uuid
import (
"bytes"
"crypto/md5"
"crypto/rand"
"crypto/sha1"
"database/sql/driver"
"encoding/binary"
"encoding/hex"
"fmt"
"hash"
"net"
"os"
"sync"
"time"
)
// UUID layout variants.
const (
VariantNCS = iota
VariantRFC4122
VariantMicrosoft
VariantFuture
)
// UUID DCE domains.
const (
DomainPerson = iota
DomainGroup
DomainOrg
)
// Difference in 100-nanosecond intervals between
// UUID epoch (October 15, 1582) and Unix epoch (January 1, 1970).
const epochStart = 122192928000000000
// Used in string method conversion
const dash byte = '-'
// UUID v1/v2 storage.
var (
storageMutex sync.Mutex
storageOnce sync.Once
epochFunc = unixTimeFunc
clockSequence uint16
lastTime uint64
hardwareAddr [6]byte
posixUID = uint32(os.Getuid())
posixGID = uint32(os.Getgid())
)
// String parse helpers.
var (
urnPrefix = []byte("urn:uuid:")
byteGroups = []int{8, 4, 4, 4, 12}
)
func initClockSequence() {
buf := make([]byte, 2)
safeRandom(buf)
clockSequence = binary.BigEndian.Uint16(buf)
}
func initHardwareAddr() {
interfaces, err := net.Interfaces()
if err == nil {
for _, iface := range interfaces {
if len(iface.HardwareAddr) >= 6 {
copy(hardwareAddr[:], iface.HardwareAddr)
return
}
}
}
// Initialize hardwareAddr randomly when no real
// network interfaces are available
safeRandom(hardwareAddr[:])
// Set multicast bit as recommended in RFC 4122
hardwareAddr[0] |= 0x01
}
func initStorage() {
initClockSequence()
initHardwareAddr()
}
func safeRandom(dest []byte) {
if _, err := rand.Read(dest); err != nil {
panic(err)
}
}
// Returns difference in 100-nanosecond intervals between
// UUID epoch (October 15, 1582) and current time.
// This is default epoch calculation function.
func unixTimeFunc() uint64 {
return epochStart + uint64(time.Now().UnixNano()/100)
}
// UUID representation compliant with specification
// described in RFC 4122.
type UUID [16]byte
// NullUUID can be used with the standard sql package to represent a
// UUID value that can be NULL in the database
type NullUUID struct {
UUID UUID
Valid bool
}
// The nil UUID is a special form of UUID that is specified to have all
// 128 bits set to zero.
var Nil = UUID{}
// Predefined namespace UUIDs.
var (
NamespaceDNS, _ = FromString("6ba7b810-9dad-11d1-80b4-00c04fd430c8")
NamespaceURL, _ = FromString("6ba7b811-9dad-11d1-80b4-00c04fd430c8")
NamespaceOID, _ = FromString("6ba7b812-9dad-11d1-80b4-00c04fd430c8")
NamespaceX500, _ = FromString("6ba7b814-9dad-11d1-80b4-00c04fd430c8")
)
// And returns result of binary AND of two UUIDs.
func And(u1 UUID, u2 UUID) UUID {
u := UUID{}
for i := 0; i < 16; i++ {
u[i] = u1[i] & u2[i]
}
return u
}
// Or returns result of binary OR of two UUIDs.
func Or(u1 UUID, u2 UUID) UUID {
u := UUID{}
for i := 0; i < 16; i++ {
u[i] = u1[i] | u2[i]
}
return u
}
// Equal returns true if u1 and u2 are equal, otherwise it returns false.
func Equal(u1 UUID, u2 UUID) bool {
return bytes.Equal(u1[:], u2[:])
}
// Version returns algorithm version used to generate UUID.
func (u UUID) Version() uint {
return uint(u[6] >> 4)
}
// Variant returns UUID layout variant.
func (u UUID) Variant() uint {
switch {
case (u[8] & 0x80) == 0x00:
return VariantNCS
case (u[8]&0xc0)|0x80 == 0x80:
return VariantRFC4122
case (u[8]&0xe0)|0xc0 == 0xc0:
return VariantMicrosoft
}
return VariantFuture
}
// Bytes returns bytes slice representation of UUID.
func (u UUID) Bytes() []byte {
return u[:]
}
// Returns canonical string representation of UUID:
// xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.
func (u UUID) String() string {
buf := make([]byte, 36)
hex.Encode(buf[0:8], u[0:4])
buf[8] = dash
hex.Encode(buf[9:13], u[4:6])
buf[13] = dash
hex.Encode(buf[14:18], u[6:8])
buf[18] = dash
hex.Encode(buf[19:23], u[8:10])
buf[23] = dash
hex.Encode(buf[24:], u[10:])
return string(buf)
}
// SetVersion sets version bits.
func (u *UUID) SetVersion(v byte) {
u[6] = (u[6] & 0x0f) | (v << 4)
}
// SetVariant sets variant bits as described in RFC 4122.
func (u *UUID) SetVariant() {
u[8] = (u[8] & 0xbf) | 0x80
}
// MarshalText implements the encoding.TextMarshaler interface.
// The encoding is the same as returned by String.
func (u UUID) MarshalText() (text []byte, err error) {
text = []byte(u.String())
return
}
// UnmarshalText implements the encoding.TextUnmarshaler interface.
// Following formats are supported:
// "6ba7b810-9dad-11d1-80b4-00c04fd430c8",
// "{6ba7b810-9dad-11d1-80b4-00c04fd430c8}",
// "urn:uuid:6ba7b810-9dad-11d1-80b4-00c04fd430c8"
func (u *UUID) UnmarshalText(text []byte) (err error) {
if len(text) < 32 {
err = fmt.Errorf("uuid: UUID string too short: %s", text)
return
}
t := text[:]
braced := false
if bytes.Equal(t[:9], urnPrefix) {
t = t[9:]
} else if t[0] == '{' {
braced = true
t = t[1:]
}
b := u[:]
for i, byteGroup := range byteGroups {
if i > 0 {
if t[0] != '-' {
err = fmt.Errorf("uuid: invalid string format")
return
}
t = t[1:]
}
if len(t) < byteGroup {
err = fmt.Errorf("uuid: UUID string too short: %s", text)
return
}
if i == 4 && len(t) > byteGroup &&
((braced && t[byteGroup] != '}') || len(t[byteGroup:]) > 1 || !braced) {
err = fmt.Errorf("uuid: UUID string too long: %s", text)
return
}
_, err = hex.Decode(b[:byteGroup/2], t[:byteGroup])
if err != nil {
return
}
t = t[byteGroup:]
b = b[byteGroup/2:]
}
return
}
// MarshalBinary implements the encoding.BinaryMarshaler interface.
func (u UUID) MarshalBinary() (data []byte, err error) {
data = u.Bytes()
return
}
// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface.
// It will return error if the slice isn't 16 bytes long.
func (u *UUID) UnmarshalBinary(data []byte) (err error) {
if len(data) != 16 {
err = fmt.Errorf("uuid: UUID must be exactly 16 bytes long, got %d bytes", len(data))
return
}
copy(u[:], data)
return
}
// Value implements the driver.Valuer interface.
func (u UUID) Value() (driver.Value, error) {
return u.String(), nil
}
// Scan implements the sql.Scanner interface.
// A 16-byte slice is handled by UnmarshalBinary, while
// a longer byte slice or a string is handled by UnmarshalText.
func (u *UUID) Scan(src interface{}) error {
switch src := src.(type) {
case []byte:
if len(src) == 16 {
return u.UnmarshalBinary(src)
}
return u.UnmarshalText(src)
case string:
return u.UnmarshalText([]byte(src))
}
return fmt.Errorf("uuid: cannot convert %T to UUID", src)
}
// Value implements the driver.Valuer interface.
func (u NullUUID) Value() (driver.Value, error) {
if !u.Valid {
return nil, nil
}
// Delegate to UUID Value function
return u.UUID.Value()
}
// Scan implements the sql.Scanner interface.
func (u *NullUUID) Scan(src interface{}) error {
if src == nil {
u.UUID, u.Valid = Nil, false
return nil
}
// Delegate to UUID Scan function
u.Valid = true
return u.UUID.Scan(src)
}
// FromBytes returns UUID converted from raw byte slice input.
// It will return error if the slice isn't 16 bytes long.
func FromBytes(input []byte) (u UUID, err error) {
err = u.UnmarshalBinary(input)
return
}
// FromBytesOrNil returns UUID converted from raw byte slice input.
// Same behavior as FromBytes, but returns a Nil UUID on error.
func FromBytesOrNil(input []byte) UUID {
uuid, err := FromBytes(input)
if err != nil {
return Nil
}
return uuid
}
// FromString returns UUID parsed from string input.
// Input is expected in a form accepted by UnmarshalText.
func FromString(input string) (u UUID, err error) {
err = u.UnmarshalText([]byte(input))
return
}
// FromStringOrNil returns UUID parsed from string input.
// Same behavior as FromString, but returns a Nil UUID on error.
func FromStringOrNil(input string) UUID {
uuid, err := FromString(input)
if err != nil {
return Nil
}
return uuid
}
// Returns UUID v1/v2 storage state.
// Returns epoch timestamp, clock sequence, and hardware address.
func getStorage() (uint64, uint16, []byte) {
storageOnce.Do(initStorage)
storageMutex.Lock()
defer storageMutex.Unlock()
timeNow := epochFunc()
// Clock changed backwards since last UUID generation.
// Should increase clock sequence.
if timeNow <= lastTime {
clockSequence++
}
lastTime = timeNow
return timeNow, clockSequence, hardwareAddr[:]
}
// NewV1 returns UUID based on current timestamp and MAC address.
func NewV1() UUID {
u := UUID{}
timeNow, clockSeq, hardwareAddr := getStorage()
binary.BigEndian.PutUint32(u[0:], uint32(timeNow))
binary.BigEndian.PutUint16(u[4:], uint16(timeNow>>32))
binary.BigEndian.PutUint16(u[6:], uint16(timeNow>>48))
binary.BigEndian.PutUint16(u[8:], clockSeq)
copy(u[10:], hardwareAddr)
u.SetVersion(1)
u.SetVariant()
return u
}
// NewV2 returns DCE Security UUID based on POSIX UID/GID.
func NewV2(domain byte) UUID {
u := UUID{}
timeNow, clockSeq, hardwareAddr := getStorage()
switch domain {
case DomainPerson:
binary.BigEndian.PutUint32(u[0:], posixUID)
case DomainGroup:
binary.BigEndian.PutUint32(u[0:], posixGID)
}
binary.BigEndian.PutUint16(u[4:], uint16(timeNow>>32))
binary.BigEndian.PutUint16(u[6:], uint16(timeNow>>48))
binary.BigEndian.PutUint16(u[8:], clockSeq)
u[9] = domain
copy(u[10:], hardwareAddr)
u.SetVersion(2)
u.SetVariant()
return u
}
// NewV3 returns UUID based on MD5 hash of namespace UUID and name.
func NewV3(ns UUID, name string) UUID {
u := newFromHash(md5.New(), ns, name)
u.SetVersion(3)
u.SetVariant()
return u
}
// NewV4 returns a randomly generated UUID.
func NewV4() UUID {
u := UUID{}
safeRandom(u[:])
u.SetVersion(4)
u.SetVariant()
return u
}
// NewV5 returns UUID based on SHA-1 hash of namespace UUID and name.
func NewV5(ns UUID, name string) UUID {
u := newFromHash(sha1.New(), ns, name)
u.SetVersion(5)
u.SetVariant()
return u
}
// Returns UUID based on hashing of namespace UUID and name.
func newFromHash(h hash.Hash, ns UUID, name string) UUID {
u := UUID{}
h.Write(ns[:])
h.Write([]byte(name))
copy(u[:], h.Sum(nil))
return u
}

File diff suppressed because it is too large

File diff suppressed because it is too large

18
vendor/vendor.json vendored
View File

@ -633,10 +633,10 @@
"versionExact": "v0.11.2"
},
{
"checksumSHA1": "BAXV9ruAyno3aFgwYI2/wWzB2Gc=",
"checksumSHA1": "j8XqkwLh2W3r3i6wnCRmve07BgI=",
"path": "github.com/hashicorp/terraform/helper/logging",
"revision": "41e50bd32a8825a84535e353c3674af8ce799161",
"revisionTime": "2018-04-10T16:50:42Z",
"revision": "6dfc4d748de9cda23835bc5704307ed45e839622",
"revisionTime": "2018-08-15T22:00:39Z",
"version": "v0.11.2",
"versionExact": "v0.11.2"
},
@ -968,12 +968,6 @@
"revision": "ec24b7f12fca9f78fbfcd62a0ea8bce14ade8792",
"revisionTime": "2017-04-07T04:09:43Z"
},
{
"checksumSHA1": "zmC8/3V4ls53DJlNTKDZwPSC/dA=",
"path": "github.com/satori/go.uuid",
"revision": "b061729afc07e77a8aa4fad0a2fd840958f1942a",
"revisionTime": "2016-09-27T10:08:44Z"
},
{
"checksumSHA1": "t/Hcc8jNXkH58QfnotLNtpLh+qc=",
"path": "github.com/stoewer/go-strcase",
@ -1346,10 +1340,10 @@
"revisionTime": "2017-07-18T13:06:16Z"
},
{
"checksumSHA1": "JYl35km48fLrIx7YUtzcgd4J7Rk=",
"checksumSHA1": "UyrBKKpY9lX1LW5SpqJ9QKOAOjk=",
"path": "google.golang.org/api/dns/v1",
"revision": "3cc2e591b550923a2c5f0ab5a803feda924d5823",
"revisionTime": "2016-11-27T23:54:21Z"
"revision": "0ad5a633fea1d4b64bf5e6a01e30d1fc466038e5",
"revisionTime": "2018-09-04T00:04:47Z"
},
{
"checksumSHA1": "nU4Iv1WFYka13VAT8ffBzgguGZ0=",

View File

@ -8,7 +8,7 @@ description: |-
# google\_container\_engine\_versions
Provides access to available Google Container Engine versions in a zone for a given project.
Provides access to available Google Container Engine versions in a zone or region for a given project.
```hcl
data "google_container_engine_versions" "central1b" {
@ -32,7 +32,13 @@ resource "google_container_cluster" "foo" {
The following arguments are supported:
* `zone` (required) - Zone to list available cluster versions for. Should match the zone the cluster will be deployed in.
* `zone` (optional) - Zone to list available cluster versions for. Should match the zone the cluster will be deployed in.
If not specified, the provider-level zone is used. One of zone, region, or provider-level zone is required.
* `region` (optional) - Region to list available cluster versions for. Should match the region the cluster will be deployed in.
For regional clusters, this value must be specified and cannot be inferred from provider-level region. One of zone,
region, or provider-level zone is required.
* `project` (optional) - ID of the project to list available cluster versions for. Should match the project the cluster will be deployed to.
Defaults to the project that the provider is authenticated with.
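As a sketch of the new `region` argument, a regional lookup might look like the following; the `us-central1` region and the output name are illustrative assumptions, not part of this change:
```hcl
# Illustrative only: list available versions for a regional cluster.
data "google_container_engine_versions" "central1" {
  region = "us-central1"
}

# Expose the default version the service would deploy in that region.
output "default_cluster_version" {
  value = "${data.google_container_engine_versions.central1.default_cluster_version}"
}
```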

View File

@ -49,7 +49,7 @@ resource "google_compute_instance_template" "webserver" {
}
disk {
source_image = "debian-cloud/debian-8"
source_image = "debian-cloud/debian-9"
auto_delete = true
boot = true
}

View File

@ -124,6 +124,12 @@ The following arguments are supported:
not enforced and the network behaves as if it did not exist. If this
is unspecified, the firewall rule will be enabled.
* `enable_logging` -
(Optional)
This field denotes whether to enable logging for a particular
firewall rule. If logging is enabled, logs will be exported to
Stackdriver.
* `priority` -
(Optional)
Priority for this rule. This is an integer between 0 and 65535, both

View File

@ -1,24 +1,48 @@
---
# ----------------------------------------------------------------------------
#
# *** AUTO GENERATED CODE *** AUTO GENERATED CODE ***
#
# ----------------------------------------------------------------------------
#
# This file is automatically generated by Magic Modules and manual
# changes will be clobbered when the file is regenerated.
#
# Please read more about how to change this file in
# .github/CONTRIBUTING.md.
#
# ----------------------------------------------------------------------------
layout: "google"
page_title: "Google: google_compute_health_check"
sidebar_current: "docs-google-compute-health-check"
description: |-
Manages a Health Check within GCE.
Health Checks determine whether instances are responsive and able to do work.
---
# google\_compute\_health\_check
Manages a health check within GCE. This is used to monitor instances
behind load balancers. Timeouts or HTTP errors cause the instance to be
removed from the pool. For more information, see [the official
documentation](https://cloud.google.com/compute/docs/load-balancing/health-checks)
and
[API](https://cloud.google.com/compute/docs/reference/latest/healthChecks).
Health Checks determine whether instances are responsive and able to do work.
They are an important part of a comprehensive load balancing configuration,
as they enable monitoring instances behind load balancers.
Health Checks poll instances at a specified interval. Instances that
do not respond successfully to some number of probes in a row are marked
as unhealthy. No new connections are sent to unhealthy instances,
though existing connections will continue. The health check will
continue to poll unhealthy instances. If an instance later responds
successfully to some number of consecutive probes, it is marked
healthy again and can receive new connections.
To get more information about HealthCheck, see:
* [API documentation](https://cloud.google.com/compute/docs/reference/rest/latest/healthChecks)
* How-to Guides
* [Official Documentation](https://cloud.google.com/load-balancing/docs/health-checks)
## Example Usage
```tf
resource "google_compute_health_check" "default" {
```hcl
resource "google_compute_health_check" "internal-health-check" {
name = "internal-service-health-check"
timeout_sec = 1
@ -34,100 +58,190 @@ resource "google_compute_health_check" "default" {
The following arguments are supported:
* `name` - (Required) A unique name for the resource, required by GCE.
Changing this forces a new resource to be created.
* `name` -
(Required)
Name of the resource. Provided by the client when the resource is
created. The name must be 1-63 characters long, and comply with
RFC1035. Specifically, the name must be 1-63 characters long and
match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?` which means
the first character must be a lowercase letter, and all following
characters must be a dash, lowercase letter, or digit, except the
last character, which cannot be a dash.
- - -
* `check_interval_sec` - (Optional) The number of seconds between each poll of
the instance (default 5).
* `description` - (Optional) Textual description field.
* `check_interval_sec` -
(Optional)
How often (in seconds) to send a health check. The default value is 5
seconds.
* `healthy_threshold` - (Optional) Consecutive successes required (default 2).
* `description` -
(Optional)
An optional description of this resource. Provide this property when
you create the resource.
* `http_health_check` - (Optional) An HTTP Health Check. Only one kind of Health Check can be added.
Structure is documented below.
* `healthy_threshold` -
(Optional)
A so-far unhealthy instance will be marked healthy after this many
consecutive successes. The default value is 2.
* `https_health_check` - (Optional) An HTTPS Health Check. Only one kind of Health Check can be added.
Structure is documented below.
* `timeout_sec` -
(Optional)
How long (in seconds) to wait before claiming failure.
The default value is 5 seconds. It is invalid for timeoutSec to have
a greater value than checkIntervalSec.
* `ssl_health_check` - (Optional) An SSL Health Check. Only one kind of Health Check can be added.
Structure is documented below.
* `unhealthy_threshold` -
(Optional)
A so-far healthy instance will be marked unhealthy after this many
consecutive failures. The default value is 2.
* `tcp_health_check` - (Optional) A TCP Health Check. Only one kind of Health Check can be added.
Structure is documented below.
* `http_health_check` -
(Optional)
A nested object resource. Structure is documented below.
* `project` - (Optional) The project in which the resource belongs. If it
is not provided, the provider project is used.
* `https_health_check` -
(Optional)
A nested object resource. Structure is documented below.
* `timeout_sec` - (Optional) The number of seconds to wait before declaring
failure (default 5).
* `tcp_health_check` -
(Optional)
A nested object resource. Structure is documented below.
* `unhealthy_threshold` - (Optional) Consecutive failures required (default 2).
* `ssl_health_check` -
(Optional)
A nested object resource. Structure is documented below.
* `project` - (Optional) The ID of the project in which the resource belongs.
If it is not provided, the provider project is used.
The `http_health_check` block supports:
* `host` - (Optional) HTTP host header field (default instance's public ip).
* `host` -
(Optional)
The value of the host header in the HTTP health check request.
If left empty (default value), the public IP on behalf of which this health
check is performed will be used.
* `port` - (Optional) TCP port to connect to (default 80).
* `request_path` -
(Optional)
The request path of the HTTP health check request.
The default value is /.
* `proxy_header` - (Optional) Type of proxy header to append before sending
data to the backend, either NONE or PROXY_V1 (default NONE).
* `request_path` - (Optional) URL path to query (default /).
* `port` -
(Optional)
The TCP port number for the HTTP health check request.
The default value is 80.
* `proxy_header` -
(Optional)
Specifies the type of proxy header to append before sending data to the
backend, either NONE or PROXY_V1. The default is NONE.
The `https_health_check` block supports:
* `host` - (Optional) HTTPS host header field (default instance's public ip).
* `host` -
(Optional)
The value of the host header in the HTTPS health check request.
If left empty (default value), the public IP on behalf of which this health
check is performed will be used.
* `port` - (Optional) TCP port to connect to (default 443).
* `request_path` -
(Optional)
The request path of the HTTPS health check request.
The default value is /.
* `proxy_header` - (Optional) Type of proxy header to append before sending
data to the backend, either NONE or PROXY_V1 (default NONE).
* `request_path` - (Optional) URL path to query (default /).
The `ssl_health_check` block supports:
* `port` - (Optional) TCP port to connect to (default 443).
* `proxy_header` - (Optional) Type of proxy header to append before sending
data to the backend, either NONE or PROXY_V1 (default NONE).
* `request` - (Optional) Application data to send once the SSL connection has
been established (default "").
* `response` - (Optional) The response that indicates health (default "")
* `port` -
(Optional)
The TCP port number for the HTTPS health check request.
The default value is 443.
* `proxy_header` -
(Optional)
Specifies the type of proxy header to append before sending data to the
backend, either NONE or PROXY_V1. The default is NONE.
The `tcp_health_check` block supports:
* `port` - (Optional) TCP port to connect to (default 80).
* `request` -
(Optional)
The application data to send once the TCP connection has been
established (default value is empty). If both request and response are
empty, the connection establishment alone will indicate health. The request
data can only be ASCII.
* `proxy_header` - (Optional) Type of proxy header to append before sending
data to the backend, either NONE or PROXY_V1 (default NONE).
* `response` -
(Optional)
The bytes to match against the beginning of the response data. If left empty
(the default value), any response will indicate health. The response data
can only be ASCII.
* `request` - (Optional) Application data to send once the TCP connection has
been established (default "").
* `port` -
(Optional)
The TCP port number for the TCP health check request.
The default value is 443.
* `response` - (Optional) The response that indicates health (default "")
* `proxy_header` -
(Optional)
Specifies the type of proxy header to append before sending data to the
backend, either NONE or PROXY_V1. The default is NONE.
The `ssl_health_check` block supports:
* `request` -
(Optional)
The application data to send once the SSL connection has been
established (default value is empty). If both request and response are
empty, the connection establishment alone will indicate health. The request
data can only be ASCII.
* `response` -
(Optional)
The bytes to match against the beginning of the response data. If left empty
(the default value), any response will indicate health. The response data
can only be ASCII.
* `port` -
(Optional)
The TCP port number for the SSL health check request.
The default value is 443.
* `proxy_header` -
(Optional)
Specifies the type of proxy header to append before sending data to the
backend, either NONE or PROXY_V1. The default is NONE.
## Attributes Reference
In addition to the arguments listed above, the following computed attributes are
exported:
In addition to the arguments listed above, the following computed attributes are exported:
* `creation_timestamp` -
Creation timestamp in RFC3339 text format.
* `type` -
The type of the health check. One of HTTP, HTTPS, TCP, or SSL.
* `self_link` - The URI of the created resource.
## Timeouts
This resource provides the following
[Timeouts](/docs/configuration/resources.html#timeouts) configuration options:
- `create` - Default is 4 minutes.
- `update` - Default is 4 minutes.
- `delete` - Default is 4 minutes.
## Import
Health checks can be imported using the `name`, e.g.
HealthCheck can be imported using any of these accepted formats:
```
$ terraform import google_compute_health_check.default internal-service-health-check
$ terraform import google_compute_health_check.default projects/{{project}}/global/healthChecks/{{name}}
$ terraform import google_compute_health_check.default {{project}}/{{name}}
$ terraform import google_compute_health_check.default {{name}}
```
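Putting the nested blocks above together, a minimal HTTP health check might look like the following sketch; the name, port, path, and thresholds are illustrative values rather than recommendations:
```hcl
# Illustrative sketch: poll /healthz on port 80 every 5 seconds.
resource "google_compute_health_check" "http-example" {
  name                = "http-example-health-check"
  check_interval_sec  = 5
  timeout_sec         = 5
  healthy_threshold   = 2
  unhealthy_threshold = 2

  http_health_check {
    port         = 80
    request_path = "/healthz"
  }
}
```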

View File

@ -26,7 +26,7 @@ resource "google_compute_instance" "default" {
boot_disk {
initialize_params {
image = "debian-cloud/debian-8"
image = "debian-cloud/debian-9"
}
}

View File

@ -26,7 +26,7 @@ resource "google_compute_instance_template" "tpl" {
machine_type = "n1-standard-1"
disk {
source_image = "debian-cloud/debian-8"
source_image = "debian-cloud/debian-9"
auto_delete = true
disk_size_gb = 100
boot = true

View File

@ -38,7 +38,7 @@ resource "google_compute_instance_template" "default" {
// Create a new boot disk from an image
disk {
source_image = "debian-cloud/debian-8"
source_image = "debian-cloud/debian-9"
auto_delete = true
boot = true
}

View File

@ -49,7 +49,7 @@ resource "google_compute_instance_template" "foobar" {
}
disk {
source_image = "debian-cloud/debian-8"
source_image = "debian-cloud/debian-9"
auto_delete = true
boot = true
}

View File

@ -126,5 +126,5 @@ exported:
SSL certificate can be imported using the `name`, e.g.
```
$ terraform import compute_ssl_certificate.html.foobar foobar
$ terraform import google_compute_ssl_certificate.default my-certificate
```

View File

@ -238,6 +238,23 @@ The `ip_allocation_policy` block supports:
ClusterIPs. This must be an existing secondary range associated with the cluster
subnetwork.
* `cluster_ipv4_cidr_block` - (Optional) The IP address range for the cluster pod IPs.
Set to blank to have a range chosen with the default size. Set to /netmask (e.g. /14)
to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14)
from the RFC-1918 private networks (e.g. 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to
pick a specific range to use.
* `services_ipv4_cidr_block` - (Optional) The IP address range of the services IPs in this cluster.
Set to blank to have a range chosen with the default size. Set to /netmask (e.g. /14)
to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14)
from the RFC-1918 private networks (e.g. 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to
pick a specific range to use.
* `create_subnetwork`- (Optional) Whether a new subnetwork will be created automatically for the cluster.
* `subnetwork_name` - (Optional) A custom subnetwork name to be used if create_subnetwork is true.
If this field is empty, then an automatic name will be chosen for the new subnetwork.
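Taken together, the `ip_allocation_policy` arguments above might be combined as in this sketch; the cluster name, zone, and CIDR netmasks are assumed example values:
```hcl
# Illustrative sketch: alias-IP cluster with an automatically created subnetwork.
resource "google_container_cluster" "alias-ip" {
  name               = "alias-ip-cluster"
  zone               = "us-central1-a"
  initial_node_count = 1

  ip_allocation_policy {
    create_subnetwork        = true
    subnetwork_name          = "alias-ip-subnetwork"
    cluster_ipv4_cidr_block  = "/16"
    services_ipv4_cidr_block = "/22"
  }
}
```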
The `master_auth` block supports:
* `password` - (Required) The password to use for HTTP basic authentication when accessing

View File

@ -39,7 +39,7 @@ resource "google_compute_instance" "frontend" {
boot_disk {
initialize_params {
image = "debian-cloud/debian-8"
image = "debian-cloud/debian-9"
}
}

View File

@ -16,8 +16,13 @@ and
A CryptoKey is an interface to key material which can be used to encrypt and decrypt data. A CryptoKey belongs to a
Google Cloud KMS KeyRing.
~> Note: CryptoKeys cannot be deleted from Google Cloud Platform. Destroying a Terraform-managed CryptoKey will remove it
from state and delete all CryptoKeyVersions, rendering the key unusable, but **will not delete the resource on the server**.
~> Note: CryptoKeys cannot be deleted from Google Cloud Platform. Destroying a
Terraform-managed CryptoKey will remove it from state and delete all
CryptoKeyVersions, rendering the key unusable, but **will not delete the
resource on the server**. When Terraform destroys these keys, any data
previously encrypted with these keys will be irrecoverable. For this reason, it
is strongly recommended that you add lifecycle hooks to the resource to prevent
accidental destruction.
## Example Usage
@ -32,6 +37,10 @@ resource "google_kms_crypto_key" "my_crypto_key" {
name = "my-crypto-key"
key_ring = "${google_kms_key_ring.my_key_ring.self_link}"
rotation_period = "100000s"
lifecycle {
prevent_destroy = true
}
}
```

View File

@ -12,8 +12,10 @@ Manages a billing account logging sink. For more information see
[the official documentation](https://cloud.google.com/logging/docs/) and
[Exporting Logs in the API](https://cloud.google.com/logging/docs/api/tasks/exporting-logs).
Note that you must have the "Logs Configuration Writer" IAM role (`roles/logging.configWriter`)
granted to the credentials used with terraform.
~> **Note** You must have the "Logs Configuration Writer" IAM role (`roles/logging.configWriter`)
[granted on the billing account](https://cloud.google.com/billing/reference/rest/v1/billingAccounts/getIamPolicy) to
the credentials used with Terraform. [IAM roles granted on a billing account](https://cloud.google.com/billing/docs/how-to/billing-access) are separate from the
typical IAM roles granted on a project.
## Example Usage
@ -67,3 +69,11 @@ exported:
* `writer_identity` - The identity associated with this sink. This identity must be granted write access to the
configured `destination`.
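As a sketch of granting that access when the `destination` is a Cloud Storage bucket, the writer identity could be bound to an object-creation role; the binding below is an assumption for illustration, not part of this change:
```hcl
# Illustrative sketch: let the sink's writer identity create objects in the
# destination bucket's project. google_project_iam_binding is authoritative
# for the role, so google_project_iam_member may be preferable in practice.
resource "google_project_iam_binding" "log-writer" {
  role = "roles/storage.objectCreator"

  members = [
    "${google_logging_billing_account_sink.my_sink.writer_identity}",
  ]
}
```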
## Import
Billing account logging sinks can be imported using this format:
```
$ terraform import google_logging_billing_account_sink.my_sink billingAccounts/{{billing_account_id}}/sinks/{{sink_id}}
```

View File

@ -79,3 +79,11 @@ exported:
* `writer_identity` - The identity associated with this sink. This identity must be granted write access to the
configured `destination`.
## Import
Folder-level logging sinks can be imported using this format:
```
$ terraform import google_logging_folder_sink.my_sink folders/{{folder_id}}/sinks/{{sink_id}}
```

View File

@ -73,3 +73,11 @@ exported:
* `writer_identity` - The identity associated with this sink. This identity must be granted write access to the
configured `destination`.
## Import
Organization-level logging sinks can be imported using this format:
```
$ terraform import google_logging_organization_sink.my_sink organizations/{{organization_id}}/sinks/{{sink_id}}
```

View File

@ -14,8 +14,9 @@ Manages a project-level logging sink. For more information see
and
[API](https://cloud.google.com/logging/docs/reference/v2/rest/).
Note that you must have the "Logs Configuration Writer" IAM role (`roles/logging.configWriter`)
granted to the credentials used with terraform.
~> **Note:** You must have [granted the "Logs Configuration Writer"](https://cloud.google.com/logging/docs/access-control) IAM role (`roles/logging.configWriter`) to the credentials used with Terraform.
~> **Note:** You must [enable the Cloud Resource Manager API](https://console.cloud.google.com/apis/library/cloudresourcemanager.googleapis.com).
## Example Usage
@ -48,7 +49,7 @@ resource "google_compute_instance" "my-logged-instance" {
boot_disk {
initialize_params {
image = "debian-cloud/debian-8"
image = "debian-cloud/debian-9"
}
}

View File

@ -67,8 +67,13 @@ exported:
## Import
SQL databases can be imported using the `instance` and `name`, e.g.
SQL databases can be imported using any of these accepted formats:
```
$ terraform import google_sql_database.database master-instance:users-db
$ terraform import google_sql_database.database projects/{{project}}/instances/{{instance}}/databases/{{name}}
$ terraform import google_sql_database.database {{project}}/{{instance}}/{{name}}
$ terraform import google_sql_database.database instances/{{instance}}/databases/{{name}}
$ terraform import google_sql_database.database {{instance}}/{{name}}
$ terraform import google_sql_database.database {{name}}
```

View File

@ -313,8 +313,11 @@ when the resource is configured with a `count`.
## Import
Database instances can be imported using the `name`, e.g.
Database instances can be imported using any of these accepted formats:
```
$ terraform import google_sql_database_instance.master master-instance
$ terraform import google_sql_database_instance.master projects/{{project}}/instances/{{name}}
$ terraform import google_sql_database_instance.master {{project}}/{{name}}
$ terraform import google_sql_database_instance.master {{name}}
```

View File

@ -3,7 +3,7 @@ layout: "google"
page_title: "Google: google_project_usage_export_bucket"
sidebar_current: "docs-google-project-usage-export-bucket"
description: |-
Creates a dataset resource for Google BigQuery.
Manages a project's usage export bucket.
---
# google_project_usage_export_bucket
@ -16,23 +16,32 @@ For more information see the [Docs](https://cloud.google.com/compute/docs/usage-
and for further details, the
[API Documentation](https://cloud.google.com/compute/docs/reference/rest/beta/projects/setUsageExportBucket).
~> **Note:** You should specify only one of these per project. If there are two or more
they will fight over which bucket the reports should be stored in. It is
safe to have multiple resources with the same backing bucket.
## Example Usage
```hcl
resource "google_project_usage_export_bucket" "export" {
project = "foo"
bucket_name = "bar"
resource "google_project_usage_export_bucket" "usage_export" {
project = "development-project"
bucket_name = "usage-tracking-bucket"
}
```
## Argument Reference
* `project`: (Required) The project to set the export bucket on.
* `bucket_name`: (Required) The bucket to store reports in.
- - -
* `prefix`: (Optional) A prefix for the reports, for instance, the project name.
## Note
* `project`: (Optional) The project to set the export bucket on. If it is not provided, the provider project is used.
You should specify only one of these per project. If there are two or more
they will fight over which bucket the reports should be stored in. It is
safe to have multiple resources with the same backing bucket.
## Import
A project's Usage Export Bucket can be imported using this format:
```
$ terraform import google_project_usage_export_bucket.usage_export {{project}}
```