Deploy a landing zone blueprint

You can use Config Controller to declaratively deploy and manage a landing zone in Google Cloud.

This tutorial helps organization administrators to set up Google Cloud for scalable, production-ready enterprise workloads by deploying a landing zone blueprint, which includes best practices for networking, security, and resource management.

Objectives

In the process of deploying your landing zone, you will:

  1. Set up Config Controller and configure it to sync resources from a Git repository to your Google Cloud environment.
  2. Discover blueprints which capture common design patterns for Google Cloud, such as deploying a Shared VPC.
  3. Create an opinionated deployment pipeline (based on Cloud Build) that lets you customize blueprints, transform them with kpt functions, and deploy resources using Config Connector.

These components are combined into the workflow depicted in the following diagram:

A platform administrator deploying blueprints through Config Controller

Costs

This tutorial uses billable components of Google Cloud, including the GKE cluster that hosts Config Controller, Cloud Build, Cloud Source Repositories, and BigQuery, as well as any resources created by your landing zone.

Before you begin

Before setting up your landing zone, you will need to prepare a few details.

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Cloud project. Learn how to confirm that billing is enabled for your project.

  4. Install and initialize the Cloud SDK.
  5. Create Google Groups that will be used to grant access to your organization. In particular, you should have a group for organization administrators and a group for billing administrators.
  6. Install the Kubernetes tools (nomos, kubectl, and kpt) used for this tutorial. You can also install the components directly from their respective pages. (The pkg component installs kpt.)
    gcloud components install nomos kubectl pkg

Preparing the environment

Before deploying the landing zone blueprint, you need to set a few environment variables:

export PROJECT_ID=PROJECT_ID
export CONFIG_CONTROLLER_NAME=config-controller-1
export BILLING_ACCOUNT=$(gcloud alpha billing projects describe ${PROJECT_ID} \
  --format='value(billingAccountName)' | sed 's/.*\///')
export ORG_ID=$(gcloud projects get-ancestors ${PROJECT_ID} --format='get(id)' | tail -1)
gcloud config set project ${PROJECT_ID}

Replace the following:

  • PROJECT_ID: the project ID where Config Controller will be hosted

Setting up Config Controller

Config Controller provides a managed control plane, based on Kubernetes, for configuring cloud resources. It comes pre-installed with Policy Controller, Config Sync, and Config Connector.
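
Once Config Controller is set up (see the steps that follow), any Google Cloud resource supported by Config Connector can be managed declaratively by applying a Kubernetes manifest. As a minimal illustrative sketch (the bucket name here is hypothetical), a Cloud Storage bucket could be created like this:

cat <<EOF | kubectl apply -f -
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  # Bucket names are global; replace with your own
  name: example-landing-zone-bucket
  # Resources are managed from the config-control namespace
  namespace: config-control
EOF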

  1. In your project, enable the Config Controller API and the GKE API that Config Controller depends on:
     gcloud services enable krmapihosting.googleapis.com container.googleapis.com
    
  2. Create your Config Controller. This operation might take over 15 minutes.
    gcloud alpha anthos config controller create ${CONFIG_CONTROLLER_NAME} \
      --location=us-central1
  3. Verify that Config Controller has been created successfully:
    gcloud alpha anthos config controller list --location=us-central1
  4. Authenticate to Config Controller so that you can begin applying configuration:
    gcloud alpha anthos config controller get-credentials ${CONFIG_CONTROLLER_NAME} \
      --location us-central1
  5. Give Config Controller permission to manage Google Cloud resources in the management project:
    export SA_EMAIL="$(kubectl get ConfigConnectorContext -n config-control \
      -o jsonpath='{.items[0].spec.googleServiceAccount}' 2> /dev/null)"
    gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
      --member "serviceAccount:${SA_EMAIL}" \
      --role "roles/owner" \
      --project "${PROJECT_ID}"
  6. Give Config Controller permission to manage org-level resources, which is needed to deploy your landing zone blueprint:
    gcloud organizations add-iam-policy-binding $ORG_ID \
      --role=roles/resourcemanager.organizationAdmin \
      --condition=None \
      --member="serviceAccount:${SA_EMAIL}"

Creating a GitOps pipeline

By setting up Config Controller to sync with a Git repository, you can collaborate on changes to your landing zone and maintain a robust audit trail.

In this step, you will deploy your first blueprint. This blueprint includes:

  • Two Cloud Source Repositories: a "source" repository where you will commit changes and a "deployment" repository which contains a copy of the final configuration applied to Config Controller.
  • A Cloud Build trigger which watches for changes in your source repository, runs any kpt functions included, and commits the final output to your deployment repository.
  • A Config Sync configuration which connects Config Controller to your deployment repository.
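
For reference, the Config Sync configuration that this blueprint generates is conceptually similar to the following manifest (illustrative; the blueprint renders the real values from your setters):

apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  sourceFormat: unstructured
  git:
    # Config Sync pulls hydrated manifests from the deployment repository
    syncRepo: https://source.developers.google.com/p/PROJECT_ID/r/deployment-repo
    syncBranch: main
    policyDir: config
    secretType: gcenode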

Deploy the blueprint

  1. Enable the Resource Manager service on your project:
    gcloud services enable cloudresourcemanager.googleapis.com
  2. Download the GitOps blueprint from GitHub to your local machine using kpt:
    kpt pkg get https://github.com/GoogleCloudPlatform/blueprints.git/catalog/gitops@main gitops
  3. This blueprint can be customized using a number of setters. Open the gitops/setters.yaml file to modify them:
    # Copyright 2021 Google LLC
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #      http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: setters
      annotations:
        config.kubernetes.io/local-config: "true"
    data:
      # This should be the project where you deployed Config Controller
      project-id: project-id
      project-number: "1234567890123"
      # This should be the name of your Config Controller instance
      cluster-name: cluster-name
      # You can leave these defaults
      namespace: config-control
      deployment-repo: deployment-repo
      source-repo: source-repo
    
  4. Customize this blueprint by replacing the following setters in the preceding file:
    • cluster-name: the name you chose for your Config Controller. This can be retrieved with this command:
      echo ${CONFIG_CONTROLLER_NAME}
    • project-id: the project ID where Config Controller is deployed
    • project-number: the number for the project where Config Controller is deployed. This can be retrieved with this command:
      gcloud projects describe ${PROJECT_ID} --format='get(projectNumber)'
  5. Render the blueprint to propagate your customizations to all resources:
    kpt fn render gitops/
  6. Apply the blueprint to your Config Controller cluster:
    kubectl apply --wait -f gitops/ --recursive
  7. Wait for the repositories to be created by the blueprint:
    kubectl wait --for=condition=READY -f gitops/source-repositories.yaml

Save the blueprint to Git

Now, save the blueprint back to the Git repository it created. This will allow you to save your customizations and track any future changes.

  1. Check out the Git repository created by the blueprint:
    gcloud source repos clone source-repo
  2. Move the GitOps blueprint into your Git repository:
    mv gitops source-repo/
  3. Open the Git repository:
    cd source-repo/
  4. Commit the blueprint to Git and push your changes:
    git add gitops/
    git commit -m "Add GitOps blueprint"
    git push

Verify success

You can verify that the bootstrapping process completed successfully by:

  • Checking that Config Connector was installed successfully:
    kubectl wait -n cnrm-system --for=condition=Ready pod --all
  • Confirming that your Cloud Source Repositories were successfully created:
    gcloud source repos list
  • Verifying that the necessary resources have been created in Config Controller:
    kubectl get gcp -n config-control -o yaml \
      | grep "^    name: \\|message"
    The output should be similar to the following:
        name: source-repo-cicd-trigger
          message: The resource is up to date
        name: allow-configsync-sa-read-csr
          message: The resource is up to date
        name: configsync-sa-workload-identity-binding
          message: The resource is up to date
        name: deployment-repo-cloudbuild-write
          message: The resource is up to date
        name: source-repo-cloudbuild-read
          message: The resource is up to date
        name: config-sync-sa
          message: The resource is up to date
        name: cloudbuild.googleapis.com
          message: The resource is up to date
        name: sourcerepo.googleapis.com
          message: The resource is up to date
        name: deployment-repo
          message: The resource is up to date
        name: source-repo
          message: The resource is up to date
    
  • Confirming that your build was successful:
    gcloud builds list --project=${PROJECT_ID} \
      --filter="source.repo_source.commit_sha=$(git rev-parse HEAD)"
  • Confirming that your hydrated manifests were pushed to the deployment repository:
    BUILD_ID=$(gcloud builds list --project=${PROJECT_ID} --filter="source.repo_source.commit_sha=$(git rev-parse HEAD)" --format="value(id)")
    gcloud builds log --project=${PROJECT_ID} ${BUILD_ID}
    Example output:
    ...
    Step #2 - "Push Changes To Deployment Repo":  7 files changed, 297 insertions(+)
    Step #2 - "Push Changes To Deployment Repo":  create mode 100644 config/.gitkeep
    Step #2 - "Push Changes To Deployment Repo":  create mode 100644 config/gitops/cloudbuild-iam.yaml
    Step #2 - "Push Changes To Deployment Repo":  create mode 100644 config/gitops/configsync/config-management.yaml
    Step #2 - "Push Changes To Deployment Repo":  create mode 100644 config/gitops/configsync/configsync-iam.yaml
    Step #2 - "Push Changes To Deployment Repo":  create mode 100644 config/gitops/hydration-trigger.yaml
    Step #2 - "Push Changes To Deployment Repo":  create mode 100644 config/gitops/services.yaml
    Step #2 - "Push Changes To Deployment Repo":  create mode 100644 config/gitops/source-repositories.yaml
    Step #2 - "Push Changes To Deployment Repo": To https://source.developers.google.com/p/$PROJECT_ID/r/deployment-repo
    Step #2 - "Push Changes To Deployment Repo":  * [new branch]      main -> main
    Step #2 - "Push Changes To Deployment Repo":
    Step #2 - "Push Changes To Deployment Repo":
    Step #2 - "Push Changes To Deployment Repo": Latest deployment repo commit SHA: $SHA
    Finished Step #2 - "Push Changes To Deployment Repo"
    PUSH
    DONE
    

Initializing your landing zone

Now that Config Controller is connected to Git, it is time to initialize your landing zone blueprint.

This blueprint will prepare your landing zone by setting up the overall structure of your organization, including:

  • Creating separate namespaces within Config Controller for managing your hierarchy, logging, networking, projects, and policies. Each namespace is assigned a different, least-privilege Google Cloud service account.
  • Assigning the Billing Admin role to your billing administrator group and the Organization Admin role to your organization administrator group.
  • Activating best practice organization policies in your organization.

Deploy the blueprint

Starting from the source repository you cloned above, deploy your landing zone:

  1. Download the base landing zone blueprint and add it to your repo:
    kpt pkg get https://github.com/GoogleCloudPlatform/blueprints.git/catalog/landing-zone@main ./landing-zone
    git add landing-zone/
    git commit -m "Add landing zone"
  2. This blueprint includes a number of setters for configuring your landing zone. Open the landing-zone/setters.yaml file to modify them:
    # Copyright 2021 Google LLC
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #      http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: setters
    data:
      # Organization ID and billing account
      org-id: "123456789012"
      billing-account-id: AAAAAA-BBBBBB-CCCCCC
      # Groups to use for org-level roles
      group-org-admins: gcp-organization-admins@example.com
      group-billing-admins: gcp-billing-admins@example.com
      # The project where Config Controller is deployed
      management-project-id: management-project-id
      # This default is safe to keep
      management-namespace: config-control
    
  3. Customize this blueprint by replacing the following setters in the preceding file:
    • org-id: the ID of the organization for your landing zone
    • billing-account-id: the default billing account you want to use for new projects
    • management-project-id: the project ID where Config Controller is deployed
    • group-org-admins: the email for your organization administrator Google Group—for example, gcp-organization-admins@example.com
    • group-billing-admins: the email for your billing administrators Google Group—for example, gcp-billing-admins@example.com
  4. Review the customizations you made to the blueprint:
    git diff
  5. Push your changes to the repository, where they will automatically be synced to Config Controller and applied:
    git commit -a -m "Customize landing zone blueprint"
    git push
    

Manage organization policies

The landing zone blueprint includes a number of organization policy constraints which represent best practices to improve the security of your environments.

  • compute.disableNestedVirtualization: Disables hardware-accelerated nested virtualization for all Compute Engine VMs.
  • compute.disableSerialPortAccess: Disables serial port access to Compute Engine VMs.
  • compute.disableGuestAttributesAccess: Disables Compute Engine API access to the guest attributes of VMs.
  • compute.vmExternalIpAccess: Limits the set of Compute Engine VM instances that are allowed to use external IP addresses.
  • compute.skipDefaultNetworkCreation: Causes Google Cloud to skip the creation of the default network and related resources during project creation.
  • compute.restrictXpnProjectLienRemoval: Restricts the set of users who can remove a Shared VPC project lien.
  • sql.restrictPublicIp: Restricts public IP addresses on Cloud SQL instances.
  • iam.disableServiceAccountKeyCreation: Disables the creation of downloadable service account keys.
  • storage.uniformBucketLevelAccess: Requires buckets to use uniform IAM-based bucket-level access.
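
Each constraint is stored in your source repo as a Config Connector resource under landing-zone/policies/. For example, the serial port constraint is conceptually similar to the following sketch (illustrative; the actual file in the blueprint may differ in naming and metadata):

apiVersion: resourcemanager.cnrm.cloud.google.com/v1beta1
kind: ResourceManagerPolicy
metadata:
  name: disable-serial-port
  namespace: policies
spec:
  constraint: compute.disableSerialPortAccess
  organizationRef:
    # Replaced by the org-id setter
    external: "123456789012"
  booleanPolicy:
    enforced: true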

Removing constraints

If any of the default constraints pose problems for your organization, delete the associated YAML file from your source repo to restore the default behavior.

For example, to remove the constraint preventing serial port access, follow these steps:

  1. Remove the file:
    git rm ./landing-zone/policies/disable-serial-port.yaml
  2. Commit and push your change:
    git commit -m "Remove serial port org policy"
    git push
    

Verify success

You can verify your landing zone has been initialized by:

  • Checking the status of the Cloud Build execution pipeline.
  • Verifying the Git repository is synchronized with your Config Controller:
    nomos status
  • Confirming the hierarchy, projects, policies, logging, and networking namespaces exist:
    kubectl get ns
  • Verifying that the necessary resources have been created in Config Controller:
    kubectl get gcp --all-namespaces -o yaml \
      | grep "^    name: \|message"
  • Listing the organization policies through the gcloud CLI:
    gcloud resource-manager org-policies list --organization=$ORG_ID

Setting up your resource hierarchy

In Google Cloud, resources are organized into folders and projects:

  • Projects contain cloud resources, such as virtual machines, databases, and storage buckets.
  • Folders are used to group projects together for easier policy management, including IAM. For example, they can represent the main departments in your organization such as finance or retail, or environments such as production versus non-production. Folders can be nested within each other to form a resource hierarchy.
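
Under the hood, the hierarchy blueprints described below generate a Config Connector Folder resource for each folder, conceptually similar to this sketch (names are illustrative):

apiVersion: resourcemanager.cnrm.cloud.google.com/v1beta1
kind: Folder
metadata:
  name: dev
  namespace: hierarchy
  annotations:
    # A top-level folder is parented directly to the organization
    cnrm.cloud.google.com/organization-id: "123456789012"
spec:
  displayName: dev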

We provide four different resource hierarchy blueprints to suit different organizational structures:

  • Simple: This is a simple blueprint with a single layer of folders representing environments.
  • Team: This blueprint has two layers of folders: teams -> environments
  • Business Units: This blueprint divides your organization into three levels of folders with a focus on autonomous business units: divisions -> teams -> environments
  • Environments: This blueprint has three levels of folders, with a focus on environment-centric policies: environments -> divisions -> teams

Deploy the blueprint

Once you have chosen your preferred resource hierarchy, follow the appropriate instructions to deploy it.

Simple

This blueprint is suitable for flat organizations where each environment needs its own set of policies but all teams are treated uniformly.

  1. Download the blueprint:
    kpt pkg get https://github.com/GoogleCloudPlatform/blueprints.git/catalog/hierarchy/simple@main ./landing-zone/hierarchy/
  2. Edit the landing-zone/hierarchy/hierarchy.yaml file to reflect your desired org structure.
    spec:
      config:
        - shared
        - dev
        - prod
        - qa
      parentRef:
        # This should match your organization ID
        external: '123456789012'

    These values need to be updated in hierarchy.yaml:

    • spec.config: your desired environments, which will become top-level folders
    • spec.parentRef.external: update this value to match your organization ID
  3. Edit the naming policy constraint included in landing-zone/hierarchy/policies/naming-constraint.yaml to reflect your preferred naming scheme for folders. The naming scheme is defined as a regular expression. In particular, you should adjust this expression to include any additional environments you defined.
    spec:
      parameters:
        naming_rules:
          - kind: Folder
            patterns:
              # Matches words like "dev", "prod" or "staging"
              - ^(dev|prod|staging|qa|shared)$
  4. Add your hierarchy and push to the git repository:
    git add ./landing-zone/hierarchy/
    git commit -m "Add resource hierarchy and update folder naming convention."
    git push

Team

This blueprint is suitable for simpler organizations where each team is responsible for its own cloud operations and empowered to set custom policies for how it uses the cloud.

  1. Download the blueprint:
    kpt pkg get https://github.com/GoogleCloudPlatform/blueprints.git/catalog/hierarchy/team@main ./landing-zone/hierarchy/
  2. Edit the landing-zone/hierarchy/hierarchy.yaml file to reflect your desired org structure.
    spec:
      config:
        - retail:
            $subtree: environments
        - finance:
            $subtree: environments
      parentRef:
        # This should match your organization ID
        external: '123456789012'
      subtrees:
        environments:
          - dev
          - prod
          - qa

    These values need to be updated in hierarchy.yaml:

    • spec.config: your desired teams, which will become top-level folders
    • spec.subtrees.environments: your desired environments, which will become subfolders under each team
    • spec.parentRef.external: update this value to match your organization ID
  3. Edit the naming policy constraint included in landing-zone/hierarchy/policies/naming-constraint.yaml to reflect your preferred naming scheme for folders. The naming scheme is defined as a regular expression. In particular, you should adjust this expression to include any additional environments you defined.
    spec:
      parameters:
        naming_rules:
          - kind: Folder
            patterns:
              # Matches words like "dev", "prod" or "staging"
              - ^(dev|prod|staging|qa|shared)$
  4. Add your hierarchy and push to the git repository:
    git add ./landing-zone/hierarchy/
    git commit -m "Add resource hierarchy and update folder naming convention."
    git push

Business Units

This blueprint is suitable for large, complex organizations where each business unit or division is largely responsible for its own cloud operations. It allows each business unit to easily set policies that apply across all of its teams, while individual teams remain responsible for their own environments (within those top-level constraints).

  1. Download the blueprint:
    kpt pkg get https://github.com/GoogleCloudPlatform/blueprints.git/catalog/hierarchy/bu@main ./landing-zone/hierarchy/
  2. Edit the landing-zone/hierarchy/hierarchy.yaml file to reflect your desired org structure.
    spec:
      config:
        - retail:
            - apps:
                $subtree: environments
            - data:
                $subtree: environments
        - finance:
            - commercial:
                $subtree: environments
      parentRef:
        # This should match your organization ID
        external: '123456789012'
      subtrees:
        environments:
          - dev
          - prod

    These values need to be updated in hierarchy.yaml:

    • spec.config: your desired organization structure, which can include teams nested under divisions—for example, the blueprint starts with a retail division that has apps and data subfolders
    • spec.subtrees.environments: your desired environments, which will become subfolders under each team
    • spec.parentRef.external: update this value to match your organization ID
  3. Edit the naming policy constraint included in landing-zone/hierarchy/policies/naming-constraint.yaml to reflect your preferred naming scheme for folders. The naming scheme is defined as a regular expression. In particular, you should adjust this expression to include any additional environments you defined.
    spec:
      parameters:
        naming_rules:
          - kind: Folder
            patterns:
              # Matches words like "dev", "prod" or "staging"
              - ^(dev|prod|staging|qa|shared)$
  4. Add your hierarchy and push to the git repository:
    git add ./landing-zone/hierarchy/
    git commit -m "Add resource hierarchy and update folder naming convention."
    git push

Environments

This blueprint is suitable for larger organizations that need to assert strong, consistent control over environment policies (for example, restricting access to production resources) while providing flexibility for granular division and team-specific policies.

  1. Download the blueprint:
    kpt pkg get https://github.com/GoogleCloudPlatform/blueprints.git/catalog/hierarchy/env-bu@main ./landing-zone/hierarchy/
  2. Edit the landing-zone/hierarchy/hierarchy.yaml file to reflect your desired org structure.
    spec:
      config:
        - dev:
            $subtree: environment
        - prod:
            $subtree: environment
      parentRef:
        # This should match your organization ID
        external: '123456789012'
      subtrees:
        environment:
          - retail:
              - apps
              - data
          - finance:
              - commercial

    These values need to be updated in hierarchy.yaml:

    • spec.config: your desired top-level environment folders, which will each include a full copy of your environment subtree
    • spec.subtrees.environment: your desired hierarchy of divisions and teams which will be mirrored in each environment—for example, the blueprint starts with a retail division that has apps and data subfolders
    • spec.parentRef.external: update this value to match your organization ID
  3. Add your hierarchy and push to the git repository:
    git add ./landing-zone/hierarchy/
    git commit -m "Add resource hierarchy and update folder naming convention."
    git push

Verify success

You can verify your resource hierarchy has been successfully created by:

  • Listing the folders within your organization:
    gcloud resource-manager folders list --organization=$ORG_ID
  • Retrieving the status of the folders directly from the hierarchy namespace in your cluster:
    kubectl get folders -n hierarchy -o yaml \
      | grep "^    name: \|message"

Establishing network connectivity

As part of your landing zone, we recommend deploying a Shared VPC networking architecture. The networking blueprint provided sets up a network and establishes a Cloud VPN for hybrid connectivity.
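
At its core, the blueprint declares the network itself as a Config Connector resource, roughly like the following sketch (illustrative; the blueprint renders the real manifests from your setters):

apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeNetwork
metadata:
  name: dev-network-shared
  namespace: networking
  annotations:
    # The VPC lives in the Shared VPC host project
    cnrm.cloud.google.com/project-id: NETWORK_PROJECT_ID
spec:
  # Subnets are managed explicitly rather than auto-created
  autoCreateSubnetworks: false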

Create a host project

Before deploying a network, create a project to host it. Do this once for each environment by using the included project factory blueprint.
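
The project factory blueprint ultimately declares a Config Connector Project resource, conceptually like this sketch (illustrative; the blueprint fills these fields in from your setters):

apiVersion: resourcemanager.cnrm.cloud.google.com/v1beta1
kind: Project
metadata:
  name: NETWORK_PROJECT_ID
  namespace: projects
spec:
  name: NETWORK_PROJECT_ID
  folderRef:
    # Folders are referenced by their prefixed name, such as dev.shared
    name: dev.shared
  billingAccountRef:
    external: AAAAAA-BBBBBB-CCCCCC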

  1. Download the project blueprint into your landing zone to create a host project.
    export NET_PROJECT_ID="NETWORK_PROJECT_ID"
    export ENVIRONMENT="ENVIRONMENT"
    
    kpt pkg get \
      https://github.com/GoogleCloudPlatform/blueprints.git/catalog/project@main \
      ./landing-zone/projects/${NET_PROJECT_ID}
    

    Replace the following:

    • ENVIRONMENT: the environment you are configuring a Shared VPC for—for example, dev.
    • NETWORK_PROJECT_ID: the ID you want to assign to your networking project.
  2. Open the landing-zone/projects/NETWORK_PROJECT_ID/setters.yaml file and customize the blueprint by replacing the following setters:
    # Copyright 2021 Google LLC
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #      http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: setters
    data:
      folder-name: name.of.folder
      project-id: project-id
      # These defaults can be kept
      folder-namespace: hierarchy
      networking-namespace: networking
    
    Set the following values:
    • project-id: the ID you chose
    • folder-name: the ID of the folder that should contain your network project. Note that folders are prefixed with their parent folder's name—for example, the shared folder within the dev environment would have the ID dev.shared.
  3. Make your project into a Shared VPC host project by adding an additional blueprint:
    kpt pkg get \
      https://github.com/GoogleCloudPlatform/blueprints.git/catalog/networking/shared-vpc@main \
      ./landing-zone/projects/${NET_PROJECT_ID}/host-project
    
  4. Commit the changes to trigger project creation:
    git add ./landing-zone/projects/${NET_PROJECT_ID}/
    git commit -m "Add networking host project"
    git push
    

Create the network

Now that you have a host project, you can create a Shared VPC using the networking blueprint:

  1. Add the network blueprint to your landing zone:
    kpt pkg get \
      https://github.com/GoogleCloudPlatform/blueprints.git/catalog/networking/network@main \
      ./landing-zone/network/${ENVIRONMENT}/
    
  2. Open the landing-zone/network/ENVIRONMENT/setters.yaml file. Customize this blueprint by replacing the following setters:
    # Copyright 2021 Google LLC
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #      http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: setters
    data:
      # Required setters
      network-name: network-name
      project-id: project-id
      region: us-central1
      vpn-tunnel-peer-ip-01: "15.1.0.120"
      vpn-tunnel-peer-ip-02: "15.1.1.120"
      # Optional setters
      namespace: networking
      vpn-secret-key: vpn-shared-secret
      vpn-secret-name: vpn-shared-secret
      prefix: ""
    
    Set the following values:
    • network-name: the name you want to use for your network—for example, dev-network-shared
    • region: the region to deploy your first subnet in—for example, us-east4
    • project-id: the project ID where your network will be hosted (NETWORK_PROJECT_ID)
  3. Create a secret for the pre-shared key used by Cloud VPN. This value is sensitive and should not be committed to your Git repository. Instead, send it directly to Config Controller:
    kubectl create secret generic vpn-shared-secret \
      --from-literal=vpn-shared-secret="SECRET_VALUE" \
      -n networking
  4. Commit the customized blueprint to deploy your network:
    git add ./landing-zone/network/
    git commit -m "Add network setup"
    git push
    

Verify network deployment

You can verify that your network has been successfully created by:

  • Confirming the project was created successfully:
    kubectl describe project ${NET_PROJECT_ID} -n projects
  • Retrieving the status of the network directly from the networking namespace in your cluster:
    kubectl get gcp \
      -n networking -o yaml \
      | grep "^    name: \|message"
  • Inspecting your network through the Cloud Console.

Exporting logging data

One of the Google Cloud best practices for enterprise organizations is to closely monitor and retain audit logs.

The landing zone blueprints include multiple options for managing log export and retention.

Follow the steps below to deploy a blueprint which will export all logs from your organization into BigQuery for long-term retention and analysis.

Create a host project

If you would like to export logs to BigQuery, you will need a project to host the logs in. Since you will be exporting logs for your entire organization, you should place this project in the production environment. This can be created using a blueprint.

  1. Download the project blueprint into your landing zone to create a host project.
    export LOGGING_PROJECT_ID="LOGGING_PROJECT_ID"
    
    kpt pkg get \
      https://github.com/GoogleCloudPlatform/blueprints.git/catalog/project@main \
      ./landing-zone/projects/${LOGGING_PROJECT_ID}
    

    Replace the following:

    • LOGGING_PROJECT_ID: the ID you want to assign to your logging project.
  2. Open the landing-zone/projects/LOGGING_PROJECT_ID/setters.yaml file. Customize this blueprint by replacing the following setters:
    # Copyright 2021 Google LLC
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #      http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: setters
    data:
      folder-name: name.of.folder
      project-id: project-id
      # These defaults can be kept
      folder-namespace: hierarchy
      networking-namespace: networking
    
    Set the following values:
    • project-id: the project ID you chose for storing your log exports (LOGGING_PROJECT_ID)
    • folder-name: the ID of the folder that should contain your logging project. Note that folders are prefixed with their parent folder's name—for example, the shared folder within the dev environment would have the ID dev.shared.
  3. Commit the changes to trigger project creation:
    git add ./landing-zone/projects/${LOGGING_PROJECT_ID}/
    git commit -m "Add logging project"
    git push
    

Export logs to BigQuery

By exporting logs to BigQuery, you can retain them for later analysis. This blueprint creates a BigQuery dataset to store the logs and configures the export to that dataset.
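
The two key resources the blueprint declares are a BigQuery dataset and an org-level log sink; the sink is conceptually similar to this sketch (illustrative; the blueprint renders the real manifest from your setters):

apiVersion: logging.cnrm.cloud.google.com/v1beta1
kind: LoggingLogSink
metadata:
  name: audit-logs-sink
  namespace: logging
spec:
  organizationRef:
    external: "123456789012"
  destination:
    bigQueryDatasetRef:
      name: audit-logs
  # An empty filter exports all logs; includeChildren covers every
  # folder and project under the organization
  filter: ""
  includeChildren: true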

  1. Add the logging blueprint to your landing zone:
    kpt pkg get https://github.com/GoogleCloudPlatform/blueprints.git/catalog/log-export/org/bigquery-export@main ./landing-zone/logging/bigquery-export
  2. Open the landing-zone/logging/bigquery-export/setters.yaml file. Customize this blueprint by replacing the project ID:
    # Copyright 2021 Google LLC
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #      http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: setters
    data:
      # This is required
      project-id: my-project-id
      # These defaults can be left unchanged
      namespace: logging
      dataset-description: BigQuery audit logs for organization
      dataset-location: US
      dataset-name: audit-logs
      default-table-expiration-ms: "3600000"
      delete-contents-on-destroy: "false"
      filter: ""
    
    Set the following values:
    • project-id: the project ID you chose for storing your logs (LOGGING_PROJECT_ID)
  3. Commit the blueprint to create your log export.
    git add ./landing-zone/logging/
    git commit -m "Add log export to BigQuery"
    git push

Verify the log export

You can verify that your log export has been successfully configured by:

  • Confirming the project was created successfully:
    kubectl describe project ${LOGGING_PROJECT_ID} -n projects
    gcloud projects describe ${LOGGING_PROJECT_ID}
    
  • Retrieving the status of resources in the logging namespace:
    kubectl get bigquerydatasets,iampolicymembers,logginglogsinks -n logging -o yaml \
      | grep "^    name: \|message"
  • Retrieving the status of the log export through the gcloud CLI (your new sink should appear in the list):
    gcloud logging sinks list --organization=${ORG_ID}
  • Verifying that logs are flowing into BigQuery (allow some time after the sink is configured):

    # List tables
    bq ls --project_id ${LOGGING_PROJECT_ID} bqlogexportdataset
    
    # Query a table
    # Change this to a result of "bq ls" above so that the name matches one of your tables
    TABLE_OF_INTEREST=git_sync_20201130
    bq query --project_id ${LOGGING_PROJECT_ID} "SELECT * FROM bqlogexportdataset.${TABLE_OF_INTEREST} LIMIT 2"
    

Troubleshooting

Default network

When creating your Config Controller, you might receive an error about the default network not being available:

Error 400: Project "" has no network named "default"., badRequest\n\n  on main.tf line 35, in resource "google_container_cluster" "acp_cluster"

This error occurs because Config Controller depends on the default network. To resolve it, create a new default network:

gcloud compute networks create default --subnet-mode=auto

Hydration and Cloud Build

As part of the GitOps blueprint, you created a Cloud Build trigger which monitors your source repository for changes and:

  1. Validates the contents of those changes using kpt functions.
  2. Generates the final "hydrated" configuration which is saved to a deployment repository and applied to your cluster through Config Sync.

This Cloud Build pipeline can be monitored from the Cloud Console or through gcloud. The following commands retrieve the latest build status and logs:

FILTER="source.repo_source.commit_sha=$(git rev-parse HEAD)"
# You can poll on this command until status is either SUCCESS or FAILURE
gcloud builds list --project=${PROJECT_ID} --filter=${FILTER}

BUILD_ID=$(gcloud builds list --project=${PROJECT_ID} --filter=${FILTER} --format='get(id)' | head -n 1)
# View logs for your run. You can use this to debug errors
gcloud builds log --project=${PROJECT_ID} $BUILD_ID

Local execution

If you want faster feedback on issues than the Cloud Build pipeline provides, you can also use kpt to execute the pipeline locally with this command (run from the root of the source repository):

kpt fn render ./landing-zone/

Immutable fields and resources

Some fields on the underlying Google Cloud resources are immutable, such as project IDs or the name of your VPC network. Config Connector will block edits to such fields and be unable to actuate changes. If you want to edit one of these immutable fields, you must first delete the original resource (through Git) before re-adding it with the new preferred values.
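
For example, renaming a network (an immutable field) takes two commits, along these lines (file paths here are illustrative):

# Commit 1: delete the resource so Config Connector removes it
git rm ./landing-zone/network/dev/network.yaml
git commit -m "Delete network before rename"
git push
# Wait for Config Sync to apply the deletion (check with: nomos status)

# Commit 2: re-add the resource with the new immutable value
git checkout HEAD~1 -- ./landing-zone/network/dev/network.yaml
# Edit the file to set the new network name, then:
git add ./landing-zone/network/dev/network.yaml
git commit -m "Recreate network with new name"
git push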

Deployment and Config Sync

The "deployment" repository contains fully hydrated resources defining your landing zone. These resources are synced to Config Controller using Config Sync. You can check for errors in this sync process by using the nomos cli command:

nomos status

Billing

The landing zone blueprint automatically sets up the correct billing permissions for managing billing within your organization. If your projects cannot be attached to your billing account, the billing account might exist outside your organization. In that case, you need to directly grant the service account for the projects namespace permission to manage the billing account:

export PROJECTS_SA="$(kubectl get ConfigConnectorContext -n projects -o jsonpath='{.items[0].spec.googleServiceAccount}')"
gcloud alpha billing accounts add-iam-policy-binding $BILLING_ACCOUNT \
  --role=roles/billing.admin \
  --member="serviceAccount:${PROJECTS_SA}"

Clean up

If you decide to stop using your landing zone, you should clean up all resources created. You should first remove resources from Config Controller before deleting Config Controller itself.

Alternatively, if you want to retain the landing zone resources while abandoning the declarative workflow you can optionally jump straight to deleting the Config Controller (though this is not recommended).

Removing resources

Resources can be deleted by simply removing their associated files from the landing zone's Git repository. This works for individual resources or for entire packages.

However, you cannot delete your entire landing zone at once (this restriction prevents accidental deletions). Instead, you need to tear down your landing zone resources in a few steps:

# Delete downstream resources
git rm -rf ./landing-zone/logging/
git rm -rf ./landing-zone/network/
git commit -m "Delete downstream resources"
git push 
# Confirm Config Sync successfully applies

# Delete projects
git rm -rf ./landing-zone/projects/
git commit -m "Delete projects"
git push
# Confirm Config Sync successfully applies

# Delete folders and organization policies, but leave the policy template (see below for why)
git rm -rf ./landing-zone/hierarchy/
find ./landing-zone/policies/ -type f -not \( -name 'folder-naming-constraint-template.yaml' \) -delete
git add ./landing-zone/
git commit -m "Delete hierarchy and organization policies"
git push
# Confirm Config Sync successfully applies

# Delete landing zone except for 1 cluster-scoped resource
# (folder-naming-constraint-template.yaml) and 1 empty namespace (projects.yaml).
# See /anthos-config-management/docs/reference/errors#knv2006
find ./landing-zone/ -type f -not \( -name 'folder-naming-constraint-template.yaml' -or -name 'projects.yaml' \) -delete
cat <<EOF > ./landing-zone/namespaces/projects.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: projects
EOF
git add ./landing-zone/
git commit -m "Delete all landing zone resources except 1 cluster-scoped resource and 1 namespace"
git push
# Confirm Config Sync successfully applies

# Delete remaining resources
git rm -rf ./landing-zone/
git commit -m "Delete remaining resources"
git push

Deleting Config Controller

gcloud alpha anthos config controller delete --location=us-central1 ${CONFIG_CONTROLLER_NAME}