托管静态网站


本教程介绍如何配置 Cloud Storage 存储桶以托管域名归您所有的静态网站。静态网页可使用 HTML、CSS 和 JavaScript 等客户端技术。静态网页不能包含动态内容,例如 PHP 等服务器端脚本。

由于 Cloud Storage 本身不支持将自定义网域与 HTTPS 搭配使用,因此本教程将 Cloud Storage 与外部应用负载均衡器搭配使用,以通过 HTTPS 从自定义网域传送内容。如需了解通过 HTTPS 从自定义网域传送内容的更多方式,请参阅排查 HTTPS 传送问题。您还可以使用 Cloud Storage 来通过 HTTP 传送自定义网域内容,这不需要负载均衡器。

如需查看有关静态网页的示例和提示(包括如何为动态网站托管静态资源),请参阅静态网站页面。

目标

本教程将介绍如何执行以下操作:

  • 创建存储桶。
  • 上传和共享您的网站的文件。
  • 设置负载均衡器和 SSL 证书。
  • 将负载均衡器连接到您的存储桶。
  • 使用 A 记录将您的网域指向负载均衡器。
  • 测试网站。

费用

本教程使用 Google Cloud 的以下收费组件:

  • Cloud Storage
  • Cloud Load Balancing

如需详细了解托管静态网站时可能产生的费用,请参阅监控费用提示。

准备工作

  1. 登录您的 Google Cloud 账号。如果您是 Google Cloud 新手,请创建一个账号来评估我们的产品在实际场景中的表现。新客户还可获享 $300 赠金,用于运行、测试和部署工作负载。
  2. 在 Google Cloud Console 中的项目选择器页面上,选择或创建一个 Google Cloud 项目

    转到“项目选择器”

  3. 确保您的 Google Cloud 项目已启用结算功能

  4. 为您的项目启用 Compute Engine API。
  5. 确保您具有以下 Identity and Access Management 角色:Storage Admin 和 Compute Network Admin。(这两项准备步骤也可以通过命令行完成,请参见本列表之后的示例。)
  6. 准备一个归您所有或管理的网域。如果您没有现成可用的域名,可以通过多项服务(例如 Cloud Domains)注册新网域。

    本教程使用的网域是 example.com

  7. 准备一些您要传送的网站文件。如果您至少拥有一个索引页面 (index.html) 和一个 404 页面 (404.html),则最适合按照本教程操作。
  8. (可选)如果您希望 Cloud Storage 存储桶与您的网域同名,则必须确认您是要使用的网域的所有者或管理员。确保您所验证的网域是顶级网域(如 example.com),而不是子网域(如 www.example.com)。如果您的网域是通过 Cloud Domains 购买的,那么系统会自动进行验证。
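
如果您更习惯使用命令行,以下是一个示意性草图,可用于完成上文中启用 Compute Engine API 和授予 IAM 角色这两个准备步骤。其中 PROJECT_ID 和 USER_EMAIL 为占位符,请替换为您自己的值:

# 启用负载均衡器所需的 Compute Engine API
gcloud services enable compute.googleapis.com --project=PROJECT_ID

# 为您的账号授予 Storage Admin 和 Compute Network Admin 角色
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=user:USER_EMAIL --role=roles/storage.admin
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=user:USER_EMAIL --role=roles/compute.networkAdmin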

创建存储桶

要创建存储桶,请按以下步骤操作:

控制台

  1. 在 Google Cloud 控制台中,进入 Cloud Storage 存储桶页面。

    进入“存储桶”

  2. 点击 + 创建
  3. 在创建存储桶页面上,输入您的存储桶信息。如需转到下一步,请点击继续。
    • 在指定存储桶的名称部分,输入符合存储桶名称要求的名称。
    • 在选择存储数据的位置部分,选择用于永久存储存储桶数据的位置类型和位置。
    • 对于选择数据的存储类别,请为存储桶选择默认的存储类别,或者选择 Autoclass 对存储桶数据进行自动存储类别管理。

      注意:右侧窗格中的每月费用估算面板会根据您选择的存储类别和位置以及您预期的数据大小和操作,估算存储桶的每月费用。

    • 对于选择如何控制对象的访问权限,请选择您的存储桶是否强制执行禁止公开访问,然后选择存储桶对象的访问权限控制模型

      注意:如果项目的组织政策已实施禁止公开访问,则禁止公开访问切换开关处于锁定状态。

    • 对于选择如何保护对象数据,请根据需要配置保护工具,然后选择数据加密方法。
  4. 点击创建

如需了解如何在 Google Cloud 控制台中获取有关失败的 Cloud Storage 操作的详细错误信息,请参阅问题排查

命令行

  1. 在 Google Cloud 控制台中,激活 Cloud Shell。

    激活 Cloud Shell

    Cloud Shell 会话随即会在 Google Cloud 控制台的底部启动,并显示命令行提示符。Cloud Shell 是一个已安装 Google Cloud CLI 且已为当前项目设置值的 Shell 环境。该会话可能需要几秒钟时间来完成初始化。

  2. 在开发环境中,运行 gcloud storage buckets create 命令:

    gcloud storage buckets create gs://BUCKET_NAME --location=BUCKET_LOCATION

    其中:

    • BUCKET_NAME 是您要为存储桶指定的名称(须遵循命名要求),例如 my-bucket
    • BUCKET_LOCATION 是存储桶的位置,例如 us-east1

    如果请求成功,该命令将返回以下消息:

    Creating gs://BUCKET_NAME/...

    设置以下标志,以便更好地控制存储桶的创建:

    • --project:指定将与您的存储桶相关联的项目 ID 或项目编号,例如 my-project
    • --default-storage-class:指定存储桶的默认存储类别,例如 STANDARD
    • --soft-delete-duration:指定存储桶的软删除保留时长。例如 2w1d
    • --uniform-bucket-level-access:为存储桶启用统一存储桶级访问权限
    • 如需查看 gcloud 创建存储桶时可用选项的完整列表,请参阅 buckets create 选项。

    例如:

    gcloud storage buckets create gs://BUCKET_NAME --project=PROJECT_ID --default-storage-class=STORAGE_CLASS --location=BUCKET_LOCATION --uniform-bucket-level-access

客户端库

C++

如需了解详情,请参阅 Cloud Storage C++ API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

namespace gcs = ::google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string const& bucket_name,
   std::string const& storage_class, std::string const& location) {
  StatusOr<gcs::BucketMetadata> bucket_metadata =
      client.CreateBucket(bucket_name, gcs::BucketMetadata()
                                           .set_storage_class(storage_class)
                                           .set_location(location));
  if (!bucket_metadata) throw std::move(bucket_metadata).status();

  std::cout << "Bucket " << bucket_metadata->name() << " created."
            << "\nFull Metadata: " << *bucket_metadata << "\n";
}

C#

如需了解详情,请参阅 Cloud Storage C# API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证


using Google.Apis.Storage.v1.Data;
using Google.Cloud.Storage.V1;
using System;

public class CreateRegionalBucketSample
{
    /// <summary>
    /// Creates a storage bucket with region.
    /// </summary>
    /// <param name="projectId">The ID of the project to create the buckets in.</param>
    /// <param name="location">The location of the bucket. Object data for objects in the bucket resides in
    /// physical storage within this region. Defaults to US.</param>
    /// <param name="bucketName">The name of the bucket to create.</param>
    /// <param name="storageClass">The bucket's default storage class, used whenever no storageClass is specified
    /// for a newly-created object. This defines how objects in the bucket are stored
    /// and determines the SLA and the cost of storage. Values include MULTI_REGIONAL,
    /// REGIONAL, STANDARD, NEARLINE, COLDLINE, ARCHIVE, and DURABLE_REDUCED_AVAILABILITY.
    /// If this value is not specified when the bucket is created, it will default to
    /// STANDARD.</param>
    public Bucket CreateRegionalBucket(
        string projectId = "your-project-id",
        string bucketName = "your-unique-bucket-name",
        string location = "us-west1",
        string storageClass = "REGIONAL")
    {
        var storage = StorageClient.Create();
        Bucket bucket = new Bucket
        {
            Location = location,
            Name = bucketName,
            StorageClass = storageClass
        };
        var newlyCreatedBucket = storage.CreateBucket(projectId, bucket);
        Console.WriteLine($"Created {bucketName}.");
        return newlyCreatedBucket;
    }
}

Go

如需了解详情,请参阅 Cloud Storage Go API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

import (
	"context"
	"fmt"
	"io"
	"time"

	"cloud.google.com/go/storage"
)

// createBucketClassLocation creates a new bucket in the project with Storage class and
// location.
func createBucketClassLocation(w io.Writer, projectID, bucketName string) error {
	// projectID := "my-project-id"
	// bucketName := "bucket-name"
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("storage.NewClient: %w", err)
	}
	defer client.Close()

	ctx, cancel := context.WithTimeout(ctx, time.Second*30)
	defer cancel()

	storageClassAndLocation := &storage.BucketAttrs{
		StorageClass: "COLDLINE",
		Location:     "asia",
	}
	bucket := client.Bucket(bucketName)
	if err := bucket.Create(ctx, projectID, storageClassAndLocation); err != nil {
		return fmt.Errorf("Bucket(%q).Create: %w", bucketName, err)
	}
	fmt.Fprintf(w, "Created bucket %v in %v with storage class %v\n", bucketName, storageClassAndLocation.Location, storageClassAndLocation.StorageClass)
	return nil
}

Java

如需了解详情,请参阅 Cloud Storage Java API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

import com.google.cloud.storage.Bucket;
import com.google.cloud.storage.BucketInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageClass;
import com.google.cloud.storage.StorageOptions;

public class CreateBucketWithStorageClassAndLocation {
  public static void createBucketWithStorageClassAndLocation(String projectId, String bucketName) {
    // The ID of your GCP project
    // String projectId = "your-project-id";

    // The ID to give your GCS bucket
    // String bucketName = "your-unique-bucket-name";

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();

    // See the StorageClass documentation for other valid storage classes:
    // https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/storage/StorageClass.html
    StorageClass storageClass = StorageClass.COLDLINE;

    // See this documentation for other valid locations:
    // http://g.co/cloud/storage/docs/bucket-locations#location-mr
    String location = "ASIA";

    Bucket bucket =
        storage.create(
            BucketInfo.newBuilder(bucketName)
                .setStorageClass(storageClass)
                .setLocation(location)
                .build());

    System.out.println(
        "Created bucket "
            + bucket.getName()
            + " in "
            + bucket.getLocation()
            + " with storage class "
            + bucket.getStorageClass());
  }
}

Node.js

如需了解详情,请参阅 Cloud Storage Node.js API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The name of a storage class
// See the StorageClass documentation for other valid storage classes:
// https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/storage/StorageClass.html
// const storageClass = 'coldline';

// The name of a location
// See this documentation for other valid locations:
// http://g.co/cloud/storage/docs/locations#location-mr
// const location = 'ASIA';

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
// The bucket in the sample below will be created in the project associated with this client.
// For more information, please see https://cloud.google.com/docs/authentication/production or https://googleapis.dev/nodejs/storage/latest/Storage.html
const storage = new Storage();

async function createBucketWithStorageClassAndLocation() {
  // For default values see: https://cloud.google.com/storage/docs/locations and
  // https://cloud.google.com/storage/docs/storage-classes
  const [bucket] = await storage.createBucket(bucketName, {
    location,
    [storageClass]: true,
  });

  console.log(
    `${bucket.name} created with ${storageClass} class in ${location}`
  );
}

createBucketWithStorageClassAndLocation().catch(console.error);

PHP

如需了解详情,请参阅 Cloud Storage PHP API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

use Google\Cloud\Storage\StorageClient;

/**
 * Create a new bucket with a custom default storage class and location.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 */
function create_bucket_class_location(string $bucketName): void
{
    $storage = new StorageClient();
    $storageClass = 'COLDLINE';
    $location = 'ASIA';
    $bucket = $storage->createBucket($bucketName, [
        'storageClass' => $storageClass,
        'location' => $location,
    ]);

    printf('Created bucket %s in %s with storage class %s', $bucketName, $storageClass, $location);
}

Python

如需了解详情,请参阅 Cloud Storage Python API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

from google.cloud import storage

def create_bucket_class_location(bucket_name):
    """
    Create a new bucket in the US region with the coldline storage
    class
    """
    # bucket_name = "your-new-bucket-name"

    storage_client = storage.Client()

    bucket = storage_client.bucket(bucket_name)
    bucket.storage_class = "COLDLINE"
    new_bucket = storage_client.create_bucket(bucket, location="us")

    print(
        "Created bucket {} in {} with storage class {}".format(
            new_bucket.name, new_bucket.location, new_bucket.storage_class
        )
    )
    return new_bucket

Ruby

如需了解详情,请参阅 Cloud Storage Ruby API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

def create_bucket_class_location bucket_name:
  # The ID to give your GCS bucket
  # bucket_name = "your-unique-bucket-name"

  require "google/cloud/storage"

  storage = Google::Cloud::Storage.new
  bucket  = storage.create_bucket bucket_name,
                                  location:      "ASIA",
                                  storage_class: "COLDLINE"

  puts "Created bucket #{bucket.name} in #{bucket.location} with #{bucket.storage_class} class"
end

Terraform

您可以使用 Terraform 资源创建存储桶

以下示例包括分配索引页面后缀和自定义错误页面。如需了解详情,请参阅分配专用页面

# Create new storage bucket in the US multi-region
# and settings for main_page_suffix and not_found_page
resource "random_id" "bucket_prefix" {
  byte_length = 8
}

resource "google_storage_bucket" "static_website" {
  name          = "${random_id.bucket_prefix.hex}-static-website-bucket"
  location      = "US"
  storage_class = "STANDARD"
  website {
    main_page_suffix = "index.html"
    not_found_page   = "404.html"
  }
}

REST API

JSON API

  1. 从 OAuth 2.0 Playground 获取授权访问令牌。将 Playground 配置为使用您自己的 OAuth 凭据。如需了解相关说明,请参阅 API 身份验证。
  2. 创建一个包含存储桶设置的 JSON 文件,其中必须包含存储桶的 name。如需查看完整的设置列表,请参阅存储桶:插入文档。以下示例包含一些常用的设置:

    {
      "name": "BUCKET_NAME",
      "location": "BUCKET_LOCATION",
      "storageClass": "STORAGE_CLASS",
      "iamConfiguration": {
        "uniformBucketLevelAccess": {
          "enabled": true
        },
      }
    }

    其中:

    • BUCKET_NAME 是您要为自己的存储桶指定的名称(须遵循命名要求),例如 my-bucket
    • BUCKET_LOCATION 是您要用于存储存储桶对象数据的位置,例如 US-EAST1。
    • STORAGE_CLASS 是您存储桶的默认存储类别,例如 STANDARD
  3. 使用 cURL 调用 JSON API:
    curl -X POST --data-binary @JSON_FILE_NAME \
         -H "Authorization: Bearer OAUTH2_TOKEN" \
         -H "Content-Type: application/json" \
         "https://storage.googleapis.com/storage/v1/b?project=PROJECT_IDENTIFIER"

    其中:

    • JSON_FILE_NAME 是您在第 2 步中创建的 JSON 文件的名称。
    • OAUTH2_TOKEN 是您在第 1 步中生成的访问令牌。
    • PROJECT_IDENTIFIER 是将与您的存储桶相关联的项目的 ID 或编号,例如 my-project

XML API

  1. 从 OAuth 2.0 Playground 获取授权访问令牌。将 Playground 配置为使用您自己的 OAuth 凭据。如需了解相关说明,请参阅 API 身份验证。
  2. 创建包含存储桶设置的 XML 文件。如需查看完整的设置列表,请参阅 XML:创建存储桶文档。以下示例包含一些常用的设置:

    <CreateBucketConfiguration>
       <LocationConstraint>BUCKET_LOCATION</LocationConstraint>
       <StorageClass>STORAGE_CLASS</StorageClass>
    </CreateBucketConfiguration>

    其中:

    • BUCKET_LOCATION 是您要用于存储存储桶对象数据的位置,例如 US-EAST1。
    • STORAGE_CLASS 是您存储桶的默认存储类别,例如 STANDARD
  3. 使用 cURL 调用 XML API:
    curl -X PUT --data-binary @XML_FILE_NAME \
         -H "Authorization: Bearer OAUTH2_TOKEN" \
         -H "x-goog-project-id: PROJECT_ID" \
         "https://storage.googleapis.com/BUCKET_NAME"

    其中:

    • XML_FILE_NAME 是您在第 2 步中创建的 XML 文件的名称。
    • OAUTH2_TOKEN 是您在第 1 步中生成的访问令牌。
    • PROJECT_ID 是将与您的存储桶相关联的项目的 ID,例如 my-project
    • BUCKET_NAME 是您要为自己的存储桶指定的名称(须遵循命名要求),例如 my-bucket

上传网站的文件

将您希望网站传送的文件添加到您的存储桶中:

控制台

  1. 在 Google Cloud 控制台中,进入 Cloud Storage 存储桶页面。

    进入“存储桶”

  2. 在存储桶列表中,点击您创建的存储桶的名称。

    此时会打开“存储桶详情”页面,其中“对象”标签页处于选中状态。

  3. 点击上传文件按钮。

  4. 在文件对话框中,浏览至所需文件并将其选中。

上传完成后,您应该会看到文件名和存储桶中显示的文件信息。

如需了解如何在 Google Cloud 控制台中获取失败的 Cloud Storage 操作的详细错误信息,请参阅问题排查

命令行

使用 gcloud storage cp 命令将文件复制到您的存储桶。例如,如需将文件 index.html 从其当前位置 Desktop 复制到存储桶 my-static-assets,请使用以下命令:

gcloud storage cp Desktop/index.html gs://my-static-assets

如果成功,则响应类似如下示例:

Completed files 1/1 | 164.3kiB/164.3kiB

客户端库

C++

如需了解详情,请参阅 Cloud Storage C++ API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

namespace gcs = ::google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string const& file_name,
   std::string const& bucket_name, std::string const& object_name) {
  // Note that the client library automatically computes a hash on the
  // client-side to verify data integrity during transmission.
  StatusOr<gcs::ObjectMetadata> metadata = client.UploadFile(
      file_name, bucket_name, object_name, gcs::IfGenerationMatch(0));
  if (!metadata) throw std::move(metadata).status();

  std::cout << "Uploaded " << file_name << " to object " << metadata->name()
            << " in bucket " << metadata->bucket()
            << "\nFull metadata: " << *metadata << "\n";
}

C#

如需了解详情,请参阅 Cloud Storage C# API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证


using Google.Cloud.Storage.V1;
using System;
using System.IO;

public class UploadFileSample
{
    public void UploadFile(
        string bucketName = "your-unique-bucket-name",
        string localPath = "my-local-path/my-file-name",
        string objectName = "my-file-name")
    {
        var storage = StorageClient.Create();
        using var fileStream = File.OpenRead(localPath);
        storage.UploadObject(bucketName, objectName, null, fileStream);
        Console.WriteLine($"Uploaded {objectName}.");
    }
}

Go

如需了解详情,请参阅 Cloud Storage Go API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

import (
	"context"
	"fmt"
	"io"
	"os"
	"time"

	"cloud.google.com/go/storage"
)

// uploadFile uploads an object.
func uploadFile(w io.Writer, bucket, object string) error {
	// bucket := "bucket-name"
	// object := "object-name"
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("storage.NewClient: %w", err)
	}
	defer client.Close()

	// Open local file.
	f, err := os.Open("notes.txt")
	if err != nil {
		return fmt.Errorf("os.Open: %w", err)
	}
	defer f.Close()

	ctx, cancel := context.WithTimeout(ctx, time.Second*50)
	defer cancel()

	o := client.Bucket(bucket).Object(object)

	// Optional: set a generation-match precondition to avoid potential race
	// conditions and data corruptions. The request to upload is aborted if the
	// object's generation number does not match your precondition.
	// For an object that does not yet exist, set the DoesNotExist precondition.
	o = o.If(storage.Conditions{DoesNotExist: true})
	// If the live object already exists in your bucket, set instead a
	// generation-match precondition using the live object's generation number.
	// attrs, err := o.Attrs(ctx)
	// if err != nil {
	// 	return fmt.Errorf("object.Attrs: %w", err)
	// }
	// o = o.If(storage.Conditions{GenerationMatch: attrs.Generation})

	// Upload an object with storage.Writer.
	wc := o.NewWriter(ctx)
	if _, err = io.Copy(wc, f); err != nil {
		return fmt.Errorf("io.Copy: %w", err)
	}
	if err := wc.Close(); err != nil {
		return fmt.Errorf("Writer.Close: %w", err)
	}
	fmt.Fprintf(w, "Blob %v uploaded.\n", object)
	return nil
}

Java

如需了解详情,请参阅 Cloud Storage Java API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证


import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.io.IOException;
import java.nio.file.Paths;

public class UploadObject {
  public static void uploadObject(
      String projectId, String bucketName, String objectName, String filePath) throws IOException {
    // The ID of your GCP project
    // String projectId = "your-project-id";

    // The ID of your GCS bucket
    // String bucketName = "your-unique-bucket-name";

    // The ID of your GCS object
    // String objectName = "your-object-name";

    // The path to your file to upload
    // String filePath = "path/to/your/file"

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
    BlobId blobId = BlobId.of(bucketName, objectName);
    BlobInfo blobInfo = BlobInfo.newBuilder(blobId).build();

    // Optional: set a generation-match precondition to avoid potential race
    // conditions and data corruptions. The request returns a 412 error if the
    // preconditions are not met.
    Storage.BlobWriteOption precondition;
    if (storage.get(bucketName, objectName) == null) {
      // For a target object that does not yet exist, set the DoesNotExist precondition.
      // This will cause the request to fail if the object is created before the request runs.
      precondition = Storage.BlobWriteOption.doesNotExist();
    } else {
      // If the destination already exists in your bucket, instead set a generation-match
      // precondition. This will cause the request to fail if the existing object's generation
      // changes before the request runs.
      precondition =
          Storage.BlobWriteOption.generationMatch(
              storage.get(bucketName, objectName).getGeneration());
    }
    storage.createFrom(blobInfo, Paths.get(filePath), precondition);

    System.out.println(
        "File " + filePath + " uploaded to bucket " + bucketName + " as " + objectName);
  }
}

Node.js

如需了解详情,请参阅 Cloud Storage Node.js API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

以下示例会上传单个对象:

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The path to your file to upload
// const filePath = 'path/to/your/file';

// The new ID for your GCS file
// const destFileName = 'your-new-file-name';

// The generation number to match; use 0 for an object that does not yet exist
// const generationMatchPrecondition = 0;

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function uploadFile() {
  const options = {
    destination: destFileName,
    // Optional:
    // Set a generation-match precondition to avoid potential race conditions
    // and data corruptions. The request to upload is aborted if the object's
    // generation number does not match your precondition. For a destination
    // object that does not yet exist, set the ifGenerationMatch precondition to 0
    // If the destination object already exists in your bucket, set instead a
    // generation-match precondition using its generation number.
    preconditionOpts: {ifGenerationMatch: generationMatchPrecondition},
  };

  await storage.bucket(bucketName).upload(filePath, options);
  console.log(`${filePath} uploaded to ${bucketName}`);
}

uploadFile().catch(console.error);

以下示例会同时上传多个对象:

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The ID of the first GCS file to download
// const firstFilePath = 'your-first-file-name';

// The ID of the second GCS file to download
// const secondFilePath = 'your-second-file-name';

// Imports the Google Cloud client library
const {Storage, TransferManager} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

// Creates a transfer manager client
const transferManager = new TransferManager(storage.bucket(bucketName));

async function uploadManyFilesWithTransferManager() {
  // Uploads the files
  await transferManager.uploadManyFiles([firstFilePath, secondFilePath]);

  for (const filePath of [firstFilePath, secondFilePath]) {
    console.log(`${filePath} uploaded to ${bucketName}.`);
  }
}

uploadManyFilesWithTransferManager().catch(console.error);

以下示例并发上传具有公共前缀的所有对象:

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The local directory to upload
// const directoryName = 'your-directory';

// Imports the Google Cloud client library
const {Storage, TransferManager} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

// Creates a transfer manager client
const transferManager = new TransferManager(storage.bucket(bucketName));

async function uploadDirectoryWithTransferManager() {
  // Uploads the directory
  await transferManager.uploadManyFiles(directoryName);

  console.log(`${directoryName} uploaded to ${bucketName}.`);
}

uploadDirectoryWithTransferManager().catch(console.error);

PHP

如需了解详情,请参阅 Cloud Storage PHP API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

use Google\Cloud\Storage\StorageClient;

/**
 * Upload a file.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 * @param string $objectName The name of your Cloud Storage object.
 *        (e.g. 'my-object')
 * @param string $source The path to the file to upload.
 *        (e.g. '/path/to/your/file')
 */
function upload_object(string $bucketName, string $objectName, string $source): void
{
    $storage = new StorageClient();
    if (!$file = fopen($source, 'r')) {
        throw new \InvalidArgumentException('Unable to open file for reading');
    }
    $bucket = $storage->bucket($bucketName);
    $object = $bucket->upload($file, [
        'name' => $objectName
    ]);
    printf('Uploaded %s to gs://%s/%s' . PHP_EOL, basename($source), $bucketName, $objectName);
}

Python

如需了解详情,请参阅 Cloud Storage Python API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

以下示例会上传单个对象:

from google.cloud import storage

def upload_blob(bucket_name, source_file_name, destination_blob_name):
    """Uploads a file to the bucket."""
    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"
    # The path to your file to upload
    # source_file_name = "local/path/to/file"
    # The ID of your GCS object
    # destination_blob_name = "storage-object-name"

    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)

    # Optional: set a generation-match precondition to avoid potential race conditions
    # and data corruptions. The request to upload is aborted if the object's
    # generation number does not match your precondition. For a destination
    # object that does not yet exist, set the if_generation_match precondition to 0.
    # If the destination object already exists in your bucket, set instead a
    # generation-match precondition using its generation number.
    generation_match_precondition = 0

    blob.upload_from_filename(source_file_name, if_generation_match=generation_match_precondition)

    print(
        f"File {source_file_name} uploaded to {destination_blob_name}."
    )

以下示例会同时上传多个对象:

def upload_many_blobs_with_transfer_manager(
    bucket_name, filenames, source_directory="", workers=8
):
    """Upload every file in a list to a bucket, concurrently in a process pool.

    Each blob name is derived from the filename, not including the
    `source_directory` parameter. For complete control of the blob name for each
    file (and other aspects of individual blob metadata), use
    transfer_manager.upload_many() instead.
    """

    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"

    # A list (or other iterable) of filenames to upload.
    # filenames = ["file_1.txt", "file_2.txt"]

    # The directory on your computer that is the root of all of the files in the
    # list of filenames. This string is prepended (with os.path.join()) to each
    # filename to get the full path to the file. Relative paths and absolute
    # paths are both accepted. This string is not included in the name of the
    # uploaded blob; it is only used to find the source files. An empty string
    # means "the current working directory". Note that this parameter allows
    # directory traversal (e.g. "/", "../") and is not intended for unsanitized
    # end user input.
    # source_directory=""

    # The maximum number of processes to use for the operation. The performance
    # impact of this value depends on the use case, but smaller files usually
    # benefit from a higher number of processes. Each additional process occupies
    # some CPU and memory resources until finished. Threads can be used instead
    # of processes by passing `worker_type=transfer_manager.THREAD`.
    # workers=8

    from google.cloud.storage import Client, transfer_manager

    storage_client = Client()
    bucket = storage_client.bucket(bucket_name)

    results = transfer_manager.upload_many_from_filenames(
        bucket, filenames, source_directory=source_directory, max_workers=workers
    )

    for name, result in zip(filenames, results):
        # The results list is either `None` or an exception for each filename in
        # the input list, in order.

        if isinstance(result, Exception):
            print("Failed to upload {} due to exception: {}".format(name, result))
        else:
            print("Uploaded {} to {}.".format(name, bucket.name))

以下示例并发上传具有公共前缀的所有对象:

def upload_directory_with_transfer_manager(bucket_name, source_directory, workers=8):
    """Upload every file in a directory, including all files in subdirectories.

    Each blob name is derived from the filename, not including the `directory`
    parameter itself. For complete control of the blob name for each file (and
    other aspects of individual blob metadata), use
    transfer_manager.upload_many() instead.
    """

    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"

    # The directory on your computer to upload. Files in the directory and its
    # subdirectories will be uploaded. An empty string means "the current
    # working directory".
    # source_directory=""

    # The maximum number of processes to use for the operation. The performance
    # impact of this value depends on the use case, but smaller files usually
    # benefit from a higher number of processes. Each additional process occupies
    # some CPU and memory resources until finished. Threads can be used instead
    # of processes by passing `worker_type=transfer_manager.THREAD`.
    # workers=8

    from pathlib import Path

    from google.cloud.storage import Client, transfer_manager

    storage_client = Client()
    bucket = storage_client.bucket(bucket_name)

    # Generate a list of paths (in string form) relative to the `directory`.
    # This can be done in a single list comprehension, but is expanded into
    # multiple lines here for clarity.

    # First, recursively get all files in `directory` as Path objects.
    directory_as_path_obj = Path(source_directory)
    paths = directory_as_path_obj.rglob("*")

    # Filter so the list only includes files, not directories themselves.
    file_paths = [path for path in paths if path.is_file()]

    # These paths are relative to the current working directory. Next, make them
    # relative to `directory`
    relative_paths = [path.relative_to(source_directory) for path in file_paths]

    # Finally, convert them all to strings.
    string_paths = [str(path) for path in relative_paths]

    print("Found {} files.".format(len(string_paths)))

    # Start the upload.
    results = transfer_manager.upload_many_from_filenames(
        bucket, string_paths, source_directory=source_directory, max_workers=workers
    )

    for name, result in zip(string_paths, results):
        # The results list is either `None` or an exception for each filename in
        # the input list, in order.

        if isinstance(result, Exception):
            print("Failed to upload {} due to exception: {}".format(name, result))
        else:
            print("Uploaded {} to {}.".format(name, bucket.name))

Ruby

如需了解详情,请参阅 Cloud Storage Ruby API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

def upload_file bucket_name:, local_file_path:, file_name: nil
  # The ID of your GCS bucket
  # bucket_name = "your-unique-bucket-name"

  # The path to your file to upload
  # local_file_path = "/local/path/to/file.txt"

  # The ID of your GCS object
  # file_name = "your-file-name"

  require "google/cloud/storage"

  storage = Google::Cloud::Storage.new
  bucket  = storage.bucket bucket_name, skip_lookup: true

  file = bucket.create_file local_file_path, file_name

  puts "Uploaded #{local_file_path} as #{file.name} in bucket #{bucket_name}"
end

Terraform

# Upload a simple index.html page to the bucket
resource "google_storage_bucket_object" "indexpage" {
  name         = "index.html"
  content      = "<html><body>Hello World!</body></html>"
  content_type = "text/html"
  bucket       = google_storage_bucket.static_website.id
}

# Upload a simple 404 / error page to the bucket
resource "google_storage_bucket_object" "errorpage" {
  name         = "404.html"
  content      = "<html><body>404!</body></html>"
  content_type = "text/html"
  bucket       = google_storage_bucket.static_website.id
}

REST API

JSON API

  1. 从 OAuth 2.0 Playground 获取授权访问令牌。将 Playground 配置为使用您自己的 OAuth 凭据。如需了解相关说明,请参阅 API 身份验证。
  2. 使用 cURL,通过 POST Object 请求调用 JSON API。对于上传到 my-static-assets 存储桶的 index.html 文件:

    curl -X POST --data-binary @index.html \
      -H "Content-Type: text/html" \
      -H "Authorization: Bearer ya29.AHES6ZRVmB7fkLtd1XTmq6mo0S1wqZZi3-Lh_s-6Uw7p8vtgSwg" \
      "https://storage.googleapis.com/upload/storage/v1/b/my-static-assets/o?uploadType=media&name=index.html"

XML API

  1. 从 OAuth 2.0 Playground 获取授权访问令牌。将 Playground 配置为使用您自己的 OAuth 凭据。如需了解相关说明,请参阅 API 身份验证。
  2. 使用 cURL,通过 PUT Object 请求调用 XML API。对于上传到 my-static-assets 存储桶的 index.html 文件:

    curl -X PUT --data-binary @index.html \
      -H "Authorization: Bearer ya29.AHES6ZRVmB7fkLtd1XTmq6mo0S1wqZZi3-Lh_s-6Uw7p8vtgSwg" \
      -H "Content-Type: text/html" \
      "https://storage.googleapis.com/my-static-assets/index.html"

共享您的文件

如需将存储桶中的所有对象设为可供公共互联网上的所有人读取,请执行以下操作:

控制台

  1. 在 Google Cloud 控制台中,进入 Cloud Storage 存储桶页面。

    进入“存储桶”

  2. 在存储桶列表中,点击您要设为公开的存储桶的名称。

  3. 选择页面顶部附近的权限标签。

  4. 如果公开访问窗格显示非公开,请点击标有移除禁止公开访问的按钮,然后在出现的对话框中点击确认

  5. 点击 授予访问权限按钮。

    此时会显示“添加主账号”对话框。

  6. 在新的主账号字段中,输入 allUsers。

  7. 在选择角色下拉菜单中,选择 Cloud Storage 子菜单,然后点击 Storage Object Viewer 选项。

  8. 点击保存

  9. 点击允许公开访问

对象公开共享后,“公共访问权限”列中会针对每个对象显示一个链接图标。您可以点击此图标来获取相应对象的网址。

如需了解如何在 Google Cloud 控制台中获取失败的 Cloud Storage 操作的详细错误信息,请参阅问题排查

命令行

使用 buckets add-iam-policy-binding 命令:

gcloud storage buckets add-iam-policy-binding gs://my-static-assets --member=allUsers --role=roles/storage.objectViewer

客户端库

C++

如需了解详情,请参阅 Cloud Storage C++ API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

namespace gcs = ::google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string const& bucket_name) {
  auto current_policy = client.GetNativeBucketIamPolicy(
      bucket_name, gcs::RequestedPolicyVersion(3));
  if (!current_policy) throw std::move(current_policy).status();

  current_policy->set_version(3);
  current_policy->bindings().emplace_back(
      gcs::NativeIamBinding("roles/storage.objectViewer", {"allUsers"}));

  auto updated =
      client.SetNativeBucketIamPolicy(bucket_name, *current_policy);
  if (!updated) throw std::move(updated).status();

  std::cout << "Policy successfully updated: " << *updated << "\n";
}

C#

如需了解详情,请参阅 Cloud Storage C# API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证


using Google.Apis.Storage.v1.Data;
using Google.Cloud.Storage.V1;
using System;
using System.Collections.Generic;

public class MakeBucketPublicSample
{
    public void MakeBucketPublic(string bucketName = "your-unique-bucket-name")
    {
        var storage = StorageClient.Create();

        Policy policy = storage.GetBucketIamPolicy(bucketName);

        policy.Bindings.Add(new Policy.BindingsData
        {
            Role = "roles/storage.objectViewer",
            Members = new List<string> { "allUsers" }
        });

        storage.SetBucketIamPolicy(bucketName, policy);
        Console.WriteLine(bucketName + " is now public ");
    }
}

Go

如需了解详情,请参阅 Cloud Storage Go API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

import (
	"context"
	"fmt"
	"io"

	"cloud.google.com/go/iam"
	"cloud.google.com/go/storage"
	iampb "google.golang.org/genproto/googleapis/iam/v1"
)

// setBucketPublicIAM makes all objects in a bucket publicly readable.
func setBucketPublicIAM(w io.Writer, bucketName string) error {
	// bucketName := "bucket-name"
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("storage.NewClient: %w", err)
	}
	defer client.Close()

	policy, err := client.Bucket(bucketName).IAM().V3().Policy(ctx)
	if err != nil {
		return fmt.Errorf("Bucket(%q).IAM().V3().Policy: %w", bucketName, err)
	}
	role := "roles/storage.objectViewer"
	policy.Bindings = append(policy.Bindings, &iampb.Binding{
		Role:    role,
		Members: []string{iam.AllUsers},
	})
	if err := client.Bucket(bucketName).IAM().V3().SetPolicy(ctx, policy); err != nil {
		return fmt.Errorf("Bucket(%q).IAM().SetPolicy: %w", bucketName, err)
	}
	fmt.Fprintf(w, "Bucket %v is now publicly readable\n", bucketName)
	return nil
}

Java

如需了解详情,请参阅 Cloud Storage Java API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

import com.google.cloud.Identity;
import com.google.cloud.Policy;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import com.google.cloud.storage.StorageRoles;

public class MakeBucketPublic {
  public static void makeBucketPublic(String projectId, String bucketName) {
    // The ID of your GCP project
    // String projectId = "your-project-id";

    // The ID of your GCS bucket
    // String bucketName = "your-unique-bucket-name";

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
    Policy originalPolicy = storage.getIamPolicy(bucketName);
    storage.setIamPolicy(
        bucketName,
        originalPolicy
            .toBuilder()
            .addIdentity(StorageRoles.objectViewer(), Identity.allUsers()) // All users can view
            .build());

    System.out.println("Bucket " + bucketName + " is now publicly readable");
  }
}

Node.js

如需了解详情,请参阅 Cloud Storage Node.js API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function makeBucketPublic() {
  await storage.bucket(bucketName).makePublic();

  console.log(`Bucket ${bucketName} is now publicly readable`);
}

makeBucketPublic().catch(console.error);

PHP

如需了解详情,请参阅 Cloud Storage PHP API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

use Google\Cloud\Storage\StorageClient;

/**
 * Update the specified bucket's IAM configuration to make it publicly accessible.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 */
function set_bucket_public_iam(string $bucketName): void
{
    $storage = new StorageClient();
    $bucket = $storage->bucket($bucketName);

    $policy = $bucket->iam()->policy(['requestedPolicyVersion' => 3]);
    $policy['version'] = 3;

    $role = 'roles/storage.objectViewer';
    $members = ['allUsers'];

    $policy['bindings'][] = [
        'role' => $role,
        'members' => $members
    ];

    $bucket->iam()->setPolicy($policy);

    printf('Bucket %s is now public', $bucketName);
}

Python

如需了解详情,请参阅 Cloud Storage Python API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

from typing import List

from google.cloud import storage

def set_bucket_public_iam(
    bucket_name: str = "your-bucket-name",
    members: List[str] = ["allUsers"],
):
    """Set a public IAM Policy to bucket"""
    # bucket_name = "your-bucket-name"

    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)

    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append(
        {"role": "roles/storage.objectViewer", "members": members}
    )

    bucket.set_iam_policy(policy)

    print(f"Bucket {bucket.name} is now publicly readable")

Ruby

如需了解详情,请参阅 Cloud Storage Ruby API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

def set_bucket_public_iam bucket_name:
  # The ID of your GCS bucket
  # bucket_name = "your-unique-bucket-name"

  require "google/cloud/storage"

  storage = Google::Cloud::Storage.new
  bucket = storage.bucket bucket_name

  bucket.policy do |p|
    p.add "roles/storage.objectViewer", "allUsers"
  end

  puts "Bucket #{bucket_name} is now publicly readable"
end

Terraform

# Make bucket public by granting allUsers READER access
resource "google_storage_bucket_access_control" "public_rule" {
  bucket = google_storage_bucket.static_website.id
  role   = "READER"
  entity = "allUsers"
}

REST API

JSON API

  1. 从 OAuth 2.0 Playground 获取授权访问令牌。将 Playground 配置为使用您自己的 OAuth 凭据。如需了解相关说明,请参阅 API 身份验证。
  2. 创建一个包含以下信息的 JSON 文件:

    {
      "bindings":[
        {
          "role": "roles/storage.objectViewer",
          "members":["allUsers"]
        }
      ]
    }
  3. 使用 cURL,通过 PUT Bucket 请求调用 JSON API

    curl -X PUT --data-binary @JSON_FILE_NAME \
      -H "Authorization: Bearer OAUTH2_TOKEN" \
      -H "Content-Type: application/json" \
      "https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/iam"

    其中:

    • JSON_FILE_NAME 是您在第 2 步中创建的 JSON 文件的路径。
    • OAUTH2_TOKEN 是您在第 1 步中创建的访问令牌。
    • BUCKET_NAME 是您要公开其对象的存储桶的名称。例如 my-static-assets

XML API

XML API 不支持将存储桶中的所有对象设为可供公开读取。请改用 Google Cloud 控制台或 gcloud storage,或为每个对象设置 ACL

如需仅将存储桶中的部分对象设为可公开访问,您需要将存储桶的访问权限控制模式切换为精细控制。通常,将存储桶中的所有文件都设为可公开访问会更轻松、更快捷。
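
如果您确实只需公开单个对象,以下是一个示意性命令(假设存储桶已切换为精细控制的 ACL 模式;此处以 gsutil 为例,OBJECT_NAME 为占位符):

gsutil acl ch -u AllUsers:R gs://my-static-assets/OBJECT_NAME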

如果访问者向网址请求非公开或不存在的文件,他们会收到 HTTP 403 响应代码。如需了解如何添加使用 HTTP 404 响应代码的错误页面,请参阅下一部分。

推荐:分配专用页面

您可以分配索引页面后缀和自定义错误页面,二者统称为专用页面。这两项都可以选择性地分配;但如果您没有同时分配索引页面后缀并上传相应的索引页面,则系统将向访问您的顶级网站的用户传送一个 XML 文档树,其中包含存储桶中公共对象的列表。

如需详细了解专用页面的行为,请参阅专用页面

控制台

  1. 在 Google Cloud 控制台中,进入 Cloud Storage 存储桶页面。

    进入“存储桶”

  2. 在存储桶列表中,找到您所创建的存储桶。

  3. 点击与存储桶关联的 Bucket overflow 菜单,然后选择修改网站配置。

  4. 在网站配置对话框中,指定主页面和错误页面。

  5. 点击保存

如需了解如何在 Google Cloud 控制台中获取失败的 Cloud Storage 操作的详细错误信息,请参阅问题排查

命令行

使用带有 --web-main-page-suffix--web-error-page 标志的 buckets update 命令。

在以下示例中,MainPageSuffix 设置为 index.htmlNotFoundPage 设置为 404.html

gcloud storage buckets update gs://my-static-assets --web-main-page-suffix=index.html --web-error-page=404.html

如果成功,此命令会返回以下内容:

Updating gs://my-static-assets/...
  Completed 1

客户端库

C++

如需了解详情,请参阅 Cloud Storage C++ API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

namespace gcs = ::google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string const& bucket_name,
   std::string const& main_page_suffix, std::string const& not_found_page) {
  StatusOr<gcs::BucketMetadata> original =
      client.GetBucketMetadata(bucket_name);

  if (!original) throw std::move(original).status();
  StatusOr<gcs::BucketMetadata> patched = client.PatchBucket(
      bucket_name,
      gcs::BucketMetadataPatchBuilder().SetWebsite(
          gcs::BucketWebsite{main_page_suffix, not_found_page}),
      gcs::IfMetagenerationMatch(original->metageneration()));
  if (!patched) throw std::move(patched).status();

  if (!patched->has_website()) {
    std::cout << "Static website configuration is not set for bucket "
              << patched->name() << "\n";
    return;
  }

  std::cout << "Static website configuration successfully set for bucket "
            << patched->name() << "\nNew main page suffix is: "
            << patched->website().main_page_suffix
            << "\nNew not found page is: "
            << patched->website().not_found_page << "\n";
}

C#

如需了解详情,请参阅 Cloud Storage C# API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证


using Google.Apis.Storage.v1.Data;
using Google.Cloud.Storage.V1;
using System;

public class BucketWebsiteConfigurationSample
{
    public Bucket BucketWebsiteConfiguration(
        string bucketName = "your-bucket-name",
        string mainPageSuffix = "index.html",
        string notFoundPage = "404.html")
    {
        var storage = StorageClient.Create();
        var bucket = storage.GetBucket(bucketName);

        if (bucket.Website == null)
        {
            bucket.Website = new Bucket.WebsiteData();
        }
        bucket.Website.MainPageSuffix = mainPageSuffix;
        bucket.Website.NotFoundPage = notFoundPage;

        bucket = storage.UpdateBucket(bucket);
        Console.WriteLine($"Static website bucket {bucketName} is set up to use {mainPageSuffix} as the index page and {notFoundPage} as the 404 not found page.");
        return bucket;
    }
}

Go

如需了解详情,请参阅 Cloud Storage Go API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

import (
	"context"
	"fmt"
	"io"
	"time"

	"cloud.google.com/go/storage"
)

// setBucketWebsiteInfo sets website configuration on a bucket.
func setBucketWebsiteInfo(w io.Writer, bucketName, indexPage, notFoundPage string) error {
	// bucketName := "www.example.com"
	// indexPage := "index.html"
	// notFoundPage := "404.html"
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("storage.NewClient: %w", err)
	}
	defer client.Close()

	ctx, cancel := context.WithTimeout(ctx, time.Second*10)
	defer cancel()

	bucket := client.Bucket(bucketName)
	bucketAttrsToUpdate := storage.BucketAttrsToUpdate{
		Website: &storage.BucketWebsite{
			MainPageSuffix: indexPage,
			NotFoundPage:   notFoundPage,
		},
	}
	if _, err := bucket.Update(ctx, bucketAttrsToUpdate); err != nil {
		return fmt.Errorf("Bucket(%q).Update: %w", bucketName, err)
	}
	fmt.Fprintf(w, "Static website bucket %v is set up to use %v as the index page and %v as the 404 page\n", bucketName, indexPage, notFoundPage)
	return nil
}

Java

如需了解详情,请参阅 Cloud Storage Java API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

import com.google.cloud.storage.Bucket;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class SetBucketWebsiteInfo {
  public static void setBucketWebsiteInfo(
      String projectId, String bucketName, String indexPage, String notFoundPage) {
    // The ID of your GCP project
    // String projectId = "your-project-id";

    // The ID of your static website bucket
    // String bucketName = "www.example.com";

    // The index page for a static website bucket
    // String indexPage = "index.html";

    // The 404 page for a static website bucket
    // String notFoundPage = "404.html";

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
    Bucket bucket = storage.get(bucketName);
    bucket.toBuilder().setIndexPage(indexPage).setNotFoundPage(notFoundPage).build().update();

    System.out.println(
        "Static website bucket "
            + bucketName
            + " is set up to use "
            + indexPage
            + " as the index page and "
            + notFoundPage
            + " as the 404 page");
  }
}

Node.js

如需了解详情,请参阅 Cloud Storage Node.js API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The object name to use as the index (main) page
// const mainPageSuffix = 'index.html';

// The object name to use as the 404 page
// const notFoundPage = '404.html';

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function addBucketWebsiteConfiguration() {
  await storage.bucket(bucketName).setMetadata({
    website: {
      mainPageSuffix,
      notFoundPage,
    },
  });

  console.log(
    `Static website bucket ${bucketName} is set up to use ${mainPageSuffix} as the index page and ${notFoundPage} as the 404 page`
  );
}

addBucketWebsiteConfiguration().catch(console.error);

PHP

如需了解详情,请参阅 Cloud Storage PHP API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

use Google\Cloud\Storage\StorageClient;

/**
 * Update the given bucket's website configuration.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 * @param string $indexPageObject The name of an object in the bucket to use as
 *        the index page for a static website bucket (e.g. 'index.html').
 * @param string $notFoundPageObject The name of an object in the bucket to use
 *        as the 404 Not Found page (e.g. '404.html').
 */
function define_bucket_website_configuration(string $bucketName, string $indexPageObject, string $notFoundPageObject): void
{
    $storage = new StorageClient();
    $bucket = $storage->bucket($bucketName);

    $bucket->update([
        'website' => [
            'mainPageSuffix' => $indexPageObject,
            'notFoundPage' => $notFoundPageObject
        ]
    ]);

    printf(
        'Static website bucket %s is set up to use %s as the index page and %s as the 404 page.',
        $bucketName,
        $indexPageObject,
        $notFoundPageObject
    );
}

Python

如需了解详情,请参阅 Cloud Storage Python API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

from google.cloud import storage

def define_bucket_website_configuration(bucket_name, main_page_suffix, not_found_page):
    """Configure website-related properties of bucket"""
    # bucket_name = "your-bucket-name"
    # main_page_suffix = "index.html"
    # not_found_page = "404.html"

    storage_client = storage.Client()

    bucket = storage_client.get_bucket(bucket_name)
    bucket.configure_website(main_page_suffix, not_found_page)
    bucket.patch()

    print(
        "Static website bucket {} is set up to use {} as the index page and {} as the 404 page".format(
            bucket.name, main_page_suffix, not_found_page
        )
    )
    return bucket

Ruby

如需了解详情,请参阅 Cloud Storage Ruby API 参考文档

如需向 Cloud Storage 进行身份验证,请设置应用默认凭据。 如需了解详情,请参阅为本地开发环境设置身份验证

def define_bucket_website_configuration bucket_name:, main_page_suffix:, not_found_page:
  # The ID of your static website bucket
  # bucket_name = "www.example.com"

  # The index page for a static website bucket
  # main_page_suffix = "index.html"

  # The 404 page for a static website bucket
  # not_found_page = "404.html"

  require "google/cloud/storage"

  storage = Google::Cloud::Storage.new
  bucket = storage.bucket bucket_name

  bucket.update do |b|
    b.website_main = main_page_suffix
    b.website_404 = not_found_page
  end

  puts "Static website bucket #{bucket_name} is set up to use #{main_page_suffix} as the index page and " \
       "#{not_found_page} as the 404 page"
end

REST API

JSON API

  1. 从 OAuth 2.0 Playground 获取授权访问令牌。将 Playground 配置为使用您自己的 OAuth 凭据。如需了解相关说明,请参阅 API 身份验证。
  2. 创建一个 JSON 文件,以将 website 对象中的 mainPageSuffixnotFoundPage 属性设置为所需页面。

    在以下示例中,mainPageSuffix 设置为 index.htmlnotFoundPage 设置为 404.html

    {
      "website":{
        "mainPageSuffix": "index.html",
        "notFoundPage": "404.html"
      }
    }
  3. 使用 cURL,通过 PATCH Bucket 请求调用 JSON API。对于存储桶 my-static-assets

    curl -X PATCH --data-binary @web-config.json \
      -H "Authorization: Bearer ya29.AHES6ZRVmB7fkLtd1XTmq6mo0S1wqZZi3-Lh_s-6Uw7p8vtgSwg" \
      -H "Content-Type: application/json" \
      "https://storage.googleapis.com/storage/v1/b/my-static-assets"

XML API

  1. 从 OAuth 2.0 Playground 获取授权访问令牌。将 Playground 配置为使用您自己的 OAuth 凭据。如需了解相关说明,请参阅 API 身份验证。
  2. 创建一个 XML 文件,以将 WebsiteConfiguration 元素中的 MainPageSuffixNotFoundPage 元素设置为所需页面。

    在以下示例中,MainPageSuffix 设置为 index.htmlNotFoundPage 设置为 404.html

    <WebsiteConfiguration>
      <MainPageSuffix>index.html</MainPageSuffix>
      <NotFoundPage>404.html</NotFoundPage>
    </WebsiteConfiguration>
  3. 使用 cURL,通过 PUT Bucket 请求和 websiteConfig 查询字符串参数调用 XML API。对于存储桶 my-static-assets:

    curl -X PUT --data-binary @web-config.xml \
      -H "Authorization: Bearer ya29.AHES6ZRVmB7fkLtd1XTmq6mo0S1wqZZi3-Lh_s-6Uw7p8vtgSwg" \
      https://storage.googleapis.com/my-static-assets?websiteConfig

设置负载均衡器和 SSL 证书

Cloud Storage 本身不支持使用 HTTPS 的自定义网域,因此您还需要设置一份 SSL 证书(该证书会附加到 HTTPS 负载均衡器),以通过 HTTPS 传送您的网站。本部分介绍了如何将您的存储桶添加到负载均衡器的后端,以及如何将新的 Google 管理的 SSL 证书添加到负载均衡器的前端。

  1. 转到 Google Cloud 控制台中的“负载均衡”页面。
    进入“负载均衡”
  2. 在应用负载均衡器 (HTTP/S) 卡片上,点击开始配置。
  3. 选择从互联网到我的虚拟机或无服务器服务
  4. 选择全球外部应用负载均衡器
  5. 点击继续

    系统会显示负载均衡器的配置窗口。

  6. 在继续配置之前,请为负载均衡器提供一个名称,例如 example-lb

配置前端

本部分介绍了如何配置 HTTPS 协议和创建 SSL 证书。您还可以选择现有证书或上传自行管理的 SSL 证书

  1. 点击前端配置
  2. (可选)为您的前端配置指定一个名称。
  3. 在协议部分,选择 HTTPS(包括 HTTP/2)。
  4. 在 IP 版本部分,选择 IPv4。如果您希望使用 IPv6,请参阅 IPv6 终结了解详情。
  5. 对于 IP 地址字段:

    • 在下拉列表中,点击创建 IP 地址
    • 在预留新的静态 IP 地址弹出式窗口中,为 IP 地址输入一个名称,例如 example-ip。
    • 点击预留。
  6. 在端口部分,选择 443。

  7. 在证书字段下拉菜单中,选择创建新证书。随即会在一个面板中显示证书创建表单。请配置以下内容:

    • 为您的证书指定一个名称,例如 example-ssl。
    • 在创建模式部分,选择创建 Google 管理的证书。
    • 在网域部分,输入您的网站名称,例如 www.example.com。如果您希望通过其他网域(例如根网域 example.com)传送内容,请按 Enter 键将这些网域添加到其他行。每个证书的网域上限为 100 个。
  8. 点击创建

  9. (可选)如果您希望 Google Cloud 自动设置部分 HTTP 负载均衡器来重定向 HTTP 流量,请选中启用从 HTTP 到 HTTPS 的重定向旁边的复选框。

  10. 点击完成

配置后端

  1. 点击后端配置
  2. 在后端服务和后端存储桶下拉列表中,点击创建后端存储桶。
  3. 为后端存储桶选择一个名称,例如 example-bucket。您选择的名称可以与您之前创建的存储桶的名称不同。
  4. 点击 Cloud Storage 存储桶字段中的浏览
  5. 选择您之前创建的 my-static-assets 存储桶,然后点击选择
  6. (可选)如果要使用 Cloud CDN,请选中启用 Cloud CDN 对应的复选框,并根据需要配置 Cloud CDN。请注意,Cloud CDN 可能会产生额外费用
  7. 点击创建

配置路由规则

路由规则是外部应用负载均衡器的网址映射的组件。在本教程中,您应该跳过负载均衡器配置的这一部分,因为它会自动设置为使用您刚刚配置的后端。

检查配置

  1. 点击检查并最终确定
  2. 检查前端路由规则后端
  3. 点击创建

您可能需要等待几分钟才能创建负载均衡器。
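
如果您更倾向于使用命令行,下面是一个与上述控制台步骤大致等效的示意性 gcloud 配置草图。资源名称沿用本教程中的示例,实际可用的标志请以 gcloud compute 参考文档为准:

# 预留全球静态外部 IP 地址
gcloud compute addresses create example-ip --global

# 创建以 my-static-assets 存储桶为来源的后端存储桶(如需 Cloud CDN,可另加 --enable-cdn)
gcloud compute backend-buckets create example-bucket \
    --gcs-bucket-name=my-static-assets

# 创建网址映射,默认将请求路由到该后端存储桶
gcloud compute url-maps create example-lb \
    --default-backend-bucket=example-bucket

# 创建 Google 管理的 SSL 证书
gcloud compute ssl-certificates create example-ssl \
    --domains=www.example.com,example.com --global

# 创建目标 HTTPS 代理并附加证书
gcloud compute target-https-proxies create example-lb-proxy \
    --url-map=example-lb --ssl-certificates=example-ssl

# 创建全球转发规则,将 443 端口的流量指向该代理
gcloud compute forwarding-rules create example-lb-rule \
    --load-balancing-scheme=EXTERNAL_MANAGED --global \
    --address=example-ip --target-https-proxy=example-lb-proxy \
    --ports=443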

将您的网域连接到负载均衡器

创建负载均衡器后,点击负载均衡器的名称:example-lb。请注意与负载均衡器关联的 IP 地址:例如 30.90.80.100。如需将您的网域指向负载均衡器,请使用您的网域注册服务创建 A 记录。如果您向 SSL 证书添加了多个网域,则必须为每个网域添加一条 A 记录,所有网域均指向负载均衡器的 IP 地址。例如,为 www.example.comexample.com 创建 A 记录:

NAME                  TYPE     DATA
www                   A        30.90.80.100
@                     A        30.90.80.100
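
如果您的网域由 Cloud DNS 托管,也可以使用类似下面的示意性命令创建上述 A 记录(假设托管区域名为 example-zone,IP 地址请替换为您负载均衡器的实际地址):

gcloud dns record-sets create www.example.com. --zone=example-zone \
    --type=A --ttl=300 --rrdatas=30.90.80.100
gcloud dns record-sets create example.com. --zone=example-zone \
    --type=A --ttl=300 --rrdatas=30.90.80.100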

Google Cloud 最长可能需要 60-90 分钟来预配证书并通过负载均衡器提供网站。如需监控您的证书状态,请执行以下操作:

控制台

  1. 转到 Google Cloud 控制台中的“负载均衡”页面。
    进入“负载均衡”
  2. 点击负载均衡器的名称:example-lb
  3. 点击与负载均衡器关联的 SSL 证书的名称:example-ssl
  4. 状态网域状态行显示了证书状态。两行都必须处于活跃状态,才能使证书对您的网站有效。

命令行

  1. 如需检查证书状态,请运行以下命令:

    gcloud compute ssl-certificates describe CERTIFICATE_NAME \
      --global \
      --format="get(name,managed.status)"
    
  2. 如需检查网域状态,请运行以下命令:

    gcloud compute ssl-certificates describe CERTIFICATE_NAME \
      --global \
      --format="get(managed.domainStatus)"
    

如需详细了解证书状态,请参阅排查 SSL 证书问题

测试网站

在 SSL 证书生效后,请转到 https://www.example.com/test.html(其中 test.html 是一个存储在用作后端的存储桶中的对象)验证是否从存储桶传送内容。如果您设置了 MainPageSuffix 属性,则 https://www.example.com 会转到 index.html
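
您也可以使用 curl 做快速检查,例如(示意性命令;证书和负载均衡器就绪后,预期返回 200 状态码):

curl -I https://www.example.com/test.html
curl -I https://www.example.com/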

清理

完成本教程后,您可以清理您创建的资源,让它们停止使用配额,以免产生费用。以下部分介绍如何删除或关闭这些资源。

删除项目

为了避免产生费用,最简单的方法是删除您为本教程创建的项目。

要删除项目,请执行以下操作:

  1. 在 Google Cloud 控制台中,进入管理资源页面。

    转到“管理资源”

  2. 在项目列表中,选择要删除的项目,然后点击删除
  3. 在对话框中输入项目 ID,然后点击关闭以删除项目。

删除负载均衡器和存储桶

如果您不想删除整个项目,请删除为教程创建的负载均衡器和存储桶:

  1. 转到 Google Cloud 控制台中的“负载均衡”页面。
    进入“负载均衡”
  2. 选中 example-lb 旁边的复选框。
  3. 点击删除
  4. (可选)选中您要删除的资源以及负载均衡器(例如 my-static-assets 存储桶或 example-ssl SSL 证书)旁边的复选框。
  5. 点击删除负载均衡器或删除负载均衡器和所选的资源。(如需改用命令行删除这些资源,请参见下面的示例。)
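
如果您是按照前文的命令行草图创建这些资源的,也可以使用以下示意性命令按依赖顺序删除它们(若您通过控制台创建,实际资源名称可能不同):

gcloud compute forwarding-rules delete example-lb-rule --global
gcloud compute target-https-proxies delete example-lb-proxy
gcloud compute url-maps delete example-lb
gcloud compute backend-buckets delete example-bucket
gcloud compute ssl-certificates delete example-ssl --global
gcloud storage rm --recursive gs://my-static-assets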

释放预留 IP 地址

如需删除用于教程的预留 IP 地址,请执行以下操作(也可使用本部分末尾的命令行示例):

  1. 在 Google Cloud 控制台中,转到外部 IP 地址页面。

    转到“外部 IP 地址”

  2. 选中 example-ip 旁边的复选框。

  3. 点击释放静态地址

  4. 在确认窗口中,点击删除
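
您也可以使用以下示意性命令释放该静态 IP 地址:

gcloud compute addresses delete example-ip --global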
