Host a static website using HTTP


This tutorial describes how to configure a Cloud Storage bucket to host a static website for a domain you own. Static web pages can contain client-side technologies such as HTML, CSS, and JavaScript. They cannot contain dynamic content such as server-side scripts like PHP.

This tutorial shows how to serve content over HTTP. For a tutorial that uses HTTPS, see Host a static website.

For examples and tips on static web pages, including how to host static assets for a dynamic website, see the Static website page.

Objectives

In this tutorial you will:

  • Point your domain to Cloud Storage by using a CNAME record.
  • Create a bucket that is linked to your domain.
  • Upload and share your site's files.
  • Test the website.

Costs

This tutorial uses the following billable component of Google Cloud:

  • Cloud Storage

See the tips for monitoring your storage charges for details on what charges may be incurred when hosting a static website, and see the Pricing page for details on Cloud Storage costs.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Have a domain that you own or manage. If you don't have an existing domain, there are many services through which you can register a new domain, such as Cloud Domains.

    This tutorial uses the domain example.com.

  5. Verify that you own or manage the domain that you will be using. Make sure you are verifying the top-level domain, such as example.com, and not a subdomain, such as www.example.com.

    Note: If you own the domain you are associating with a bucket, you might have already performed this step in the past. If you purchased your domain through Cloud Domains, verification is automatic.

Connecting your domain to Cloud Storage

To connect your domain to Cloud Storage, create a CNAME record through your domain registration service. A CNAME record is a type of DNS record that directs traffic requesting a URL from your domain to the resource you want to serve, in this case objects in your Cloud Storage bucket. For www.example.com, the CNAME record might contain the following information:

NAME                  TYPE     DATA
www                   CNAME    c.storage.googleapis.com.

For more information about CNAME redirects, see URI for CNAME aliasing.

To connect your domain to Cloud Storage:

  1. Create a CNAME record that points to c.storage.googleapis.com.

    Your domain registration service should have a way for you to administer your domain, including adding a CNAME record. For example, if you use Cloud DNS, instructions for adding resource records can be found on the Add, modify, and delete records page.
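
    As an illustrative sketch only (not one of the official steps), if the domain's DNS is managed in Cloud DNS, a record like the one above could be created with the gcloud CLI. The managed zone name my-zone and the TTL here are placeholder assumptions:

    gcloud dns record-sets create www.example.com. --zone=my-zone \
      --type=CNAME --ttl=300 --rrdatas=c.storage.googleapis.com.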

Create a bucket

Create a bucket whose name matches the CNAME you created for your domain.

For example, if you added a CNAME record pointing the www subdomain of example.com to c.storage.googleapis.com., then create a bucket with the name www.example.com.

To create a bucket:

Console

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  2. Click Create to open the bucket creation form.

  3. Enter your bucket information and click Continue to complete each step:

    • The Name of your bucket, which matches the hostname associated with your CNAME record.

    • Select the Location type and Location of your bucket. For example, Region and us-east1.

    • Select Standard Storage for the Storage class.

    • Select Uniform for Access control.

  4. Click Create.

If successful, you are taken to the bucket's page with the text "There are no objects in this bucket."

Command line

Use the buckets create command:

gcloud storage buckets create gs://www.example.com

If successful, the command returns the following:

Creating gs://www.example.com/...

Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

namespace gcs = ::google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string const& bucket_name,
   std::string const& storage_class, std::string const& location) {
  StatusOr<gcs::BucketMetadata> bucket_metadata =
      client.CreateBucket(bucket_name, gcs::BucketMetadata()
                                           .set_storage_class(storage_class)
                                           .set_location(location));
  if (!bucket_metadata) throw std::move(bucket_metadata).status();

  std::cout << "Bucket " << bucket_metadata->name() << " created."
            << "\nFull Metadata: " << *bucket_metadata << "\n";
}

C#

For more information, see the Cloud Storage C# API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


using Google.Apis.Storage.v1.Data;
using Google.Cloud.Storage.V1;
using System;

public class CreateRegionalBucketSample
{
    /// <summary>
    /// Creates a storage bucket with region.
    /// </summary>
    /// <param name="projectId">The ID of the project to create the buckets in.</param>
    /// <param name="location">The location of the bucket. Object data for objects in the bucket resides in 
    /// physical storage within this region. Defaults to US.</param>
    /// <param name="bucketName">The name of the bucket to create.</param>
    /// <param name="storageClass">The bucket's default storage class, used whenever no storageClass is specified
    /// for a newly-created object. This defines how objects in the bucket are stored
    /// and determines the SLA and the cost of storage. Values include MULTI_REGIONAL,
    /// REGIONAL, STANDARD, NEARLINE, COLDLINE, ARCHIVE, and DURABLE_REDUCED_AVAILABILITY.
    /// If this value is not specified when the bucket is created, it will default to
    /// STANDARD.</param>
    public Bucket CreateRegionalBucket(
        string projectId = "your-project-id",
        string bucketName = "your-unique-bucket-name",
        string location = "us-west1",
        string storageClass = "REGIONAL")
    {
        var storage = StorageClient.Create();
        Bucket bucket = new Bucket
        {
            Location = location,
            Name = bucketName,
            StorageClass = storageClass
        };
        var newlyCreatedBucket = storage.CreateBucket(projectId, bucket);
        Console.WriteLine($"Created {bucketName}.");
        return newlyCreatedBucket;
    }
}

Go

For more information, see the Cloud Storage Go API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import (
	"context"
	"fmt"
	"io"
	"time"

	"cloud.google.com/go/storage"
)

// createBucketClassLocation creates a new bucket in the project with Storage class and
// location.
func createBucketClassLocation(w io.Writer, projectID, bucketName string) error {
	// projectID := "my-project-id"
	// bucketName := "bucket-name"
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("storage.NewClient: %w", err)
	}
	defer client.Close()

	ctx, cancel := context.WithTimeout(ctx, time.Second*30)
	defer cancel()

	storageClassAndLocation := &storage.BucketAttrs{
		StorageClass: "COLDLINE",
		Location:     "asia",
	}
	bucket := client.Bucket(bucketName)
	if err := bucket.Create(ctx, projectID, storageClassAndLocation); err != nil {
		return fmt.Errorf("Bucket(%q).Create: %w", bucketName, err)
	}
	fmt.Fprintf(w, "Created bucket %v in %v with storage class %v\n", bucketName, storageClassAndLocation.Location, storageClassAndLocation.StorageClass)
	return nil
}

Java

For more information, see the Cloud Storage Java API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import com.google.cloud.storage.Bucket;
import com.google.cloud.storage.BucketInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageClass;
import com.google.cloud.storage.StorageOptions;

public class CreateBucketWithStorageClassAndLocation {
  public static void createBucketWithStorageClassAndLocation(String projectId, String bucketName) {
    // The ID of your GCP project
    // String projectId = "your-project-id";

    // The ID to give your GCS bucket
    // String bucketName = "your-unique-bucket-name";

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();

    // See the StorageClass documentation for other valid storage classes:
    // https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/storage/StorageClass.html
    StorageClass storageClass = StorageClass.COLDLINE;

    // See this documentation for other valid locations:
    // http://g.co/cloud/storage/docs/bucket-locations#location-mr
    String location = "ASIA";

    Bucket bucket =
        storage.create(
            BucketInfo.newBuilder(bucketName)
                .setStorageClass(storageClass)
                .setLocation(location)
                .build());

    System.out.println(
        "Created bucket "
            + bucket.getName()
            + " in "
            + bucket.getLocation()
            + " with storage class "
            + bucket.getStorageClass());
  }
}

Node.js

For more information, see the Cloud Storage Node.js API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The name of a storage class
// See the StorageClass documentation for other valid storage classes:
// https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/storage/StorageClass.html
// const storageClass = 'coldline';

// The name of a location
// See this documentation for other valid locations:
// http://g.co/cloud/storage/docs/locations#location-mr
// const location = 'ASIA';

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
// The bucket in the sample below will be created in the project associated with this client.
// For more information, please see https://cloud.google.com/docs/authentication/production or https://googleapis.dev/nodejs/storage/latest/Storage.html
const storage = new Storage();

async function createBucketWithStorageClassAndLocation() {
  // For default values see: https://cloud.google.com/storage/docs/locations and
  // https://cloud.google.com/storage/docs/storage-classes
  const [bucket] = await storage.createBucket(bucketName, {
    location,
    [storageClass]: true,
  });

  console.log(
    `${bucket.name} created with ${storageClass} class in ${location}`
  );
}

createBucketWithStorageClassAndLocation().catch(console.error);

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

use Google\Cloud\Storage\StorageClient;

/**
 * Create a new bucket with a custom default storage class and location.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 */
function create_bucket_class_location(string $bucketName): void
{
    $storage = new StorageClient();
    $storageClass = 'COLDLINE';
    $location = 'ASIA';
    $bucket = $storage->createBucket($bucketName, [
        'storageClass' => $storageClass,
        'location' => $location,
    ]);

    printf('Created bucket %s in %s with storage class %s', $bucketName, $storageClass, $location);
}

Python

For more information, see the Cloud Storage Python API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.cloud import storage


def create_bucket_class_location(bucket_name):
    """
    Create a new bucket in the US region with the coldline storage
    class
    """
    # bucket_name = "your-new-bucket-name"

    storage_client = storage.Client()

    bucket = storage_client.bucket(bucket_name)
    bucket.storage_class = "COLDLINE"
    new_bucket = storage_client.create_bucket(bucket, location="us")

    print(
        "Created bucket {} in {} with storage class {}".format(
            new_bucket.name, new_bucket.location, new_bucket.storage_class
        )
    )
    return new_bucket

Ruby

For more information, see the Cloud Storage Ruby API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

def create_bucket_class_location bucket_name:
  # The ID to give your GCS bucket
  # bucket_name = "your-unique-bucket-name"

  require "google/cloud/storage"

  storage = Google::Cloud::Storage.new
  bucket  = storage.create_bucket bucket_name,
                                  location:      "ASIA",
                                  storage_class: "COLDLINE"

  puts "Created bucket #{bucket.name} in #{bucket.location} with #{bucket.storage_class} class"
end

REST API

JSON API

  1. Get an authorization access token from the OAuth 2.0 Playground. Configure the playground to use your own OAuth credentials. For instructions, see API authentication.
  2. Create a JSON file that assigns your website name to the name property:

    {
      "name": "www.example.com"
    }
  3. Use cURL to call the JSON API. For www.example.com:

    curl -X POST --data-binary @website-bucket-name.json \
      -H "Authorization: Bearer ya29.AHES6ZRVmB7fkLtd1XTmq6mo0S1wqZZi3-Lh_s-6Uw7p8vtgSwg" \
      -H "Content-Type: application/json" \
      "https://storage.googleapis.com/storage/v1/b?project=my-static-website"

XML API

  1. Get an authorization access token from the OAuth 2.0 Playground. Configure the playground to use your own OAuth credentials. For instructions, see API authentication.
  2. Use cURL to call the XML API to create a bucket with your website name. For www.example.com:

    curl -X PUT \
      -H "Authorization: Bearer ya29.AHES6ZRVmB7fkLtd1XTmq6mo0S1wqZZi3-Lh_s-6Uw7p8vtgSwg" \
      -H "x-goog-project-id: my-static-website" \
      "https://storage.googleapis.com/www.example.com"

Upload your site's files

To add the files you want your website to serve to your bucket:

Console

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  2. In the list of buckets, click the name of the bucket that you created.

  3. In the Objects tab for the bucket, click the Upload files button.

  4. In the file dialog, browse to the desired file and select it.

After the upload completes, you should see the file name along with file information displayed in the bucket.

Command line

Use the gcloud storage cp command to copy files to your bucket. For example, to copy the file index.html from its current location Desktop, use the following command:

gcloud storage cp Desktop/index.html gs://www.example.com

If successful, the response resembles the following example:

Completed files 1/1 | 164.3kiB/164.3kiB

Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

namespace gcs = ::google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string const& file_name,
   std::string const& bucket_name, std::string const& object_name) {
  // Note that the client library automatically computes a hash on the
  // client-side to verify data integrity during transmission.
  StatusOr<gcs::ObjectMetadata> metadata = client.UploadFile(
      file_name, bucket_name, object_name, gcs::IfGenerationMatch(0));
  if (!metadata) throw std::move(metadata).status();

  std::cout << "Uploaded " << file_name << " to object " << metadata->name()
            << " in bucket " << metadata->bucket()
            << "\nFull metadata: " << *metadata << "\n";
}

C#

For more information, see the Cloud Storage C# API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


using Google.Cloud.Storage.V1;
using System;
using System.IO;

public class UploadFileSample
{
    public void UploadFile(
        string bucketName = "your-unique-bucket-name",
        string localPath = "my-local-path/my-file-name",
        string objectName = "my-file-name")
    {
        var storage = StorageClient.Create();
        using var fileStream = File.OpenRead(localPath);
        storage.UploadObject(bucketName, objectName, null, fileStream);
        Console.WriteLine($"Uploaded {objectName}.");
    }
}

Go

For more information, see the Cloud Storage Go API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import (
	"context"
	"fmt"
	"io"
	"os"
	"time"

	"cloud.google.com/go/storage"
)

// uploadFile uploads an object.
func uploadFile(w io.Writer, bucket, object string) error {
	// bucket := "bucket-name"
	// object := "object-name"
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("storage.NewClient: %w", err)
	}
	defer client.Close()

	// Open local file.
	f, err := os.Open("notes.txt")
	if err != nil {
		return fmt.Errorf("os.Open: %w", err)
	}
	defer f.Close()

	ctx, cancel := context.WithTimeout(ctx, time.Second*50)
	defer cancel()

	o := client.Bucket(bucket).Object(object)

	// Optional: set a generation-match precondition to avoid potential race
	// conditions and data corruptions. The request to upload is aborted if the
	// object's generation number does not match your precondition.
	// For an object that does not yet exist, set the DoesNotExist precondition.
	o = o.If(storage.Conditions{DoesNotExist: true})
	// If the live object already exists in your bucket, set instead a
	// generation-match precondition using the live object's generation number.
	// attrs, err := o.Attrs(ctx)
	// if err != nil {
	// 	return fmt.Errorf("object.Attrs: %w", err)
	// }
	// o = o.If(storage.Conditions{GenerationMatch: attrs.Generation})

	// Upload an object with storage.Writer.
	wc := o.NewWriter(ctx)
	if _, err = io.Copy(wc, f); err != nil {
		return fmt.Errorf("io.Copy: %w", err)
	}
	if err := wc.Close(); err != nil {
		return fmt.Errorf("Writer.Close: %w", err)
	}
	fmt.Fprintf(w, "Blob %v uploaded.\n", object)
	return nil
}

Java

For more information, see the Cloud Storage Java API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.io.IOException;
import java.nio.file.Paths;

public class UploadObject {
  public static void uploadObject(
      String projectId, String bucketName, String objectName, String filePath) throws IOException {
    // The ID of your GCP project
    // String projectId = "your-project-id";

    // The ID of your GCS bucket
    // String bucketName = "your-unique-bucket-name";

    // The ID of your GCS object
    // String objectName = "your-object-name";

    // The path to your file to upload
    // String filePath = "path/to/your/file"

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
    BlobId blobId = BlobId.of(bucketName, objectName);
    BlobInfo blobInfo = BlobInfo.newBuilder(blobId).build();

    // Optional: set a generation-match precondition to avoid potential race
    // conditions and data corruptions. The request returns a 412 error if the
    // preconditions are not met.
    Storage.BlobWriteOption precondition;
    if (storage.get(bucketName, objectName) == null) {
      // For a target object that does not yet exist, set the DoesNotExist precondition.
      // This will cause the request to fail if the object is created before the request runs.
      precondition = Storage.BlobWriteOption.doesNotExist();
    } else {
      // If the destination already exists in your bucket, instead set a generation-match
      // precondition. This will cause the request to fail if the existing object's generation
      // changes before the request runs.
      precondition =
          Storage.BlobWriteOption.generationMatch(
              storage.get(bucketName, objectName).getGeneration());
    }
    storage.createFrom(blobInfo, Paths.get(filePath), precondition);

    System.out.println(
        "File " + filePath + " uploaded to bucket " + bucketName + " as " + objectName);
  }
}

Node.js

For more information, see the Cloud Storage Node.js API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

The following sample uploads an individual object:

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The path to your file to upload
// const filePath = 'path/to/your/file';

// The new ID for your GCS file
// const destFileName = 'your-new-file-name';

// The generation-match precondition used below (0 means the object must not already exist)
// const generationMatchPrecondition = 0;

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function uploadFile() {
  const options = {
    destination: destFileName,
    // Optional:
    // Set a generation-match precondition to avoid potential race conditions
    // and data corruptions. The request to upload is aborted if the object's
    // generation number does not match your precondition. For a destination
    // object that does not yet exist, set the ifGenerationMatch precondition to 0
    // If the destination object already exists in your bucket, set instead a
    // generation-match precondition using its generation number.
    preconditionOpts: {ifGenerationMatch: generationMatchPrecondition},
  };

  await storage.bucket(bucketName).upload(filePath, options);
  console.log(`${filePath} uploaded to ${bucketName}`);
}

uploadFile().catch(console.error);

The following sample uploads multiple objects concurrently:

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The ID of the first GCS file to upload
// const firstFilePath = 'your-first-file-name';

// The ID of the second GCS file to upload
// const secondFilePath = 'your-second-file-name';

// Imports the Google Cloud client library
const {Storage, TransferManager} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

// Creates a transfer manager client
const transferManager = new TransferManager(storage.bucket(bucketName));

async function uploadManyFilesWithTransferManager() {
  // Uploads the files
  await transferManager.uploadManyFiles([firstFilePath, secondFilePath]);

  for (const filePath of [firstFilePath, secondFilePath]) {
    console.log(`${filePath} uploaded to ${bucketName}.`);
  }
}

uploadManyFilesWithTransferManager().catch(console.error);

The following sample uploads all objects with a common prefix concurrently:

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The local directory to upload
// const directoryName = 'your-directory';

// Imports the Google Cloud client library
const {Storage, TransferManager} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

// Creates a transfer manager client
const transferManager = new TransferManager(storage.bucket(bucketName));

async function uploadDirectoryWithTransferManager() {
  // Uploads the directory
  await transferManager.uploadManyFiles(directoryName);

  console.log(`${directoryName} uploaded to ${bucketName}.`);
}

uploadDirectoryWithTransferManager().catch(console.error);

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

use Google\Cloud\Storage\StorageClient;

/**
 * Upload a file.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 * @param string $objectName The name of your Cloud Storage object.
 *        (e.g. 'my-object')
 * @param string $source The path to the file to upload.
 *        (e.g. '/path/to/your/file')
 */
function upload_object(string $bucketName, string $objectName, string $source): void
{
    $storage = new StorageClient();
    if (!$file = fopen($source, 'r')) {
        throw new \InvalidArgumentException('Unable to open file for reading');
    }
    $bucket = $storage->bucket($bucketName);
    $object = $bucket->upload($file, [
        'name' => $objectName
    ]);
    printf('Uploaded %s to gs://%s/%s' . PHP_EOL, basename($source), $bucketName, $objectName);
}

Python

For more information, see the Cloud Storage Python API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

The following sample uploads an individual object:

from google.cloud import storage


def upload_blob(bucket_name, source_file_name, destination_blob_name):
    """Uploads a file to the bucket."""
    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"
    # The path to your file to upload
    # source_file_name = "local/path/to/file"
    # The ID of your GCS object
    # destination_blob_name = "storage-object-name"

    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)

    # Optional: set a generation-match precondition to avoid potential race conditions
    # and data corruptions. The request to upload is aborted if the object's
    # generation number does not match your precondition. For a destination
    # object that does not yet exist, set the if_generation_match precondition to 0.
    # If the destination object already exists in your bucket, set instead a
    # generation-match precondition using its generation number.
    generation_match_precondition = 0

    blob.upload_from_filename(source_file_name, if_generation_match=generation_match_precondition)

    print(
        f"File {source_file_name} uploaded to {destination_blob_name}."
    )

The following sample uploads multiple objects concurrently:

def upload_many_blobs_with_transfer_manager(
    bucket_name, filenames, source_directory="", workers=8
):
    """Upload every file in a list to a bucket, concurrently in a process pool.

    Each blob name is derived from the filename, not including the
    `source_directory` parameter. For complete control of the blob name for each
    file (and other aspects of individual blob metadata), use
    transfer_manager.upload_many() instead.
    """

    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"

    # A list (or other iterable) of filenames to upload.
    # filenames = ["file_1.txt", "file_2.txt"]

    # The directory on your computer that is the root of all of the files in the
    # list of filenames. This string is prepended (with os.path.join()) to each
    # filename to get the full path to the file. Relative paths and absolute
    # paths are both accepted. This string is not included in the name of the
    # uploaded blob; it is only used to find the source files. An empty string
    # means "the current working directory". Note that this parameter allows
    # directory traversal (e.g. "/", "../") and is not intended for unsanitized
    # end user input.
    # source_directory=""

    # The maximum number of processes to use for the operation. The performance
    # impact of this value depends on the use case, but smaller files usually
    # benefit from a higher number of processes. Each additional process occupies
    # some CPU and memory resources until finished. Threads can be used instead
    # of processes by passing `worker_type=transfer_manager.THREAD`.
    # workers=8

    from google.cloud.storage import Client, transfer_manager

    storage_client = Client()
    bucket = storage_client.bucket(bucket_name)

    results = transfer_manager.upload_many_from_filenames(
        bucket, filenames, source_directory=source_directory, max_workers=workers
    )

    for name, result in zip(filenames, results):
        # The results list is either `None` or an exception for each filename in
        # the input list, in order.

        if isinstance(result, Exception):
            print("Failed to upload {} due to exception: {}".format(name, result))
        else:
            print("Uploaded {} to {}.".format(name, bucket.name))

The following sample uploads all objects with a common prefix concurrently:

def upload_directory_with_transfer_manager(bucket_name, source_directory, workers=8):
    """Upload every file in a directory, including all files in subdirectories.

    Each blob name is derived from the filename, not including the `directory`
    parameter itself. For complete control of the blob name for each file (and
    other aspects of individual blob metadata), use
    transfer_manager.upload_many() instead.
    """

    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"

    # The directory on your computer to upload. Files in the directory and its
    # subdirectories will be uploaded. An empty string means "the current
    # working directory".
    # source_directory=""

    # The maximum number of processes to use for the operation. The performance
    # impact of this value depends on the use case, but smaller files usually
    # benefit from a higher number of processes. Each additional process occupies
    # some CPU and memory resources until finished. Threads can be used instead
    # of processes by passing `worker_type=transfer_manager.THREAD`.
    # workers=8

    from pathlib import Path

    from google.cloud.storage import Client, transfer_manager

    storage_client = Client()
    bucket = storage_client.bucket(bucket_name)

    # Generate a list of paths (in string form) relative to the `directory`.
    # This can be done in a single list comprehension, but is expanded into
    # multiple lines here for clarity.

    # First, recursively get all files in `directory` as Path objects.
    directory_as_path_obj = Path(source_directory)
    paths = directory_as_path_obj.rglob("*")

    # Filter so the list only includes files, not directories themselves.
    file_paths = [path for path in paths if path.is_file()]

    # These paths are relative to the current working directory. Next, make them
    # relative to `directory`
    relative_paths = [path.relative_to(source_directory) for path in file_paths]

    # Finally, convert them all to strings.
    string_paths = [str(path) for path in relative_paths]

    print("Found {} files.".format(len(string_paths)))

    # Start the upload.
    results = transfer_manager.upload_many_from_filenames(
        bucket, string_paths, source_directory=source_directory, max_workers=workers
    )

    for name, result in zip(string_paths, results):
        # The results list is either `None` or an exception for each filename in
        # the input list, in order.

        if isinstance(result, Exception):
            print("Failed to upload {} due to exception: {}".format(name, result))
        else:
            print("Uploaded {} to {}.".format(name, bucket.name))

Ruby

For more information, see the Cloud Storage Ruby API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

def upload_file bucket_name:, local_file_path:, file_name: nil
  # The ID of your GCS bucket
  # bucket_name = "your-unique-bucket-name"

  # The path to your file to upload
  # local_file_path = "/local/path/to/file.txt"

  # The ID of your GCS object
  # file_name = "your-file-name"

  require "google/cloud/storage"

  storage = Google::Cloud::Storage.new
  bucket  = storage.bucket bucket_name, skip_lookup: true

  file = bucket.create_file local_file_path, file_name

  puts "Uploaded #{local_file_path} as #{file.name} in bucket #{bucket_name}"
end

REST API

JSON API

  1. Get an authorization access token from the OAuth 2.0 Playground. Configure the playground to use your own OAuth credentials. For instructions, see API authentication.
  2. Use cURL to call the JSON API with a POST Object request. For the index page of www.example.com:

    curl -X POST --data-binary @index.html \
      -H "Content-Type: text/html" \
      -H "Authorization: Bearer ya29.AHES6ZRVmB7fkLtd1XTmq6mo0S1wqZZi3-Lh_s-6Uw7p8vtgSwg" \
      "https://storage.googleapis.com/upload/storage/v1/b/www.example.com/o?uploadType=media&name=index.html"

XML API

  1. Get an authorization access token from the OAuth 2.0 Playground. Configure the playground to use your own OAuth credentials. For instructions, see API authentication.
  2. Use cURL to call the XML API with a PUT Object request. For the index page of www.example.com:

    curl -X PUT --data-binary @index.html \
      -H "Authorization: Bearer ya29.AHES6ZRVmB7fkLtd1XTmq6mo0S1wqZZi3-Lh_s-6Uw7p8vtgSwg" \
      -H "Content-Type: text/html" \
      "https://storage.googleapis.com/www.example.com/index.html"

Share your files

To make all objects in your bucket readable to anyone on the public internet:

Console

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  2. In the list of buckets, click the name of the bucket that you want to make public.

  3. Select the Permissions tab near the top of the page.

  4. If the Public access pane reads Not public, click the button labeled Remove public access prevention and click Confirm in the dialog that appears.

  5. Click the Grant access button.

    The Add principals dialog appears.

  6. In the New principals field, enter allUsers.

  7. In the Select a role drop-down menu, select the Cloud Storage submenu, and click the Storage Object Viewer option.

  8. Click Save.

  9. Click Allow public access.

Once shared publicly, a link icon appears for each object in the Public access column. You can click this icon to get the URL for the object.
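
For reference, that URL takes the form https://storage.googleapis.com/BUCKET_NAME/OBJECT_NAME. For example, using the names from this tutorial:

https://storage.googleapis.com/www.example.com/index.html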

Command line

Use the buckets add-iam-policy-binding command:

gcloud storage buckets add-iam-policy-binding gs://www.example.com --member=allUsers --role=roles/storage.objectViewer

Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

namespace gcs = ::google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string const& bucket_name) {
  auto current_policy = client.GetNativeBucketIamPolicy(
      bucket_name, gcs::RequestedPolicyVersion(3));
  if (!current_policy) throw std::move(current_policy).status();

  current_policy->set_version(3);
  current_policy->bindings().emplace_back(
      gcs::NativeIamBinding("roles/storage.objectViewer", {"allUsers"}));

  auto updated =
      client.SetNativeBucketIamPolicy(bucket_name, *current_policy);
  if (!updated) throw std::move(updated).status();

  std::cout << "Policy successfully updated: " << *updated << "\n";
}

C#

For more information, see the Cloud Storage C# API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


using Google.Apis.Storage.v1.Data;
using Google.Cloud.Storage.V1;
using System;
using System.Collections.Generic;

public class MakeBucketPublicSample
{
    public void MakeBucketPublic(string bucketName = "your-unique-bucket-name")
    {
        var storage = StorageClient.Create();

        Policy policy = storage.GetBucketIamPolicy(bucketName);

        policy.Bindings.Add(new Policy.BindingsData
        {
            Role = "roles/storage.objectViewer",
            Members = new List<string> { "allUsers" }
        });

        storage.SetBucketIamPolicy(bucketName, policy);
        Console.WriteLine(bucketName + " is now public ");
    }
}

Go

For more information, see the Cloud Storage Go API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import (
	"context"
	"fmt"
	"io"

	"cloud.google.com/go/iam"
	"cloud.google.com/go/storage"
	iampb "google.golang.org/genproto/googleapis/iam/v1"
)

// setBucketPublicIAM makes all objects in a bucket publicly readable.
func setBucketPublicIAM(w io.Writer, bucketName string) error {
	// bucketName := "bucket-name"
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("storage.NewClient: %w", err)
	}
	defer client.Close()

	policy, err := client.Bucket(bucketName).IAM().V3().Policy(ctx)
	if err != nil {
		return fmt.Errorf("Bucket(%q).IAM().V3().Policy: %w", bucketName, err)
	}
	role := "roles/storage.objectViewer"
	policy.Bindings = append(policy.Bindings, &iampb.Binding{
		Role:    role,
		Members: []string{iam.AllUsers},
	})
	if err := client.Bucket(bucketName).IAM().V3().SetPolicy(ctx, policy); err != nil {
		return fmt.Errorf("Bucket(%q).IAM().SetPolicy: %w", bucketName, err)
	}
	fmt.Fprintf(w, "Bucket %v is now publicly readable\n", bucketName)
	return nil
}

Java

For more information, see the Cloud Storage Java API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import com.google.cloud.Identity;
import com.google.cloud.Policy;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import com.google.cloud.storage.StorageRoles;

public class MakeBucketPublic {
  public static void makeBucketPublic(String projectId, String bucketName) {
    // The ID of your GCP project
    // String projectId = "your-project-id";

    // The ID of your GCS bucket
    // String bucketName = "your-unique-bucket-name";

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
    Policy originalPolicy = storage.getIamPolicy(bucketName);
    storage.setIamPolicy(
        bucketName,
        originalPolicy
            .toBuilder()
            .addIdentity(StorageRoles.objectViewer(), Identity.allUsers()) // All users can view
            .build());

    System.out.println("Bucket " + bucketName + " is now publicly readable");
  }
}

Node.js

For more information, see the Cloud Storage Node.js API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function makeBucketPublic() {
  await storage.bucket(bucketName).makePublic();

  console.log(`Bucket ${bucketName} is now publicly readable`);
}

makeBucketPublic().catch(console.error);

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

use Google\Cloud\Storage\StorageClient;

/**
 * Update the specified bucket's IAM configuration to make it publicly accessible.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 */
function set_bucket_public_iam(string $bucketName): void
{
    $storage = new StorageClient();
    $bucket = $storage->bucket($bucketName);

    $policy = $bucket->iam()->policy(['requestedPolicyVersion' => 3]);
    $policy['version'] = 3;

    $role = 'roles/storage.objectViewer';
    $members = ['allUsers'];

    $policy['bindings'][] = [
        'role' => $role,
        'members' => $members
    ];

    $bucket->iam()->setPolicy($policy);

    printf('Bucket %s is now public', $bucketName);
}

Python

For more information, see the Cloud Storage Python API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from typing import List

from google.cloud import storage


def set_bucket_public_iam(
    bucket_name: str = "your-bucket-name",
    members: List[str] = ["allUsers"],
):
    """Set a public IAM Policy to bucket"""
    # bucket_name = "your-bucket-name"

    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)

    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append(
        {"role": "roles/storage.objectViewer", "members": members}
    )

    bucket.set_iam_policy(policy)

    print(f"Bucket {bucket.name} is now publicly readable")

Ruby

For more information, see the Cloud Storage Ruby API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

def set_bucket_public_iam bucket_name:
  # The ID of your GCS bucket
  # bucket_name = "your-unique-bucket-name"

  require "google/cloud/storage"

  storage = Google::Cloud::Storage.new
  bucket = storage.bucket bucket_name

  bucket.policy do |p|
    p.add "roles/storage.objectViewer", "allUsers"
  end

  puts "Bucket #{bucket_name} is now publicly readable"
end

REST API

JSON API

  1. Get an authorization access token from the OAuth 2.0 Playground. Configure the playground to use your own OAuth credentials. For instructions, see API authentication.
  2. Create a JSON file that contains the following information:

    {
      "bindings":[
        {
          "role": "roles/storage.objectViewer",
          "members":["allUsers"]
        }
      ]
    }
  3. Use cURL to call the JSON API with a PUT Bucket request:

    curl -X PUT --data-binary @JSON_FILE_NAME \
      -H "Authorization: Bearer OAUTH2_TOKEN" \
      -H "Content-Type: application/json" \
      "https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/iam"

    Where:

    • JSON_FILE_NAME is the path for the JSON file that you created in Step 2.
    • OAUTH2_TOKEN is the access token you created in Step 1.
    • BUCKET_NAME is the name of the bucket whose objects you want to make public. For example, my-bucket.

XML API

Making all objects in a bucket publicly readable is not supported by the XML API. Use the Google Cloud console or gcloud storage instead.

You can alternatively make groups of objects in your bucket publicly accessible, but generally speaking, making all files in your bucket publicly accessible is easier and faster.

Visitors receive an http 403 response code when requesting the URL for a non-public or non-existent file. See the next section for information on how to add an error page that uses an http 404 response code.

Recommended: Assign specialty pages

You can assign an index page suffix, which is controlled by the MainPageSuffix property, and a custom error page, which is controlled by the NotFoundPage property. Assigning either is optional, but without an index page, nothing is served when users access your top-level site, for example, http://www.example.com. For more information, see Website configuration examples.

In the following samples, the MainPageSuffix is set to index.html and the NotFoundPage is set to 404.html:

Console

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  2. In the list of buckets, find the bucket you created.

  3. Click the Bucket overflow menu associated with the bucket and select Edit website configuration.

  4. In the website configuration dialog, specify the main page and the error page.

  5. Click Save.

Command line

Use the buckets update command with the --web-main-page-suffix and --web-error-page flags:

gcloud storage buckets update gs://www.example.com --web-main-page-suffix=index.html --web-error-page=404.html

If successful, the command returns the following:

Updating gs://www.example.com/...
  Completed 1

Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

namespace gcs = ::google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string const& bucket_name,
   std::string const& main_page_suffix, std::string const& not_found_page) {
  StatusOr<gcs::BucketMetadata> original =
      client.GetBucketMetadata(bucket_name);

  if (!original) throw std::move(original).status();
  StatusOr<gcs::BucketMetadata> patched = client.PatchBucket(
      bucket_name,
      gcs::BucketMetadataPatchBuilder().SetWebsite(
          gcs::BucketWebsite{main_page_suffix, not_found_page}),
      gcs::IfMetagenerationMatch(original->metageneration()));
  if (!patched) throw std::move(patched).status();

  if (!patched->has_website()) {
    std::cout << "Static website configuration is not set for bucket "
              << patched->name() << "\n";
    return;
  }

  std::cout << "Static website configuration successfully set for bucket "
            << patched->name() << "\nNew main page suffix is: "
            << patched->website().main_page_suffix
            << "\nNew not found page is: "
            << patched->website().not_found_page << "\n";
}

C#

For more information, see the Cloud Storage C# API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


using Google.Apis.Storage.v1.Data;
using Google.Cloud.Storage.V1;
using System;

public class BucketWebsiteConfigurationSample
{
    public Bucket BucketWebsiteConfiguration(
        string bucketName = "your-bucket-name",
        string mainPageSuffix = "index.html",
        string notFoundPage = "404.html")
    {
        var storage = StorageClient.Create();
        var bucket = storage.GetBucket(bucketName);

        if (bucket.Website == null)
        {
            bucket.Website = new Bucket.WebsiteData();
        }
        bucket.Website.MainPageSuffix = mainPageSuffix;
        bucket.Website.NotFoundPage = notFoundPage;

        bucket = storage.UpdateBucket(bucket);
        Console.WriteLine($"Static website bucket {bucketName} is set up to use {mainPageSuffix} as the index page and {notFoundPage} as the 404 not found page.");
        return bucket;
    }
}

Go

For more information, see the Cloud Storage Go API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import (
	"context"
	"fmt"
	"io"
	"time"

	"cloud.google.com/go/storage"
)

// setBucketWebsiteInfo sets website configuration on a bucket.
func setBucketWebsiteInfo(w io.Writer, bucketName, indexPage, notFoundPage string) error {
	// bucketName := "www.example.com"
	// indexPage := "index.html"
	// notFoundPage := "404.html"
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("storage.NewClient: %w", err)
	}
	defer client.Close()

	ctx, cancel := context.WithTimeout(ctx, time.Second*10)
	defer cancel()

	bucket := client.Bucket(bucketName)
	bucketAttrsToUpdate := storage.BucketAttrsToUpdate{
		Website: &storage.BucketWebsite{
			MainPageSuffix: indexPage,
			NotFoundPage:   notFoundPage,
		},
	}
	if _, err := bucket.Update(ctx, bucketAttrsToUpdate); err != nil {
		return fmt.Errorf("Bucket(%q).Update: %w", bucketName, err)
	}
	fmt.Fprintf(w, "Static website bucket %v is set up to use %v as the index page and %v as the 404 page\n", bucketName, indexPage, notFoundPage)
	return nil
}

Java

For more information, see the Cloud Storage Java API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import com.google.cloud.storage.Bucket;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class SetBucketWebsiteInfo {
  public static void setBucketWebsiteInfo(
      String projectId, String bucketName, String indexPage, String notFoundPage) {
    // The ID of your GCP project
    // String projectId = "your-project-id";

    // The ID of your static website bucket
    // String bucketName = "www.example.com";

    // The index page for a static website bucket
    // String indexPage = "index.html";

    // The 404 page for a static website bucket
    // String notFoundPage = "404.html";

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
    Bucket bucket = storage.get(bucketName);
    bucket.toBuilder().setIndexPage(indexPage).setNotFoundPage(notFoundPage).build().update();

    System.out.println(
        "Static website bucket "
            + bucketName
            + " is set up to use "
            + indexPage
            + " as the index page and "
            + notFoundPage
            + " as the 404 page");
  }
}

Node.js

For more information, see the Cloud Storage Node.js API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The name of the main page
// const mainPageSuffix = 'http://example.com';

// The Name of a 404 page
// const notFoundPage = 'http://example.com/404.html';

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function addBucketWebsiteConfiguration() {
  await storage.bucket(bucketName).setMetadata({
    website: {
      mainPageSuffix,
      notFoundPage,
    },
  });

  console.log(
    `Static website bucket ${bucketName} is set up to use ${mainPageSuffix} as the index page and ${notFoundPage} as the 404 page`
  );
}

addBucketWebsiteConfiguration().catch(console.error);

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

use Google\Cloud\Storage\StorageClient;

/**
 * Update the given bucket's website configuration.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 * @param string $indexPageObject the name of an object in the bucket to use as
 *        an index page for a static website bucket. (e.g. 'index.html')
 * @param string $notFoundPageObject the name of an object in the bucket to use
 *        as the 404 Not Found page. (e.g. '404.html')
 */
function define_bucket_website_configuration(string $bucketName, string $indexPageObject, string $notFoundPageObject): void
{
    $storage = new StorageClient();
    $bucket = $storage->bucket($bucketName);

    $bucket->update([
        'website' => [
            'mainPageSuffix' => $indexPageObject,
            'notFoundPage' => $notFoundPageObject
        ]
    ]);

    printf(
        'Static website bucket %s is set up to use %s as the index page and %s as the 404 page.',
        $bucketName,
        $indexPageObject,
        $notFoundPageObject
    );
}

Python

For more information, see the Cloud Storage Python API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.cloud import storage


def define_bucket_website_configuration(bucket_name, main_page_suffix, not_found_page):
    """Configure website-related properties of bucket"""
    # bucket_name = "your-bucket-name"
    # main_page_suffix = "index.html"
    # not_found_page = "404.html"

    storage_client = storage.Client()

    bucket = storage_client.get_bucket(bucket_name)
    bucket.configure_website(main_page_suffix, not_found_page)
    bucket.patch()

    print(
        "Static website bucket {} is set up to use {} as the index page and {} as the 404 page".format(
            bucket.name, main_page_suffix, not_found_page
        )
    )
    return bucket

Ruby

For more information, see the Cloud Storage Ruby API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

def define_bucket_website_configuration bucket_name:, main_page_suffix:, not_found_page:
  # The ID of your static website bucket
  # bucket_name = "www.example.com"

  # The index page for a static website bucket
  # main_page_suffix = "index.html"

  # The 404 page for a static website bucket
  # not_found_page = "404.html"

  require "google/cloud/storage"

  storage = Google::Cloud::Storage.new
  bucket = storage.bucket bucket_name

  bucket.update do |b|
    b.website_main = main_page_suffix
    b.website_404 = not_found_page
  end

  puts "Static website bucket #{bucket_name} is set up to use #{main_page_suffix} as the index page and " \
       "#{not_found_page} as the 404 page"
end

REST API

JSON API

  1. Get an authorization access token from the OAuth 2.0 Playground. Configure the playground to use your own OAuth credentials. For instructions, see API authentication.
  2. Create a JSON file that sets the mainPageSuffix and notFoundPage properties in a website object to the desired pages:

    {
      "website":{
        "mainPageSuffix": "index.html",
        "notFoundPage": "404.html"
      }
    }
  3. Use cURL to call the JSON API with a PATCH Bucket request. For www.example.com:

    curl -X PATCH --data-binary @web-config.json \
      -H "Authorization: Bearer ya29.AHES6ZRVmB7fkLtd1XTmq6mo0S1wqZZi3-Lh_s-6Uw7p8vtgSwg" \
      -H "Content-Type: application/json" \
      "https://storage.googleapis.com/storage/v1/b/www.example.com"

XML API

  1. Get an authorization access token from the OAuth 2.0 Playground. Configure the playground to use your own OAuth credentials. For instructions, see API authentication.
  2. Create an XML file that sets the MainPageSuffix and NotFoundPage elements in a WebsiteConfiguration element to the desired pages:

    <WebsiteConfiguration>
      <MainPageSuffix>index.html</MainPageSuffix>
      <NotFoundPage>404.html</NotFoundPage>
    </WebsiteConfiguration>
  3. Use cURL to call the XML API with a PUT Bucket request and websiteConfig query string parameter. For www.example.com:

    curl -X PUT --data-binary @web-config.xml \
      -H "Authorization: Bearer ya29.AHES6ZRVmB7fkLtd1XTmq6mo0S1wqZZi3-Lh_s-6Uw7p8vtgSwg" \
      https://storage.googleapis.com/www.example.com?websiteConfig

Test the website

Verify that content is served from the bucket by requesting the domain name in a browser. You can do this with a path to an object or with just the domain name, if you set the MainPageSuffix property.

For example, if you have an object named test.html stored in a bucket named www.example.com, check that it's accessible by going to www.example.com/test.html in your browser.
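
You can also check from the command line. As a quick sketch (assuming the names used in this tutorial and that the CNAME record has propagated), a request like the following should return an HTTP 200 status once the object is public:

curl -I http://www.example.com/test.html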

Clean up

After you finish the tutorial, you can clean up the resources that you created so that they stop using quota and incurring charges. The following sections describe how to delete or turn off these resources.

Delete the project

The easiest way to eliminate billing is to delete the project that you created for the tutorial.

To delete the project:

  1. In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

What's next