In some cases, when you upgrade your client library for Cloud Bigtable, you also need to update your application's code or configuration. This page explains which client library upgrades require you to update your application.
If you are skipping over some versions of the client library, be sure to check for instructions that relate to the versions you are skipping. For example, if you are upgrading from version 1.0.0 to version 3.0.0, and this page describes code changes that are necessary when upgrading to version 2.0.0, you may need to make those code changes in your application.
HBase client for Java
Upgrading to 1.0.0-pre4 and later
In version 1.0.0-pre4, the bigtable-hbase-1.3 artifacts are no longer provided. If you were using one of these artifacts, see Client Libraries to learn about the artifacts in the current version of the client library.
In addition, version 1.0.0-pre4 includes a new Apache Beam-compatible version of the import/export tool. For instructions on how to use the new tool, see Exporting Data as Sequence Files and Importing Data from Sequence Files.
Upgrading to 1.0.0-pre3 and later
Version 1.0.0-pre3 introduces a new version of the Cloud Dataflow connector.
The new connector is compatible with Apache Beam. If you are using the previous version of the connector, update your code to use the new version. See the Cloud Dataflow release notes for information about migrating from the Cloud Dataflow 1.x SDK to the Cloud Dataflow 2.x (Beam-compatible) SDK.
Upgrading to 1.0.0-pre2 and later
In version 1.0.0-pre2, the Cloud Bigtable Maven artifacts include the netty-tcnative-boringssl-static library. You should update your Maven project to remove your explicit dependency on the netty-tcnative-boringssl-static artifact, which is now bundled with the Cloud Bigtable artifacts.
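As a sketch, the dependency element to delete from your pom.xml looks like the following (the version element shown is a placeholder; it will match whatever release you previously declared):

```xml
<!-- Remove this explicit dependency from pom.xml; the library is now
     bundled with the Cloud Bigtable Maven artifacts. -->
<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty-tcnative-boringssl-static</artifactId>
  <version><!-- your previously declared version --></version>
</dependency>
```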
Upgrading to 1.0.0-pre1 and later
Version 1.0.0-pre1 provides a new set of Maven artifacts:
bigtable-hbase-1.x: Use this artifact for standalone applications where you control your dependencies.
bigtable-hbase-1.x-hadoop: Use this artifact for Hadoop environments.
bigtable-hbase-1.x-shaded: Use this artifact for environments other than Hadoop that require older versions of the HBase client for Java's dependencies, such as protobuf and Guava.
Update your Maven project to use whichever of these artifacts fits your environment.
In addition, you must update your Maven project to remove the dependency on the
hbase-client artifact. This artifact is no longer required.
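For example, a standalone application's pom.xml would replace the old hbase-client dependency with the bigtable-hbase-1.x artifact. A minimal sketch, assuming the com.google.cloud.bigtable group ID (the version element is a placeholder for the release you are upgrading to):

```xml
<!-- Replaces the removed hbase-client dependency. -->
<dependency>
  <groupId>com.google.cloud.bigtable</groupId>
  <artifactId>bigtable-hbase-1.x</artifactId>
  <version><!-- the 1.0.0-pre1 or later release you use --></version>
</dependency>
```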
Finally, if your configuration settings (in your code or in an hbase-site.xml file) include a value for hbase.client.connection.impl, you must change that value to the connection class provided by the artifact you are now using.
Upgrading to 0.9.6.2 and later
With version 0.9.6.2, you must upgrade to version 1.1.33.Fork26 of the netty-tcnative-boringssl-static library.
Upgrading to 0.9.1 and later
With version 0.9.1, you must upgrade to version 1.1.33.Fork19 of the netty-tcnative-boringssl-static library.
In addition, you can use a single netty-tcnative-boringssl-static JAR file for all supported platforms (Linux, macOS, and Windows). You no longer need to use an OS-specific classifier in your Maven project. The following example shows how to add the required artifact to your Maven pom.xml file:
<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty-tcnative-boringssl-static</artifactId>
  <version>1.1.33.Fork19</version>
</dependency>
Upgrading to 0.9.0 and later
Version 0.9.0 changes the way you connect to Cloud Bigtable. Instead of specifying a cluster ID and zone, you specify an instance ID. You can find the instance ID by visiting the Google Cloud Platform Console.
If you use an hbase-site.xml file to connect to Cloud Bigtable, you must make the following changes:
- Add the property google.bigtable.instance.id, with the value set to your instance ID.
- Remove the properties that specify your cluster ID and your zone; these settings are no longer used.
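A minimal sketch of the updated hbase-site.xml, showing only the google.bigtable.instance.id property described above (the value shown is a placeholder; any other properties in your file stay where they are):

```xml
<!-- hbase-site.xml: connect by instance ID instead of cluster ID and zone -->
<configuration>
  <property>
    <name>google.bigtable.instance.id</name>
    <value>your-instance-id</value>
  </property>
</configuration>
```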
If you connect to Cloud Bigtable by calling
BigtableConfiguration.connect(), you must update your code as shown below:
// Old code
BigtableConfiguration.connect(projectId, zone, clusterId);

// New code
BigtableConfiguration.connect(projectId, instanceId);
Updating the Cloud Dataflow connector
If you use the Cloud Dataflow connector for Cloud Bigtable, you must update the code that creates a CloudBigtableScanConfiguration object:
// Old code
CloudBigtableScanConfiguration config =
    new CloudBigtableScanConfiguration.Builder()
        .withProjectId("project-id")
        .withClusterId("cluster-id")
        .withZoneId("zone")
        .withTableId("table")
        .build();

// New code
CloudBigtableScanConfiguration config =
    new CloudBigtableScanConfiguration.Builder()
        .withProjectId("project-id")
        .withInstanceId("instance-id")
        .withTableId("table")
        .build();
June 2016 changes
In June 2016, the Go client was updated to change the way you connect to Cloud Bigtable. Instead of specifying a cluster ID and zone, you specify an instance ID. You can find the instance ID by visiting the Google Cloud Platform Console.
You must update your code as shown below:
// Old code
adminClient, err := bigtable.NewAdminClient(ctx, project, zone, cluster)
client, err := bigtable.NewClient(ctx, project, zone, cluster)

// New code
adminClient, err := bigtable.NewAdminClient(ctx, project, instance)
client, err := bigtable.NewClient(ctx, project, instance)