Lire des données
Restez organisé à l'aide des collections
Enregistrez et classez les contenus selon vos préférences.
Lire des données de Cloud Bigtable avec Apache Beam
En savoir plus
Pour obtenir une documentation détaillée incluant cet exemple de code, consultez les articles suivants :
Exemple de code
Sauf indication contraire, le contenu de cette page est régi par une licence Creative Commons Attribution 4.0, et les échantillons de code sont régis par une licence Apache 2.0. Pour en savoir plus, consultez les Règles du site Google Developers. Java est une marque déposée d'Oracle et/ou de ses sociétés affiliées.
[[["Facile à comprendre","easyToUnderstand","thumb-up"],["J'ai pu résoudre mon problème","solvedMyProblem","thumb-up"],["Autre","otherUp","thumb-up"]],[["Difficile à comprendre","hardToUnderstand","thumb-down"],["Informations ou exemple de code incorrects","incorrectInformationOrSampleCode","thumb-down"],["Il n'y a pas l'information/les exemples dont j'ai besoin","missingTheInformationSamplesINeed","thumb-down"],["Problème de traduction","translationIssue","thumb-down"],["Autre","otherDown","thumb-down"]],[],[[["\u003cp\u003eThis code sample demonstrates how to read data from Cloud Bigtable using Apache Beam.\u003c/p\u003e\n"],["\u003cp\u003eThe example uses the \u003ccode\u003eCloudBigtableIO\u003c/code\u003e class to configure and perform the read operation with a specified scan.\u003c/p\u003e\n"],["\u003cp\u003eIt utilizes a \u003ccode\u003eScan\u003c/code\u003e object with a \u003ccode\u003eFirstKeyOnlyFilter\u003c/code\u003e to optimize the data retrieval process, setting cache blocks to false.\u003c/p\u003e\n"],["\u003cp\u003eThe pipeline reads the results from Bigtable and processes each row, printing the row key to the console, using a \u003ccode\u003eDoFn\u003c/code\u003e to apply a custom transformation.\u003c/p\u003e\n"],["\u003cp\u003eThe required configuration parameters for Bigtable, such as project ID, instance ID, and table ID, are provided through the \u003ccode\u003eBigtableOptions\u003c/code\u003e interface, which extends \u003ccode\u003eDataflowPipelineOptions\u003c/code\u003e.\u003c/p\u003e\n"]]],[],null,["# Read data from Cloud Bigtable with Apache Beam.\n\nExplore further\n---------------\n\n\nFor detailed documentation that includes this code sample, see the following:\n\n- [Bigtable HBase Beam connector](/bigtable/docs/hbase-dataflow-java)\n\nCode sample\n-----------\n\n### Java\n\n\nTo learn how to install and use the client library for Bigtable, see\n[Bigtable client libraries](/bigtable/docs/reference/libraries).\n\n\nTo authenticate to Bigtable, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n import com.google.cloud.bigtable.beam.CloudBigtableIO;\n import com.google.cloud.bigtable.beam.CloudBigtableScanConfiguration;\n import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;\n import org.apache.beam.sdk.Pipeline;\n import org.apache.beam.sdk.io.Read;\n import org.apache.beam.sdk.options.Default;\n import org.apache.beam.sdk.options.Description;\n import org.apache.beam.sdk.options.PipelineOptionsFactory;\n import org.apache.beam.sdk.transforms.DoFn;\n import org.apache.beam.sdk.transforms.ParDo;\n import org.apache.hadoop.hbase.client.Result;\n import org.apache.hadoop.hbase.client.Scan;\n import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;\n import org.apache.hadoop.hbase.util.Bytes;\n\n public class HelloWorldRead {\n public static void main(String[] args) {\n BigtableOptions options =\n PipelineOptionsFactory.fromArgs(args).withValidation().as(BigtableOptions.class);\n Pipeline p = Pipeline.create(options);\n\n Scan scan = new Scan();\n scan.setCacheBlocks(false);\n scan.setFilter(new FirstKeyOnlyFilter());\n\n CloudBigtableScanConfiguration config =\n new CloudBigtableScanConfiguration.Builder()\n .withProjectId(options.getBigtableProjectId())\n .withInstanceId(options.getBigtableInstanceId())\n .withTableId(options.getBigtableTableId())\n .withScan(scan)\n .build();\n\n p.apply(Read.from(CloudBigtableIO.read(config)))\n .apply(\n ParDo.of(\n new DoFn\u003cResult, Void\u003e() {\n @ProcessElement\n public void processElement(@Element Result row, OutputReceiver\u003cVoid\u003e out) {\n System.out.println(Bytes.toString(row.getRow()));\n }\n }));\n\n p.run().waitUntilFinish();\n }\n\n public interface BigtableOptions extends DataflowPipelineOptions {\n @Description(\"The Bigtable project ID, this can be different than your Dataflow project\")\n @Default.String(\"bigtable-project\")\n String getBigtableProjectId();\n\n void setBigtableProjectId(String bigtableProjectId);\n\n @Description(\"The Bigtable instance ID\")\n @Default.String(\"bigtable-instance\")\n String getBigtableInstanceId();\n\n void setBigtableInstanceId(String bigtableInstanceId);\n\n @Description(\"The Bigtable table ID in the instance.\")\n @Default.String(\"mobile-time-series\")\n String getBigtableTableId();\n\n void setBigtableTableId(String bigtableTableId);\n }\n }\n\nWhat's next\n-----------\n\n\nTo search and filter code samples for other Google Cloud products, see the\n[Google Cloud sample browser](/docs/samples?product=bigtable)."]]