Deprecated: Starting with Dataproc version 2.1, you can no longer use the optional HBase component. Dataproc versions 1.5 and 2.0 offer a Beta version of HBase with no support; because Dataproc clusters are ephemeral, using HBase on them is not recommended.
Installation of the optional HBase component is limited to Dataproc clusters created with image version 1.5 or 2.0.
While Google Cloud provides many services that let you deploy self-managed Apache HBase, Bigtable is often the best option because it offers an open API with HBase compatibility and workload portability.
HBase database tables can be migrated to Bigtable, which manages the underlying data, while applications that previously interoperated with HBase, such as Spark, can remain on Dataproc and connect securely to Bigtable.
This guide outlines the high-level steps for getting started with Bigtable and provides references for migrating data to Bigtable from Dataproc HBase deployments.
Get started with Bigtable
Cloud Bigtable is a highly scalable, high-performance NoSQL platform that provides Apache HBase API client compatibility and portability for HBase workloads. The client is compatible with HBase API versions 1.x and 2.x and can be included in an existing application to read from and write to Bigtable. Existing HBase applications can add the Bigtable HBase client library to read and write data stored in Bigtable.
For more information about configuring your HBase application to work with Bigtable, see Bigtable and the HBase API.
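As an illustration, here is a minimal sketch in Java of what connecting an existing HBase application to Bigtable can look like. It assumes the bigtable-hbase-2.x client library is on the classpath, and the project ID, instance ID, and class name are placeholders, not values from this guide.

```java
// Minimal sketch: obtain a standard HBase Connection that is backed by Bigtable.
// "my-project" and "my-instance" are placeholder IDs.
import com.google.cloud.bigtable.hbase.BigtableConfiguration;
import org.apache.hadoop.hbase.client.Connection;

public class ConnectToBigtable {
  public static void main(String[] args) throws Exception {
    // BigtableConfiguration.connect returns an org.apache.hadoop.hbase.client.Connection,
    // so existing HBase code that accepts a Connection keeps working unchanged.
    try (Connection connection =
        BigtableConfiguration.connect("my-project", "my-instance")) {
      System.out.println("Connected to Bigtable: " + !connection.isClosed());
    }
  }
}
```

Because the returned object is an ordinary HBase Connection, the rest of the application does not need to change beyond swapping the client dependency and connection setup.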
Create a Bigtable cluster
To get started with Bigtable, create a cluster and tables to store the data that was previously stored in HBase. Follow the steps in the Bigtable documentation to create an instance, a cluster, and tables with the same schema as your HBase tables. To create tables automatically from HBase table DDLs, see the schema translator tool, or see the sketch after this paragraph for a programmatic alternative.
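As an alternative to creating tables in the console or with the schema translator, the standard HBase Admin API also works through the Bigtable HBase client. The sketch below assumes the same placeholder project and instance as above, and the table name and column family are hypothetical; your real definitions should mirror your existing HBase tables.

```java
// Sketch: create a table via the HBase Admin API, backed by Bigtable.
import com.google.cloud.bigtable.hbase.BigtableConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTable {
  public static void main(String[] args) throws Exception {
    try (Connection connection =
            BigtableConfiguration.connect("my-project", "my-instance");
         Admin admin = connection.getAdmin()) {
      TableName tableName = TableName.valueOf("my-table"); // placeholder name
      if (!admin.tableExists(tableName)) {
        // Define one column family "cf" to match a hypothetical HBase schema.
        admin.createTable(
            TableDescriptorBuilder.newBuilder(tableName)
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of(Bytes.toBytes("cf")))
                .build());
      }
    }
  }
}
```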
Open the Bigtable instance in the Google Cloud console to view the table and the server-side monitoring charts, including rows per second, latency, and throughput, and to manage the newly provisioned table. For more information, see Monitoring.
Migrate data from Dataproc to Bigtable
After you create the tables in Bigtable, import and validate your data by following the guidance in Migrate HBase on Google Cloud to Bigtable.
After you migrate the data, you can update your applications to send reads and writes to Bigtable.
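For illustration, the sketch below shows what a read and a write look like once an application points at Bigtable: because the client implements the HBase API, this is ordinary HBase Put/Get code. The row key, column family, and qualifier are placeholders, as are the project, instance, and table IDs.

```java
// Sketch: write one cell and read it back through the HBase API against Bigtable.
import com.google.cloud.bigtable.hbase.BigtableConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ReadWriteExample {
  public static void main(String[] args) throws Exception {
    try (Connection connection =
            BigtableConfiguration.connect("my-project", "my-instance");
         Table table = connection.getTable(TableName.valueOf("my-table"))) {
      // Write a cell: row "row-1", column family "cf", qualifier "greeting".
      Put put = new Put(Bytes.toBytes("row-1"));
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("greeting"), Bytes.toBytes("hello"));
      table.put(put);

      // Read the same row back with a Get.
      Result result = table.get(new Get(Bytes.toBytes("row-1")));
      byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("greeting"));
      System.out.println(Bytes.toString(value));
    }
  }
}
```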
[[["Fácil de entender","easyToUnderstand","thumb-up"],["Meu problema foi resolvido","solvedMyProblem","thumb-up"],["Outro","otherUp","thumb-up"]],[["Difícil de entender","hardToUnderstand","thumb-down"],["Informações incorretas ou exemplo de código","incorrectInformationOrSampleCode","thumb-down"],["Não contém as informações/amostras de que eu preciso","missingTheInformationSamplesINeed","thumb-down"],["Problema na tradução","translationIssue","thumb-down"],["Outro","otherDown","thumb-down"]],["Última atualização 2025-09-04 UTC."],[[["\u003cp\u003eThe HBase component is deprecated in Dataproc version 2.1 and later, and while a Beta version was available in versions 1.5 and 2.0, its use is not recommended due to the ephemeral nature of Dataproc clusters.\u003c/p\u003e\n"],["\u003cp\u003eBigtable is recommended as an alternative to HBase, offering an open API with HBase compatibility and workload portability, making it suitable for applications that previously used HBase.\u003c/p\u003e\n"],["\u003cp\u003eYou can migrate existing HBase applications and their data to Bigtable by using the Bigtable HBase client library and following the provided migration steps.\u003c/p\u003e\n"],["\u003cp\u003eGetting started with Bigtable involves creating a cluster and tables, which can be done manually or through automated tools like the schema translator for HBase DDLs.\u003c/p\u003e\n"],["\u003cp\u003eAfter migrating, you can use tools like server-side monitoring charts to manage the Bigtable tables and review examples of using Spark with Bigtable for continued application functionality.\u003c/p\u003e\n"]]],[],null,["| **Deprecated:** Starting with Dataproc [version 2.1](/dataproc/docs/concepts/versioning/dataproc-release-2.1), you can no longer use the optional HBase component. Dataproc [version 1.5](/dataproc/docs/concepts/versioning/dataproc-release-1.5) and Dataproc [version 2.0](/dataproc/docs/concepts/versioning/dataproc-release-2.0) offer a Beta version of HBase with no support. However, due to the ephemeral nature of Dataproc clusters, using HBase is not recommended.\n\nInstallation of the optional HBase component is limited to\nDataproc clusters created with image version\n[1.5](/dataproc/docs/concepts/versioning/dataproc-release-1.5) or\n[2.0](/dataproc/docs/concepts/versioning/dataproc-release-2.0).\n\nWhile Google Cloud provides many services that let you deploy self-managed Apache\nHBase, [Bigtable](/bigtable/docs/overview) is\noften the best option as it provides an open API with HBase and workload portability.\nHBase database tables can be migrated to Bigtable for management of the\nunderlying data, while applications that previously interoperated with HBase,\nsuch as Spark, may remain on Dataproc and securely connect with Bigtable.\nIn this guide, we provide the high-level steps for getting started with Bigtable\nand provide references for migrating data to Bigtable from Dataproc HBase\ndeployments.\n\nGet started with Bigtable\n\nCloud Bigtable is a highly scalable and performant NoSQL platform that provides\n[Apache HBase API client compatibility](https://cloud.google.com/bigtable/docs/hbase-bigtable)\nand portability for HBase workloads. The client is compatible with HBase API\nversions 1.x and 2.x and may be included with the existing application to read\nand write to Bigtable. 
Existing HBase applications may add the Bigtable HBase\nclient library to read and write data stored in Bigtable.\n\nSee\n[Bigtable and the HBase API](https://cloud.google.com/bigtable/docs/hbase-bigtable)\nfor more information on configuring your HBase application with Bigtable.\n\nCreate a Bigtable cluster\n\nYou can get started using Bigtable by creating a cluster and tables for\nstoring data that was previously stored in HBase. Follow the steps in the Bigtable documentation for\n[creating an instance](/bigtable/docs/creating-instance#creating-instance),\na cluster, and\n[tables](/bigtable/docs/managing-tables) with\nthe same schema as the HBase tables. For automated creation of tables from HBase\ntable DDLs, refer to the\n[schema translator tool](/bigtable/docs/migrate-hbase-on-google-cloud-to-bigtable#create-destination-table).\n\nOpen the Bigtable instance in Google Cloud console to view the table and\nserver-side monitoring charts, including rows per second, latency, and throughput, to manage\nthe newly provisioned table. For additional information, see\n[Monitoring](/bigtable/docs/monitoring-instance).\n\nMigrate data from Dataproc to Bigtable\n\nAfter you create the tables in Bigtable, you can import and validate\nyour data by following the guidance at\n[Migrate HBase on Google Cloud to Bigtable](/bigtable/docs/migrate-hbase-on-google-cloud-to-bigtable).\nAfter you migrate the data, you can update applications to send reads and writes\nto Bigtable.\n\nWhat's next\n\n- See [Wordcount Spark examples](https://github.com/GoogleCloudPlatform/java-docs-samples/tree/main/bigtable/spark) for running Spark with the Bigtable.\n- Review online migration options with [live replication from HBase to Bigtable](/bigtable/docs/hbase-replication).\n- Watch [How Box modernized their NoSQL databases](https://www.youtube.com/watch?v=DteQ09WFhaU) to understand other benefits."]]