Data type enforcement in Bigtable
=================================

*Last updated (UTC): 2025-09-04.*

Bigtable's flexible schema lets you store data of any type --
strings, dates, numbers, JSON documents, or even images or PDFs -- in a
Bigtable table.

This document describes when Bigtable enforces data types and when you must
encode or decode values in your application code. For a list of
Bigtable data types, see
[Type](/bigtable/docs/reference/data/rpc/google.bigtable.v2#type_1)
in the Data API reference documentation.

Enforced types
--------------

Data types are enforced for the following data:

- Aggregate column families (counters)
- Timestamps
- Materialized views

### Aggregates

For the *aggregate* data type, encoding depends on the aggregation type. When
you create an aggregate column family, you must specify an aggregation type.

This table shows the input type and encoding for each aggregation type.

When you query the data in aggregate cells using SQL, SQL automatically
incorporates type information.

When you read the data in aggregate cells using the Data API's `ReadRows`
method, Bigtable returns bytes, so your application must
decode the values using the encoding that Bigtable used to map the
typed data to bytes.

You can't convert a column family that contains non-aggregate data into an
aggregate column family.
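Because `ReadRows` returns raw bytes for aggregate cells, your application performs the decoding itself. The following is a minimal sketch, assuming a sum aggregate whose `Int64` inputs are stored as 8-byte big-endian signed integers; confirm the encoding for your column family before relying on it, and note that `decode_aggregate_sum` is an illustrative helper, not part of any client library.

```python
import struct

def decode_aggregate_sum(cell_value: bytes) -> int:
    """Decode an aggregate cell value returned by ReadRows.

    Illustrative only: assumes the aggregation stores Int64 inputs
    as an 8-byte big-endian signed integer.
    """
    return struct.unpack(">q", cell_value)[0]

# A ReadRows response cell holding an encoded running sum of 42:
raw = (42).to_bytes(8, byteorder="big", signed=True)
print(decode_aggregate_sum(raw))  # 42
```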
Columns in aggregate column families can't contain
non-aggregate cells, and standard column families can't contain aggregate cells.

For more information about creating tables with aggregate column families, see
[Create a table](/bigtable/docs/managing-tables#create-table). For code samples
that show how to increment an aggregate cell with encoded values, see [Increment
a value](/bigtable/docs/writing-data#increment-append).

### Timestamps

Each Bigtable cell has an `Int64` timestamp that must be a
microsecond value with, at most, millisecond precision. Bigtable
rejects a timestamp with sub-millisecond precision, such as 3023483279876543. In
this example, the acceptable timestamp value is 3023483279876000. A timestamp is
the number of microseconds since the [Unix
epoch](https://wikipedia.org/wiki/Unix_time), `1970-01-01 00:00:00 UTC`.

### Continuous materialized views

> **Preview**
>
> This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section
> of the [Service Specific Terms](/terms/service-terms#1).
> Pre-GA products and features are available "as is" and might have limited support.
> For more information, see the
> [launch stage descriptions](/products#product-launch-stages).

Continuous materialized views are read-only resources that you can read by using
SQL or with a `ReadRows` Data API call. Data in a materialized view is typed
based on the query that defines it.
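The precision rule described in the Timestamps section can be sketched as follows; `to_bigtable_timestamp` is an illustrative helper, not part of any client library.

```python
def to_bigtable_timestamp(micros: int) -> int:
    """Truncate a microsecond Unix timestamp to millisecond precision.

    Bigtable rejects timestamps with sub-millisecond precision, so
    round down to the nearest 1,000 microseconds before writing.
    """
    return micros - (micros % 1000)

# The rejected value from the example above, truncated to an acceptable one:
print(to_bigtable_timestamp(3023483279876543))  # 3023483279876000
```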
For an overview, see [Continuous
materialized views](/bigtable/docs/continuous-materialized-views).

When you use SQL to query a continuous materialized view, SQL automatically
incorporates type information.

When you [read from a continuous materialized
view](/bigtable/docs/reads#reads-continuous-materialized-views) using a Data API
`ReadRows` request, you must know each column's type and decode it in your
application code.

Aggregated values in a continuous materialized view are stored using the
encoding described in the following table, based on the output type of the
column in the view definition.

For more information, see
[Encoding](/bigtable/docs/reference/data/rpc/google.bigtable.v2#encoding_1)
in the Data API reference.

### Structured row keys

> **Preview**
>
> This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section
> of the [Service Specific Terms](/terms/service-terms#1).
> Pre-GA products and features are available "as is" and might have limited support.
> For more information, see the
> [launch stage descriptions](/products#product-launch-stages).

Structured row keys let you access your data using multi-column keys, similar
to composite keys in relational databases.

The type and encoding for structured row keys are defined by a *row key schema*
that you can optionally add to a Bigtable table. Structured row
key data is stored as bytes, but GoogleSQL for
Bigtable automatically uses the type and encoding defined in the
row key schema when you execute a SQL query on the table.

Using a row key schema to query a table with a `ReadRows` request isn't
supported. A continuous materialized view has a row key schema by default.
For more information about structured row keys, see [Manage row key
schemas](/bigtable/docs/manage-row-key-schemas).

Unenforced types
----------------

If no type information is provided, Bigtable treats each cell
as bytes with an unknown encoding.

When you query column families that were created without type enforcement, you
must provide type information at read time to ensure that the data is read
correctly. This matters for database functions whose behavior
depends on the data type. GoogleSQL for Bigtable offers
[CAST](/bigtable/docs/reference/sql/conversion_functions#cast)
functions to do type conversions at query time. These functions convert from
bytes to the types that various functions expect.

Although Bigtable doesn't enforce these types, certain operations assume a
data type. Knowing this helps you ensure that your data is written in a way that
can be processed within the database. The following are examples:

- Increments using `ReadModifyWriteRow` assume that the cell contains a 64-bit big-endian signed integer.
- The `TO_VECTOR64` function in SQL expects the cell to contain a byte array that's a concatenation of the big-endian bytes of 64-bit floating point numbers.
- The `TO_VECTOR32` function in SQL expects the cell to contain a byte array that's a concatenation of the big-endian bytes of 32-bit floating point numbers.

What's next
-----------

- [Reads](/bigtable/docs/reads)
- [GoogleSQL for Bigtable overview](/bigtable/docs/googlesql-overview)
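As a closing illustration, the byte layouts that the operations under Unenforced types assume can be produced with standard serialization. This is a minimal sketch using Python's `struct` module; the variable names are illustrative, and the layouts shown are the big-endian formats listed in that section.

```python
import struct

# ReadModifyWriteRow increments assume an 8-byte big-endian signed integer.
counter_cell = struct.pack(">q", 1)

# TO_VECTOR64 expects concatenated big-endian 64-bit floats.
embedding = [0.25, -1.5, 3.0]
vector64_cell = struct.pack(f">{len(embedding)}d", *embedding)

# TO_VECTOR32 expects concatenated big-endian 32-bit floats.
vector32_cell = struct.pack(f">{len(embedding)}f", *embedding)

print(len(counter_cell), len(vector64_cell), len(vector32_cell))  # 8 24 12
```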