GoogleSQL for BigQuery supports the following text analysis functions.
Function list

Name | Summary
---|---
BAG_OF_WORDS | Gets the frequency of each term (token) in a tokenized document.
TEXT_ANALYZE | Extracts terms (tokens) from text and converts them into a tokenized document.
TF_IDF | Evaluates how relevant a term (token) is to a tokenized document in a set of tokenized documents.
BAG_OF_WORDS
BAG_OF_WORDS(tokenized_document)
Description
Gets the frequency of each term (token) in a tokenized document.
Definitions
tokenized_document: An ARRAY<STRING> value that represents a document that has been tokenized. A tokenized document is a collection of terms (tokens) used for text analysis.
Return type
ARRAY<STRUCT<term STRING, count INT64>>
Definitions:
term: A unique term in the tokenized document.
count: The number of times the term was found in the tokenized document.
Examples
The following query produces terms and their frequencies in two tokenized documents:
WITH
ExampleTable AS (
SELECT 1 AS id, ['I', 'like', 'pie', 'pie', 'pie', NULL] AS f UNION ALL
SELECT 2 AS id, ['yum', 'yum', 'pie', NULL] AS f
)
SELECT id, BAG_OF_WORDS(f) AS results
FROM ExampleTable
ORDER BY id;
/*----+------------------------------------------------*
| id | results |
+----+------------------------------------------------+
| 1 | [(null, 1), ('I', 1), ('like', 1), ('pie', 3)] |
| 2 | [(null, 1), ('pie', 1), ('yum', 2)] |
*----+------------------------------------------------*/
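Because a tokenized document is just an ARRAY<STRING> value, the input to BAG_OF_WORDS can also be produced with TEXT_ANALYZE (described below) instead of being written out by hand. The following query is a minimal sketch of that combination; the literal text is illustrative and the order of the returned terms isn't guaranteed:
SELECT BAG_OF_WORDS(TEXT_ANALYZE('I like pie and you like pie')) AS results;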
TEXT_ANALYZE
TEXT_ANALYZE(
text
[, analyzer=>{ 'LOG_ANALYZER' | 'NO_OP_ANALYZER' | 'PATTERN_ANALYZER' }]
[, analyzer_options=>analyzer_options_values]
)
Description
Extracts terms (tokens) from text and converts them into a tokenized document.
Definitions
text: A STRING value that represents the input text to tokenize.
analyzer: This optional mandatory-named argument determines which analyzer to use to convert text into an array of terms (tokens). This can be:
'LOG_ANALYZER' (default): Breaks the input into terms when delimiters are encountered and then normalizes the terms. If analyzer isn't specified, this is used by default. For more information, see the LOG_ANALYZER text analyzer.
'NO_OP_ANALYZER': Extracts the text as a single term (token), but doesn't apply normalization. For more information, see the NO_OP_ANALYZER text analyzer.
'PATTERN_ANALYZER': Breaks the input into terms that match a regular expression. For more information, see the PATTERN_ANALYZER text analyzer.
analyzer_options: This optional mandatory-named argument takes a list of text analysis rules as a JSON-formatted STRING value. For more information, see Text analyzer options.
Details
The order of the tokens produced by this function isn't guaranteed.
If no analyzer is specified, the LOG_ANALYZER analyzer is used by default.
Return type
ARRAY<STRING>
Examples
The following query uses the default text analyzer, LOG_ANALYZER, with the input text:
SELECT TEXT_ANALYZE('I like pie, you like-pie, they like 2 PIEs.') AS results
/*--------------------------------------------------------------------------*
| results |
+--------------------------------------------------------------------------+
| ['i', 'like', 'pie', 'you', 'like', 'pie', 'they', 'like', '2', 'pies' ] |
*--------------------------------------------------------------------------*/
The following query uses the NO_OP_ANALYZER text analyzer with the input text:
SELECT TEXT_ANALYZE(
'I like pie, you like-pie, they like 2 PIEs.',
analyzer=>'NO_OP_ANALYZER'
) AS results
/*-----------------------------------------------*
| results |
+-----------------------------------------------+
| 'I like pie, you like-pie, they like 2 PIEs.' |
*-----------------------------------------------*/
The following query uses the PATTERN_ANALYZER text analyzer with the input text:
SELECT TEXT_ANALYZE(
'I like pie, you like-pie, they like 2 PIEs.',
analyzer=>'PATTERN_ANALYZER'
) AS results
/*----------------------------------------------------------------*
| results |
+----------------------------------------------------------------+
| ['like', 'pie', 'you', 'like', 'pie', 'they', 'like', 'pies' ] |
*----------------------------------------------------------------*/
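As a sketch of how analyzer_options can be passed, the following query supplies a text analysis rule as a JSON-formatted string. The specific rule shown here, a "patterns" list for PATTERN_ANALYZER, is an assumption for illustration; see Text analyzer options for the rules that each analyzer actually supports:
SELECT TEXT_ANALYZE(
  'I like pie, you like-pie, they like 2 PIEs.',
  analyzer=>'PATTERN_ANALYZER',
  analyzer_options=>'{"patterns": ["[a-zA-Z]+"]}'
) AS results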
To view additional query examples that include analyzer options, see Text analysis.
For useful analyzer recipes that you can use to enhance analyzer-supported queries, see Search with text analyzers.
TF_IDF
TF_IDF(tokenized_document) OVER()
TF_IDF(tokenized_document, max_distinct_tokens) OVER()
TF_IDF(tokenized_document, max_distinct_tokens, frequency_threshold) OVER()
Description
Evaluates how relevant a term (token) is to a tokenized document in a set of tokenized documents, using the TF-IDF (term frequency-inverse document frequency) algorithm.
Definitions
tokenized_document: An ARRAY<STRING> value that represents a document that has been tokenized. A tokenized document is a collection of terms (tokens) used for text analysis.
max_distinct_tokens: Optional argument. Takes a non-negative INT64 value, which represents the size of the dictionary, excluding the unknown term. Terms are added to the dictionary until this threshold is met. So, if this value is 20, the first 20 unique terms are added and then no additional terms are added. If this argument isn't provided, the default value is 32000. If this argument is specified, the maximum value is 1048576.
frequency_threshold: Optional argument. Takes a non-negative INT64 value that represents the minimum number of times a term must appear in a tokenized document in order to be included in the dictionary. So, if this value is 3, a term must appear at least three times in a tokenized document to be added to the dictionary. If this argument isn't provided, the default value is 5.
Details
This function uses a TF-IDF (term frequency-inverse document frequency) algorithm to compute the relevance of terms in a set of tokenized documents. TF-IDF multiplies two metrics: how many times a term appears in a document (term frequency), and the inverse document frequency of the term across a collection of documents (inverse document frequency).
TF-IDF:
term frequency * inverse document frequency
Term frequency:
(count of term in document) / (document size)
Inverse document frequency:
log(1 + document set size / (1 + count of documents containing term))
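As a worked instance of these formulas, take document 1 from the first example below, ['I', 'like', 'pie', 'pie', 'pie', NULL]. The term 'pie' appears 3 times in a document of 6 tokens, so its term frequency is 3 / 6 = 0.5. The document set contains 4 documents and 'pie' appears in all of them, so its inverse document frequency is log(1 + 4 / (1 + 4)) = log(1.8) ≈ 0.5878. Multiplying the two gives 0.5 * 0.5878 ≈ 0.2939, which matches the tf_idf value reported for 'pie' in that document.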
Terms are added to a dictionary of terms if they satisfy the criteria for max_distinct_tokens and frequency_threshold; otherwise, they are considered the unknown term. The unknown term is always the first term in the dictionary and is represented as NULL. The rest of the dictionary is ordered by term frequency rather than alphabetically.
Return type
ARRAY<STRUCT<term STRING, tf_idf DOUBLE>>
Definitions:
term: A unique term that was added to the dictionary.
tf_idf: The TF-IDF computation for the term.
Examples
The following query computes the relevance of up to 10 terms that appear at least twice in a set of tokenized documents. In this example, the named arguments are passed in positionally: 10 represents max_distinct_tokens and 2 represents frequency_threshold:
WITH ExampleTable AS (
SELECT 1 AS id, ['I', 'like', 'pie', 'pie', 'pie', NULL] AS f UNION ALL
SELECT 2 AS id, ['yum', 'yum', 'pie', NULL] AS f UNION ALL
SELECT 3 AS id, ['I', 'yum', 'pie', NULL] AS f UNION ALL
SELECT 4 AS id, ['you', 'like', 'pie', 'too', NULL] AS f
)
SELECT id, TF_IDF(f, 10, 2) OVER() AS results
FROM ExampleTable
ORDER BY id;
/*----+-------------------------------------------------*
| id | results |
+----+-------------------------------------------------+
| 1 | [{"index":null,"value":"0.1304033435859887"}, |
| | {"index":"I","value":"0.1412163100645339"}, |
| | {"index":"like","value":"0.1412163100645339"}, |
| | {"index":"pie","value":"0.29389333245105953"}] |
+----+-------------------------------------------------+
| 2 | [{"index":null,"value":"0.1956050153789831"}, |
| | {"index":"pie","value":"0.14694666622552977"}, |
| | {"index":"yum","value":"0.4236489301936017"}] |
+----+-------------------------------------------------+
| 3 | [{"index":null,"value":"0.1956050153789831"}, |
| | {"index":"I","value":"0.21182446509680086"}, |
| | {"index":"pie","value":"0.14694666622552977"}, |
| | {"index":"yum","value":"0.21182446509680086"}] |
+----+-------------------------------------------------+
| 4 | [{"index":null,"value":"0.4694520369095594"}, |
| | {"index":"like","value":"0.1694595720774407"}, |
| | {"index":"pie","value":"0.11755733298042381"}] |
*----+-------------------------------------------------*/
The following query computes the relevance of up to three terms that appear at least twice in a set of tokenized documents:
WITH ExampleTable AS (
SELECT 1 AS id, ['I', 'like', 'pie', 'pie', 'pie', NULL] AS f UNION ALL
SELECT 2 AS id, ['yum', 'yum', 'pie', NULL] AS f UNION ALL
SELECT 3 AS id, ['I', 'yum', 'pie', NULL] AS f UNION ALL
SELECT 4 AS id, ['you', 'like', 'pie', 'too', NULL] AS f
)
SELECT id, TF_IDF(f, 3, 2) OVER() AS results
FROM ExampleTable
ORDER BY id;
/*----+-------------------------------------------------*
| id | results |
+----+-------------------------------------------------+
| 1 | [{"index":null,"value":"0.12679902142647365"}, |
| | {"index":"I","value":"0.1412163100645339"}, |
| | {"index":"like","value":"0.1412163100645339"}, |
| | {"index":"pie","value":"0.29389333245105953"}] |
+----+-------------------------------------------------+
| 2 | [{"index":null,"value":"0.5705955964191315"}, |
| | {"index":"pie","value":"0.14694666622552977"}] |
+----+-------------------------------------------------+
| 3 | [{"index":null,"value":"0.380397064279421"}, |
| | {"index":"I","value":"0.21182446509680086"}, |
| | {"index":"pie","value":"0.14694666622552977"}] |
+----+-------------------------------------------------+
| 4 | [{"index":null,"value":"0.45647647713530515"}, |
| | {"index":"like","value":"0.1694595720774407"}, |
| | {"index":"pie","value":"0.11755733298042381"}] |
*----+-------------------------------------------------*/
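The tokenized documents in these examples are written out as array literals; in practice, they can also be produced with TEXT_ANALYZE. The following query is a minimal sketch of that combination, passing 10 for max_distinct_tokens and 1 for frequency_threshold so that terms from these short documents are added to the dictionary; the table name and text values are illustrative only:
WITH ExampleTable AS (
  SELECT 1 AS id, 'I like pie pie pie' AS t UNION ALL
  SELECT 2 AS id, 'yum yum pie' AS t
)
SELECT id, TF_IDF(TEXT_ANALYZE(t), 10, 1) OVER() AS results
FROM ExampleTable
ORDER BY id;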