Available TensorFlow operations

This page lists the TensorFlow Python APIs and graph operators available on Cloud TPU.

Available Python APIs

The list below is a guide to the set of available TensorFlow Python APIs. It is not exhaustive: library functions that are not listed may still work if they are composed of available primitive functions.

See the performance guide for recommendations on specific operators.
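
To illustrate what "composed of available primitive functions" means, a helper that does not appear in the table below can still compile for Cloud TPU if every operation it lowers to is listed. A minimal sketch, assuming the TensorFlow 1.x graph API used throughout this page (the swish helper is hypothetical and purely illustrative; tf.multiply and tf.sigmoid are both listed as available):

    import tensorflow as tf

    def swish(x):
        # Hypothetical helper: not in the table below, but it decomposes
        # into tf.multiply and tf.sigmoid, both of which are available,
        # so it is expected to compile for Cloud TPU.
        return tf.multiply(x, tf.sigmoid(x))

    x = tf.placeholder(tf.float32, shape=[None, 128])
    y = swish(x)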

Module | Available Python API | Comments
tf tf.abs
tf.acosh
tf.add
tf.add_n
tf.angle
tf.arg_max The dimension argument must be a compile-time constant.
tf.arg_min The dimension argument must be a compile-time constant.
tf.asinh
tf.assign Available only for resource variables.
tf.assign_add Available only for resource variables.
tf.assign_sub Available only for resource variables.
tf.atan
tf.atan2
tf.atanh
tf.batch_to_space The crops and block_shape arguments must be compile-time constants.
tf.batch_to_space_nd The crops argument must be a compile-time constant.
tf.broadcast_dynamic_shape
tf.broadcast_static_shape
tf.case Experimental (control flow). May not yet work reliably.
tf.cast
tf.ceil
tf.cholesky Experimental. May have numerical precision problems.
tf.cholesky_solve Experimental. May have numerical precision problems.
tf.clip_by_average_norm
tf.clip_by_global_norm
tf.clip_by_norm
tf.clip_by_value
tf.complex
tf.concat concat_dim must be a compile-time constant.
tf.cond Experimental (control flow). May not yet work reliably.
tf.conj
tf.constant
tf.convert_to_tensor
tf.cos
tf.cosh
tf.cross
tf.cumprod axis must be a compile-time constant.
tf.cumsum axis must be a compile-time constant.
tf.depth_to_space
tf.diag
tf.diag_part
tf.div int32 division is slower than the other types.
tf.divide int32 division is slower than the other types.
tf.dynamic_stitch indices must be a compile-time constant.
tf.einsum
tf.equal
tf.erf
tf.erfc
tf.exp
tf.expand_dims dims must be a compile-time constant.
tf.expm1
tf.extract_image_patches
tf.eye
tf.fake_quant_with_min_max_args
tf.fake_quant_with_min_max_args_gradient
tf.fake_quant_with_min_max_vars
tf.fake_quant_with_min_max_vars_gradient
tf.fft
tf.fft2d
tf.fft3d
tf.fill The dims argument must be a compile-time constant.
tf.floor
tf.floordiv
tf.floormod
tf.foldl Experimental (control flow).
tf.foldr Experimental (control flow).
tf.gather axis must be a compile-time constant.
tf.gather_nd
tf.greater
tf.greater_equal
tf.hessians Experimental (control flow).
tf.identity
tf.identity_n
tf.ifft
tf.ifft2d
tf.ifft3d
tf.imag
tf.invert_permutation The x argument must be a compile-time constant.
tf.is_finite
tf.is_inf
tf.is_nan
tf.is_non_decreasing
tf.is_strictly_increasing
tf.less
tf.less_equal
tf.linspace The start, stop, and num arguments must be compile-time constants.
tf.log
tf.log1p
tf.log_sigmoid
tf.logical_and
tf.logical_or
tf.logical_not
tf.logical_xor
tf.matmul Uses a bfloat16 matmul with float32 accumulation.
tf.matrix_band_part
tf.matrix_diag
tf.matrix_diag_part
tf.matrix_set_diag
tf.matrix_triangular_solve Experimental. May have numerical precision problems.
tf.maximum
tf.meshgrid
tf.minimum
tf.mod
tf.multinomial The num_samples argument must be a compile-time constant.
tf.multiply
tf.negative
tf.no_op
tf.norm
tf.not_equal
tf.one_hot depth must be a compile-time constant.
tf.ones
tf.ones_like
tf.pad The paddings argument must be a compile-time constant. The gradient of REFLECT padding is not yet available.
tf.pow
tf.random_normal shape must be a compile-time constant.
tf.random_uniform shape must be a compile-time constant.
tf.range The start, limit, and delta arguments must be compile-time constants.
tf.rank
tf.real
tf.realdiv
tf.reciprocal
tf.reduce_all axis must be a compile-time constant.
tf.reduce_any axis must be a compile-time constant.
tf.reduce_logsumexp
tf.reduce_max axis must be a compile-time constant.
tf.reduce_min axis must be a compile-time constant.
tf.reduce_prod axis must be a compile-time constant.
tf.reduce_sum axis must be a compile-time constant.
tf.reshape The shape argument must be a compile-time constant.
tf.reverse The dims argument must be a compile-time constant.
tf.reverse_sequence
tf.reverse_v2 The axis argument must be a compile-time constant.
tf.rint
tf.round
tf.rsqrt
tf.saturate_cast
tf.scalar_mul
tf.scan Experimental (control flow).
tf.scatter_nd
tf.sequence_mask
tf.shape
tf.shape_n
tf.sigmoid
tf.sign
tf.sin
tf.sinh
tf.size
tf.slice size must be a compile-time constant. In addition, either begin must be a compile-time constant or size must not be negative. Backpropagation is supported only when begin and size are compile-time constants.
tf.space_to_batch paddings and block_shape must be compile-time constants.
tf.space_to_batch_nd paddings must be a compile-time constant.
tf.space_to_depth
tf.split axis must be a compile-time constant.
tf.sqrt
tf.square
tf.squared_difference
tf.squeeze
tf.stack
tf.stop_gradient
tf.strided_slice
tf.tan
tf.tanh
tf.tensordot
tf.tile The multiples argument must be a compile-time constant.
tf.to_bfloat16
tf.to_float
tf.to_int32
tf.to_int64 int64 support is limited.
tf.trace
tf.transpose The perm argument must be a compile-time constant.
tf.truediv
tf.truncated_normal shape must be a compile-time constant.
tf.truncatediv
tf.truncatemod
tf.unsorted_segment_sum
tf.unstack
tf.where The x and y arguments must not be None; if x and y are None, the operator does not have a static shape.
tf.while_loop Computing the gradient of a while loop requires that the maximum_iterations argument be passed (see the sketch after this table).
tf.zeros
tf.zeros_like
tf.Tensor.__getitem__ The begin, end, and strides of a slice must be compile-time constants.
tf.bitwise tf.bitwise.bitwise_and
tf.bitwise.bitwise_or
tf.bitwise.invert
tf.contrib.stateless tf.contrib.stateless.stateless_random_normal
tf.contrib.stateless.stateless_random_uniform
tf.image tf.image.adjust_brightness
tf.image.adjust_contrast
tf.image.adjust_gamma
tf.image.adjust_hue
tf.image.adjust_saturation
tf.image.central_crop The crop fraction must be a compile-time constant.
tf.image.convert_image_dtype
tf.image.flip_left_right
tf.image.flip_up_down
tf.image.grayscale_to_rgb
tf.image.hsv_to_rgb
tf.image.resize_bilinear Only align_corners=True is supported. size must be a compile-time constant.
tf.image.random_brightness
tf.image.random_contrast
tf.image.random_flip_left_right
tf.image.random_flip_up_down
tf.image.random_hue
tf.image.random_saturation
tf.image.rgb_to_hsv
tf.image.rgb_to_grayscale
tf.image.rot90
tf.image.total_variation
tf.image.transpose_image
tf.layers tf.layers.average_pooling1d
tf.layers.average_pooling2d
tf.layers.average_pooling3d
tf.layers.batch_normalization
tf.layers.conv1d
tf.layers.conv2d
tf.layers.conv2d_transpose
tf.layers.conv3d
tf.layers.conv3d_transpose
tf.layers.dense
tf.layers.dropout
tf.layers.flatten
tf.layers.max_pooling1d
tf.layers.max_pooling2d
tf.layers.max_pooling3d
tf.layers.separable_conv2d
tf.nn tf.nn.atrous_conv2d
tf.nn.atrous_conv2d_transpose
tf.nn.avg_pool
tf.nn.avg_pool3d
tf.nn.batch_normalization
tf.nn.bias_add
tf.nn.conv1d
tf.nn.conv2d
tf.nn.conv2d_backprop_filter
tf.nn.conv2d_backprop_input
tf.nn.conv2d_transpose
tf.nn.conv3d
tf.nn.conv3d_backprop_filter
tf.nn.conv3d_backprop_input
tf.nn.conv3d_transpose
tf.nn.convolution
tf.nn.crelu
tf.nn.depthwise_conv2d
tf.nn.depthwise_conv2d_native
tf.nn.depthwise_conv2d_native_backprop_filter
tf.nn.depthwise_conv2d_native_backprop_input
tf.nn.dropout
tf.nn.dynamic_rnn Experimental.
tf.nn.elu
tf.nn.fused_batch_norm
tf.nn.l2_loss
tf.nn.l2_normalize
tf.nn.leaky_relu
tf.nn.local_response_normalization
tf.nn.log_poisson_loss
tf.nn.log_softmax
tf.nn.max_pool
tf.nn.max_pool3d
tf.nn.moments
tf.nn.normalize_moments
tf.nn.pool
tf.nn.relu
tf.nn.relu6
tf.nn.relu_layer
tf.nn.selu
tf.nn.separable_conv2d
tf.nn.sigmoid_cross_entropy_with_logits
tf.nn.softmax
tf.nn.softmax_cross_entropy_with_logits
tf.nn.softplus
tf.nn.softsign
tf.nn.sparse_softmax_cross_entropy_with_logits
tf.nn.static_bidirectional_rnn Experimental.
tf.nn.static_rnn Experimental.
tf.nn.weighted_cross_entropy_with_logits Experimental.
tf.nn.weighted_moments
tf.nn.with_space_to_batch
tf.nn.xw_plus_b
tf.nn.zero_fraction
tf.spectral tf.spectral.fft
tf.spectral.fft2d
tf.spectral.fft3d
tf.spectral.ifft
tf.spectral.ifft2d
tf.spectral.ifft3d
tf.spectral.irfft fft_length must be a compile-time constant.
tf.spectral.irfft2d fft_length must be a compile-time constant.
tf.spectral.irfft3d fft_length must be a compile-time constant.
tf.spectral.rfft fft_length must be a compile-time constant.
tf.spectral.rfft2d fft_length must be a compile-time constant.
tf.spectral.rfft3d fft_length must be a compile-time constant.
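
Two constraints recur in the table above: many arguments must be compile-time constants (Python values fixed when the graph is built, not tensors computed at run time), and the gradient of tf.while_loop requires maximum_iterations. A minimal sketch of both, assuming the TensorFlow 1.x graph API used throughout this page (the shapes and the loop body are illustrative only):

    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=[None, 128])

    # axis is a plain Python int, known when the graph is built, which
    # satisfies the compile-time-constant requirement for tf.reduce_sum.
    row_sums = tf.reduce_sum(x, axis=1)

    # Passing maximum_iterations allows the gradient of the while loop
    # to be computed when the graph is compiled for TPU.
    i0 = tf.constant(0)
    acc0 = tf.zeros_like(x)
    _, acc = tf.while_loop(
        cond=lambda i, acc: i < 10,
        body=lambda i, acc: (i + 1, acc + x),
        loop_vars=(i0, acc0),
        maximum_iterations=10)

    grad = tf.gradients(tf.reduce_sum(acc), x)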

Unavailable Python APIs

This is not a complete list. The following operations are not available on Cloud TPU:

Module | Unavailable Python API | Comments
tf tf.accumulate_n Uses REF variables.
tf.acos
tf.asin
tf.betainc
tf.bitcast
tf.add_check_numerics_ops Programs containing check-numerics operators will run, but the operator is currently ignored.
tf.assert_... Programs containing assertions will run, but the assertions are ignored.
tf.check_numerics Programs containing check-numerics operators will run, but the operator is currently ignored.
tf.confusion_matrix
tf.count_nonzero Uses an int64 reduction.
tf.count_up_to
tf.create_partitioned_variables
tf.dequantize
tf.digamma
tf.dynamic_partition
tf.edit_distance
tf.fake_quant_with_min_max_vars_per_channel
tf.fake_quant_with_min_max_vars_per_channel_gradient
tf.histogram_fixed_width
tf.igamma
tf.igammac
tf.lbeta
tf.lgamma
tf.matrix_determinant
tf.matrix_inverse
tf.matrix_solve
tf.matrix_solve_ls
tf.polygamma
tf.py_func
tf.qr
tf.quantize_v2
tf.quantized_concat
tf.random_crop
tf.random_gamma
tf.random_poisson
tf.random_shuffle
tf.scatter_add
tf.scatter_div
tf.scatter_mul
tf.scatter_nd_add
tf.scatter_nd_sub
tf.scatter_nd_update
tf.segment_mean
tf.segment_max
tf.segment_min
tf.segment_prod
tf.segment_sum
tf.self_adjoint_eig
tf.self_adjoint_eigvals
tf.setdiff1d
tf.sparse_...
tf.string_...
tf.substr
tf.svd
tf.to_double
tf.unique
tf.unsorted_segment_max
tf.zeta
tf.bitwise.bitwise_xor
tf.contrib.stateless.stateless_truncated_normal

Available graph operators

Operator | Type constraint
Abs T={bfloat16,float,int32,int64}
Acos T={bfloat16,complex64,float,int32,int64}
Acosh T={bfloat16,complex64,float}
Add T={bfloat16,complex64,float,int32,int64}
AddN T={bfloat16,complex64,float,int32,int64,uint32,uint64}
AdjustContrastv2 T={float}
AdjustHue T={float}
AdjustSaturation T={float}
All Tidx={int32,int64}
AllToAll T={bfloat16,float}
Angle Tout={float}
T={complex64}
Any Tidx={int32,int64}
ApproximateEqual T={bfloat16,complex64,float,int32,int64,uint32,uint64}
ArgMax Tidx={int32,int64}
output_type={int32,int64}
T={bfloat16,complex64,float,int32,int64,uint32,uint64}
ArgMin Tidx={int32,int64}
output_type={int32,int64}
T={bfloat16,complex64,float,int32,int64,uint32,uint64}
Asin T={bfloat16,complex64,float,int32,int64}
Asinh T={bfloat16,complex64,float}
Assert T={bfloat16,bool,complex64,float,int32,int64,string,uint32,uint64}
AssignAddVariableOp dtype={bfloat16,complex64,float,int32,int64,uint32,uint64}
AssignSubVariableOp dtype={bfloat16,complex64,float,int32,int64,uint32,uint64}
AssignVariableOp dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Atan T={bfloat16,complex64,float,int32,int64}
Atan2 T={bfloat16,float}
Atanh T={bfloat16,complex64,float}
AvgPool T={bfloat16,float}
AvgPool3D T={bfloat16,float}
AvgPool3DGrad T={bfloat16,float}
AvgPoolGrad T={bfloat16,float}
BatchMatMul T={bfloat16,complex64,float,int32,int64}
BatchToSpace Tidx={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
BatchToSpaceND Tcrops={int32,int64}
Tblock_shape={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
BiasAdd T={bfloat16,complex64,float,int32,int64,uint32,uint64}
BiasAddGrad T={bfloat16,complex64,float,int32,int64,uint32,uint64}
BiasAddV1 T={bfloat16,complex64,float,int32,int64,uint32,uint64}
Bitcast type={bfloat16,complex64,float,int32,int64,uint32,uint64}
T={bfloat16,complex64,float,int32,int64,uint32,uint64}
BitwiseAnd T={int32,int64,uint32,uint64}
BitwiseOr T={int32,int64,uint32,uint64}
BitwiseXor T={int32,int64,uint32,uint64}
BroadcastArgs T={int32,int64}
BroadcastGradientArgs T={int32,int64}
BroadcastTo Tidx={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Bucketize T={float,int32,int64}
Cast DstT={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
SrcT={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Ceil T={bfloat16,float}
CheckNumerics T={bfloat16,float}
Cholesky T={float}
ClipByValue T={bfloat16,complex64,float,int32,int64,uint32,uint64}
CollectivePermute T={bfloat16,float}
Complex Tout={complex64}
T={float}
ComplexAbs Tout={float}
T={complex64}
Concat T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
ConcatOffset
ConcatV2 Tidx={int32}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Conj T={complex64}
ConjugateTranspose Tperm={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Const dtype={bfloat16,bool,complex64,float,int32,int64,string,uint32,uint64}
ControlTrigger
Conv2D T={bfloat16,float}
Conv2DBackpropFilter T={bfloat16,float}
Conv2DBackpropInput T={bfloat16,float}
Conv3D T={bfloat16,float}
Conv3DBackpropFilterV2 T={bfloat16,float}
Conv3DBackpropInputV2 Tshape={int32,int64}
T={bfloat16,float}
Cos T={bfloat16,complex64,float}
Cosh T={bfloat16,complex64,float}
Cross T={bfloat16,float,int32,int64,uint32,uint64}
CrossReplicaSum T={bfloat16,float}
Cumprod Tidx={int32,int64}
T={bfloat16,float,int32}
Cumsum Tidx={int32,int64}
T={bfloat16,float,int32}
DataFormatVecPermute T={int32,int64}
DepthToSpace T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
DepthwiseConv2dNative T={bfloat16,float}
DepthwiseConv2dNativeBackpropFilter T={bfloat16,float}
DepthwiseConv2dNativeBackpropInput T={bfloat16,float}
Diag T={bfloat16,complex64,float,int32,int64}
DiagPart T={bfloat16,complex64,float,int32,int64}
Digamma T={bfloat16,float}
Div T={bfloat16,complex64,float,int32,int64}
DivNoNan T={float}
DynamicStitch T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Elu T={bfloat16,float}
EluGrad T={bfloat16,float}
Empty dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
EmptyTensorList shape_type={int32,int64}
element_dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Equal T={bfloat16,bool,complex64,float,int32,int64}
Erf T={bfloat16,float}
Erfc T={bfloat16,float}
Exp T={bfloat16,complex64,float}
ExpandDims Tdim={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Expm1 T={bfloat16,complex64,float}
ExtractImagePatches T={bfloat16,float,int32,int64,uint32,uint64}
FFT Tcomplex={complex64}
FFT2D Tcomplex={complex64}
FFT3D Tcomplex={complex64}
FakeParam dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
FakeQuantWithMinMaxArgs
FakeQuantWithMinMaxArgsGradient
FakeQuantWithMinMaxVars
FakeQuantWithMinMaxVarsGradient
Fill index_type={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Floor T={bfloat16,float}
FloorDiv T={bfloat16,complex64,float,int32,int64}
FloorMod T={bfloat16,float,int32,int64}
FusedBatchNorm T={float}
FusedBatchNormGrad T={float}
FusedBatchNormGradV2 U={float}
T={bfloat16,float}
FusedBatchNormV2 U={float}
T={bfloat16,float}
Gather Tindices={int32,int64}
Tparams={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
GatherNd Tindices={int32,int64}
Tparams={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
GatherV2 Taxis={int32,int64}
Tindices={int32,int64}
Tparams={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
GetItem T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Greater T={bfloat16,float,int32,int64,uint32,uint64}
GreaterEqual T={bfloat16,float,int32,int64,uint32,uint64}
HSVToRGB T={bfloat16,float}
IFFT Tcomplex={complex64}
IFFT2D Tcomplex={complex64}
IFFT3D Tcomplex={complex64}
IRFFT
IRFFT2D
IRFFT3D
Identity T={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64}
IdentityN T={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64}
If Tout={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64}
Tin={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64}
Tcond={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64}
Imag Tout={float}
T={complex64}
InfeedDequeue dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
InfeedDequeueTuple dtypes={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
InplaceAdd T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
InplaceUpdate T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Inv T={bfloat16,complex64,float,int32,int64}
Invert T={int32,int64,uint32,uint64}
InvertPermutation T={int32}
IsFinite T={bfloat16,float}
IsInf T={bfloat16,float}
IsNan T={bfloat16,float}
L2Loss T={bfloat16,float}
LRN T={bfloat16,float}
LRNGrad T={bfloat16,float}
LeakyRelu T={bfloat16,float}
LeakyReluGrad T={bfloat16,float}
LeftShift T={int32,int64,uint32,uint64}
Less T={bfloat16,float,int32,int64,uint32,uint64}
LessEqual T={bfloat16,float,int32,int64,uint32,uint64}
Lgamma T={bfloat16,float}
LinSpace Tidx={int32,int64}
T={bfloat16,float}
ListDiff out_idx={int32,int64}
T={int32,int64}
Log T={bfloat16,complex64,float}
Log1p T={bfloat16,complex64,float}
LogSoftmax T={bfloat16,float}
LogicalAnd
LogicalNot
LogicalOr
MatMul T={bfloat16,complex64,float}
MatrixBandPart Tindex={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
MatrixDiag T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
MatrixDiagPart T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
MatrixSetDiag T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
MatrixTriangularSolve T={complex64,float}
Max Tidx={int32,int64}
T={bfloat16,complex64,float,int32,int64,uint32,uint64}
MaxPool T={bfloat16,float,int32,int64}
MaxPool3D T={bfloat16,float}
MaxPool3DGrad TInput={bfloat16,float}
T={bfloat16,float}
MaxPool3DGradGrad T={float}
MaxPoolGrad T={bfloat16,float,int32,int64,uint32,uint64}
MaxPoolGradGrad T={float}
MaxPoolGradGradV2 T={float}
MaxPoolGradV2 T={bfloat16,float,int32,int64,uint32,uint64}
MaxPoolV2 T={bfloat16,float,int32,int64}
Maximum T={bfloat16,float,int32,int64}
Mean Tidx={int32,int64}
T={bfloat16,complex64,float,int32,int64,uint32,uint64}
Min Tidx={int32,int64}
T={bfloat16,complex64,float,int32,int64,uint32,uint64}
Minimum T={bfloat16,float,int32,int64}
MirrorPad Tpaddings={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Mod T={bfloat16,float,int32,int64}
Mul T={bfloat16,complex64,float,int32,int64}
Multinomial output_dtype={int32,int64}
T={bfloat16,float,int32,int64,uint32,uint64}
Neg T={bfloat16,complex64,float,int32,int64}
NoOp
NonMaxSuppressionV4 T={float}
NotEqual T={bfloat16,bool,complex64,float,int32,int64}
OneHot TI={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
OnesLike T={bfloat16,bool,complex64,float,int32,int64}
OutfeedEnqueue dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
OutfeedEnqueueTuple dtypes={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Pack T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Pad Tpaddings={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
PadV2 Tpaddings={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
ParallelDynamicStitch T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
PlaceholderWithDefault dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Pow T={bfloat16,complex64,float,int32,int64}
PreventGradient T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Prod Tidx={int32,int64}
T={bfloat16,complex64,float,int32,int64,uint32,uint64}
Qr T={float}
QuantizeAndDequantizeV2 T={bfloat16,float}
QuantizeAndDequantizeV3 T={bfloat16,float}
RFFT
RFFT2D
RFFT3D
RGBToHSV T={bfloat16,float}
RandomShuffle T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
RandomStandardNormal T={int32,int64}
dtype={bfloat16,float}
RandomUniform T={int32,int64}
dtype={bfloat16,float}
RandomUniformInt T={int32,int64}
Tout={int32,int64}
Range Tidx={bfloat16,float,int32,int64}
Rank T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
ReadVariableOp dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Real Tout={float}
T={complex64}
RealDiv T={bfloat16,complex64,float,int32,int64}
Reciprocal T={bfloat16,complex64,float,int32,int64}
ReciprocalGrad T={bfloat16,complex64,float}
RecvTPUEmbeddingActivations
Relu T={bfloat16,float,int32,int64,uint32,uint64}
Relu6 T={bfloat16,float,int32,int64,uint32,uint64}
Relu6Grad T={bfloat16,float,int32,int64,uint32,uint64}
ReluGrad T={bfloat16,float,int32,int64,uint32,uint64}
Reshape Tshape={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
ResizeBilinear T={bfloat16,float,int32,int64}
ResizeBilinearGrad T={bfloat16,float}
ResizeNearestNeighbor T={float,int32,int64}
ResourceApplyAdaMax T={bfloat16,float}
ResourceApplyAdadelta T={bfloat16,float}
ResourceApplyAdagrad T={bfloat16,float}
ResourceApplyAdagradDA T={bfloat16,float}
ResourceApplyAdam T={bfloat16,float}
ResourceApplyAddSign T={bfloat16,float}
ResourceApplyCenteredRMSProp T={bfloat16,float}
ResourceApplyFtrl T={bfloat16,float}
ResourceApplyFtrlV2 T={bfloat16,float}
ResourceApplyGradientDescent T={bfloat16,float}
ResourceApplyKerasMomentum T={bfloat16,float}
ResourceApplyMomentum T={bfloat16,float}
ResourceApplyPowerSign T={bfloat16,float}
ResourceApplyProximalAdagrad T={bfloat16,float}
ResourceApplyProximalGradientDescent T={bfloat16,float}
ResourceApplyRMSProp T={bfloat16,float}
ResourceGather Tindices={int32,int64}
dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
ResourceScatterAdd Tindices={int32,int64}
dtype={bfloat16,complex64,float,int32,int64,uint32,uint64}
ResourceScatterDiv Tindices={int32,int64}
dtype={bfloat16,complex64,float,int32,int64,uint32,uint64}
ResourceScatterMax Tindices={int32,int64}
dtype={bfloat16,complex64,float,int32,int64,uint32,uint64}
ResourceScatterMin Tindices={int32,int64}
dtype={bfloat16,complex64,float,int32,int64,uint32,uint64}
ResourceScatterMul Tindices={int32,int64}
dtype={bfloat16,complex64,float,int32,int64,uint32,uint64}
ResourceScatterNdAdd Tindices={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
ResourceScatterNdSub Tindices={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
ResourceScatterNdUpdate Tindices={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
ResourceScatterSub Tindices={int32,int64}
dtype={bfloat16,complex64,float,int32,int64,uint32,uint64}
ResourceScatterUpdate Tindices={int32,int64}
dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
ResourceStridedSliceAssign Index={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Reverse T={bool,complex64,float,int32,int64}
ReverseSequence Tlen={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
ReverseV2 T={bfloat16,bool,complex64,float,int32,int64}
Tidx={int32,int64}
RightShift T={int32,int64,uint32,uint64}
Rint T={bfloat16,float}
Round T={bfloat16,complex64,float,int32,int64}
Rsqrt T={bfloat16,complex64,float}
RsqrtGrad T={bfloat16,complex64,float}
ScatterNd Tindices={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Select T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Selu T={bfloat16,float}
SeluGrad T={bfloat16,float}
SendTPUEmbeddingGradients
Shape out_type={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
ShapeN out_type={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Sigmoid T={bfloat16,complex64,float}
SigmoidGrad T={bfloat16,complex64,float}
Sign T={bfloat16,complex64,float,int32,int64}
Sin T={bfloat16,complex64,float}
Sinh T={bfloat16,complex64,float}
Size out_type={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Slice Index={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Snapshot T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Softmax T={bfloat16,float}
SoftmaxCrossEntropyWithLogits T={bfloat16,float}
Softplus T={bfloat16,float}
SoftplusGrad T={bfloat16,float}
Softsign T={bfloat16,float}
SoftsignGrad T={bfloat16,float}
SpaceToBatch Tpaddings={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
SpaceToBatchND Tblock_shape={int32,int64}
Tpaddings={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
SpaceToDepth T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
SparseMatMul Tb={bfloat16,float}
Ta={bfloat16,float}
SparseSoftmaxCrossEntropyWithLogits Tlabels={int32,int64}
T={bfloat16,float}
SparseToDense Tindices={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Split T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
SplitV Tlen={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Sqrt T={bfloat16,complex64,float}
SqrtGrad T={bfloat16,complex64,float}
Square T={bfloat16,complex64,float,int32,int64}
SquaredDifference T={bfloat16,complex64,float,int32,int64}
Squeeze T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
StackCloseV2
StackPopV2 elem_type={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
StackPushV2 T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
StackV2 elem_type={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
StatelessIf Tout={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64}
Tin={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64}
Tcond={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64}
StatelessMultinomial output_dtype={int32,int64}
Tseed={int32}
T={bfloat16,float}
StatelessRandomNormal Tseed={int32}
T={int32,int64}
dtype={bfloat16,float}
StatelessRandomUniform Tseed={int32}
T={int32,int64}
dtype={bfloat16,float}
StatelessRandomUniformInt Tseed={int32}
T={int32,int64}
dtype={int32,int64}
StatelessTruncatedNormal Tseed={int32}
T={int32,int64}
dtype={bfloat16,float}
StatelessWhile T={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64}
StopGradient T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
StridedSlice Index={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
StridedSliceGrad Index={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Sub T={bfloat16,complex64,float,int32,int64}
Sum Tidx={int32,int64}
T={bfloat16,complex64,float,int32,int64,uint32,uint64}
SymbolicGradient Tout={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Tin={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
TPUEmbeddingActivations
Tan T={bfloat16,complex64,float,int32,int64}
Tanh T={bfloat16,complex64,float}
TanhGrad T={bfloat16,complex64,float}
TensorArrayCloseV3
TensorArrayConcatV3 dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
TensorArrayGatherV3 dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
TensorArrayGradV3
TensorArrayReadV3 dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
TensorArrayScatterV3 T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
TensorArraySizeV3
TensorArraySplitV3 T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
TensorArrayV3 dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
TensorArrayWriteV3 T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
TensorListElementShape shape_type={int32,int64}
TensorListPopBack element_dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
TensorListPushBack element_dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
TensorListReserve shape_type={int32,int64}
element_dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Tile Tmultiples={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
TopKV2 T={bfloat16,float,int32,uint32}
Transpose Tperm={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
TruncateDiv T={bfloat16,complex64,float,int32,int64}
TruncateMod T={bfloat16,float,int32,int64}
TruncatedNormal T={int32,int64}
dtype={float}
Unpack T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
UnsortedSegmentMax Tnumsegments={int32,int64}
Tindices={int32,int64}
T={bfloat16,float,int32,int64,uint32,uint64}
UnsortedSegmentMin Tnumsegments={int32,int64}
Tindices={int32,int64}
T={bfloat16,float,int32,int64,uint32,uint64}
UnsortedSegmentProd Tnumsegments={int32,int64}
Tindices={int32,int64}
T={bfloat16,complex64,float,int32,int64,uint32,uint64}
UnsortedSegmentSum Tnumsegments={int32,int64}
Tindices={int32,int64}
T={bfloat16,complex64,float,int32,int64,uint32,uint64}
VarIsInitializedOp
VariableShape out_type={int32,int64}
While T={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64}
Xdivy T={complex64,float}
XlaBroadcastHelper Tindices={int32,int64}
T={bfloat16,complex64,float,int32,int64,uint32,uint64}
XlaConv Tindices={int32,int64}
T={bfloat16,complex64,float,int32,int64,uint32,uint64}
XlaDequantize
XlaDot T={bfloat16,complex64,float,int32,int64,uint32,uint64}
XlaDynamicSlice Tindices={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
XlaDynamicUpdateSlice Tindices={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
XlaHostCompute Toutputs={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Tinputs={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
XlaIf Tout={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64}
Tin={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64}
Tcond={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64}
XlaKeyValueSort V={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
K={bfloat16,float,int32,int64,uint32,uint64}
XlaPad Tindices={int32,int64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
XlaRecv dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
XlaRecvFromHost Toutput={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
XlaReduce T={bfloat16,complex64,float,int32,int64,uint32,uint64}
XlaReduceWindow Tindices={int32,int64}
T={bfloat16,complex64,float,int32,int64,uint32,uint64}
XlaSelectAndScatter Tindices={int32,int64}
T={bfloat16,complex64,float,int32,int64,uint32,uint64}
XlaSend T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
XlaSendToHost Tinput={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
XlaSort T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
XlaWhile T={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64}
Xlogy T={complex64,float}
ZerosLike T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
_Arg T={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64}
_ArrayToList out_types={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
_ListToArray T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
Tin={bfloat16,bool,complex64,float,int32,int64,uint32,uint64}
_Retval T={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64}