Available TensorFlow operations
This page lists the TensorFlow Python APIs and graph operators available on Cloud TPU.
Available Python APIs
The following list is a guide to the set of available TensorFlow Python APIs. This list is not exhaustive: library functions that are not on this list may still work if they are composed of available primitives.
See the performance guide for recommendations about specific operators.
| Module | Available Python API | Comments |
| --- | --- | --- |
| tf | tf.abs | |
| | tf.acosh | |
| | tf.add | |
| | tf.add_n | |
| | tf.angle | |
| | tf.arg_max | The dimension argument must be a compile-time constant. |
| | tf.arg_min | The dimension argument must be a compile-time constant. |
| | tf.asinh | |
| | tf.assign | Available only for resource variables. |
| | tf.assign_add | Available only for resource variables. |
| | tf.assign_sub | Available only for resource variables. |
| | tf.atan | |
| | tf.atan2 | |
| | tf.atanh | |
| | tf.batch_to_space | The crops and block_shape arguments must be compile-time constants. |
| | tf.batch_to_space_nd | The crops argument must be a compile-time constant. |
| | tf.broadcast_dynamic_shape | |
| | tf.broadcast_static_shape | |
| | tf.case | Experimental (control flow). May not yet work reliably. |
| | tf.cast | |
| | tf.ceil | |
| | tf.cholesky | Experimental. There may be issues with numeric precision. |
| | tf.cholesky_solve | Experimental. There may be issues with numeric precision. |
| | tf.clip_by_average_norm | |
| | tf.clip_by_global_norm | |
| | tf.clip_by_norm | |
| | tf.clip_by_value | |
| | tf.complex | |
| | tf.concat | concat_dim must be a compile-time constant. |
| | tf.cond | Experimental (control flow). May not yet work reliably. |
| | tf.conj | |
| | tf.constant | |
| | tf.convert_to_tensor | |
| | tf.cos | |
| | tf.cosh | |
| | tf.cross | |
| | tf.cumprod | axis must be a compile-time constant. |
| | tf.cumsum | axis must be a compile-time constant. |
| | tf.depth_to_space | |
| | tf.diag | |
| | tf.diag_part | |
| | tf.div | int32 division is slower than other types. |
| | tf.divide | int32 division is slower than other types. |
| | tf.dynamic_stitch | indices must be a compile-time constant. |
| | tf.einsum | |
| | tf.equal | |
| | tf.erf | |
| | tf.erfc | |
| | tf.exp | |
| | tf.expand_dims | dims must be a compile-time constant. |
| | tf.expm1 | |
| | tf.extract_image_patches | |
| | tf.eye | |
| | tf.fake_quant_with_min_max_args | |
| | tf.fake_quant_with_min_max_args_gradient | |
| | tf.fake_quant_with_min_max_vars | |
| | tf.fake_quant_with_min_max_vars_gradient | |
| | tf.fft | |
| | tf.fft2d | |
| | tf.fft3d | |
| | tf.fill | The dims argument must be a compile-time constant. |
| | tf.floor | |
| | tf.floordiv | |
| | tf.floormod | |
| | tf.foldl | Experimental (control flow). |
| | tf.foldr | Experimental (control flow). |
| | tf.gather | axis must be a compile-time constant. |
| | tf.gather_nd | |
| | tf.greater | |
| | tf.greater_equal | |
| | tf.hessians | Experimental (control flow). |
| | tf.identity | |
| | tf.identity_n | |
| | tf.ifft | |
| | tf.ifft2d | |
| | tf.ifft3d | |
| | tf.imag | |
| | tf.invert_permutation | The x argument must be a compile-time constant. |
| | tf.is_finite | |
| | tf.is_inf | |
| | tf.is_nan | |
| | tf.is_non_decreasing | |
| | tf.is_strictly_increasing | |
| | tf.less | |
| | tf.less_equal | |
| | tf.linspace | The start, stop, and num arguments must be compile-time constants. |
| | tf.log | |
| | tf.log1p | |
| | tf.log_sigmoid | |
| | tf.logical_and | |
| | tf.logical_or | |
| | tf.logical_not | |
| | tf.logical_xor | |
| | tf.matmul | Uses a bfloat16 matmul with float32 accumulation. |
| | tf.matrix_band_part | |
| | tf.matrix_diag | |
| | tf.matrix_diag_part | |
| | tf.matrix_set_diag | |
| | tf.matrix_triangular_solve | Experimental. There may be issues with numeric precision. |
| | tf.maximum | |
| | tf.meshgrid | |
| | tf.minimum | |
| | tf.mod | |
| | tf.multinomial | The num_samples argument must be a compile-time constant. |
| | tf.multiply | |
| | tf.negative | |
| | tf.no_op | |
| | tf.norm | |
| | tf.not_equal | |
| | tf.one_hot | depth must be a compile-time constant. |
| | tf.ones | |
| | tf.ones_like | |
| | tf.pad | The paddings argument must be a compile-time constant. The gradient of REFLECT padding is not yet available. |
| | tf.pow | |
| | tf.random_normal | shape must be a compile-time constant. |
| | tf.random_uniform | shape must be a compile-time constant. |
| | tf.range | The start, limit, and delta arguments must be compile-time constants. |
| | tf.rank | |
| | tf.real | |
| | tf.realdiv | |
| | tf.reciprocal | |
| | tf.reduce_all | axis must be a compile-time constant. |
| | tf.reduce_any | axis must be a compile-time constant. |
| | tf.reduce_logsumexp | |
| | tf.reduce_max | axis must be a compile-time constant. |
| | tf.reduce_min | axis must be a compile-time constant. |
| | tf.reduce_prod | axis must be a compile-time constant. |
| | tf.reduce_sum | axis must be a compile-time constant. |
| | tf.reshape | The shape argument must be a compile-time constant. |
| | tf.reverse | The dims argument must be a compile-time constant. |
| | tf.reverse_sequence | |
| | tf.reverse_v2 | The axis argument must be a compile-time constant. |
| | tf.rint | |
| | tf.round | |
| | tf.rsqrt | |
| | tf.saturate_cast | |
| | tf.scalar_mul | |
| | tf.scan | Experimental (control flow). |
| | tf.scatter_nd | |
| | tf.sequence_mask | |
| | tf.shape | |
| | tf.shape_n | |
| | tf.sigmoid | |
| | tf.sign | |
| | tf.sin | |
| | tf.sinh | |
| | tf.size | |
| | tf.slice | size must be a compile-time constant. In addition, begin must be a compile-time constant or size must be non-negative. Backpropagation is supported only when begin and size are compile-time constants. |
| | tf.space_to_batch | paddings and block_shape must be compile-time constants. |
| | tf.space_to_batch_nd | paddings must be a compile-time constant. |
| | tf.space_to_depth | |
| | tf.split | axis must be a compile-time constant. |
| | tf.sqrt | |
| | tf.square | |
| | tf.squared_difference | |
| | tf.squeeze | |
| | tf.stack | |
| | tf.stop_gradient | |
| | tf.strided_slice | |
| | tf.tan | |
| | tf.tanh | |
| | tf.tensordot | |
| | tf.tile | The multiples argument must be a compile-time constant. |
| | tf.to_bfloat16 | |
| | tf.to_float | |
| | tf.to_int32 | |
| | tf.to_int64 | int64 support is limited. |
| | tf.trace | |
| | tf.transpose | The perm argument must be a compile-time constant. |
| | tf.truediv | |
| | tf.truncated_normal | shape must be a compile-time constant. |
| | tf.truncatediv | |
| | tf.truncatemod | |
| | tf.unsorted_segment_sum | |
| | tf.unstack | |
| | tf.where | Both x and y must be non-None. If both were None, the operator would not have a static shape. |
| | tf.while_loop | To compute the gradient of a while loop, the maximum_iterations argument must be passed. |
| | tf.zeros | |
| | tf.zeros_like | |
| | tf.Tensor.__getitem__ | The begin, end, and strides of a slice must be compile-time constants. |
| tf.bitwise | tf.bitwise_and | |
| | tf.bitwise_or | |
| | tf.bitwise_invert | |
| tf.contrib.stateless | tf.contrib.stateless.stateless_random_normal | |
| | tf.contrib.stateless.stateless_random_uniform | |
| tf.image | tf.image.adjust_brightness | |
| | tf.image.adjust_contrast | |
| | tf.image.adjust_gamma | |
| | tf.image.adjust_hue | |
| | tf.image.adjust_saturation | |
| | tf.image.central_crop | The crop fraction must be a compile-time constant. |
| | tf.image.convert_image_dtype | |
| | tf.image.flip_left_right | |
| | tf.image.flip_up_down | |
| | tf.image.grayscale_to_rgb | |
| | tf.image.hsv_to_rgb | |
| | tf.image.resize_bilinear | Only align_corners=True is available. size must be a compile-time constant. |
| | tf.image.random_brightness | |
| | tf.image.random_contrast | |
| | tf.image.random_flip_left_right | |
| | tf.image.random_flip_up_down | |
| | tf.image.random_hue | |
| | tf.image.random_saturation | |
| | tf.image.rgb_to_hsv | |
| | tf.image.rgb_to_grayscale | |
| | tf.image.rot90 | |
| | tf.image.total_variation | |
| | tf.image.transpose_image | |
| tf.layers | tf.layers.average_pooling1d | |
| | tf.layers.average_pooling2d | |
| | tf.layers.average_pooling3d | |
| | tf.layers.batch_normalization | |
| | tf.layers.conv1d | |
| | tf.layers.conv2d | |
| | tf.layers.conv2d_transpose | |
| | tf.layers.conv3d | |
| | tf.layers.conv3d_transpose | |
| | tf.layers.dense | |
| | tf.layers.dropout | |
| | tf.layers.flatten | |
| | tf.layers.max_pooling1d | |
| | tf.layers.max_pooling2d | |
| | tf.layers.max_pooling3d | |
| | tf.layers.separable_conv2d | |
| tf.nn | tf.nn.atrous_conv2d | |
| | tf.nn.atrous_conv2d_transpose | |
| | tf.nn.avg_pool | |
| | tf.nn.avg_pool3d | |
| | tf.nn.batch_normalization | |
| | tf.nn.bias_add | |
| | tf.nn.conv1d | |
| | tf.nn.conv2d | |
| | tf.nn.conv2d_backprop_filter | |
| | tf.nn.conv2d_backprop_input | |
| | tf.nn.conv2d_transpose | |
| | tf.nn.conv3d | |
| | tf.nn.conv3d_backprop_filter | |
| | tf.nn.conv3d_backprop_input | |
| | tf.nn.conv3d_transpose | |
| | tf.nn.convolution | |
| | tf.nn.crelu | |
| | tf.nn.depthwise_conv2d | |
| | tf.nn.depthwise_conv2d_native | |
| | tf.nn.depthwise_conv2d_native_backprop_filter | |
| | tf.nn.depthwise_conv2d_native_backprop_input | |
| | tf.nn.dropout | |
| | tf.nn.dynamic_rnn | Experimental. |
| | tf.nn.elu | |
| | tf.nn.fused_batch_norm | |
| | tf.nn.l2_loss | |
| | tf.nn.l2_normalize | |
| | tf.nn.leaky_relu | |
| | tf.nn.local_response_normalization | |
| | tf.nn.log_poisson_loss | |
| | tf.nn.log_softmax | |
| | tf.nn.max_pool | |
| | tf.nn.max_pool3d | |
| | tf.nn.moments | |
| | tf.nn.normalize_moments | |
| | tf.nn.pool | |
| | tf.nn.relu | |
| | tf.nn.relu6 | |
| | tf.nn.relu_layer | |
| | tf.nn.selu | |
| | tf.nn.separable_conv2d | |
| | tf.nn.sigmoid_cross_entropy_with_logits | |
| | tf.nn.softmax | |
| | tf.nn.softmax_cross_entropy_with_logits | |
| | tf.nn.softplus | |
| | tf.nn.softsign | |
| | tf.nn.sparse_softmax_cross_entropy_with_logits | |
| | tf.nn.static_bidirectional_rnn | Experimental. |
| | tf.nn.static_rnn | Experimental. |
| | tf.nn.weighted_cross_entropy_with_logits | Experimental. |
| | tf.nn.weighted_moments | |
| | tf.nn.with_space_to_batch | |
| | tf.nn.xw_plus_b | |
| | tf.nn.zero_fraction | |
| tf.spectral | tf.spectral.fft | |
| | tf.spectral.fft2d | |
| | tf.spectral.fft3d | |
| | tf.spectral.ifft | |
| | tf.spectral.ifft2d | |
| | tf.spectral.ifft3d | |
| | tf.spectral.irfft | fft_length must be a compile-time constant. |
| | tf.spectral.irfft2d | fft_length must be a compile-time constant. |
| | tf.spectral.irfft3d | fft_length must be a compile-time constant. |
| | tf.spectral.rfft | fft_length must be a compile-time constant. |
| | tf.spectral.rfft2d | fft_length must be a compile-time constant. |
| | tf.spectral.rfft3d | fft_length must be a compile-time constant. |
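The tf.matmul entry above notes that matrix multiplies run as a bfloat16 matmul with float32 accumulation. As a rough illustration of what bfloat16 means for precision, here is a minimal pure-Python sketch (no TensorFlow required, and using simple truncation rather than the round-to-nearest behavior real hardware applies): bfloat16 keeps float32's sign bit and 8 exponent bits but only 7 mantissa bits.

```python
import struct

def to_bfloat16(x: float) -> float:
    # Round a Python float to float32, then drop the low 16 bits of its
    # bit pattern. What remains (1 sign + 8 exponent + 7 mantissa bits)
    # is the bfloat16 value, returned here as an ordinary float.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]
```

The dynamic range matches float32 (hence no extra overflow risk), but fine mantissa detail vanishes: for example, `to_bfloat16(1.0 + 2**-10)` collapses back to `1.0`, which is why accumulating the products in float32 matters.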
Unavailable Python APIs
This list is not exhaustive. The following operations are not available on Cloud TPU:
| Module | Unavailable Python API | Comments |
| --- | --- | --- |
| tf | tf.accumulate_n | Uses Ref variables. |
| | tf.acos | |
| | tf.asin | |
| | tf.betainc | |
| | tf.bitcast | |
| | tf.add_check_numerics_ops | Programs containing numeric-check operators should run, but the numeric-check operator is currently ignored. |
| | tf.assert_... | Programs containing assertions will run, but the assertions are ignored. |
| | tf.check_numerics | Programs containing numeric-check operators should run, but the numeric-check operator is currently ignored. |
| | tf.confusion_matrix | |
| | tf.count_nonzero | Uses an int64 reduction. |
| | tf.count_up_to | |
| | tf.create_partitioned_variables | |
| | tf.dequantize | |
| | tf.digamma | |
| | tf.dynamic_partition | |
| | tf.edit_distance | |
| | tf.fake_quant_with_min_max_vars_per_channel | |
| | tf.fake_quant_with_min_max_vars_per_channel_gradient | |
| | tf.histogram_fixed_width | |
| | tf.igamma | |
| | tf.igammac | |
| | tf.lbeta | |
| | tf.lgamma | |
| | tf.matrix_determinant | |
| | tf.matrix_inverse | |
| | tf.matrix_solve | |
| | tf.matrix_solve_ls | |
| | tf.polygamma | |
| | tf.py_func | |
| | tf.qr | |
| | tf.quantize_v2 | |
| | tf.quantized_concat | |
| | tf.random_crop | |
| | tf.random_gamma | |
| | tf.random_poisson | |
| | tf.random_shuffle | |
| | tf.scatter_add | |
| | tf.scatter_div | |
| | tf.scatter_mul | |
| | tf.scatter_nd_add | |
| | tf.scatter_nd_sub | |
| | tf.scatter_nd_update | |
| | tf.segment_mean | |
| | tf.segment_max | |
| | tf.segment_min | |
| | tf.segment_prod | |
| | tf.segment_sum | |
| | tf.self_adjoint_eig | |
| | tf.self_adjoint_eigvals | |
| | tf.setdiff1d | |
| | tf.sparse_... | |
| | tf.string_... | |
| | tf.substr | |
| | tf.svd | |
| | tf.to_double | |
| | tf.unique | |
| | tf.unsorted_segment_max | |
| | tf.zeta | |
| tf.bitwise | tf.bitwise.bitwise_xor | |
| tf.contrib.stateless | tf.contrib.stateless.stateless_truncated_normal | |
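Some unavailable ops have straightforward workarounds built from available primitives. For instance, tf.count_nonzero is unavailable because it reduces in int64, but the same result can be computed by chaining tf.not_equal, tf.cast, and tf.reduce_sum in int32 — all of which are on the available list. As an illustrative sketch, here is the equivalent computation written with NumPy so it runs anywhere (a TF graph would chain the corresponding ops the same way):

```python
import numpy as np

def count_nonzero_int32(x):
    # Mirrors the available-op chain
    #   tf.reduce_sum(tf.cast(tf.not_equal(x, 0), tf.int32))
    # which sidesteps tf.count_nonzero's int64 reduction.
    return int(np.sum((np.asarray(x) != 0).astype(np.int32)))
```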
Available graph operators
| Operator | Type constraint |
| --- | --- |
| Abs | T={bfloat16,float,int32,int64} |
| Acos | T={bfloat16,complex64,float,int32,int64} |
| Acosh | T={bfloat16,complex64,float} |
| Add | T={bfloat16,complex64,float,int32,int64} |
| AddN | T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| AdjustContrastv2 | T={float} |
| AdjustHue | T={float} |
| AdjustSaturation | T={float} |
| All | Tidx={int32,int64} |
| AllToAll | T={bfloat16,float} |
| Angle | Tout={float} T={complex64} |
| Any | Tidx={int32,int64} |
| ApproximateEqual | T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| ArgMax | Tidx={int32,int64} output_type={int32,int64} T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| ArgMin | Tidx={int32,int64} output_type={int32,int64} T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| Asin | T={bfloat16,complex64,float,int32,int64} |
| Asinh | T={bfloat16,complex64,float} |
| Assert | T={bfloat16,bool,complex64,float,int32,int64,string,uint32,uint64} |
| AssignAddVariableOp | dtype={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| AssignSubVariableOp | dtype={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| AssignVariableOp | dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Atan | T={bfloat16,complex64,float,int32,int64} |
| Atan2 | T={bfloat16,float} |
| Atanh | T={bfloat16,complex64,float} |
| AvgPool | T={bfloat16,float} |
| AvgPool3D | T={bfloat16,float} |
| AvgPool3DGrad | T={bfloat16,float} |
| AvgPoolGrad | T={bfloat16,float} |
| BatchMatMul | T={bfloat16,complex64,float,int32,int64} |
| BatchToSpace | Tidx={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| BatchToSpaceND | Tcrops={int32,int64} Tblock_shape={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| BiasAdd | T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| BiasAddGrad | T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| BiasAddV1 | T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| Bitcast | type={bfloat16,complex64,float,int32,int64,uint32,uint64} T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| BitwiseAnd | T={int32,int64,uint32,uint64} |
| BitwiseOr | T={int32,int64,uint32,uint64} |
| BitwiseXor | T={int32,int64,uint32,uint64} |
| BroadcastArgs | T={int32,int64} |
| BroadcastGradientArgs | T={int32,int64} |
| BroadcastTo | Tidx={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Bucketize | T={float,int32,int64} |
| Cast | DstT={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} SrcT={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Ceil | T={bfloat16,float} |
| CheckNumerics | T={bfloat16,float} |
| Cholesky | T={float} |
| ClipByValue | T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| CollectivePermute | T={bfloat16,float} |
| Complex | Tout={complex64} T={float} |
| ComplexAbs | Tout={float} T={complex64} |
| Concat | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| ConcatOffset | |
| ConcatV2 | Tidx={int32} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Conj | T={complex64} |
| ConjugateTranspose | Tperm={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Const | dtype={bfloat16,bool,complex64,float,int32,int64,string,uint32,uint64} |
| ControlTrigger | |
| Conv2D | T={bfloat16,float} |
| Conv2DBackpropFilter | T={bfloat16,float} |
| Conv2DBackpropInput | T={bfloat16,float} |
| Conv3D | T={bfloat16,float} |
| Conv3DBackpropFilterV2 | T={bfloat16,float} |
| Conv3DBackpropInputV2 | Tshape={int32,int64} T={bfloat16,float} |
| Cos | T={bfloat16,complex64,float} |
| Cosh | T={bfloat16,complex64,float} |
| Cross | T={bfloat16,float,int32,int64,uint32,uint64} |
| CrossReplicaSum | T={bfloat16,float} |
| Cumprod | Tidx={int32,int64} T={bfloat16,float,int32} |
| Cumsum | Tidx={int32,int64} T={bfloat16,float,int32} |
| DataFormatVecPermute | T={int32,int64} |
| DepthToSpace | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| DepthwiseConv2dNative | T={bfloat16,float} |
| DepthwiseConv2dNativeBackpropFilter | T={bfloat16,float} |
| DepthwiseConv2dNativeBackpropInput | T={bfloat16,float} |
| Diag | T={bfloat16,complex64,float,int32,int64} |
| DiagPart | T={bfloat16,complex64,float,int32,int64} |
| Digamma | T={bfloat16,float} |
| Div | T={bfloat16,complex64,float,int32,int64} |
| DivNoNan | T={float} |
| DynamicStitch | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Elu | T={bfloat16,float} |
| EluGrad | T={bfloat16,float} |
| Empty | dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| EmptyTensorList | shape_type={int32,int64} element_dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Equal | T={bfloat16,bool,complex64,float,int32,int64} |
| Erf | T={bfloat16,float} |
| Erfc | T={bfloat16,float} |
| Exp | T={bfloat16,complex64,float} |
| ExpandDims | Tdim={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Expm1 | T={bfloat16,complex64,float} |
| ExtractImagePatches | T={bfloat16,float,int32,int64,uint32,uint64} |
| FFT | Tcomplex={complex64} |
| FFT2D | Tcomplex={complex64} |
| FFT3D | Tcomplex={complex64} |
| FakeParam | dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| FakeQuantWithMinMaxArgs | |
| FakeQuantWithMinMaxArgsGradient | |
| FakeQuantWithMinMaxVars | |
| FakeQuantWithMinMaxVarsGradient | |
| Fill | index_type={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Floor | T={bfloat16,float} |
| FloorDiv | T={bfloat16,complex64,float,int32,int64} |
| FloorMod | T={bfloat16,float,int32,int64} |
| FusedBatchNorm | T={float} |
| FusedBatchNormGrad | T={float} |
| FusedBatchNormGradV2 | U={float} T={bfloat16,float} |
| FusedBatchNormV2 | U={float} T={bfloat16,float} |
| Gather | Tindices={int32,int64} Tparams={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| GatherNd | Tindices={int32,int64} Tparams={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| GatherV2 | Taxis={int32,int64} Tindices={int32,int64} Tparams={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| GetItem | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Greater | T={bfloat16,float,int32,int64,uint32,uint64} |
| GreaterEqual | T={bfloat16,float,int32,int64,uint32,uint64} |
| HSVToRGB | T={bfloat16,float} |
| IFFT | Tcomplex={complex64} |
| IFFT2D | Tcomplex={complex64} |
| IFFT3D | Tcomplex={complex64} |
| IRFFT | |
| IRFFT2D | |
| IRFFT3D | |
| Identity | T={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64} |
| IdentityN | T={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64} |
| If | Tout={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64} Tin={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64} Tcond={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64} |
| Imag | Tout={float} T={complex64} |
| InfeedDequeue | dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| InfeedDequeueTuple | dtypes={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| InplaceAdd | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| InplaceUpdate | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Inv | T={bfloat16,complex64,float,int32,int64} |
| Invert | T={int32,int64,uint32,uint64} |
| InvertPermutation | T={int32} |
| IsFinite | T={bfloat16,float} |
| IsInf | T={bfloat16,float} |
| IsNan | T={bfloat16,float} |
| L2Loss | T={bfloat16,float} |
| LRN | T={bfloat16,float} |
| LRNGrad | T={bfloat16,float} |
| LeakyRelu | T={bfloat16,float} |
| LeakyReluGrad | T={bfloat16,float} |
| LeftShift | T={int32,int64,uint32,uint64} |
| Less | T={bfloat16,float,int32,int64,uint32,uint64} |
| LessEqual | T={bfloat16,float,int32,int64,uint32,uint64} |
| Lgamma | T={bfloat16,float} |
| LinSpace | Tidx={int32,int64} T={bfloat16,float} |
| ListDiff | out_idx={int32,int64} T={int32,int64} |
| Log | T={bfloat16,complex64,float} |
| Log1p | T={bfloat16,complex64,float} |
| LogSoftmax | T={bfloat16,float} |
| LogicalAnd | |
| LogicalNot | |
| LogicalOr | |
| MatMul | T={bfloat16,complex64,float} |
| MatrixBandPart | Tindex={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| MatrixDiag | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| MatrixDiagPart | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| MatrixSetDiag | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| MatrixTriangularSolve | T={complex64,float} |
| Max | Tidx={int32,int64} T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| MaxPool | T={bfloat16,float,int32,int64} |
| MaxPool3D | T={bfloat16,float} |
| MaxPool3DGrad | TInput={bfloat16,float} T={bfloat16,float} |
| MaxPool3DGradGrad | T={float} |
| MaxPoolGrad | T={bfloat16,float,int32,int64,uint32,uint64} |
| MaxPoolGradGrad | T={float} |
| MaxPoolGradGradV2 | T={float} |
| MaxPoolGradV2 | T={bfloat16,float,int32,int64,uint32,uint64} |
| MaxPoolV2 | T={bfloat16,float,int32,int64} |
| Maximum | T={bfloat16,float,int32,int64} |
| Mean | Tidx={int32,int64} T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| Min | Tidx={int32,int64} T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| Minimum | T={bfloat16,float,int32,int64} |
| MirrorPad | Tpaddings={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Mod | T={bfloat16,float,int32,int64} |
| Mul | T={bfloat16,complex64,float,int32,int64} |
| Multinomial | output_dtype={int32,int64} T={bfloat16,float,int32,int64,uint32,uint64} |
| Neg | T={bfloat16,complex64,float,int32,int64} |
| NoOp | |
| NonMaxSuppressionV4 | T={float} |
| NotEqual | T={bfloat16,bool,complex64,float,int32,int64} |
| OneHot | TI={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| OnesLike | T={bfloat16,bool,complex64,float,int32,int64} |
| OutfeedEnqueue | dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| OutfeedEnqueueTuple | dtypes={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Pack | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Pad | Tpaddings={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| PadV2 | Tpaddings={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| ParallelDynamicStitch | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| PlaceholderWithDefault | dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Pow | T={bfloat16,complex64,float,int32,int64} |
| PreventGradient | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Prod | Tidx={int32,int64} T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| Qr | T={float} |
| QuantizeAndDequantizeV2 | T={bfloat16,float} |
| QuantizeAndDequantizeV3 | T={bfloat16,float} |
| RFFT | |
| RFFT2D | |
| RFFT3D | |
| RGBToHSV | T={bfloat16,float} |
| RandomShuffle | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| RandomStandardNormal | T={int32,int64} dtype={bfloat16,float} |
| RandomUniform | T={int32,int64} dtype={bfloat16,float} |
| RandomUniformInt | T={int32,int64} Tout={int32,int64} |
| Range | Tidx={bfloat16,float,int32,int64} |
| Rank | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| ReadVariableOp | dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Real | Tout={float} T={complex64} |
| RealDiv | T={bfloat16,complex64,float,int32,int64} |
| Reciprocal | T={bfloat16,complex64,float,int32,int64} |
| ReciprocalGrad | T={bfloat16,complex64,float} |
| RecvTPUEmbeddingActivations | |
| Relu | T={bfloat16,float,int32,int64,uint32,uint64} |
| Relu6 | T={bfloat16,float,int32,int64,uint32,uint64} |
| Relu6Grad | T={bfloat16,float,int32,int64,uint32,uint64} |
| ReluGrad | T={bfloat16,float,int32,int64,uint32,uint64} |
| Reshape | Tshape={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| ResizeBilinear | T={bfloat16,float,int32,int64} |
| ResizeBilinearGrad | T={bfloat16,float} |
| ResizeNearestNeighbor | T={float,int32,int64} |
| ResourceApplyAdaMax | T={bfloat16,float} |
| ResourceApplyAdadelta | T={bfloat16,float} |
| ResourceApplyAdagrad | T={bfloat16,float} |
| ResourceApplyAdagradDA | T={bfloat16,float} |
| ResourceApplyAdam | T={bfloat16,float} |
| ResourceApplyAddSign | T={bfloat16,float} |
| ResourceApplyCenteredRMSProp | T={bfloat16,float} |
| ResourceApplyFtrl | T={bfloat16,float} |
| ResourceApplyFtrlV2 | T={bfloat16,float} |
| ResourceApplyGradientDescent | T={bfloat16,float} |
| ResourceApplyKerasMomentum | T={bfloat16,float} |
| ResourceApplyMomentum | T={bfloat16,float} |
| ResourceApplyPowerSign | T={bfloat16,float} |
| ResourceApplyProximalAdagrad | T={bfloat16,float} |
| ResourceApplyProximalGradientDescent | T={bfloat16,float} |
| ResourceApplyRMSProp | T={bfloat16,float} |
| ResourceGather | Tindices={int32,int64} dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| ResourceScatterAdd | Tindices={int32,int64} dtype={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| ResourceScatterDiv | Tindices={int32,int64} dtype={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| ResourceScatterMax | Tindices={int32,int64} dtype={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| ResourceScatterMin | Tindices={int32,int64} dtype={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| ResourceScatterMul | Tindices={int32,int64} dtype={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| ResourceScatterNdAdd | Tindices={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| ResourceScatterNdSub | Tindices={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| ResourceScatterNdUpdate | Tindices={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| ResourceScatterSub | Tindices={int32,int64} dtype={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| ResourceScatterUpdate | Tindices={int32,int64} dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| ResourceStridedSliceAssign | Index={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Reverse | T={bool,complex64,float,int32,int64} |
| ReverseSequence | Tlen={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| ReverseV2 | T={bfloat16,bool,complex64,float,int32,int64} Tidx={int32,int64} |
| RightShift | T={int32,int64,uint32,uint64} |
| Rint | T={bfloat16,float} |
| Round | T={bfloat16,complex64,float,int32,int64} |
| Rsqrt | T={bfloat16,complex64,float} |
| RsqrtGrad | T={bfloat16,complex64,float} |
| ScatterNd | Tindices={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Select | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Selu | T={bfloat16,float} |
| SeluGrad | T={bfloat16,float} |
| SendTPUEmbeddingGradients | |
| Shape | out_type={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| ShapeN | out_type={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Sigmoid | T={bfloat16,complex64,float} |
| SigmoidGrad | T={bfloat16,complex64,float} |
| Sign | T={bfloat16,complex64,float,int32,int64} |
| Sin | T={bfloat16,complex64,float} |
| Sinh | T={bfloat16,complex64,float} |
| Size | out_type={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Slice | Index={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Snapshot | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Softmax | T={bfloat16,float} |
| SoftmaxCrossEntropyWithLogits | T={bfloat16,float} |
| Softplus | T={bfloat16,float} |
| SoftplusGrad | T={bfloat16,float} |
| Softsign | T={bfloat16,float} |
| SoftsignGrad | T={bfloat16,float} |
| SpaceToBatch | Tpaddings={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| SpaceToBatchND | Tblock_shape={int32,int64} Tpaddings={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| SpaceToDepth | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| SparseMatMul | Tb={bfloat16,float} Ta={bfloat16,float} |
| SparseSoftmaxCrossEntropyWithLogits | Tlabels={int32,int64} T={bfloat16,float} |
| SparseToDense | Tindices={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Split | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| SplitV | Tlen={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Sqrt | T={bfloat16,complex64,float} |
| SqrtGrad | T={bfloat16,complex64,float} |
| Square | T={bfloat16,complex64,float,int32,int64} |
| SquaredDifference | T={bfloat16,complex64,float,int32,int64} |
| Squeeze | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| StackCloseV2 | |
| StackPopV2 | elem_type={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| StackPushV2 | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| StackV2 | elem_type={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| StatelessIf | Tout={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64} Tin={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64} Tcond={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64} |
| StatelessMultinomial | output_dtype={int32,int64} Tseed={int32} T={bfloat16,float} |
| StatelessRandomNormal | Tseed={int32} T={int32,int64} dtype={bfloat16,float} |
| StatelessRandomUniform | Tseed={int32} T={int32,int64} dtype={bfloat16,float} |
| StatelessRandomUniformInt | Tseed={int32} T={int32,int64} dtype={int32,int64} |
| StatelessTruncatedNormal | Tseed={int32} T={int32,int64} dtype={bfloat16,float} |
| StatelessWhile | T={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64} |
| StopGradient | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| StridedSlice | Index={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| StridedSliceGrad | Index={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Sub | T={bfloat16,complex64,float,int32,int64} |
| Sum | Tidx={int32,int64} T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| SymbolicGradient | Tout={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} Tin={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| TPUEmbeddingActivations | |
| Tan | T={bfloat16,complex64,float,int32,int64} |
| Tanh | T={bfloat16,complex64,float} |
| TanhGrad | T={bfloat16,complex64,float} |
| TensorArrayCloseV3 | |
| TensorArrayConcatV3 | dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| TensorArrayGatherV3 | dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| TensorArrayGradV3 | |
| TensorArrayReadV3 | dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| TensorArrayScatterV3 | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| TensorArraySizeV3 | |
| TensorArraySplitV3 | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| TensorArrayV3 | dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| TensorArrayWriteV3 | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| TensorListElementShape | shape_type={int32,int64} |
| TensorListPopBack | element_dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| TensorListPushBack | element_dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| TensorListReserve | shape_type={int32,int64} element_dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| Tile | Tmultiples={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| TopKV2 | T={bfloat16,float,int32,uint32} |
| Transpose | Tperm={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| TruncateDiv | T={bfloat16,complex64,float,int32,int64} |
| TruncateMod | T={bfloat16,float,int32,int64} |
| TruncatedNormal | T={int32,int64} dtype={float} |
| Unpack | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| UnsortedSegmentMax | Tnumsegments={int32,int64} Tindices={int32,int64} T={bfloat16,float,int32,int64,uint32,uint64} |
| UnsortedSegmentMin | Tnumsegments={int32,int64} Tindices={int32,int64} T={bfloat16,float,int32,int64,uint32,uint64} |
| UnsortedSegmentProd | Tnumsegments={int32,int64} Tindices={int32,int64} T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| UnsortedSegmentSum | Tnumsegments={int32,int64} Tindices={int32,int64} T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| VarIsInitializedOp | |
| VariableShape | out_type={int32,int64} |
| While | T={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64} |
| Xdivy | T={complex64,float} |
| XlaBroadcastHelper | Tindices={int32,int64} T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| XlaConv | Tindices={int32,int64} T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| XlaDequantize | |
| XlaDot | T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| XlaDynamicSlice | Tindices={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| XlaDynamicUpdateSlice | Tindices={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| XlaHostCompute | Toutputs={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} Tinputs={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| XlaIf | Tout={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64} Tin={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64} Tcond={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64} |
| XlaKeyValueSort | V={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} K={bfloat16,float,int32,int64,uint32,uint64} |
| XlaPad | Tindices={int32,int64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| XlaRecv | dtype={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| XlaRecvFromHost | Toutput={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| XlaReduce | T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| XlaReduceWindow | Tindices={int32,int64} T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| XlaSelectAndScatter | Tindices={int32,int64} T={bfloat16,complex64,float,int32,int64,uint32,uint64} |
| XlaSend | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| XlaSendToHost | Tinput={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| XlaSort | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| XlaWhile | T={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64} |
| Xlogy | T={complex64,float} |
| ZerosLike | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| _Arg | T={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64} |
| _ArrayToList | out_types={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| _ListToArray | T={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} Tin={bfloat16,bool,complex64,float,int32,int64,uint32,uint64} |
| _Retval | T={bfloat16,bool,complex64,float,int32,int64,resource,uint32,uint64} |
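The type-constraint cells above follow a regular `attr={type,...}` grammar, which makes them easy to consume programmatically — for example, to check whether a given dtype is legal for an operator. A small illustrative parser (the helper name is ours, not part of any TensorFlow API):

```python
import re

def parse_constraints(cell: str) -> dict:
    # Turn a constraint cell such as "Tidx={int32,int64} T={bfloat16,float}"
    # into {"Tidx": {"int32", "int64"}, "T": {"bfloat16", "float"}}.
    # An empty cell (an operator with no dtype attrs) yields {}.
    return {attr: set(types.split(","))
            for attr, types in re.findall(r"(\w+)=\{([^}]*)\}", cell)}
```

With this in hand, `"float" in parse_constraints(cell).get("T", set())` answers whether an op's `T` attribute admits float inputs.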