Reference for built-in NCF algorithm


This page provides detailed reference information about arguments you submit to AI Platform Training when running a training job using the built-in NCF algorithm.

Versioning

The built-in NCF algorithm uses TensorFlow 2.3.

Data format arguments

The following arguments are used for data formatting:

Arguments Details
train_dataset_path Cloud Storage path to a TFRecord file.
Required
Type: String
eval_dataset_path Cloud Storage path to a TFRecord file. Must have the same format as train_dataset_path.

Required
Type: String
job-dir Cloud Storage path where the model, checkpoints, and other training artifacts are stored. The following directory is created here:
  • model: contains the trained model; model training checkpoints are also written here

Required
Type: String
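The data format arguments above are passed as command-line flags when you submit a training job. A minimal sketch of assembling them, where the bucket and file names are placeholders (not values from this reference):

```python
# Hypothetical sketch: building the required data-format flags for a
# training job submission. Bucket and file names are placeholders.
bucket = "gs://my-bucket"  # assumed bucket name

args = [
    f"--train_dataset_path={bucket}/data/train.tfrecord",
    f"--eval_dataset_path={bucket}/data/eval.tfrecord",
    f"--job-dir={bucket}/ncf_output",
]

print(" ".join(args))
```

Both dataset paths must point to TFRecord files with the same format, and job-dir receives the trained model and checkpoints.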

Hyperparameters

Hyperparameter Details
BASIC PARAMETERS
input_meta_data_path Cloud Storage path to an input metadata schema file.

Required
Type: String
train_epochs Number of epochs to run training for.

Required
Type: Int
Default: 10
learning_rate Learning rate used by the Adam optimizer.

Required
Type: Float
Default: 0.001
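As a quick illustration, the basic hyperparameter defaults above could be collected and rendered as flags like this (a sketch, not part of the algorithm's interface; input_meta_data_path is omitted because it has no default and must always be supplied):

```python
# Sketch: the basic hyperparameter defaults listed above, rendered as flags.
basic_defaults = {
    "train_epochs": 10,      # default number of training epochs
    "learning_rate": 0.001,  # default Adam learning rate
}

flags = [f"--{name}={value}" for name, value in basic_defaults.items()]
```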
ADVANCED PARAMETERS
batch_size Batch size for training.

Type: Int
Default: 256
eval_batch_size Batch size for evaluation.

Type: Int
Default: 256
num_factors Embedding size of the MF model.

Required
Type: Int
Default: 8
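num_factors sets the dimensionality of the user and item embeddings in the matrix-factorization branch of the model. A minimal pure-Python sketch of how an MF score comes from two such embeddings (placeholder values, not the actual implementation):

```python
# Minimal sketch: scoring one user-item pair in a matrix-factorization model.
# With num_factors=8 (the default), each user and item is represented by an
# 8-dimensional embedding; a simple MF score is their dot product.
num_factors = 8
user_embedding = [0.1] * num_factors  # placeholder values
item_embedding = [0.2] * num_factors  # placeholder values

score = sum(u * v for u, v in zip(user_embedding, item_embedding))
```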
layers Sizes of the hidden layers for MLP. Format as comma-separated integers.

Type: String
Default: 64,32,16,6
mf_regularization Regularization factor for MF embeddings.

Type: Float
Default: 0.
mlp_regularization The regularization factor for each MLP layer. Format as comma-separated floats. Must have same number of entries as layers parameter.

Type: String
Default: 0.,0.,0.,0.
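Because mlp_regularization must have exactly as many entries as layers, it can help to validate the two comma-separated strings together. A sketch using the defaults shown above:

```python
# Sketch: parsing the comma-separated `layers` and `mlp_regularization`
# strings and checking they have the same number of entries, as required.
layers_flag = "64,32,16,6"    # default shown above
mlp_reg_flag = "0.,0.,0.,0."  # default shown above

layer_sizes = [int(x) for x in layers_flag.split(",")]
mlp_regs = [float(x) for x in mlp_reg_flag.split(",")]

assert len(layer_sizes) == len(mlp_regs), (
    "mlp_regularization must have as many entries as layers"
)
```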
num_neg Number of negative instances to pair with a positive instance.

Type: Int
Default: 4
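To make num_neg concrete: each observed (user, item) interaction becomes one positive training example, paired with num_neg sampled items the user has not interacted with. A simplified sketch (item IDs and interactions are placeholders, and the real algorithm's sampling strategy is controlled by constructor_type):

```python
import random

# Sketch: pairing each positive (user, item) interaction with num_neg
# randomly sampled negative items.
random.seed(0)
num_neg = 4  # default shown above
all_items = set(range(100))
positives = [(0, 17), (0, 42)]  # placeholder (user_id, item_id) interactions

examples = []
for user, item in positives:
    examples.append((user, item, 1))  # positive label
    seen = {i for u, i in positives if u == user}
    candidates = list(all_items - seen)
    for neg_item in random.sample(candidates, num_neg):
        examples.append((user, neg_item, 0))  # negative label
```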
beta1 Beta 1 hyperparameter for the Adam optimizer.

Type: Float
Default: 0.9
beta2 Beta 2 hyperparameter for the Adam optimizer.

Type: Float
Default: 0.999
epsilon Epsilon hyperparameter for the Adam optimizer.

Type: Float
Default: 0.000000001
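The beta1, beta2, and epsilon values feed the standard Adam update rule. A sketch of a single Adam step using the defaults listed above (placeholder parameter and gradient values; the real training loop is inside the algorithm):

```python
import math

# Sketch: one Adam update step with the defaults listed above
# (learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-9).
lr, beta1, beta2, eps = 0.001, 0.9, 0.999, 1e-9

w, m, v, t = 1.0, 0.0, 0.0, 1  # parameter, moment estimates, step count
grad = 1.0                     # placeholder gradient

m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
m_hat = m / (1 - beta1 ** t)                # bias correction
v_hat = v / (1 - beta2 ** t)
w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
```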
hr_threshold Value of the HR (hit ratio) evaluation metric at which training should stop.

Type: Float
Default: None
constructor_type Strategy used to generate false negatives.

Type: Enumeration
Options: bisection, materialized
ml_perf Change model behavior to match MLPerf reference implementations.

Type: Boolean
Default: False
output_ml_perf_compliance_logging Output relevant logging for MLPerf compliance (only available if ml_perf is set to True).

Type: Boolean
Default: False
keras_use_ctl Use a custom Keras training loop for model training.

Type: Boolean
Default: False