RankingMetrics(mapping=None, *, ignore_unknown_fields=False, **kwargs)
Evaluation metrics used by weighted-ALS models specified by feedback_type=implicit.
mean_average_precision: Calculates a precision per user for all the items by ranking them, then averages the precisions across all users.
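The computation described above can be sketched in plain Python (this is an illustrative reading of the metric, not the service's internal implementation):

```python
def average_precision(ranked_items, relevant):
    """Average precision for one user: precision is accumulated at each
    rank position where a relevant item appears, then normalized by the
    number of relevant items."""
    hits = 0
    precisions = []
    for position, item in enumerate(ranked_items, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / position)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(rankings_per_user, relevant_per_user):
    """Average the per-user precisions across all users."""
    aps = [average_precision(ranked, relevant)
           for ranked, relevant in zip(rankings_per_user, relevant_per_user)]
    return sum(aps) / len(aps)
```

For example, a user whose ranked list is `["a", "b", "c"]` with relevant items `{"a", "c"}` scores hits at positions 1 and 3, giving an average precision of (1/1 + 2/3) / 2.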
mean_squared_error: Similar to the mean squared error computed for regression and explicit recommendation models, except that instead of comparing against the rating directly, the output of evaluate is compared against a preference, which is 1 or 0 depending on whether the rating exists.
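A minimal sketch of this implicit-feedback variant of mean squared error, assuming predicted scores keyed by (user, item) pairs and a set of pairs for which a rating exists (illustrative only):

```python
def implicit_mse(predicted, observed_pairs):
    """Mean squared error against a binary preference.

    predicted: dict mapping (user, item) -> predicted score.
    observed_pairs: set of (user, item) pairs that have a rating;
    the preference target is 1.0 for these pairs and 0.0 otherwise.
    """
    squared_errors = [
        (score - (1.0 if pair in observed_pairs else 0.0)) ** 2
        for pair, score in predicted.items()
    ]
    return sum(squared_errors) / len(squared_errors)
```

For instance, predictions of 0.9 for a rated pair and 0.2 for an unrated pair give errors of 0.1 and 0.2, so the MSE is (0.01 + 0.04) / 2 = 0.025.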
normalized_discounted_cumulative_gain: A metric that determines the goodness of a ranking calculated from the predicted confidence by comparing it to an ideal ranking measured by the original ratings.
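One common form of this metric can be sketched as follows: the discounted cumulative gain of the ordering induced by the predicted confidence, divided by the ideal DCG obtained by ordering on the original ratings themselves (a standard NDCG sketch, not necessarily the service's exact formula):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: each relevance is discounted by the
    log of its rank position (positions are 1-based, hence i + 2)."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(predicted_confidence, true_rating):
    """Both arguments are dicts mapping item -> score for one user."""
    predicted_order = sorted(true_rating, key=lambda it: -predicted_confidence[it])
    actual = dcg([true_rating[it] for it in predicted_order])
    ideal = dcg(sorted(true_rating.values(), reverse=True))
    return actual / ideal
```

A ranking that agrees with the original ratings scores exactly 1.0; mis-ordered rankings score below 1.0.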
average_rank: Determines the goodness of a ranking by computing the percentile rank from the predicted confidence and dividing it by the original rank.
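The description leaves the exact normalization open; the sketch below is one hypothetical reading, in which each item's percentile rank under the predicted confidence is divided by its rank under the original ratings and the ratios are averaged. The function name and formula are assumptions for illustration, not the service's documented computation:

```python
def average_rank(predicted_confidence, true_rating):
    """Hypothetical average-rank computation for one user.

    Percentile rank under the predicted confidence (rank / n, so the
    top item gets 1/n) divided by the 1-based rank under the original
    ratings, averaged over all rated items.
    """
    by_predicted = sorted(true_rating, key=lambda it: -predicted_confidence[it])
    by_original = sorted(true_rating, key=lambda it: -true_rating[it])
    n = len(by_predicted)
    percentile = {item: (i + 1) / n for i, item in enumerate(by_predicted)}
    original_rank = {item: i + 1 for i, item in enumerate(by_original)}
    return sum(percentile[it] / original_rank[it] for it in true_rating) / n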
Inheritance: builtins.object > proto.message.Message > RankingMetrics
__delattr__(key): Delete the value on the given field. This is generally equivalent to setting a falsy value.
__eq__(other): Return True if the messages are equal, False otherwise.
__ne__(other): Return True if the messages are unequal, False otherwise.
__setattr__(key, value): Set the value on the given field. For well-known protocol buffer types which are marshalled, either the protocol buffer object or the Python equivalent is accepted.