Class RankingMetrics (2.2.0)

RankingMetrics(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Evaluation metrics for weighted-ALS models, which are specified by feedback_type=implicit.

Attributes

Name  Description
mean_average_precision `.wrappers.DoubleValue`
For each user, a precision is calculated over all the items by ranking them; these per-user precisions are then averaged across all users.
mean_squared_error `.wrappers.DoubleValue`
Similar to the mean squared error computed for regression and explicit recommendation models, except that instead of comparing against the rating directly, the output of evaluate is compared against a preference, which is 1 or 0 depending on whether the rating exists.
normalized_discounted_cumulative_gain `.wrappers.DoubleValue`
Measures the goodness of a ranking calculated from the predicted confidence by comparing it to the ideal ranking implied by the original ratings.
average_rank `.wrappers.DoubleValue`
Measures the goodness of a ranking by computing the percentile rank from the predicted confidence and dividing it by the original rank.
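To make the first and third metrics concrete, here is a minimal plain-Python sketch of average precision (averaged across users to get mean average precision) and NDCG. This is purely illustrative of what the metrics measure; it is not the BigQuery or proto-plus implementation, and the function names are hypothetical.

```python
import math

def average_precision(ranked_items, relevant):
    """Average precision for one user's ranked item list: at each rank
    where a relevant item appears, take precision-so-far, then average
    over the number of relevant items."""
    hits = 0
    precision_sum = 0.0
    for rank, item in enumerate(ranked_items, start=1):
        if item in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

def mean_average_precision(per_user):
    """Average of per-user average precisions, given a list of
    (ranked_items, relevant_set) pairs."""
    return sum(average_precision(r, rel) for r, rel in per_user) / len(per_user)

def ndcg(predicted_order, true_gains):
    """Normalized discounted cumulative gain: the DCG of the predicted
    ranking divided by the DCG of the ideal (descending-gain) ranking."""
    def dcg(gains):
        return sum(g / math.log2(i + 1) for i, g in enumerate(gains, start=1))
    dcg_predicted = dcg([true_gains[item] for item in predicted_order])
    dcg_ideal = dcg(sorted(true_gains.values(), reverse=True))
    return dcg_predicted / dcg_ideal if dcg_ideal else 0.0
```

An NDCG of 1.0 means the ranking induced by the predicted confidences exactly matches the ideal ranking implied by the original ratings; lower values indicate relevant (high-gain) items being pushed down the list.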

Inheritance

builtins.object > proto.message.Message > RankingMetrics

Methods

__delattr__

__delattr__(key)

Delete the value on the given field.

This is generally equivalent to setting a falsy value.

__eq__

__eq__(other)

Return True if the messages are equal, False otherwise.

__ne__

__ne__(other)

Return True if the messages are unequal, False otherwise.

__setattr__

__setattr__(key, value)

Set the value on the given field.

For well-known protocol buffer types which are marshalled, either the protocol buffer object or the Python equivalent is accepted.
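The marshalling behavior can be pictured with a toy descriptor. This is a hypothetical stand-in, not proto-plus's actual machinery: it shows only the idea that a field of a wrapper type such as `DoubleValue` accepts either the wrapper object or the plain Python equivalent.

```python
class DoubleValueField:
    """Toy field descriptor (illustrative only): accepts either a
    wrapper-like object carrying a `.value` or a plain Python number,
    and stores the unwrapped float either way."""

    def __set_name__(self, owner, name):
        self._name = "_" + name

    def __set__(self, obj, value):
        # Unwrap a wrapper object, or take the Python value directly.
        if hasattr(value, "value"):
            value = value.value
        setattr(obj, self._name, float(value))

    def __get__(self, obj, objtype=None):
        return getattr(obj, self._name, None)

class Wrapper:
    """Stand-in for a well-known wrapper message like DoubleValue."""
    def __init__(self, value):
        self.value = value

class Metrics:
    mean_average_precision = DoubleValueField()

m = Metrics()
m.mean_average_precision = 0.8           # plain Python float accepted
m.mean_average_precision = Wrapper(0.9)  # wrapper object accepted too
```

Either assignment leaves the field holding the same kind of value, which is the convenience the docstring above describes.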