declearn.metrics.MeanMetric

Bases: Metric[MeanState]

Generic mean-aggregation metric template.

This abstract class implements a template for Metric classes that rely on computing a sample-wise score and its average across iterative inputs.

Abstract

To implement an actual Metric of this kind, inherit MeanMetric and define:

  • name: str class attribute: Name identifier of the Metric (should be unique across existing Metric classes). Used for automated type-registration and name-based retrieval. Also used to label output results.
  • metric_func(y_true: np.ndarray, y_pred: np.ndarray) -> np.ndarray: Method that computes a score from the predictions and labels associated with a given batch, that is to be aggregated into an average metric across all input batches.
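As an illustration, here is a minimal standalone sketch of that template: a hypothetical mean-absolute-error metric written with plain numpy, reproducing the sample-wise `metric_func` plus running `(num_sum, divisor)` states pattern without depending on declearn itself.

```python
import numpy as np

class MeanAbsoluteError:
    """Hypothetical MAE metric following the MeanMetric template (sketch)."""

    name = "mae"

    def __init__(self) -> None:
        self.num_sum = 0.0  # running sum of (optionally weighted) scores
        self.divisor = 0.0  # running sample count (or total sample weight)

    def metric_func(self, y_true: np.ndarray, y_pred: np.ndarray) -> np.ndarray:
        # Sample-wise score: one value per sample in the batch.
        return np.abs(y_true - y_pred)

    def update(self, y_true, y_pred, s_wght=None) -> None:
        scores = self.metric_func(y_true, y_pred)
        if s_wght is None:
            self.num_sum += float(scores.sum())
            self.divisor += len(y_pred)
        else:
            self.num_sum += float((s_wght * scores).sum())
            self.divisor += float(np.sum(s_wght))

    def get_result(self):
        if self.divisor == 0:
            return {self.name: 0.0}
        return {self.name: self.num_sum / self.divisor}

metric = MeanAbsoluteError()
metric.update(np.array([1.0, 2.0]), np.array([1.5, 2.5]))
metric.update(np.array([3.0]), np.array([4.0]))
print(metric.get_result())  # {'mae': 0.6666666666666666}
```

In the real class, the two running states live in a `MeanState` container and `update` additionally validates the sample weights; the aggregation logic is the same.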
Source code in declearn/metrics/_mean.py
class MeanMetric(Metric[MeanState], register=False, metaclass=abc.ABCMeta):
    """Generic mean-aggregation metric template.

    This abstract class implements a template for Metric classes
    that rely on computing a sample-wise score and its average
    across iterative inputs.

    Abstract
    --------
    To implement such an actual Metric, inherit `MeanMetric` and define:

    - name: str class attribute:
        Name identifier of the Metric (should be unique across existing
        Metric classes). Used for automated type-registration and name-
        based retrieval. Also used to label output results.
    - metric_func(y_true: np.ndarray, y_pred: np.ndarray) -> np.ndarray:
        Method that computes a score from the predictions and labels
        associated with a given batch, that is to be aggregated into
        an average metric across all input batches.
    """

    state_cls = MeanState

    def build_initial_states(
        self,
    ) -> MeanState:
        return MeanState()

    @abc.abstractmethod
    def metric_func(
        self,
        y_true: np.ndarray,
        y_pred: np.ndarray,
    ) -> np.ndarray:
        """Compute the sample-wise metric for a single batch.

        This method is called by the `update` one, which adds optional
        sample-weighting and updates the metric's states to eventually
        compute the average metric over a sequence of input batches.

        Parameters
        ----------
        y_true: numpy.ndarray
            True labels or values that were to be predicted.
        y_pred: numpy.ndarray
            Predictions (scores or values) that are to be evaluated.

        Returns
        -------
        scores: numpy.ndarray
            Sample-wise metric value.
        """

    def get_result(
        self,
    ) -> Dict[str, Union[float, np.ndarray]]:
        if self._states.divisor == 0:
            return {self.name: 0.0}
        result = self._states.num_sum / self._states.divisor
        return {self.name: result}

    def update(
        self,
        y_true: np.ndarray,
        y_pred: np.ndarray,
        s_wght: Optional[np.ndarray] = None,
    ) -> None:
        scores = self.metric_func(y_true, y_pred)
        if s_wght is None:
            self._states.num_sum += float(scores.sum())
            self._states.divisor += len(y_pred)
        else:
            s_wght = self._prepare_sample_weights(s_wght, len(y_pred))
            self._states.num_sum += float((s_wght * scores).sum())
            self._states.divisor += float(np.sum(s_wght))
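To illustrate the weighted branch of `update` above, the following numpy snippet (standalone, not calling declearn) replays the state arithmetic for one batch and checks that the accumulated states yield a weighted average of the sample-wise scores:

```python
import numpy as np

# Replay the state updates performed by MeanMetric.update for one
# weighted batch of sample-wise scores.
scores = np.array([1.0, 3.0, 5.0])  # sample-wise metric values
s_wght = np.array([0.5, 1.0, 2.5])  # per-sample weights

num_sum = float((s_wght * scores).sum())  # 0.5*1 + 1*3 + 2.5*5 = 16.0
divisor = float(np.sum(s_wght))           # 0.5 + 1.0 + 2.5 = 4.0
result = num_sum / divisor                # weighted mean = 4.0

# Matches numpy's own weighted average.
assert np.isclose(result, np.average(scores, weights=s_wght))
print(result)  # 4.0
```

With `s_wght=None`, the divisor is instead the plain sample count, so `get_result` reduces to the unweighted mean of all scores seen so far.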

metric_func(y_true, y_pred) abstractmethod

Compute the sample-wise metric for a single batch.

This method is called by the update one, which adds optional sample-weighting and updates the metric's states to eventually compute the average metric over a sequence of input batches.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `y_true` | `np.ndarray` | True labels or values that were to be predicted. | *required* |
| `y_pred` | `np.ndarray` | Predictions (scores or values) that are to be evaluated. | *required* |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `scores` | `numpy.ndarray` | Sample-wise metric value. |

Source code in declearn/metrics/_mean.py
@abc.abstractmethod
def metric_func(
    self,
    y_true: np.ndarray,
    y_pred: np.ndarray,
) -> np.ndarray:
    """Compute the sample-wise metric for a single batch.

    This method is called by the `update` one, which adds optional
    sample-weighting and updates the metric's states to eventually
    compute the average metric over a sequence of input batches.

    Parameters
    ----------
    y_true: numpy.ndarray
        True labels or values that were to be predicted.
    y_pred: numpy.ndarray
        Predictions (scores or values) that are to be evaluated.

    Returns
    -------
    scores: numpy.ndarray
        Sample-wise metric value.
    """