-
I'm computing DICE metrics over a set of tensors. These tensors have shape

My implementations:

```python
import torch
import monai.metrics as m


class CustomDice(torch.nn.Module):
    def __init__(self, include_background=False, squeeze_batch=False):
        super().__init__()
        self.include_background = include_background
        self.squeeze_batch = squeeze_batch

    def forward(self, y_pred, y_true):
        if self.squeeze_batch:
            y_pred = y_pred.squeeze(0)
            y_true = y_true.squeeze(0)
        if not self.include_background:
            y_pred = y_pred[1:]
            y_true = y_true[1:]
        # squared-denominator Dice, reduced over the three spatial dims
        self.value = 2 * torch.sum(y_true * y_pred, dim=[-1, -2, -3]) / torch.sum(
            y_true**2 + y_pred**2, dim=[-1, -2, -3]
        )
        return self.value

    def aggregate(self):
        return self.value


class ComputeDice(torch.nn.Module):
    def __init__(self, include_background=False, squeeze_batch=False):
        super().__init__()
        # note: squeeze_batch is accepted but never stored or used here
        self.include_background = include_background

    def forward(self, y_pred, y_true):
        self.value = m.compute_meandice(y_pred, y_true, include_background=self.include_background)
        return self.value

    def aggregate(self):
        return self.value
```

Instantiations:

```python
metrics = {
    "COMPUTE_DICE": ComputeDice(include_background=False),
    "CUSTOM_DICE": CustomDice(include_background=False, squeeze_batch=True),
    "DICE": m.DiceMetric(include_background=False, reduction="mean"),
}
```

I'm computing the metrics as follows:

```python
# metric_ is a value of the metrics dict
metric_(y_pred, y_true)
value = metric_.aggregate()
```

Results:
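A hedged aside on one possible source of the discrepancy: `CustomDice` uses a squared denominator, `sum(y_true**2 + y_pred**2)`, while MONAI's `compute_meandice` uses the plain sums `sum(y_true) + sum(y_pred)`. For one-hot 0/1 masks the two agree (since `x**2 == x` there), but for soft predictions they diverge. A self-contained sketch with made-up tensors:

```python
import torch

y_true = torch.tensor([1.0, 0.0, 0.0, 1.0])  # one-hot ground truth
y_soft = torch.tensor([0.9, 0.1, 0.2, 0.8])  # soft prediction
y_hard = (y_soft > 0.5).float()              # thresholded prediction


def dice_squared(p, t):
    # CustomDice-style denominator: sum of squares
    return 2 * (p * t).sum() / (p**2 + t**2).sum()


def dice_plain(p, t):
    # compute_meandice-style denominator: plain sums
    return 2 * (p * t).sum() / (p.sum() + t.sum())


print(dice_squared(y_hard, y_true), dice_plain(y_hard, y_true))  # equal for 0/1 masks
print(dice_squared(y_soft, y_true), dice_plain(y_soft, y_true))  # differ for soft inputs
```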
-
Are you using `metric_.reset()` for the last one?
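For reference, a minimal sketch of the intended stateful usage, reusing the names from the snippet above: `DiceMetric` accumulates every call into an internal buffer, so without a `reset()` between evaluation runs, `aggregate()` keeps averaging stale results into the new ones.

```python
dice = m.DiceMetric(include_background=False, reduction="mean")

for y_pred, y_true in val_batches:  # val_batches is a stand-in for your loader
    dice(y_pred, y_true)            # each call appends per-batch scores to the buffer

value = dice.aggregate()            # reduces everything accumulated so far
dice.reset()                        # clear the buffer before the next evaluation pass
```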
-
@Nic-Ma I might have a similar problem that I couldn't figure out. Would you please help? I am not sure why I am getting different results from `DiceMetric` and `compute_meandice`. See the code below. I break the loop after the first example, so the `get_buffer` method of `DiceMetric` returns [0.8170, 0.4066, 0.4384] for the three classes. `compute_meandice` gives me 208 numbers plus NaNs. The shapes of `labels` and `outputs` are [3, 208, 230, 172]. `ignore_empty` is set to True in both cases, so the NaNs are produced where the ground truth is empty. When I average these 208 numbers (`torch.nanmean`), the result, [0.7121, 0.2461, 0.2598], differs from the three numbers above. So my question is: how exactly is this array reduced to those three numbers? Code:
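A hedged guess, assuming `outputs` and `labels` are passed to `compute_meandice` exactly as the `[3, 208, 230, 172]` tensors described above: `compute_meandice` expects batch-first input of shape `[B, C, H, W, D]`, so without a batch axis it would read the 3 classes as a batch of 3 and the 208 slices as 208 channels, returning per-slice 2D Dice scores. The `nanmean` of those per-slice scores is not the same quantity as the volumetric per-class Dice sitting in the `DiceMetric` buffer. A sketch of the fix under that assumption:

```python
# Assumes outputs/labels are one-hot [C, H, W, D] = [3, 208, 230, 172];
# add a batch axis so compute_meandice sees [B=1, C=3, H, W, D].
scores = m.compute_meandice(
    outputs.unsqueeze(0),
    labels.unsqueeze(0),
    include_background=True,
    ignore_empty=True,
)
print(scores.shape)  # -> torch.Size([1, 3]): one volumetric Dice per class
```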