### 🚀 Feature
How can I find the recall of class 0 and class 1 for this code? Sorry, the documentation is not clear to me. I can set `micro`, but how do I identify the overall precision and recall per class?
### Motivation
```python
import lightning as L
from torchvision import models
from torchmetrics.detection import MeanAveragePrecision


class CocoDNN(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.model = models.detection.fasterrcnn_mobilenet_v3_large_fpn(weights="DEFAULT")
        self.metric = MeanAveragePrecision(
            iou_type="bbox",
            average="macro",
            class_metrics=True,
            iou_thresholds=[0.5, 0.75],
            extended_summary=True,
        )

    def training_step(self, batch, batch_idx):
        #### Some code here
        ...

    def validation_step(self, batch, batch_idx):
        imgs, annot = batch
        targets, preds = [], []
        for img_b, annot_b in zip(imgs, annot):
            if len(img_b) == 0:
                continue
            if len(annot_b) > 1:
                targets.extend(annot_b)
            else:
                targets.append(annot_b[0])
            # despite the name, in eval mode the detection model returns
            # prediction dicts here, not a loss dict
            loss_dict = self.model(img_b, annot_b)
            if len(loss_dict) > 1:
                preds.extend(loss_dict)
            else:
                preds.append(loss_dict[0])
        self.metric.update(preds, targets)
        map_results = self.metric.compute()
        print("RECALL")
        print(map_results["recall"])
        self.log("map_50", map_results["map_50"].float().item(), on_step=True, on_epoch=True, prog_bar=True, logger=True)
        self.log("map_75", map_results["map_75"].float().item(), on_step=True, on_epoch=True, prog_bar=True, logger=True)
        return map_results["map_75"]
```
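For reference, the more common torchmetrics pattern is to call `update()` every validation step and `compute()` once per epoch, so the logged values aggregate over the whole validation set instead of being recomputed at every step. A minimal sketch of that pattern, reusing the `CocoDNN` module above (the subclass name is purely illustrative, and it assumes Lightning 2.x hook names and a dataloader that yields lists of image tensors and torchvision-style target dicts):

```python
class CocoDNNEpochMetrics(CocoDNN):  # hypothetical name; reuses __init__ / metric from CocoDNN above
    def validation_step(self, batch, batch_idx):
        imgs, targets = batch                      # assumption: lists of image tensors and target dicts
        preds = self.model(list(imgs))             # eval mode: one prediction dict per image
        self.metric.update(preds, list(targets))   # accumulate state only; no compute() per step

    def on_validation_epoch_end(self):
        map_results = self.metric.compute()        # aggregate over the whole validation epoch
        self.log("map_50", map_results["map_50"].item(), prog_bar=True)
        self.log("map_75", map_results["map_75"].item(), prog_bar=True)
        self.metric.reset()                        # clear accumulated state for the next epoch
```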
It's the overall score and it's negative; how come it's negative? Also, since I'm fine-tuning the model for a binary (two-class) problem, I think the overall mean is not really suitable here and I should instead take the mean over the 2 classes.
@shanalikhan is it the mean average recall you are looking for, e.g. the MAR value per class?
I assume so, because that is one of the more commonly used metrics within detection tasks. If this is the case, then you just need to set `class_metrics=True` and then look at `map_results["mar_100_per_class"]`, which is the mean average recall at 100 detections per image (the maximum number of detections per class with the default settings), reported per class. Assuming that your classes are simply 0 and 1, then:
```python
map_results["mar_100_per_class"][0]  # MAR value for class 0
map_results["mar_100_per_class"][1]  # MAR value for class 1
```
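For reference, a self-contained sketch of reading those per-class values; the boxes, scores, and labels below are dummy values purely for illustration, and the `classes` key in the output assumes a reasonably recent torchmetrics version:

```python
import torch
from torchmetrics.detection import MeanAveragePrecision

metric = MeanAveragePrecision(iou_type="bbox", class_metrics=True)

# Dummy predictions/targets for two images with classes 0 and 1 (illustration only)
preds = [
    {"boxes": torch.tensor([[10.0, 10.0, 50.0, 50.0]]), "scores": torch.tensor([0.9]), "labels": torch.tensor([0])},
    {"boxes": torch.tensor([[20.0, 20.0, 60.0, 60.0]]), "scores": torch.tensor([0.8]), "labels": torch.tensor([1])},
]
targets = [
    {"boxes": torch.tensor([[12.0, 12.0, 48.0, 48.0]]), "labels": torch.tensor([0])},
    {"boxes": torch.tensor([[22.0, 22.0, 58.0, 58.0]]), "labels": torch.tensor([1])},
]

metric.update(preds, targets)
results = metric.compute()

print(results["classes"])            # class ids in the order used by the *_per_class tensors
print(results["map_per_class"])      # per-class mean average precision
print(results["mar_100_per_class"])  # per-class mean average recall @ 100 detections
```

Indexing `results["mar_100_per_class"][0]` and `[1]` then gives the per-class MAR values described above, provided both class ids actually occur in the targets.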
@SkafteNicki
Thanks for sharing the details. One quick question:
Why are the map_* values sometimes negative? Is it really possible to have a negative mAP? For example: