-
First of all, great work! I found this project very interesting! I have a question about the PatchCore implementation. When both a training and a testing dataset (images and ground truth) are available, PatchCore works well, and I think that during the validation step it uses the testing data (GT and images) when calculating the coreset subsampling. But imagine a real-world scenario where only good images are available to train the anomaly detection algorithm, and no defective images (GT and images) exist; how should the model be trained then? I've tried setting limit_val_batches and limit_test_batches to zero so that no validation or testing is performed during training. But with this setup, at inference time the model produces this error
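For reference, this is roughly the change I made in config.yaml; the exact nesting of the trainer section is from the anomalib version I used and may differ in yours:

```yaml
# Sketch of the config.yaml change described above; the key nesting
# under `trainer:` is an assumption and may vary between versions.
trainer:
  limit_val_batches: 0   # skip all validation batches during training
  limit_test_batches: 0  # skip all test batches after training
```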
Any guidance will be highly appreciated!
-
Hi @luliuzee, yes, this is a known issue. In fact, almost none of the existing anomaly detection algorithms address it. We are currently working on a solution that wouldn't require any validation or test images to find the threshold in an unsupervised manner. We have some promising results, and we'll make the solution available here once we publish them. Meanwhile, what you could do is choose a manual threshold and set it in the config.yaml file. By default, the config.yaml file uses an adaptive threshold that relies on the validation/test sets.
-
@djdameln would you be able to comment here?
-
Setting the adaptive threshold parameter to `false` should be sufficient to prevent anomalib from computing the optimal threshold value based on the validation set. It will then use the entered values for the `image_default` and `pixel_default` parameters instead. The values of these parameters should be chosen by trial and error, and may vary between models and datasets.

Please note, however, that even when adaptive thresholding is disabled, Anomalib still expects some abnormal images to be provided. This is because, by design, anomalib evaluates each model after training in order to present some performance metrics to the user. This design is based on the assumption that for any use case there will be at least a few examples of the anomalous class available. I feel it would be good to remove this constraint and allow training a model without performing evaluation afterwards. This would at least require some changes to the dataset classes, but possibly other parts of the pipeline will be affected as well.
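As a sketch, the manual threshold setup could look like this in config.yaml; the exact key nesting under `metrics` is an assumption based on the parameter names above and may differ between anomalib versions:

```yaml
# Hypothetical sketch: disable adaptive thresholding and supply
# manual defaults. Exact nesting is an assumption; check your version.
metrics:
  threshold:
    adaptive: false      # do not compute the threshold from the validation set
    image_default: 3.0   # manual image-level threshold, tune by trial and error
    pixel_default: 3.0   # manual pixel-level threshold, tune by trial and error
```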