How to call onnx model to detect the presence of defects in images #1031
Replies: 6 comments 5 replies
-
Hello, if you want to use an onnx model, you can use
Regarding the GPU usage, I can't test it right now, but I think you can set
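For context, in anomalib 0.x the Lightning trainer settings live under the `trainer:` section of `config.yaml`. A sketch of the GPU-related keys follows; these are standard PyTorch Lightning trainer arguments, so verify the exact key names against the config file shipped with your anomalib version:

```yaml
# Sketch only: standard PyTorch Lightning trainer arguments as exposed
# in anomalib's config.yaml; confirm the keys in your own config file.
trainer:
  accelerator: gpu   # "auto" / "cpu" / "gpu"
  devices: 1         # number of GPUs to use
```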
-
Hello, thank you very much for all your help last week. This week I ran into a strange problem when training. Please take a look at the following graph; it contains some Chinese text, but that does not affect reading the CPU and GPU usage. As you can see, the CPU uses a lot of memory during training, but GPU utilization stays at 0%, which I suspect means CUDA is not being called correctly. The same thing happens when I change "num_workers" to 1. How can I solve this problem? I am looking forward to your answer, thank you very much!
-
Hello,
-
@laogonggong847, padim is a memory-bank based model, so it is a bit memory intensive. If you check the training logs, you can see whether your GPU is being utilized:
Global seed set to 42
2023-04-24 10:57:57,953 - anomalib.data - INFO - Loading the datamodule
2023-04-24 10:57:57,954 - anomalib.data.utils.transform - INFO - No config file has been provided. Using default transforms.
2023-04-24 10:57:57,954 - anomalib.data.utils.transform - INFO - No config file has been provided. Using default transforms.
2023-04-24 10:57:57,954 - anomalib.models - INFO - Loading the model.
2023-04-24 10:57:57,954 - anomalib.models.components.base.anomaly_module - INFO - Initializing PadimLightning model.
2023-04-24 10:57:58,202 - timm.models.helpers - INFO - Loading pretrained weights from url (https://download.pytorch.org/models/resnet18-5c106cde.pth)
2023-04-24 10:57:58,301 - anomalib.utils.loggers - INFO - Loading the experiment logger(s)
2023-04-24 10:57:58,302 - anomalib.utils.callbacks - INFO - Loading the callbacks
!!![HERE] 2023-04-24 10:57:58,327 - pytorch_lightning.utilities.rank_zero - INFO - GPU available: True (cuda), used: True
2023-04-24 10:57:58,327 - pytorch_lightning.utilities.rank_zero - INFO - TPU available: False, using: 0 TPU cores
2023-04-24 10:57:58,327 - pytorch_lightning.utilities.rank_zero - INFO - IPU available: False, using: 0 IPUs
2023-04-24 10:57:58,327 - pytorch_lightning.utilities.rank_zero - INFO - HPU available: False, using: 0 HPUs
-
Regarding the onnx inference, you could use the openvino inferencer.
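Alternatively, if you want to run the exported onnx file directly with onnxruntime, the preprocessing matters: with the default anomalib transforms, the exported padim model expects a [1, 3, 256, 256] float32 tensor with ImageNet normalization. A minimal sketch follows; the model path is hypothetical, the image is assumed to already be 256x256, and the use of the default transforms at export time is an assumption:

```python
import numpy as np

def preprocess(image_bgr):
    """Convert a 256x256x3 uint8 BGR image into the [1, 3, 256, 256] float32
    tensor the exported model expects (assumption: default anomalib transforms,
    i.e. ImageNet mean/std normalization, were used at export time)."""
    img = image_bgr[..., ::-1].astype(np.float32) / 255.0   # BGR -> RGB, scale to [0, 1]
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    img = (img - mean) / std                                # per-channel normalization
    img = np.transpose(img, (2, 0, 1))[np.newaxis]          # HWC -> 1x3xHxW
    return img.astype(np.float32)

def run_onnx(model_path, image_bgr):
    """Run one image through the exported model with onnxruntime."""
    import onnxruntime as ort
    session = ort.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: preprocess(image_bgr)})[0]
```

Note that this bypasses anomalib's own post-processing (score normalization and thresholding), which the openvino inferencer handles for you.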
-
Hello! I'm using the onnx model exported from anomalib and running inference with onnxruntime. The input image shape is [1, 3, 256, 256], but the output is [1, 1, 256, 256]. Isn't it supposed to be the number of categories? Is there a problem with the .onnx file?
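For what it's worth, the [1, 1, 256, 256] output is not a class-score vector: anomalib models produce a per-pixel anomaly map, one channel at the input's spatial resolution. A minimal numpy sketch of reducing it to an image-level decision and a binary defect mask; the `threshold` value here is a placeholder, since in practice you should use the adaptive threshold anomalib computed during training (stored in the exported metadata):

```python
import numpy as np

def postprocess(anomaly_map, threshold):
    """Reduce a [1, 1, H, W] anomaly map to an image-level score and a mask.

    `threshold` is a placeholder; anomalib stores the threshold it computed
    on the validation set in the exported model metadata.
    """
    heatmap = np.squeeze(anomaly_map)               # -> (H, W) per-pixel anomaly scores
    image_score = float(heatmap.max())              # image-level anomaly score
    defect_mask = (heatmap >= threshold).astype(np.uint8)
    is_defective = image_score >= threshold
    return image_score, defect_mask, is_defective
```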
-
What is the motivation for this task?
Many thanks to all the members of the Anomalib team for open-sourcing such an excellent tool. We have no doubt that Anomalib will push forward the development of anomaly detection, and once again we salute all the authors and contributors who have improved the Anomalib community.
After using padim in Anomalib to train on my own dataset, I ended up with an onnx model, because I set export_mode in the corresponding config.yaml file to "onnx" (export_mode: "onnx"). However, I am confused about how to call this model for inference. I know that Anomalib provides four programs for inference under ./tools/inference, and I am grateful to the developers for being so careful and comprehensive. But I'm new to onnx models, and I'm not sure whether one of these four programs can be used directly to call an onnx model. If so, could you tell me explicitly which one it is? If not, could you tell me what I should do? Thank you very much!
Describe the solution you'd like
I would like to understand the method and steps for calling the onnx model to perform detection, thank you very much. Besides that, I have another question:
why does training a padim model use a lot of CPU and memory resources, while GPU (RTX2080Ti) utilization stays at only about 1%, sometimes even 0%?
Additional context
pytorch: 1.12.1+cu113
cuda: 11.3
(torch.cuda.is_available() == True)
python: 3.8
anomalib: 0.5.0.dev0 (latest)
Yaml: