
Export to OpenVINO #741

Answered by djdameln
glucasol asked this question in Q&A
Nov 18, 2022 · 1 comment · 2 replies

Models are converted to OpenVINO IR using the Model Optimizer, which applies several default optimizations out of the box. These optimizations are meant to speed up computations on all Intel hardware.

In addition to these default optimizations, the Model Optimizer also supports several optional optimizations for more advanced use cases or specific hardware configurations, such as FP16 compression. Please note that Anomalib's export functionality currently only supports the default optimizations (support for the full range of Model Optimizer options is a work in progress).
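For reference, here is a minimal sketch of the typical flow (not Anomalib's exact export code; the stand-in model, input resolution, and file names are illustrative assumptions): export the trained Torch model to ONNX, then run the Model Optimizer on the resulting file.

```python
import torch

# Stand-in for a trained Anomalib model; any torch.nn.Module works here.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, kernel_size=3), torch.nn.ReLU())
model.eval()

# Assumed input resolution of 1x3x256x256.
dummy_input = torch.zeros(1, 3, 256, 256)
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=11)

# The Model Optimizer applies its default optimizations automatically:
#   mo --input_model model.onnx --output_dir openvino_ir
# Optional FP16 compression (not yet exposed through Anomalib's export);
# the exact flag depends on the OpenVINO version, e.g.:
#   mo --input_model model.onnx --output_dir openvino_ir --data_type FP16
```

The resulting IR (a model.xml / model.bin pair) can then be loaded with the OpenVINO runtime on the target device.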

Whether the OpenVINO model runs faster than the Torch model depends to a large extent on the device on which the models are deployed. When runnin…

Answer selected by glucasol

This discussion was converted from issue #720 on November 29, 2022 09:34.