
Error while running benchmarking.ipynb #10

Open
jiminbot20 opened this issue Sep 11, 2024 · 0 comments

jiminbot20 commented Sep 11, 2024

from plapt import Plapt
import pandas as pd
from scipy.stats import spearmanr, pearsonr
import numpy as np

plapt = Plapt()

Some weights of the model checkpoint at Rostlab/prot_bert were not used when initializing BertModel: ['cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.bias', 'cls.predictions.bias', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.bias']

  • This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
  • This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).

Some weights of the model checkpoint at seyonec/ChemBERTa-zinc-base-v1 were not used when initializing RobertaModel: ['lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight', 'lm_head.decoder.bias', 'lm_head.dense.bias', 'lm_head.decoder.weight', 'lm_head.bias']
  • This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
  • This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).

Fail                                      Traceback (most recent call last)
/tmp/ipykernel_247/173641455.py in <module>
      4 import numpy as np
      5
----> 6 plapt = Plapt()

/home/irc/WELP-PLAPT/plapt.py in __init__(self, prediction_module_path, device)
     38         self.mol_encoder = RobertaModel.from_pretrained("seyonec/ChemBERTa-zinc-base-v1").to(self.device)
     39
---> 40         self.prediction_module = PredictionModule(prediction_module_path)
     41         self.cache = {}
     42

/home/irc/WELP-PLAPT/plapt.py in __init__(self, model_path)
      8 class PredictionModule:
      9     def __init__(self, model_path: str = "models/affinity_predictor.onnx"):
---> 10         self.session = onnxruntime.InferenceSession(model_path)
     11         self.input_name = self.session.get_inputs()[0].name
     12         self.mean = 6.51286529169358

/opt/conda/envs/vits/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py in __init__(self, path_or_bytes, sess_options, providers, provider_options, **kwargs)
    281
    282         try:
--> 283             self._create_inference_session(providers, provider_options, disabled_optimizers)
    284         except ValueError:
    285             if self._enable_fallback:

/opt/conda/envs/vits/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py in _create_inference_session(self, providers, provider_options, disabled_optimizers)
    308         session_options = self._sess_options if self._sess_options else C.get_default_session_options()
    309         if self._model_path:
--> 310             sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
    311         else:
    312             sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from models/affinity_predictor.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model_load_utils.h:47 void onnxruntime::model_load_utils::ValidateOpsetForDomain(const std::unordered_map<std::basic_string, int>&, const onnxruntime::logging::Logger&, bool, const string&, int) ONNX Runtime only guarantees support for models stamped with official released onnx opset versions. Opset 15 is under development and support for this is limited. The operator schemas and or other functionality may change before next ONNX release and in this case ONNX Runtime will not guarantee backward compatibility. Current official support for domain ai.onnx is till opset 14.