```python
from plapt import Plapt
import pandas as pd
from scipy.stats import spearmanr, pearsonr
import numpy as np

plapt = Plapt()
```
```
Some weights of the model checkpoint at Rostlab/prot_bert were not used when initializing BertModel: ['cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.bias', 'cls.predictions.bias', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.bias']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of the model checkpoint at seyonec/ChemBERTa-zinc-base-v1 were not used when initializing RobertaModel: ['lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight', 'lm_head.decoder.bias', 'lm_head.dense.bias', 'lm_head.decoder.weight', 'lm_head.bias']
- This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
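(Side note: the prot_bert / ChemBERTa messages above are just the usual warnings about unused LM-head weights when loading a base `BertModel`/`RobertaModel` from a pretraining checkpoint; they are unrelated to the failure below. If they clutter the output, they can be silenced via transformers' logging utilities, e.g.:)

```python
from transformers import logging as hf_logging

# Show only errors from transformers; hides the "Some weights ... were not
# used" notices emitted when loading a base model from an LM checkpoint.
hf_logging.set_verbosity_error()
```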
```
Fail                                      Traceback (most recent call last)
/tmp/ipykernel_247/173641455.py in <module>
      4 import numpy as np
      5
----> 6 plapt = Plapt()

/home/irc/WELP-PLAPT/plapt.py in __init__(self, prediction_module_path, device)
     38         self.mol_encoder = RobertaModel.from_pretrained("seyonec/ChemBERTa-zinc-base-v1").to(self.device)
     39
---> 40         self.prediction_module = PredictionModule(prediction_module_path)
     41         self.cache = {}
     42

/home/irc/WELP-PLAPT/plapt.py in __init__(self, model_path)
      8 class PredictionModule:
      9     def __init__(self, model_path: str = "models/affinity_predictor.onnx"):
---> 10         self.session = onnxruntime.InferenceSession(model_path)
     11         self.input_name = self.session.get_inputs()[0].name
     12         self.mean = 6.51286529169358

/opt/conda/envs/vits/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py in __init__(self, path_or_bytes, sess_options, providers, provider_options, **kwargs)
    281
    282         try:
--> 283             self._create_inference_session(providers, provider_options, disabled_optimizers)
    284         except ValueError:
    285             if self._enable_fallback:

/opt/conda/envs/vits/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py in _create_inference_session(self, providers, provider_options, disabled_optimizers)
    308         session_options = self._sess_options if self._sess_options else C.get_default_session_options()
    309         if self._model_path:
--> 310             sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
    311         else:
    312             sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)

Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from models/affinity_predictor.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model_load_utils.h:47 void onnxruntime::model_load_utils::ValidateOpsetForDomain(const std::unordered_map<std::basic_string, int>&, const onnxruntime::logging::Logger&, bool, const string&, int) ONNX Runtime only guarantees support for models stamped with official released onnx opset versions. Opset 15 is under development and support for this is limited. The operator schemas and or other functionality may change before next ONNX release and in this case ONNX Runtime will not guarantee backward compatibility. Current official support for domain ai.onnx is till opset 14.
```