ValueError: Torch var stride_width.1 not found in context #1788
Are you sure your PyTorch model is valid for the input shape you are using?
Thank you @TobyRoseman for reaching out.
I'm using a correct, valid input shape. Here is what I'm using:
Right, as I said, all of the output tensors are empty. Also, the fact that the first tensor is of shape … is suspicious. Do you have a sample image?
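(For reference, a minimal sketch for inspecting those output shapes; detector_model and example_image_pt are assumed names matching the script later in this thread:)

import torch

with torch.inference_mode():
    out = detector_model(example_image_pt)
# The detector returns a list of dicts; empty detections show up as
# zero-length tensors, e.g. 'boxes' of shape (0, 4).
print([(k, v.shape) for k, v in out[0].items()])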
Sure, I've included an example image here. Thank you @TobyRoseman.
----> 1 import transforms as T
ModuleNotFoundError: No module named 'transforms'

Where is this package coming from? I'm certainly familiar with the torchvision transforms. This same model is also causing a different error in #1790.
Sorry, I forgot to add: you need to install …
I have the … installed. There is no … package.
@TobyRoseman, okay, I refined the code to remove that import:

import coremltools as ct
import torch, torchvision
from torchvision.transforms import functional as F, InterpolationMode, transforms as T
import requests
from PIL import Image
import numpy as np
from typing import Dict, Tuple, Optional

# Image conversion tools:
class PILToTensor(torch.nn.Module):
    def forward(
        self, image: torch.Tensor, target: Optional[Dict[str, torch.Tensor]] = None
    ) -> Tuple[torch.Tensor, Optional[Dict[str, torch.Tensor]]]:
        image = F.pil_to_tensor(image)
        return image, target

class ConvertImageDtype(torch.nn.Module):
    def __init__(self, dtype: torch.dtype) -> None:
        super().__init__()
        self.dtype = dtype

    def forward(
        self, image: torch.Tensor, target: Optional[Dict[str, torch.Tensor]] = None
    ) -> Tuple[torch.Tensor, Optional[Dict[str, torch.Tensor]]]:
        image = F.convert_image_dtype(image, self.dtype)
        return image, target

# Load the torchvision model
detector_model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True)
detector_model = detector_model.eval()

# Get a sample image
toTensor = T.PILToTensor()
toFloatTensor = T.ConvertImageDtype(torch.float)
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
example_image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
example_image_np = np.array(example_image)
example_image_pt = toFloatTensor(toTensor(example_image))
example_image_pt = example_image_pt.unsqueeze(0)

# Run the sample through the model to demonstrate the model works
y = detector_model(example_image_pt)

# Make an adapter to convert the model outputs to a tuple
class FasterRCNN_MobileNetV3_AdapterModel(torch.nn.Module):
    """This adapter is only here to unbox the first output."""

    def __init__(self, model, w=2):
        super().__init__()
        self.model = model

    def forward(self, x):
        result = self.model(x)
        return result[0]['boxes'], result[0]['labels'], result[0]['scores']

adapted_detector_model = FasterRCNN_MobileNetV3_AdapterModel(detector_model)

# Trace and convert the model using coremltools
model_to_trace = adapted_detector_model
with torch.inference_mode():
    out = model_to_trace(example_image_pt)
    traced_model = torch.jit.trace(model_to_trace, example_image_pt).eval()
detector_mlmodel = ct.convert(traced_model, inputs=[ct.ImageType(shape=example_image_pt.shape)])
detector_mlmodel.save("segmenter.mlmodel")
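(Before tracing, a quick sanity check that the adapted model returns non-empty tensors — a minimal sketch reusing the names from the script above, since empty outputs were the concern earlier in this thread:)

with torch.inference_mode():
    boxes, labels, scores = adapted_detector_model(example_image_pt)
# For a valid input image, boxes should have shape (N, 4) with N > 0.
print(boxes.shape, labels.shape, scores.shape)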
I met similar issues when converting FasterRCNN before. @ivyas21, could you try torch 1.11.0 with torchvision 0.12.0 to see if it works?
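(As an aside, a minimal sketch for confirming the downgrade took effect; the version strings are assumptions based on the suggestion above:)

import torch, torchvision

# Check that the suggested torch/torchvision combination is the one installed.
assert torch.__version__.startswith("1.11."), torch.__version__
assert torchvision.__version__.startswith("0.12."), torchvision.__version__
print("torch", torch.__version__, "/ torchvision", torchvision.__version__)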
@junpeiz, thank you. Let me try it. I hope it works as you suggest!
@junpeiz, I just downgraded torch and torchvision as you suggested. I'm getting this error now: …
@ivyas21 Good, now we can confirm that the original error is tied to the torch version. For the new error, it would be best to file a separate issue. Let's keep this thread focused on the stride_width.1 error.
@junpeiz, sounds good. Thank you for clarifying it. I'll open a new issue for the other error.
@junpeiz I tried with main (e0f8918) and torch==1.13.1, and I got: …
When converting a traced torchvision model, an expected input to a mul operation is not found:

ValueError: Torch var stride_width.1 not found in context

Stack Trace

…

Steps To Reproduce

…

System environment:
coremltools version: 6.2
OS: Linux foohostname 4.19.0-23-cloud-amd64 #1 SMP Debian 4.19.269-1 (2022-12-20) x86_64 GNU/Linux
How you installed coremltools: …

Please advise. Thank you!
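(For reports like this, a minimal sketch for collecting the environment details above; it assumes pip-installed packages:)

import platform
import coremltools, torch, torchvision

# Gather the version and OS details requested in the issue template.
print("coremltools:", coremltools.__version__)
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("platform:", platform.platform())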