Sorry for posting my question here since it's neither a bug nor a feature request. I would really appreciate some help with it. Please let me know if there is a better channel for general questions about the package.

I have a large dataset split into several chunks (e.g. 5): each chunk fits in memory, but not all 5 together. What would be the best way to train a TabNet model on such a dataset? There may be an easier way to do this (please share your suggestions!), but one idea would be to use PyTorch's `IterableDataset`. In that case, my question is whether there is any way to swap out this package's `TorchDataset` for a `TorchIterableDataset`.
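For reference, here is a minimal sketch of the `IterableDataset` idea, streaming one chunk into memory at a time. This is only an illustration under assumptions I'm making up for the example: the chunks are stored as `.npy` files with the target in the last column, and the class names/paths are hypothetical. It shows the generic PyTorch side of the approach; actually wiring such a dataset into this package would still require replacing its internal `TorchDataset` usage, which is exactly the open question.

```python
import os
import tempfile

import numpy as np
import torch
from torch.utils.data import DataLoader, IterableDataset


class ChunkedIterableDataset(IterableDataset):
    """Streams samples from on-disk chunks, keeping only one chunk in memory."""

    def __init__(self, chunk_paths):
        self.chunk_paths = chunk_paths

    def __iter__(self):
        for path in self.chunk_paths:
            chunk = np.load(path)               # load one chunk at a time
            X, y = chunk[:, :-1], chunk[:, -1]  # assume last column is the target
            for features, target in zip(X, y):
                yield torch.from_numpy(features).float(), torch.tensor(target).float()


# Demo: write two small chunks to disk, then stream them batch by batch.
tmp = tempfile.mkdtemp()
paths = []
for i in range(2):
    p = os.path.join(tmp, f"chunk{i}.npy")
    np.save(p, np.random.rand(100, 11))  # 10 features + 1 target per row
    paths.append(p)

loader = DataLoader(ChunkedIterableDataset(paths), batch_size=32)
n_rows = sum(xb.shape[0] for xb, yb in loader)
print(n_rows)  # 200: all rows from both chunks were streamed
```

Note that `IterableDataset` gives up random shuffling across chunks; shuffling within each in-memory chunk (and shuffling the chunk order each epoch) is a common compromise.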