Support Torch.Tensor in the variable explorer #7042
Thanks for reporting. We'll take a look at this for Spyder 4.
To unpack this a tad, it seems the proximate issue is support for PyTorch tensors in the Variable Explorer, which could be solved with a few straightforward changes to CollectionsEditor editor selection or within ArrayEditor. However, it seems the broader feature request the user was making, as reflected in the original title but not the modified one, was the ability for users to add support for custom types to the Variable Explorer via simple conversion scripts to known types for which we already offer Editors. Not sure the latter is realistic or feasible, but I wanted to make sure we were all clear on what specifically the user was asking for.
Thanks for your reply!
@ccordoba12 would be the one to ultimately decide that; based on his re-titling of your issue, it seems he's focused more on the former. However, the latter could always be added at some point in the (likely more distant) future, in the form of some kind of modular "plugin" support for the Variable Explorer's Editor classes. It would be really neat to have, but would take some work: refactoring the existing Editors into a modular system, creating an API to interact with third-party editors, etc. Somewhere between the quick-but-specific and general-but-more-involved approaches could be Pythonic duck typing, converting unknown types to known ones. The more custom types we supported, though, the more we'd want the option to view any object as a generic low-level object.
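As a rough illustration of that plugin/duck-typing idea, a minimal converter registry might look like the following. All names here are hypothetical, not Spyder's actual API; keying by the type's qualified name means the heavy library never has to be imported unless one of its objects actually shows up.

```python
# Hypothetical sketch of a converter registry for the Variable Explorer;
# none of these names exist in Spyder -- this only illustrates the idea.
CONVERTERS = {}

def register(qualified_name):
    """Register a conversion function for a type, keyed by its qualified
    name so the owning library is never imported up front."""
    def decorator(func):
        CONVERTERS[qualified_name] = func
        return func
    return decorator

@register("torch.Tensor")
def _torch_to_numpy(tensor):
    # Standard PyTorch-to-NumPy conversion path
    return tensor.detach().cpu().numpy()

def to_known_type(obj):
    """Convert obj to a type an existing Editor understands, or return it
    unchanged so it falls back to the generic object view."""
    key = type(obj).__module__ + "." + type(obj).__name__
    converter = CONVERTERS.get(key)
    return converter(obj) if converter else obj
```

Third-party code could then register a one-line converter for any type without Spyder having to know about the library at all.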
The narrow issue is similar to, perhaps the same as, #5375. I think there should be an issue about making it easier to support new data types, with how to achieve this to be discussed.
They're related, and both fit under the broader issue of PyTorch tensor support, but they are not the same (narrowly speaking, at least). Fixing this one (displaying tensors as arrays with ArrayEditor rather than as generic objects) would work around the other one, but not the converse; the latter would be a very simple fix along the lines of #6284 and a few similar issues I've fixed, while the former would be a tad more involved (though not that much more, at least in theory).
Indeed, agreed on my part. @ccordoba12?
I don't think it's necessary to wait for ccordoba12 to give permission to open a new issue. |
Sorry :( Thanks for opening it. |
Now I am using Spyder 4.0.0b1, but the problem still exists :(
@donglee-afar We've been busy implementing the overhauled completion/introspection/analysis architecture and the new debugger for the next betas, but more modular support for new Variable Explorer datatypes is something we'd like to add in the future. In the meantime, though, it should actually not be too difficult to add support for viewing and even editing TensorFlow tensors as Numpy arrays, so long as the output arrays are <4D (the same restriction as for Numpy arrays). We already do something similar for a few other types.

Other than that, there's only one complexity: you'll need to add a short try-except block in Spyder-Kernels that attempts the import and falls back gracefully if the library isn't installed.

After you test this and ensure it's working with dev versions of Spyder-Kernels and Spyder, there's only one other problem: once your changes are merged, Spyder will crash if it doesn't have the latest dev version of Spyder-Kernels. To fix that, at the end, when you want to add some backwards compatibility, you can wrap the relevant call in a try-except as well.
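A minimal sketch of the kind of guarded import described above (the names are illustrative, not Spyder-Kernels' actual code); the point is that the rest of the kernel can cheaply test for the optional dependency without crashing when it is absent:

```python
# Try to import the optional dependency; if it is not installed, fall
# back to a sentinel so later checks stay cheap and never raise.
try:
    from torch import Tensor as TorchTensor
except ImportError:
    TorchTensor = None  # PyTorch unavailable in this environment

def is_torch_tensor(value):
    """True only when PyTorch is installed and value is one of its tensors."""
    return TorchTensor is not None and isinstance(value, TorchTensor)
```

The same pattern, applied around the new call sites, also covers the backwards-compatibility concern: older environments simply take the `ImportError` branch.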
Thanks for the detailed guide! Anyway, I'm not very familiar with PyTorch, just TensorFlow, so I don't know about the side effects if PyTorch is always imported at kernel startup. For TensorFlow that could cause some headaches, since it would e.g. prevent masking out graphics cards before the import; furthermore, the initial TensorFlow import can sometimes take quite long, and as said, I don't have the equivalent experience with PyTorch.

However, I do have another suggestion that could improve daily work for ML developers in TensorFlow and PyTorch without importing these libraries at startup. Most of the time, I as an ML developer would already be happy to see basic info about my tensors, like datatypes and shapes, inside the Variable Explorer. Of course an array editor would be even better, but let's say that is step 2. So I did a quick experiment, expanding the `get_size(item)` function inside `nsview.py` with an extra `elif` that checks whether the item has a `shape` attribute; if its string representation matches a known pattern, it is returned:

```python
if hasattr(item, "shape"):
    try:
        shape_str_repr = repr(item.shape)
        if shape_str_repr.startswith('TensorShape(['):
            # TensorFlow shape objects
            return str(item.shape)
        elif shape_str_repr.startswith("torch.Size(["):
            # PyTorch shapes: strip the "torch.Size([" prefix and "])" suffix
            return "({:s})".format(shape_str_repr[12:-2])
    except Exception:
        return 1  # fall back to the current default size
```

I know that this is hacky, but it could be a good help for daily work, and in the worst case it should just fall back to the current status quo.
Hello, I am a newcomer to PyTorch. I tried to view the tensor variable in the Spyder IDE and to do what @CAM-Gerlach directed, but I still don't know how to solve it. Could I get more detailed steps or a walkthrough?
Hi Spyder team, any update on this matter? Like current developments, or even whether supporting PyTorch tensors in the debugger and the Variable Explorer is a priority? It would be really helpful. Thanks.
Sorry, there are no updates about this. We'll try to get to it in the coming months.
I am sure the list of to-dos for Spyder is long, but I just wanted to bump this again since it seems like an important feature.
Note: This StackOverflow answer seems relevant for implementing this feature. However, it shows that it won't be easy to do, given the many ways in which you can declare a Torch tensor.
It actually doesn't seem that difficult after having that detailed answer to reference, since what matters is how to convert the tensor to Numpy, not how it is declared. It boils down to a few sequential steps: detach the tensor from the computation graph if it requires grad, move it to the CPU if it lives on the GPU, and then call its `.numpy()` method.
In fact, it appears that if neither of the first two are true, tensors should theoretically be editable in-place as Numpy ndarrays (though in practice, that presumably won't work without non-trivial changes on the kernel side, since right now AFAIK objects are pickled back and forth and replaced rather than actually edited in-place).
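The steps above can be sketched as a small helper. This version duck-types against the `torch.Tensor` API (`requires_grad`, `detach`, `device`, `cpu`, `numpy`) rather than importing PyTorch, so it is a sketch of the conversion logic, not Spyder's actual code:

```python
def to_ndarray(tensor):
    """Convert a PyTorch-style tensor to a NumPy array via the steps above."""
    if getattr(tensor, "requires_grad", False):
        tensor = tensor.detach()   # 1. drop the autograd graph
    device = getattr(tensor, "device", None)
    if device is not None and getattr(device, "type", "cpu") != "cpu":
        tensor = tensor.cpu()      # 2. copy off the accelerator to host memory
    return tensor.numpy()          # 3. expose as a NumPy ndarray
```

Note that for a real CPU tensor, `.numpy()` shares memory with the tensor, which is what makes the in-place editing idea theoretically possible.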
Problem Description
I'm a PyTorch user, and I keep thinking that the type torch.Tensor could be represented in a proper way. All torch.Tensor data needs is a call to `.numpy()` to be converted to a NumPy array object.
So, is it possible to apply simple scripts for the unsupported data types inside the Variable Explorer? (i.e., if the Variable Explorer sees a Tensor-type object, then a corresponding function/method for that type is applied and the result is shown in the Variable Explorer with a distinguishable color.)