Complex type promotion rules are unclear #478
Comments
- Add a table with promotion rules
- Update type promotion lattice diagram

Closes data-apisgh-477
Closes data-apisgh-478
@leofang I opened gh-491 with a table with the rules. It looks like the main thing to discuss on this issue is:
That looks to me like cross-kind promotion, which is in general not mandated to exist. This is analogous to integer and real mixed-kind promotion. Libraries may implement it, but not supporting it is also fine. Am I missing something here?
I would argue that mixed real (floats) and complex should be allowed and defined. The "kind" issues really pop up when promoting uints, ints, or "inexact" types (using "inexact" in the NumPy sense, to denote both real and complex).
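To make the distinction concrete, here is a small NumPy illustration (NumPy's current behavior, shown only as an example of where integer-kind promotion gets awkward while real-complex promotion does not):

```python
import numpy as np

# Cross-kind promotion between integer kinds can be surprising:
# NumPy promotes uint64 with int64 to float64, because no integer
# dtype covers both value ranges.
print(np.result_type(np.uint64, np.int64))       # float64

# Real-with-complex promotion has no such pitfall: the real part keeps
# (at least) its original precision.
print(np.result_type(np.float32, np.complex64))  # complex64
```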
Thanks, Ralf. I interpreted the table mistakenly when I raised it in the meeting. This is what I
Basically, any interaction between a real array and a complex array always results in a complex array.
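For concreteness, this is what NumPy already does (shown purely as an illustration; at this point the standard itself does not specify it):

```python
import numpy as np

x = np.ones(3, dtype=np.float32)
y = np.zeros(3, dtype=np.complex64)

# Mixing a real and a complex array yields a complex array whose
# precision covers both inputs.
print((x + y).dtype)                              # complex64
print(np.result_type(np.float64, np.complex64))   # complex128
```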
This is only part of the picture. Another major reason is that cross-kind casting may not be implemented in existing libraries. Sometimes that is by design, for example to avoid unintended type promotions silently blowing up memory usage, or to limit the implementation effort or binary size (for binary ops, you need a lot of kernels to cover all combinations of supported dtypes). That said, I just checked, and JAX and PyTorch fully support mixed real-complex dtype promotion. That leaves only TensorFlow, which is the most limited library in this respect. But I'd say that this likely shouldn't stop us either way.
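For reference, a quick way to check the PyTorch behavior mentioned above (a sketch; assumes a PyTorch build with complex dtype support):

```python
import torch

# PyTorch promotes mixed real/complex inputs to a complex result dtype.
print(torch.promote_types(torch.float32, torch.complex64))   # torch.complex64
print(torch.promote_types(torch.float64, torch.complex64))   # torch.complex128

x = torch.ones(3, dtype=torch.float32)
y = torch.zeros(3, dtype=torch.complex64)
print((x + y).dtype)                                          # torch.complex64
```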
As I expected, TensorFlow doesn't allow this:

>>> import tensorflow.experimental.numpy as tnp
>>> x = tnp.ones(3, dtype=tnp.float32)
>>> y = tnp.zeros(3, dtype=tnp.complex64)
>>> x + y
Traceback (most recent call last):
Cell In [21], line 1
x + y
File ~/mambaforge/envs/tf/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py:153 in error_handler
raise e.with_traceback(filtered_tb) from None
File ~/mambaforge/envs/tf/lib/python3.9/site-packages/tensorflow/python/framework/ops.py:7209 in raise_from_not_ok_status
raise core._status_to_exception(e) from None # pylint: disable=protected-access
InvalidArgumentError: cannot compute AddV2 as input #1(zero-based) was expected to be a float tensor but is a complex64 tensor [Op:AddV2]

I couldn't test MXNet, because its latest release does not yet include complex dtype support.

The current type promotion design - which does not support any "cross-kind" casting - was partly informed by TensorFlow not supporting this, and partly by other libraries not supporting integer-float cross-kind casting. The "how many kernels do I need for this" question is also a valid concern - @oleksandr-pavlyk brought it up again yesterday in a different context. For 2 complex and 2 real dtypes, it expands the combinations from 8 to 16; if you add half-precision dtypes, it goes from 18 to 36.

JAX has a good discussion about the various types of cross-kind promotion here: https://jax.readthedocs.io/en/latest/jep/9407-type-promotion.html#mixed-promotion-float-and-complex. It is brief about complex-real promotion: there are no numerical issues, so it chooses to support this feature.

Overall I'd say that a large majority of users expect this to work, and that it's convenient (it avoids having to do dtype checks and explicit promotion). This probably outweighs the drawbacks, especially given that TensorFlow seems to have no concrete plans to implement the standard. @edloper, @shoyer any thoughts on this from the TF perspective?
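As a quick illustration of the JAX behavior referenced above (a sketch under default settings; the result dtype follows JAX's documented promotion lattice):

```python
import jax.numpy as jnp

x = jnp.ones(3, dtype=jnp.float32)
y = jnp.zeros(3, dtype=jnp.complex64)

# JAX promotes mixed real/complex inputs to a complex result.
print((x + y).dtype)   # complex64
```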
Let me also add the alternative type promotion diagram, alongside the current one in gh-491 (which is without cross-kind promotion):
I would hope it at least works for Python scalars.
I know it's very common to do this (also |
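For what it's worth, this is what NumPy does today when a Python complex scalar meets a real array (an illustration only; the standard's scalar rules were still being discussed here):

```python
import numpy as np

x = np.ones(3, dtype=np.float32)

# A Python complex scalar combined with a real array gives a complex
# array, keeping the array's precision.
print((1j * x).dtype)   # complex64
```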
If there are implementation issues, that is fair reasoning, orthogonal to the end-user API perspective. It would be good to hear the argument from TensorFlow directly, though. From a user point of view, I don't see much of an argument. Memory bloat is no worse than
@rgommers Hi Ralf, you can enable the NumPy behaviors in TF via the setup call shown in the guide (see https://www.tensorflow.org/guide/tf_numpy#setup for details).
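Per that guide, the enabling call looks like the following (a sketch based on the linked page; this is an experimental API, so check the guide for the current spelling):

```python
import tensorflow.experimental.numpy as tnp

# Opt in to NumPy-style behaviors (including type promotion) in TF.
tnp.experimental_enable_numpy_behavior()
```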
Sounds like we have a clear green light to proceed 🙂 To me |
This table needs to be updated to include mixed real and complex dtypes (making it a 4x4 table). Currently the complex type promotion is completely unspecified, and the lattice diagram (which is currently missing, see #477) is not very helpful.
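For reference, one way to preview what such a 4x4 table could look like is to print NumPy's current promotion results for the four dtypes in question (illustrative only; the table proposed for the standard is the one in gh-491, mentioned above):

```python
import numpy as np

dtypes = [np.float32, np.float64, np.complex64, np.complex128]
names = [np.dtype(d).name for d in dtypes]

# Print NumPy's promotion result for every pair, as a rough preview of
# the requested 4x4 mixed real/complex table.
print(" " * 12 + "".join(f"{n:>12}" for n in names))
for a, name_a in zip(dtypes, names):
    row = "".join(f"{np.result_type(a, b).name:>12}" for b in dtypes)
    print(f"{name_a:>12}" + row)
```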