
Complex type promotion rules are unclear #478

Closed
leofang opened this issue Sep 11, 2022 · 11 comments · Fixed by #491
Labels: topic: Complex Data Types · topic: Type Promotion

@leofang
Contributor

leofang commented Sep 11, 2022

This table needs to be updated to include mixed real and complex dtypes (making it a 4x4 table). Currently the complex type promotion is completely unspecified, and the lattice diagram (which is currently missing, see #477) is not very helpful.

@leofang added the topic: Complex Data Types label Sep 11, 2022
rgommers added a commit to rgommers/array-api that referenced this issue Oct 6, 2022
- Add a table with promotion rules
- Update type promotion lattice diagram

Closes gh-477
Closes gh-478
@rgommers
Member

rgommers commented Oct 6, 2022

@leofang I opened gh-491 with a table with the rules. It looks like the main thing to discuss on this issue is:

This table needs to be updated to include mixed real and complex dtypes (making it a 4x4 table).

That looks to me like cross-kind promotion, which is in general not mandated to exist. This is analogous to integer and real mixed-kind promotion. Libraries may implement it, but not supporting it is also fine.

Am I missing something here?

@rgommers rgommers added this to the v2022 milestone Oct 6, 2022
@seberg
Contributor

seberg commented Oct 6, 2022

I would argue that mixed real (floats) and complex should be allowed and defined. The "kind" issues really pop up with promoting uints, ints, or "inexact" (using inexact as numpy to denote both real and complex).

@leofang
Contributor Author

leofang commented Oct 6, 2022

Thanks, Ralf. I interpreted the table mistakenly when I raised it in the meeting. This is what I had in mind:

+---------+-----+-----+-----+-----+
|         |  f4 |  f8 |  c8 | c16 |
+=========+=====+=====+=====+=====+
| **f4**  |  f4 |  f8 |  c8 | c16 |
+---------+-----+-----+-----+-----+
| **f8**  |  f8 |  f8 | c16 | c16 |
+---------+-----+-----+-----+-----+
| **c8**  |  c8 | c16 |  c8 | c16 |
+---------+-----+-----+-----+-----+
| **c16** | c16 | c16 | c16 | c16 |
+---------+-----+-----+-----+-----+

Basically, any interaction between a real array and a complex array always results in a complex array.
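For reference, this table matches what NumPy already does; a quick check with `np.result_type` (a sketch assuming a NumPy build with complex dtype support):

```python
import numpy as np

# Mixing a real and a complex dtype always yields a complex dtype
# wide enough to hold the real part of either operand.
assert np.result_type(np.float32, np.complex64) == np.complex64
assert np.result_type(np.float32, np.complex128) == np.complex128
assert np.result_type(np.float64, np.complex64) == np.complex128
assert np.result_type(np.float64, np.complex128) == np.complex128
```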

@rgommers
Member

rgommers commented Oct 6, 2022

The "kind" issues really pop up with promoting uints, ints, or "inexact" (using inexact as numpy to denote both real and complex).

This is only part of the picture. Another major reason is that cross-kind casting may not be implemented in existing libraries. Sometimes that is by design, for example to avoid unintended type promotions silently blowing up memory usage, or to limit the implementation effort or binary size (for binary ops, you need a lot of kernels to cover all combinations of supported dtypes).

That said, I just checked, and JAX and PyTorch fully support mixed real-complex dtype promotion. That leaves only TensorFlow, which is the most limited library in this respect. But I'd say that shouldn't stop us either way.

@rgommers
Member

rgommers commented Oct 7, 2022

As I expected, TensorFlow doesn't allow this:

>>> import tensorflow.experimental.numpy as tnp
>>> x = tnp.ones(3, dtype=tnp.float32)
>>> y = tnp.zeros(3, dtype=tnp.complex64)
>>> x + y
Traceback (most recent call last):
  Cell In [21], line 1
    x + y
  File ~/mambaforge/envs/tf/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py:153 in error_handler
    raise e.with_traceback(filtered_tb) from None
  File ~/mambaforge/envs/tf/lib/python3.9/site-packages/tensorflow/python/framework/ops.py:7209 in raise_from_not_ok_status
    raise core._status_to_exception(e) from None  # pylint: disable=protected-access
InvalidArgumentError: cannot compute AddV2 as input #1(zero-based) was expected to be a float tensor but is a complex64 tensor [Op:AddV2]

I couldn't test MXNet, because its latest release does not yet include complex dtype support.

The current type promotion design - which does not support any "cross-kind" casting - was partly informed by TensorFlow not supporting this, partly by other libraries not supporting integer-float cross-kind casting. The "how many kernels do I need for this" is also a valid concern - @oleksandr-pavlyk brought that up again yesterday in a different context. For 2 complex and 2 real dtypes, it expands the combinations from 8 to 16; if you add half-precision dtypes, it goes from 18 to 36.
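The kernel-count arithmetic can be sketched directly (a toy enumeration; the dtype labels, including a hypothetical half-precision complex "c4", are placeholders, not standard names):

```python
def n_kernels(real, cplx, cross_kind):
    """Count ordered (lhs, rhs) dtype pairs a binary op must support."""
    if cross_kind:
        # Every inexact dtype can pair with every other inexact dtype.
        n = len(real) + len(cplx)
        return n * n
    # Same-kind only: real pairs with real, complex pairs with complex.
    return len(real) ** 2 + len(cplx) ** 2

print(n_kernels(["f4", "f8"], ["c8", "c16"], cross_kind=False))  # 8
print(n_kernels(["f4", "f8"], ["c8", "c16"], cross_kind=True))   # 16
print(n_kernels(["f2", "f4", "f8"], ["c4", "c8", "c16"], cross_kind=False))  # 18
print(n_kernels(["f2", "f4", "f8"], ["c4", "c8", "c16"], cross_kind=True))   # 36
```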

JAX has a good discussion about the various types of cross-kind promotion here: https://jax.readthedocs.io/en/latest/jep/9407-type-promotion.html#mixed-promotion-float-and-complex. It is brief about complex-real promotion: there are no numerical issues, so it chooses to support this feature.

Overall I'd say that a large majority of users expect this to work, and that it's convenient (prevent having to do dtype checks and explicit promotion). This probably outweighs the drawbacks, especially given that TensorFlow seems to not have concrete plans to implement the standard.

@edloper, @shoyer any thoughts on this from the TF perspective?

@rgommers
Member

rgommers commented Oct 7, 2022

Let me also add the alternative type promotion diagram, to compare with the current one in gh-491 (which is without cross-kind promotion):

[Image: dtype_promotion_lattice — type promotion lattice diagram]

@asmeurer
Member

asmeurer commented Oct 7, 2022

I would hope it at least works for Python scalars. x + 0j seems like the simplest and most obvious way to convert a real array to a complex one.
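A minimal sketch of that idiom in NumPy (the exact complex precision of the result depends on the library's scalar-promotion rules, so only the kind is asserted here):

```python
import numpy as np

x = np.ones(3, dtype=np.float32)
z = x + 0j  # adding a Python complex scalar converts the real array to complex
assert z.dtype.kind == "c"  # result is a complex dtype
assert z.shape == x.shape
```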

@rgommers
Member

rgommers commented Oct 7, 2022

x + 0j seems like the simplest and most obvious way to convert a real array to a complex one.

I know it's very common to do this (also * 1.0 to convert integer to real). However, it doesn't seem like great practice irrespective of the decision here - there's an explicit astype function for converting to other dtypes.
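For example, using NumPy's `ndarray.astype` method (the array API standard specifies a comparable standalone `astype` function):

```python
import numpy as np

x = np.ones(3, dtype=np.float32)
z = x.astype(np.complex64)  # explicit dtype conversion; no promotion rules involved
assert z.dtype == np.complex64
```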

@seberg
Contributor

seberg commented Oct 7, 2022

If there are implementation issues, that is fair reasoning, orthogonal to the end-user API perspective. It would be good to hear the argument from TensorFlow directly, though.

From a user point of view, I don't see much of an argument against it. Memory bloat is no worse than int32 + int64, and Aaron's example explicitly wants it. This also looks like it should work to me:

arr = arr / abs(arr)  # normalize magnitude
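In NumPy this mixed division already works: `abs()` of a complex array is real-valued, so the division is a complex/real op. A quick check (assuming NumPy):

```python
import numpy as np

arr = np.array([3 + 4j, 1 - 1j])      # complex128
arr = arr / abs(arr)                  # abs() of complex input is float64
assert arr.dtype == np.complex128     # complex / real stays complex
assert np.allclose(np.abs(arr), 1.0)  # magnitudes normalized to 1
```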

@JW1992

JW1992 commented Oct 9, 2022

@rgommers Hi Ralf, you can enable the NumPy behaviors in TF via
import tensorflow.experimental.numpy as tnp
tnp.experimental_enable_numpy_behavior()
x = tnp.ones(3, dtype=tnp.float32)
y = tnp.zeros(3, dtype=tnp.complex64)
x + y # complex64

(See https://www.tensorflow.org/guide/tf_numpy#setup for details)

@leofang
Contributor Author

leofang commented Oct 17, 2022

Sounds like we have a clear green light to proceed 🙂 To me complex + real is not "cross-kind" -- they are both floating-point types, and promotion within that kind should be allowed. I do not believe it is a problem in terms of code/binary bloat: in practice this must be supported at a low (C/C++) level anyway, and any reasonable implementation must be able to handle it natively. For example, the C++ standard permits such mixed real/complex arithmetic.
