[ty] Allow gradual lower/upper bounds in a constraint set#21957

Merged
dcreager merged 4 commits into main from dcreager/gradual-bounds
Dec 13, 2025

Conversation

@dcreager
Member

We now allow the lower and upper bounds of a constraint to be gradual. Before, we would take the top/bottom materializations of the bounds. This required us to pass in whether the constraint was intended for a subtyping check or an assignability check, since that would control whether we took the "restrictive" or "permissive" materializations, respectively.

Unfortunately, doing so meant that we lost information about whether the original query involved a non-fully-static type. This would cause us to create specializations like `T = object` for the constraint `T ≤ Any`, when it would be nicer to carry through the gradual type and produce `T = Any`.

We're not currently using constraint sets for subtyping checks, nor will we be in the very near future. So for now, we're going to assume that constraint sets are always used for assignability checks, and allow the lower/upper bounds to not be fully static. Once we get to the point where we need to use constraint sets for subtyping checks, we will consider how best to record this information in constraints.
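To make the difference concrete, here is a toy sketch of the two strategies (illustrative names only, not ty's actual implementation), using strings to stand in for types:

```python
# Toy model: solving the constraint T <= upper under the two strategies.
ANY, OBJECT = "Any", "object"

def top_materialization(ty: str) -> str:
    # The "permissive" materialization replaces Any with object.
    return OBJECT if ty == ANY else ty

def solve_materialized(upper: str) -> str:
    # Old behavior: materialize the bound first, then solve T to the
    # largest type satisfying T <= upper. The gradual type is lost.
    return top_materialization(upper)

def solve_gradual(upper: str) -> str:
    # New behavior: carry the gradual bound through, so T <= Any
    # solves to Any rather than object.
    return upper

print(solve_materialized(ANY))  # prints "object"
print(solve_gradual(ANY))       # prints "Any"
```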

@dcreager dcreager added the internal An internal refactor or improvement label Dec 12, 2025
@dcreager dcreager requested a review from sharkdp as a code owner December 12, 2025 20:05
@dcreager dcreager added the ty Multi-file analysis & type inference label Dec 12, 2025
@dcreager
Member Author

As written, note that this PR still takes the materializations of gradual typevar bounds and constraints. That deserves its own thought, but is not as simple a change.

@astral-sh-bot

astral-sh-bot bot commented Dec 12, 2025

Diagnostic diff on typing conformance tests

No changes detected when running ty on typing conformance tests ✅

@astral-sh-bot

astral-sh-bot bot commented Dec 12, 2025

mypy_primer results

No ecosystem changes detected ✅

No memory usage changes detected ✅

@carljm
Contributor

carljm commented Dec 12, 2025

Reviewing now... I'm seeing one failing test in CI? Is this something non-deterministic?

I think materializing gradual upper bounds / constraints is likely fine for now (and maybe for always).

Contributor

@carljm carljm left a comment


Looks good!

reveal_type(generic_context(bounded).specialize_constrained(ConstraintSet.always()))
# revealed: ty_extensions.Specialization[T@bounded = Base]
reveal_type(generic_context(bounded).specialize_constrained(ConstraintSet.range(Never, T, object)))
# revealed: ty_extensions.Specialization[T@bounded = Base & Any]
Contributor


Nice, this seems like the right solution here

Comment on lines 166 to +169
# revealed: None
reveal_type(generic_context(constrained).specialize_constrained(ConstraintSet.range(Never, T, object)))
# revealed: None
reveal_type(generic_context(constrained).specialize_constrained(ConstraintSet.range(Never, T, Any)))
Contributor


These are None because we have no basis for picking between Base and Unrelated, so we'll just fall back to Unknown?

Member Author


That's right

@carljm
Contributor

carljm commented Dec 12, 2025

I think materializing gradual upper bounds / constraints is likely fine for now (and maybe for always).

Well, maybe not for always -- I just realized that not doing this may be part of the fix for all those TODOs in the tests around bound-by-gradual and constrained-by-gradual typevars, where we want to solve to the actual gradual constraint. But I think this is much lower priority.

@carljm
Contributor

carljm commented Dec 13, 2025

The failing test is an intersection ordering, and it passes for me locally, so it looks like a non-deterministic Salsa IDs issue.

Not sure if there's a cleverer solution inside the constraint-set implementation, but an easy (and I think cheap) solution would be to just normalize the ordering of those intersection types via union_or_intersection_elements_ordering.
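As a sketch of the suggestion (using a stand-in string sort key, not ty's actual `union_or_intersection_elements_ordering`): normalize the element order with a key that does not depend on when each type was created.

```python
def normalized_intersection(elements: list[str]) -> list[str]:
    # Sort by display name, a key that is the same on every run,
    # rather than by interning IDs assigned in creation order.
    return sorted(elements)

# The same elements created in different orders normalize identically,
# so the rendered intersection type is deterministic.
run_a = ["Base", "Any"]
run_b = ["Any", "Base"]
assert normalized_intersection(run_a) == normalized_intersection(run_b)
```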

Member Author

@dcreager dcreager left a comment


Not sure if there's a cleverer solution inside the constraint-set implementation, but an easy (and I think cheap) solution would be to just normalize the ordering of those intersection types via union_or_intersection_elements_ordering.

Done


@dcreager dcreager merged commit b413a6d into main Dec 13, 2025
42 checks passed
@dcreager dcreager deleted the dcreager/gradual-bounds branch December 13, 2025 03:18
dcreager added a commit that referenced this pull request Dec 13, 2025
* origin/main: (22 commits)
  [ty] Allow gradual lower/upper bounds in a constraint set (#21957)
  [ty] disallow explicit specialization of type variables themselves (#21938)
  [ty] Improve diagnostics for unsupported binary operations and unsupported augmented assignments (#21947)
  [ty] update implicit root docs (#21955)
  [ty] Enable even more goto-definition on inlay hints (#21950)
  Document known lambda formatting deviations from Black (#21954)
  [ty] fix hover type on named expression target (#21952)
  Bump benchmark dependencies (#21951)
  Keep lambda parameters on one line and parenthesize the body if it expands (#21385)
  [ty] Improve resolution of absolute imports in tests (#21817)
  [ty] Support `__all__ += submodule.__all__`
  [ty] Change frequency of invalid `__all__` debug message
  [ty] Add `KnownUnion::to_type()` (#21948)
  [ty] Classify `cls` as class parameter (#21944)
  [ty] Stabilize rename (#21940)
  [ty] Ignore `__all__` for document and workspace symbol requests
  [ty] Attach db to background request handler task (#21941)
  [ty] Fix outdated version in publish diagnostics after `didChange` (#21943)
  [ty] avoid fixpoint unioning of types containing current-cycle Divergent (#21910)
  [ty] improve bad specialization results & error messages (#21840)
  ...
dcreager added a commit that referenced this pull request Dec 15, 2025
…21983)

In #21957, we tried to use
`union_or_intersection_elements_ordering` to provide a stable ordering
of the union and intersection elements that are created when determining
which type a typevar should specialize to. @AlexWaygood pointed out in a comment on #21551 that this won't work, since that provides a consistent ordering within a single process run, but does not provide a stable ordering across runs.

This is an attempt to produce a proper stable ordering for constraint
sets, so that we end up with consistent diagnostic and test output.

We do this by maintaining a new `source_order` field on each interior
BDD node, which records when that node's constraint was added to the
set. Several of the BDD operators (`and`, `or`, etc) now have
`_with_offset` variants, which update each `source_order` in the rhs to
be larger than any of the `source_order`s in the lhs. This is what
causes that field to be in line with (a) when you add each constraint to
the set, and (b) the order of the parameters you provide to `and`, `or`,
etc. Then we sort by that new field before constructing the
union/intersection types when creating a specialization.
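A minimal Python model of the `source_order` bookkeeping described above (the real implementation lives in ty's BDD code; these names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Constraint:
    name: str
    source_order: int  # when this constraint was added to its set

def and_with_offset(lhs: list[Constraint], rhs: list[Constraint]) -> list[Constraint]:
    # Shift every rhs source_order past the largest lhs source_order,
    # so the combined set preserves the order the operands were given in.
    offset = max((c.source_order for c in lhs), default=-1) + 1
    shifted = [Constraint(c.name, c.source_order + offset) for c in rhs]
    return lhs + shifted

a = [Constraint("T <= Base", 0)]
b = [Constraint("T <= Unrelated", 0)]
combined = and_with_offset(a, b)

# Sorting by source_order recovers insertion order, giving a stable
# element order when building the union/intersection type.
ordered = [c.name for c in sorted(combined, key=lambda c: c.source_order)]
print(ordered)  # prints ['T <= Base', 'T <= Unrelated']
```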
