Fix #31833 Relate two indexed access types covariantly #31837
Conversation
Hmm, I'm not sure this is 100% sound. Intuitively it sounds right at first - but, hypothetically, if we have two unknown keys both constrained to the same keyof type, couldn't a value read under one key end up being written under the other?
It's not - but it wasn't before. To get this right I think you really need to drive the simplifications by the expression forms, not by the relation. The biggest giveaway here is that you can construct cases where relating the two accesses is unsound.
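For concreteness, here is a sketch of the kind of case I think is being alluded to (my own construction, not from the thread), assuming two distinct key parameters that share the same `keyof T` constraint:

```ts
interface Point {
  x: number;
  label: string;
}

function copyProp<T, K1 extends keyof T, K2 extends keyof T>(obj: T, dst: K1, src: K2): void {
  // If T[K2] were assignable to T[K1] merely because both keys share the
  // constraint `keyof T`, the line below would be accepted, yet nothing
  // guarantees K1 and K2 name the same property:
  // obj[dst] = obj[src];
}

declare const p: Point;
copyProp(p, "x", "label"); // if the write above were allowed, a string would
                           // silently land in the numeric 'x' slot
```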
Thinking about it some more, I'm not sure it's a decidable problem. We have an unknown key on both sides of the relation, so the question then becomes analogous to my favorite variance conundrum: given an animal whose exact species we don't know, what can we safely feed it? Interesting thing to think about.
Whether the simplification is safe really depends on how the indexed access is being used. So when we have something like:

```ts
interface Carnivore { food: "meat"; }
interface Herbivore { food: "plants"; }

type Ref = Carnivore | Herbivore;

declare const x: Ref;
declare const y: Ref;

x.food = y.food;
```

We need to use the write type of `x.food` and the read type of `y.food`. But the important thing is that the selection of types is driven by the expression, not the typing relation. So we never actually ask whether one indexed access type is related to another in the abstract. The read/write distinction does not show up in the typing relation here. When you want to relate functions that accept indexed accesses, such as in the original issue, the types are just related like normal because there is no assignment going on.
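A small sketch of how that plays out for the example above (my own illustration, assuming the write type is the intersection of the constituents' property types):

```ts
interface Carnivore { food: "meat"; }
interface Herbivore { food: "plants"; }
type Ref = Carnivore | Herbivore;

// Reading distributes over the union:
type ReadFood = Ref["food"];                            // "meat" | "plants"

// A value written through a Ref must be acceptable to every constituent,
// so a sound write type is the intersection:
type WriteFood = Carnivore["food"] & Herbivore["food"]; // "meat" & "plants", i.e. never

declare const x: Ref;
declare const y: Ref;

const read: ReadFood = y.food;  // reading at the union is always fine
// x.food = read;               // not provably safe: the write type here is never
```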
And FWIW my PR is a hack - I'm really throwing it out there as a quick fix if lots of people are hitting the issue.
You would think so, right? But AFAICT the only way you can make this decision with certainty is if you can prove that both sides are using the very same key. So I guess this is part of what you meant about looking at the expressions instead of the types: if we know something about the origin of the values involved, we know more than the types alone can tell us.
(Sorry if I'm being a pest - the implications of #30769 have been really thought-provoking.)

```ts
declare let foo: <T, K extends keyof T>(arg: T[K]) => void;
declare let bar: <T, K extends keyof T>(arg: T[K]) => void;

foo = bar;
bar = foo;
```

In type theory terms, would I be correct to assume the reason the above is sound is that `T[K]` only appears as a parameter, so neither function ever writes through it?
The problem is that `T[K]` is really standing in for two types: the type you get when you read a property at key `K`, and the type you must supply when you write one. So in the cases you highlight where there is trickiness you aren't truly relating `T[K]` to `T[K]`; you're relating the read type of one side to the write type of the other. Once you have the distinction then both of the assignments in your example can be related without trouble.
Yep, though it's not so much about the origin as opposed to the use (I guess the same thing?). Given `l.x = r.y`: the type of the LHS should be the write type of `l.x`, and the type of the RHS should be the read type of `r.y`. The uncertainty you refer to is encoded in the type constructors themselves: the read type is a union, the write type an intersection.

EDIT: Just saw your edit.
Not at all! This is all super interesting. Regarding your question: I believe that is sound because there is no mutation of the object involved.
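A quick sketch of that intuition (my own, using a hypothetical `consume` helper): the value is only read and passed along, never written back.

```ts
declare function consume<T, K extends keyof T>(value: T[K]): void;

interface Thing { pig: string; cow: number; }
declare const t: Thing;
declare const k: keyof Thing;

// Only a read happens here; nothing is ever written through T[K],
// so there is no way to corrupt the object.
consume<Thing, keyof Thing>(t[k]);
```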
I guess I just subconsciously split it into two types in my head without realizing it, so thanks for highlighting that and making it explicit. What's really interesting about this is that narrowing the object first seems like it should let you write at a more specific type.
Yep, I believe that observation is correct. Everything I've said has been ignoring narrowing of the object type. If you can narrow the object prior to writing, you can effectively prove that everyone else's pointer is to an object that really is of that narrowed type, therefore allowing you to write at the more specific type. In your analogy: if I can narrow my reference down to Carnivore, then feeding it meat cannot break anyone else's (wider) view of the animal.
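A sketch of the narrowing point, using a variant of the earlier example with an added discriminant (the `kind` field is my addition):

```ts
interface Carnivore { kind: "carnivore"; food: "meat"; }
interface Herbivore { kind: "herbivore"; food: "plants"; }
type Ref = Carnivore | Herbivore;

function feed(x: Ref) {
  if (x.kind === "carnivore") {
    // x is narrowed to Carnivore. Anyone else holding this reference sees it
    // as a Ref (or something wider), and a Carnivore is still a valid Ref
    // after the write, so writing at the narrowed type is safe.
    x.food = "meat";
  }
}
```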
It's particularly problematic in the case of functions too:

```ts
// (sketching Meat and Plant as discriminable shapes)
interface Meat { kind: "meat"; }
interface Plant { kind: "plant"; }

type Carnivore = (food: Meat) => void;
type Herbivore = (food: Plant) => void;

function feedMe(eater: Carnivore | Herbivore, food: Meat | Plant) {
  // let's assume we can discriminate between Meat and Plant.
  // how do we know what 'eater' is? there's no way to tell.
  eater(food); // type error
}
```

Which means in most cases, if you have a union of functions with incompatible parameter types, you can't safely call it at all.
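For what it's worth, a self-contained sketch of the only call that can be proven safe in that situation (assuming TypeScript's behavior of intersecting parameter types when calling a union of signatures):

```ts
interface Meat { kind: "meat"; }
interface Plant { kind: "plant"; }

type Carnivore = (food: Meat) => void;
type Herbivore = (food: Plant) => void;

declare const eater: Carnivore | Herbivore;
declare const either: Meat | Plant;
declare const both: Meat & Plant; // uninhabitable here: 'kind' cannot be both

// eater(either); // error: the argument may be wrong for whichever member 'eater' is
eater(both);      // ok: an argument acceptable to every member of the union
```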
Anyway, if it wasn't clear: 👍 to making this change.
Having done some experiments, I'm more and more convinced this is a good change. The #30769 behavior breaks this perfectly reasonable code:

```ts
declare function frob<T>(value: T): T;

interface Thing
{
    pig: string;
    cow: number;
    ape: boolean;
}

const t: Thing = { pig: "it's fat", cow: 1208, ape: true };
const ks = Object.keys(t) as (keyof Thing)[];

for (const k of ks) {
    t[k] = frob(t[k]); // what do you mean, "error"?!
}
```

The compiler knows exactly what `frob` returns here - the very type it was given - and yet the assignment is rejected. As discussed above, it's not possible to prove this is sound in general without either:

- narrowing things down to a specific key (or a more specific object type), or
- proving that the key used for the read and the key used for the write are one and the same.
Generics rule out the former path almost entirely: we have no way to narrow a type parameter (cf. #31672), so the key never gets any more specific than its constraint. So what the compiler now basically does is, since it can't prove this is safe, take the lazy way out with an existential quantification: there exists some subset of `keyof Thing` that `k` actually ranges over, but we don't know which, so the only write it will accept is one that is valid for every key it could possibly be. But I digress. Given the inherent design limitations, I don't think it makes much sense in general to be pessimistic about an indexed access being assigned right back to itself.
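A sketch of that pessimistic reading in code (my own illustration, assuming the post-#30769 write behavior):

```ts
interface Thing { pig: string; cow: number; ape: boolean; }
declare const t: Thing;
declare const k: keyof Thing;

declare const something: string | number | boolean;  // what reading t[k] gives us
declare const everything: string & number & boolean; // never, in this case

// t[k] = something;  // not provably safe: 'something' may not match the actual key
t[k] = everything;    // the only write safe for every possible k is the intersection
```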
Fixes #31833
I think the predicate I've added is probably too permissive, but it works as a short-term fix if that is what is needed. I would be curious to explore trying to apply the write simplification during expression checking rather than during type relating.