IR Refactor #199
Conversation
```rust
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[allow(dead_code, missing_docs, clippy::large_enum_variant)] // Some variants might not be used with different flags
pub enum Operation {
    Assign(Variable),
```
Maybe `Identity` is better here? `Assign` without an output is a bit strange.
Normally it would be called copy, but that was already used. Maybe we should just rename `Copy` to `CopyMemory` (which is in line with the SPIR-V name and `memcpy`) and call this one `Copy`, because that's also how it's still used: to make a copy of a mutable variable.
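Something along these lines (just a sketch of the proposed naming; the payload shapes are placeholders, not the actual cubecl IR):

```rust
/// Stub standing in for the IR's variable type, for illustration only.
pub struct Variable;

/// Hypothetical shape after the proposed rename; not the actual cubecl IR.
pub enum Operation {
    /// Register-level copy of a mutable variable (what this PR calls `Assign`).
    Copy(Variable),
    /// Bulk memory copy, in line with the SPIR-V `OpCopyMemory` / `memcpy`
    /// naming (the variant currently named `Copy`).
    CopyMemory(Variable),
    // ...other variants elided
}
```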
Refactors the IR to make it more ergonomic by separating the output of an operation from the operation itself, and by separating the type of any given variable from its kind/identifier.

- Moves builtin variables into a separate enum so they no longer fill the `VariableKind` enum with a bunch of semantically identical duplicates. This is more in line with other variable kinds that use IDs to identify the specific variable instance.
- Moves atomic operations to their own enum.
- Moves `Select` from `Branch` to `Operation`.
- Adds an explicit `Cast` operator to avoid confusing implicit casting on `Assign`.
- Makes `Assign` a first-class `Operation`: now that `Cast` is explicit, `Assign` is essentially a register copy and thus has different semantics compared to other operations (for example, it could be safely inlined and removed without affecting execution).

There's a lot of churn, but the result is cleaner overall and actually lowers LOC by almost 400. It also simplifies any future optimization passes that may be added to `cubecl-opt`: things that only care about the output can now access it directly, and vice versa for the operation and inputs. A rough sketch of the new layout is shown below.
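The concrete shapes in this sketch are simplified placeholders: `Instruction`, `Elem`, and the specific `VariableKind`, `Builtin`, and `AtomicOp` variants are illustrative assumptions rather than the literal cubecl definitions.

```rust
/// Element type of a variable (illustrative stand-in).
#[derive(Debug, Clone, PartialEq)]
pub enum Elem {
    Float,
    Int,
    Bool,
}

/// The type of a variable, stored separately from its kind/identifier.
#[derive(Debug, Clone, PartialEq)]
pub struct Variable {
    pub kind: VariableKind,
    pub elem: Elem,
}

/// Identifies *which* variable this is; builtins live in their own enum
/// instead of duplicating semantically identical variants here.
#[derive(Debug, Clone, PartialEq)]
pub enum VariableKind {
    Local { id: u32 },
    GlobalInput { id: u32 },
    Builtin(Builtin),
}

/// Builtin variables, grouped in a dedicated enum (names are placeholders).
#[derive(Debug, Clone, PartialEq)]
pub enum Builtin {
    UnitPos,
    CubePos,
    AbsolutePos,
}

/// The output is split from the operation itself, so a pass that only cares
/// about outputs never has to match on every `Operation` variant.
#[derive(Debug, Clone, PartialEq)]
pub struct Instruction {
    pub out: Option<Variable>,
    pub operation: Operation,
}

#[derive(Debug, Clone, PartialEq)]
pub enum Operation {
    /// Plain register copy into the instruction's output; no implicit casting.
    Assign(Variable),
    /// Explicit conversion from the input's type to the output's type.
    Cast(Variable),
    /// Moved here from `Branch`.
    Select { cond: Variable, then: Variable, or_else: Variable },
    /// Atomic operations grouped in their own enum.
    Atomic(AtomicOp),
}

/// Atomic operations, separated out of the main `Operation` enum
/// (variants are placeholders).
#[derive(Debug, Clone, PartialEq)]
pub enum AtomicOp {
    Load(Variable),
    Add { lhs: Variable, rhs: Variable },
}
```

The point of keeping `out` on `Instruction` rather than inside each `Operation` variant is that a pass which only rewrites or inspects outputs never needs to match on every variant, while code that lowers the operation can read the inputs without touching the output.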
Testing

All tests pass in both `burn` and `cubecl`. I've already prepared a `burn` version where all legacy kernels are migrated to the refactored variables.