Improve TTNN Dialect to better model TTNN lib #624
Conversation
~~TODO: Add new ops to runtime code before merging.~~ Edit: Done.
Nice changes, a few cosmetic comments.
```tablegen
}];
}

def TTNN_CoreRangeArrayAttr : TypedArrayAttrBase<TTNN_CoreRangeAttr, "">;
```
Does this map to `ttnn::CoreRangeSet`? We might need a proper def for this because they are non-trivial to construct and depend on `TensorMemoryLayout`. Happy to tackle this in a follow-on change though.
I'm actually not sure! All the `CoreRange*` attrs here show up as additions because I separated Types and Attrs; these were previously in the `TTNNOpsTypes.td` file.
I talked to the TTNN folks to better understand how they put tensors onto the device. The prevailing sentiment was that we should copy the behaviour of their `from_torch` function, found here. There are some edge cases to account for, but for now I've simplified it to 3 calls:

- `ttl.tensor.Tensor(tensor, dtype)` - this call is used to change the tensor type
- `ttnn.to_layout(tensor, layout, device=device)` - this call controls the tensor layout: row major vs tiled
- `ttnn.to_device(tensor, device, memory_config=memory_config)` - this call controls how a given tensor is distributed across the memory units of the chip: L1 vs DRAM, sharded vs interleaved, shard spec, etc.

TTNN Dialect:
- `ToLayout` op
- `ToDevice` op
- `MemoryConfigAttr`

TTIR to TTNN:
- `TT_Layout` properties in `ttnn::ToLayout` op conversion

TTNN to EmitC:

Other:
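The three-call split above can be sketched as a toy model. This is not the real TTNN Python API (the actual `ttl.tensor.Tensor`, `ttnn.to_layout`, and `ttnn.to_device` calls run against device hardware); the dataclass and helper names below are illustrative stand-ins that only show which tensor property each of the three calls is responsible for.

```python
from dataclasses import dataclass, replace
from typing import Optional

# Toy model of the three-call sequence described above. Each helper
# changes exactly one property, mirroring the division of responsibility
# between the three TTNN calls.

@dataclass(frozen=True)
class Tensor:
    dtype: str = "float32"                   # set via ttl.tensor.Tensor(tensor, dtype)
    layout: str = "row_major"                # set via ttnn.to_layout: row major vs tiled
    memory_config: Optional[str] = None      # set via ttnn.to_device: L1/DRAM, sharded/interleaved

def cast(tensor: Tensor, dtype: str) -> Tensor:
    """Models ttl.tensor.Tensor(tensor, dtype): changes only the tensor type."""
    return replace(tensor, dtype=dtype)

def to_layout(tensor: Tensor, layout: str) -> Tensor:
    """Models ttnn.to_layout: changes only the layout (row major vs tiled)."""
    return replace(tensor, layout=layout)

def to_device(tensor: Tensor, memory_config: str) -> Tensor:
    """Models ttnn.to_device: decides how the tensor is placed in chip memory."""
    return replace(tensor, memory_config=memory_config)

# Mirroring from_torch behaviour: cast, then lay out, then move to device.
t = to_device(to_layout(cast(Tensor(), "bfloat16"), "tile"), "dram_interleaved")
print(t)
```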