[Backend] Remove inner tensor type inside tensordesc #9851
Conversation
The shared memory layout encoding attribute was previously embedded inside tensordesc's blockType as a RankedTensorType encoding. This was confusing since the layout describes shared memory, not the tensor. Add an explicit optional $sharedLayout attribute to TensorDescType and TensorDescIm2ColType, and a getSharedLayout() method on TensorDescInterface. Syntax change: `!tt.tensordesc<tensor<128x64xf16, #enc>>` becomes `!tt.tensordesc<tensor<128x64xf16>, #enc>`.
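As a sketch, the change described here moves the encoding out of the tensor type and onto the descriptor itself. The `#shared` definition below is illustrative, not taken from the PR:

```mlir
// Illustrative shared-memory layout; the attribute parameters are assumptions.
#shared = #ttg.swizzled_shared<{vec = 1, perPhase = 1, maxPhase = 1, order = [1, 0]}>

// Before: layout embedded in the tensor type's encoding slot.
//   !tt.tensordesc<tensor<128x64xf16, #shared>>

// After: layout carried as the descriptor's own $sharedLayout parameter.
//   !tt.tensordesc<tensor<128x64xf16>, #shared>
```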
Force-pushed a7c57b5 to ed050a7
Can you review this one @peterbell10?
peterbell10
left a comment
If we want to go in this direction, I think the correct format should be:
!tt.tensordesc<1x2x128xf32, #shared1>
i.e. no RankedTensorType at all, in the same way that memdesc works.
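For comparison, `ttg.memdesc` already spells shape, element type, and layout directly with no tensor wrapper; a sketch of the parallel (the `#shared1` encoding and memory-space attribute shown are illustrative):

```mlir
// Existing memdesc form: shape x element type, layout, memory space.
!ttg.memdesc<1x2x128xf32, #shared1, #ttg.shared_memory>

// Proposed tensordesc form mirrors that structure:
!tt.tensordesc<1x2x128xf32, #shared1>
```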
Yup, makes sense. I modified it to drop the intermediate tensor type wrapper entirely. Please take another look @peterbell10.
peterbell10
left a comment
Mostly LGTM, just a few questions.
```diff
-    auto blockSize =
-        ttng::getTMABlockShape(blockType, /*packedSize=*/false, tmaMode);
+    auto shapePerCTA = ttg::getShapePerCTA(encoding, descTy.getShape());
+    auto blockSize = ttng::getTMABlockShape(encoding, shapePerCTA,
+                                            /*packedSize=*/false, tmaMode);
```
I see, the non-RankedTensor overload takes shapePerCTA. Can't you just keep using descTy.getBlockType() since you added it as an interface method?
The interface getBlockType() now just returns a RankedTensorType without an encoding attached (given we want to avoid associating the shared layout with tensors). But the getTMABlockShape variant that takes a RankedTensorType still queries the layout from the tensor, so it won't work. Actually, I think we should just remove that variant, given it now has no users (done w/ 90946c5).
Okay, then give getTMABlockShape an overload taking TensorDescInterface?
Force-pushed a7f5f75 to e66e7d9
This commit changes the `tt.tensordesc` type to be directly composed of shape, element type, and shared layout, dropping the previous tensor type wrapper around them. The main reason is that attaching the shared memory layout encoding inside the tensor type was confusing, since the layout describes shared memory, not the tensor. `!tt.tensordesc<tensor<128x64xf16, #shared>>` now becomes `!tt.tensordesc<128x64xf16, #shared>`. This is also closer to `ttg.memdesc`.