
Improve TTNN Dialect to better model TTNN lib #624

Merged · 9 commits · Sep 13, 2024

Conversation

@svuckovicTT (Contributor) commented Sep 6, 2024

I talked to the TTNN folks to better understand how they put tensors onto the device. The prevailing sentiment was that we should copy the behaviour of their from_torch function, found here. There are some edge cases to account for, but for now I've simplified it to three calls:

  • `ttl.tensor.Tensor(tensor, dtype)` - this call is used to change the tensor dtype
  • `ttnn.to_layout(tensor, layout, device=device)` - this call controls the tensor layout: row-major vs tiled
  • `ttnn.to_device(tensor, device, memory_config=memory_config)` - this call controls how a given tensor is distributed across the memory units of the chip: L1 vs DRAM, sharded vs interleaved, shard spec, etc.
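
Taken together, the three calls above amount to the following pseudocode sketch of the device-push sequence (variable names are illustrative, not taken from the PR):

```
# Pseudocode sketch mirroring ttnn.from_torch behaviour (not runnable standalone)
tensor = ttl.tensor.Tensor(torch_tensor, dtype)                       # change tensor dtype
tensor = ttnn.to_layout(tensor, ttnn.TILE_LAYOUT, device=device)      # row-major vs tiled
tensor = ttnn.to_device(tensor, device, memory_config=memory_config)  # L1/DRAM, sharded/interleaved
```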

TTNN Dialect:

  • Add ToLayout op
  • Add ToDevice op
  • Add MemoryConfigAttr
  • Add a couple of enums and attributes to support the above ops
  • Separate Attrs and Types into their own files

TTIR to TTNN:

  • Pipe all TT_Layout properties in ttnn::ToLayout op conversion

TTNN to EmitC:

Other:

  • Update ttnn-standalone to match the tensor device-push behaviour

@svuckovicTT (Contributor, Author) commented Sep 6, 2024

~~TODO: Add new ops to runtime code before merging.~~

Edit: Done.

@rpavlovicTT (Contributor) left a review comment:

Nice changes, a few cosmetic comments.

@svuckovicTT svuckovicTT linked an issue Sep 12, 2024 that may be closed by this pull request
def TTNN_CoreRangeArrayAttr : TypedArrayAttrBase<TTNN_CoreRangeAttr, "">;
A contributor commented:
Does this map to ttnn::CoreRangeSet? We might need a proper def for this because they are kind of non-trivial to construct and depend on TensorMemoryLayout. Happy to tackle this in a follow-on change though.
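
For context on why the reviewer calls this non-trivial, constructing a CoreRangeSet through the ttnn bindings typically looks like the following pseudocode sketch (binding names assumed, not verified against this repo):

```
# Pseudocode sketch: a CoreRangeSet covering an 8x8 core grid, as used
# when building a sharded memory config (ttnn binding names assumed)
core_range = ttnn.CoreRange(ttnn.CoreCoord(0, 0), ttnn.CoreCoord(7, 7))
core_range_set = ttnn.CoreRangeSet({core_range})
```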

@svuckovicTT (Contributor, Author) replied:

I'm actually not sure! All the CoreRange* attrs here show as additions because I separated Types and Attrs, and these were previously in TTNNOpsTypes.td file.

@svuckovicTT force-pushed the svuckovic/ttir-ttnn-layout-attrs-3 branch from 2d8a622 to 183e56c on September 13, 2024 at 10:03
@svuckovicTT merged commit 930c18a into main on Sep 13, 2024
11 checks passed
6 participants