From 7a49faf6227818bef41daee6314c7c11c35cf87f Mon Sep 17 00:00:00 2001 From: svuckovic Date: Wed, 15 Jan 2025 12:31:17 +0000 Subject: [PATCH 1/3] Add TTNN Dialect guidelines --- docs/src/SUMMARY.md | 5 +- docs/src/adding-an-op.md | 2 + docs/src/guidelines.md | 6 + docs/src/specs/tensor-layout-interactive.md | 1 + docs/src/ttnn-dialect-guidelines.md | 148 ++++++++++++++++++++ 5 files changed, 160 insertions(+), 2 deletions(-) create mode 100644 docs/src/guidelines.md create mode 100644 docs/src/specs/tensor-layout-interactive.md create mode 100644 docs/src/ttnn-dialect-guidelines.md diff --git a/docs/src/SUMMARY.md b/docs/src/SUMMARY.md index fcc96410d6..5eae772051 100644 --- a/docs/src/SUMMARY.md +++ b/docs/src/SUMMARY.md @@ -22,12 +22,13 @@ - [Project Structure](./project-structure.md) - [Dialects Overview](./dialects-overview.md) -- [Coding Guidelines](./coding-guidelines.md) +- [Guidelines](./guidelines.md) + - [Coding Guidelines](./coding-guidelines.md) + - [TTNN Dialect Guidelines](./ttnn-dialect-guidelines.md) - [Adding an op](./adding-an-op.md) - [Decomposing an op in TTIR](./decomposing-an-op-in-ttir.md) - [Doxygen](./doxygen.md) - [Specifications](./specs/specs.md) - [Runtime Stitching](./specs/runtime-stitching.md) - [Tensor Layout](./specs/tensor-layout.md) - - [TTNN Interactive Visualizer](./specs/tensor-layout-interactive.md) - [Device](./specs/device.md) diff --git a/docs/src/adding-an-op.md b/docs/src/adding-an-op.md index cbe4a44c10..2a41397d56 100644 --- a/docs/src/adding-an-op.md +++ b/docs/src/adding-an-op.md @@ -99,6 +99,8 @@ section for details, the process is the same. {{#include ../../../lib/Dialect/TTNN/IR/TTNNOps.cpp:adding_an_op_matmul_ttnn_verify}} ``` +For more details on adding ops to the TTNN dialect, refer to [TTNN Dialect Contribution Guidelines](./ttnn-dialect-guidelines.md). + ## 3. Convert / Implement the Op in the TTNN passes Next we will implement the conversion from the TTIR `matmul` Op to the TTNN `matmul` Op. 
diff --git a/docs/src/guidelines.md b/docs/src/guidelines.md
new file mode 100644
index 0000000000..63eb46d970
--- /dev/null
+++ b/docs/src/guidelines.md
@@ -0,0 +1,6 @@
+# Guidelines
+
+This page collects the guidelines that help maintain consistency and quality across the project. Please refer to the following documents for detailed instructions on coding practices, as well as dialect-specific guidelines.
+
+- [Coding guidelines](./coding-guidelines.md)
+- [TTNN Dialect guidelines](./ttnn-dialect-guidelines.md)
diff --git a/docs/src/specs/tensor-layout-interactive.md b/docs/src/specs/tensor-layout-interactive.md
new file mode 100644
index 0000000000..3989296783
--- /dev/null
+++ b/docs/src/specs/tensor-layout-interactive.md
@@ -0,0 +1 @@
+# TTNN Interactive Visualizer
diff --git a/docs/src/ttnn-dialect-guidelines.md b/docs/src/ttnn-dialect-guidelines.md
new file mode 100644
index 0000000000..10177a6cd1
--- /dev/null
+++ b/docs/src/ttnn-dialect-guidelines.md
@@ -0,0 +1,148 @@
+# TTNN Dialect Contribution Guidelines
+
+This document provides clear and consistent guidelines for contributing to the TTNN dialect, including operations, attributes, types, and other components. Following them ensures a streamlined development process, faster code reviews, and higher-quality code with fewer bugs.
+
+## General Principle: Model TTNN Library Closely
+
+The TTNN dialect should closely reflect the TTNN library wherever practical; this is the **core guiding principle** when contributing to the dialect. Whenever there's a need to deviate from this principle, it should be discussed with stakeholders.
+
+## Ops and Operands
+
+### Signature Selection
+
+Ops in TTNN may have multiple signatures available - it's important to choose the right one when modelling the op in the TTNN dialect.
Going through an example, these are the available signatures for the `ttnn::transpose` op:

```C++
struct ExecuteTranspose {
  static ttnn::Tensor invoke(
      uint8_t queue_id,
      const ttnn::Tensor& input_tensor,
      const int64_t& dim1,
      const int64_t& dim2,
      const std::optional<MemoryConfig>& memory_config_arg,
      const std::optional<float>& pad_value = 0.0f);

  static ttnn::Tensor invoke(
      const ttnn::Tensor& input_tensor,
      const int64_t& dim1,
      const int64_t& dim2,
      const std::optional<MemoryConfig>& memory_config,
      const std::optional<float>& pad_value = 0.0f);

  static ttnn::Tensor invoke(
      const ttnn::Tensor& input_tensor,
      const int64_t& dim1,
      const int64_t& dim2,
      const std::optional<float>& pad_value = 0.0f);
};
```

The first and second signatures differ only in the `queue_id` parameter - we don't model queues today, so the second signature takes priority here. The second and third signatures differ in the `memory_config` parameter - the second is preferred as it is more robust: the parameter is optional, so it can remain unused if it isn't needed.

Only one signature should be chosen. Should the need arise to model more than one signature, it would set a precedent and should first be discussed with stakeholders.

### Operand ordering

Operands in TTNN dialect ops should match the ordering of the signature of the op being modelled. For the chosen signature of the `ttnn::transpose` op, the operands should look like this:

```mlir
let arguments = (ins AnyRankedTensor:$input,
                     SI64Attr:$dim0,
                     SI64Attr:$dim1,
                     OptionalAttr<TTNN_MemoryConfigAttr>:$memory_config,
                     OptionalAttr<F32Attr>:$pad_value);
```

Mixing types and attributes within the ordering is **not** an issue; this is valid:

```mlir
let arguments = (ins TTNN_ShapeAttr:$shape,
                     OptionalAttr<TT_DataTypeAttr>:$dtype,
                     OptionalAttr<TTNN_LayoutAttr>:$layout,
                     Optional<TT_Device>:$device,
                     OptionalAttr<TTNN_MemoryConfigAttr>:$memory_config);
```

Following this guideline provides consistency with the TTNN lib.
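To make the mapping concrete, here is a rough sketch of how an instance of the op modelled above might appear in the IR. This is illustrative only - the assembly format, attribute spellings, and tensor types are assumptions, not actual compiler output:

```mlir
// Hypothetical IR instance of the transpose op (sketch only).
// The optional memory_config and pad_value attributes are simply
// omitted when they are not needed.
%0 = "ttnn.transpose"(%arg0) <{dim0 = 0 : si64, dim1 = 1 : si64}>
       : (tensor<32x64xbf16>) -> tensor<64x32xbf16>
```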
### Optional operands

If an operand is optional in the TTNN lib, it should be modelled as optional in the dialect.

### Default-valued operands

If an operand has a default value in the TTNN lib, it should have a default value in the dialect.

Taking `ttnn::permute` as an example:

```C++
static ttnn::Tensor invoke(
    const ttnn::Tensor& input_tensor,
    tt::stl::Span<const int64_t> dims,
    const std::optional<MemoryConfig>& memory_config,
    const std::optional<float>& pad_value = 0.0f);
```

```mlir
let arguments = (ins AnyRankedTensor:$input,
                     DenseI64ArrayAttr:$permutation,
                     OptionalAttr<TTNN_MemoryConfigAttr>:$memory_config,
                     DefaultValuedOptionalAttr<F32Attr, "0.0f">:$pad_value);
```

### Numerical operands

Numerical operands should match in signedness and bit width. If an operand is a 32-bit signed integer, `SI32Attr` should be used to model it.

### Pointers and references

Pointers and references should be ignored. We do not want to model this level of detail at this point in time.

There were very few issues with these previously, and they were caused by inconsistencies in TTNN lib APIs.

### Attrs vs Types

Some operands, like ints & floats, are naturally represented as Attrs in MLIR, so they should be used. Other operands might be a bit of a gray area, like `ttnn::Shape` - in TTNN lib, these need to be constructed, so it naturally follows that they should have their own SSA value within the IR, implying that they should be implemented as Types. However, there are several downsides to this:
- More IR is produced
- Diminished readability as they're not attached to the object whose shape they're describing
- Not as easy to construct in code
- Runtime would need to keep track of all the Shape objects (it currently maps all SSAs, which are just tensors and devices)

One upside for implementing `ttnn::Shape` as a type is that it would enable optimizing out multiple constructor calls for the same Shape.

It is agreed that we should prefer using Attrs in these scenarios.
However, this guideline is not set in stone - stakeholders should be notified if anyone believes there's a need to implement an object as a Type.

### Destination-passing style (DPS)

If the op in the TTNN lib has a destination tensor, it should be modelled as a DPS op.

An example signature, where the last operand is a destination tensor:

```C++
static Tensor invoke(
    const Tensor& input_tensor,
    float exponent,
    const std::optional<MemoryConfig>& memory_config = std::nullopt,
    const std::optional<Tensor>& optional_output_tensor = std::nullopt);
```

### Variadic operands

The `Variadic<>` type constraint should only be used for operands that are variadic in nature, e.g. a vector of tensors, like in `ttnn::concat`:

```C++
static ttnn::Tensor invoke(
    const std::vector<ttnn::Tensor>& input_tensors,
    int dim,
    const std::optional<MemoryConfig>& memory_config = std::nullopt,
    const std::optional<ttnn::Tensor>& optional_output_tensor = std::nullopt,
    unsigned int groups = 1);
```

### Operand naming

Operands should be named as they are in the TTNN lib. However, this guideline is not strict, and some reasonable deviations are acceptable.

### Operand namespaces

Some operands are defined in a namespace nested within the TTNN namespace, e.g. `ttnn::ccl::Topology`, and some live in other, related namespaces, e.g. `tt::tt_metal::MemoryConfig`. While it would be ideal to model these completely accurately, doing so doesn't provide much value, so for the sake of simplicity we should pretend they're all in the `ttnn::` namespace.
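Pulling several of the guidelines above together (operand ordering, optional operands, variadic operands, and DPS), a TableGen definition for `ttnn::concat` might be sketched as follows. This is a hypothetical sketch, not the actual tt-mlir definition - the op class name, base class, and attribute types are assumptions:

```mlir
// Hypothetical sketch of a TTNN concat op definition.
def TTNN_ConcatOp : TTNN_Op<"concat"> {
    let arguments = (ins Variadic<AnyRankedTensor>:$inputs,
                         AnyRankedTensor:$output,   // DPS destination tensor
                         SI32Attr:$dim,
                         OptionalAttr<TTNN_MemoryConfigAttr>:$memory_config);

    let results = (outs AnyRankedTensor:$result);
}
```

The variadic tensor list comes first, matching the TTNN signature order; `dim` matches the `int` operand in signedness and bit width; and the optional `memory_config` is modelled with `OptionalAttr`.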
From cf543fcc1d5b7ee1a1e4b9b0f235600bc4d0268d Mon Sep 17 00:00:00 2001
From: svuckovic
Date: Thu, 16 Jan 2025 15:06:44 +0000
Subject: [PATCH 2/3] remove added tensor-layout-interactive.md

---
 docs/src/specs/tensor-layout-interactive.md | 1 -
 1 file changed, 1 deletion(-)
 delete mode 100644 docs/src/specs/tensor-layout-interactive.md

diff --git a/docs/src/specs/tensor-layout-interactive.md b/docs/src/specs/tensor-layout-interactive.md
deleted file mode 100644
index 3989296783..0000000000
--- a/docs/src/specs/tensor-layout-interactive.md
+++ /dev/null
@@ -1 +0,0 @@
-# TTNN Interactive Visualizer

From ba5f7feb4210ecfe12075e6f21ad7d52df4812ed Mon Sep 17 00:00:00 2001
From: svuckovic
Date: Fri, 17 Jan 2025 13:42:35 +0000
Subject: [PATCH 3/3] address comments on attrs vs types

---
 docs/src/ttnn-dialect-guidelines.md | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/docs/src/ttnn-dialect-guidelines.md b/docs/src/ttnn-dialect-guidelines.md
index 10177a6cd1..700e11dfaa 100644
--- a/docs/src/ttnn-dialect-guidelines.md
+++ b/docs/src/ttnn-dialect-guidelines.md
@@ -102,15 +102,17 @@ There were very few issues with these previously, and they were caused by incons

### Attrs vs Types

-Some operands, like ints & floats, are naturally represented as Attrs in MLIR, so they should be used. Other operands might be a bit of a gray area, like `ttnn::Shape` - in TTNN lib, these need to be constructed, so it naturally follows that they should have their own SSA value within the IR, implying that they should be implemented as Types. However, there are several downsides to this:
+The general guideline is that if a value is known at compile time, it should probably be an `Attr` - for example, the dims in a transpose op, or the pooling windows in a conv. If the value is unknown at compile time (e.g. a tensor), it should be a `Type`.
+
+There's another consideration to account for: does the value need its own SSA?
Remember, `Attr`s need something to latch onto, like an op or a `Type`, but `Type`s need to be constructed, i.e. have their own SSA, in order to exist. Let's look at `ttnn::Shape` for example - in TTNN lib, these need to be constructed, so it naturally follows that they should have their own SSA value within the IR, implying that they should be implemented as `Type`s. However, there are several downsides to this:
- More IR is produced
- Diminished readability as they're not attached to the object whose shape they're describing
- Not as easy to construct in code
-- Runtime would need to keep track of all the Shape objects (it currently maps all SSAs, which are just tensors and devices)
+- Runtime would need to keep track of all the Shape objects (it currently maps all SSAs, which are only tensors and devices)

-One upside for implementing `ttnn::Shape` as a type is that it would enable optimizing out multiple constructor calls for the same Shape.
+One upside of implementing `ttnn::Shape` as a `Type` is that it would enable optimizing out multiple constructor calls for the same Shape.

-It is agreed that we should prefer using Attrs in these scenarios. However, this guideline is not set in stone - stakeholders should be notified if anyone believes there's a need to implement an object as a Type.
+It is agreed that we should prefer using `Attr`s in these scenarios. However, this guideline is not set in stone - stakeholders should be notified if anyone believes there's a need to implement an object as a `Type`.

### Destination-passing style (DPS)