Moe doc fixes
Signed-off-by: Alexandros Koumparoulis <[email protected]>
akoumpa committed Aug 7, 2024
1 parent e879330 commit d12d679
Showing 1 changed file with 8 additions and 6 deletions.
docs/source/features/moe.rst (14 changes: 8 additions & 6 deletions)
@@ -4,7 +4,7 @@ Mixture of Experts
Overview
--------

-NeMo supports Mixture of Experts (MoE) in the transformer layer for NLP models.
+NeMo supports Mixture of Experts (MoE) in the feedforward block of the transformer layer.

MoE is a machine learning technique where multiple specialized models (experts,
usually multi-layer perceptrons) are combined to solve a complex task. Each expert
@@ -35,7 +35,7 @@ To balance token distribution across experts:
moe_router_load_balancing_type: aux_loss # to use the auxiliary loss; other options include "sinkhorn".
-2. Set ``moe_aux_loss_coeff`` to specify the weight of the auxiliary loss. Values in the 1e-2 range are a good start, as follows:
+2. Set ``moe_aux_loss_coeff`` to specify the weight of the auxiliary loss. The auxiliary loss is added to encourage distributing tokens equally among all experts. Values in the 1e-2 range are a good start, as follows:

.. code-block:: yaml
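   # Hedged sketch of the settings described above; the value is an illustrative
   # assumption only (the text suggests starting in the 1e-2 range).
   moe_router_load_balancing_type: aux_loss
   moe_aux_loss_coeff: 0.01  # i.e. in the 1e-2 range suggested above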
Expand All @@ -52,16 +52,18 @@ Other options include:
1. ``moe_input_jitter_eps`` adds noise to the input tensor by applying jitter with a specified epsilon value.

2. ``moe_token_dropping`` enables selectively dropping and padding tokens for each expert to achieve
-a specified capacity.
+a specified capacity, similar to GShard, Switch-Transformer, and DeepSpeed-MoE. Briefly, if the number
+of tokens routed to an expert exceeds its capacity, then the excess tokens are dropped. Note that this is
+currently unsupported, so it should remain False.

-3. ``moe_token_dropping`` specifies the token dispatcher type, options include 'allgather' and 'alltoall'.
+3. ``moe_token_dispatcher_type`` specifies the token dispatcher type; options include 'allgather' and 'alltoall'.

4. ``moe_per_layer_logging`` enables per-layer logging for MoE; currently supports aux-loss and z-loss.

-5. ``moe_expert_capacity_factor`` the capacity factor for each expert, None means no token will be dropped. The default is None.
+5. ``moe_expert_capacity_factor`` the capacity factor determines the maximum number of tokens that can be routed to each expert in any MoE layer; None means no tokens will be dropped. The default is None.

6. ``moe_pad_expert_input_to_capacity`` if True, pads the input for each expert to match the expert capacity length; effective only when moe_expert_capacity_factor is set. The default is False.

7. ``moe_token_drop_policy`` the policy for dropping tokens. Can be either "probs" or "position". If "probs", the tokens with the lowest probabilities will be dropped. If "position", tokens at the end of each batch will be dropped. The default value is "probs".

-8. ``moe_layer_recompute`` if True, checkpoints the MoE layer to save activation memory. The default is False.
+8. ``moe_layer_recompute`` if True, checkpoints the MoE layer to save activation memory. The default is False.
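
Taken together, here is a hedged sketch of how these options might look in a NeMo YAML config; the nesting under ``model`` and all values shown are illustrative assumptions, not taken from this commit:

.. code-block:: yaml

   model:
     moe_input_jitter_eps: 0.01               # illustrative epsilon for input jitter
     moe_token_dropping: false                # currently unsupported; keep False
     moe_token_dispatcher_type: alltoall      # or 'allgather'
     moe_per_layer_logging: true              # per-layer logging of aux-loss and z-loss
     moe_expert_capacity_factor: null         # the default; no tokens are dropped
     moe_pad_expert_input_to_capacity: false  # only effective when a capacity factor is set
     moe_token_drop_policy: probs             # or 'position'
     moe_layer_recompute: false               # True checkpoints the MoE layer to save memory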
