
Commit: fix llama2 70b lora tuning bug (#7622)
* fix llama2 70b lora tuning bug

Signed-off-by: Chen Cui <[email protected]>

* Update peft_config.py

brackets

Signed-off-by: Adi Renduchintala <[email protected]>

---------

Signed-off-by: Chen Cui <[email protected]>
Signed-off-by: Adi Renduchintala <[email protected]>
Co-authored-by: Adi Renduchintala <[email protected]>
cuichenx and arendu authored Oct 4, 2023
1 parent af72216 commit 57f56ed
Showing 1 changed file with 5 additions and 1 deletion.
nemo/collections/nlp/parts/peft_config.py:
@@ -60,10 +60,14 @@ def __init__(self, cfg):
         else:
             kv_channels = cfg.kv_channels
         projection_size = kv_channels * cfg.num_attention_heads
+        num_query_groups = cfg.get("num_query_groups", None)
+        if num_query_groups is None:
+            num_query_groups = cfg.num_attention_heads
+        qkv_projection_size = projection_size + (2 * kv_channels * num_query_groups)
 
         config_args = {
             "in_features": cfg.hidden_size,
-            "out_features": 3 * projection_size,
+            "out_features": qkv_projection_size,
             "dim": lora_cfg.adapter_dim,
             "norm_position": None,
             "norm_type": None,
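For context on the change: with grouped-query attention (as in Llama 2 70B), the fused QKV weight is narrower than 3 * projection_size, because the key and value projections only use num_query_groups heads. A minimal sketch of the arithmetic, using illustrative Llama 2 70B-style values (assumed for the example, not read from an actual NeMo config):

# Illustrative check of the new out_features computation; the values below are
# assumptions for Llama 2 70B, not taken from a real config file.
hidden_size = 8192
num_attention_heads = 64
num_query_groups = 8                                # grouped-query attention
kv_channels = hidden_size // num_attention_heads    # 128 per-head dim

projection_size = kv_channels * num_attention_heads                           # 8192 (query width)
qkv_projection_size = projection_size + (2 * kv_channels * num_query_groups)  # 10240

print(3 * projection_size)       # 24576 -- old out_features, too large under GQA
print(qkv_projection_size)       # 10240 -- width of the fused QKV layer under GQA

The old expression is correct only when num_query_groups equals num_attention_heads (plain multi-head attention); Llama 2 70B uses grouped-query attention, so its fused QKV projection, and therefore the LoRA adapter's out_features, is smaller.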
