
Random Core issues when training the model, when using AugmentedMemoizationPolicy #8623

Closed
nico-sergeyssels-kbc opened this issue May 6, 2021 · 5 comments · Fixed by #8646
Labels:
- area:rasa-oss 🎡 Anything related to the open source Rasa framework
- area:rasa-oss/ml/policies Issues focused around rasa's dialogue management policies
- area:rasa-oss/ml 👁 All issues related to machine learning
- type:bug 🐛 Inconsistencies or issues which will cause an issue or problem for users or implementors.

Comments


nico-sergeyssels-kbc commented May 6, 2021

Rasa version:
2.4.5

Python version:
3.7.9
Operating system (windows, osx, ...):
mac 10.15.7
Issue:

When training with this story, the AugmentedMemoizationPolicy should always kick in. However, we have noticed that after retraining, the policy occasionally fails to predict the third step of this story. We checked the hashes and noticed that they change every few builds of the model for some reason.

Command used for training:

```
rasa train --augmentation 0
```

**Content of configuration file (config.yml)** (if relevant):

```yml
# Configuration for Rasa Core.
policies:
  - name: AugmentedMemoizationPolicy
    max_history: 6
  - name: TEDPolicy
    max_history: 6
    epochs: 30
  - name: FormPolicy
  - name: RulePolicy
    core_fallback_threshold: 0.3
    core_fallback_action_name: "action_fallback_core"
    enable_fallback_prediction: True
    check_for_contradictions: True
```

**Content of domain file (domain.yml)** (if relevant):

`slot_one` is a categorical slot.

**Training story:**

```yml
version: "2.0"
stories:
- story: example story
  steps:
  - intent: intent_one
  - action: action_one
  - slot_was_set:
    - slot_one: 1+
  - action: utter_one
  - intent: intent_two
    entities:
    - entity_one: entity_one_value
    - entity_two: entity_two_value
  - action: action_three
```

@nico-sergeyssels-kbc nico-sergeyssels-kbc added area:rasa-oss 🎡 Anything related to the open source Rasa framework type:bug 🐛 Inconsistencies or issues which will cause an issue or problem for users or implementors. labels May 6, 2021
sara-tagger (Collaborator) commented:

Thanks for the issue, @degiz will get back to you about it soon!

You may find help in the docs and the forum, too 🤗


akelad commented May 7, 2021

I've done some initial investigation of this and have narrowed it down to an issue with the way entities are featurized for dialogue prediction. The short of it is that the order in which the entities are present seems to matter to the MemoizationPolicy (and AugmentedMemoizationPolicy), which should not be the case. So if you have two or more entities, your model is likely to fail when the entities appear in a different order than they are written in your stories.
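To illustrate the mechanism akelad describes, here is a minimal sketch (not Rasa's actual implementation; the function names are hypothetical) of why an order-sensitive state key breaks memoization while a sorted key does not:

```python
# Hypothetical illustration of the bug: if the memoization key is built by
# hashing entities in extraction order, the same entity set can produce
# different keys across runs, so the memoized prediction is never found.

def order_sensitive_key(entities):
    # Buggy variant: the tuple hash depends on element order, so
    # ["entity_one", "entity_two"] and ["entity_two", "entity_one"]
    # yield different keys for the same set of entities.
    return hash(tuple(entities))

def order_insensitive_key(entities):
    # Fixed variant: sorting before hashing makes the key deterministic
    # regardless of the order in which entities were extracted.
    return hash(tuple(sorted(entities)))

a = ["entity_one", "entity_two"]
b = ["entity_two", "entity_one"]

print(order_sensitive_key(a) == order_sensitive_key(b))      # False
print(order_insensitive_key(a) == order_insensitive_key(b))  # True
```

The sorted key is the usual fix for this class of bug: any canonical ordering of the entity set makes featurization independent of extraction order.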

@TyDunn TyDunn added area:rasa-oss/ml 👁 All issues related to machine learning area:rasa-oss/ml/policies Issues focused around rasa's dialogue management policies labels May 7, 2021

dakshvar22 (Contributor) commented:

@samsucik we can close this issue now, right?

samsucik (Contributor) commented:

Sure! In fact, I don't understand why merging the fixing PR didn't close this issue automatically...

6 participants