Review updated until commit 183b4f9.
hmmm. The number again doesn't match the original benchmark; I need to take another look at that. The added benchmark kernel looks the same. The difference in measured time is coming from:

Since those are coming from the reference implementation, I'm not going to update that.
!test
benchmarks/python/requirements.txt (outdated)
@@ -0,0 +1 @@
+litgpt[all]
Can we make this requirement local?
We have several benchmark files that can run okay without this module. So this can be a hassle when trying to run unrelated benchmarks.
One way could be to use @pytest.mark.skipif to check for the presence of this module in relevant benchmarks.
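The skipif approach could look like the following sketch (the benchmark name here is illustrative, not a real test from the suite): probe for the module with `importlib.util.find_spec` at collection time, so unrelated benchmarks still run when `litgpt` is absent.

```python
# Hypothetical sketch: skip litgpt-dependent benchmarks when the module
# is not installed, so the rest of the benchmark suite still runs.
import importlib.util

import pytest

# find_spec returns None when the package cannot be found.
LITGPT_AVAILABLE = importlib.util.find_spec("litgpt") is not None


@pytest.mark.skipif(not LITGPT_AVAILABLE, reason="requires litgpt")
def test_litgpt_benchmark_smoke():
    import litgpt  # safe: only imported when the module is present
```

The probe avoids importing `litgpt` at collection time, so collection itself never fails with an ImportError.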
Sounds fair. I guess I should have stuck to what the other benchmarks are doing and relied on the module installed in the container. At least we do have litgpt in our CI containers. I'll remove this file.
Can you add a marker? #4290
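For reference, a custom marker is typically registered in `conftest.py`; the marker name `litgpt` below is a hypothetical placeholder, not necessarily what #4290 uses:

```python
# conftest.py -- hypothetical sketch: register a "litgpt" marker so the
# new benchmarks can be filtered with `pytest -m litgpt` or
# `pytest -m "not litgpt"`.
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "litgpt: benchmarks that depend on the litgpt package"
    )
```

Benchmarks would then be tagged with `@pytest.mark.litgpt` to opt into the group.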
I don't have any comments anymore. Thanks @jjsjann123 for adding these benchmarks. I'll let @Priya2698 give a final stamp.
!test --pybench
    "litgpt-gemma-2-9b",
    "litgpt-mistral-7b",
    "litgpt-meta-llama-3-8B",
    "litgpt-phi3.5-mini",
@xwang233 do we need to manually add new entries in dashboard?
It might not show up in the PR benchmark results; perhaps it will work automatically (let's see). It will show up in the nightly benchmark results once merged.
hmmm... seeing an import error. Let me investigate.
errr. That's coming from the cross_entropy benchmark. I'll just patch that. cc'ing @protonu
!build
    super().__init__("hf_mistral_nemo", dtype)

    def model(self):
        # MistralPreTrainedModel is defined in transformers.models.mistral,
        # not transformers.models.phi3 (likely a copy-paste slip).
        from transformers.models.mistral import MistralPreTrainedModel
Fixes #4253