This model card states that Grok-1 has undergone fine-tuning: "The model was then fine-tuned using extensive feedback from both humans and the early Grok-0 models."
However, this blog says that Grok-1 is not fine-tuned: "Base model trained on a large amount of text data, not fine-tuned for any particular task."
This is confusing. Are the performance metrics shown here for the base model or the chat model? https://x.ai/blog/grok