Pinned repositories
- openai/mle-bench: MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering.
- openai/evals: Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.