
Medicine on the Edge: On-Device LLM Benchmark


Overview

This repository contains the codebase for benchmarking on-device Large Language Models (LLMs) for clinical reasoning. The study evaluates the feasibility, accuracy, and performance of mobile LLM inference using the AMEGA medical benchmark dataset.

The deployment of Large Language Models (LLMs) on mobile devices offers significant potential for medical applications, enhancing privacy, security, and cost-efficiency by eliminating reliance on cloud-based services and keeping sensitive health data local. However, the performance and accuracy of on-device LLMs in real-world medical contexts remain underexplored. In this study, we benchmark publicly available on-device LLMs using the AMEGA dataset, evaluating accuracy, computational efficiency, and thermal limitations across various mobile devices. Our results indicate that compact general-purpose models like Phi-3 Mini achieve a strong balance between speed and accuracy, while medically fine-tuned models such as Med42 and Aloe attain the highest accuracy. Notably, deploying LLMs on older devices remains feasible, with memory constraints posing a greater challenge than raw processing power. Our study underscores the potential of on-device LLMs for healthcare while emphasizing the need for more efficient inference and models tailored to real-world clinical reasoning.
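As a minimal sketch of the computational-efficiency side of such an evaluation, the snippet below times token generation for a single prompt and reports throughput in tokens per second. It assumes a generic asynchronous token-streaming interface; `InferenceMetrics`, `measureThroughput`, and the `generate` closure are hypothetical names used for illustration only, not the API of this repository or of any particular inference framework.

```swift
import Foundation

/// Illustrative throughput metrics for one on-device generation run.
/// These names are placeholders, not part of this repository's codebase.
struct InferenceMetrics {
    let tokenCount: Int
    let duration: TimeInterval
    var tokensPerSecond: Double { Double(tokenCount) / duration }
}

/// Streams tokens from a hypothetical on-device model and measures how many
/// tokens are produced per second of wall-clock time.
func measureThroughput(
    prompt: String,
    generate: (String) async throws -> AsyncStream<String>
) async throws -> InferenceMetrics {
    let start = Date()
    var tokenCount = 0
    // Count each token as it is streamed back from the model.
    for await _ in try await generate(prompt) {
        tokenCount += 1
    }
    return InferenceMetrics(
        tokenCount: tokenCount,
        duration: Date().timeIntervalSince(start)
    )
}
```

In a benchmark of this kind, such a measurement would typically be repeated across prompts, models, and devices, alongside accuracy scoring and monitoring of memory pressure and thermal state.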

Contributing

Contributions to this project are welcome. Please make sure to read the contribution guidelines and the contributor covenant code of conduct first.

License

This project is licensed under the MIT License. See Licenses for more information.

Our Research

For more information, check out our website at biodesigndigitalhealth.stanford.edu.

