MediScan gives you easy access to wait times at nearby hospitals, lets you self-diagnose so you can make the most of your medical experience, and connects you with doctors remotely through telemedicine 🚀!
- Source Code for Website: https://github.com/TheDevvers/www
- Source Code for AI Model and API: https://github.com/TheDevvers/model-api
- Source Code for Video Conferencing Server: https://github.com/TheDevvers/video-server
When Navin's mom broke her arm, she contacted her nearest hospital to get an estimate of the ER wait, and was told that wait times are not shared with the public. She ended up waiting at the hospital for 8 hours, with nobody to consult about what she could do to reduce the pain without injuring herself further. That's when we got the idea for MediScan: a way for patients to be safer and smarter throughout their care.
We built MediScan (https://mediscan.care) to give medical patients a platform where they can fully understand their specific situation and take the right steps. Each user signs in as either a doctor or a patient and fills out their information. Using our telemedicine feature, patients can consult with doctors over video calls about what to do in their situation. If a doctor is not available, they can go to the self-diagnose page and talk to an interactive LLM-powered assistant that prompts them for their information and suggests next steps. We also built and trained our own image-classification models that detect skin, mouth, and nail conditions (the most common sites for external diseases). When users upload a photo of their condition, they get a disease prediction along with some steps to take. Finally, an emergency room wait times page provides accurate, regularly updated wait times, so when users have to visit the ER, they know where they can be treated quickest.
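As a sketch of the diagnosis step above: once a classification model returns class probabilities for an uploaded photo, the site maps the top class to a condition name and suggested next steps. A minimal Python illustration (the condition labels and advice here are hypothetical placeholders, not MediScan's actual classes):

```python
# Sketch: turning model output probabilities into a user-facing prediction.
# The labels and advice below are hypothetical, not MediScan's real classes.

CONDITIONS = ["eczema", "psoriasis", "healthy skin"]  # hypothetical labels
ADVICE = {
    "eczema": "Moisturize the area and avoid known irritants.",
    "psoriasis": "Consider a dermatologist visit for topical treatment.",
    "healthy skin": "No action needed; monitor for changes.",
}

def top_prediction(probs: list[float], labels: list[str]) -> tuple[str, float]:
    """Return the most likely label and its confidence."""
    i = max(range(len(probs)), key=probs.__getitem__)
    return labels[i], probs[i]

label, confidence = top_prediction([0.7, 0.2, 0.1], CONDITIONS)
print(label, round(confidence, 2), "->", ADVICE[label])
```

In production this would sit behind the Flask API, with `probs` coming from the trained TensorFlow model's softmax output.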
For the image-diagnosis models, we often ran into overfitting and had to retrain the models with different epoch counts to reach optimal accuracy; along the way, we learned how to train CNNs to good accuracy faster. After retrieving hospital data from the Centers for Medicare & Medicaid Services API, we realized the raw JSON was unstructured and hard to interpret, so we wrote a script that reformats all of the data for proper display on our site.
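The epoch-tuning loop described above is essentially early stopping: keep the weights from the epoch where validation loss bottoms out, and stop once it has stopped improving. A framework-agnostic sketch of that selection logic (in Keras, `EarlyStopping(restore_best_weights=True)` automates this during training):

```python
def best_epoch(val_losses: list[float], patience: int = 3) -> int:
    """Return the index of the epoch whose weights to keep, stopping once
    validation loss has not improved for `patience` consecutive epochs."""
    best, best_i = float("inf"), 0
    since_improvement = 0
    for i, loss in enumerate(val_losses):
        if loss < best:
            best, best_i = loss, i
            since_improvement = 0
        else:
            since_improvement += 1
            if since_improvement >= patience:
                break  # later epochs are likely overfitting
    return best_i

# Validation loss rises again after epoch 3 -- a classic overfitting curve.
print(best_epoch([0.9, 0.6, 0.45, 0.4, 0.42, 0.48, 0.55]))  # → 3
```

The same idea applies whether the stopping is done by hand (retraining at different epoch counts, as we did) or by a callback during a single training run.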
We plan to consult with local hospitals and deploy MediScan in practice, increasing convenience for both doctors and patients. We want to onboard doctors first and slowly roll out the application to patients and users willing to try us out.
We are proud that, at our first major hackathon, we were able to bridge two different backends: the convolutional neural networks behind the Flask API, and the WebRTC API powering telemedicine in MediScan.
Our data is completely dynamic. The dataset from CMS is updated every few weeks, so our hospital wait times are updated automatically.
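The reformatting script mentioned above boils down to flattening the raw CMS records into the fields the site displays. A simplified sketch, assuming a hypothetical record shape (the real CMS payload uses different field names):

```python
# Sketch of normalizing CMS-style JSON into display rows. The field names
# ("facility_name", "score", etc.) are hypothetical stand-ins for the
# actual CMS dataset's columns.

def format_wait_times(raw_records: list[dict]) -> list[dict]:
    """Keep only records with a usable wait-time measure, parsed to int
    and sorted so the shortest waits appear first."""
    rows = []
    for rec in raw_records:
        score = rec.get("score")
        if score in (None, "Not Available"):
            continue  # skip records where the measure is missing
        rows.append({
            "hospital": rec.get("facility_name", "").title(),
            "state": rec.get("state", ""),
            "er_wait_minutes": int(score),
        })
    return sorted(rows, key=lambda r: r["er_wait_minutes"])

sample = [
    {"facility_name": "GENERAL HOSPITAL", "state": "CA", "score": "145"},
    {"facility_name": "CITY MEDICAL CENTER", "state": "CA", "score": "Not Available"},
    {"facility_name": "LAKESIDE CLINIC", "state": "CA", "score": "90"},
]
print(format_wait_times(sample))
```

Because the site re-fetches the dataset rather than storing a snapshot, each CMS refresh flows through this normalization automatically.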
For the best demo experience, visit our site :)
## 🚀 Website Tech Stack
- ✅ Bootstrapping: create-t3-app.
- ✅ Framework: Next.js 14 + TypeScript.
- ✅ Styling: Tailwind CSS + Radix UI.
- ✅ Animations: AOS + Headless UI.
- ✅ Component Library: shadcn/ui.
- ✅ Initial Landing Page Template: Cruip.
- ✅ Database: MongoDB.
- ✅ Schema Validation: Zod.
- ✅ File Uploads: Uploadthing.
- ✅ Data Caching: Redis + Upstash.
- ✅ Medical Data: Centers for Medicare & Medicaid Services.
- ✅ Reverse Geocoding APIs: OpenCage + GeoNames.
- ✅ Geocoding APIs: Google Maps + Mapbox Geocoding.
- ✅ Map Renderer: Mapbox Maps.
- ✅ Hosting: Vercel.
## 🤖 AI Model Tech Stack
- ✅ Language: Python.
- ✅ ML Library: TensorFlow.
- ✅ Datasets: Kaggle.
- ✅ Framework: Flask + Gunicorn.
- ✅ Hosting: DigitalOcean.
## 🎥 Video Server Tech Stack
- ✅ Bootstrapping: create-t3-app.
- ✅ Framework: Node.js + TypeScript.
- ✅ Hosting: DigitalOcean.
- ✅ WebSockets: Socket.IO.
- ✅ Video Communication: WebRTC + PeerJS.
Navin Narayanan, Rachit Patel, Rishi Madhavan, and Vivek Maddineni