We wanted to use machine learning to solve a common problem for people diagnosed with Alzheimer’s: helping them remember things.
Iris is a virtual assistant (and wearable device) that helps you remember people that you know and things you have to do by providing non-invasive cues through an intelligent voice interface.
We used a Raspberry Pi connected to a camera, a Flask app as the back end, and an Angular app as the front end. We used the Microsoft Azure Face API for facial recognition, as well as several Google Cloud Platform services to support our application, including Google Compute Engine, Google App Engine, Firebase Realtime Database, Google Cloud Storage, and Google Cloud Text-to-Speech.
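As a rough sketch of how the back end ties these pieces together, the Raspberry Pi posts a camera frame to a Flask endpoint, which forwards it to the Azure Face API. The endpoint name, environment variables, and person-group ID below are illustrative assumptions rather than our exact code:

```python
# Sketch of a Flask endpoint that sends a camera frame to the Azure Face API.
# The endpoint/key variables and person-group ID are placeholders.
import os
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

FACE_ENDPOINT = os.environ["AZURE_FACE_ENDPOINT"]  # e.g. https://<region>.api.cognitive.microsoft.com
FACE_KEY = os.environ["AZURE_FACE_KEY"]
PERSON_GROUP_ID = "iris-known-people"              # hypothetical person group of people the user knows

@app.route("/recognize", methods=["POST"])
def recognize():
    image_bytes = request.data  # raw JPEG posted by the Raspberry Pi

    # Step 1: detect faces in the frame
    detected = requests.post(
        f"{FACE_ENDPOINT}/face/v1.0/detect",
        headers={
            "Ocp-Apim-Subscription-Key": FACE_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    ).json()
    face_ids = [face["faceId"] for face in detected]
    if not face_ids:
        return jsonify({"people": []})

    # Step 2: identify the detected faces against the trained person group
    identified = requests.post(
        f"{FACE_ENDPOINT}/face/v1.0/identify",
        headers={"Ocp-Apim-Subscription-Key": FACE_KEY},
        json={"faceIds": face_ids, "personGroupId": PERSON_GROUP_ID},
    ).json()
    return jsonify({"people": identified})
```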
One problem our back-end team ran into was learning new APIs and successfully incorporating them into the final product. The time crunch was also an issue, but we overcame it through teamwork.
Another challenge was training speaker-specific hotword models with Snowboy, a DNN-based hotword and wake-word detection toolkit.
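For reference, the wake-word loop on the Raspberry Pi looked roughly like the standard Snowboy example below; the model path `resources/iris.pmdl` stands in for a personal model trained from the user's own voice recordings and is an assumption, not our exact file:

```python
# Minimal Snowboy wake-word loop (sketch): block until the hotword is heard,
# then hand off to the rest of the assistant pipeline.
import snowboydecoder

def on_hotword():
    # Called whenever the wake word is detected; here the app would start
    # listening for the user's request and respond over text-to-speech.
    print("Hotword detected")

detector = snowboydecoder.HotwordDetector("resources/iris.pmdl", sensitivity=0.5)
detector.start(detected_callback=on_hotword, sleep_time=0.03)
detector.terminate()
```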
The final product looks great, and we're extremely proud to have built a working web app in the time allotted.
We learned how to use the Microsoft Azure Face API, how to store data in the Firebase Realtime Database, and how to store images in Google Cloud Storage. We also learned how to push real-time data updates to a web app using Firebase.
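A minimal sketch of that persistence layer, assuming the `firebase_admin` SDK on the back end; the service-account path, database URL, and bucket name are placeholders:

```python
# Store a known person's photo in Cloud Storage and a record in the
# Realtime Database, which the Angular app picks up through its Firebase listener.
import firebase_admin
from firebase_admin import credentials, db, storage

cred = credentials.Certificate("service-account.json")      # placeholder credentials file
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://iris-io.firebaseio.com",         # hypothetical project URL
    "storageBucket": "iris-io.appspot.com",                  # hypothetical bucket name
})

def save_person(name, photo_path):
    # Upload the face photo to Google Cloud Storage...
    blob = storage.bucket().blob(f"faces/{name}.jpg")
    blob.upload_from_filename(photo_path)
    # ...and push the person into the Realtime Database so the web app updates live.
    db.reference("people").push({"name": name, "photo": blob.name})
```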
We would like to let users benefit from the app on a mobile device. Incorporating the hardware into a smaller, more discreet form factor, such as glasses and an earpiece, is also a future goal.
- http://irisassist.org/
- https://devpost.com/software/iris-io
- https://github.com/CruzHacks2019/iris-backend
- https://github.com/CruzHacks2019/iris-frontend
- https://github.com/CruzHacks2019/iris-raspi
- https://www.youtube.com/watch?v=bZMzjwBbdrw