Software and Hardware prototype for giving voice to those who are unable to say or hear

yatharthagr7/Pi_Giving_Voice_to_Voiceless

 
 


Let's Connect with each other

Team Pi : UNMUTE YOURSELF

ABOUT THE PROJECT

Our project aims to empower people who are unable to speak or hear and who cannot express their thoughts to those around them. They often feel left out, which can lead to depression and loneliness, since no one is there to listen to them. Here we bring Team Pi's (hardware + software) prototype, UNMUTE YOURSELF, which gives voice to the people who need it.

PROBLEM

Around 466 million people worldwide have disabling hearing and speech loss, and 34 million of them are children. People who are born deaf and mute face problems in professional life, where they become targets of discrimination and bias. Applicants are often unable to express themselves properly to recruiters, and there are also cases where recruiters struggle to hire certified sign-language interpreters.

Solution

We, Team Pi, came up with a solution to ease these people's lives, both professional and personal. We built both a hardware and a software prototype to help them: the hardware prototype gives voice to the wearer, while the software prototype trains sign-language gestures against specific words.

Here, the wearer is a person who is unable to speak or who has hearing loss.

This can help recruiters get a clear response from these candidates, and the wearer can easily convey what is actually going on in his/her mind.

How does it work?

  • First, we built a website.
  • We then deployed a KNN image classifier using TensorFlow.js.
  • k-Nearest Neighbours (k-NN) is a supervised machine learning algorithm: it learns from a labelled training set by taking in the training data X along with its labels y, and learns to map each input X to its desired output y.
  • To speak the text that is displayed after a sign is detected, we used Google's text-to-speech conversion.
  • For the speech output, we built a hardware model comprising two speakers connected to the ROYQUEEN X200 board via jumper wires.
  • We salvaged the board from an old, dismantled audio speaker.
  • And then, Eureka! We got accurate sign detection along with the spoken words.
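The classifier step above can be sketched in plain JavaScript. The repo itself uses TensorFlow.js's KNN image classifier on webcam frames; the snippet below is only a minimal, self-contained illustration of the k-NN idea, with hypothetical helper names (`distance`, `knnPredict`) and toy 2-D features standing in for image embeddings.

```javascript
// Minimal k-NN sketch (hypothetical helpers, not the repo's actual code).

// Euclidean distance between two feature vectors of equal length.
function distance(a, b) {
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

// Predict a label for `x` by majority vote among the k nearest labelled examples.
function knnPredict(examples, x, k = 3) {
  const votes = examples
    .map(({ features, label }) => ({ label, d: distance(features, x) }))
    .sort((p, q) => p.d - q.d)  // nearest first
    .slice(0, k)                // keep the k nearest neighbours
    .reduce((tally, { label }) => {
      tally[label] = (tally[label] || 0) + 1;
      return tally;
    }, {});
  // Return the label with the most votes.
  return Object.keys(votes).reduce((a, b) => (votes[a] >= votes[b] ? a : b));
}

// Toy training set: 2-D features standing in for image embeddings,
// labels standing in for the words each sign maps to.
const examples = [
  { features: [0, 0], label: "hello" },
  { features: [0, 1], label: "hello" },
  { features: [5, 5], label: "thanks" },
  { features: [5, 6], label: "thanks" },
];

console.log(knnPredict(examples, [0.2, 0.5])); // near the "hello" cluster
```

In the real project, `features` would be high-dimensional activations extracted from a webcam frame, and the predicted label is the word that is then displayed and spoken.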

LEARNING RESOURCES (what we referred to)

  1. Google Machine Learning
  2. KNN exercises using Colab
  3. Medium post By Paarth Bir
  4. Research Paper

FUTURE GOALS

What the future holds for Team Pi: we will use much smaller, more durable speakers in a really comfortable design. We will use posture recognition for much greater accuracy in recognising gestures. We will make our hardware more economical. We are also considering adding a Bluetooth connection to avoid wires and give the wearer more flexibility.

We tried to make a difference and hope you all like it.
