Our project uses an Arduino Nano 33 BLE Sense board to detect the words 'yes' and 'no' in conversation. The results are then displayed on a screen.
Why did you decide to make it?

In the modern age, with the growing popularity of devices like Alexa and Google Home, we are changing the way we interact with the technology around us. Fascinated by IoT, speech recognition, and natural language processing, our team, The Good Simaritans, wanted to explore a simple 'yes' and 'no' speech-detection algorithm.
How does it work?

Architecture
1. Obtains an input
2. Pre-processes the input to extract features suitable to feed into a model
3. Runs inference on the processed input
4. Post-processes the model's output to make sense of it
5. Uses the resulting information to make things happen
The Model
The project would be nothing without the machine learning aspect. Our model was 'trained on a dataset called the Speech Commands Dataset (https://oreil.ly/qtOSI), [which] consists of 65,000 one-second-long utterances of 30 short words crowdsourced online.'
The Full System
Demo: