I am currently developing a local speech assistant that uses continuous transcription and adaptive commands, inspired by Rhasspy. Since Rhasspy provides the option to include a custom component, I was wondering whether the opposite is also possible. As I am relatively new to Python, I couldn't identify it while browsing through the code.
My current setup is the following: I have set up DeepSpeech and Porcupine locally on a Pi 4 and wrote some custom code so that after wake-phrase recognition I have both the sentence leading up to the wake phrase and the one trailing it transcribed and ready for command recognition (e.g. "It is dark [rhasspy] Turn the lamp on", where both are valid commands). Is there any way I could hand these over to Rhasspy via a webhook?
My hope was that by including Rhasspy I could take advantage of the flexibility the software provides. Also, since I'm planning to tinker with command recognition a little: is there a way to access the recognition probability as part of the response?
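For the handover, Rhasspy exposes an HTTP API rather than an incoming webhook, and its `/api/text-to-intent` endpoint accepts plain text and returns the recognized intent as JSON, typically including a confidence value. A minimal sketch, assuming a default Rhasspy installation listening on `localhost:12101` (the endpoint and the exact response shape can vary between Rhasspy versions and intent recognizers, so treat the field names below as an assumption to verify against your instance):

```python
import json
import urllib.request

# Default Rhasspy web server port is 12101; adjust to your setup.
RHASSPY_URL = "http://localhost:12101/api/text-to-intent"

def recognize(sentence: str) -> dict:
    """POST a transcribed sentence to Rhasspy and return the intent JSON."""
    req = urllib.request.Request(
        RHASSPY_URL,
        data=sentence.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def best_intent(result: dict) -> tuple:
    """Pull intent name and confidence out of a Rhasspy-style response.

    Assumes the Hermes-like shape {"intent": {"name": ..., "confidence": ...}};
    check your Rhasspy version's actual output.
    """
    intent = result.get("intent", {})
    return intent.get("name"), intent.get("confidence")

# Usage sketch (requires a running Rhasspy instance):
#   for candidate in ("It is dark", "Turn the lamp on"):
#       name, conf = best_intent(recognize(candidate))
#       # e.g. keep whichever candidate has the higher confidence
```

This would let you feed both the leading and the trailing sentence to Rhasspy and compare the confidence values to pick the better match.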
I would be super happy if somebody could point me to the correct file, maybe with a few explanatory words.
Thanks in advance
Have a look at voice2json, Rhasspy's little sister, which is made for modular building and bootstrapping approaches.