Wakeword as direct command

OK, I finally have an external wakeword program.
First, trying to add the program details via the web interface just doesn’t work: nothing is saved to profile.json, so I have to write the section manually.
I wanted to run it standalone, but Rhasspy has control of the sound device all the time, which is something I was trying to avoid.
I would like Rhasspy to listen only after the wakeword has been triggered.
Is this even possible given the current architecture?

Or do I have to do the whole process via MQTT?

Those are two different things.
When Rhasspy is started and you have enabled audio recording, the sound device is always occupied by Rhasspy.
But you can disable the wakeword and trigger it with MQTT:
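A minimal sketch of that trigger, assuming the Hermes MQTT topic layout (`hermes/hotword/<wakewordId>/detected`) and the default site/wakeword IDs — adjust both to match your profile:

```python
import json

def detection_message(site_id="default", wakeword_id="default"):
    """Build the Hermes hotword-detected topic and payload.

    Publishing this to Rhasspy's MQTT broker wakes it up as if the
    built-in wake word service had fired.
    """
    topic = f"hermes/hotword/{wakeword_id}/detected"
    payload = json.dumps({"modelId": wakeword_id, "siteId": site_id})
    return topic, payload

# Publishing needs an MQTT client, e.g. the third-party paho-mqtt package:
#   import paho.mqtt.publish as publish
#   topic, payload = detection_message()
#   publish.single(topic, payload, hostname="localhost")
```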

Thanks @romkabouter

OK, so it’s all or nothing.

So this has to be done at the audio recording level, i.e. write a wrapper that controls the audio, directing it to Rhasspy or the wake word code depending on the wake state.

I tried disabling the wake word section, but Rhasspy still has control of the audio.

Yes, when Rhasspy is running the microphone is busy. Unless you disable Audio Recording.
But then Rhasspy does not work anymore :wink:

I do not know why you seem to be overcomplicating things, but one possible solution is to enable and disable audio recording by writing profile.json and restarting Rhasspy, both from your external program.
Hard to say if that is a possible or desired solution, but just trying to come up with some ideas :slight_smile:

All I am trying to do is use a TensorFlow-based wake word system on the same computer as Rhasspy.
My experience with sound I/O is very limited, and as such, piping the output from Rhasspy into my code is eluding me, i.e. getting the code running in Rhasspy’s world.
It is rock solid with access to the microphone.
Rhasspy doesn’t need to work if the wake word is not heard.
I presumed that since Rhasspy wasn’t listening, it would do the right thing and release the microphone.

The documentation does say:

“Calls a custom external program to listen for a wake word, only waking up Rhasspy when the program exits.”

Sorry, I am just a little frustrated; it seemed like a simple way to create a solid wake word handler, but I was wrong.

Yes, but that program is getting its audio from Rhasspy, not the mic.
So you need to feed your TensorFlow-based wake word system with the audio stream from Rhasspy, as the documentation says here:

“Raw audio chunks will be written to standard in”

Basically, your program must use the chunks on standard in as its microphone, not the mic device itself.
Then you’re good to go :slight_smile:

Yep, that’s the bit I’m stuck on at the moment.

The PCM format is the same, but I’m stuck and there are no examples out there to guide me.
The code I’m working with talks straight to ALSA and works fine.

I don’t understand why this is so complex; it is just a stream of two-byte samples.

Everything I try just does nothing.

Oh well, I did get it half solved.
The code works now, using the ALSA pulse plugin.
While Rhasspy is running, I start the code; it runs until I say the wake word and then exits, as it should.
This is at the same time as Rhasspy is running Raven.
When it is run by Rhasspy, though, it either doesn’t run or is ignored.

Something fishy is going on.

At least now I can add an MQTT component to the code and trigger Rhasspy over MQTT directly.
Thanks for your help @romkabouter

OK, I finally have what I was after.
By adding a bogus UDP port to the audio input section and handling the audio via command, Rhasspy no longer floods the MQTT server all the time.
The wake word script triggers Rhasspy via MQTT, and only then does the audio start going over MQTT again, i.e. only when required.
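For reference, that trick looks roughly like this in profile.json (a sketch only — the key names are assumed from the 2.5 UDP audio streaming settings, and the port is deliberately one that nothing listens on):

```json
{
    "microphone": {
        "arecord": {
            "udp_audio_host": "127.0.0.1",
            "udp_audio_port": "12202"
        }
    }
}
```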

Rhasspy still uses the mic and listens for the hotword all the time. If you only wanted to stop the audio stream to MQTT, then indeed UDP is a good way. It is also documented.

Yep, I did find that bit of documentation eventually. :slight_smile:
At least now the Mosquitto broker is not being hit with unnecessary traffic.
I wanted a solution that minimised waste and still worked within the Rhasspy framework.

Hi Greg,

Will you at some point be able to put together a step-by-step tutorial and explainer? I can see a lot of people being interested in your workaround… eeerm, solution for wake words ;-) Well done, BTW.

Well I sort of did in

There is a simple shell script that connects the Edge Impulse wakeword with the appropriate MQTT calls to trigger Rhasspy, then waits until Rhasspy has basically finished recording the next part.
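A rough Python sketch of that glue (the detector binary name is hypothetical): run the external wake word program, treat a clean exit as a detection, then hand off to MQTT.

```python
import subprocess

def wait_for_wakeword(cmd=("./edge-impulse-wakeword",)):
    """Block until the detector process exits; True means it detected
    the wake word (exit code 0), mirroring the "external program exits
    on detection" contract described in the thread."""
    return subprocess.run(list(cmd)).returncode == 0

# The MQTT side (e.g. with mosquitto_pub or paho-mqtt) would then publish
# hermes/hotword/<id>/detected to wake Rhasspy, and wait for something
# like hermes/dialogueManager/sessionEnded before looping to listen again.
```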

I can elaborate a bit more, but without the Edge Impulse training (which takes a lot of time and patience) and the C++ implementation (that’s possibly not so easy for people unfamiliar with C++), it’s a completely different ball game.
My version is really only one approach, tailored to suit my Acussis S mic (which cancels out the sound output, i.e. I can listen to the radio and the wakeword still triggers cleanly). These units aren’t cheap, so it’s not a solution for everyone, and my approach is pretty specific to my environment.
But I will write a bit of a rundown in a show-and-tell format in the next week or so, if that’s what people would like.

Right, got ya. But I am a programmer, so I’m eager to see how you did it, and then maybe we can work on an easier implementation :slight_smile:

It’s pretty easy really, just time consuming.
If you program in Python (not my preferred language), went through the Edge Impulse training stage, and then followed the Python example, maybe you could make it easier for people around here.

I posted the steps I went through here

Thanks for posting it. I will run through it and see what I can do.
