No reaction to wake words

I’m trying to activate Rhasspy with a wake word, but whatever I do, there is no reaction when I say the words… Speech recognition itself works…

I have set up Porcupine with the alexa (alexa_linux.ppn) keyword. I’m running Rhasspy in a Docker container on a Synology NAS.



Any hints for me? How can it be that recognition works but the wake-up doesn’t? I tried different engines with no success.
I would be very thankful.

One test I found helpful when starting with Rhasspy, to see what it was actually getting, was to choose Raven.
Record your wake word, then listen to each recording.
That way you can tell whether it’s a microphone problem,
a sound-server problem,
or a recognition problem.
Raven false-triggers a bit, but since it’s closely tied to your own voice and wake word it does trigger, so it’s a great point to start.
Because of my accent I had trouble with the other, more generalised wake-word solutions.

How do you know this?
Can you post your settings?

This is my 2nd problem. Until now I couldn’t get the playback to work (see Playback doesn't work).

I clicked the yellow wake-up button and spoke “wie spät ist es?” (“what time is it?”) into the microphone. And this is then written into the textbox:

Or I’m completely wrong and this isn’t really recognised. But if I use arecord, I can record a sample from my remote computer (where the microphone is connected).

"dialogue": {
    "system": "rhasspy"
},
"intent": {
    "system": "fsticuffs"
},
"microphone": {
    "arecord": {
        "udp_audio_host": "christoph",
        "udp_audio_port": "4713"
    },
    "pyaudio": {
        "device": "0",
        "udp_audio_host": "",
        "udp_audio_port": "4713"
    },
    "system": "pyaudio"
},
"mqtt": {
    "enabled": "true",
    "host": "Mosquitto"
},
"sounds": {
    "command": {
        "play_program": "/usr/bin/paplay"
    },
    "system": "aplay"
},
"speech_to_text": {
    "system": "kaldi"
},
"text_to_speech": {
    "system": "flite"
},
"wake": {
    "pocketsphinx": {
        "udp_audio": ""
    },
    "porcupine": {
        "keyword_path": "alexa_linux.ppn",
        "sensitivity": "0.5",
        "udp_audio": ""
    },
    "snowboy": {
        "model": "jarvis.umdl",
        "sensitivity": "0.1"
    },
    "system": "raven"
}

press the record button
record your wake word
press the play button next to it and see if you can hear it clearly.
Then you know that what you recorded was actually recorded, and whether it’s clear and can be played back.

My setup is quite specific to my setup and wouldn’t help here.

I presume you are getting a response when you trigger it manually (on the home page). If that’s not happening, then the problem’s not the wake word.
Sorry @romkabouter, I didn’t notice it was your reply. I’m an idiot. 🙂

Yes, that works. I can hear the recording when I use the “download wav” feature…

I tested that. When I press record, it seems that it records, but then it runs into a timeout…

And one question on that: is there feedback in the web interface if the wake word works?

If you watch the logs,
i.e. tail -f /var/log/syslog
you will see output when the wake word is recognised:
[DEBUG:2021-08-22 21:43:26,841] rhasspydialogue_hermes: ← HotwordDetected(model_id='default'…
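If you only want to watch for detections in a noisy syslog, a minimal filter can be sketched like this (Python, stdlib only; the regex is an assumption based purely on the example log line above, not an official Rhasspy format):

```python
import re

# Hypothetical pattern for Rhasspy's HotwordDetected log message,
# derived from the example line quoted in this thread.
HOTWORD_RE = re.compile(r"HotwordDetected\(model_id='(?P<model>[^']+)'")

def hotword_model(line: str):
    """Return the detected wake-word model id, or None for other lines."""
    m = HOTWORD_RE.search(line)
    return m.group("model") if m else None

sample = ("[DEBUG:2021-08-22 21:43:26,841] rhasspydialogue_hermes: "
          "<- HotwordDetected(model_id='default')")
print(hotword_model(sample))  # -> default
```

Pipe `tail -f /var/log/syslog` through a loop over `sys.stdin` calling this function if you want a live view of detections only.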

Also, if you have the mosquitto clients installed, you can watch the audio going over the MQTT bus with
mosquitto_sub -h localhost -t rhasspy/audioServer/#
If you don’t have UDP enabled on the audio, all the sound picked up by Rhasspy goes to that topic.

I connected to the log over websocket, but no luck. It seems that Rhasspy simply doesn’t listen or doesn’t react. I’ve now set up Porcupine. Do I have to do some manual training? Sadly there is zero reaction…

Is there anything else I can try? Why is there a difference between wake-word recording and Raven recording?

Might be something with the UDP settings; I suggest you first try without them. But I do not know much about PyAudio.
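If the UDP path is in doubt, one way to see whether anything is actually being sent to that port is a throwaway listener (a sketch in Python, stdlib only; the port 4713 is taken from the settings posted above, everything else is an assumption):

```python
import socket

def wait_for_udp_frame(port: int, timeout: float = 2.0) -> int:
    """Bind to the given UDP port and return the size of the first
    datagram received, or -1 if nothing arrives before the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.bind(("0.0.0.0", port))
        try:
            data, _addr = sock.recvfrom(65536)
            return len(data)
        except socket.timeout:
            return -1

if __name__ == "__main__":
    # Run this on the machine named in udp_audio_host while Rhasspy is up.
    size = wait_for_udp_frame(4713)
    if size >= 0:
        print(f"received a {size}-byte frame on port 4713")
    else:
        print("no UDP audio arrived within the timeout")
```

If nothing arrives, the UDP settings are dead weight and removing them (as suggested above) is the cleaner test.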

These are my settings… So do I have to set up UDP in this case? I was thinking that PyAudio / PulseAudio are independent from Rhasspy, so it should always work. Doesn’t Porcupine use the PyAudio interface?

Is 0.1 a high sensitivity, or is 0.9? And what do I have to say into the mic, “hey google”? Is there any visual feedback in the GUI?

Perhaps one of the developers can say something? We are celebrating a guessing hour, aren’t we?


Can you subscribe to your MQTT broker and see if messages are flowing to hermes/audioServer/#?
There should be a lot of them; the audioFrames topic (a subtopic) should receive those.
I think you will see messages, because you say the recognition works, which implies the audio recording is working correctly.

Do you still have the UDP settings on pyaudio? Might be best to remove them for now.


These are my settings for recording:

These for playback:

And these for Porcupine:

Ok, seems ok.
I see that you use alexa_linux.ppn as keyword file, can you try porcupine.ppn (not porcupine_linux.ppn)

You are running on a Synology NAS, so the models might be incompatible.

Sorry. I don’t get that. Where do I get this “porcupine.ppn”? I only find files like porcupine_[platform].ppn here:

Sorry, but it should be visible in the dropdown for the wake words:

Hmm not in mine…
I pressed “refresh” before…

Thanks for the help BTW

Darn it, I really think that is the problem, because you say recognition works.
That implies a correctly working mic.

Have you tried other engines like Snowboy or Raven?
Raven might be a good option, because that one records your voice first.