Server-Satellite not sending to Home Assistant

Is it possible to follow the Server with Satellite tutorial, but handle intents with Home Assistant on the server?

The intent is handled properly in Home Assistant when the audio is recorded on the server, but not when it comes from the satellite. The long-lived token is set. The tutorial itself works, and intents from the satellite are recognized on the server.

Any ideas?

Running with Docker and Rhasspy version 2.5

So just to be sure I understand what you mean:
You have a satellite and a “normal” instance of Rhasspy up and running.
Your server is passing the intent handling to Home Assistant, but your satellite isn’t.

If that’s the case, you most probably have something set up wrong with either intent handling or intent recognition. Most of the time you only run wake word and STT (and sometimes intent recognition) on the satellite.

And you are using the event-based API for Home Assistant?

Let me explain it in more detail: the satellite passes speech-to-text to the server, and the intent is recognized on the server (as in the tutorial), but it is not handled by Home Assistant (yes, I’m using the event-based API).

But with the same configuration (on the server), recording the audio on the server works properly with the Home Assistant automation.

Let me show the configuration

Every green field in the server configuration has the satellite ID.
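
For context, with the event-based API Home Assistant receives an event named rhasspy_<IntentName> for each recognized intent, so the automation is shaped roughly like this (a sketch; the intent name “Cafe” and the entity are only examples, not taken from the actual config):

```yaml
automation:
  - alias: "Rhasspy Cafe intent"
    trigger:
      # Rhasspy posts each intent to /api/events/rhasspy_<IntentName>
      - platform: event
        event_type: rhasspy_Cafe
    action:
      - service: switch.turn_on
        data:
          entity_id: switch.coffee_machine
```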

Wait. So the intent is recognized but not handled if it comes from the satellite? Can you paste some logs?

This is similar to the following post:

I’m not using a Rhasspy-based satellite, as mine is an ESP32 Matrix Voice and its scope is much more limited. It does appear as though a fix might be in place:

Give it a shot and see!

I’m also interested in a fix to this issue. (+1) :grinning:
I did see the issue on github, and that it has been resolved and closed.

So I tried the latest 2.5 Docker image (released 6/7 days ago), but unfortunately intent handling does not work on the server for me either (I have a Pi 4 server and a couple of Pi Zero satellites).

I am using an external MQTT server, and I publish ‘events’ to Home Assistant. It works flawlessly if I do intent handling on the satellites. As I say, I tried it on the server again after the last Docker release, but it doesn’t work from the server.

I would like to get it working on the server, as I have a Matrix Voice that I want to try. I imagine I will need server-based intent handling for this to work, although I have not really looked closely at how this should be configured.

Satellite Log

[DEBUG:2020-06-21 15:50:42,892] rhasspyserver_hermes: <- NluIntent(input='Haz café', intent=Intent(intent_name='Cafe', confidence_score=1.0), site_id='satellite', id=None, slots=[], session_id='satellite-porcupine-8d571eae-b306-4181-9216-2a60913207cc', custom_data=None, asr_tokens=[[AsrToken(value='Haz', confidence=1.0, range_start=0, range_end=3, time=None), AsrToken(value='café', confidence=1.0, range_start=4, range_end=8, time=None)]], asr_confidence=None, raw_input='haz café', wakeword_id='porcupine', lang=None)
[WARNING:2020-06-21 15:50:39,307] rhasspyserver_hermes: Dialogue management is disabled. ASR will NOT be automatically enabled.
[DEBUG:2020-06-21 15:50:39,305] rhasspyserver_hermes: <- HotwordDetected(model_id='/usr/lib/rhasspy/lib/python3.7/site-packages/rhasspywake_porcupine_hermes/porcupine/resources/keyword_files/raspberrypi/porcupine.ppn', model_version='', model_type='personal', current_sensitivity=0.5, site_id='satellite', session_id=None, send_audio_captured=None, lang=None)
[DEBUG:2020-06-21 15:50:31,633] speech_to_text.pocketsphinx.mix_weight >0 0 = False
[DEBUG:2020-06-21 15:50:31,631] speech_to_text.pocketsphinx.open_transcription True False = False
[DEBUG:2020-06-21 15:50:31,630] speech_to_text.system pocketsphinx hermes = False
[INFO:2020-06-21 15:50:31,465] rhasspyserver_hermes: Started
[DEBUG:2020-06-21 15:50:31,464] rhasspyserver_hermes: Subscribed to rhasspy/asr/satellite/satellite/audioCaptured
[DEBUG:2020-06-21 15:50:31,462] rhasspyserver_hermes: Subscribed to hermes/intent/#
[DEBUG:2020-06-21 15:50:31,461] rhasspyserver_hermes: Subscribed to hermes/audioServer/satellite/audioSummary
[DEBUG:2020-06-21 15:50:31,460] rhasspyserver_hermes: Subscribed to hermes/asr/textCaptured
[DEBUG:2020-06-21 15:50:31,459] rhasspyserver_hermes: Subscribed to hermes/nlu/intentNotRecognized
[DEBUG:2020-06-21 15:50:31,457] rhasspyserver_hermes: Subscribed to hermes/hotword/+/detected
[DEBUG:2020-06-21 15:50:31,456] rhasspyserver_hermes: Subscribed to hermes/audioServer/satellite/audioSummary
[DEBUG:2020-06-21 15:50:31,454] rhasspyserver_hermes: Subscribed to rhasspy/asr/satellite/satellite/audioCaptured
[DEBUG:2020-06-21 15:50:31,453] rhasspyserver_hermes: Subscribed to hermes/nlu/intentNotRecognized
[DEBUG:2020-06-21 15:50:31,452] rhasspyserver_hermes: Subscribed to hermes/intent/#
[DEBUG:2020-06-21 15:50:31,450] rhasspyserver_hermes: Subscribed to hermes/asr/textCaptured
[DEBUG:2020-06-21 15:50:31,449] rhasspyserver_hermes: Subscribed to hermes/hotword/+/detected
[DEBUG:2020-06-21 15:50:31,448] rhasspyserver_hermes: Connected to MQTT broker
[DEBUG:2020-06-21 15:50:31,429] rhasspyserver_hermes: Connecting to (retries: 0/10)
[DEBUG:2020-06-21 15:50:31,428] rhasspyserver_hermes: Starting core
[DEBUG:2020-06-21 15:50:31,419] rhasspyprofile.profile: Loading default profile settings from /usr/lib/rhasspy/lib/python3.7/site-packages/rhasspyprofile/profiles/defaults.json
[DEBUG:2020-06-21 15:50:31,417] rhasspyprofile.profile: Loading /profiles/es/profile.json
[DEBUG:2020-06-21 15:50:31,413] rhasspyprofile.profile: Loading /usr/lib/rhasspy/lib/python3.7/site-packages/rhasspyprofile/profiles/es/profile.json
[DEBUG:2020-06-21 15:50:30,664] rhasspyserver_hermes: Restarting Rhasspy

Server Log

[DEBUG:2020-06-21 15:50:32,531] speech_to_text.pocketsphinx.mix_weight >0 0 = False
[DEBUG:2020-06-21 15:50:32,530] speech_to_text.pocketsphinx.open_transcription True False = False
[DEBUG:2020-06-21 15:50:32,530] Skipping acoustic_model/variances (/profiles/es/acoustic_model/variances)
[DEBUG:2020-06-21 15:50:32,530] Skipping acoustic_model/transition_matrices (/profiles/es/acoustic_model/transition_matrices)
[DEBUG:2020-06-21 15:50:32,529] Skipping acoustic_model/sendump (/profiles/es/acoustic_model/sendump)
[DEBUG:2020-06-21 15:50:32,529] Skipping acoustic_model/noisedict (/profiles/es/acoustic_model/noisedict)
[DEBUG:2020-06-21 15:50:32,528] Skipping acoustic_model/mixture_weights (/profiles/es/acoustic_model/mixture_weights)
[DEBUG:2020-06-21 15:50:32,528] Skipping acoustic_model/means (/profiles/es/acoustic_model/means)
[DEBUG:2020-06-21 15:50:32,528] Skipping acoustic_model/mdef (/profiles/es/acoustic_model/mdef)
[DEBUG:2020-06-21 15:50:32,527] Skipping acoustic_model/feat.params (/profiles/es/acoustic_model/feat.params)
[DEBUG:2020-06-21 15:50:32,527] Skipping g2p.fst (/profiles/es/g2p.fst)
[DEBUG:2020-06-21 15:50:32,526] Skipping base_dictionary.txt (/profiles/es/base_dictionary.txt)
[DEBUG:2020-06-21 15:50:32,526] speech_to_text.system pocketsphinx pocketsphinx = True
[INFO:2020-06-21 15:50:32,412] rhasspyserver_hermes: Started
[DEBUG:2020-06-21 15:50:32,412] rhasspyserver_hermes: Subscribed to hermes/asr/textCaptured
[DEBUG:2020-06-21 15:50:32,411] rhasspyserver_hermes: Subscribed to hermes/hotword/+/detected
[DEBUG:2020-06-21 15:50:32,411] rhasspyserver_hermes: Subscribed to hermes/nlu/intentNotRecognized
[DEBUG:2020-06-21 15:50:32,410] rhasspyserver_hermes: Subscribed to hermes/intent/#
[DEBUG:2020-06-21 15:50:32,410] rhasspyserver_hermes: Subscribed to rhasspy/asr/master/master/audioCaptured
[DEBUG:2020-06-21 15:50:32,409] rhasspyserver_hermes: Subscribed to hermes/audioServer/master/audioSummary
[DEBUG:2020-06-21 15:50:32,409] rhasspyserver_hermes: Subscribed to hermes/audioServer/master/audioSummary
[DEBUG:2020-06-21 15:50:32,408] rhasspyserver_hermes: Subscribed to rhasspy/asr/master/master/audioCaptured
[DEBUG:2020-06-21 15:50:32,408] rhasspyserver_hermes: Subscribed to hermes/nlu/intentNotRecognized
[DEBUG:2020-06-21 15:50:32,407] rhasspyserver_hermes: Subscribed to hermes/intent/#
[DEBUG:2020-06-21 15:50:32,406] rhasspyserver_hermes: Subscribed to hermes/asr/textCaptured
[DEBUG:2020-06-21 15:50:32,405] rhasspyserver_hermes: Subscribed to hermes/hotword/+/detected
[DEBUG:2020-06-21 15:50:32,404] rhasspyserver_hermes: Connected to MQTT broker
[DEBUG:2020-06-21 15:50:32,401] rhasspyserver_hermes: Connecting to (retries: 0/10)
[DEBUG:2020-06-21 15:50:32,401] rhasspyserver_hermes: Starting core
[DEBUG:2020-06-21 15:50:32,395] rhasspyprofile.profile: Loading default profile settings from /usr/lib/rhasspy/lib/python3.7/site-packages/rhasspyprofile/profiles/defaults.json
[DEBUG:2020-06-21 15:50:32,394] rhasspyprofile.profile: Loading /profiles/es/profile.json
[DEBUG:2020-06-21 15:50:32,393] rhasspyprofile.profile: Loading /usr/lib/rhasspy/lib/python3.7/site-packages/rhasspyprofile/profiles/es/profile.json
[DEBUG:2020-06-21 15:50:31,939] rhasspyserver_hermes: Restarting Rhasspy

Regarding @Zoso’s suggestion: yes, it appears to be the same issue, and the HTTP alternative is working properly. Thanks for that.

I know I’m late to this discussion … but surely setting the satellite’s Rhasspy “Intent Handling” method to [Disabled] prevents the satellite from ever asking the HA server to action anything?

I have assumed that the image in the documentation at is wrong … but what I have been trying to find is which Remote URL I need to direct the satellite to. Unfortunately the documentation referenced gives only the generic "url": "http://<address>:<port>/path/to/endpoint" without giving a clue as to whether it should point at the HA or Rhasspy port, let alone which endpoint path to use.

Quite frankly, I am bamboozled by the Intent Handling options. On a satellite, do we use “Remote HTTP” like the other services we ask another machine to perform … or the “Home Assistant” option, calling directly to the HA/base machine on port 8123?
Or should the satellite use “Remote HTTP” to the base’s Rhasspy on port 12101, which in turn calls “Home Assistant”? No, the tutorial shows the base station’s Intent Handling as also “Disabled”.

I believe this is the same issue I ran into, where the Rhasspy base was stripping the satellite’s ID when passing the intent off to HA.

You can see the GitHub issue I opened here:

Apologies if this is a different issue.

Sorry for not getting back earlier when I got it working.

I note that the Server with Satellites tutorial in the documentation shows neither machine handling the intents :frowning:

Yes, it is the satellite which has to call the HA server, and Intent Handling on the base station can be disabled.

After months of confusion, my conclusion is that it is basically a client-server architecture: the satellite hears the voice, so it initiates the transaction and supervises it through all the steps. Some of the steps can be done remotely, as services on the base Rhasspy machine.

The odd thing is “Dialogue Management”, which isn’t one of the actual steps; I found it needs to happen on the base station and not the satellite.

I am also very unsure what the correct settings are for base + sat + HA + external MQTT.
I have snips: in my HA config and it gets the intents for HA. Before adding snips: nothing happened.
No matter if I click send intents or send events, I only get intents.
It doesn’t matter if I set intent handling on sat, base or both.
I would like to switch to automations in HA like @romkabouter advised, but I don’t know how to get Rhasspy to send an event.

Hi vajdum,
Have you got the intent recognition working? No point actioning intents that haven’t been recognised properly, so I will start by summarising my base-satellite Rhasspy settings:

  • On all machines, set MQTT to External MQTT with the hostname or IP address of the machine running the MQTT server (in my case it’s the MQTT add-on to Home Assistant OS).

  • On the Satellite, configure and test “Audio recording” and “Audio Playing”.

  • I recommend doing the Wake Word processing on the satellite to minimise the audio packets being sent over your Wi-Fi. Porcupine also has a UDP option which is even more efficient.

  • On the satellites, set Speech to Text, Intent Recognition and Text to Speech all to “Hermes MQTT”.

  • On the base machine’s Rhasspy configuration, you can choose whichever Speech to Text, Intent Recognition, and Text to Speech modules work best for you. Make sure to list all the satellite siteIds in the “Satellite siteIds:” fields - the base station processes requests from these satellites only.

  • On the base machine, set “Dialogue Management” to “Rhasspy”, and enter all your satellite siteIds.

  • If the base machine has no microphone or speaker, you can set Audio Recording, Audio Playing and Intent Handling all to “Disabled”.

  • In Rhasspy on the base machine, set some values like these in sentences.ini; save and click [Train]:

[GetTime]
what time is it
(What is | whats) the time
tell me the time

[LightState]
light_name = (living room light | study light | bedroom light) {name}
light_state = (on | off) {state}

turn <light_state> [the] <light_name> [please]
turn [the] <light_name> <light_state> [please]

Now when you speak the Wake Word, pause, and give a command you should see it in the Rhasspy Home screen, like:

vajdum, have you got this far?


If you have the satellite and base recognising the intent … when the words “Porcupine, turn on the study light” are spoken, Rhasspy determines that the “LightState” intent is to be called with slot “name” having a value of “study light” and slot “state” having a value of “on” … as seen here:

… the next step is to get Rhasspy & Home Assistant to action it.

You may need to use a Long-Lived Access Token. This is created at the very bottom of your profile page in Home Assistant.

Click [Create Token], give it a name (I called mine “Rhasspy”), and a long sequence of characters (well beyond the left and right sides of the window) will be shown.

Click your mouse anywhere in the line of code, press the [Home] key to move the cursor to the start of the code, press [Shift-End] to highlight to the end of the code, and [Ctrl-C] to copy the code to clipboard.

In Rhasspy on the satellite, set [Intent Handling] to “Home Assistant” and [Save]. Then add the URL for Home Assistant: the “http://” protocol, hostname or IP address of the Home Assistant machine, and port number :8123.

Go to the browser window with your satellite, paste that long code above into the “Access Token” field, and click [Save Settings].

I am using intents in HA, so I selected the “send intents” … option.

There, that should send your intents to Home Assistant.

Now to Home Assistant and we will add the intents into configuration.yaml so that Home Assistant will action them.

There are several ways to edit the .yaml files, and I have installed the “File editor” Add-on from Configuration > Add-ons.

I have added to my configuration.yaml page:

intent:
intent_script: !include intents.yaml

The keywords “intent:” and “intent_script:” are required. In my case I have chosen to place the actual intents in a separate file to make them easier to edit, but you can simply list the intents directly after “intent_script:”.

In my new intents.yaml file I have:

# intents.yaml - the actions to be performed by Rhasspy voice commands
GetTime:
  speech:
    text: The current time is {{ now().strftime("%H %M") }}

LightState:
  speech:
    text: Turning {{ name }} {{ state }}
  action:
    - service: light.turn_{{ state }}
      data:
        entity_id: light.{{ name | replace(" ","_") }}

And that’s it !

When the words “Porcupine, turn on the study light” are spoken, Rhasspy determines that the “LightState” intent is to be called with slot “name” having a value of “study light” and slot “state” having a value of “on” … as seen in the first image above.

This intent is passed (as a JSON payload) to Home Assistant, where the intent name is looked up in the configuration.yaml (or intents.yaml) file.
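
For the “send intents” option, Rhasspy posts the recognised intent to Home Assistant’s /api/intent/handle endpoint; the payload is roughly this shape (a sketch; the slot values are taken from the example above):

```json
{
  "name": "LightState",
  "data": {
    "name": "study light",
    "state": "on"
  }
}
```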

Intent name “LightState” matches with “LightState:”, and parameters are substituted, so that Home Assistant effectively runs:

LightState:
  speech:
    text: Turning study light on
  action:
    - service: light.turn_on
      data:
        entity_id: light.study_light

Thank you very much for your good explanation, @donburch!
I have a working setup using intents. HA reacts to intents sent by MQTT since I added snips: to my HA config, even if I disable Intent Handling on all Rhasspys.
The problem is, I prefer to use events and automations because they are easier to debug and there is no need to always restart HA.
If I disable snips I don’t get any response from HA, and nothing in the log regarding a missing automation.
So far it’s all set up like you describe, only I had no http:// before my IP address, but this seems not to be the culprit.

Intent settings are:
Access Token •••••••••••••••••••••••••••••••••••••
Send events to Home Assistant (/api/events)
no siteid


Can you post your profile.json of base and sat?

Also, did you create an event listener in the developer tools to debug?
What intents do you have in Rhasspy?

I have intents like GetTime, GetDate and some more.
I can see hermes topics, but I’m not sure how to listen to events.
But I’d expect to see a log entry about a non-existing automation, which does not appear for automations, only for missed intents.
Since I’ve had Rhasspy it has never worked before I added snips:.
Recently I changed my config according to the base/sat config from the docs, but nothing changed; I get nothing into HA without snips:.
I already tried it with a newly created token.

Base settings

{
    "command": {
        "webrtcvad": {
            "max_sec": "8",
            "min_sec": "0.7"
        }
    },
    "dialogue": {
        "satellite_site_ids": "sat",
        "system": "rhasspy",
        "volume": "0.7"
    },
######### intent handling is disabled on settings page! ###############
    "handle": {
        "remote": {
            "url": ""
        },
        "satellite_site_ids": "homeassistant,addon,assistant,hasso"
    },
    "home_assistant": {
        "access_token": "•••••••••••••••••••••••••••••••••",
        "handle_type": "event",
        "url": ""
    },
    "intent": {
        "satellite_site_ids": "sat",
        "system": "fsticuffs"
    },
    "mqtt": {
        "enabled": "true",
        "host": "",
        "password": "•••",
        "site_id": "base",
        "username": "•••"
    },
    "speech_to_text": {
        "kaldi": {
            "min_confidence": "0.6"
        },
        "satellite_site_ids": "sat",
        "system": "kaldi"
    },
    "text_to_speech": {
        "espeak": {
            "volume": "1"
        },
        "nanotts": {
            "volume": "0.65"
        },
        "satellite_site_ids": "sat, homeassistant, assistant",
        "system": "nanotts"
    }
}

Sat settings

{
    "handle": {
        "system": "hass"
    },
    "home_assistant": {
        "access_token": "••••••••••••••••••••••••••••••",
        "handle_type": "event",
        "url": ""
    },
    "intent": {
        "satellite_site_ids": "homeassistant,base",
        "system": "hermes"
    },
    "microphone": {
        "arecord": {
            "device": "plughw:CARD=seeed2micvoicec,DEV=0",
            "siteId": "sat",
            "udp_audio_host": "",
            "udp_audio_port": "12202"
        },
        "system": "arecord"
    },
    "mqtt": {
        "enabled": "true",
        "host": "",
        "password": "•••",
        "port": "1883",
        "site_id": "sat",
        "username": "•••"
    },
    "sounds": {
        "aplay": {
            "volume": "0.8"
        },
        "system": "aplay"
    },
    "speech_to_text": {
        "system": "hermes"
    },
    "text_to_speech": {
        "nanotts": {
            "volume": "0.7"
        },
        "satellite_site_ids": "sat",
        "system": "hermes"
    },
    "wake": {
        "porcupine": {
            "keyword_path": "alexa_raspberry-pi.ppn",
            "sensitivity": "0.3",
            "udp_audio": "12202"
        },
        "satellite_site_ids": "sat",
        "snowboy": {
            "apply_frontend": true,
            "model": "neoya.umdl",
            "sensitivity": "0.7,0,7"
        },
        "system": "porcupine"
    }
}

No difference if I delete this part, @romkabouter:

######### intent handling is disabled on settings page! ###############
    "handle": {
        "remote": {
            "url": ""
        },
        "satellite_site_ids": "homeassistant,addon,assistant,hasso"
    },
    "home_assistant": {
        "access_token": "•••••••••••••••••••••••••••••••••",
        "handle_type": "event",
        "url": ""
    },

Go to developer tools:

There is indeed no such thing :slight_smile:

I think intent handling should be set to Home Assistant on the base, not disabled.
But if it is working with intents, it might work.

First try to listen to events and see if you get something.

Also, have a glance at the wiki:

OK, it’s working now :slight_smile:

I changed intent handling from base to sat because someone stated this is the right way to do it. If I use the base for that, I don’t have to add tokens to each sat.

So is the REST command rhasspy_speak only needed for intents, or can I replace it there too?
How can I use Rhasspy for speech from HA without it being triggered from Rhasspy?

And now I again have the problem of how to pass data to scripts.

This seems not to work:


service: script.lms_play_playlist
data:
  eventdata: '{{ }}'


entity_id: '{{ eventdata.player | string }}'

I remembered the Git wiki page but couldn’t find it.
The page uses payload_template and service_template.
Isn’t service_template no longer needed? What about payload?

The REST command is not needed at all. You can add an mqtt.publish right in the automation, as stated in the wiki here:

You can post a message to ttsSay with an mqtt.publish service call.
It is almost the same as the endSession, but with some other data, found here:
Remember to use the siteId if you want speech on a satellite.
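
For example, such a service call in an automation might look like this (a sketch; hermes/tts/say is the standard Hermes topic, while the text and the siteId "sat" are just examples):

```yaml
action:
  - service: mqtt.publish
    data:
      topic: hermes/tts/say
      # siteId selects which satellite speaks the text
      payload: '{"text": "The coffee is ready", "siteId": "sat"}'
```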

Depends; the example is there so that you can use one automation for turning on AND off.
The service_template is used to get either light.turn_on or light.turn_off, depending on the state which comes from the event (as a slot).

I think you want {{ }} on your service call, where player is the slot name.
Hard to tell how you have it, but something like this should work:

With “play my playlist on (livingroom | kitchen){player}”, the slot {player} will hold livingroom or kitchen.
You should then be able to use that in the script, passed as eventdata.

service: script.lms_play_playlist
data:
  eventdata: '{{ }}'

In the script

entity_id: '{{ eventdata }}'
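
Putting the pieces together, a sketch of the whole chain might look like this (the event name rhasspy_PlayPlaylist and the media_player entity pattern are assumptions, not taken from your config):

```yaml
automation:
  - alias: "Rhasspy play playlist"
    trigger:
      # fired by Rhasspy's event-based intent handling
      - platform: event
        event_type: rhasspy_PlayPlaylist
    action:
      - service: script.lms_play_playlist
        data:
          # the {player} slot arrives in the event data
          eventdata: "{{ trigger.event.data.player }}"

script:
  lms_play_playlist:
    sequence:
      - service: media_player.media_play
        data:
          entity_id: "media_player.{{ eventdata }}"
```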