External MQTT config preventing startup for add-on

I am trying to set up a satellite/base server arrangement. I have a Raspberry Pi that is mostly working, except for intent handling in Home Assistant. First question: should I scrap the idea of a satellite/base setup for now and just use the Pi? I was having issues connecting to HA for intent handling this way, but I am having severe problems with the HA add-on (as below).

I installed the Mosquitto add-on in HA for MQTT, and also the Rhasspy add-on in HA. I was having trouble for some reason connecting to MQTT when switching to external, so I tried changing the port to 1884 in the Rhasspy config. Now the web GUI never loads and I get these lines continually in the log; it never stops trying. I cannot figure out where to change the config manually to get out of this loop, since the web GUI never starts. How do I manually change this to get out of this nasty loop? Or, how do I wipe the config (these settings are not in the add-on config)? I tried to uninstall and reinstall, but the config persisted.

rhasspyserver_hermes: Connecting to 192.168.1.198:1884 (retries: 0/10)

You can change your settings in profile.json. On my Pi I used Docker, so I have a mounted volume and it's located here: ~/docker_data/rhasspy/profiles/en

On home assistant it’s located here:
/root/share/rhasspy/profiles/en

You can access it from the Terminal add-on in Home Assistant.
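For reference, the MQTT settings live under the mqtt key in that file. A minimal sketch of the relevant section, using the broker address and port from your log; the exact keys can differ between Rhasspy versions, so compare against your existing file (username and password here are placeholders):

```json
{
  "mqtt": {
    "enabled": "true",
    "host": "192.168.1.198",
    "port": "1884",
    "username": "your-mqtt-user",
    "password": "your-mqtt-password"
  }
}
```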

This worked! Thank you! I still don’t have intent handling working, but I will start another thread.

I can help you with that as well. Are you using Node-RED?

I am just starting out, so I'm open to suggestions. I am currently trying intents in HA, but originally I was missing the port number in the HA URL; the default URL didn't work for me from the add-on. It is working now, but it is cumbersome to test in HA since I have been restarting the whole HA service to reload the intent_script section. Is Node-RED the recommended way?
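For anyone following along, a minimal intent_script entry in Home Assistant's configuration.yaml looks roughly like this; the GetTemperature intent name and the entities are made up for illustration:

```yaml
# configuration.yaml (Home Assistant); intent and entity names are illustrative
intent_script:
  GetTemperature:                 # must match an intent name trained in Rhasspy
    speech:
      text: "It is {{ states('sensor.living_room_temperature') }} degrees."
    action:
      - service: light.turn_on    # optional: any service call you like
        target:
          entity_id: light.kitchen
```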

There is not a recommended way to go, in my opinion. It's just a matter of preference; I have never used the Home Assistant intent model, but I prefer Node-RED.

Here is a Node-RED flow that I started with:

If you want to start from scratch, you just need to set up an MQTT listener for the topic “hermes/intent/#”.

When Rhasspy recognizes an intent from a satellite, it will publish it to MQTT under the above topic.
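If you want to see what arrives on that topic without Node-RED, here is a bare-bones Python listener as a sketch, assuming the paho-mqtt package (1.x callback style) and the broker address and port used earlier in this thread:

```python
# Bare-bones listener for Rhasspy intents (sketch, paho-mqtt 1.x callback style).
import json

import paho.mqtt.client as mqtt

BROKER_HOST = "192.168.1.198"  # broker address from earlier in the thread
BROKER_PORT = 1884             # non-default port from earlier in the thread

def on_connect(client, userdata, flags, rc):
    # Subscribe to every recognized intent, whatever its name.
    client.subscribe("hermes/intent/#")

def on_message(client, userdata, msg):
    # Hermes NluIntent payload carries the intent name, siteId, and slots.
    payload = json.loads(msg.payload)
    intent_name = payload["intent"]["intentName"]
    site_id = payload.get("siteId", "default")
    slots = {s["slotName"]: s["value"]["value"] for s in payload.get("slots", [])}
    print(f"[{site_id}] {intent_name}: {slots}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, BROKER_PORT)
client.loop_forever()
```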

Thanks for the suggestions. I prefer having fewer add-ons to manage, since each one seems to add an extra layer of problems, but the intent-based handling in Home Assistant is very difficult for me to debug and test.

For my understanding: I thought that typically the base station does the intent recognition. I have the satellite doing wake word, audio recording, and playback. I'm still not sure why the satellite needs to know about the other steps through MQTT. Still learning about this great system, but what I have working works well.

Sorry. I probably poorly worded that.

The satellite performs wake word detection/recording and sends all audio to the main Rhasspy server. The Rhasspy server processes the audio, does the intent recognition, and then publishes the result to the MQTT topic.

I actually removed Rhasspy from the satellites and decided to go this direction for my satellites: (it's a satellite replacement that I wrote that works with Rhasspy)

My wife did not approve of the Raven wake word, and I was not on board with using Picovoice. So I had to write something that worked better for her, where she didn't have to repeat the wake word and the false positives were gone (i.e. a better user experience for her, and it works for my family members).

My #1 rule now is … stick to one learning curve at a time - otherwise they add up and become a mountain.

External MQTT is necessary for a Base + Satellite configuration; Node-RED is great, but it can be left till later.

Sorry guys, but I think you are both assuming that the base station is controlling the processing.

The documentation doesn’t do a good job of explaining Base + Satellite mode, because it was added on afterwards, and using the same GUI for both doesn’t help either. It took me a while to realise it is a classic “client/server architecture” where the server (base station) is only providing services to the client (satellite).

In the classic all-in-one Rhasspy, the Rhasspy machine hears the sound, listens for the wake word, and performs all the processing phases, down to sending the intent to the system (e.g. Home Assistant) to action the command.

The satellite hears noise through its microphone, listens for the wake word, and supervises the whole process, including calling services on the base station to do some of the more CPU-intensive operations. The last steps are to send the intent to a system (such as Home Assistant) to action the command, and optionally to speak some response to the user.

The base system needs to be configured ONLY with External MQTT and the services which it provides to the satellites. In fact the system is modular so you could have several base systems providing different services.
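To make that concrete, a hypothetical base profile.json along those lines might look like the sketch below: external MQTT enabled, plus satellite IDs on the services the base provides. Key names and values may differ between Rhasspy versions, so treat this as illustrative and compare with your own profile.json:

```json
{
  "mqtt": {
    "enabled": "true",
    "host": "192.168.1.198",
    "port": "1884"
  },
  "speech_to_text": {
    "system": "kaldi",
    "satellite_site_ids": "satellite1"
  },
  "intent": {
    "system": "fsticuffs",
    "satellite_site_ids": "satellite1"
  }
}
```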

As for passing intents to Home Assistant, you might like to check my recent post in another thread.

It is. There are only two functions that I had to implement in alice_satellite to get everything working: (function one) send an MQTT HotwordDetected message to Rhasspy; once the Rhasspy server receives the request, it asks the satellite to (function two) stream audio to it. The satellite performs no other processing or functionality.
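For the curious, function one boils down to a single publish on the Hermes hotword topic. A sketch in Python with illustrative values; see the Rhasspy reference for the full message schema:

```python
# Sketch of "function one": announce a wake word detection to the base.
# Topic and payload follow the Hermes hotword message; values are illustrative.
import json

import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("192.168.1.198", 1884)

payload = {
    "modelId": "default",    # which wake word model fired
    "siteId": "satellite1",  # must match a satellite ID configured on the base
}
# The middle topic segment is the wake word ID.
client.publish("hermes/hotword/default/detected", json.dumps(payload))
```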

This is not accurate in my setup. Everything besides what I mentioned above is being processed on the Rhasspy server. With Node-RED you should have intent handling disabled on the Rhasspy server.

This is my server config:

I am the new guy to this space (I've only been here a month), so I am happy to be corrected.

That is correct; you can also let the hotword detection be done by the server. But then the satellite has to stream audio the whole time to the base, as is the case with the ESP32 Rhasspy satellite.

Depends, you can enable that as well.

I have written some documents in the wiki here: Home · rhasspy/rhasspy Wiki · GitHub
Several ways are documented there, closely following Home Assistant as the intent handler,
using Node-RED, events, or intents as options for processing.
In no way is that “the way to go”, but it might help new people a lot.

Thank you for this link. This info is much more helpful than the web documentation. I don't intend to use other voice assistants, and I am very comfortable writing automations in YAML or the GUI, so this guide has cemented my view that event-based handling is better for my needs. Thank you!
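For reference, with event-based handling Rhasspy fires a Home Assistant event named rhasspy_ followed by the intent name, with the slots as event data, so an automation like this picks it up (the intent and entity names are invented for illustration):

```yaml
# Home Assistant automation reacting to a Rhasspy event.
# The intent and entity names are illustrative.
automation:
  - alias: "Rhasspy turns on the kitchen light"
    trigger:
      - platform: event
        event_type: rhasspy_TurnOnKitchenLight   # rhasspy_<IntentName>
    action:
      - service: light.turn_on
        target:
          entity_id: light.kitchen
```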

I was not involved when the Base + Satellite approach was bolted on to the all-in-one Rhasspy, so I do not know. I was myself rather confused trying to piece together snippets in the official documentation, and things seemed to fall into place when thinking of classic IT client-server architecture. Well, except for “Dialogue Management”.

Snap! My Rhasspy base is configured exactly the same: external MQTT plus only those services which it is providing to the clients. And yes, Node-RED can use the MQTT messages, and so does not require any “Intent Handling” configured anywhere in Rhasspy.

I have no knowledge of Project Alice, and a quick look at the documentation I could find was exceedingly confusing, particularly this TIP:

You need the AliceSatellite installed and running on your main unit!

I understand that @romkabouter was involved in contributing to Rhasspy at the time of the change to Satellite + Base, so I defer to his knowledge on the subject …

My view is that Rhasspy is intended as a modular toolkit which can be utilised in very many different ways. You not only have a choice of which module to use at each stage of the Rhasspy processing, but even of how to call and link the modules. The only fixed requirement is MQTT.

Lots of choices mean that there is no one correct way to do something, and when all the options are documented together with equal weight it becomes difficult to work out which ones are intended to work together. Hence the need for user documentation, including tutorials.

@romkabouter are you just acknowledging that ProjectAlice has a user interface with fewer controls? Or saying that Rhasspy does NOT use a client-server architecture? If so, can you please explain the flow of control in a Rhasspy Satellite + Rhasspy Base configuration.

Does the satellite pass control to the base station when the wake word is detected, and not get it back until the base is ready to speak the reply? If so, which minimum settings on the satellite are actually used?

Is that why dialogue management runs on the base station, and not on the satellite?

I think you have confused my project with another.

I have no idea what this is from.

alice_satellite was just posted a few days ago.

ProjectAlice is a totally different software setup, originally built on Snips and developed by @maxbachmann.
I think it is currently abandoned, but I am not sure.

Rhasspy can be client-server, but it can also just be a single-instance setup.
Actually, “client-server” does not quite cover it, because the server can also be a device with a mic attached (i.e. also a client).

Correct, although “control” is a bit of a heavy word. For a satellite, the bare minimum is audio in.
If you want some feedback on the sat, audio out is also needed.
That is the core of the ESP32 satellite, though it also has some more logic controlling the LEDs and such. However, it is better to have wake word detection on the sat as well, because otherwise you need a constant audio stream over the network.

First, the definition of a satellite needs some nuance, because it can be another Pi running Rhasspy or maybe just an ESP32 device supported by the streamer.
Also, it is not really control. The base is subscribed to all hermes/audioServer topics, so if a sat is publishing to hermes/audioServer/satname/audioFrame, Rhasspy knows about it.
Depending on whether you have added satname to the satellite IDs in the various base settings, the base actually does something with it. Those messages carry a siteId property, so whatever is published carries that siteId across all the related messages. It is just a message bus, so to speak, and the satellite must have the code to process a message if needed or wanted.
Check here for the reference: Reference - Rhasspy
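To make the audioFrame part concrete, here is a sketch of the satellite side in Python: each message is a small WAV chunk published on the satellite's own topic. The site name, broker address, and audio parameters are illustrative:

```python
# Sketch of the satellite streaming audio to the base over MQTT.
# Each message is a complete little WAV chunk; parameters are illustrative.
import io
import wave

import paho.mqtt.client as mqtt

SITE_ID = "satname"  # the ID the base's satellite settings must include

client = mqtt.Client()
client.connect("192.168.1.198", 1884)

def publish_frame(pcm_chunk: bytes) -> None:
    # Wrap raw 16 kHz, 16-bit mono PCM in a WAV container before publishing.
    buf = io.BytesIO()
    with wave.open(buf, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(16000)
        wav.writeframes(pcm_chunk)
    client.publish(f"hermes/audioServer/{SITE_ID}/audioFrame", buf.getvalue())
```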

There is also a nice picture of two different sats; the second picture has only input, wake word, and output. And even the wake word can be done on the base.

Covered in depth in the tutorials
https://rhasspy.readthedocs.io/en/latest/tutorials/

Yes.

ProjectAlice is a totally different software setup, originally built on Snips and developed by @maxbachmann.
I think it is currently abandoned, but I am not sure.

I was one of the maintainers until around two years ago, when Rhasspy was getting traction. Since I prefer the architecture of Rhasspy, I stopped working on ProjectAlice back then. Last time I checked, the author (Psychokiller1888 (Psycho) · GitHub) was still working on it, but I agree that development has, at the very least, slowed down a lot.