Rhasspy 2.5 Pre-Release

Well, the Snips NLU is fully open source, and I really like how it's performing. The Snips wakeword detection worked pretty well too, but as far as I remember that is not open source. (Would love to be proven wrong on this xD)

The main problem people appear to have with it is that cross compiling their rust stuff for the pi sucks a bit. :thinking:
Are the slots/entities of Rhasspy any different from the slots/entities of the Snips NLU? The Snips NLU uses a JSON structure for its training examples; you would have to generate that structure and then pass it to the NLU's training feature, which will train the model.
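To make the shape concrete, here is a sketch of building that training JSON in Python (field names follow the Snips dataset format; the intent and entity names are invented for the example):

```python
import json

def utterance(*parts):
    """Build one Snips NLU training utterance.

    Plain strings become literal text chunks; (text, entity, slot_name)
    tuples become tagged slot chunks.
    """
    data = []
    for part in parts:
        if isinstance(part, str):
            data.append({"text": part})
        else:
            text, entity, slot_name = part
            data.append({"text": text, "entity": entity, "slot_name": slot_name})
    return {"data": data}

# Invented intent/entity names, just to show the overall structure.
dataset = {
    "language": "en",
    "intents": {
        "TurnOnLight": {
            "utterances": [
                utterance("turn on the ", ("kitchen", "room", "room"), " light"),
            ]
        }
    },
    "entities": {
        "room": {
            "data": [],
            "use_synonyms": True,
            "automatically_extensible": False,
            "matching_strictness": 1.0,
        }
    },
}

print(json.dumps(dataset, indent=2))
```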

1 Like

Hi all,
Just pulled the latest Docker of Rhasspy 2.5.0-pre and very impressed by the general speed of response - it’s much slicker than the earlier version I was trying before. Thanks @synesthesiam and all involved in this great project!
I’ve got just one issue I’m hoping for advice about: when I speak an intent that involves feedback speech (from Home Assistant, such as GetTime), I get more than one (sometimes up to 10) repeats of the end-of-speech sound, and then the same number of repeats of the reply speech. In the Portainer log for my Rhasspy container, I see this warning:

[WARNING:2020-04-16 19:27:58,685] rhasspytts_cli_hermes: Did not receive playFinished before timeout

How can I diagnose/fix this?

1 Like

It has good language support and seems like a nice complement to fsticuffs and fuzzywuzzy. Not as strict as fsticuffs, but intelligently flexible instead of literally matching text.

Rhasspy’s slots can have substitutions and conversions inside them, so I’ll do my best to take those into account. A Rhasspy slot value like foo:bar!upper will match foo from the ASR, but should output BAR in the intent.
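To illustrate (this is a toy sketch, not Rhasspy's actual parser), how an entry like foo:bar!upper maps the spoken form to the emitted value:

```python
def parse_slot_value(line):
    """Parse a Rhasspy-style slot entry like 'foo:bar!upper'.

    Returns (spoken, emitted): the text the ASR should match and the
    value that should appear in the recognized intent.
    """
    # Converter suffix: '!upper' applies str.upper to the output.
    # Only two converters here; the real system supports more.
    converters = {"upper": str.upper, "lower": str.lower}
    text, _, conv = line.partition("!")
    spoken, _, out = text.partition(":")
    out = out or spoken  # no substitution: emit the spoken text
    if conv:
        out = converters[conv](out)
    return spoken, out

print(parse_slot_value("foo:bar!upper"))  # ('foo', 'BAR')
```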

Yikes, that’s really bad. I’ll test speech again from HA and see if I have the same issue. Is this with master/satellite or just a single system?

The audio output issues are much more difficult to diagnose than in 2.4, where everything was in the same Python process. The dialogue manager pauses the wakeword/ASR systems while TTS or other audio is playing to avoid feedback, but what should it do if the playFinished message never comes (service crash, etc.)? So I guess the fallback is the WAV duration plus a timeout. Maybe I should add a small delay to give the audio service more time…

Answering my own question - something made me try switching from the internal MQTT broker to an external broker (the Mosquitto 5.1 MQTT add-on of Home Assistant, running in a different Docker container on the same machine) - and ta-da! The multiple repeats stopped! (For completeness, I had also rebooted the whole machine about 10 minutes earlier.)
I have the feeling that the broker built into Rhasspy is somehow more sluggish, but I have no idea why.
Hope this experience may help others. :smile:

The Snips NLU has the concept of slot values and synonyms, so you should be able to build this using:

"entities": {
    "slot_name": {
        "data": [
            {
                "value": "BAR",
                "synonyms": ["foo"]
            }
        ],
        "use_synonyms": true,
        "automatically_extensible": false,
        "matching_strictness": 1.0
    }
}

For more complex converters this might not be applicable -> you might have to go over the results of the Snips NLU and run the converter afterwards. I suppose that's what you're doing for the other solutions as well.
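Sketching that conversion for simple substitutions (a sketch only; converter suffixes like !upper would still need the separate post-processing pass mentioned above):

```python
def to_snips_entity(slot_values):
    """Convert Rhasspy-style slot entries like 'foo:bar' into a Snips
    entity description, mapping the spoken form to a synonym of the
    output value. Field names follow the Snips dataset format.
    """
    data = []
    for entry in slot_values:
        spoken, _, out = entry.partition(":")
        value = out or spoken          # substitution target, if any
        synonyms = [spoken] if out else []
        data.append({"value": value, "synonyms": synonyms})
    return {
        "data": data,
        "use_synonyms": True,
        "automatically_extensible": False,
        "matching_strictness": 1.0,
    }

entity = to_snips_entity(["foo:bar", "kitchen"])
print(entity["data"])
```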


Hi all,

First thank you for your great Work!

I want to try DeepSpeech, but it's not working. I did a fresh install (Docker) on a Raspberry Pi 4 with the German profile, but I get the error 'TrainingFailedException: CreateModel failed with error code 12288' after downloading the files for DeepSpeech. How can I deal with this?

Thank you for your help

1 Like

Can you check in the deepspeech directory of your profile and tell me how big the output_graph file is?

Also please let me know the output of this command on the Pi 4:

python3 -c 'import platform; print(platform.machine())'

I’m guessing the Pi 4 reports something other than arm64.
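For context, a sketch of how the reported machine string could steer the model choice (the mapping here is my assumption for illustration, not Rhasspy's actual logic):

```python
import platform

# Assumed mapping: machine strings that need the lighter TensorFlow
# Lite graph (output_graph.tflite) rather than the full model.
# Illustrative only - not Rhasspy's real table.
TFLITE_MACHINES = {"armv6l", "armv7l", "aarch64", "arm64"}

def needs_tflite(machine=None):
    """Return True when the TensorFlow Lite graph should be used."""
    machine = machine or platform.machine()
    return machine in TFLITE_MACHINES

print(needs_tflite("armv7l"))  # True
```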

armv7l is the usual for Raspbian on the Pi 4. The Pi 3 is just armv7, I think, which may cause you problems as well; the Zero might report something different, but I forget.

1 Like

It's armv7l

1 Like

Can you check in the deepspeech directory of your profile and tell me how big the output_graph file is?

it’s 188,939,505 bytes

python3 -c 'import platform; print(platform.machine())'


Thank you for your help!


I ran into an issue with uploading Slots. Here’s the logging from my code:

04-22 12:08:49 rhasspy.updater.RhasspyUpdater INFO     {"Room": ["Dining", "Kitchen", "Landing", "Guest room", "Study","Backyard"]}
04-22 12:08:49 urllib3.connectionpool DEBUG    Starting new HTTP connection (1):
04-22 12:08:49 urllib3.connectionpool DEBUG "POST /api/slots?overwrite_all=true HTTP/1.1" 500 44
04-22 12:08:49 rhasspy.updater.RhasspyUpdater ERROR    b"TypeError: 'NoneType' object is not iterable" 

The first line is the content I’m sending to the slots endpoint, the second shows the url, and the last line is a print of response.content (I’m using the requests library in python).

Slightly off topic… I recently got the 2.5-pre image running. As I move forward with it and have questions etc., should we just put all the 2.5 questions on this thread, or in regular posts?

I don’t want to create a bunch of posts regarding 2.5 if we aren’t there yet :slight_smile:

1 Like

Regarding the MQTT option in 2.5: I don't want to push all my audio frames to an external broker, so I was trying to connect to the internal MQTT broker, but I can't connect to it. It seems the MQTT server is only reachable internally. Is there a way to connect directly to it? Or would I be better off spinning up an MQTT Docker container on the same box, configuring Rhasspy to use that one, so others can connect at the same time?


OK, this makes sense now. The German DeepSpeech model I found did not include a graph for Tensorflow Lite. I’m not sure how to do this, but it’s going to be necessary for running it on a Raspberry Pi.

Make sure to use port 12183. The internal broker doesn’t use the standard MQTT port to avoid conflicting.
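If you want to verify the port is reachable before pointing a client at it, here is a small stdlib check (the host and port are just the defaults mentioned above):

```python
import socket

def broker_reachable(host="localhost", port=12183, timeout=1.0):
    """Try a plain TCP connect to the broker port.

    Returns True if something is accepting connections there; this
    only proves the port is open, not that it speaks MQTT.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```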

Does the Rhasspy log contain any more info (a line number)? I’m not seeing yet where the problem could be.


Log contains this:

[ERROR:2020-04-24 21:28:49,248] rhasspyserver_hermes: 'NoneType' object is not iterable
Traceback (most recent call last):
  File "/usr/lib/rhasspy-voltron/.venv/lib/python3.7/site-packages/quart/app.py", line 1821, in full_dispatch_request
    result = await self.dispatch_request(request_context)
  File "/usr/lib/rhasspy-voltron/.venv/lib/python3.7/site-packages/quart/app.py", line 1869, in dispatch_request
    return await handler(**request_.view_args)
  File "/usr/lib/rhasspy-voltron/rhasspy-server-hermes/rhasspyserver_hermes/__main__.py", line 1600, in api_slots
    save_slots(slots_dir, new_slot_values, overwrite_all=overwrite_all)
  File "/usr/lib/rhasspy-voltron/rhasspy-server-hermes/rhasspyserver_hermes/__main__.py", line 428, in save_slots
    for name in new_slot_values:
TypeError: 'NoneType' object is not iterable

I’ve had a bit of a look at the code. The old code (2.4.x, app.py) does:

new_slot_values = json5.loads(await request.data)

The new code (rhasspyserver_hermes/__main__.py) does not do a json.loads():

new_slot_values = await request.json (line 1598)

At first I wasn’t sure if that might work or not, because the request.json() might return a ready-to-use dict, but as an example, in api_slots_by_name, json.loads is used again:

data = await request.data (line 1641)
slot_path = slots_dir / name
slot_values = set(json.loads(data))

So, could that be the problem?

Trying to reproduce this, but failing so far. Is it possible the client is not setting Content-Type to application/json? I could see that causing request.json to return nothing, otherwise I’d expect it to be a dict.
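A quick way to rule that out on the client side: a stdlib sketch that builds the same POST with the header set explicitly (the URL and payload are taken from the log above):

```python
import json
import urllib.request

def build_slots_request(base_url, slots):
    """Build a POST request for Rhasspy's /api/slots endpoint.

    Setting Content-Type to application/json is what lets the server's
    request.json parse the body; without the header, it can be None.
    """
    body = json.dumps(slots).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/slots?overwrite_all=true",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_slots_request("http://localhost:12101",
                          {"Room": ["Dining", "Kitchen"]})
# urllib normalizes header names to capitalized form internally
print(req.get_header("Content-type"))  # application/json
```

With the requests library, `requests.post(url, json=payload)` sets this header automatically, whereas `data=json.dumps(payload)` does not.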

The example from line 1641 could use await request.json as well. In my head, I was accepting a list instead of a JSON object, so I manually parsed it…

Ok, I got it working. The initial Docker setup didn't have the port passed in to the container. I updated it, and now I can connect to the MQTT broker. This is exactly what I needed!

1 Like