Training issue with Rhasspy 2.5: Missing online.conf

Hi all,
Thanks for this nice work. I am a former user of Snips, so Rhasspy is definitely the way to go next!

I have switched to this pre-release to get full compatibility with the Hermes protocol. I am using a Pi 3 + Matrix Voice running a Docker-based Rhasspy for the audio server / wake word, sending frames to a Docker-based Rhasspy master with Kaldi (open transcription). Intents are processed by a Docker-based Rasa NLU and TTS by a Docker-based MaryTTS.
It was pretty much working until a recent Rhasspy Docker pull, after which Kaldi shows a stack trace looking for an online.conf file. I scanned through the files in /profiles/fr/* but did not find it in any other location (only an online_cmvn.conf in a conf dir).
I did not see such an issue in previous posts. Any ideas?

[DEBUG:2020-04-05 08:11:30,171] rhasspyasr_kaldi_hermes: Receiving audio
[DEBUG:2020-04-05 08:11:30,183] rhasspyasr_kaldi.transcribe: error (online2-tcp-nnet3-decode-faster[5.5]:ReadConfigFile() cannot open config file: /profiles/fr/kaldi/model/online/conf/online.conf
[DEBUG:2020-04-05 08:11:30,183] rhasspyasr_kaldi.transcribe: [ stack-trace: ]
[DEBUG:2020-04-05 08:11:30,183] rhasspyasr_kaldi.transcribe: /usr/lib/rhasspy-voltron/.venv/tools/kaldi/ const+0xa71) [0x7f1195c214cf]
[DEBUG:2020-04-05 08:11:30,183] rhasspyasr_kaldi.transcribe: /usr/lib/rhasspy-voltron/.venv/tools/kaldi/online2-tcp-nnet3-decode-faster(kaldi::MessageLogger::LogAndThrow::operator=(kaldi::MessageLogger const&)+0x11) [0x55e3d638b9bf]
[DEBUG:2020-04-05 08:11:30,184] rhasspyasr_kaldi.transcribe: /usr/lib/rhasspy-voltron/.venv/tools/kaldi/<char, std::char_traits<char>, std::allocator<char> > const&)+0x2b9) [0x7f11960f01a1]
[DEBUG:2020-04-05 08:11:30,184] rhasspyasr_kaldi.transcribe: /usr/lib/rhasspy-voltron/.venv/tools/kaldi/, char const* const*)+0x13a) [0x7f11960f07ea]
[DEBUG:2020-04-05 08:11:30,184] rhasspyasr_kaldi.transcribe: /usr/lib/rhasspy-voltron/.venv/tools/kaldi/online2-tcp-nnet3-decode-faster(main+0x799) [0x55e3d63882cd]
[DEBUG:2020-04-05 08:11:30,187] rhasspyasr_kaldi.transcribe: /lib/x86_64-linux-gnu/ [0x7f1194edf1e3]
[DEBUG:2020-04-05 08:11:30,191] rhasspyasr_kaldi.transcribe: /usr/lib/rhasspy-voltron/.venv/tools/kaldi/online2-tcp-nnet3-decode-faster(_start+0x2a) [0x55e3d63866ca]
[DEBUG:2020-04-05 08:11:30,191] rhasspyasr_kaldi.transcribe: kaldi::KaldiFatalError
[DEBUG:2020-04-05 08:11:30,276] rhasspyasr_kaldi_hermes: <- AsrStopListening(siteId='master', sessionId='82b85751-5756-4afd-8590-c6aef994e385')
[DEBUG:2020-04-05 08:11:50,288] rhasspyasr_kaldi_hermes: -> AsrTextCaptured(text='', likelihood=0, seconds=0, siteId='master', sessionId='82b85751-5756-4afd-8590-c6aef994e385', wakewordId='')
[DEBUG:2020-04-05 08:11:50,295] rhasspyasr_kaldi_hermes: Publishing 134 bytes(s) to hermes/asr/textCaptured


Perhaps something went wrong with the installation of Kaldi when the fr profile was downloaded and extracted. In my similar Docker-based setup, that specific file on the server contains the following content
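(For reference only — this is not necessarily your file, just a typical shape.) The online.conf written by Kaldi's online decoding setup mostly points at other config files; assuming the standard layout, it looks roughly like:

```
--feature-type=mfcc
--mfcc-config=/profiles/fr/kaldi/model/online/conf/mfcc.conf
--ivector-extraction-config=/profiles/fr/kaldi/model/online/conf/ivector_extractor.conf
```

The exact paths and options depend on the model, so treat this only as a sanity check for what should be there.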


Maybe you can delete your profile folder and start over by restarting your server container.
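For a Docker setup, the steps might look like this (the container name "rhasspy" and the profiles path are assumptions; use whatever your `docker run` options map):

```
docker stop rhasspy
rm -rf ~/.config/rhasspy/profiles/fr   # wherever your profiles volume is mapped
docker start rhasspy                   # Rhasspy should then offer to re-download the missing profile files
```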

There is a Backup button on the Settings tab to export your current setup. Although in my case recreating the profile is not that much work.

Hi @oggyjack, this file should be generated during training. Do you see any errors then?

Thanks for your feedback; you put me on the right track. I had recreated my master Rhasspy Docker container (the one hosting Kaldi) a couple of times, but always hit the same missing-file issue. I then changed how I set up Rhasspy: I first loaded Kaldi without “open transcription” ticked, trained the instance, and verified it with a default sentence; that worked. I then ticked “open transcription” again, which triggered some 14 downloads, but those downloads all timed out a few times. So I copied the “base_graph” dir from my older profile and changed the setting in supervisord.conf, because the path in that conf does not match the installation. Mine has to be --graph-dir /profiles/fr/kaldi/model/model/base_graph, with an extra “model” dir in the path. It is now working.
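Concretely, for anyone hitting the same thing: the setting I changed in supervisord.conf is the --graph-dir argument on the Kaldi command line (the rest of the command is omitted here):

```
--graph-dir /profiles/fr/kaldi/model/model/base_graph
```

Note the doubled “model” segment, which is where the downloaded files actually ended up in my case.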

Now I face another issue which I need to investigate. When triggering a session start, the message is queued with a long delay. The text is then played, but the played text is also captured by the next audio listening step (the bot is listening to itself). I’ll try to investigate further.


Thanks for catching this. The extra “model” dir was part of the problem, but I also needed to run the prepare_online_decoding step, even for open transcription.
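For anyone curious, that step is Kaldi's standard script for generating the online decoding configs. A rough sketch of the invocation (run from a Kaldi egs directory so steps/ and utils/ resolve; the directories below are illustrative, not the exact ones Rhasspy uses):

```
steps/online/nnet3/prepare_online_decoding.sh \
  --mfcc-config conf/mfcc_hires.conf \
  data/lang exp/nnet3/extractor exp/chain/tdnn model/online
```

This is the step that produces online/conf/online.conf, the file the decoder was failing to find.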

A fix for this should be on its way tonight. If you want to avoid re-downloading the base_graph contents, just make sure they’re at fr/kaldi/model/base_graph (without the extra “model”).
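If you are moving the files by hand, the fix is just relocating base_graph one level up. The snippet below demonstrates the move in a throwaway directory (in a real profile the root would be something like /profiles/fr — an assumption about your volume layout):

```shell
# Simulate the broken nested layout in a temp dir, then apply the fix:
PROFILE="$(mktemp -d)"
mkdir -p "$PROFILE/kaldi/model/model/base_graph"     # extra "model" segment

# The actual fix: move base_graph up one level.
mv "$PROFILE/kaldi/model/model/base_graph" "$PROFILE/kaldi/model/base_graph"
rmdir "$PROFILE/kaldi/model/model"                   # remove the now-empty dir

ls -d "$PROFILE/kaldi/model/base_graph"              # the corrected path
```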