I’ve noticed a lag between saying a command like “turn on the living room lamp” and the lamp actually turning on. I’m using pocketsphinx for STT on a Raspberry Pi 3, if that’s relevant. When I run htop, I see a lot of CPU being consumed by the Precise engine. Is this to be expected? I’m very happy with “hey mycroft” as the wake word (almost no false positives, and great sensitivity), but I’d consider switching to a different, less resource-intensive wake word if it gets me faster responses to commands (if the CPU usage is indeed responsible for the lag). Thanks!!
Keep in mind that 60% is not an absolute figure: in htop, 100% is relative to a single core, so on 4 cores the maximum load shown is 400%.
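To make that concrete, converting htop’s per-process percentage into a share of the whole machine is just a division by the core count (the numbers below are illustrative, matching the 4-core Pi 3 in this thread):

```python
# htop reports per-process CPU% where 100% == one fully busy core.
# On a quad-core Pi 3, total machine capacity is therefore 400%.
def share_of_machine(process_percent: float, cores: int) -> float:
    """Return the fraction of total CPU capacity a process is using."""
    return process_percent / (100.0 * cores)

print(share_of_machine(60.0, 4))   # 0.15 -> only 15% of the whole Pi
print(share_of_machine(400.0, 4))  # 1.0  -> all four cores saturated
```

So a process showing 60% in htop is only using about 15% of the Pi 3’s total capacity.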
Pocketsphinx can run really slowly on a Pi 3 depending on which type of model is used; I think this will be the bottleneck. I run Precise on a Pi 3A+ without a problem, but it’s too slow running ASR with Kaldi for my taste.
I would probably recommend upgrading to a Pi 4 sometime in the future, as this will make a huge difference if you want to achieve sub-real-time performance.
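For reference, “sub real time” means a real-time factor (RTF) below 1.0, i.e. decoding an utterance faster than it takes to speak it. A minimal sketch of the metric (the timings below are made-up examples, not measurements from a Pi):

```python
def real_time_factor(processing_seconds: float, audio_seconds: float) -> float:
    """RTF < 1.0 means the recognizer keeps up with live audio;
    RTF > 1.0 means it falls behind, which is felt as command lag."""
    return processing_seconds / audio_seconds

# Hypothetical: 6 s to decode a 3 s utterance -> RTF 2.0,
# the kind of lag the original poster is describing.
print(real_time_factor(6.0, 3.0))
```

Measuring this for your own setup (wall-clock decode time over utterance length) is a quick way to confirm whether STT, rather than the wake word engine, is the bottleneck.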
Thanks! A Pi 4 is in the plan.
Apart from ASR.
It’s really just Precise, as it hasn’t been exported to TensorFlow Lite as far as I know. It could be much lighter; it just isn’t exported and quantised down, which would lose very little accuracy for a large performance gain.
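To illustrate what quantising down means, here is a toy sketch of affine int8 quantisation, the same basic scheme TensorFlow Lite uses for post-training quantisation. This is not Precise’s actual pipeline, and the weights are made up; it just shows why the accuracy loss is small (the round-trip error is bounded by one quantisation step) while storage shrinks 4x versus float32:

```python
# Toy affine quantisation: map float weights to int8 with a
# per-tensor scale and zero point, then map them back.
def quantize(weights, num_bits=8):
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # avoid 0 for constant tensors
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(v - zero_point) * scale for v in q]

weights = [-1.2, -0.3, 0.0, 0.7, 1.5]          # hypothetical float32 weights
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
# Worst-case round-trip error stays within one quantisation step (scale)
print(max(abs(w - r) for w, r in zip(weights, restored)))
```

Real int8 inference also replaces float multiply-accumulates with integer ones, which is where the speedup on a Pi comes from.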
Dunno why, but it doesn’t really tax a Pi 4.