Oh, yeah. Jeedom uses the HTTP and websocket APIs extensively.
Can you tell if the CPU usage is coming specifically from the web server’s Python process?
Yes, this is definitely coming from the web server, because even with an install from scratch (deleted .config/rhasspy directory) the load is high despite no service running; 25% on a Pi Zero with nothing running is really unexpected.
If you have any clue how to dig into the Quart web server, I can give it a try…
Not really. Anyone know a good Python profiler?
Well, the web server could be disabled only for satellites (but it would be nice to have a way to manage/configure the satellites from the server).
Can confirm that there are no supervisord "left-behind" processes for me either. On my Pi Zero, the CPU usage is coming from a python process, not a rhasspy-* process. When I comment out the app.run() lines in rhasspyserver_hermes/__main__.py and restart the container, the CPU usage is much more manageable. I don't use Jeedom and have a purely external MQTT setup, so this has made my satellites much more responsive.
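For anyone wanting to try the same thing, a rough sketch of that kind of change (not the actual Rhasspy source, and RHASSPY_NO_WEB is a variable invented for this example):

```python
# Hypothetical sketch, not the real rhasspyserver_hermes/__main__.py:
# the idea is simply to skip starting the Quart web server while
# leaving everything else (the MQTT handling) untouched.
import os

from quart import Quart

app = Quart(__name__)  # stand-in for Rhasspy's real application object

if os.environ.get("RHASSPY_NO_WEB") != "1":  # RHASSPY_NO_WEB is made up for this example
    # This app.run() call is what gets commented out above; skipping it
    # is what brings the idle CPU usage down on the Pi Zero.
    app.run(host="0.0.0.0", port=12101)  # 12101 is Rhasspy's default web port
```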
The web server shouldn’t be burning so much CPU, so this may be a bug. I’m worried it’s processing a lot of unnecessary MQTT messages.
Are you using UDP for your wake word audio, or MQTT (the default)?
I’m using UDP on my satellites.
Pulled the latest Docker image and the new switch --noweb-server is working great. Thanks @synesthesiam!
I confirm that the load comes from the web server. Again, the issue is observed even on a "debugging" Pi with no "applicative" service running (everything disabled): only MQTT and the web server.
I gave cProfile a try. Here are the outputs after sorting on tottime (a minimal way to reproduce this is sketched after the table):
| ncalls | tottime | percall | cumtime | percall | filename:lineno(function) |
|---|---|---|---|---|---|
| 13598 | 14.656 | 0.001 | 14.656 | 0.001 | {method 'poll' of 'select.epoll' objects} |
| 12956 | 3.006 | 0 | 5.776 | 0 | pathlib.py:62(parse_parts) |
| 23041 | 2.755 | 0 | 4.726 | 0 | {built-in method posix.stat} |
| 8126 | 2.301 | 0 | 6.399 | 0.001 | :1356(find_spec) |
| 13598 | 2.241 | 0 | 43.216 | 0.003 | base_events.py:1679(_run_once) |
| 13533 | 2.063 | 0 | 15.471 | 0.001 | utils.py:102(observe_changes) |
| 14972 | 1.77 | 0 | 24.293 | 0.002 | {method 'run' of 'Context' objects} |
| 106918 | 1.313 | 0 | 1.313 | 0 | {built-in method sys.intern} |
| 1273 | 1.133 | 0.001 | 1.133 | 0.001 | {built-in method builtins.compile} |
| 814 | 1.12 | 0.001 | 1.12 | 0.001 | {built-in method marshal.loads} |
| 41464 | 1.054 | 0 | 2.602 | 0 | :56(_path_join) |
| 12952 | 0.984 | 0 | 7.127 | 0.001 | pathlib.py:633(_parse_args) |
| 41464 | 0.966 | 0 | 1.279 | 0 | :58() |
| 12959 | 0.894 | 0 | 1.599 | 0 | pathlib.py:693(str) |
| 14619 | 0.732 | 0 | 1.404 | 0 | base_events.py:707(_call_soon) |
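Roughly, the kind of invocation that gives a table like this (with a placeholder workload standing in for the real server code):

```python
import cProfile
import pstats

def profile_target():
    # Placeholder workload; in practice this would be the code path you
    # want to measure (e.g. letting the web server run for a while).
    sum(i * i for i in range(1_000_000))

# Dump raw profiling data to a file, then print the hottest functions
# sorted on tottime (time spent inside the function itself), which is
# the ordering used in the table above.
cProfile.run("profile_target()", "web_profile.out")
pstats.Stats("web_profile.out").sort_stats("tottime").print_stats(20)
```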
I applied the Quickstart/Hello-world guide from the Quart website… and it is eating 30% of my CPU, like my "empty" Rhasspy web server…
So there is clearly an issue coming from Quart itself (and not even the way it's used by Rhasspy). If you are in touch with the Quart dev team, could you report this issue to them?
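For context, the test app is essentially the quickstart hello world from the Quart docs, nothing Rhasspy-specific:

```python
from quart import Quart

app = Quart(__name__)

@app.route("/")
async def hello():
    return "hello"

if __name__ == "__main__":
    # Plain app.run(), exactly as in the quickstart; this is the setup
    # that shows the same constant CPU usage as the Rhasspy web server.
    app.run()
```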
That’s weird.
I also use Jeedom and have no problem except for TTS. I think that @KiboOst's plugin Jeerhasspy uses the HTTP API for the TTS command.
So, I use MQTT in Jeedom for TTS and everything works like a charm.
I have also tried to do TTS with the squeezelite and LMS plugin, as I use my Rhasspy satellite as a Squeezebox, but that isn't so elegant…
@synesthesiam I have opened a GitHub issue against Quart.
From the answer and Quart's documentation, I understand that using app.run() (which appears to be what Rhasspy does) is not recommended for performance reasons, because it permanently monitors files for reloading.
Just my two cents, but maybe there is room for improvement here?
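For what it's worth, what the Quart documentation points to instead of app.run() is serving the app through Hypercorn, either from the command line or programmatically. A small sketch of the programmatic variant, reusing the hello-world app (the bind address is just an example):

```python
import asyncio

from hypercorn.asyncio import serve
from hypercorn.config import Config
from quart import Quart

app = Quart(__name__)

@app.route("/")
async def hello():
    return "hello"

if __name__ == "__main__":
    # Serving through Hypercorn directly avoids the development-oriented
    # app.run() path and its file-watching reloader.
    config = Config()
    config.bind = ["0.0.0.0:12101"]  # example bind; 12101 is Rhasspy's web port
    asyncio.run(serve(app, config))
```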
One step further…
When running the Quart hello world, my CPU load dropped significantly (roughly halved) after upgrading Hypercorn from 0.9.5 to 0.10.2 (see my comment in the Quart GitHub issue above).
I have also built Rhasspy in a venv with Hypercorn 0.10.2 and I get the reduced load there as well.
But checking in the Docker container, I see that it's built with Hypercorn 0.9.5. I tried pip install --upgrade hypercorn in the Docker container, but then Rhasspy fails at startup. Would it be possible to rebuild the Docker images with Hypercorn 0.10.2 instead?
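A quick way to double-check which Hypercorn version a given environment (venv or container) is actually running:

```python
# Prints the installed Hypercorn version, e.g. 0.9.5 inside the current
# Docker image versus 0.10.2 in the upgraded venv.
from importlib.metadata import version  # standard library, Python 3.8+

print(version("hypercorn"))
```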
Sorry for polluting the thread again
Thanks to Quart's developer, there could be an even better solution: adding the argument use_reloader=False to app.run(). I made the modification in my Docker container and the load on my Rhasspy servers completely disappeared. I will report later if I observe any impact on Rhasspy's behaviour…
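Concretely, on the same hello-world as before, the change is just the extra keyword argument to app.run() (host and port shown only as an example):

```python
from quart import Quart

app = Quart(__name__)

@app.route("/")
async def hello():
    return "hello"

if __name__ == "__main__":
    # use_reloader=False disables the file-watching reloader that was
    # polling constantly and causing the idle CPU load.
    app.run(host="0.0.0.0", port=12101, use_reloader=False)
```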
There is already a good improvement on this with the latest Docker images, which use Hypercorn 0.10.1, and a dedicated GitHub issue has been opened for follow-up. Therefore I am closing this topic…