Wyoming Whisper docker image

Hi everyone,

Coming from HA’s side, I saw that a Wyoming Whisper Docker image was recently released. The image is available on Docker Hub, but I couldn’t find the associated GitHub repo. Are you planning on releasing it? I would like to take a look at the code and potentially suggest improvements (GPU support, for example).

Thanks and awesome work btw,
Taha

After some research, it seems like everything is available here: addons/whisper at master · home-assistant/addons · GitHub. IMO it would be useful to add a link to the GitHub repo on the Docker Hub page, since the image is designed to run as a standalone instance as well.

Hi @Vodros I recently got around to uploading the source for the standalone Docker image builds: GitHub - rhasspy/wyoming-addons: Docker builds for Home Assistant add-ons using Wyoming protocol

As always, so many things to do :stuck_out_tongue:


Thanks! Now I’ve got another question: is the source code for the python packages available somewhere? I couldn’t find it :sweat_smile:

Yes, it’s “conveniently” sitting over in a branch of Rhasspy 3 :laughing:

Sorry, a lot of things are in flux right now :slight_smile:


Perfect. Thanks a lot! :slight_smile:


I recently got GPU working with Docker/wyoming-whisper. Details here: Home Assistant - Enabling CUDA GPU support for Wyoming Whisper Docker container | tarball.ca
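For anyone skimming past the linked guide, the essential step in setups like this is exposing the GPU to the container via the NVIDIA Container Toolkit. A minimal docker-compose sketch under that assumption (image name, command flags, and port are illustrative, not the guide's exact values):

```yaml
services:
  whisper:
    image: rhasspy/wyoming-whisper   # illustrative; use a CUDA-enabled image/build
    command: --model small-int8 --language en --device cuda
    ports:
      - "10300:10300"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

Note that passing the GPU through is only half the story: the Python packages inside the image (CTranslate2 in particular) also need to be CUDA-enabled builds, which is what the linked guide covers.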


Hello Jeff,

First, thanks a lot for providing this guide. I’m also trying to run Whisper with CUDA support.

But after following your guide carefully, I get the following error message from Whisper in the logs:

whisper  | Traceback (most recent call last):
whisper  |   File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
whisper  |     return _run_code(code, main_globals, None,
whisper  |   File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
whisper  |     exec(code, run_globals)
whisper  |   File "/usr/local/lib/python3.9/dist-packages/wyoming_faster_whisper/__main__.py", line 136, in <module>
whisper  |     asyncio.run(main())
whisper  |   File "/usr/lib/python3.9/asyncio/runners.py", line 44, in run
whisper  |     return loop.run_until_complete(main)
whisper  |   File "/usr/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
whisper  |     return future.result()
whisper  |   File "/usr/local/lib/python3.9/dist-packages/wyoming_faster_whisper/__main__.py", line 112, in main
whisper  |     whisper_model = WhisperModel(
whisper  |   File "/usr/local/lib/python3.9/dist-packages/wyoming_faster_whisper/faster_whisper/transcribe.py", line 58, in __init__
whisper  |     self.model = ctranslate2.models.Whisper(
whisper  | ValueError: This CTranslate2 package was not compiled with CUDA support
whisper exited with code 0

Is there something I’m doing wrong here?

Best regards, and thanks for any tips on this,
Andreas
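For anyone hitting the same ValueError: it means the CTranslate2 package installed inside the container is a CPU-only build, so the fix is to get a CUDA-enabled CTranslate2 into the image rather than changing any Wyoming flags. A quick, environment-dependent sanity check (the container name `whisper` is an assumption; a CPU-only build reports 0 devices even when the GPU is passed through):

```shell
# Hypothetical container name "whisper"; prints the number of CUDA devices
# CTranslate2 can see. 0 means a CPU-only build or no GPU passthrough.
docker exec whisper python3 -c "import ctranslate2; print(ctranslate2.get_cuda_device_count())"
```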

Thanks also to Jeff’s kind support, I got it to work!

In case somebody else needs it as well, I’ve now created my own CUDA-enabled Docker image for the arm64 architecture:
https://hub.docker.com/r/abtools/wyoming-whisper-cuda


Just thinking out loud: would we be able to pull it into HA via Portainer and use it as an alternative to the official add-on?
Or, even better, are you planning to turn it directly into an HA add-on?