[Announce] node-red-contrib-voice2json (beta)

As I already announced over on the Node-RED forum last week, the nodes that Bart (another developer) and I have been working on to integrate voice2json nicely into Node-RED are now ready and stable enough to be tried. They are aimed at low-code, modular, quick and visual bootstrapping of voice-command pipelines: once everything is installed, they let you do everything visually from within Node-RED.
You can find all the info about how to use them and which features they offer in the documentation in our repository.
So I would be very happy if some of you could give them a spin and open issues for any bugs you find or for ways the documentation could be made clearer. You can of course also just comment here and I'll try to get back to you.
Here are some quick impressions of what to expect:

Let me know what you think, Johannes


Just a heads-up:
The new minor version also includes the transcribe-stream functionality as a node, so transcription can already start while the audio is still being recorded.

Johannes


Bump to 0.7.0

  • The nodes now let you edit the profile.yml (for example, to change the wake word or adjust recording parameters) straight from the node config screen, in an additional tab.
  • Several bug fixes, including one for a race condition on slower Pis like the 3A+ that could crash Node-RED.
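For reference, the kind of profile.yml settings such an editing tab exposes look roughly like the fragment below. The key names are based on the voice2json profile documentation, but they vary by version and profile, so treat this as an illustrative sketch rather than a definitive schema:

```yaml
# Illustrative voice2json profile.yml fragment -- verify the exact
# key names against your own profile before editing.
wake-word:
  system: "porcupine"          # wake-word engine
  porcupine:
    sensitivity: "0.5"         # higher = more sensitive, more false wakes
```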

Have fun, Johannes

Bump to 0.7.1

This update brings enhancements to the voice2json wait-wake wake-word node:

  • The node now supports a number of additional control messages:
    • pause: pauses the wait-wake node, for example to implement a mute function without actually stopping the flow
    • forward & stop_forward: enable and disable forwarding of audio (for example to the STT process) without a wake-word detection. A use case would be triggering speech recognition with a physical button or a UI button in addition to the wake word.
  • The new modes are fully compatible with the current modes and can be combined.
  • This enables a number of new use cases and makes the wait-wake node more flexible; for example, it simplifies building multi-step voice interactions after a single wake word.
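To make the control messages above concrete, here is a minimal sketch of a Node-RED function-node helper that builds them. The keywords (pause, forward, stop_forward) come from this release; the assumption that the keyword is sent in msg.payload is mine, so check the node's documentation for the exact message format:

```javascript
// Sketch of a helper for a Node-RED function node that builds
// control messages for the voice2json wait-wake node.
// Assumption: the control keyword goes in msg.payload -- verify
// against the node's README before using.
function waitWakeControl(command) {
    const allowed = ["pause", "forward", "stop_forward"];
    if (!allowed.includes(command)) {
        throw new Error("unknown wait-wake control message: " + command);
    }
    return { payload: command };
}

// Example: a dashboard button wired through this function node could
// start forwarding audio to speech-to-text without a wake word...
const startMsg = waitWakeControl("forward");
// ...and a second button could stop the forwarding again.
const stopMsg = waitWakeControl("stop_forward");
```

A guard like the `allowed` list keeps typos in button configurations from silently sending messages the node would ignore.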

Johannes
