We’ve had a couple of threads going about this topic but I wanted to share progress to date, a few caveats, and then I have some questions (because I’m now more familiar with streaming audio from a web browser than I am with Rhasspy!)
Here’s a shockingly small amount of JS that works really well. Only tested in Chrome and on a Chromebook tablet:
const webSocket = new WebSocket('ws://192.168.1.200:1880/panel/audio');
webSocket.binaryType = 'blob';

webSocket.onopen = event => {
  console.log('[open] Audio Node-RED websocket connection established');
  navigator.mediaDevices
    .getUserMedia({ audio: true, video: false })
    .then(stream => {
      // 'audio/webm;codecs=pcm' is Chrome-specific; other browsers will throw here
      const mediaRecorder = new MediaRecorder(stream, {
        mimeType: 'audio/webm;codecs=pcm',
      });
      mediaRecorder.addEventListener('dataavailable', event => {
        if (event.data.size > 0) {
          webSocket.send(event.data); // forward each recorded chunk over the websocket
        }
      });
      // emit a 'dataavailable' event (one chunk) every 1000 ms
      mediaRecorder.start(1000);
    })
    .catch(err => console.error('[error] getUserMedia failed:', err));
};
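One caveat worth flagging: `audio/webm;codecs=pcm` is Chrome-specific, so the `MediaRecorder` constructor above will throw elsewhere. A small helper can pick a supported type instead — this is just a sketch, the candidate list and function name are my own, and in the browser you'd pass `MediaRecorder.isTypeSupported` as the predicate:

```javascript
// Sketch: choose a MediaRecorder MIME type the current browser actually
// supports, instead of hard-coding 'audio/webm;codecs=pcm'.
// `isSupported` is a predicate; in the browser, pass
//   t => MediaRecorder.isTypeSupported(t)
function pickAudioMimeType(isSupported) {
  const candidates = [
    'audio/webm;codecs=pcm',  // uncompressed PCM in WebM (Chrome only)
    'audio/webm;codecs=opus', // Opus fallback (Chrome, Firefox)
    'audio/ogg;codecs=opus',  // Ogg/Opus fallback
  ];
  return candidates.find(isSupported) || ''; // '' lets the browser choose a default
}
```

Then the recorder becomes `new MediaRecorder(stream, { mimeType: pickAudioMimeType(t => MediaRecorder.isTypeSupported(t)) })`.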
My websocket is pointing at Node-RED right now, simply as an easy way to confirm that data is flowing.
Now the question is this: how do I wire this up so it feeds the audio stream directly to Rhasspy? Will Rhasspy need a new way to accept this kind of “remote mic”, or is it best to use one of the existing methods outlined here?
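One route I've been eyeing: as I understand it, Rhasspy's Hermes protocol accepts remote audio as WAV chunks published over MQTT on `hermes/audioServer/<siteId>/audioFrame`, so a bridge could decode the browser stream to raw PCM and wrap each chunk in a WAV header before publishing. A sketch of just the header part (the function name is mine, and the 16 kHz / 16-bit / mono defaults are assumptions that would need to match what the browser actually sends):

```javascript
// Sketch: wrap a chunk of raw 16-bit little-endian mono PCM (a Node.js
// Buffer) in a minimal 44-byte WAV header, the framing that Hermes
// audioFrame messages expect.
function pcmToWav(pcm, sampleRate = 16000, channels = 1, bitsPerSample = 16) {
  const byteRate = sampleRate * channels * bitsPerSample / 8;
  const blockAlign = channels * bitsPerSample / 8;
  const header = Buffer.alloc(44);
  header.write('RIFF', 0);
  header.writeUInt32LE(36 + pcm.length, 4); // total size minus 8 bytes
  header.write('WAVE', 8);
  header.write('fmt ', 12);
  header.writeUInt32LE(16, 16);             // fmt chunk size
  header.writeUInt16LE(1, 20);              // audio format 1 = uncompressed PCM
  header.writeUInt16LE(channels, 22);
  header.writeUInt32LE(sampleRate, 24);
  header.writeUInt32LE(byteRate, 28);
  header.writeUInt16LE(blockAlign, 32);
  header.writeUInt16LE(bitsPerSample, 34);
  header.write('data', 36);
  header.writeUInt32LE(pcm.length, 40);     // size of the PCM payload
  return Buffer.concat([header, pcm]);
}
```

Each `pcmToWav(chunk)` buffer could then be published to the audioFrame topic with an MQTT client (the `mqtt` npm package, say) from a Node-RED function node or a small standalone bridge — but I haven't confirmed that end-to-end, which is partly why I'm asking.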
Right now, I’m not 100% sure what the bitrate or sample rate is, but I do know that I’m sending chunks based on time rather than size. That last line of code tells MediaRecorder to emit a chunk of recorded audio every 1,000 milliseconds, and I can’t find a straightforward way to send fixed-size chunks instead.
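On the sample-rate question: in the browser, the actual capture settings can be read with `stream.getAudioTracks()[0].getSettings()`, which returns an object like `{ sampleRate: 48000, channelCount: 1, ... }`, and the uncompressed PCM bitrate follows directly from those numbers. A quick sketch (the helper function is my own):

```javascript
// Sketch: compute the raw PCM bitrate from the capture settings reported by
//   stream.getAudioTracks()[0].getSettings()
// in the browser. Pure arithmetic, so it runs anywhere.
function pcmBitrateKbps(sampleRate, bitsPerSample, channels) {
  return (sampleRate * bitsPerSample * channels) / 1000;
}

// A typical Chrome capture (48 kHz, 16-bit, mono) works out to:
pcmBitrateKbps(48000, 16, 1); // 768 kbps
```

The bytes on the wire will differ, of course, since MediaRecorder wraps the PCM in a WebM container, but this gives a ballpark for the underlying stream.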