Rhasspy, MQTT, node-red, Proxmox, Raspberry Pi 3, NUC - Part 1

Hi everybody,

I’m very new to Rhasspy, but it is a very interesting tool and I like it.

@jrb5665 I like what you have done with it, especially the things you did with Dark Sky. Can you help me do the same?
And how is your progress with alarms and timers?

Unfortunately I’m not very familiar with programming, so I may need a lot of help.

Thank you in advance

Hi @Becks89 and welcome,

I’m happy to help where I can.
As far as the weather from Dark Sky goes, I’ll preface this by saying that since Dark Sky was sold I have moved to the openweathermap (OWM) service for my data, although that doesn’t actually matter to the rest of the implementation, as it is just data coming into the system.
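For what it’s worth, the OWM side is just a single HTTP call. Here’s a minimal sketch rather than my actual flow; the API key and city are placeholders you would substitute with your own:

```javascript
// Minimal sketch of an OpenWeatherMap "current weather" request
// (Node.js 18+, which provides a global fetch).
// The API key and city below are placeholders, not real values.
const OWM_API_KEY = "your-owm-api-key";
const CITY = "Berlin";

async function currentWeather() {
    const url = "https://api.openweathermap.org/data/2.5/weather" +
                `?q=${CITY}&appid=${OWM_API_KEY}&units=metric`;
    const res = await fetch(url);
    const data = await res.json();
    // e.g. "scattered clouds, 18.4 degrees"
    return `${data.weather[0].description}, ${data.main.temp} degrees`;
}

currentWeather().then(console.log);
```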

I contributed a message or two in the thread
https://community.rhasspy.org/t/rhasspy-can-tell-you-the-weather/671
which might be a good place for you to start, as there are a few ways to approach the same issue discussed there.

For the alarms and timers, I’ve been pretty busy with other things since I posted the original message, so other than having node-red receive and identify the intents I haven’t progressed this any further. It will require me to rewrite my existing schedule handler in my node-red code, and I haven’t had time to design what I want it to look like.
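To give you an idea of the part I do have working: Rhasspy publishes each recognised intent as JSON over MQTT on the Hermes topics (hermes/intent/<intentName>), so node-red just needs an mqtt-in node subscribed to hermes/intent/# feeding a function node. Here is a rough sketch, not my actual handler; the SetTimer intent and duration slot are made-up names, and it assumes a json node has already parsed the payload:

```javascript
// Function node wired as: [mqtt in: hermes/intent/#] -> [json] -> [this node]
// Rhasspy publishes recognised intents on hermes/intent/<intentName>.
// "SetTimer" and the "duration" slot are illustrative names only.
const intentName = msg.payload.intent.intentName;

// Flatten the Hermes slot array into a simple name -> value map
const slots = {};
for (const slot of msg.payload.slots || []) {
    slots[slot.slotName] = slot.value.value;
}

if (intentName === "SetTimer") {
    // Timer intents go to the schedule handler on output 1
    msg.timer = { duration: slots.duration };
    return [msg, null];
}

// Everything else continues on output 2
return [null, msg];
```

The function node needs two outputs configured for the routing above to work.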

If you have specific questions I will be happy to share what I have, but as you can probably gather it is quite large and complex, and there is a fair bit of JavaScript incorporated into node-red functions.

You haven’t really given any details of what you already have in place. Maybe if you share a bit about what you have or plan to implement, I can target my answers a bit better.

@jrb5665 Thank you for your answer.
I will have a look at the thread you mentioned.

About my system:
I have a Raspberry Pi with Rhasspy running in a Docker container, which should be my voice assistant.
A second one is the “brain” of my smart home, with openHAB running on it.
The two systems communicate with each other over MQTT.
For example, I can turn on some lights or ask for the temperature in my rooms.
My plan is to get Rhasspy working like a real voice assistant, for example setting an alarm or a timer, or telling me what the weather is like outside or for the next day.

Well, this community is a great place to get ideas and solutions.

I used to use openHAB but moved to an almost entirely node-red based system a while back.
I still have openHAB, but only for the excellent Z-Wave binding that Chris Jackson built.

I found it to be resource hungry, particularly with the combination of Java and the need to scale to the size my system had grown to. This left rules slow and unreliable. I think the new rule engine may have made a difference, but I had already replaced that part with node-red anyway, so I just continued to migrate everything else.

With what I had already done to standardise the naming of everything in my system, it was fairly easy (it does require coding) to have node-red generate slot lists for Rhasspy covering all my home automation.
I send them via MQTT to a node-red instance on the Rhasspy server, and once there they are written out to files and retraining is initiated with an API call.
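As a rough illustration of that receiving end (the profile path, slot name, and hostname here are assumptions, not my actual setup): the entries arrive over MQTT, get written into the profile’s slots directory, and a POST to Rhasspy’s /api/train endpoint kicks off the retraining:

```javascript
// Sketch of the receiving side on the Rhasspy server (Node.js 18+).
// Profile path, slot name, and hostname are assumptions; adjust
// them for your own install.
const fs = require("fs");

// These would normally arrive via an mqtt-in node; hard-coded here
const slotEntries = ["kitchen light", "hall lamp", "bedroom fan"];

// Rhasspy reads slot lists from <profile>/slots/<slotName>
fs.writeFileSync("/profiles/en/slots/devices",
                 slotEntries.join("\n") + "\n");

// Trigger retraining via Rhasspy's HTTP API (default port 12101)
fetch("http://rhasspy.local:12101/api/train", { method: "POST" })
    .then(res => res.text())
    .then(body => console.log("train:", body));
```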

The main issue I have struck there is that, with the number of entries in my slot lists and the number of sentences needed to cover a fairly natural way of controlling them all, it takes about 8 minutes to train my server on a Proxmox container with 8GB RAM and 4 vCPUs on an AMD 6-core/12-thread CPU. It also sometimes leads to a few more incorrect intent resolutions: “Add Cranberries” could end up adding tracks to the current queue on the Sonos, or it could end up as an entry in the shopping list, although that specific case hasn’t happened.

I have toyed with the idea of breaking this up into a number of smaller instances and then using multiple wake words, maybe saying “Hey Sonos” to control the music, “Hey Shopping” to manage the shopping list, etc. I’d also like to add my Plex library of movies, TV shows and anime into voice control, but at the moment I’m sure that would just be too much.
It would then be a bit more like the old sci-fi shows where the characters would just say “Lights” or “Music”, etc. with no wake word, and the computer would just know what they wanted. You would just speak to the “device” you want the action from.

This would also allow for the possibility of having some very specific controls, like the home automation, alongside something very free-form like grocery lists or ad hoc questions, like “Hey Wiki, what’s the height of Mount Everest?” or other wiki queries.