@dblanc28 Thank you! I’m definitely happier at Mycroft than I was at my previous job.
Yeah, and as @VoxAbsurdis pointed out, their privacy policy is currently awful. Fortunately, the (now) CEO Michael Lewis is having the Mycroft legal team rewrite it from scratch. Going forward, the default will be to not store any user data where possible. Cloud-based STT and TTS will require some additional thought, of course.
This is a central feature of the new architecture I’m designing for Mycroft 2.0.
The idea is to have a small supervisor process that runs each service as a regular program and passes events through their stdin/stdout pipes.
With some minor configuration, this lets you reuse or adapt existing programs with minimal effort. For example, `arecord` and `espeak-ng` can be used as-is for mic/TTS “services”. Your STT could be a `curl` call, and your intent recognizer could be `grep`.
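To make that concrete, here’s a toy sketch of the kind of wiring I mean. It’s just an illustration, not actual Mycroft 2.0 code: the grep pattern, the fake utterances, and the canned reply are placeholders, and it assumes GNU grep and espeak-ng are installed locally.

```python
#!/usr/bin/env python3
"""Toy supervisor sketch: each "service" is an ordinary program, and
events are just newline-delimited text passed between stdin/stdout pipes."""
import subprocess

# Intent "service": grep passes through only lines that look like a time
# request. --line-buffered keeps it from holding output back in a block buffer.
intent = subprocess.Popen(
    ["grep", "-i", "--line-buffered", "time"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

# TTS "service": espeak-ng reads lines from stdin and speaks them aloud.
tts = subprocess.Popen(["espeak-ng"], stdin=subprocess.PIPE, text=True)

# Pretend these utterances came from an STT service's stdout.
for utterance in ["what time is it", "never mind"]:
    intent.stdin.write(utterance + "\n")
    intent.stdin.flush()
intent.stdin.close()

# Forward a canned reply to the TTS service for every recognized "intent".
for line in intent.stdout:
    tts.stdin.write("You asked about the time\n")
    tts.stdin.flush()

tts.stdin.close()
tts.wait()
intent.wait()
```

The real supervisor would obviously keep the services running and route structured events between them, but the core idea is the same: plain processes, plain pipes.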
This is my impression too.
Hi @kicker10bog, thanks for the feedback!
I’m pushing to include this as an easy option. To me it seems like such a no-brainer. Rather than struggling to decide which sentences should trigger which intent for everyone, you can do something crazy: let the user decide.
The only thing I would consider changing is how sentences/intents are specified. It’d be nice if the format were portable between open source voice assistants, so people don’t have to rewrite everything if they find something better. But I haven’t found any other format with as much flexibility as mine.
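For anyone who hasn’t seen it, here’s roughly what I mean, simplified from Rhasspy’s sentences.ini template format (check the Rhasspy docs for the exact syntax):

```ini
[GetTime]
what time is it
tell me the time

[ChangeLightState]
turn (on | off){state} [the] (living room lamp | garage light){name}
```

Each [Section] is an intent, (parentheses | with pipes) give alternatives, [square brackets] mark optional words, and {tags} capture slot values. That last part is the flexibility I haven’t found in other formats.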
I’m late to this conversation, but I wanted to chime in. I had Picroft set up for a while and started working on skill development, but I lost interest quickly due to some other issues. I’m a supporter of the Mycroft community; I’ve donated and have the subscription. I had preordered the Mark 2, but some things have kept me from being able to grab it (or the dev kit when it released). I’m keeping an eye out for when they release the custom board that can be used with a DIY setup!
My main problem is the online dependence. It’s not a privacy thing for me, though. I want all my smart devices to remain controllable by voice even if my Internet is down. Alexa only does that for Zigbee devices, and only if you use her as the gateway. You can probably tell already that I have a universal gateway (going to upgrade it soon, though, for full Matter compatibility), so that doesn’t work. I hate having to bring up my HA app every time I want to control certain devices, and I’m a bit miffed that my control panel tablet is overheating and shutting itself off after about 12 hours.
It’s good to hear both projects will continue. I haven’t tried Rhasspy yet; I’ve been meaning to, which is how I ended up here today, and I planned to install it today. I haven’t tried the Mycroft self-hosted backend yet either. One thing I’m sure isn’t ready yet is using SMB, DLNA, or an emby/jellyfin/airsonic/etc. plugin to build a JSON-formatted database of your media collection, so I could call up my own music to play on my Yamaha MusicCast receiver (over DLNA). I run an unRaid server and have 5 Echo devices in my small 2-bedroom condo, plus 2 Google Nests (a Hub and a Mini). I keep meaning to switch over to Google because Alexa is very limited and there’s still no multi-command mode.
Will you basically be porting Rhasspy to combine the best of Rhasspy and the self-hosted backend for Mycroft?
I’m still debating how I want to do this. I have a Pi 4B with 4 GB RAM collecting dust, and a mini PC with a Celeron J4125/8 GB RAM that’s just running OSSIM currently, but I might move that over to my unRaid server, freeing up the extra mini PC. I still have a ReSpeaker 2 board and a Logitech speaker from Picroft, but those will work with the mini PC as well. I’ll need a couple of satellites too. A Pi Zero W or Zero 2 W should be fine for that, no? I also want to finish integrating Tuya into my nodemcu projects scattered around the house.
Anyway, a late congrats on the job. I think I’ll pick skill development back up too.