It looks like a project can be created at the organization level too. Wouldn’t that be easier?
Also, the organization is currently called Michael Hansen.
Will the current rhasspy repository keep its name when it’s migrated to the rhasspy organization later? Or will it vanish after the MQTT rewrite, because we’ll have rhasspy-server, rhasspy-client, rhasspy-standalone, and so on? If the latter, we can use the rhasspy name for the docs, roadmap, and so on.
Didn’t know that. An organization-wide project seems better for discoverability then 
I added a Roadmap project. To be filled 
For now we’ll probably have to refer to issues in the rhasspy repository too in this project.
@synesthesiam I guess you could also add new columns or even new projects for new major releases, but we’ll have to start somewhere, so the To do/In progress/Done columns should be a good start.
Thanks, @koan! I’ll be adding more to it as I can.
I’ve successfully tested the rhasspy-dialogue-hermes service now with microphone, wake, ASR, NLU, TTS, and playback services. There are still lots of gaps to fill in, like a rhasspy-wake-snowboy-hermes service.
I split out the web UI into a rhasspy-web-vue and rhasspy-server-hermes project. The former is just the Vue html/js/css stuff, and the latter is a (partially done) web server that talks MQTT/Hermes on the back end.
Do we want ingress support in the AddOn?
I added a few items to the GitHub roadmap project.
I hadn’t heard of ingress before. Looks like something we should be able to support. That would definitely make it easier for users who have SSL, no?
Yes, and you do not have to leave the HassIO environment.
I’ll fork the addon repo and see how far I get
I feel silly for asking, but how did you link an issue to the roadmap?
I use “Add card”, search for the issue (uncheck the checkbox limiting the search to the linked repos only), and drag and drop it into the appropriate column.
Oh, it must be limiting its search to project repos only. I thought I could get issues from the main rhasspy project in there. I guess I’ll have to transfer them!
Hi,
Back at home again.
I used to work with Django in the old days, which made it convenient to do CI/CD.
From a collaboration perspective, I propose GitLab, which is rich in DevOps, planning, and pipeline features…
Ingress support would be great!
@romkabouter how far did you get?
Basically, we just need the app to be able to serve under a subdirectory, e.g. https://example.com/local_rhasspy/.
The static files (CSS, JS, images, etc.) could be referenced using relative paths, and the API endpoints could be prefixed using an environment variable.
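As a rough sketch of the environment-variable idea (the variable name `RHASSPY_API_PREFIX` and the helper `make_url` are illustrative, not actual Rhasspy code):

```python
import os

def make_url(endpoint: str, prefix: str = None) -> str:
    """Join a configurable URL prefix with an API endpoint path.

    With an empty prefix, URLs stay at the site root; behind ingress,
    setting RHASSPY_API_PREFIX (a hypothetical variable) to e.g.
    "/local_rhasspy" moves every endpoint under that subdirectory.
    """
    if prefix is None:
        prefix = os.environ.get("RHASSPY_API_PREFIX", "")
    return prefix.rstrip("/") + "/" + endpoint.lstrip("/")

print(make_url("/api/profile", ""))                # /api/profile
print(make_url("/api/profile", "/local_rhasspy"))  # /local_rhasspy/api/profile
```

Static assets referenced relatively (`./css/main.css` rather than `/css/main.css`) need no rewriting at all, so only the API calls have to go through a helper like this.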
https://blog.aspiresys.pl/technology/building-jarvis-nlp-hot-word-detection/
A homegrown, generic-library wakeword option for Rhasspy. No black boxes.
The MFCC CNN seems to be a good system for wakeword detection.
A Keras model can be converted to a TensorFlow Lite model and run using Snips’ Tract for performance on low-CPU devices (like the Pi 0).
It all comes down to the dataset. The more data you have the more precise the detection will be.
I really hope that the Snowboy guys open source their huge dataset (but I doubt they will).
I think @synesthesiam and @ulno are in contact with the Mycroft guys on the matter.
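For reference, the MFCC front end mentioned above is built on the mel scale; a minimal sketch of the standard Hz↔mel mapping (the common 2595·log10 formula) that the filterbank uses:

```python
import math

def hz_to_mel(hz: float) -> float:
    """Standard mel-scale mapping used when building MFCC filterbanks."""
    return 2595.0 * math.log10(1.0 + hz / 700.0)

def mel_to_hz(mel: float) -> float:
    """Inverse mapping, used to place the triangular filters back in Hz."""
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)

# Filter center frequencies are spaced evenly in mels, not in Hz,
# which concentrates resolution at the low frequencies speech uses.
centers_mel = [hz_to_mel(0) + i * (hz_to_mel(8000) - hz_to_mel(0)) / 9
               for i in range(10)]
centers_hz = [mel_to_hz(m) for m in centers_mel]
```

The CNN then classifies the resulting MFCC “image”, which is why spectrogram-based wake-word detection looks so much like visual classification.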
@Phil, I did not really have the time; I got some “page not found” errors and gave up.
And then the 2.5 pre-release came out, so I thought I’d start with an add-on for that.
I started it indeed, but currently have a lot of work at home, so no real progress
Yeah, I posted before, but the more I think about it: TensorFlow Lite, as you say, but I am thinking it will run on the Google AIY Vision Kit.
But the idea was also to run it with https://coral.ai/docs/accelerator/get-started/#3-run-a-model-using-the-tensorflow-lite-api
I was more thinking of the Google AIY Vision Kit and staying extremely generic.
So from a Raspberry Pi to a GPU or pure software, we use a TensorFlow model.
I am not sure if Precise is a good idea; it’s open source for now, but I don’t think it’s lightweight or easy to train or understand.
We don’t have models because creating them is a horribly tedious, arduous process, but it was @synesthesiam who got me thinking about it with what he has mentioned of late.
I started with Mycroft, and it’s just an opinion, but it worries me how much bloat they have.
The Picroft install, with all the libs and everything it contains, is huge; I probably haven’t endeared myself to the community by saying this, but additions generally seem to be made like kids in a sweet shop.
It’s like the WordPerfect of open-source AI, whilst the rather sparse model of Rhasspy is a suite of flexible services.
So yeah, go Precise and others, but I still really like the idea of a more DIY, non-‘black box’ Rhasspy implementation.
I am not really interested in others’ ‘products’, but in a community effort that is more of an unconstrained experiment, looking at ideas for how to create models quickly and easily, and at profile classification.
The Mozilla Common Voice project made me wonder why we are not opting in to the services we use and creating training data from the audio we provide through use.
You can convert Keras models to TensorFlow Lite, but actually my main emphasis was to use the core method of mel-filter (MFCC) spectrograms with vanilla TensorFlow Lite and the Google AIY Vision projects.
It probably doesn’t need conversion or Keras, but again I really like the Rhasspy model, as I see it as futile to expect accuracy out of low-CPU devices (like the Pi 0) when you don’t need to.
Shelf devices only need moderately accurate detection backed by an authoritative server.
It doesn’t make sense to run much on a Pi 0 when you don’t need to, or to design a system against a platform that might not be its main home.
Rhasspy is brilliant in that the Pi 0 is even mentioned, as even the Pi 3 starts to creak with Mycroft.
But you don’t need accurate models for shelf devices, as the accurate models reside in the edge device they connect to.
Google and Amazon have clicked onto that, which is why they provide cloud services, and why I am spending much less time with Mycroft, as the Rhasspy model looks much better to me for ‘home cloud’ AI.
Several Pi 0s connected to a Pi 4 with an AI accelerator (or x86 and a GPU) are just going to be more accurate, and it’s a slightly different philosophy that doesn’t have the same needs as ‘shelf’ accuracy.
It’s sort of crazy how many options the loose framework of Rhasspy can provide, but if you have an authoritative edge device, it can also train the ‘shelf’ devices it serves.
I am thinking you can easily and quickly create custom models for shelf devices from the environment they serve, and that alone has a huge effect on accuracy, as the model is tailored to fit.
@fastjack Mat, do you have any time to play with a TensorFlow Lite model, or to run one using Snips’ Tract?
I don’t think the model matters too much for testing; low-CPU devices would use initial wake-word detection that can run with relatively low sensitivity, as the presumption is that they are satellites to an authoritative server.
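A low-sensitivity satellite detector can be as simple as smoothing per-frame wake probabilities and thresholding the average; this is a generic sketch (names and thresholds are illustrative, not any specific engine’s logic):

```python
from collections import deque

def smoothed_detector(threshold: float = 0.8, window: int = 5):
    """Return a step function that fires only when the average wake
    probability over the last `window` frames crosses `threshold`.

    Smoothing suppresses one-frame spikes, trading a little latency
    for fewer false wakes -- acceptable on a satellite, since the
    authoritative server can always reject a bad trigger afterwards.
    """
    recent = deque(maxlen=window)

    def step(prob: float) -> bool:
        recent.append(prob)
        return len(recent) == window and sum(recent) / window >= threshold

    return step
```

Each satellite would feed the per-frame output of its (cheap, not very accurate) model into `step()` and only publish a wake event over MQTT when it returns true.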
I have run Precise, and on a Pi 3 the load is considerable. Accuracy doesn’t really bother me, as that can be trained, but the initial load is unlikely to change much.
But I am also starting to see the Pi 0 as an obsolete product that is nearing EOL.
You have seen the RK3308 I posted, with Pi 3B performance at Zero W prices. Raspberry Pi sometimes drag their feet, but I have been wondering if the lack of Zero W stock might mean something is in the pipeline, this year or next; a Zero 2 is probably odds-on.
So I guess Precise is valid, as the hardware of the future, in terms of development and adoption time, will probably offer a minimum of Pi 3 performance.
Also, as a satellite it’s only an initial wake word and not an authoritative one, but I would be really interested in the load a generic TensorFlow Lite system would produce.
I am just hoping it might also have piqued the curiosity of one of you with more than noob ML knowledge.
Precise is open source, but I have seen from others that wake-word engines can be much lighter and more accurate; they are definitely ‘black boxes’, though.
If anyone can run up a demo to test load, without much concern for initial accuracy, I would just be really interested in what load a generic TensorFlow model would produce.
Anyone know if it’s still true that installing TensorFlow Lite on the Pi is problematic?
This might be a perfect implementation, as the visual wake word via a spectrogram would reduce to a binary classification: wake word vs. not wake word.