Mycroft Precise would likely be the best, as accuracy really comes down to the dataset recipe you feed it.
It uses an old, full TensorFlow; if they updated it and used a different method it would run roughly 10× lighter/faster, and shifting to 64-bit gains another 2×, so around 20× lighter than it is now.
Its classification structure isn’t great and the proposed training methods are fubar; tensorflow-kws is state of the art, but average results with Precise are usually poor.
Its output is fractional, I think 0.0–1.0, but being able to get the last result would make things much easier.
To be honest I am a bit bemused about KWS, as for a long time the options have been poor.
I get that Raven is a quick setup that captures the KW to build datasets, but it doesn’t actually capture the following command sentence, so it sort of becomes a limited, low-accuracy KWS.
Google published a repo of state-of-the-art KWS models that are extremely accurate and lightweight, running under TensorFlow Lite.
They are open source and you can train them yourself; get the dataset right and they are pretty amazing. The only problem is that your KW will be recognised under high noise but the command sentence going to ASR likely won’t be, which may be why there was never a push for better: it just shifts the cul-de-sac higher up the process tree.
On validation, state-of-the-art KWS models push 98% on benchmark datasets such as the Google Speech Commands set, which deliberately contains a lot of variation and bad data (around 10%) to make a usable benchmark, as otherwise all models would return 100%.
With a custom dataset (record your own) it’s relatively easy to reach 100% validation and create noise-tolerant models.
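One common way to get that noise tolerance when building your own dataset is to mix recorded background noise into the KW clips at a range of SNRs. This is a minimal sketch of the idea, not Precise’s or tensorflow-kws’s actual pipeline; the `mix_at_snr` helper and the SNR values are my own illustration:

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix noise into speech at a target signal-to-noise ratio in dB.

    Both inputs are float arrays of the same length. The noise is scaled so
    that 10*log10(speech_power / scaled_noise_power) equals snr_db.
    """
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Scale factor that puts the noise at exactly the requested SNR.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Augment each keyword clip at several SNRs so training sees noisy examples.
rng = np.random.default_rng(0)
kw_clip = rng.standard_normal(16000)     # 1 s of stand-in audio at 16 kHz
noise_clip = rng.standard_normal(16000)  # stand-in background noise
augmented = [mix_at_snr(kw_clip, noise_clip, snr) for snr in (20, 10, 5, 0)]
```

Lower SNR values (down to 0 dB or below) correspond to the sort of 70 dB music/voice backgrounds mentioned below.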
I can have 70 dB of noise (music or voice) and, as long as I raise my voice to a similar level, a modern model such as a CRNN will still detect the KW.
I dunno if all streaming KWS run many inferences at ~20 ms steps, but in my tests even a simple sum of the inference envelope, rather than a singular snapshot against a threshold, also keeps false positives/negatives extremely low. False negatives are not a problem, as often you just re-say the KW a bit louder or more clearly. Train it right and use a more modern model and it will make Porcupine look as antiquated as Porcupine makes Snowboy and PocketSphinx look.
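What I mean by summing the inference envelope, as a toy sketch: instead of firing whenever a single 20 ms frame crosses a threshold, keep a rolling window of recent confidences and trigger on their sum, which rides out one-frame spikes. The class name, window length, and threshold here are made-up numbers for illustration:

```python
from collections import deque

class EnvelopeTrigger:
    """Fire on the summed envelope of recent per-frame KW confidences
    (each 0.0-1.0), not on a single-frame snapshot, to suppress spikes."""

    def __init__(self, window: int = 10, threshold: float = 6.0):
        self.scores = deque(maxlen=window)  # 10 frames @ 20 ms = 200 ms
        self.threshold = threshold

    def update(self, confidence: float) -> bool:
        self.scores.append(confidence)
        return sum(self.scores) >= self.threshold

trigger = EnvelopeTrigger()
# A lone 0.99 spike never fires, but a sustained run of high scores does.
spike = [trigger.update(c) for c in (0.1, 0.99, 0.1, 0.1)]
sustained = [trigger.update(c) for c in [0.9] * 10]
```

The same structure works with real per-frame scores from any streaming model; only the window and threshold need tuning to your frame rate.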
Still, the problem from there is that the KW is recognised but the ASR, not so much.
Mycroft (Pycroft) is the competition, I guess, if there is such a thing; accuracy-wise there’s probably not much difference, but it has an extensive skill repo. The KWS Precise is from Mycroft, and the name is a bit of a misnomer.
If they spent as much time updating their core as on shiny web-page presentations it could be great, but IMHO there is a thick spread of something between optimism & snake oil; the guitar-toting ex-CEO and his ballad of the patent troll had me running.
There is an element of ‘gold rush’ AI to these projects, whilst really the prospecting days are over and essential skills are lacking in an extremely in-depth field.