Collecting positive wakeword samples on non-Raven systems

There is a pull request for Precise with a newer version of TensorFlow. I have tried it, but training does not work well at all and half the scripts are broken. I spent a day on it, and the best I could get was 50% false negatives for my wakeword, and training was slow. The current version of Precise at least trains well: with the same training data I had only one false negative, in 1/100 of the time, and that one was excusable since the sample was pretty bad quality-wise.

I don't see a big problem with training a model myself; so far I have not had any good experience with any pre-trained model. The only thing I could get to work was Pocketsphinx, and that basically reacted to anything I said (and any noise). The next best results came from Raven, but that still reacted to way too much, so now I am training Precise. I have it roughly as good as Raven was, and I only used a pretty small set of wakeword samples (fewer than 50) and next to no not-wakeword audio recorded by the same mic Rhasspy uses. I have only played around with it properly for about 3 hours so far, though I spent nearly a week getting a system up and running to even train properly. Once I have more non-wakeword data in there, I am pretty sure I can get it to run properly for me.

The problem here is that few people have the knowledge, time and motivation to write something like that. And when something working does exist, it happens often enough that it is bought out by some company wanting to play with voice assistants. Precise is not bad, it is just outdated: there is no working version for Python 3.8 because it uses an outdated TensorFlow library. If they updated that, it could produce pretty good models.

Another part of the problem is where people run their voice assistants. From what I read here, quite a few people use anything from a Raspberry Pi Zero to a Raspberry Pi 4, with a few running it in a VM on their NAS. People who have a small server, or anything with more power than a Pi 4, are rare exceptions, so most setups are geared to run on lower-end hardware. If it has to run on a potato, there is next to no way it will be perfectly accurate. And if a system is accurate but does not run on the most-used hardware, what good is it?

I am pretty bad with Node-RED. Could you upload your flow and tell me what I have to change to use the Rhasspy MQTT message instead of voice2json? Once I manage to get a decently running model, I can play around with that.

That will be a bit of a problem, but I might manage. I know I don't use PulseAudio, since my Pi 4 is a minimal install. I am not sure I can switch to dmix without breaking my sound, especially since I don't know whether that will even work with my Rhasspy running in a Docker container.

If you post your asound.conf, I can take a look at it to see what needs to be changed to enable dmix.

I'm thankful for that, as PulseAudio mostly complicates everything in my opinion. I'm not a big fan of PulseAudio in combination with projects like Rhasspy.

I will do that tonight.

There is nothing to Precise but a TensorFlow model with an MFCC feed; that is all it is, and that is all any modern KWS is: purely a streaming neural network.
Since I started talking about MFCC over 6 months ago, practically every framework now includes MFCC math libs, so the other part from Mycroft, the broken MFCC lib Sonopy, is no longer needed.
There is practically zero to write, as it is 99% the use of a framework such as Keras or PyTorch.
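
To make that concrete, here is a minimal Keras sketch of the idea: a small GRU over windows of MFCC frames with a single wake/not-wake output, which is roughly the shape of Precise and most modern KWS models. The frame count, layer size and the random placeholder data are illustrative assumptions, not Precise's actual architecture or training pipeline.

# Minimal KWS sketch: MFCC windows in, wakeword probability out.
# Shapes and hyperparameters are illustrative assumptions, not Precise's values.
import numpy as np
import tensorflow as tf

N_FRAMES = 29   # frames per analysis window (assumed)
N_MFCC = 13     # MFCC coefficients per frame (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_FRAMES, N_MFCC)),
    tf.keras.layers.GRU(20),                         # small recurrent layer, Pi-friendly
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability the window contains the wakeword
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# x: MFCC windows, y: 1 for wakeword samples, 0 for not-wakeword samples
x = np.random.rand(8, N_FRAMES, N_MFCC).astype("float32")  # placeholder data
y = np.random.randint(0, 2, size=(8, 1)).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1]))  # a score near 1 would mean "wakeword heard"

In a real KWS the window slides over a live MFCC stream from the microphone and detection fires when the score stays above a threshold.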

Cutting-edge AI: TensorFlow Lite runs on microcontrollers with better accuracy than the KWS we currently employ.
From Arm to Google, they have posted many publications on accuracy, what it can be used on and how to use it.

The above is over 3 years old and this is nothing new.

Google even has code labs https://codelabs.developers.google.com/codelabs/sparkfun-tensorflow#0
https://www.tensorflow.org/lite/
Google has vast resources on cutting-edge AI for the edge, i.e. microcontrollers.

It's like the methods of Raven, which hark back to early attempts at voice recognition that were in no way exclusive to Snips. Yes, it still works, but with the massive steps that have been made in AI it is archaic and totally defunct as a method.

There is practically nothing to do to provide a KWS now: all that is needed is to adopt a framework and a model, so that you have a code base you can update and are not constrained by old, outdated versions of current software, as with Precise.

There are $2 chips such as the ESP32 running KWS with more accuracy than some of the KWS options Rhasspy has, and if anyone is telling you anything else it's basically a lie. In fact, $2 chips are outperforming the newly adopted Snowboy, which was a commercial product that has been dropped and made open source because the technology behind it is now redundant in terms of accuracy.
At least Raven is a quick-to-train sample catcher you can use to train a new model, but why effort is being given to Snowboy is such a curveball that I had better stop my opinion here; when it comes to KWS and what we use with Rhasspy it is such a WTF that I don't like the only angles to the rationale I can deduce.

I will argue my stance on microphones, but the KWS direction I have kept quiet about and generally will, as the rationales are so odd I think it's best to keep quiet. What you said above is actually totally wrong, but for some reason that info seems to be coming from this community, and it's puzzling.

Raven was mainly built to gather samples to train a model with, while still providing a decently working custom wakeword. It is way better than Pocketsphinx ever was, and that was the only wakeword system I could ever get running before Raven.

There is a fork of Precise that uses TensorFlow 2.2.0, but it hasn't been merged yet and, to be frank, it is quite broken. Training models is slower than with the official Precise release, the train-incremental script is completely broken, the export function was rewritten to export a TensorFlow Lite model, and the trained model is now a .pb instead of a .net. I have found no way to actually use it with Rhasspy: just copying it over did not throw any errors, but I could not get a wakeword detected. I can't tell whether that is because the training is not working properly, so the model simply does not recognize me speaking, or because of something else.

I agree with you that none of the wakeword systems we have now are cutting edge and that it could be better, but I personally do not know how to write something like that; I had problems just trying to figure out the differences between the Precise scripts of those two versions.

For right now we just have to work with what we have, and this thread is not geared towards any particular system at all. I am just looking for ways to collect data to train models on while using any non-Raven KWS, since they don't come with the handy built-in sample collection that Raven does.

I think the main reason effort is put into Snowboy is that it is open source, Rhasspy can already use it, and there is a need for an easy-to-train KWS for people who aren't well versed in Linux and just want a private voice assistant. Not everyone in this community even knows how to code or understands the more technical parts (I gave up on reading your posts on microphones because I didn't understand most of it, and I know I will continue to use my ReSpeaker 4-mic as long as I can keep it working, so I will try to decipher those posts if I actually need the information). With Snowboy becoming open source, it was an easy way to add this missing element without having to reinvent the wheel, so to speak.

@synesthesiam is no expert in AI technology as far as I know, and he is working on Rhasspy mostly by himself. There are quite a few things that need work, and I don't think it would be in anyone's interest if he poured all his time into getting a perfect cutting-edge KWS system for Rhasspy while neglecting everything else. If you know of a working open-source framework that can be integrated into Rhasspy, has a decent way to train, is kept up to date by its maintainers (since I don't think there is someone in this community who could keep something like that up to date; otherwise we would have something better already) and, most importantly, is stable enough that an update to TensorFlow or whatever it uses behind the scenes will not cause new models to stop running without manually changing the integration into Rhasspy, then I am sure it could be integrated. This community has no one with both the knowledge and the time to actually work on this, and we can't just offload everything onto synesthesiam; he has enough on his plate as it is.

The reason it is coming from this community is that most of the community, me included, are no experts in any of this. We also work on this as a hobby, and if I tried to look up every unfamiliar term and technology I would never get anything done. So most people either get most of their information from this community or from skimming and half-understanding the underlying concepts. We assume that people posting here about something know what they are talking about, without doing extensive research to verify it, and if something is posted often enough it sticks and becomes the norm. So we need experts, or people with more time and interest, to actually do the research to correct that opinion, but after it has been repeated all over the place it is hard to get it out of people's minds. If I read about something in 10 different threads, mostly written by 10 different people, then I assume they have to know at least some of what they are talking about.

PS: If someone who can move things in this forum comes by, this discussion about KWS systems might be better off in its own thread.


It's called Keras or PyTorch; there is no such thing as a cutting-edge KWS, it's merely the cutting-edge machine learning platform it is based on.
As in garbage in, garbage out: a KWS is the initial voice capture and very much dictates whether the stream is garbage or not.
I did post last May about KWS: small, lean and just enough to be fit for purpose.
Of the various frameworks, the problem is the assumption that it should or needs to fit into Rhasspy, when a KWS broadcasts from keyword until silence and is little more than a stream.

Precise uses Keras and TensorFlow; the only problem is that it is out of date.

As for fitting into Rhasspy: Rhasspy is a project that is supposed to be easy to use with minimal knowledge about all the components, so everything has to fit into Rhasspy at least to the extent that you can select it in the web GUI, configure the needed parameters and then use it.

For anything that is not as “beginner friendly” as Rhasspy (even Rhasspy has a pretty steep learning curve, with all the thresholds and sensitivities to be adjusted), you have the custom script you can use as a KWS, or the option of using voice2json and building everything yourself.

You obviously are pretty much an expert on this; you have no problem training a model without proper guidance and setting everything up on your own, so you can play around all you like and get decent results. Most people using Rhasspy can't do that. I would struggle to use the console script option Rhasspy provides even to wrap a precise-listen command. I would most likely succeed eventually, but only after sinking a few days into the project and being frustrated to no end by it.

So I hope you can understand why it “needs” to fit into Rhasspy for it to be usable by more people.

I basically use the config provided by ReSpeaker with their (mostly working) drivers.

Default config for my mic:
# The IPC key of dmix or dsnoop plugin must be unique
# If 555555 or 666666 is used by other processes, use another one

# use samplerate to resample as speexdsp resample is bad
defaults.pcm.rate_converter "samplerate"

pcm.!default {
    type asym
    playback.pcm "playback"
    capture.pcm "ac108"
}

pcm.playback {
    type plug
    slave.pcm "hw:ALSA"
}

# pcm.dmixed {
#     type dmix
#     slave.pcm "hw:0,0"
#     ipc_key 555555 
# }

pcm.ac108 {
    type plug
    slave.pcm "hw:seeed4micvoicec"
}

# pcm.multiapps {
#     type dsnoop
#     ac108-slavepcm "hw:1,0"
#     ipc_key 666666
# }

My main problem is that I don't understand how ALSA works; I just copy-paste stuff from “guides” that try to explain it with nothing but a bunch of examples.

The problem with Rhasspy is that you are thinking like WordPerfect did with version 5, when an upstart called Microsoft split things into a suite of interoperable software that also partitioned complexity into ease of use.
A KWS is an HMI (Human Machine Interface) and should be free of system constraints or the obsolescence of the system it is used with.
I am no expert and don't train models, because I believe the all-in-one infrastructure approach is wrong, so I don't waste my time.
A KWS should just be pointed at the IP of the system it is an HMI for, and that is it; there is no more need than that.
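
To make that “point it at an IP” idea concrete, here is a rough Python sketch of a standalone KWS reporting a detection to Rhasspy over MQTT on the Hermes hotword topic, which is the only coupling it would need. The broker address, site ID and wakeword ID are placeholders and the detection itself is faked; it is only meant to show the decoupling being described, not a finished client.

# Standalone KWS as a network HMI: it only needs the broker's address.
import json
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"    # placeholder: machine running Rhasspy's MQTT broker
SITE_ID = "livingroom"     # placeholder site ID configured in Rhasspy
WAKEWORD_ID = "my-wakeword"

client = mqtt.Client()
client.connect(BROKER, 1883)

def report_detection():
    # Hermes-style hotword detection message that Rhasspy listens for.
    payload = {
        "siteId": SITE_ID,
        "modelId": WAKEWORD_ID,
        "modelVersion": "",
        "modelType": "personal",
        "currentSensitivity": 0.5,
    }
    client.publish("hermes/hotword/" + WAKEWORD_ID + "/detected", json.dumps(payload))

report_detection()  # in a real KWS this would fire when the model detects the wakeword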

If that's soon enough, I will install a 4-mic on a spare Pi tomorrow, set up a working dsnoop asound.conf and share it with you. Are you using the default device in Rhasspy, or do you choose the 4-mic directly?

It took me a long time to even half understand it. @rolyan_trauts is definitely the one most knowledgeable about ALSA here, I would say.

This is what I have set my Rhasspy to:


I had some trouble with the default device in Docker; selecting the card directly fixed that.

I am also not sure whether changing the asound.conf of my host system will automatically work for Rhasspy inside Docker. Looking inside the container there is no asound.conf, but that is something I am seeing more and more often in current distributions, and I have not yet found any explanation of how it works without the config file.

As for the time frame, I have been without a working wakeword basically since I started playing around with Rhasspy back on 2.4.19, so I am in no hurry. The only reason I am trying to get a wakeword working now is that I am tired of things not working when testing via the web GUI (because it behaves differently) and spending hours on debugging until I figure out it works fine with a wakeword.

ALSA is a pain, and every time I don't use it for a couple of months my MS wipes much of what I know.

The 4-mic is a weird card, as summing the channels is likely detrimental, so just use it as a single mic in terms of KWS, i.e. 1 channel.
It's the ReSpeaker dross that confuses things, as it adds much that is not needed.
I have a 4-mic and a linear in my bin of spare parts and have forgotten whether they even have AGC, but because they lack audio out (unlike the USB one) AEC will not work (the ALSA one; Pulse AEC is pretty ineffective on a Pi).

I only use one channel of the mic since I have this weird issue (again) where at least one random channel produces loud noise at random times. It sometimes fixes itself with a restart, but mostly it doesn't, so I turned all but the first channel down in alsamixer. I know it is not the best mic to use, especially with the driver issues it has, but it was recommended for Snips and I got one. I don't try to do anything fancy with that Pi, I don't expect to be understood over music that is more than just a bit of background noise, and I like the LED ring, so it has to work.

But I have to disagree that it is the ReSpeaker stuff that makes it hard. I have spent hours staring at non-ReSpeaker configs in VMs and on my laptop without understanding much more than I do with the ReSpeaker stuff.

Yeah, 1 channel is fine. It's the AGC (ALC) that I can't remember whether the card has, and that could help a lot,
as you may have little to no signal at a distance.

That complicates things, as when you use the card directly it will be blocked for all other applications. I will set up an asound.conf with dsnoop and the dsnoop as the default.
You can give it a try.
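
Something along these lines is what I mean; this is only an untested sketch that reuses the seeed4micvoicec card name from your config, with the ipc_key, rate and channel count as assumptions:

# untested dsnoop sketch; adjust ipc_key, rate and channels to your setup
pcm.dsnooped {
    type dsnoop
    ipc_key 666666
    slave {
        pcm "hw:seeed4micvoicec"
        channels 4
        rate 16000
    }
}

pcm.!default {
    type asym
    playback.pcm "playback"
    capture.pcm "plug:dsnooped"
}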
I'm not a big fan of Docker and have no understanding of how it interacts with the host's audio. I only know that I've met a lot of people struggling with it :see_no_evil:


Docker isn't that bad; it's Docker syndrome that gets us all: host and container are completely separate instances, and audio either has to be shared via the docker run command or set up internally.
Again, if the KWS were separate it would just work via a network connection :slight_smile:

I have no understanding of how it interacts either, but I tend to break Linux systems regularly, like every few hours, when I experiment with stuff; I am just not compatible with it. So I found that using Docker to encapsulate my applications helps me keep the host running: I just break the Docker container instead and can fix that by starting from scratch.

My containers all run with network: host and quite a few of them run as privileged. I don't care much about encapsulating for the sake of encapsulating; I just like Docker containers better than venvs for separating my stuff.

I'll look at that too tomorrow. I agree that unfortunately the 4-mic really is probably the worst choice from ReSpeaker. It gives no better, if not worse, performance than the 2-mic and doesn't have an audio output. Only that nice LED ring, but that's really it.

Yep, and not having a go, but so that others might read this before getting one: the 4-mic, apart from the pixel ring, is a whole lot of pointless.