Koala Noise Suppression

Looks interesting, but they have compared against RNNoise, which they seem to do often even though better alternatives are available (RNNoise is a very early model and performs poorly by today's standards). The same goes for Porcupine, where they benchmark against the likes of Snowboy.

It's a shame DeepFilterNet is single-threaded only, but DTLN is a big improvement on RNNoise, so the claims are not exactly true.
There is also another thing to check: the latest and greatest ASR models are often trained without filter technology, and in the case of Whisper a filter can actually increase WER. (Dedicating a filter to KWS and ASR and training it into the dataset can yield huge increases in accuracy.)
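The WER claim above is easy to check yourself. A minimal sketch in pure Python: compute word error rate (word-level edit distance over reference length) for ASR output on raw vs. filtered audio. The transcripts below are hypothetical placeholders, not real Whisper output.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical transcripts: same utterance decoded from raw audio
# and from audio that was passed through a noise filter first.
reference    = "turn on the kitchen lights"
raw_hyp      = "turn on the kitchen lights"
filtered_hyp = "turn on the chicken lights"

print(wer(reference, raw_hyp))       # 0.0
print(wer(reference, filtered_hyp))  # 0.2
```

Running both conditions over a whole test set (rather than one utterance) is what would actually show whether a given filter helps or hurts a given ASR model.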


Hey! Thanks for spreading the word about Koala. (I’m doing product marketing at Picovoice. :))

We use production-ready products in benchmarks, and RNNoise is still widely used. We don’t have Krisp because it’s not publicly available. If you can get the SDK and do not mind benchmarking, please let us know: GitHub - Picovoice/noise-suppression-benchmark: Benchmark for noise suppression engines

The same challenge with Porcupine as you mentioned. If you can get an SDK from Cerence, Nuance, Sensory, or Soundhound and do a benchmark, you’d do me a big favour :slight_smile:

For example, we couldn’t add Soundhound Houndify due to their EULA, but when somebody else did it, we shared: End-to-End Intent Inference from Speech

It installs as a LADSPA plugin and is open source.

Both are open source, and both are vastly superior to RNNoise. You are a representative of a commercial company, so maybe you invest in those closed commercial SDKs, or even compare against freely available open source that isn't so ancient?

Your reply baffles me.

“you are a representative of a commercial company so maybe you invest in those closed commercial SDKs”

  1. Their license agreements (EULAs) don’t allow publishing any benchmarks.
  2. The idea of an open-source benchmark is to be reproducible.

How is investing (or not) relevant to an open-source benchmark when people cannot reproduce it? Nobody can (nor should) give away others’ products. It’s their call whether to make their SDK/API accessible.

“compare against freely available open source that isn’t so ancient?”

I didn’t say “RNNoise is the state-of-the-art”. It’s not. I agree with you. I said “RNNoise is still widely used”. Enterprises know and use RNNoise and Krisp. It’s the market. ¯\_(ツ)_/¯

Anyway, I wanted to clarify that (1) we’re not cherry-picking, (2) the benchmarks are open to contributions, and (3) we support reproducible benchmarks.

If you want benchmarks then you provide them. I can tell instantly from the samples that there are better options; I just posted two open-source ones, and you could get the SDKs.

IMO you do cherry-pick, and it's why you have the examples you do. Many companies are the same: sales blurb is often optimistic, as that is its nature.
There are much better noise filters, and a better benchmark would likely plot SNR against compute load, as Koala looks quite light whilst better filters likely incur much more load.
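An SNR-vs-load benchmark along those lines could pair an SNR figure with measured processing time per second of audio. A minimal pure-Python sketch with synthetic signals; the "suppressor" here is a placeholder that merely attenuates the noise, standing in for a call into a real engine:

```python
import math
import time

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB from equal-length sample lists."""
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    return 10 * math.log10(p_sig / p_noise)

# Synthetic example: one second of a 440 Hz tone plus a tonal "noise".
sr = 16000
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr)]
noise = [0.3 * math.sin(2 * math.pi * 3331 * t / sr) for t in range(sr)]

def fake_suppress(samples):
    # Placeholder for a real suppression engine: attenuate by 20 dB.
    return [x * 0.1 for x in samples]

start = time.perf_counter()
residual = fake_suppress(noise)
elapsed = time.perf_counter() - start

print(f"input SNR:  {snr_db(tone, noise):.1f} dB")
print(f"output SNR: {snr_db(tone, residual):.1f} dB")
print(f"processing time for 1 s of audio: {elapsed * 1000:.2f} ms")
```

A real benchmark would run recorded noisy speech through each engine, measure SNR (or PESQ/STOI) improvement on one axis and real-time factor or CPU load on the other.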

No one uses RNNoise apart from some who try PulseEffects and realise how poor and antiquated it now is; enterprises use products like Krisp or RTX Voice.

RTX Voice has hardware requirements and is not cross-platform, as you probably know. All I can do is explain the rationale, and I did. Everybody is free to do their own analysis, and to believe what they want. There is no value in repeating myself.

Thanks for sharing Koala and have a great weekend.
