Definitely! The first step will be adding support for French to gruut; I expect that to be done within the next week or so.
The recordings can be done with anything, as long as you have WAV files and transcripts. With a little work, gruut will help select a small set of sentences from a large corpus (books, Wikipedia, etc.) that are maximally useful.
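To give an idea of what “maximally useful” means here: the idea is a greedy search that keeps adding whichever sentence covers the most phonemes not yet seen. A rough sketch, with `phonemize()` standing in for however gruut exposes phonemization for your language:

```python
# Greedy selection of sentences that maximize phoneme coverage.
# phonemize() is a placeholder: it should return the phonemes of one sentence.
def select_sentences(sentences, phonemize, max_count=500):
    covered = set()
    selected = []
    remaining = list(sentences)
    while remaining and len(selected) < max_count:
        best, best_new = None, set()
        for sent in remaining:
            new = set(phonemize(sent)) - covered
            if len(new) > len(best_new):
                best, best_new = sent, new
        if best is None:
            break  # nothing adds new coverage
        selected.append(best)
        covered |= best_new
        remaining.remove(best)
    return selected
```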
Training just needs a CUDA-enabled GPU that’s supported by PyTorch >= 1.5. I’m using a GTX 1060 6GB.
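If you want to sanity-check your setup before training, something like this will confirm that PyTorch can see the GPU:

```python
# Quick check that PyTorch can see a CUDA GPU before starting training.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```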
This is the rdh Dutch voice speaking an (accented) English sentence! Because I use IPA for both English and Dutch phonemes, I created a small mapping file that approximates the 14 or so English phonemes that are “missing” from Dutch. I had to guess on some, but it works as a proof of concept.
So this means we could re-use some of the voices for other languages until we get native speakers in those languages. To get it right, though, I will need help from people who speak both languages. For example, I don’t really know how Dutch speakers pronounce the “th” sounds in “thing” and “the”.
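To make that concrete: the mapping file is really just a lookup from the missing English IPA phonemes to the closest Dutch ones. The substitutions below are illustrative guesses, not the actual map I used:

```python
# Hypothetical approximation of English IPA phonemes that Dutch lacks.
# These substitutions are guesses for illustration only.
ENGLISH_TO_DUTCH_APPROX = {
    "θ": "t",   # "thing" -> closest Dutch sound (could also be "s")
    "ð": "d",   # "the"   -> closest Dutch sound (could also be "z")
    "æ": "ɛ",   # "cat"
    "ɹ": "r",   # English r -> Dutch r
    "w": "ʋ",   # English w -> Dutch w-like sound
}

def approximate(phonemes):
    """Replace phonemes missing from the target voice with approximations."""
    return [ENGLISH_TO_DUTCH_APPROX.get(p, p) for p in phonemes]

print(approximate(["ð", "ə"]))  # ['d', 'ə']
```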
For things like numbers and dates, you should probably preprocess your text with something like Lingua Franca to convert them into words that the TTS can pronounce.
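For example, something along these lines with Mycroft’s lingua-franca package (the function names below are from memory, so double-check them against its docs):

```python
# Sketch: expand numbers and dates into speakable words before sending text to the TTS.
# The lingua_franca function names here are from memory; verify against the docs.
from datetime import datetime

import lingua_franca
from lingua_franca.format import nice_date, pronounce_number

lingua_franca.load_language("en")

print(pronounce_number(100.12))         # spoken form of the number
print(nice_date(datetime(1902, 1, 1)))  # spoken form of the date
```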
There are some undocumented features I’m still experimenting with, but I agree that in general a separate library should be used. Some of the features that are in there but disabled for now:
Currency recognition
“$100.12” (sort of works now)
Number types (see the sketch after this list)
“1_ordinal” becomes “first” in English
“1902_year” becomes “nineteen oh two” in English
Alternative pronunciations
“read_1” and “read_2” are pronounced like “red” and “reed” respectively
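As an illustration of the number-type idea (not how gruut itself implements it), the num2words package can already produce these kinds of expansions:

```python
# Illustration only: expanding annotated number tokens with num2words.
from num2words import num2words

print(num2words(1, to="ordinal"))   # "first"
print(num2words(1902, to="year"))   # e.g. "nineteen oh two" (exact wording depends on the version)
```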
I also have the ability to list abbreviations for a language that are automatically expanded. I’ve got a list for English, like mr -> mister, but I don’t know any for Dutch.
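Expansion itself is just a per-language lookup applied to tokens before phonemization. A minimal sketch; the English entries match the mr -> mister example, while the Dutch entries are only guesses:

```python
import re

# Per-language abbreviation tables; the Dutch entries are guesses for illustration.
ABBREVIATIONS = {
    "en": {"mr": "mister", "mrs": "misess", "dr": "doctor", "st": "saint"},
    "nl": {"dhr": "de heer", "mevr": "mevrouw", "dr": "dokter"},
}

def expand_abbreviations(text, lang="en"):
    table = ABBREVIATIONS.get(lang, {})
    def replace(match):
        word = match.group(0)
        return table.get(word.lower().rstrip("."), word)
    return re.sub(r"\b\w+\.?", replace, text)

print(expand_abbreviations("mr. Smith", "en"))  # "mister Smith"
```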
Happy to help and expand on those lists.
If I remember correctly, Mycroft has something like a collaborative system on their website.
(translate.mycroft.ai)
I suppose we could do something similar with just a GitHub directory per language and documentation on what is needed to complete a language.
How should I cope with the following issue?
In the trimming phase of the program, I sometimes hear a phrase pronounced in a way that I don’t find 100% OK, but since I don’t know whether I recorded it twice, I accept it anyway. Then I notice there is indeed a better version. How can I find/delete the first version (without going through all the phrases once more)?
The WAV files are all named <id>_<timestamp>.wav, where the id comes from the prompts file. So just open that file and search for the text; then take the id and find all WAV files that start with it.
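If you’d rather automate it, a small script along these lines will list every prompt id that has more than one take, so you can delete the one you don’t like (the recordings directory is an assumption; adjust to your setup):

```python
# Sketch: find prompt ids that have more than one recorded take.
# Assumes WAV files named <id>_<timestamp>.wav in one directory.
from collections import defaultdict
from pathlib import Path

wav_dir = Path("recordings")  # adjust to your recording directory
takes = defaultdict(list)

for wav in wav_dir.glob("*.wav"):
    prompt_id = wav.stem.rsplit("_", 1)[0]
    takes[prompt_id].append(wav)

for prompt_id, files in sorted(takes.items()):
    if len(files) > 1:
        print(prompt_id)
        for f in sorted(files):
            print("  ", f.name)
```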
@synesthesiam No words. Thank you in advance, once again, for all the relentless effort.
I tried to set it up on an Intel NUC that acts as a Rhasspy server running Ubuntu 18.10. The web server loads perfectly, but when I try any word, e.g. oog (= eye), I get the following message:
Error: Failed to fetch
Different browsers give a similar error message.
After that, the Docker container just crashes and nothing specific is shown in the output. It seems to go wrong when it tries to invoke the API at http://<ip>:59125/api/tts?text=oog&phonemes=false
Trying to invoke it directly using Postman gives the same behavior.
Let me know if I can help you figure this out and I’m happy to assist.
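For what it’s worth, the request can also be reproduced from Python, which at least rules out the browser (the host below and the assumption that the endpoint returns WAV audio are mine):

```python
# Minimal reproduction of the failing TTS request (adjust the host/IP).
import requests

resp = requests.get(
    "http://localhost:59125/api/tts",
    params={"text": "oog", "phonemes": "false"},
    timeout=30,
)
resp.raise_for_status()
with open("oog.wav", "wb") as f:  # assuming the endpoint returns WAV audio
    f.write(resp.content)
print("Wrote", len(resp.content), "bytes")
```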
Doing the exact same thing as you gives the same result: the larynx container stops without a message. This is the output from curl:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (52) Empty reply from server
Docker for me is one version higher, at Docker version 19.03.6, build 369ce74a3c. I also tested it on a brand new Alpine VM, which shows the same behavior.
Not sure what to test, but if somebody has a clue, I’m happy to help test.
I know it shouldn’t matter, but is there any chance Alpine could be causing an issue? Everything is compiled against an Ubuntu base. Docker should hide the differences, but it’s the only thing I can think of.
Another option might be to start the container with --entrypoint bash to get a shell prompt, and then run the ENTRYPOINT command in the Dockerfile manually. Maybe you’ll get an error message when it crashes?
I also encountered the “empty reply from server” problem while trying to set up the rhasspy/larynx:nl-rdh-1 Docker image. I tested the Docker image on my laptop, and it was working fine. The moment I moved it to my server, it stopped working.
I followed the above advice and changed the entrypoint, then ran the ENTRYPOINT command. I got the following error:
Illegal instruction (core dumped)
Googling this message mostly turns up TensorFlow-related problems about a CPU not supporting a specific instruction-set extension. I think the same thing is happening with this Docker image.
These are the supported extensions on my server:
model name : Intel® Atom™ CPU D510 @ 1.66GHz
flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts nopl cpuid aperfmperf pni dtes64 monitor ds_cpl tm2 ssse3 cx16 xtpr pdcm movbe lahf_lm dtherm
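In case anyone else wants to check their machine: the snippet below just reads /proc/cpuinfo and reports some extensions that prebuilt PyTorch/TensorFlow binaries commonly rely on (which one is actually required here is a guess on my part; AVX is the usual suspect):

```python
# Check /proc/cpuinfo for instruction-set extensions commonly required by
# prebuilt PyTorch/TensorFlow binaries (the list of extensions is a guess).
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break

for ext in ("sse4_1", "sse4_2", "avx", "avx2", "fma"):
    print(f"{ext:8s} {'present' if ext in flags else 'MISSING'}")
```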