French male voice for Piper

Hi!
Training a new TTS model is a lot easier than it used to be thanks to Piper, so I jumped in.

I've published it online.


This is great @tjiho! Do you mind if I include one or both of the models in Piper’s voice repository?
Maybe the one trained on more data and named “tom”?

Sure, with pleasure.

I've added a third one (it's the second one trained for longer), so if you include just one model, I think the third one is the best.

The model trained on multiple datasets has a voice that is too deep.


I don’t know what’s happened to GitHub now, but it’s only a mirror of Tjiho/French-tts-model-piper on Gitea.

Please, I need help to fine-tune or train the model. I have tried, but I get:

PossibleUserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the num_workers argument (try 4 which is the number of cpus on this machine) in the DataLoader init to improve performance.
rank_zero_warn(
Killed

when I use --device 1, and I also get this error when I set it to 4:

RuntimeError: Cowardly refusing to serialize non-leaf tensor which requires_grad, since autograd does not support crossing process boundaries. If you just want to transfer the data, call detach() on the tensor before serializing (e.g., putting it on the queue).

Please, how can I solve this? I have tried everything from scratch three times but still get the same error. Machine: Windows 11 with an Intel Core i5 and 8 GB RAM.

Hi!
The message about num_workers is just a warning; you don’t have to change it.
You probably have another issue. Which graphics card does your computer have? Does the GPU have enough memory? 8 GB on my 1070 was the limit.
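
If you’re not sure which GPU you have or how much memory it has, and assuming it’s an NVIDIA card with the driver installed (which is what CUDA training needs), you can check with:

nvidia-smi

The header shows the card model and its total memory, and the Memory-Usage column shows how much is currently in use. A plain “Killed” message from the OS usually means the training process ran out of memory.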

To limit memory usage, there are two parameters to set.
First:

export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256

Then, when you start the training, set the batch-size argument to a small value; I set it to 32 or 16.
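
For reference, here is a minimal sketch of what that looks like, based on the training command documented in Piper’s training guide. The paths are placeholders, flag names may differ slightly depending on your Piper version, and only the environment variable and --batch-size are the parts discussed above:

# limit how large a block the CUDA allocator will hand out at once
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256

# start (or resume) training with a reduced batch size
python3 -m piper_train \
    --dataset-dir /path/to/training_dir/ \
    --accelerator gpu \
    --devices 1 \
    --batch-size 16 \
    --validation-split 0.0 \
    --num-test-examples 0 \
    --max_epochs 10000 \
    --resume_from_checkpoint /path/to/base_checkpoint.ckpt \
    --precision 32

If the process still gets killed, halve --batch-size again (8, then 4) before changing anything else.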