RuntimeError: Cowardly refusing to serialize non-leaf tensor which requires_grad

I want to fine-tune the lessac high checkpoint, but I have run into a problem. I have followed the training steps three times, but it always ends at the same error.

Command I use:

python3 -m piper_train --dataset-dir /mnt/c/Users/Code\ Ripple/Desktop/AI\ Tools/piper/Training_Files --accelerator 'cpu' --devices 4 --batch-size 16 --validation-split 0.0 --num-test-examples 0 --max_epochs 10000 --resume_from_checkpoint /mnt/c/Users/Code\ Ripple/Desktop/AI\ Tools/piper/Model-Check-Point/epoch=2218-step=838782.ckpt --checkpoint-epochs 1 --precision 32 --quality high

Error I get:

Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/root/piper/src/python/piper_train/__main__.py", line 147, in <module>
    main()
  File "/root/piper/src/python/piper_train/__main__.py", line 124, in main
    trainer.fit(model)
  File "/root/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 696, in fit
    self._call_and_handle_interrupt(
  File "/root/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 648, in _call_and_handle_interrupt
    return self.strategy.launcher.launch(trainer_fn, *args, trainer=self, **kwargs)
  File "/root/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/strategies/launchers/multiprocessing.py", line 107, in launch
    mp.start_processes(
  File "/root/piper/src/python/.venv/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 189, in start_processes
    process.start()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/usr/lib/python3.10/multiprocessing/context.py", line 288, in _Popen
    return Popen(process_obj)
  File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/usr/lib/python3.10/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
  File "/root/piper/src/python/.venv/lib/python3.10/site-packages/torch/multiprocessing/reductions.py", line 143, in reduce_tensor
    raise RuntimeError("Cowardly refusing to serialize non-leaf tensor which requires_grad, "
RuntimeError: Cowardly refusing to serialize non-leaf tensor which requires_grad, since autograd does not support crossing process boundaries. If you just want to transfer the data, call detach() on the tensor before serializing (e.g., putting it on the queue).
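To make sure I understand what the error message means, I put together a minimal sketch (plain PyTorch, not Piper code): torch.multiprocessing refuses to pickle a tensor that requires grad but is not a leaf, because its autograd graph cannot cross a process boundary, and detach() strips that graph so only the data is sent.

# Minimal sketch (plain torch.multiprocessing, not Piper) of the behavior named in the error.
import torch
import torch.multiprocessing as mp

def worker(rank, t):
    # Runs in the child process; t arrives as plain data with no autograd history.
    print(rank, t.shape)

if __name__ == "__main__":
    leaf = torch.randn(2, 2, requires_grad=True)
    non_leaf = leaf * 2  # result of an autograd op -> non-leaf tensor with requires_grad=True
    # mp.spawn(worker, args=(non_leaf,), nprocs=1)         # raises the same RuntimeError
    mp.spawn(worker, args=(non_leaf.detach(),), nprocs=1)   # detach() avoids it

In my case I am not putting any tensors on a queue myself, so I assume something inside the training setup is being pickled when the worker processes are started. Any pointers would be appreciated.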