The tokeniser used is OpenAI BPE-based and can support any Unicode character (although it hasn't seen data using those tokens), so you'd just need to retrain on that data, without making changes in utils.py.
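If it helps, here is a minimal sketch showing that byte-level BPE round-trips Devanagari text even if such tokens were rarely seen during training. This assumes an OpenAI-style encoding via tiktoken (`cl100k_base` is a stand-in; the encoding the repo actually uses may differ):

```python
# Minimal sketch: verify an OpenAI-style BPE tokenizer round-trips Hindi text
# via byte-level fallback. "cl100k_base" is an assumption, not necessarily the
# encoding used by this repo.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

hindi_text = "नमस्ते, यह एक परीक्षण वाक्य है।"
token_ids = enc.encode(hindi_text)

# Any Unicode string can be encoded (falling back to byte-level tokens),
# and decoding recovers the original text exactly.
assert enc.decode(token_ids) == hindi_text
print(f"{len(hindi_text)} chars -> {len(token_ids)} tokens")
```

Unseen scripts will typically fragment into many byte-level tokens, which is why retraining (or fine-tuning) on Hindi data matters even though encoding itself never fails.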
We have created our own Hindi_speaker_encoder.pt based on the MetaVoice encoder architecture, and we have fine-tuned first_stage.pt using our hindi_speaker_encoder.pt just to get an idea of the result, but it gives a silent output when we give Hindi text as input. We have also noticed that there is no training script for first_stage and second_stage, so will it still work if we train these two pretrained models on our Hindi dataset?
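For reference, this is roughly how we are checking that the generated audio really is silent rather than just very quiet (standard numpy/soundfile; "output.wav" is a placeholder for the file the model produced):

```python
# Rough diagnostic sketch: measure the level of the generated waveform.
# "output.wav" is a placeholder path for the model's output.
import numpy as np
import soundfile as sf

audio, sr = sf.read("output.wav")
rms = float(np.sqrt(np.mean(np.square(audio))))
peak = float(np.max(np.abs(audio)))
print(f"sample rate: {sr} Hz, RMS: {rms:.6f}, peak: {peak:.6f}")
# Near-zero RMS and peak would suggest the model is emitting silence,
# pointing at the conditioning (speaker embedding / tokenized text)
# rather than the audio decoding step.
```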