# ⓍTTS (coqui/XTTS-v2)
ⓍTTS is a voice generation model that lets you clone voices into different languages using just a quick 6-second audio clip. There is no need for an excessive amount of training data that spans countless hours. This is the same or a similar model to the one that powers Coqui Studio and Coqui API.
## Features
## Updates over XTTS-v1
## Languages
XTTS-v2 supports 17 languages: **English (en), Spanish (es), French (fr), German (de), Italian (it), Portuguese (pt), Polish (pl), Turkish (tr), Russian (ru), Dutch (nl), Czech (cs), Arabic (ar), Chinese (zh-cn), Japanese (ja), Hungarian (hu), Korean (ko), Hindi (hi)**. Stay tuned as we continue to add support for more languages. If you have any language requests, feel free to reach out!
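As a quick reference, the seventeen codes above can be collected into a plain-Python validation helper. This is a sketch with no TTS dependency; `check_language` is our own hypothetical name, not part of the 🐸TTS API:

```python
# Supported XTTS-v2 language codes, exactly as listed above.
SUPPORTED_LANGUAGES = {
    "en", "es", "fr", "de", "it", "pt", "pl", "tr", "ru",
    "nl", "cs", "ar", "zh-cn", "ja", "hu", "ko", "hi",
}

def check_language(code: str) -> str:
    """Normalize a language code and fail early if XTTS-v2 can't speak it."""
    code = code.strip().lower()
    if code not in SUPPORTED_LANGUAGES:
        raise ValueError(
            f"Language '{code}' is not supported by XTTS-v2; "
            f"choose one of {sorted(SUPPORTED_LANGUAGES)}"
        )
    return code
```

Validating up front gives a clearer error than letting an unsupported code reach synthesis.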
## Code
The code-base supports inference and fine-tuning.

## Demo Spaces
## License
This model is licensed under the Coqui Public Model License. There's a lot that goes into a license for generative models, and you can read more of the origin story of CPML here.

## Contact
Come and join our 🐸Community. We're active on Discord and Twitter. You can also mail us at info@coqui.ai.

## Using 🐸TTS API:
```python
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2", gpu=True)

# generate speech by cloning a voice using default settings
tts.tts_to_file(
    text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
    file_path="output.wav",
    speaker_wav="/path/to/target/speaker.wav",
    language="en",
)
```
## Using 🐸TTS Command line:
```console
tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 \
    --text "Bugün okula gitmek istemiyorum." \
    --speaker_wav /path/to/target/speaker.wav \
    --language_idx tr \
    --use_cuda true
```
## Using the model directly:
```python
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

config = XttsConfig()
config.load_json("/path/to/xtts/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", eval=True)
model.cuda()

outputs = model.synthesize(
    "It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
    config,
    speaker_wav="/data/TTS-public/_refclips/3.wav",
    gpt_cond_len=3,
    language="en",
)
```
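`model.synthesize` returns its results in a dictionary; to our understanding the generated waveform is the `"wav"` entry, a float array sampled at 24 kHz. A minimal stdlib-only sketch for writing such a waveform to disk (the `save_wav` helper is our own, not part of 🐸TTS; writers from scipy or torchaudio work just as well):

```python
import array
import wave

def save_wav(samples, path, sample_rate=24000):
    """Write a mono float waveform in [-1.0, 1.0] to a 16-bit PCM WAV file."""
    # Clip to the valid range and scale to signed 16-bit integers.
    pcm = array.array(
        "h", (int(max(-1.0, min(1.0, s)) * 32767) for s in samples)
    )
    with wave.open(path, "wb") as f:
        f.setnchannels(1)            # mono
        f.setsampwidth(2)            # 16-bit samples
        f.setframerate(sample_rate)  # assumed XTTS output rate: 24 kHz
        f.writeframes(pcm.tobytes())

# e.g. save_wav(outputs["wav"], "xtts_output.wav")
```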
## Files & Weights
| Filename | Size |
|---|---|
| dvae.pth | 0.20 GB |
| mel_stats.pth | 0.00 GB |
| model.pth | 1.74 GB |
| speakers_xtts.pth | 0.01 GB |
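After downloading multi-gigabyte checkpoints it is worth confirming the files arrived intact. A hedged stdlib-only sketch (`sha256_of` is a hypothetical helper of ours; compare the printed digests against whatever checksums the model page publishes):

```python
import hashlib
from pathlib import Path

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large checkpoints never load into RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# The filenames from the table above; run inside the downloaded model directory.
for name in ("dvae.pth", "mel_stats.pth", "model.pth", "speakers_xtts.pth"):
    p = Path(name)
    if p.exists():
        print(f"{name}: {sha256_of(p)} ({p.stat().st_size / 1e9:.2f} GB)")
```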