Fine-tuned XLSR-53 large model for speech recognition in Russian
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Russian using the train and validation splits of Common Voice 6.1 and CSS10. When using this model, make sure that your speech input is sampled at 16kHz.
This model was fine-tuned thanks to the GPU credits generously provided by OVHcloud :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
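If your audio is not already sampled at 16 kHz, you can resample it before running inference. The snippet below is a minimal sketch using librosa; the file paths are only placeholders:

```python
import librosa
import soundfile as sf

# Load the audio and resample it to the 16 kHz rate the model expects
speech, sampling_rate = librosa.load("/path/to/file.mp3", sr=16_000)

# Optionally write the resampled audio back to disk (path is illustrative)
sf.write("/path/to/file_16k.wav", speech, 16_000)
```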
Usage
The model can be used directly (without a language model) as follows...
Using the HuggingSound library:
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-russian")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
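The `transcribe` call returns one result per input path. Depending on the HuggingSound version, each result is a dictionary whose `transcription` field holds the decoded text (this is an assumption, so check the version you have installed):

```python
# Print the decoded text for each input file.
# Assumption: each result is a dict with a "transcription" key;
# fall back to printing the raw result otherwise.
for path, result in zip(audio_paths, transcriptions):
    text = result["transcription"] if isinstance(result, dict) else result
    print(f"{path}: {text}")
```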
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

LANG_ID = "ru"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-russian"
SAMPLES = 5

test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    batch["sentence"] = batch["sentence"].upper()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)

for i, predicted_sentence in enumerate(predicted_sentences):
    print("-" * 100)
    print("Reference:", test_dataset[i]["sentence"])
    print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| --- | --- |
Evaluation
1. To evaluate on mozilla-foundation/common_voice_6_0 with split test

```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-russian --dataset mozilla-foundation/common_voice_6_0 --config ru --split test
```

2. To evaluate on speech-recognition-community-v2/dev_data

```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-russian --dataset speech-recognition-community-v2/dev_data --config ru --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
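The `--chunk_length_s` and `--stride_length_s` flags correspond to chunked inference over long recordings. The internals of eval.py may differ, but the same idea can be sketched with the transformers ASR pipeline; the audio path below is only a placeholder:

```python
from transformers import pipeline

# Chunked long-form inference, mirroring the chunk/stride flags used above
asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/wav2vec2-large-xlsr-53-russian",
)

result = asr("/path/to/long_file.wav", chunk_length_s=5.0, stride_length_s=1.0)
print(result["text"])
```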
Citation
If you want to cite this model you can use this:

```bibtex
@misc{grosman2021xlsr53-large-russian,
  title={Fine-tuned {XLSR}-53 large model for speech recognition in {R}ussian},
  author={Grosman, Jonatas},
  howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-russian}},
  year={2021}
}
```
Files & Weights
| Filename | Size | Action |
|---|---|---|
| flax_model.msgpack | 1.18 GB | |
| pytorch_model.bin | 1.18 GB |