From cbaeb85034d2391027a348c67a148761f6109ea3 Mon Sep 17 00:00:00 2001
From: Max Bain
Date: Mon, 19 Dec 2022 19:41:39 +0000
Subject: [PATCH] restructure readme,

---
 README.md              | 50 +++++++++++++++++++++++-------------
 whisperx/transcribe.py |  2 +-
 2 files changed, 29 insertions(+), 23 deletions(-)

diff --git a/README.md b/README.md
index 482288c..571761b 100644
--- a/README.md
+++ b/README.md
@@ -4,8 +4,8 @@
GitHub stars
-
-
+
GitHub issues

@@ -17,6 +17,15 @@

+

+What is it •
+Setup •
+Example usage

+
+whisperx-arch (architecture figure)
+
+
Made by Max Bain • :globe_with_meridians: https://www.maxbain.com
@@ -25,9 +34,9 @@

-

What is it 🔎

+

What is it 🔎

-This repository refines the timestamps of openAI's Whisper model via forced aligment with phoneme-based ASR models (e.g. wav2vec2.0)
+This repository refines the timestamps of OpenAI's Whisper model via forced alignment with phoneme-based ASR models (e.g. wav2vec2.0), now covering multilingual use cases.

**Whisper** is an ASR model [developed by OpenAI](https://github.com/openai/whisper), trained on a large dataset of diverse audio. Whilst it does produce highly accurate transcriptions, the corresponding timestamps are at the utterance level, not per word, and can be inaccurate by several seconds.

@@ -36,25 +45,25 @@ This repository refines the timestamps of openAI's Whisper model via forced alig

**Forced Alignment** refers to the process by which orthographic transcriptions are aligned to audio recordings to automatically generate phone-level segmentation.

-whisperx-arch
-
-
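A minimal sketch of the emission step that forced alignment builds on, using torchaudio's pipeline API (the model choice and file path are illustrative; the trellis and backtracking that turn emissions into timestamps follow the PyTorch tutorial credited in the acknowledgements):

```python
import torch
import torchaudio

# A wav2vec2 CTC model -- the same family selected via --align_model below.
bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H
model = bundle.get_model()
labels = bundle.get_labels()  # character vocabulary each frame is scored over

waveform, sr = torchaudio.load("examples/sample01.wav")
waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)

with torch.inference_mode():
    emissions, _ = model(waveform)  # (1, num_frames, len(labels)) logits
    log_probs = torch.log_softmax(emissions, dim=-1)

# Forced alignment then finds, for each character of the transcript, the most
# likely span of frames in log_probs, which yields word-level timestamps.
```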

Setup ⚙️

+

Setup ⚙️

Install this package using `pip install git+https://github.com/m-bain/whisperx.git`.

You may also need to install ffmpeg, Rust, etc. Follow the OpenAI instructions here: https://github.com/openai/whisper#setup.

-
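A quick sanity check that the install worked (a hypothetical snippet; it assumes the pip command above succeeded and pulled in `openai-whisper` as a dependency):

```python
# Both imports should succeed after the pip command above.
import whisper
import whisperx  # noqa: F401  (the package installed by this repo)

print(whisper.available_models())  # e.g. ['tiny.en', 'tiny', ..., 'large']
```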

Examples 💬

+

Example usage 💬

+### English
+
Run whisper on the example segment (using default params)

-`whisperx examples/sample01.wav --model medium.en --output examples/whisperx --align_model WAV2VEC2_ASR_BASE_960H --align_extend 2`
+    whisperx examples/sample01.wav
+
For increased timestamp accuracy, at the cost of higher GPU memory, use a bigger alignment model, e.g.

-`WAV2VEC2_ASR_LARGE_LV60K_960H` or `HUBERT_ASR_XLARGE`
+    whisperx examples/sample01.wav --model medium.en --align_model WAV2VEC2_ASR_LARGE_LV60K_960H --output_dir examples/whisperx

Result using *WhisperX* with forced alignment to wav2vec2.0 large:

@@ -69,7 +78,7 @@ https://user-images.githubusercontent.com/36994049/207743923-b4f0d537-29ae-4be2-
For non-English ASR, it is best to use the `large` whisper model.

### French
-`whisperx --model large --language fr examples/sample_fr_01.wav --align_model VOXPOPULI_ASR_BASE_10K_FR --output_dir examples/whisperx/ --align_extend 2`
+    whisperx examples/sample_fr_01.wav --model large --language fr --align_model VOXPOPULI_ASR_BASE_10K_FR --output_dir examples/whisperx

https://user-images.githubusercontent.com/36994049/208298804-31c49d6f-6787-444e-a53f-e93c52706752.mov

@@ -77,8 +86,7 @@ https://user-images.githubusercontent.com/36994049/208298804-31c49d6f-6787-444e-
### German

-`whisperx --model large --language de examples/sample_de_01.wav --align_model VOXPOPULI_ASR_BASE_10K_DE --output_dir examples/whisperx/ --align_extend 2`
-
+    whisperx examples/sample_de_01.wav --model large --language de --align_model VOXPOPULI_ASR_BASE_10K_DE --output_dir examples/whisperx

https://user-images.githubusercontent.com/36994049/208298811-e36002ba-3698-4731-97d4-0aebd07e0eb3.mov

@@ -87,22 +95,21 @@ https://user-images.githubusercontent.com/36994049/208298811-e36002ba-3698-4731-
### Italian

-`whisperx --model large --language it examples/sample_it_01.wav --align_model VOXPOPULI_ASR_BASE_10K_IT --output_dir examples/whisperx/ --align_extend 2`
+    whisperx examples/sample_it_01.wav --model large --language it --align_model VOXPOPULI_ASR_BASE_10K_IT --output_dir examples/whisperx

https://user-images.githubusercontent.com/36994049/208298819-6f462b2c-8cae-4c54-b8e1-90855794efc7.mov

-
-
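For a rough picture of what the commands above do, here is a sketch of the two passes using only upstream `openai-whisper` and `torchaudio` names (not this repo's internal entry points, which may differ):

```python
import torchaudio
import whisper

# Pass 1: Whisper transcription -- accurate text, utterance-level timestamps.
model = whisper.load_model("medium.en")
result = model.transcribe("examples/sample01.wav", condition_on_previous_text=False)

# Pass 2: the phoneme-based model named by --align_model; its frame-level
# predictions are what the forced-alignment step matches the transcript against.
bundle = torchaudio.pipelines.WAV2VEC2_ASR_LARGE_LV60K_960H
align_model = bundle.get_model()

for seg in result["segments"]:
    print(f"[{seg['start']:7.2f} --> {seg['end']:7.2f}] {seg['text']}")
```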

Limitations ⚠️

+

Limitations ⚠️

- Not thoroughly tested, especially for non-English; results may vary -- please post an issue to let me know how it performs on your data.
- Whisper normalises spoken numbers, e.g. "fifty seven" to the Arabic numerals "57". This normalization needs to happen after alignment so that the phonemes can be aligned; currently numbers are simply ignored.
- Assumes the initial Whisper timestamps are accurate to some degree (within a margin of 2 seconds; adjust if needed -- bigger margins are more prone to alignment errors; see the sketch after this list).
- Hacked this up quite quickly; there might be some errors, please raise an issue if you encounter any.

-
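To make the margin point concrete, a toy sketch of the search-window idea (the CLI exposed this as `--align_extend`; the function here is hypothetical):

```python
# Illustrative only: the aligner searches within the Whisper segment padded by
# a small buffer on each side; 2.0 s mirrors the default margin mentioned above.
def alignment_window(start: float, end: float, extend: float = 2.0) -> tuple[float, float]:
    """Pad a Whisper segment [start, end] before forced alignment."""
    return max(0.0, start - extend), end + extend

print(alignment_window(4.7, 9.2))  # (2.7, 11.2); wider windows risk mis-alignment
```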

Coming Soon 🗓

+

Coming Soon 🗓

[x] Multilingual init

[ ] Automatic align model selection based on language detection

-[ ] Option to minimise gpu load (chunk wav2vec)

[ ] Incorporating word-level speaker diarization

[ ] Inference speedup with batch processing

-

Contact

+

Contact

Contact maxbain[at]robots[dot]ox[dot]ac[dot]uk if using this commercially.

-

Acknowledgements 🙏

+

Acknowledgements 🙏

Of course, this is mostly just a modification of [openAI's whisper](https://github.com/openai/whisper), with credit also due to this [PyTorch tutorial on forced alignment](https://pytorch.org/tutorials/intermediate/forced_alignment_with_torchaudio_tutorial.html).

-

Citation

+

Citation

If you use this in your research, please cite the repo:

```bibtex
% (BibTeX entry truncated in this patch)
```

diff --git a/whisperx/transcribe.py b/whisperx/transcribe.py
index 174cdbd..a4fc279 100644
--- a/whisperx/transcribe.py
+++ b/whisperx/transcribe.py
@@ -26,7 +26,7 @@ def transcribe(
    compression_ratio_threshold: Optional[float] = 2.4,
    logprob_threshold: Optional[float] = -1.0,
    no_speech_threshold: Optional[float] = 0.6,
-    condition_on_previous_text: bool = True,
+    condition_on_previous_text: bool = False,  # turn off by default due to errors it causes
    **decode_options,
 ):
    """
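On the `transcribe.py` change above: `condition_on_previous_text` feeds each decoded segment into the prompt for the next audio window, which can compound decoding errors, hence the new `False` default. Callers can still opt back in; sketched here against upstream `openai-whisper`, whose `transcribe` exposes the same flag:

```python
import whisper

model = whisper.load_model("medium.en")
# Re-enable cross-segment prompting if your audio benefits from the context
# (e.g. long-form speech where consistency outweighs the error-propagation risk).
result = model.transcribe("examples/sample01.wav", condition_on_previous_text=True)
```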