From fb2b727e82a23ccdd9ac7e8fc1917713d831d833 Mon Sep 17 00:00:00 2001
From: m-bain <36994049+m-bain@users.noreply.github.com>
Date: Thu, 15 Dec 2022 01:01:48 +0000
Subject: [PATCH] Update README.md
---
README.md | 28 ++++++++++++++++++----------
1 file changed, 18 insertions(+), 10 deletions(-)
diff --git a/README.md b/README.md
index 46471be..495423d 100644
--- a/README.md
+++ b/README.md
@@ -1,20 +1,22 @@
WhisperX
-Whisper-Based Automatic Speech Recognition with improved timestamp accuracy using forced alignment.
+
+Whisper-Based Automatic Speech Recognition (ASR) with improved timestamp accuracy using forced alignment.
-What is it
+What is it 🔎
This repository refines the timestamps of OpenAI's Whisper model via forced alignment with phoneme-based ASR models (e.g. wav2vec2.0).
-**Whisper** is an Automatic Speech Recognition model [developed by OpenAI](https://github.com/openai/whisper), trained on a large dataset of diverse audio. Whilst it does produces highly accurate transcriptions, the corresponding timestamps are at the utterance-level, not per word, and can be inaccurate by several seconds.
+**Whisper** is an ASR model [developed by OpenAI](https://github.com/openai/whisper), trained on a large dataset of diverse audio. Whilst it does produce highly accurate transcriptions, the corresponding timestamps are at the utterance level, not per word, and can be inaccurate by several seconds.
+
+**Phoneme-Based ASR**: a suite of models fine-tuned to recognise the smallest unit of speech distinguishing one word from another, e.g. the element p in "tap". A popular example is [wav2vec2.0](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self).
**Forced Alignment** refers to the process by which orthographic transcriptions are aligned to audio recordings to automatically generate phone-level segmentation.
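
To make the idea concrete, below is a minimal sketch of character-level forced alignment with a wav2vec2.0 model, condensed from the torchaudio tutorial cited in the acknowledgements. The audio path and transcript are placeholders, and this simplified trellis is illustrative rather than WhisperX's actual implementation.

```python
import torch
import torchaudio

# Pretrained wav2vec2.0 bundle; its label set starts with the CTC blank '-'.
bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H
model = bundle.get_model()
labels = bundle.get_labels()

waveform, sr = torchaudio.load("speech.wav")  # placeholder audio file
waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)

with torch.inference_mode():
    emissions, _ = model(waveform)
emission = torch.log_softmax(emissions[0], dim=-1)  # (frames, labels)

transcript = "I|HAD|THAT|CURIOSITY|BESIDE|ME"  # '|' marks word boundaries
dictionary = {c: i for i, c in enumerate(labels)}
tokens = [dictionary[c] for c in transcript]

# Trellis of shape (frames + 1, tokens + 1): cell (t, j) holds the best
# log-probability of having emitted the first j characters after t frames.
blank_id = 0
num_frames, num_tokens = emission.size(0), len(tokens)
trellis = torch.full((num_frames + 1, num_tokens + 1), -float("inf"))
trellis[0, 0] = 0.0
trellis[1:, 0] = torch.cumsum(emission[:, blank_id], dim=0)  # all-blank prefix
for t in range(num_frames):
    trellis[t + 1, 1:] = torch.maximum(
        trellis[t, 1:] + emission[t, blank_id],  # hold position, emit blank
        trellis[t, :-1] + emission[t, tokens],   # advance to the next character
    )
# Backtracking from trellis[-1, -1] recovers the frame at which each character
# is emitted; frame indices convert to seconds via the model's frame rate.
```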
-Setup
+Setup ⚙️
Install this package using
`pip install git+https://github.com/m-bain/whisperx.git`
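
Usage from Python looks roughly like the sketch below. This is a hedged sketch, not a definitive reference: `load_model`, `load_align_model`, and `align` are the assumed entry points, and `audio.mp3` is a placeholder; check the repository for the current interface.

```python
import whisperx

device = "cuda"
audio_file = "audio.mp3"  # placeholder input

# 1. Transcribe with the original Whisper model (utterance-level timestamps).
model = whisperx.load_model("base", device)
result = model.transcribe(audio_file)

# 2. Load a phoneme-based alignment model for the detected language.
model_a, metadata = whisperx.load_align_model(language_code=result["language"], device=device)

# 3. Force-align the Whisper output to get accurate word-level timestamps.
result_aligned = whisperx.align(result["segments"], model_a, metadata, audio_file, device)

print(result_aligned["word_segments"])
```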
@@ -37,6 +39,7 @@ Using normal whisper out of the box, many transcriptions are out of sync:
https://user-images.githubusercontent.com/36994049/207743923-b4f0d537-29ae-4be2-b404-bb941db73652.mov
Now, using *WhisperX* with forced alignment via wav2vec2.0:
+
(a) refining segment timestamps
https://user-images.githubusercontent.com/36994049/207744049-5c0ec593-5c68-44de-805b-b1701d6cc968.mov
@@ -46,19 +49,24 @@ https://user-images.githubusercontent.com/36994049/207744049-5c0ec593-5c68-44de-
https://user-images.githubusercontent.com/36994049/207744104-ff4faca1-1bb8-41c9-84fe-033f877e5276.mov
-Limitations
+Limitations ⚠️
- Hacked this up quite quickly, so there may be some errors; please raise an issue if you encounter any.
- Currently only working and tested for the English language.
- Whisper normalises spoken numbers, e.g. "fifty seven", to Arabic numerals, e.g. "57". This normalisation needs to be performed after alignment so that the numbers' phonemes can be aligned; currently numbers are simply ignored (one possible workaround is sketched after this list).
- Assumes the initial Whisper timestamps are accurate to some degree (within a margin of 2 seconds; adjust if needed, since bigger margins are more prone to alignment errors).
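
As an illustration of the number issue above, one possible (not implemented) workaround is to map Whisper's Arabic numerals back to spelled-out words before alignment, then restore the numerals afterwards. The third-party `num2words` package is assumed here; none of this is part of WhisperX.

```python
import re
from num2words import num2words  # third-party package, assumed installed

def denormalize_numbers(text: str) -> str:
    """Replace digit strings with their spoken form so they have phonemes to align."""
    return re.sub(r"\d+", lambda m: num2words(int(m.group())), text)

print(denormalize_numbers("I counted 57 sheep"))  # -> "I counted fifty-seven sheep"
```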
-Contact
+Coming Soon 🗓
-Contact maxbain[at]robots.ox.ac.uk if you are using this at scale.
+- [ ] Incorporating word-level speaker diarization
-Acknowledgements
+- [ ] Inference speedup with batch processing
--OpenAI's whisper https://github.com/openai/whisper
+Contact
--PyTorch forced alignment tutorial https://pytorch.org/tutorials/intermediate/forced_alignment_with_torchaudio_tutorial.html
+Contact maxbain[at]robots.ox.ac.uk for non-bug-related queries.
+
+Acknowledgements 🙏
+
+Of course, this is mostly just a modification to [OpenAI's whisper](https://github.com/openai/whisper).
+Credit also goes to this [PyTorch tutorial on forced alignment](https://pytorch.org/tutorials/intermediate/forced_alignment_with_torchaudio_tutorial.html).