From 6b41216902d6ba1aa21e230a85a65774c5fb6cee Mon Sep 17 00:00:00 2001
From: m-bain <36994049+m-bain@users.noreply.github.com>
Date: Sat, 17 Dec 2022 17:34:38 +0000
Subject: [PATCH] Update README.md

---
 README.md | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/README.md b/README.md
index 386c4dd..68ce908 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,5 @@
+Whisper-Based Automatic Speech Recognition (ASR) with improved timestamp accuracy using forced alignment.
 
@@ -29,7 +30,7 @@ Run whisper on example segment (using default params)
 
 `whisperx examples/sample01.wav --model medium.en --output examples/whisperx --align_model WAV2VEC2_ASR_LARGE_LV60K_960H --align_extend 2`
 
-If the speech is non-english, select an alternative ASR phoneme model from this list https://pytorch.org/audio/stable/pipelines.html#id14
+If the speech is non-English, select a model from this [list](https://pytorch.org/audio/stable/pipelines.html#id14) that has been trained on the desired language.
 
@@ -41,31 +42,29 @@ https://user-images.githubusercontent.com/36994049/207743923-b4f0d537-29ae-4be2-
 
 Now, using *WhisperX* with forced alignment to wav2vec2.0:
 
-(a) refining segment timestamps
-
-https://user-images.githubusercontent.com/36994049/207744049-5c0ec593-5c68-44de-805b-b1701d6cc968.mov
-
-(b) word-level timestamps
-
-https://user-images.githubusercontent.com/36994049/207744104-ff4faca1-1bb8-41c9-84fe-033f877e5276.mov
+https://user-images.githubusercontent.com/36994049/208253969-7e35fe2a-7541-434a-ae91-8e919540555d.mp4
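
For context (this note is not part of the patch): the `--align_model` value in the example command names a bundle from torchaudio's pipeline catalog, which is the same catalog the linked list points to. Below is a minimal sketch, assuming only that torchaudio is installed, of how such a bundle resolves to a wav2vec2.0 model and its character vocabulary; WhisperX's own loading code may differ.

```python
import torchaudio

# English default used in the example command above.
bundle = torchaudio.pipelines.WAV2VEC2_ASR_LARGE_LV60K_960H

# For non-English speech, pick a bundle trained on the desired language
# from the same catalog, e.g. German (VoxPopuli):
# bundle = torchaudio.pipelines.VOXPOPULI_ASR_BASE_10K_DE

model = bundle.get_model()    # wav2vec2.0 acoustic model used for alignment
labels = bundle.get_labels()  # character vocabulary the model emits
print(bundle.sample_rate)     # audio must match this rate before alignment
```

Character-level emissions from such a model are what the forced-alignment step matches against Whisper's transcript to refine segment and word timestamps.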