Mirror of https://github.com/m-bain/whisperX.git, synced 2025-07-01 18:17:27 -04:00

Commit: resolve conflicts
README.md
@@ -1,14 +1,42 @@
<h1 align="center">WhisperX</h1>

<p align="center">
  <a href="https://github.com/m-bain/whisperX/stargazers">
    <img src="https://img.shields.io/github/stars/m-bain/whisperX.svg?colorA=orange&colorB=orange&logo=github" alt="GitHub stars">
  </a>
  <a href="https://github.com/m-bain/whisperX/issues">
    <img src="https://img.shields.io/github/issues/m-bain/whisperx.svg" alt="GitHub issues">
  </a>
  <a href="https://github.com/m-bain/whisperX/blob/master/LICENSE">
    <img src="https://img.shields.io/github/license/m-bain/whisperX.svg" alt="GitHub license">
  </a>
  <a href="https://twitter.com/intent/tweet?text=&url=https%3A%2F%2Fgithub.com%2Fm-bain%2FwhisperX">
    <img src="https://img.shields.io/twitter/url/https/github.com/m-bain/whisperX.svg?style=social" alt="Twitter">
  </a>
</p>

<p align="center">
  <a href="#what-is-it">What is it</a> •
  <a href="#setup">Setup</a> •
  <a href="#example">Example usage</a>
</p>

<img width="1216" align="center" alt="whisperx-arch" src="https://user-images.githubusercontent.com/36994049/208313881-903ab3ea-4932-45fd-b3dc-70876cddaaa2.png">

<h6 align="center">Made by Max Bain • :globe_with_meridians: <a href="https://www.maxbain.com">https://www.maxbain.com</a></h6>

<p align="left">Whisper-based Automatic Speech Recognition (ASR) with improved timestamp accuracy using forced alignment.
</p>

<h2 align="left" id="what-is-it">What is it 🔎</h2>

This repository refines the timestamps of OpenAI's Whisper model via forced alignment with phoneme-based ASR models (e.g. wav2vec2.0), with multilingual support.

**Whisper** is an ASR model [developed by OpenAI](https://github.com/openai/whisper), trained on a large dataset of diverse audio. Whilst it produces highly accurate transcriptions, the corresponding timestamps are at the utterance level, not per word, and can be inaccurate by several seconds.

@@ -17,40 +45,40 @@ This repository refines the timestamps of openAI's Whisper model via forced alig

**Forced Alignment** refers to the process by which orthographic transcriptions are aligned to audio recordings to automatically generate phone-level segmentation.
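
To make that concrete, here is a minimal sketch of the first half of the process using a torchaudio wav2vec2.0 bundle: extracting frame-level label probabilities ("emissions"), which a trellis-plus-backtracking step (as in the PyTorch forced-alignment tutorial credited in the Acknowledgements) then turns into per-word timestamps. This is illustrative only, not WhisperX's exact implementation, and the audio path is a placeholder.

```python
import torch
import torchaudio

# One of the phoneme-level CTC bundles the CLI accepts via --align_model.
bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H
model = bundle.get_model()
labels = bundle.get_labels()
dictionary = {c: i for i, c in enumerate(labels)}

waveform, sr = torchaudio.load("examples/sample01.wav")
waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)

with torch.inference_mode():
    emissions, _ = model(waveform)                    # (1, frames, num_labels)
    emissions = torch.log_softmax(emissions, dim=-1)  # frame-level log-probs

# Encode the known transcription with the model's vocabulary; '|' marks word
# boundaries in these bundles. A dynamic-programming trellis plus backtracking
# over `emissions` then yields start/end frames per token, converted to seconds
# via: seconds_per_frame = waveform.shape[1] / emissions.shape[1] / bundle.sample_rate
tokens = [dictionary[c] for c in "I|HAD|THAT|CURIOSITY|BESIDE|ME"]
```
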
<h2 align="left" id="setup">Setup ⚙️</h2>

Install this package using

`pip install git+https://github.com/m-bain/whisperx.git`

You may also need to install ffmpeg, rust, etc. Follow the OpenAI instructions here: https://github.com/openai/whisper#setup.

<h2 align="left">Examples💬</h2>
|
<h2 align="left" id="example">Example usage💬</h2>
|
||||||
|
|
||||||
### English
|
### English
|
||||||
|
|
||||||
Run whisper on example segment (using default params)
|
Run whisper on example segment (using default params)
|
||||||
|
|
||||||
`whisperx examples/sample01.wav --model medium.en --output examples/whisperx --align_model WAV2VEC2_ASR_LARGE_LV60K_960H --align_extend 2`
|
whisperx examples/sample01.wav
|
||||||
|
|
||||||
If low gpu memory is required, use a smaller align model e.g. `WAV2VEC2_ASR_BASE_LV60K_960H`
|
|
||||||
|
|
||||||
Using normal whisper out of the box, many transcriptions are out of sync:
|
For increased timestamp accuracy, at the cost of higher gpu mem, use a bigger alignment model e.g.
|
||||||
|
|
||||||
https://user-images.githubusercontent.com/36994049/207743923-b4f0d537-29ae-4be2-b404-bb941db73652.mov
|
whisperx examples/sample01.wav --model medium.en --align_model WAV2VEC2_ASR_LARGE_LV60K_960H --output_dir examples/whisperx
|
||||||
|
|
||||||
Now, using *WhisperX* with forced alignment to wav2vec2.0:
|
Result using *WhisperX* with forced alignment to wav2vec2.0 large:
|
||||||
|
|
||||||
https://user-images.githubusercontent.com/36994049/208253969-7e35fe2a-7541-434a-ae91-8e919540555d.mp4
|
https://user-images.githubusercontent.com/36994049/208253969-7e35fe2a-7541-434a-ae91-8e919540555d.mp4
|
||||||
|
|
||||||
|
Compare this to original whisper out the box, where many transcriptions are out of sync:
|
||||||
|
|
||||||
|
https://user-images.githubusercontent.com/36994049/207743923-b4f0d537-29ae-4be2-b404-bb941db73652.mov
|
||||||
|
|
||||||
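
These commands are a thin wrapper around two functions in `whisperx/transcribe.py`. The sketch below is condensed from this commit's `cli()`; model loading and alignment-dictionary setup are elided, so treat it as an outline rather than a stable Python API:

```python
# Condensed from cli() in whisperx/transcribe.py (this commit). Assumes `model`,
# `align_model`, `align_dictionary`, `device`, and the parsed CLI args
# (`temperature`, `args`, `align_extend`, `align_from_prev`, `drop_non_aligned`)
# are set up exactly as cli() does.
result = transcribe(model, audio_path, temperature=temperature, **args)
result_aligned = align(result["segments"], result["language"], align_model,
                       align_dictionary, audio_path, device,
                       extend_duration=align_extend,
                       start_from_previous=align_from_prev,
                       drop_non_aligned_words=drop_non_aligned)
# result_aligned["segments"]      -> refined utterance-level segments
# result_aligned["word_segments"] -> per-word {"text", "start", "end"} entries
```
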
## Other Languages

For non-English ASR, it is best to use the `large` whisper model.

### French

`whisperx examples/sample_fr_01.wav --model large --language fr --align_model VOXPOPULI_ASR_BASE_10K_FR --output_dir examples/whisperx`

https://user-images.githubusercontent.com/36994049/208298804-31c49d6f-6787-444e-a53f-e93c52706752.mov
@@ -58,8 +86,7 @@ https://user-images.githubusercontent.com/36994049/208298804-31c49d6f-6787-444e-

### German

`whisperx examples/sample_de_01.wav --model large --language de --align_model VOXPOPULI_ASR_BASE_10K_DE --output_dir examples/whisperx`

https://user-images.githubusercontent.com/36994049/208298811-e36002ba-3698-4731-97d4-0aebd07e0eb3.mov
@@ -68,7 +95,7 @@ https://user-images.githubusercontent.com/36994049/208298811-e36002ba-3698-4731-

### Italian

`whisperx examples/sample_it_01.wav --model large --language it --align_model VOXPOPULI_ASR_BASE_10K_IT --output_dir examples/whisperx`

@@ -78,17 +105,16 @@ https://user-images.githubusercontent.com/36994049/208298819-6f462b2c-8cae-4c54-

`whisperx --model large --language ja examples/sample_ja_01.wav --align_model jonatasgrosman/wav2vec2-large-xlsr-53-japanese --output_dir examples/whisperx --align_extend 2`

https://user-images.githubusercontent.com/19920981/208448405-60f80c0e-2715-42d8-9437-e19e6362b638.mov

<h2 align="left">Limitations ⚠️</h2>
|
<h2 align="left" id="limitations">Limitations ⚠️</h2>
|
||||||
|
|
||||||
- Not thoroughly tested, especially for non-english, results may vary -- please post issue to let me know its results on your data
|
- Not thoroughly tested, especially for non-english, results may vary -- please post issue to let me know the results on your data
|
||||||
- Whisper normalises spoken numbers e.g. "fifty seven" to arabic numerals "57". Need to perform this normalization after alignment, so the phonemes can be aligned. Currently just ignores numbers.
|
- Whisper normalises spoken numbers e.g. "fifty seven" to arabic numerals "57". Need to perform this normalization after alignment, so the phonemes can be aligned. Currently just ignores numbers.
|
||||||
- Assumes the initial whisper timestamps are accurate to some degree (within margin of 2 seconds, adjust if needed -- bigger margins more prone to alignment errors)
|
- Assumes the initial whisper timestamps are accurate to some degree (within margin of 2 seconds, adjust if needed -- bigger margins more prone to alignment errors)
|
||||||
- Hacked this up quite quickly, there might be some errors, please raise an issue if you encounter any.
|
- Hacked this up quite quickly, there might be some errors, please raise an issue if you encounter any.
|
||||||
|
|
||||||
<h2 align="left">Coming Soon 🗓</h2>
|
<h2 align="left" id="coming-soon">Coming Soon 🗓</h2>
|
||||||
|
|
||||||
[x] Multilingual init
|
[x] Multilingual init
|
||||||
|
|
||||||
@@ -96,24 +122,23 @@ https://user-images.githubusercontent.com/19920981/208448405-60f80c0e-2715-42d8-

[ ] Automatic align model selection based on language detection

[ ] Incorporating word-level speaker diarization

[ ] Inference speedup with batch processing

<h2 align="left">Contact</h2>
|
<h2 align="left" id="contact">Contact</h2>
|
||||||
|
|
||||||
Contact maxbain[at]robots[dot]ox[dot]ac[dot]uk if using this for commerical purposes.
|
Contact maxbain[at]robots[dot]ox[dot]ac[dot]uk if using this commerically.
|
||||||
|
|
||||||
|
|
||||||
<h2 align="left">Acknowledgements 🙏</h2>
|
<h2 align="left" id="acks">Acknowledgements 🙏</h2>
|
||||||
|
|
||||||
Of course, this is mostly just a modification to [openAI's whisper](https://github.com/openai/whisper).
|
Of course, this is mostly just a modification to [openAI's whisper](https://github.com/openai/whisper).
|
||||||
As well as accreditation to this [PyTorch tutorial on forced alignment](https://pytorch.org/tutorials/intermediate/forced_alignment_with_torchaudio_tutorial.html)
|
As well as accreditation to this [PyTorch tutorial on forced alignment](https://pytorch.org/tutorials/intermediate/forced_alignment_with_torchaudio_tutorial.html)
|
||||||
|
|
||||||
|
|
||||||
<h2 align="left">Citation</h2>
|
<h2 align="left" id="cite">Citation</h2>
|
||||||
If you use this in your research, just cite the repo,
|
If you use this in your research, just cite the repo,
|
||||||
|
|
||||||
```bibtex
|
```bibtex
|
||||||
|
@@ -431,194 +431,194 @@ green
case.

109
00:00:38,135 --> 00:00:38,255
Do

110
00:00:38,275 --> 00:00:38,355
you

111
00:00:38,375 --> 00:00:38,535
want

112
00:00:38,555 --> 00:00:38,736
your

113
00:00:38,876 --> 00:00:39,296
PJs?

114
00:00:39,879 --> 00:00:40,181
Yeah.

115
00:00:42,388 --> 00:00:42,689
Lifting

116
00:00:42,729 --> 00:00:42,749
a

117
00:00:42,809 --> 00:00:43,110
bundle

118
00:00:43,131 --> 00:00:43,191
of

119
00:00:43,251 --> 00:00:43,773
pajamas,

120
00:00:44,073 --> 00:00:44,314
Peter

121
00:00:44,374 --> 00:00:44,634
finds

122
00:00:44,674 --> 00:00:44,694
a

123
00:00:44,754 --> 00:00:44,955
sheet

124
00:00:44,995 --> 00:00:45,055
of

125
00:00:45,115 --> 00:00:45,456
paper

126
00:00:45,536 --> 00:00:45,876
labeled

127
00:00:46,338 --> 00:00:47,041
Lancaster

128
00:00:47,121 --> 00:00:47,382
North

129
00:00:47,442 --> 00:00:47,944
Hospital

130
00:00:48,266 --> 00:00:48,928
discharge

131
00:00:49,029 --> 00:00:49,249
sheet.

132
00:00:50,291 --> 00:00:50,371
He

133
00:00:50,412 --> 00:00:50,772
closes

134
00:00:50,812 --> 00:00:50,912
the

135
00:00:50,953 --> 00:00:51,393
suitcase

136
00:00:51,433 --> 00:00:51,514
and

137
00:00:51,534 --> 00:00:51,794
brings

138
00:00:51,834 --> 00:00:52,235
Gloria

139
00:00:52,255 --> 00:00:52,315
the

140
00:00:52,355 --> 00:00:52,856
pajamas.

141
00:00:54,186 --> 00:00:54,488
There

142
00:00:54,549 --> 00:00:54,771
you

143
00:00:54,791 --> 00:00:54,831
go.

144
00:00:55,654 --> 00:00:55,775
Thank

145
00:00:55,795 --> 00:00:55,895
you.

146
00:00:55,895 --> 00:00:55,936
He

147
00:00:55,956 --> 00:00:56,097
picks

148
00:00:56,117 --> 00:00:56,198
up

149
00:00:56,218 --> 00:00:56,319
the

150
00:00:56,359 --> 00:00:56,742
locket.

151
00:00:57,124 --> 00:00:57,225
You

152
00:00:57,265 --> 00:00:57,466
kept

153
00:00:57,547 --> 00:00:57,627
it.

154
00:00:58,874 --> 00:00:58,994
Oh,

155
00:00:59,276 --> 00:00:59,578
of

156
00:00:59,678 --> 00:00:59,960
course.

whisperx/transcribe.py

@@ -28,7 +28,7 @@ def transcribe(
    compression_ratio_threshold: Optional[float] = 2.4,
    logprob_threshold: Optional[float] = -1.0,
    no_speech_threshold: Optional[float] = 0.6,
    condition_on_previous_text: bool = False,  # turned off by default due to errors it causes
    **decode_options,
):
    """
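
Note the changed default above: `condition_on_previous_text` is now off. A caller who wants the original Whisper behaviour back can opt in per call; a minimal, hypothetical call against this signature (assuming `model` and `audio_path` are already prepared):

```python
# Opt back in explicitly; all other arguments keep the defaults shown above.
result = transcribe(model, audio_path, condition_on_previous_text=True)
```
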
@@ -258,6 +258,7 @@ def align(
    device: str,
    extend_duration: float = 0.0,
    start_from_previous: bool = True,
    drop_non_aligned_words: bool = False,
):
    print("Performing alignment...")
    if not torch.is_tensor(audio):
@@ -270,6 +271,7 @@ def align(
    MAX_DURATION = audio.shape[1] / SAMPLE_RATE

    prev_t2 = 0
    word_segments_list = []
    for idx, segment in enumerate(transcript):
        t1 = max(segment['start'] - extend_duration, 0)
        t2 = min(segment['end'] + extend_duration, MAX_DURATION)
@@ -319,8 +321,7 @@ def align(
            segment['end'] = t2_actual
            prev_t2 = segment['end']

            # for the .ass output
            for x in range(len(t_local)):
                curr_word = t_words[x]
                curr_timestamp = t_local[x]
@@ -329,15 +330,29 @@ def align(
                else:
                    segment['word-level'].append({"text": curr_word, "start": None, "end": None})

            # for per-word .srt output
            # merge missing words to previous, or merge with next word ahead if idx == 0
            for x in range(len(t_local)):
                curr_word = t_words[x]
                curr_timestamp = t_local[x]
                if curr_timestamp is not None:
                    word_segments_list.append({"text": curr_word, "start": curr_timestamp[0], "end": curr_timestamp[1]})
                elif not drop_non_aligned_words:
                    # then we merge the unaligned word into a neighbouring word
                    if x == 0:
                        t_words[x+1] = " ".join([curr_word, t_words[x+1]])
                    else:
                        word_segments_list[-1]['text'] += ' ' + curr_word
        else:
            # then we resort back to original whisper timestamps
            # segment['start'] and segment['end'] are unchanged
            prev_t2 = 0
            segment['word-level'].append({"text": segment['text'], "start": segment['start'], "end": segment['end']})
            word_segments_list.append({"text": segment['text'], "start": segment['start'], "end": segment['end']})

        print(f"[{format_timestamp(segment['start'])} --> {format_timestamp(segment['end'])}] {segment['text']}")
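
As a standalone illustration of the merge branch above (with `drop_non_aligned_words=False`), here is a toy run on hypothetical data where the first word has no aligned timestamps:

```python
# Toy illustration of the merge rule in align(): an unaligned first word is
# prepended to the next word; any later unaligned word is appended to the
# previous aligned entry.
t_words = ["Hmm", "okay", "then"]
t_local = [None, (1.0, 1.4), (1.5, 1.9)]  # hypothetical (start, end) times
word_segments_list = []
for x in range(len(t_words)):
    curr_word, curr_timestamp = t_words[x], t_local[x]
    if curr_timestamp is not None:
        word_segments_list.append({"text": curr_word, "start": curr_timestamp[0], "end": curr_timestamp[1]})
    else:
        if x == 0:
            t_words[x + 1] = " ".join([curr_word, t_words[x + 1]])
        else:
            word_segments_list[-1]['text'] += ' ' + curr_word
# word_segments_list == [{"text": "Hmm okay", "start": 1.0, "end": 1.4},
#                        {"text": "then", "start": 1.5, "end": 1.9}]
```
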
return {"segments": transcript}
|
return {"segments": transcript, "word_segments": word_segments_list}
|
||||||
|
|
||||||
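
Downstream code can consume the new per-word list directly. A small sketch, assuming `result_aligned` is the dict returned by `align()` above, and reusing the same `format_timestamp` helper this module already uses:

```python
# Each word segment is {"text": ..., "start": ..., "end": ...}, times in seconds.
for word in result_aligned["word_segments"]:
    print(f"[{format_timestamp(word['start'])} --> {format_timestamp(word['end'])}] {word['text']}")
```
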
def cli():
    from . import available_models
@@ -348,9 +363,10 @@ def cli():
    parser.add_argument("--model_dir", type=str, default=None, help="the path to save model files; uses ~/.cache/whisper by default")
    parser.add_argument("--device", default="cuda" if torch.cuda.is_available() else "cpu", help="device to use for PyTorch inference")
    # alignment params
    parser.add_argument("--align_model", default="WAV2VEC2_ASR_BASE_960H", help="Name of phoneme-level ASR model to do alignment")
    parser.add_argument("--align_extend", default=2, type=float, help="Seconds before and after to extend the whisper segments for alignment")
    parser.add_argument("--align_from_prev", default=True, type=bool, help="Whether to clip the alignment start time of current segment to the end time of the last aligned word of the previous segment")
    parser.add_argument("--drop_non_aligned", action="store_true", help="For word .srt, whether to drop non-aligned words, or merge them into neighbouring words.")

    parser.add_argument("--output_dir", "-o", type=str, default=".", help="directory to save the outputs")
    parser.add_argument("--output_type", default="srt", choices=['all', 'srt', 'vtt', 'txt'], help="file type of the saved outputs")
@@ -387,7 +403,7 @@ def cli():
    align_model: str = args.pop("align_model")
    align_extend: float = args.pop("align_extend")
    align_from_prev: bool = args.pop("align_from_prev")
    drop_non_aligned: bool = args.pop("drop_non_aligned")

    os.makedirs(output_dir, exist_ok=True)
@@ -421,12 +437,13 @@ def cli():
        labels = processor.tokenizer.get_vocab()
        align_dictionary = processor.tokenizer.get_vocab()
    else:
        print(f'Align model "{align_model}" is not supported, choose from:\n {torchaudio.pipelines.__all__ + wa2vec2_models_on_hugginface} \n\
            See details here https://pytorch.org/audio/stable/pipelines.html#id14')
        raise ValueError(f'Align model "{align_model}" not supported')

    for audio_path in args.pop("audio"):
        result = transcribe(model, audio_path, temperature=temperature, **args)
        result_aligned = align(result["segments"], result["language"], align_model, align_dictionary, audio_path, device,
                               extend_duration=align_extend, start_from_previous=align_from_prev, drop_non_aligned_words=drop_non_aligned)
        audio_basename = os.path.basename(audio_path)

        # save TXT
@@ -444,6 +461,10 @@ def cli():
        with open(os.path.join(output_dir, audio_basename + ".srt"), "w", encoding="utf-8") as srt:
            write_srt(result_aligned["segments"], file=srt)

        # save per-word SRT
        with open(os.path.join(output_dir, audio_basename + ".word.srt"), "w", encoding="utf-8") as srt:
            write_srt(result_aligned["word_segments"], file=srt)

        # save ASS
        with open(os.path.join(output_dir, audio_basename + ".ass"), "w", encoding="utf-8") as srt:
            write_ass(result_aligned["segments"], file=srt)