jhj0517 committed
Commit dc250eb · 1 Parent(s): 363657b

Update README.md

Files changed (1): README.md +8 -3
README.md CHANGED
@@ -7,6 +7,10 @@ A Gradio-based browser interface for [Whisper](https://github.com/openai/whisper
 If you wish to try this on Colab, you can do it [here](https://colab.research.google.com/github/jhj0517/Whisper-WebUI/blob/master/notebook/whisper-webui.ipynb)!
 
 # Feature
+- Select the Whisper implementation you want to use between :
+  - [openai/whisper](https://github.com/openai/whisper)
+  - [SYSTRAN/faster-whisper](https://github.com/SYSTRAN/faster-whisper) (used by default)
+  - [insanely-fast-whisper](https://github.com/Vaibhavs10/insanely-fast-whisper)
 - Generate subtitles from various sources, including :
   - Files
   - Youtube
@@ -42,7 +46,7 @@ After installing FFmpeg, **make sure to add the `FFmpeg/bin` folder to your syst
 
 You can also run the project with command-line arguments via `start-webui.bat`; see the [wiki](https://github.com/jhj0517/Whisper-WebUI/wiki/Command-Line-Arguments) for a guide to the arguments.
 
-- ## Or Run with Docker
+- ## Running with Docker
 
 1. Build the image
 
@@ -80,7 +84,8 @@ According to faster-whisper, the efficiency of the optimized whisper model is as
 | openai/whisper | fp16 | 5 | 4m30s | 11325MB | 9439MB |
 | faster-whisper | fp16 | 5 | 54s | 4755MB | 3244MB |
 
-If you want to use the original Open AI whisper implementation instead of optimized whisper, you can set the command line argument `--disable_faster_whisper` to `True`. See the [wiki](https://github.com/jhj0517/Whisper-WebUI/wiki/Command-Line-Arguments) for more information.
+If you want to use an implementation other than faster-whisper, pass the `--whisper_type` arg with the repository name.<br>
+Read the [wiki](https://github.com/jhj0517/Whisper-WebUI/wiki/Command-Line-Arguments) for more info about CLI args.
 
 ## Available models
 This is Whisper's original VRAM usage table for models.
@@ -101,7 +106,7 @@ This is Whisper's original VRAM usage table for models.
 - [x] Add DeepL API translation
 - [x] Add NLLB Model translation
 - [x] Integrate with faster-whisper
-- [ ] Integrate with insanely-fast-whisper
+- [x] Integrate with insanely-fast-whisper
 - [ ] Integrate with whisperX
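As a reference for the renamed Docker section, a minimal sketch of the build-and-run steps it points to could look like the following. The image tag `whisper-webui`, the GPU flag, and the port mapping are assumptions for illustration, not taken from the repository's actual Docker files:

```bash
# Build the image from the repository root (the tag name is an assumption)
docker build -t whisper-webui .

# Run it with GPU access, publishing Gradio's default port 7860
# (GPU flag and port mapping are assumptions, not the project's documented setup)
docker run --gpus all -p 7860:7860 whisper-webui
```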
 
 
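To make the new `--whisper_type` argument concrete, a hypothetical invocation is sketched below. It assumes `app.py` is the entry point and that the argument takes the implementation's repository name, as the edited paragraph suggests; the linked wiki remains the authoritative reference for the argument list:

```bash
# Hypothetical invocations; faster-whisper is the default, so the flag is
# only needed to switch implementations (the entry-point name is an assumption)
python app.py --whisper_type whisper
python app.py --whisper_type insanely-fast-whisper
```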
 