kotoba-whisper-v2.0-mlx

This repository contains kotoba-whisper-v2.0 converted to the mlx-whisper format, suitable for running on Apple Silicon. Because kotoba-whisper-v2.0 is derived from distil-large-v3, this model is significantly faster than mlx-community/whisper-large-v3-mlx without losing much accuracy on Japanese transcription.

Usage

pip install mlx-whisper

import mlx_whisper
mlx_whisper.transcribe(speech_file, path_or_hf_repo="kaiinui/kotoba-whisper-v2.0-mlx")
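A fuller sketch of how the result might be used: `mlx_whisper.transcribe` returns a Whisper-style dict with the full `text` plus `segments` carrying `start`/`end` timestamps. The helper below (a hypothetical name, not part of mlx-whisper) formats those segments as timestamped lines; the actual transcribe call is shown commented, since it requires Apple Silicon and a model download, and `audio.wav` is a placeholder path.

```python
# Hypothetical helper: format Whisper-style segments as timestamped lines.
def format_segments(result):
    lines = []
    for seg in result["segments"]:
        lines.append(f"[{seg['start']:.2f} -> {seg['end']:.2f}] {seg['text'].strip()}")
    return "\n".join(lines)

# Usage with mlx-whisper (requires Apple Silicon and the model download):
#   import mlx_whisper
#   result = mlx_whisper.transcribe(
#       "audio.wav",  # placeholder path to your own audio file
#       path_or_hf_repo="kaiinui/kotoba-whisper-v2.0-mlx",
#   )
#   print(format_segments(result))

# A stand-in result illustrating the expected output shape:
example = {
    "text": "こんにちは 世界",
    "segments": [
        {"start": 0.0, "end": 1.2, "text": "こんにちは"},
        {"start": 1.2, "end": 2.0, "text": "世界"},
    ],
}
print(format_segments(example))
```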

