float32 weights, pretty please!
Just like with the release of the Whisper v3 model, when I asked for the float32 weights, can you please upload them for this "turbo" model? Some backends convert a model to various precisions (e.g. float16/bfloat16/int8, etc.), and even though the loss might be "minimal," converting from float16 to bfloat16, for example, does lose quality compared to converting from float32 to bfloat16.
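To make that concrete, here's a small, hypothetical PyTorch sketch (using a random tensor as a stand-in, not the actual Whisper weights) showing that float32 → bfloat16 and float32 → float16 → bfloat16 don't always land on the same values:

```python
import torch

torch.manual_seed(0)
w = torch.randn(1_000_000, dtype=torch.float32)  # stand-in for a weight tensor

direct = w.to(torch.bfloat16)                       # float32 -> bfloat16
via_fp16 = w.to(torch.float16).to(torch.bfloat16)   # float32 -> float16 -> bfloat16

# float16 rounds the mantissa (and limits the exponent range) before the final
# bfloat16 rounding, so the two paths can disagree on some elements.
diff = (direct.float() - via_fp16.float()).abs()
print(f"elements that differ: {(diff > 0).float().mean().item():.4%}")
print(f"max abs difference:   {diff.max().item():.3e}")
```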
If the model was never in float32 for some reason, please let me know, but that would be surprising; in the past there has always been an original float32 version, apart from the question of whether it was actually posted or not.
Overall, looking forward to testing this out!!!
Hi, I found that there's a float32 model in the initial release of Whisper large-v3-turbo; maybe this will help.
Change list: https://huggingface.co./openai/whisper-large-v3-turbo/commit/57d207fdeb9f44fffde0a0fd30f2cd792df93de5
Initial model files: https://huggingface.co./openai/whisper-large-v3-turbo/tree/f1baaf0c070fd03fc67d773bebeff75023422b6d
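In case it's useful, here's a minimal sketch (an assumption on my part, not an official recipe) of pinning transformers to that initial revision so the weights are loaded from those files without any downcasting; the commit hash is the one from the link above:

```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

revision = "f1baaf0c070fd03fc67d773bebeff75023422b6d"  # initial turbo upload

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-large-v3-turbo", revision=revision
)
model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v3-turbo",
    revision=revision,
    torch_dtype=torch.float32,  # keep the weights in full precision
)
print(next(model.parameters()).dtype)  # expect torch.float32
```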