I notice that the base model Qwen2-7B-instruct is in bfloat16, but this model is stored in float32. What is the impact on precision if we load it in bfloat16?
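For reference, here is the kind of loading I mean, a minimal sketch assuming a standard transformers setup (the repo id below is a placeholder, not this model's actual id):

```python
import torch
from transformers import AutoModelForCausalLM

# Placeholder repo id for illustration; substitute this model's actual id.
# Passing torch_dtype=torch.bfloat16 casts the float32 weights down to
# bfloat16 at load time, halving memory but dropping mantissa precision.
model = AutoModelForCausalLM.from_pretrained(
    "org/model-id-here",
    torch_dtype=torch.bfloat16,
)
```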