- cuDNN error: CUDNN_STATUS_INTERNAL_ERROR (#19, opened 1 day ago by d3vnu77)
- Where is the gguf format? (#18, opened 1 day ago by RameshRajamani · 1 reply)
- how many languages supported? (#16, opened 5 days ago by xingwang1234 · 2 replies)
- i am trying hf to gguf but there is no config (#15, opened 7 days ago by Batubatu · 2 replies)
- Updated README.md (#13, opened 9 days ago by drocks · 1 reply)
- Updated README.md (#12, opened 10 days ago by riaz)
- Use local image and quantise the model for low Gpu usage with solution (#11, opened 10 days ago by faizan4458 · 3 replies)
- Fine-tuning (#10, opened 10 days ago by yukiarimo · 2 replies)
- Quantized Versions? (#9, opened 10 days ago by StopLockingDarkmode · 20 replies)
- Help (#8, opened 10 days ago by satvikahuja · 1 reply)
- Fix llm chat function call in README (#7, opened 10 days ago by ananddtyagi)
- Passing local images to chat (workaround). (#6, opened 10 days ago by averoo · 1 reply)
- MLX / MPS users out of luck and can't use this model with VLLM (#4, opened 11 days ago by kronosprime · 1 reply)
- Update README.md (#3, opened 11 days ago by pranay-ar)