Munish Kumar (munish0838)
AI & ML interests
LLM Quantizations
Recent Activity
liked a model about 15 hours ago: QuantFactory/DeepSeek-R1-Distill-Qwen-7B-GGUF
published a model about 15 hours ago: QuantFactory/DeepSeek-R1-Distill-Qwen-7B-GGUF
updated a model about 20 hours ago: QuantFactory/DeepSeek-R1-Distill-Qwen-7B-GGUF
munish0838's activity
GGUF · 3 · #1 opened about 1 month ago by amogusgaysex
Heads up: this isn't the new ministral 3b · 2 · #1 opened 5 months ago by bartowski
Requesting Re-Quant. Tokenizer Updated with better chatml Support · 1 · #1 opened 5 months ago by Luni
Update app.py · #2 opened 7 months ago by munish0838
Error loading model in llama.cpp? · 8 · #1 opened 8 months ago by ubergarm
Add paper and citation · #1 opened 8 months ago by maximegmd
Adding `safetensors` variant of this model · #1 opened 8 months ago by SFconvertbot
Adding `safetensors` variant of this model · #1 opened 9 months ago by SFconvertbot
Adding `safetensors` variant of this model · #1 opened 9 months ago by SFconvertbot
Adding `safetensors` variant of this model · #1 opened 9 months ago by SFconvertbot
Tokenizer files missing · 4 · #2 opened 9 months ago by munish0838
Unable to use model · 2 · #1 opened 9 months ago by munish0838
What am I doing wrong? Using Oobabooga. · 3 · #1 opened 11 months ago by Goldenblood56
does not appear to have a file named config.json · 2 · #2 opened 11 months ago by atubong
Compatibility with llama-cpp and Ollama · 6 · #17 opened 10 months ago by liashchynskyi
Is the original model allganize/Llama-3-Alpha-Ko-8B-Instruct? · 3 · #1 opened 9 months ago by coconut00
error when loading the model · 5 · #3 opened 10 months ago by StefanStroescu
How to convert to HF format? · 5 · #6 opened 9 months ago by ddh0
Rename configuration_internlm.py to configuration_internlm2.py · #1 opened 10 months ago by munish0838
Rename configuration_internlm.py to configuration_internlm2.py · #1 opened 10 months ago by munish0838