momonga PRO
mmnga
AI & ML interests: None yet
Recent Activity
Updated a model 5 minutes ago: mmnga/ABEJA-Qwen2.5-32b-Japanese-v0.1-gguf
Updated a model 39 minutes ago: mmnga/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf
Updated a model about 4 hours ago: mmnga/AXCXEPT-phi-4-open-R1-Distill-EZOv1-gguf
mmnga's activity
Fix (1)
#1 opened 20 days ago by STATIKwitak
Would it be possible to have an 8bit gguf? (2)
#1 opened 6 months ago by PurityWolf
Please use split ggufs instead of splitting files manually (1)
#1 opened 6 months ago by lmg-anon
Usage in the model card seems to be ChatML format. (1)
#1 opened 6 months ago by yamikumods
Error in LM Studio (3)
#1 opened 8 months ago by alfredplpl
Update tokenization_arcade100k.py
#1 opened 9 months ago by mmnga
Please tell me how you converted this FAST model into a gguf file. (7)
#1 opened 10 months ago by wattai
Update config.json (1)
#3 opened 10 months ago by mmnga
Differences in output from the original model (2)
#1 opened about 1 year ago by nitky
Librarian Bot: Add moe tag to model
#3 opened about 1 year ago by librarian-bot
Librarian Bot: Add moe tag to model
#1 opened about 1 year ago by librarian-bot
Librarian Bot: Add moe tag to model
#1 opened about 1 year ago by librarian-bot
Maybe a slerp or some other merge method will preserve the component experts better? (3)
#2 opened about 1 year ago by BlueNipples
Responses somewhat related to the prompt but still gibberish (2)
#1 opened about 1 year ago by JeroenAdam
Migrating to Colab A100 because Triton support has been dropped (2)
#2 opened about 1 year ago by alfredplpl
Quantization with float16 instead of bfloat16 (2)
#1 opened about 1 year ago by alfredplpl
Missing tokenizer.model (4)
#1 opened over 1 year ago by mmnga
Is this related to GPT-Neo-2.7B-AID? (1)
#1 opened over 1 year ago by adriey