mobicham
AI & ML interests
Model pruning, quantization, computer vision, LLMs
Recent Activity
updated a model 27 days ago: mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1_4bitgs64_hqq_hf
liked a model 27 days ago: mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1_4bitgs64_hqq_hf
published a model 27 days ago: mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1_4bitgs64_hqq_hf
mobicham's activity
Open source eval metrics and library? · 1 · #1 opened 29 days ago by er1k0
No model file found in repo 'mobiuslabsgmbh/DeepSeek-R1-ReDistill-Llama3-8B-v1.1' · 5 · #1 opened about 1 month ago by SalomonMejia01

It's bad, sorry. · 1 · #2 opened about 1 month ago by MrDevolver
Will there be a 32b and 70b too? · 1 · #1 opened about 1 month ago by AlgorithmicKing

Open Source Distillation Approach? · 1 · #1 opened about 1 month ago by nbroad


New activity in mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-attn-4bit-moe-2bitgs8-metaoffload-HQQ (about 1 month ago)
The code from the model card has errors when executing on Google Colab · 6 · #1 opened 7 months ago by vasilee

New activity in open-llm-leaderboard/deepseek-ai__DeepSeek-R1-Distill-Qwen-1.5B-details (about 2 months ago)
Benchmark numbers are too low and don't match the numbers run locally · #1 opened about 2 months ago by mobicham

CPU support · 1 · #2 opened 6 months ago by Essa20001
Decensored version? · 5 · #1 opened 7 months ago by KnutJaegersberg

Oobabooga? · 1 · #1 opened 7 months ago by AIGUYCONTENT

QUANTIZED VERSION GGUF · 1 · #5 opened 8 months ago by ar08

GSM8K (5-shot) performance is quite different compared to running lm_eval locally · 5 · #755 opened 10 months ago by mobicham

Details about this model · 1 · #4 opened 10 months ago by at676
Make it usable for CPU · 2 · #3 opened 10 months ago by ar08

Error with adapter? · 3 · #2 opened 11 months ago by nelkh
Any plan for making HQQ+ 2bit quant for Mixtral or larger models? · 1 · #1 opened 11 months ago by raincandy-u
