Quan Nguyen PRO
qnguyen3
AI & ML interests: None yet
Recent Activity
liked a model 12 days ago: convergence-ai/proxy-lite-3b
published a model about 1 month ago: arcee-train/evolkit-openhermes-100k
updated a model about 1 month ago: arcee-train/evolkit-openhermes-100k
qnguyen3's activity
VSCODE + Cline + Ollama + Qwen2.5-Coder-32B-Instruct.Q8_0
3 · #20 opened 4 months ago by BigDeeper

Adding Evaluation Results
#2 opened 5 months ago by leaderboard-pr-bot

Open LLM Leaderboard results
1 · #3 opened 5 months ago by SaisExperiments

thank you for making quants
1 · #1 opened 5 months ago by qnguyen3

Evaluate output results
1 · #3 opened 5 months ago by Quy1004

Why dataset tag?
7 · #1 opened 6 months ago by rombodawg

Transformers doesn't support it yet?
6 · #2 opened 8 months ago by mahiatlinux

Missing configuration_llava_qwen2.py and configuration_llava_qwen2.py ??
1 · #1 opened 8 months ago by nicolollo

Handling `flash_attn` Dependency for Non-GPU Environments
20 · #4 opened 8 months ago by giacomopedemonte

This model is amazing!
3 · #1 opened 8 months ago by nicolollo

Leaderboard
1 · #6 opened 8 months ago by Stark2008

ONNX Conversion Tutorial
3 · #3 opened 10 months ago by qnguyen3

Multi-round conversation w/ PKV cache example code
4 · #5 opened 10 months ago by Xenova

vilm/VinaLlama2-14B-arxiv vs vilm/VinaLlama2-14B
1 · #1 opened 10 months ago by anhnh2002

Approach to reduce hallucination
8 · #1 opened 10 months ago by LoneRanger44

Gặp vấn đề khi finetune (Problems when fine-tuning)
2 · #2 opened about 1 year ago by 104-wonohfor

Run on Macbook without flash_attn?
2 · #1 opened 11 months ago by palebluewanders

Safetensor version
2 · #3 opened 11 months ago by anhnh2002