Wolfram Ravenwolf (wolfram)

AI & ML interests: Local LLMs
Recent Activity

- Upvoted an article about 1 month ago: "Welcome to Inference Providers on the Hub 🔥"
- Liked a model about 2 months ago: deepseek-ai/DeepSeek-R1
- Liked a model about 2 months ago: openbmb/MiniCPM-o-2_6
wolfram's activity
- [Support] Community Articles (82) · #5 opened 12 months ago by victor
- The tokenizer has changed just fyi (12) · #2 opened 8 months ago by bullerwins
- no system message? (8) · #14 opened 10 months ago by mclassHF2023
- tokenizer.model and tokenizer.model.v3 are identical · #15 opened 10 months ago by wolfram
- Concerns regarding Prompt Format (6) · #1 opened 10 months ago by wolfram
- Strange observation: model becomes super horny in ST's MinP mode (5) · #7 opened 12 months ago by deleted
- Upload folder using huggingface_hub (2) · #3 opened 12 months ago by wolfram
- VRAM Estimates (6) · #3 opened about 1 year ago by ernestr
- Merge method (1) · #4 opened about 1 year ago by dnhkng
- Can't wait to test (5) · #4 opened about 1 year ago by froggeric
- Kindly asking for quants (7) · #2 opened about 1 year ago by wolfram
- Update README.md · #1 opened about 1 year ago by wolfram
- Update README.md · #1 opened about 1 year ago by wolfram
- Update README.md · #1 opened about 1 year ago by wolfram
- GPTQ / AWQ (1) · #2 opened about 1 year ago by agahebr
- Guidance on GPU VRAM Split? (5) · #3 opened about 1 year ago by nmitchko
- Upload folder using huggingface_hub (2) · #1 opened about 1 year ago by wolfram
- Very interesting that miqu will give 16k context work even only first layer and last layer (15) · #2 opened about 1 year ago by akoyaki
- iMatrix, IQ2_XS & IQ2_XXS (13) · #2 opened about 1 year ago by Nexesenex
- Performance (13) · #2 opened about 1 year ago by KnutJaegersberg