- Seems like the user prompt is ignored (2 replies) · #80 opened 6 days ago by jlmeunier
- Prompting model for OCR (2 replies) · #79 opened 2 months ago by EugeneSel
- Using LoRA for idefics-8b-chatty finetuning with two RTX 4080 32G: gather_map error · #78 opened 3 months ago by shuminzhou26803586
- Constrain output to HTML format · #73 opened 6 months ago by MH1P
- Few-shot in-context learning (3 replies) · #72 opened 6 months ago by andrewliao11-nv
- LoRA training OOM with 2x NVIDIA RTX A6000 (2x48GB) (6 replies) · #71 opened 6 months ago by ayyylemao
- Issue with inference after fine-tuning idefics2 using LoRA (1 reply) · #70 opened 7 months ago by jxue005
- Mixing text-only data into fine-tuning (4 replies) · #68 opened 7 months ago by bilibraker
- Update README.md · #63 opened 7 months ago by SalmanFaroz
- How to modify the weights of the LLM section (2 replies) · #62 opened 7 months ago by cookey39
- Reproducing idefics-8b (instruct) (1 reply) · #61 opened 7 months ago by Iheb-Chaabane
- Bug in attention mask (1 reply) · #58 opened 7 months ago by lucasjin
- How is the image resolution expanded in a vision encoder? (2 replies) · #57 opened 7 months ago by efei
- Pretraining deduplication of data to prevent data leakage? (1 reply) · #55 opened 7 months ago by SS12444
- Idefics2 pretraining (4 replies) · #54 opened 7 months ago by orrzohar
- Add idefics2-8b to HuggingChat (3 replies) · #53 opened 7 months ago by wangdafa
- shape mismatch: value tensor of shape [2320] cannot be broadcast to indexing result of shape [2262] (6 replies) · #52 opened 7 months ago by yeargun
- Problem using the model after fine-tuning (4 replies) · #50 opened 7 months ago by SalmanFaroz
- Bounding boxes in the pre-training data and pre-training tasks (1 reply) · #49 opened 8 months ago by bilibraker
- How does the attention_mask contribute to the projector performance? (12 replies) · #45 opened 8 months ago by lucasjin
- Getting idefics2 into GGUF format for use with llama.cpp and/or ollama? (2 replies) · #43 opened 8 months ago by PaulCapestany
- Large value difference when comparing hidden_states with flash attention ON and OFF · #42 opened 8 months ago by Ye27
- Fine-tuning script: QLoRA w/ Flash Attn fails (2 replies) · #41 opened 8 months ago by RonanMcGovern
- Use in pipelines? (1 reply) · #40 opened 8 months ago by harpreetsahota
- Setting compute_metrics in Trainer() leads to AttributeError (3 replies) · #38 opened 8 months ago by Eyel
- [AUTOMATED] Model Memory Requirements · #33 opened 8 months ago by model-sizer-bot
- Dedicated Inference Endpoints for Idefics2-8b (8 replies) · #32 opened 8 months ago by zesquirrelnator
- How can I deploy idefics2-8b with TensorRT + Triton? (9 replies) · #31 opened 8 months ago by catworld1212
- Multi-GPU fine-tuning (21 replies) · #30 opened 8 months ago by matbee
- Model is incompatible with Inference Endpoints (2 replies) · #23 opened 8 months ago by sebbyjp
- CUDA OOM from a simple forward pass on an A6000 (48GB VRAM) (10 replies) · #11 opened 8 months ago by starzmustdie
- CUDA out of memory on A100 with 40GB (7 replies) · #8 opened 8 months ago by SkalskiP
- Error running idefics2-8b-AWQ (23 replies) · #7 opened 8 months ago by oliverguhr