Gated Repos Permission (1) · #78 opened 7 days ago by aymenmir
google/gemma-2b-kamba-instruct · #77 opened about 1 month ago by FreedElias
Update README.md · #75 opened 3 months ago by LCG123
Request: DOI (1) · #74 opened 3 months ago by Jonasbukhave
Interview request: genAI evaluation & documentation · #73 opened 4 months ago by meggymuggy
Supported Languages (1) · #72 opened 4 months ago by vpkprasanna
Upload config (3).json · #70 opened 4 months ago by Malekhmem
Randomness of the output of the trained model (1) · #68 opened 6 months ago by Sam1989
Can't reproduce HellaSwag result: getting 42.3% vs. 71.4% reported (1) · #67 opened 6 months ago by robgarct
Has anyone used the script in the Model Card for inference? (3) · #64 opened 6 months ago by disper84
403 Forbidden: Authorization error (6) · #62 opened 7 months ago by parkerbotta
Memory requirements to load the model (1) · #61 opened 7 months ago by nroshania
Following the blog for fine-tuning gemma-2b doesn't yield the same results (11) · #60 opened 7 months ago by chongdashu
[AUTOMATED] Model Memory Requirements · #59 opened 8 months ago by model-sizer-bot
[AUTOMATED] Model Memory Requirements · #58 opened 8 months ago by model-sizer-bot
[AUTOMATED] Model Memory Requirements · #57 opened 8 months ago by model-sizer-bot
[AUTOMATED] Model Memory Requirements · #56 opened 8 months ago by model-sizer-bot
[AUTOMATED] Model Memory Requirements · #55 opened 8 months ago by model-sizer-bot
Running finetuned inference on CPU: accelerate ImportError (1) · #54 opened 8 months ago by saikrishna6491
Unable to reproduce the gemma_2b pass@1 score on HumanEval (3) · #53 opened 8 months ago by ChiYuqi
Feature extraction suitability? (1) · #52 opened 8 months ago by ivoras
Update README.md · #51 opened 9 months ago by raj729
gemma-2b Inference Endpoints error (4) · #46 opened 9 months ago by gawon16
gemma-2b with multi-GPU (3) · #44 opened 9 months ago by Iamexperimenting
Pretraining Gemma on a domain dataset (8) · #41 opened 9 months ago by Iamexperimenting
[AUTOMATED] Model Memory Requirements · #40 opened 10 months ago by model-sizer-bot
Gemma tokenizer issue (1) · #37 opened 10 months ago by Akshayextreme
Question about the name: why is it 2b? (2) · #36 opened 10 months ago by sh0416
[AUTOMATED] Model Memory Requirements · #35 opened 10 months ago by model-sizer-bot
[AUTOMATED] Model Memory Requirements · #34 opened 10 months ago by model-sizer-bot
What is the context size for Gemma? Asking for it in the config file raises AttributeError("'GemmaConfig' object has no attribute 'context_length'") (3) · #32 opened 10 months ago by brando
torch import required in examples · #31 opened 10 months ago by shamikbose89
ImportError: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes (14) · #29 opened 10 months ago by WQW
torch.cuda.OutOfMemoryError (4) · #26 opened 10 months ago by shiwanglai
GPU utilisation high on Gemma-2b-it · #24 opened 10 months ago by sharad07
Sentiment analysis (2) · #23 opened 10 months ago by PTsag
Note on adding new elements to the vocabulary (2) · #21 opened 10 months ago by johnhew
Has anyone used this with Chat with RTX yet? (2) · #20 opened 10 months ago by TheMildEngineer
Update README.md (1) · #19 opened 10 months ago by shamikbose89
Strange and limited response (3) · #15 opened 10 months ago by Squeack
Weird token in the tokenizer? (7) · #13 opened 10 months ago by Lambent