---
language:
- code
license: bigcode-openrail-m
datasets:
- bigcode/the-stack-dedup
- Vipitis/Shadertoys
pipeline_tag: text-generation
tags:
- code
- shader
base_model: bigcode/santacoder
widget:
- text: void mainImage( out vec4 fragColor, in vec2 fragCoord )
  example_title: mainImage
  group: Shadertoy
model-index:
- name: santacoder-finetuned-the-stack-glsl
  results:
  - task:
      type: text-generation
      name: ShaderEval
    dataset:
      type: Vipitis/Shadertoys-fine
      name: Shadertoys-fine
      config: return_completion
      revision: 0.0.2
    metrics:
      - type: exact_match
        value: 0.550
        name: 300 samples, greedy decoding
        verified: false
---

[Santacoder](https://huggingface.co./bigcode/santacoder) finetuned on [Shadertoys](https://huggingface.co./datasets/Vipitis/Shadertoys) for 1000 steps with a batch size of 2 and a full sequence length of 2048.
The adapted finetuning script can be found [here](./train.py).

Try the model in the [ShaderCoder](https://huggingface.co./spaces/Vipitis/ShaderCoder) demo space.
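
A minimal usage sketch with 🤗 Transformers (the checkpoint id below is assumed from this repository's name; Santacoder-based models require `trust_remote_code=True`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id for this finetune; adjust if your copy lives elsewhere.
checkpoint = "Vipitis/santacoder-finetuned-the-stack-glsl"
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

# Same prompt as the widget example above.
prompt = "void mainImage( out vec4 fragColor, in vec2 fragCoord )"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)  # greedy decoding by default
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```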

### Finetuning parameters
```sh
python3 train.py --model_path "bigcode/santacoder" \
--dataset_name "Vipitis/Shadertoys" \
--data_column "code" \
--split "train" \
--seq_length 2048 \
--max_steps 1000 \
--batch_size 2 \
--gradient_accumulation_steps 4 \
--learning_rate 5e-5 \
--num_warmup_steps 100 \
--eval_freq 100 \
--save_freq 100 \
--log_freq 1 \
--output_dir "checkpoint_dir" \
--no_fp16
```
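
Note that with `--batch_size 2` and `--gradient_accumulation_steps 4`, the effective batch size is 2 × 4 = 8 sequences per optimizer step.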

The main purpose of this model is to explore whether finetuning improves performance on [ShaderEval](https://huggingface.co./spaces/Vipitis/ShaderEval); this checkpoint reaches an exact match of 0.550 on 300 samples with greedy decoding.
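
For context, ShaderEval's `return_completion` task scores exact matches between greedy completions and reference return statements. A minimal sketch of that scoring step with the 🤗 `evaluate` library (the `predictions`/`references` lists below are toy stand-ins, not real outputs; the actual harness lives in the ShaderEval space):

```python
import evaluate

# Toy stand-ins: in ShaderEval these come from greedy model completions
# and the held-out return statements of Shadertoys-fine.
predictions = ["return vec3(1.0);", "return uv * 2.0 - 1.0;"]
references  = ["return vec3(1.0);", "return uv;"]

exact_match = evaluate.load("exact_match")
result = exact_match.compute(predictions=predictions, references=references)
print(result)  # {'exact_match': 0.5} — one of two completions matches exactly
```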

### Disclaimer

While the train/test split is held out, there is a lot of data contamination, so the model's results on this simple benchmark can't be trusted.
Better tasks for the benchmark will be developed and tested against these models.

The license is carried over from the base model; the training data, however, has an undefined license. See the [Shadertoys](https://huggingface.co./datasets/Vipitis/Shadertoys) dataset card for details.