regisss (HF staff) and astachowicz committed
Commit 1b31e6d
1 parent: 8f1f2c6

Update README.md (#3)


- Update README.md (7c4eb899a6b9a03b48793d82b551de8eb2c64116)
- Update README.md (b1a44217b69169dc854d10f08e7321abbd7d2ede)


Co-authored-by: Adam Stachowicz <[email protected]>

Files changed (1): README.md (+4 -2)
README.md CHANGED
@@ -24,7 +24,7 @@ The only difference is that there are a few new training arguments specific to H

  [Here](https://github.com/huggingface/optimum-habana/blob/main/examples/question-answering/run_qa.py) is a question-answering example script to fine-tune a model on SQuAD. You can run it with RoBERTa Large with the following command:
  ```bash
- python run_qa.py \
+ PT_HPU_LAZY_MODE=0 python run_qa.py \
  --model_name_or_path roberta-large \
  --gaudi_config_name Habana/roberta-large \
  --dataset_name squad \
@@ -37,7 +37,9 @@ python run_qa.py \
  --max_seq_length 384 \
  --output_dir /tmp/squad/ \
  --use_habana \
- --use_lazy_mode \
+ --torch_compile_backend hpu_backend \
+ --torch_compile \
+ --use_lazy_mode false \
  --throughput_warmup_steps 3 \
  --bf16
  ```
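Taken together, the two hunks switch the example from lazy mode to `torch.compile`: the new command sets `PT_HPU_LAZY_MODE=0` in the environment, passes `--use_lazy_mode false`, and enables `--torch_compile` with the `hpu_backend` backend. As a sketch, the updated invocation assembled from the lines visible in the diff would read as follows — note that the arguments between the two hunks (diff lines 31–36) are not shown in the diff and are omitted here as well; the README's original values for those must be kept:

```shell
# Assembled from the diff's visible lines only; intermediate
# arguments elided by the diff context are omitted here.
PT_HPU_LAZY_MODE=0 python run_qa.py \
  --model_name_or_path roberta-large \
  --gaudi_config_name Habana/roberta-large \
  --dataset_name squad \
  --max_seq_length 384 \
  --output_dir /tmp/squad/ \
  --use_habana \
  --torch_compile_backend hpu_backend \
  --torch_compile \
  --use_lazy_mode false \
  --throughput_warmup_steps 3 \
  --bf16
```

Running this requires a Habana Gaudi device with `optimum-habana` installed; it is shown here only to make the net effect of the change explicit.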