lxmert-vsr-random / tokenizer_config.json
juletxara: add model and tokenizer (commit 256d87e)
{
  "do_lower_case": true,
  "model_max_length": 512,
  "special_tokens_map_file": "/home/eltoto/unc-nlp/special_tokens_map.json",
  "full_tokenizer_file": null
}
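A minimal sketch of how these fields are typically consumed when the tokenizer is instantiated. The field names and values come from the file above; the parsing function and its comments are illustrative, not the actual `transformers` loading code (in practice, `AutoTokenizer.from_pretrained` reads this file and applies `do_lower_case` and `model_max_length` automatically):

```python
import json

# The tokenizer configuration exactly as stored in this repo.
RAW_CONFIG = (
    '{"do_lower_case": true, "model_max_length": 512, '
    '"special_tokens_map_file": "/home/eltoto/unc-nlp/special_tokens_map.json", '
    '"full_tokenizer_file": null}'
)

def load_tokenizer_config(raw: str) -> dict:
    """Parse the JSON config into the settings a tokenizer would use."""
    cfg = json.loads(raw)
    return {
        # Lowercase input text before wordpiece tokenization (BERT-style vocab).
        "lowercase": cfg["do_lower_case"],
        # Hard cap on sequence length; longer inputs must be truncated.
        "max_length": cfg["model_max_length"],
        # Absolute path recorded on the machine where the tokenizer was saved;
        # loaders fall back to the repo's own special_tokens_map.json.
        "special_tokens_map_file": cfg["special_tokens_map_file"],
    }

if __name__ == "__main__":
    settings = load_tokenizer_config(RAW_CONFIG)
    print(settings["lowercase"], settings["max_length"])
```

Note that `special_tokens_map_file` is a local path from the uploader's environment; it is kept here verbatim because loaders resolve special tokens from the repo files instead.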