Update generation_config.json (#3), opened by alugowski
Pull in the upstream second stop token. Fixes an issue where inference does not stop.
See upstream: https://huggingface.co./meta-llama/Meta-Llama-3-70B-Instruct/blob/main/generation_config.json
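A minimal sketch of the change, assuming the token IDs in the linked upstream config: Llama 3 Instruct uses two stop tokens, `<|end_of_text|>` (128001) and `<|eot_id|>` (128009), so `eos_token_id` becomes a list rather than a single ID, and generation halts on either token.

```python
import json

# Hedged sketch of the relevant generation_config.json fields; the token IDs
# are taken from the upstream Meta-Llama-3-70B-Instruct config linked above.
generation_config = {
    "bos_token_id": 128000,
    # List form: decoding stops when EITHER token is produced.
    # Without 128009 (<|eot_id|>), chat-formatted inference may never stop.
    "eos_token_id": [128001, 128009],
}

print(json.dumps(generation_config, indent=2))
```

Inference libraries that read `generation_config.json` (e.g. `transformers`) accept either an int or a list for `eos_token_id`, which is why adding the second token is a config-only fix.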
casperhansen changed pull request status to closed