---
language:
- en
---
PyTorch int8 quantized version of gpt2-large.
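
The card does not state how the checkpoint was produced; one plausible recipe is PyTorch dynamic quantization. Below is a minimal sketch under that assumption, using the stock `transformers` gpt2-large weights. Note that the Hugging Face GPT-2 implementation uses `Conv1D` rather than `nn.Linear` for most projections, so an actual recipe may need to remap those layers as well.

```python
import torch
from transformers import GPT2LMHeadModel

# Hypothetical recipe: dynamically quantize nn.Linear layers to int8 and
# save the whole module object as a .bin file.
model = GPT2LMHeadModel.from_pretrained("gpt2-large")

# Only nn.Linear modules are covered by this spec; GPT-2's Conv1D projection
# layers would need extra handling to be quantized too.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

torch.save(quantized, "pytorch_model_quantized.bin")
```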
## Usage
Download the `.bin` file locally, then load it with:
```python
import torch

# Loads the full quantized model object (the .bin is a pickled module, not a
# plain state_dict). On newer PyTorch releases, torch.load defaults to
# weights_only=True, so you may need to pass weights_only=False here.
model = torch.load("path/to/pytorch_model_quantized.bin")
```

The rest of the usage follows the original gpt2-large instructions.
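
As an illustration, here is a hedged end-to-end sketch, assuming the loaded object behaves like a standard `GPT2LMHeadModel` and that the original gpt2-large tokenizer is compatible with this quantized checkpoint:

```python
import torch
from transformers import GPT2Tokenizer

# Assumption: the quantized model exposes the usual generate() API and pairs
# with the original gpt2-large tokenizer.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")
model = torch.load("path/to/pytorch_model_quantized.bin")
model.eval()

inputs = tokenizer("Hello, my name is", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=20)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```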