This is the Hugging Face Transformers (HF) version of sentence-transformers/gtr-t5-base, which was originally released in SentenceTransformer format.

The model architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import T5EncoderModel, T5PreTrainedModel


class GTR(T5PreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.t5_encoder = T5EncoderModel(config)
        # GTR projects the pooled embedding through a bias-free linear head
        self.embeddingHead = nn.Linear(config.hidden_size, config.hidden_size, bias=False)
        self.activation = torch.nn.Identity()
        self.model_parallel = False

    def pooling(self, token_embeddings, attention_mask):
        # Mean-pool over the token dimension, counting only non-padding tokens
        input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
        sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
        sum_mask = input_mask_expanded.sum(1)
        sum_mask = torch.clamp(sum_mask, min=1e-9)  # avoid division by zero
        return sum_embeddings / sum_mask

    def forward(self, input_ids, attention_mask):
        # Encode, mean-pool, project, and L2-normalize
        output = self.t5_encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        output = self.pooling(output, attention_mask)
        output = self.activation(self.embeddingHead(output))
        output = F.normalize(output, p=2, dim=1)
        return output
```
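The pooling step is a masked mean: padding positions are zeroed out before summing, and the sum is divided by the count of real tokens (clamped to avoid division by zero). A toy illustration with made-up tensors:

```python
import torch

# Batch of 2 sequences, 2 tokens each, embedding dim 2;
# the second token of sequence 1 is padding (mask = 0)
token_embeddings = torch.tensor([[[1.0, 3.0], [5.0, 7.0]],
                                 [[2.0, 4.0], [9.0, 9.0]]])
attention_mask = torch.tensor([[1, 1],
                               [1, 0]])

mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
pooled = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
print(pooled)  # tensor([[3., 5.], [2., 4.]]) -- the padded token [9., 9.] is ignored
```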

The model follows standard HF Transformers usage. For example:

To load the model, run

```python
model = GTR.from_pretrained('kyriemao/gtr-t5-base')
```
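To sanity-check the conversion against the original SentenceTransformer checkpoint, here is a hedged sketch (it assumes the sentence-transformers package is installed; the tokenizer usage mirrors the embedding example below):

```python
import numpy as np
import torch
from sentence_transformers import SentenceTransformer
from transformers import AutoTokenizer

sentences = ["This is an example sentence"]

# Original SentenceTransformer model: mean pooling + dense head + normalize
st_emb = SentenceTransformer('sentence-transformers/gtr-t5-base').encode(sentences)

# This HF port
hf_model = GTR.from_pretrained('kyriemao/gtr-t5-base')
tokenizer = AutoTokenizer.from_pretrained('kyriemao/gtr-t5-base')
enc = tokenizer(sentences, padding=True, return_tensors='pt')
with torch.no_grad():
    hf_emb = hf_model(**enc).numpy()

print(np.allclose(st_emb, hf_emb, atol=1e-5))  # expect True if the port is faithful
```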

To get the sentence embeddings from a forward pass:

```python
import torch
from transformers import AutoTokenizer

sentences = ["This is an example sentence", "Each sentence is converted"]
model = GTR.from_pretrained('kyriemao/gtr-t5-base')
tokenizer = AutoTokenizer.from_pretrained('kyriemao/gtr-t5-base')
input_encodings = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

model.eval()
with torch.no_grad():
    output = model(**input_encodings)  # shape (2, 768), L2-normalized rows
```
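Because the forward pass L2-normalizes each embedding, the dot product between two embeddings equals their cosine similarity. A minimal follow-up sketch, reusing `output` from the snippet above:

```python
# Pairwise cosine similarities of the (already normalized) embeddings
similarity = output @ output.T  # (2, 2)
print(similarity[0, 1].item())  # similarity between the two example sentences
```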