This model is ClimateBert fine-tuned on a textual entailment task using the Climate FEVER dataset. Given a (claim, evidence) pair, the model predicts whether the evidence supports the claim (entailment), refutes it (contradiction), or provides not enough information (neutral). The model achieves 67% validation accuracy.

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained("amandakonet/climatebert-fact-checking")
tokenizer = AutoTokenizer.from_pretrained("amandakonet/climatebert-fact-checking")

# Tokenize a single (claim, evidence) pair; pad/truncate to the model's maximum length
features = tokenizer(['Beginning in 2005, however, polar ice modestly receded for several years'],
                     ['Polar Discovery "Continued Sea Ice Decline in 2005"'],
                     padding='max_length', truncation=True, return_tensors="pt", max_length=512)

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    # Map the highest-scoring logit for each pair to its class label
    label_mapping = ['entailment', 'contradiction', 'neutral']
    labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
    print(labels)
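
If you want per-class scores rather than just the top label, you can apply a softmax over the three logits. The snippet below is a minimal sketch that continues the example above and assumes the same label ordering ('entailment', 'contradiction', 'neutral'); check the model's config.id2label before relying on that ordering.

import torch.nn.functional as F

# Convert logits to per-class probabilities (assumes the label order above matches the model's config)
probs = F.softmax(scores, dim=1)
for pair_probs in probs:
    for label, p in zip(label_mapping, pair_probs.tolist()):
        print(f"{label}: {p:.3f}")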