FIN_BERT_sentiment

This model is a fine-tuned version of bert-base-uncased on the financial_phrasebank dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4905
  • F1: 0.8891
  • Acc: 0.8886
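
For reference, the model can also be loaded directly with the Transformers Auto classes instead of the pipeline example later in this card. This is a minimal sketch; the Negative/Neutral/Positive ordering is an assumption based on the LABEL_0/1/2 mapping used in that pipeline example.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the fine-tuned checkpoint from the Hub
tokenizer = AutoTokenizer.from_pretrained("Sharpaxis/FIN_BERT_sentiment")
model = AutoModelForSequenceClassification.from_pretrained("Sharpaxis/FIN_BERT_sentiment")
model.eval()

text = "The company reported a 20% increase in quarterly revenue."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the three classes; label order assumed to be LABEL_0/1/2 = Negative/Neutral/Positive
probs = torch.softmax(logits, dim=-1).squeeze().tolist()
print(dict(zip(["Negative", "Neutral", "Positive"], probs)))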

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: adamw_torch (AdamW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 5
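
A training setup consistent with these hyperparameters could look like the sketch below. The financial_phrasebank config name, the train/eval split, and the weighted F1 averaging are assumptions; the card does not specify them.

import numpy as np
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumed dataset config and split; the card does not state which were used.
raw = load_dataset("financial_phrasebank", "sentences_allagree")
splits = raw["train"].train_test_split(test_size=0.2, seed=42)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True)

tokenized = splits.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Weighted averaging is an assumption for the reported F1
    return {"f1": f1_score(labels, preds, average="weighted"),
            "acc": accuracy_score(labels, preds)}

args = TrainingArguments(
    output_dir="fin_bert_sentiment",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
    eval_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
    compute_metrics=compute_metrics,
)
trainer.train()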

Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | Acc    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.5295        | 1.0   | 211  | 0.3757          | 0.8731 | 0.8720 |
| 0.2174        | 2.0   | 422  | 0.3117          | 0.8911 | 0.8910 |
| 0.1129        | 3.0   | 633  | 0.4066          | 0.8886 | 0.8874 |
| 0.0459        | 4.0   | 844  | 0.4923          | 0.8896 | 0.8886 |
| 0.0275        | 5.0   | 1055 | 0.4905          | 0.8891 | 0.8886 |

Framework versions

  • Transformers 4.46.2
  • Pytorch 2.5.1
  • Datasets 3.1.0
  • Tokenizers 0.20.3
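
A quick way to confirm a local environment matches these versions (purely illustrative):

# Print installed versions to compare against those listed above
import datasets, tokenizers, torch, transformers

print("Transformers:", transformers.__version__)  # card reports 4.46.2
print("PyTorch:", torch.__version__)              # card reports 2.5.1
print("Datasets:", datasets.__version__)          # card reports 3.1.0
print("Tokenizers:", tokenizers.__version__)      # card reports 0.20.3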

Code to use the model as a pipeline classifier

import numpy as np
import plotly.graph_objects as go
from transformers import pipeline

# Load the sentiment analysis pipeline; top_k=None returns scores for all labels
classifier = pipeline("text-classification", model="Sharpaxis/FIN_BERT_sentiment", top_k=None)

def finance_sentiment_predictor(text):
    text = str(text)
    out = classifier(text)[0]
    scores = [sample['score'] for sample in out]
    labels = [sample['label'] for sample in out]
    # Map the raw model labels to human-readable sentiment names
    label_map = {'LABEL_0': "Negative", 'LABEL_1': "Neutral", 'LABEL_2': "Positive"}
    sentiments = [label_map[label] for label in labels]
    for i in range(len(scores)):
        print(f"{sentiments[i]} : {scores[i]}")
    print(f"Sentiment of text is {sentiments[np.argmax(scores)]}")
    # Plot the per-class scores as a bar chart
    fig = go.Figure(
        data=[go.Bar(x=sentiments, y=scores, marker=dict(color=["red", "blue", "green"]), width=0.3)])
    fig.update_layout(
        title="Sentiment Analysis Scores",
        xaxis_title="Sentiments",
        yaxis_title="Scores",
        template="plotly_dark"
    )
    fig.show()
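
Example call (the sample sentence is illustrative):

finance_sentiment_predictor("The company's quarterly revenue rose 15%, beating analyst estimates.")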