---
license: afl-3.0
datasets:
- tweet_eval
- sentiment140
- mteb/tweet_sentiment_extraction
- yelp_review_full
- amazon_polarity
language:
- en
metrics:
- accuracy
- sparse_val_accuracy
- sparse_val_categorical_accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- text-classification
- roberta
- roberta-base
- sentiment-analysis
- nlp
- tweet-analysis
- tweet
- analysis
- sentiment
- positive
- news-analysis
---
# BYRD'S I - ROBERTA-BASED TWEET/REVIEW/TEXT ANALYSIS
This is a ro<b>BERT</b>a-base model fine-tuned on 8 datasets with ~20M tweets. The model is intended for English, but it can also do a fine job on other languages.
<b>Git Repo:</b> <a href="https://github.com/Caffeine-Coders/Sentiment-Analysis-Project">SENTIMENTANALYSIS-PROJECT</a><br/>
<b>Demo:</b> <a href="https://byrdi.netlify.app/">BYRD'S I</a><br/>
<b>Labels:</b><br/>
0 -> Negative<br/>
1 -> Neutral<br/>
2 -> Positive<br/>
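
For reference, a minimal sketch of decoding a prediction into one of these label strings. The `id2label` dict below is written out by hand from the table above (an assumption for illustration; it is not read from the model config):

```python
import numpy as np

# Hand-written mapping from the label table above
id2label = {0: "Negative", 1: "Neutral", 2: "Positive"}

def decode(logits: np.ndarray) -> str:
    """Map a single row of logits to its label string."""
    return id2label[int(np.argmax(logits))]

print(decode(np.array([-1.2, 0.3, 2.5])))  # -> "Positive"
```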
<b>Model Metrics</b><br/>
<b>Accuracy:</b> ~96% <br/>
<b>Sparse Categorical Accuracy:</b> 0.9597 <br/>
<b>Loss:</b> 0.1144 <br/>
<b>Validation loss (last training run):</b> 0.1482 <br/>
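
For context, sparse categorical accuracy compares integer class ids against the argmax of the predicted scores. A minimal sketch of the metric itself with `tf.keras` (this is not this model's training code):

```python
import tensorflow as tf

metric = tf.keras.metrics.SparseCategoricalAccuracy()
metric.update_state(
    [2, 0, 1],                # true label ids
    [[0.1, 0.1, 0.8],         # argmax = 2 (correct)
     [0.9, 0.05, 0.05],       # argmax = 0 (correct)
     [0.2, 0.7, 0.1]],        # argmax = 1 (correct)
)
print(metric.result().numpy())  # 1.0
```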
<b>Note:</b>
Due to dataset discrepancies in the Neutral class, we published another model, the <a href="https://huggingface.co./AK776161/birdseye_roberta-base-18">Byrd's I positive/negative-only model</a>, to isolate neutral data, and have used the <b>AdaBoot</b> method to combine the two models and get accurate output.
# Example of Classification:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import numpy as np

# model 0: the positive/negative-focused checkpoint referenced in the note above.
# from_tf=True converts the TensorFlow weights to PyTorch (requires TensorFlow installed).
tokenizer = AutoTokenizer.from_pretrained("AK776161/birdseye_roberta-base-18", use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained("AK776161/birdseye_roberta-base-18", from_tf=True)

# model 1: the three-class tweet-eval checkpoint
tokenizer1 = AutoTokenizer.from_pretrained("AK776161/birdseye_roberta-base-tweet-eval", use_fast=True)
model1 = AutoModelForSequenceClassification.from_pretrained("AK776161/birdseye_roberta-base-tweet-eval", from_tf=True)

# ----------------------- AdaBoot technique ---------------------------
def nparraymeancalc(arr1, arr2):
    """Average the two models' logits row by row."""
    returner = []
    for i in range(len(arr1)):
        # Zero out model 0's logit at index 1 (Neutral, per the label table)
        # when it is strongly negative, before averaging.
        if arr1[i][1] < -7:
            arr1[i][1] = 0
        returner.append(np.mean([arr1[i], arr2[i]], axis=0))
    return np.array(returner)

def predictions(tokenizedtext):
    """Run both models and return their averaged logits."""
    output1 = model(**tokenizedtext)
    output2 = model1(**tokenizedtext)
    logits1 = output1.logits.detach().numpy()
    logits2 = output2.logits.detach().numpy()
    return nparraymeancalc(logits1, logits2)

def labelassign(predictionresult):
    """Pick the highest-scoring class id for each row of averaged logits."""
    return [int(row.argmax()) for row in predictionresult]

tokenizeddata = tokenizer("----YOUR_TEXT---", return_tensors='pt', padding=True, truncation=True)
result = predictions(tokenizeddata)
print(labelassign(result))  # e.g. [2] -> Positive
```
Output for "I LOVE YOU":
```
1) Positive: 0.994
2) Negative: 0.000
3) Neutral: 0.006
```
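
The script above prints label ids; the per-class probabilities shown in the sample output can be recovered by softmaxing the averaged logits. A minimal sketch with a plain NumPy softmax (an assumption for illustration; this step is not part of the original code):

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

# `result` is the averaged-logit array returned by predictions() above
probs = softmax(result)
for row in probs:
    for label_id in row.argsort()[::-1]:  # highest score first
        name = {0: "Negative", 1: "Neutral", 2: "Positive"}[label_id]
        print(f"{name}: {row[label_id]:.3f}")
```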