JTES speech emotion recognition model for S3PRL using izanami-wav2vec2-large
imprt/izanami-wav2vec2-large-jtes-er
This model is a speech emotion recognition model for S3PRL, trained on JTES (Japanese Twitter-based Emotional Speech, v1.1) with imprt/izanami-wav2vec2-large as the upstream model.
JTES is an emotional speech database that can be used for emotion recognition as well as for the recognition and synthesis of speech with various emotions. The database was designed by compiling tweets collected from Twitter (X) and selecting emotion-dependent tweets with phonetic and prosodic balance in mind [1].
Demo: How to use with S3PRL
git clone https://github.com/s3prl/s3prl.git
cd s3prl
git checkout v0.4.17
# copy all the files in this repository into the s3prl directory
# store the izanami-wav2vec2-large-s3prl.pt checkpoint from imprt/izanami-wav2vec2-large in the s3prl/upstream_models/ directory
# store JTES (v1.1) in "s3prl/jtes_v1.1"
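# A hedged sketch of one way to fetch those files with the Hugging Face CLI
# (assumes huggingface_hub is installed; both repositories are gated on the Hub,
# so log in and accept their terms first; adjust --local-dir to match the layout above):
huggingface-cli login
huggingface-cli download imprt/izanami-wav2vec2-large-jtes-er --local-dir .
huggingface-cli download imprt/izanami-wav2vec2-large izanami-wav2vec2-large-s3prl.pt --local-dir upstream_models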
pip install -e ".[all]"
cd s3prl
# evaluate Session 1 (fold 1)
python3 run_downstream.py -m evaluate -e result/downstream/izanami-wav2vec2-large-jtes-er_fold1/dev-best.ckpt -o "config.downstream_expert.datarc.eval_batch_size=1"
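The command above evaluates fold 1 only. Assuming the remaining checkpoints follow the same naming pattern (izanami-wav2vec2-large-jtes-er_fold2 through _fold5), a minimal sketch for evaluating all five sessions:

# hedged sketch: assumes the fold 2-5 checkpoint directories mirror fold 1
for fold in 1 2 3 4 5; do
  python3 run_downstream.py -m evaluate \
    -e "result/downstream/izanami-wav2vec2-large-jtes-er_fold${fold}/dev-best.ckpt" \
    -o "config.downstream_expert.datarc.eval_batch_size=1"
done

To retrain a fold instead of evaluating the released checkpoint, s3prl's run_downstream.py can load the upstream checkpoint directly via the wav2vec2_local upstream; note that the downstream task name passed to -d is a hypothetical placeholder that depends on the downstream files copied from this repository:

# hedged sketch: "jtes_er" is a hypothetical downstream name; the real name
# depends on the files copied from this repository in the setup step above
python3 run_downstream.py -m train -n izanami-wav2vec2-large-jtes-er_fold1 \
  -u wav2vec2_local -k ../upstream_models/izanami-wav2vec2-large-s3prl.pt \
  -d jtes_er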
Results
Accuracy
| session  | accuracy |
|----------|----------|
| session1 | 0.7794   |
| session2 | 0.7819   |
| session3 | 0.8224   |
| session4 | 0.7549   |
| session5 | 0.8671   |
| average  | 0.8012   |
Citation
Citing SUPERB
@article{yang2021superb,
  title={SUPERB: Speech processing Universal PERformance Benchmark},
  author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
  journal={arXiv preprint arXiv:2105.01051},
  year={2021}
}
Citing JTES
[1] Emika Takeishi, Takashi Nose, Yuya Chiba, and Akinori Ito, "Construction and Analysis of Phonetically and Prosodically Balanced Emotional Speech Database," in Proceedings of Oriental COCOSDA, pp. 16-21, 2016.
@INPROCEEDINGS{7918977,
  author={Takeishi, Emika and Nose, Takashi and Chiba, Yuya and Ito, Akinori},
  booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},
  title={Construction and analysis of phonetically and prosodically balanced emotional speech database},
  year={2016},
  pages={16-21},
  keywords={Speech;Databases;Speech recognition;Entropy;Emotion recognition;Context;Standardization;emotional speech database;speech corpus;emotion recognition;emotional speech recognition},
  doi={10.1109/ICSDA.2016.7918977}
}
License
Apache License, Version 2.0
Model tree for imprt/izanami-wav2vec2-large-jtes-er
Base model
imprt/izanami-wav2vec2-large