---
datasets:
- Cnam-LMSSC/vibravox
language: fr
library_name: transformers
license: mit
tags:
- audio
- audio-to-audio
- speech
model-index:
- name: EBEN(M=?,P=?,Q=?)
  results:
  - task:
      type: speech-enhancement
      name: Bandwidth Extension
    dataset:
      name: Vibravox["YOUR_MIC"]
      type: Cnam-LMSSC/vibravox
      args: fr
    metrics:
    - type: stoi
      value: '???'
      name: Test STOI, in-domain training
    - type: n-mos
      value: '???'
      name: Test Noresqa-MOS, in-domain training
---
# Model Card

- Developed by: Cnam-LMSSC
- Model: EBEN(M=?,P=?,Q=?) (see publication in IEEE TASLP - arXiv link)
- Language: French
- License: MIT
- Training dataset: `speech_clean` subset of Cnam-LMSSC/vibravox
- Samplerate for usage: 16 kHz
## Overview
This bandwidth extension model, trained on Vibravox body conduction sensor data, enhances body-conducted speech audio by denoising and regenerating mid and high frequencies from low-frequency content.
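To illustrate the band-limitation problem the model addresses (a self-contained sketch, not the model itself: the tones, sample rate, and thresholds below are arbitrary choices for demonstration), a body-conduction sensor behaves roughly like a low-pass filter, so mid and high speech frequencies are missing from its output:

```python
import math
import torch

# Illustration only (not the model): one second of a two-tone signal at 16 kHz,
# mixing a low (300 Hz) and a mid (4 kHz) component.
sr = 16_000
t = torch.arange(sr) / sr
full_band = torch.sin(2 * math.pi * 300 * t) + torch.sin(2 * math.pi * 4_000 * t)
spectrum = torch.fft.rfft(full_band).abs()

# Both components are present in the full-band spectrum
# (1 Hz bin resolution, so bins 300 and 4000 carry the energy).
print(spectrum[300] > 1_000, spectrum[4_000] > 1_000)  # tensor(True) tensor(True)

# A band-limited capture keeps only the low component; EBEN's task is to
# regenerate the discarded mid/high content from what remains.
low_band = torch.sin(2 * math.pi * 300 * t)
low_spectrum = torch.fft.rfft(low_band).abs()
print(low_spectrum[4_000] < 1.0)  # tensor(True)
```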
## Disclaimer
This model, trained for a specific non-conventional speech sensor, is intended to be used with in-domain data. Using it with other sensor data may lead to suboptimal performance.
## Link to BWE models trained on other body-conducted sensors
The entry point to all EBEN models for Bandwidth Extension (BWE) is available at https://huggingface.co./Cnam-LMSSC/vibravox_EBEN_models.
## Training procedure
Detailed instructions for reproducing the experiments are available in the [jhauret/vibravox](https://github.com/jhauret/vibravox) GitHub repository.
## Inference script

```python
import torch
import torchaudio
from datasets import load_dataset
from vibravox.torch_modules.dnn.eben_generator import EBENGenerator

# Load the pretrained EBEN generator for your sensor
model = EBENGenerator.from_pretrained("Cnam-LMSSC/EBEN_YOUR_MIC")

# Stream the Vibravox test split and take one sample from your sensor's channel
test_dataset = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="test", streaming=True)
audio_48kHz = torch.Tensor(next(iter(test_dataset))["audio.YOUR_MIC"]["array"])

# Resample from the dataset's 48 kHz to the model's 16 kHz operating rate
audio_16kHz = torchaudio.functional.resample(audio_48kHz, orig_freq=48_000, new_freq=16_000)

# Add batch and channel dimensions, trim to a length the generator accepts, and enhance
cut_audio_16kHz = model.cut_to_valid_length(audio_16kHz[None, None, :])
enhanced_audio_16kHz = model(cut_audio_16kHz)
```
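Assuming the enhanced output keeps the input's `(batch, channels, samples)` layout (a sketch with a stand-in tensor; the real tensor comes from the script above), the batch dimension can be dropped before writing the result to disk:

```python
import torch

# Stand-in for `enhanced_audio_16kHz`; assumption: the real model output
# shares this (batch=1, channels=1, samples) layout.
enhanced_audio_16kHz = torch.zeros(1, 1, 32_000)

# torchaudio.save expects a (channels, samples) tensor, so drop the batch dimension:
waveform = enhanced_audio_16kHz.squeeze(0).detach()
print(tuple(waveform.shape))  # (1, 32000)

# e.g. torchaudio.save("enhanced_audio.wav", waveform, 16_000)
```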