# Herta Voice Changer  
  
## Introduction  
  
This AI model is based on **SoftVC VITS Singing Voice Conversion**; refer to the 4.0 branch of this [GitHub repository](https://github.com/svc-develop-team/so-vits-svc/tree/4.0). The model is inspired by [Herta](https://honkai-star-rail.fandom.com/wiki/Herta) from [Honkai Star Rail](https://hsr.hoyoverse.com/en-us/), and it can be used to convert the original voice in an audio file into this character's voice.  
  
## How to Prepare Audio Files  
  
Your audio files should be **shorter than 10 seconds**, contain no **BGM**, and have a sampling rate of **44100 Hz**.  
  
1. Create a new folder inside the `dataset_raw` folder (This folder name will be your `SpeakerID`).  
2. Put your audio files into the folder you created above.  
  
### Note:  
  
1. Your audio files should be in `.wav` format.  
2. If your audio files are longer than 10 seconds, I suggest you trim them down using your desired software or [audio slicer GUI](https://github.com/flutydeer/audio-slicer).  
3. If your audio files have **BGM**, please remove it using a program such as [Ultimate Vocal Remover](https://ultimatevocalremover.com/). The `3_HP-Vocal-UVR.pth` or `UVR-MDX-NET Main` model is recommended.  
4. If your audio files have a sampling rate different from 44100 Hz, I suggest you resample them using [Audacity](https://www.audacityteam.org/) or by running `python resample.py` in your `CMD`.  
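The rules above can be double-checked with a small script before training. This is a sketch using only Python's standard library `wave` module (which reads PCM `.wav` files only); `dataset_raw/herta` is a hypothetical `SpeakerID` folder name — substitute your own.

```python
import wave
from pathlib import Path

def clip_issues(samplerate: int, duration_s: float) -> list:
    """Return a list of reasons a clip does not meet the dataset rules."""
    issues = []
    if samplerate != 44100:
        issues.append(f"sampling rate is {samplerate} Hz, expected 44100 Hz")
    if duration_s > 10:
        issues.append(f"duration is {duration_s:.1f} s, expected 10 s or less")
    return issues

# Scan every .wav file in a SpeakerID folder ("herta" is a placeholder name).
dataset_dir = Path("dataset_raw/herta")
for wav_path in sorted(dataset_dir.glob("*.wav")):
    with wave.open(str(wav_path), "rb") as w:
        rate = w.getframerate()
        duration = w.getnframes() / rate
    for issue in clip_issues(rate, duration):
        print(f"{wav_path.name}: {issue}")
```

Any file it prints should be trimmed, de-noised, or resampled as described in the notes above before you start training.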
  
## How to Build Locally  
  
1. Clone the repository from the 4.0 branch: `git clone -b 4.0 https://github.com/svc-develop-team/so-vits-svc.git`  
2. Put your `prepared audio` into the `dataset_raw` folder.  
3. Open your **Command Line** and install the `so-vits-svc-fork` library: `pip install -U so-vits-svc-fork`  
4. Navigate to your project directory using the **Command Line**.  
5. Run `svc pre-resample` in your prompt.  
6. After completing the step above, run `svc pre-config`.  
7. After completing the step above, run `svc pre-hubert`. **(This step may take a while.)**.  
8. After completing the step above, run `svc train -t`. **(This step will take a while based on your `GPU` and the number of `epochs` you want.)**.  
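Steps 5–8 above can also be scripted so that a failed step stops the run. A minimal sketch; it assumes the `svc` CLI from `so-vits-svc-fork` is on your `PATH` and that the script is run from the project directory:

```python
import subprocess

# The four svc steps from the list above, in order.
PIPELINE = [
    ["svc", "pre-resample"],  # resample the clips in dataset_raw/
    ["svc", "pre-config"],    # generate the training config
    ["svc", "pre-hubert"],    # extract content features (slow)
    ["svc", "train", "-t"],   # train the model (slowest step)
]

def run_pipeline(commands=PIPELINE, runner=subprocess.run):
    """Run each step in order, raising on the first non-zero exit code."""
    for cmd in commands:
        if runner(cmd).returncode != 0:
            raise RuntimeError(f"step failed: {' '.join(cmd)}")

if __name__ == "__main__":
    run_pipeline()
```

Stopping at the first failure matters here because each step consumes the previous step's output; there is no point running `svc train -t` if `svc pre-hubert` failed.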
  
### How to Change Epoch Value Locally  
An `epoch` is one full pass over your training data. **Example: if you set the epoch value to 10000, training will run for 10000 epochs before finishing** `(the default epoch value is 10000)`. To change your `epoch value`:  
  
1. Go to your project folder.  
2. Find the folder named `config`.  
3. Inside that folder, you should see `config.json`.  
4. In `config.json`, there should be a section that looks like this:

```json
    "train": {
        "log_interval": 200,
        "eval_interval": 800,
        "seed": 1234,
        "epochs": <PUT YOUR VALUE HERE>,
        "learning_rate": 0.0001,
        "betas": [0.8, 0.99]
    }
```

This can be done after `svc pre-config` has already finished.
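Rather than editing the file by hand, the value can also be changed with a short script. A sketch, assuming the generated config sits at `config/config.json` as described above:

```python
import json
from pathlib import Path

def set_epochs(config_path, epochs):
    """Rewrite the "epochs" entry under "train" in a so-vits-svc config.json."""
    path = Path(config_path)
    cfg = json.loads(path.read_text())
    cfg["train"]["epochs"] = epochs
    path.write_text(json.dumps(cfg, indent=4))

# Example (run after `svc pre-config` has created the file):
# set_epochs("config/config.json", 2000)
```

All other settings in the file are preserved; only `train.epochs` is rewritten.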


### How to Perform Inference Locally
To perform inference locally, navigate to the project directory, create a Python file, and copy the following lines of code:

```python
# Assumes this script lives in (and is run from) the cloned so-vits-svc
# project directory, so that `inference.infer_tool` is importable.
import io

import librosa
import soundfile
from inference.infer_tool import Svc

your_audio_file = 'your_audio.wav'

# Load the input as 16 kHz mono and wrap it in an in-memory WAV file.
audio, sr = librosa.load(your_audio_file, sr=16000, mono=True)
raw_path = io.BytesIO()
soundfile.write(raw_path, audio, 16000, format='wav')
raw_path.seek(0)

# Load your trained model weights and their config.
model = Svc('logs/44k/your_model.pth', 'logs/44k/config.json')

# Convert the audio to the target speaker's voice and save the result.
out_audio, out_sr = model.infer('<YOUR SPEAKER ID>', 0, raw_path, auto_predict_f0=True)
soundfile.write('out_audio.wav', out_audio.cpu().numpy(), 44100)
```

The output file, `out_audio.wav`, will be written to your current working directory.

## How to Build in Google Colab  
  
Refer to [My Google Colab](https://colab.research.google.com/drive/1V91RM-2xzSqbmTIlaEzWZovca8stErk0?authuser=3#scrollTo=hhJ2MG1i1vfl) or the [Official Google Colab](https://colab.research.google.com/github/34j/so-vits-svc-fork/blob/main/notebooks/so-vits-svc-fork-4.0.ipynb) for the steps.  
  
### Google Drive Setup  
  
1. Create an empty folder (this will be your project folder).  
2. Inside the project folder, create a folder named `dataset_raw`.  
3. Create another folder inside `dataset_raw` (this folder name will be your `SpeakerID`).  
4. Upload your prepared audio files into the folder created in the previous step.  
  
### Google Colab Setup  
  
1. Mount your Google Drive:  
   ```python  
   from google.colab import drive  
   drive.mount('/content/drive')  
   ```

2. Install dependencies:
   ```python
   !python -m pip install -U pip setuptools wheel
   %pip install -U ipython
   %pip install -U torch torchaudio --index-url https://download.pytorch.org/whl/cu118
   ```

3. Install `so-vits-svc` library:
   `%pip install -U so-vits-svc-fork`

4. Resample your audio files:
   `!svc pre-resample`

5. Pre-config:
   `!svc pre-config`

6. Pre-hubert (this step may take a while):
   `!svc pre-hubert`

7. Train your model (this step will take a while based on your Google Colab GPU and the number of epochs you want):
   `!svc train -t`

### How to Change Epoch Value in Google Colab  
  
An epoch is one full pass over your training dataset. For example, if you set the epoch value to 10,000, training will run for 10,000 epochs before finishing (the default epoch value is 10,000).  
  
To change the epoch value:  
  
1. Go to your project folder.  
2. Find the folder named `config`.  
3. Inside that folder, you should see `config.json`.  
4. In `config.json`, there should be a section that looks like this:  

```json
    "train": {
        "log_interval": 200,
        "eval_interval": 800,
        "seed": 1234,
        "epochs": <PUT YOUR VALUE HERE>,
        "learning_rate": 0.0001,
        "betas": [0.8, 0.99]
    }
```

This can be done after `svc pre-config` has already finished.


### How to Perform Inference in Google Colab  
  
After training your model, you can use it to convert any original voice to your model voice by running the following command:  
  
```shell  
!svc infer drive/MyDrive/your_model_name/your_audio_file.wav --model-path drive/MyDrive/your_model_name/logs/44k/your_model.pth --config-path drive/MyDrive/your_model_name/logs/44k/your_config.json  
```
The output file will be named `your_audio_file.out.wav`.

### Note:  
  
1. Your Google Drive must have at least 5 GB of free space. If you don't have enough space, consider registering a new Google account.  
2. Google Colab's Free Subscription is sufficient, but using the Pro version is recommended.  
3. Set your Google Colab Hardware Accelerator to `GPU`.  
  
## Credits  
  
1. [zomehwh/sovits-models](https://huggingface.co./spaces/zomehwh/sovits-models) from Hugging Face Space  
2. [svc-develop-team/so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) from GitHub repository  
3. [voicepaw/so-vits-svc-fork](https://github.com/voicepaw/so-vits-svc-fork) from GitHub repository