When I run the example code, it automatically downloads the model files. Can I save the downloaded files in a directory I specify?
import outetts

# Configure the model
model_config = outetts.HFModelConfig_v1(
    model_path="D:/Github/OuteAI/OuteTTS-0.2-500M",
    language="en",  # Supported languages in v0.2: en, zh, ja, ko
)

# Initialize the interface
interface = outetts.InterfaceHF(model_version="0.2", cfg=model_config)

# Optional: Create a speaker profile (use a 10-15 second audio clip)
# speaker = interface.create_speaker(
#     audio_path="path/to/audio/file",
#     transcript="Transcription of the audio file."
# )

# Optional: Save and load speaker profiles
# interface.save_speaker(speaker, "speaker.json")
# speaker = interface.load_speaker("speaker.json")

# Optional: Load speaker from default presets
interface.print_default_speakers()
speaker = interface.load_default_speaker(name="male_1")

output = interface.generate(
    text="Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer, and it can be implemented in software or hardware products.",
    # Lower temperature values may result in a more stable tone,
    # while higher values can introduce varied and expressive speech
    temperature=0.1,
    repetition_penalty=1.1,
    max_length=4096,
    # Optional: Use a speaker profile for consistent voice characteristics
    # Without a speaker profile, the model will generate a voice with random characteristics
    speaker=speaker,
)

# Save the synthesized speech to a file
output.save("output.wav")

# Optional: Play the synthesized speech
# output.play()
(.venv) PS D:\Temp\PythonProjects\OuteTTSDemo> python d:\Temp\PythonProjects\OuteTTSDemo\app.py
2024-11-28 16:42:43.757 | INFO | outetts.wav_tokenizer.audio_codec:ensure_model_exists:61 - Downloading WavTokenizer model from https://huggingface.co./novateur/WavTokenizer-large-speech-75token/resolve/main/wavtokenizer_large_speech_320_24k.ckpt
C:\Users\Administrator\AppData\Roaming\outeai\tts\wavtokenizer_large_speech_75_token\wavtokenizer_large_speech_320_24k.ckpt: 100%|██████████| 1.63G/1.63G [03:05<00:00, 9.45MiB/s]
D:\Temp\PythonProjects\OuteTTSDemo\.venv\lib\site-packages\torch\nn\utils\weight_norm.py:143: FutureWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.
  WeightNorm.apply(module, name, dim)
making attention of type 'vanilla' with 768 in_channels
D:\Temp\PythonProjects\OuteTTSDemo\.venv\lib\site-packages\outetts\wav_tokenizer\decoder\pretrained.py:101: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  state_dict_raw = torch.load(model_path, map_location="cpu")['state_dict']
tokenizer_config.json: 100%|██████████| 982k/982k [00:00<00:00, 3.67MB/s]
vocab.json: 100%|██████████| 2.78M/2.78M [00:00<00:00, 4.47MB/s]
merges.txt: 100%|██████████| 1.67M/1.67M [00:00<00:00, 2.69MB/s]
tokenizer.json: 100%|██████████| 12.4M/12.4M [00:03<00:00, 3.43MB/s]
added_tokens.json: 100%|██████████| 114k/114k [00:00<00:00, 12.9MB/s]
special_tokens_map.json: 100%|██████████| 83.0k/83.0k [00:00<00:00, 15.6MB/s]
Yes, you can manually specify the paths to all of the required files in the configuration. Here's an example:
model_config = outetts.HFModelConfig_v1(
    model_path="path/to/model/folder",  # Path to the model directory
    tokenizer_path="path/to/tokenizer/folder",  # Path to the tokenizer directory
    language="en",  # Supported languages in v0.2: en, zh, ja, ko
    wavtokenizer_model_path="path/to/wavtokenizer_large_speech_320_24k.ckpt",  # Path to the WavTokenizer checkpoint
)
With these paths set, the interface loads everything from your specified directories instead of downloading the files automatically.
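Alternatively, if you want to keep the automatic download but control where the Hugging Face-managed files (model weights, tokenizer files) land, you can redirect the Hugging Face cache before importing outetts. A minimal sketch, assuming those downloads go through huggingface_hub (which honors the HF_HOME environment variable) and that "D:/models/hf_cache" is a directory you have chosen:

```python
import os

# Assumption: the model and tokenizer files are fetched via huggingface_hub,
# which honors HF_HOME. Set it BEFORE importing outetts so any download
# triggered later already sees the redirected cache location.
os.environ["HF_HOME"] = "D:/models/hf_cache"  # hypothetical target directory

# import outetts  # subsequent downloads would land under D:/models/hf_cache
```

Note that the WavTokenizer checkpoint in your log is downloaded by outetts itself (see ensure_model_exists) into %APPDATA%\outeai\tts, so it may ignore HF_HOME; for that file, use the wavtokenizer_model_path option shown above.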