Inference Methods

infer()

audio = tts.infer(
text: str,
ref_audio: str = None,
ref_codes: Tensor = None,
ref_text: str = None,
voice: dict = None,
max_chars: int = 256,
silence_p: float = 0.15,
crossfade_p: float = 0.0,
temperature: float = 1.0,
top_k: int = 50,
skip_normalize: bool = False,
)

Parameters

Parameter       Type    Description
text            str     Text to synthesize (required)
ref_audio       str     Path to reference audio for voice cloning
ref_codes       Tensor  Pre-encoded reference codes from encode_reference()
ref_text        str     Transcript of the reference audio
voice           dict    Preset voice dict from get_preset_voice()
max_chars       int     Maximum characters per chunk (default 256)
silence_p       float   Silence inserted between chunks, in seconds (default 0.15)
crossfade_p     float   Crossfade between chunks, in seconds (default 0.0)
temperature     float   Sampling temperature (default 1.0)
top_k           int     Top-k sampling cutoff (default 50)
skip_normalize  bool    Skip text normalization (default False)
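Long text is split into chunks of at most max_chars characters and the per-chunk waveforms are rejoined using silence_p (and optionally crossfade_p). A minimal sketch of the silence-joining step, assuming the documented 24 kHz output rate — this is illustrative, not the library's actual implementation:

```python
import numpy as np

SR = 24_000  # output sample rate documented for infer()

def join_chunks(chunks, silence_p=0.15, sr=SR):
    """Concatenate per-chunk waveforms with silence_p seconds of silence between them (sketch)."""
    gap = np.zeros(int(silence_p * sr), dtype=np.float32)
    out = []
    for i, chunk in enumerate(chunks):
        if i:  # insert the gap before every chunk except the first
            out.append(gap)
        out.append(np.asarray(chunk, dtype=np.float32))
    return np.concatenate(out)
```

With the default silence_p of 0.15 s, each gap adds 3,600 samples at 24 kHz.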

Returns

numpy.ndarray — Audio waveform at 24 kHz.

Voice Priority

  1. voice dict (from preset)
  2. ref_audio + ref_text
  3. ref_codes + ref_text
  4. Default preset voice
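The priority order above can be expressed as a simple resolution function — a sketch mirroring the documented rules, with illustrative return values (not the library's internal API):

```python
def resolve_voice(voice=None, ref_audio=None, ref_codes=None, ref_text=None):
    """Pick the voice source per the documented priority order (sketch)."""
    if voice is not None:                               # 1. preset voice dict
        return ("preset", voice)
    if ref_audio is not None and ref_text is not None:  # 2. clone from audio file
        return ("clone_audio", ref_audio, ref_text)
    if ref_codes is not None and ref_text is not None:  # 3. clone from pre-encoded codes
        return ("clone_codes", ref_codes, ref_text)
    return ("default",)                                 # 4. default preset voice
```

Note that both cloning paths require ref_text; a reference audio or codes argument without a transcript falls through to the default voice.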

infer_batch()

audios = tts.infer_batch(texts: List[str], ...)

Returns List[numpy.ndarray]. In PyTorch mode, texts are generated as a true batch; the GGUF backend processes them sequentially.
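The GGUF sequential path is equivalent to looping over infer() yourself. A sketch of that fallback, with infer passed in as a plain callable for illustration:

```python
def infer_batch_sequential(infer_fn, texts, **kwargs):
    """GGUF-style fallback: synthesize each text with one infer_fn call (sketch)."""
    return [infer_fn(text=t, **kwargs) for t in texts]
```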


infer_stream()

for chunk in tts.infer_stream(text: str, ...):
    play_audio(chunk)

Yields numpy.ndarray chunks (GGUF only).
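If you want the full waveform rather than live playback, the streamed chunks can simply be accumulated and concatenated — a minimal consumer sketch:

```python
import numpy as np

def collect_stream(chunk_iter):
    """Drain a chunk iterator (e.g. from infer_stream) into one waveform (sketch)."""
    return np.concatenate([np.asarray(c, dtype=np.float32) for c in chunk_iter])
```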


save()

tts.save(audio: numpy.ndarray, output_path: str)

encode_reference()

codes = tts.encode_reference(ref_audio_path: str)
# Returns: torch.Tensor
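When cloning the same voice repeatedly, re-encoding the reference audio on every infer() call is wasted work; encoding once and passing the result as ref_codes avoids it. A hypothetical caching helper (the cache and function name are illustrative, not part of the API):

```python
_codes_cache = {}

def get_ref_codes(tts, ref_audio_path):
    """Encode a reference once, then reuse the cached codes as ref_codes (sketch)."""
    if ref_audio_path not in _codes_cache:
        _codes_cache[ref_audio_path] = tts.encode_reference(ref_audio_path)
    return _codes_cache[ref_audio_path]
```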

close()

tts.close()

# Or use a context manager:
with Vieneu() as tts:
    audio = tts.infer(text="...")