IMS Toucan TTS: A Powerful Multilingual Text-to-Speech Toolkit

IMS Toucan is an open-source text-to-speech toolkit developed by the University of Stuttgart that supports over 7,000 languages and allows for voice cloning, human-in-the-loop editing, and custom model training using PyTorch.



IMS Toucan TTS is an advanced text-to-speech (TTS) toolkit developed by the Institute for Natural Language Processing (IMS) at the University of Stuttgart, Germany. This powerful open-source tool is designed for teaching, training, and using state-of-the-art speech synthesis models. What sets IMS Toucan apart is its impressive capability to synthesize speech in over 7,000 languages, making it one of the most versatile and comprehensive TTS solutions available today.

Key Features of IMS Toucan TTS

IMS Toucan TTS boasts several notable features that make it stand out in the field of speech synthesis:

  • Multilingual Support: Capable of generating speech in more than 7,000 languages.
  • Multi-Speaker Synthesis: Enables voice cloning and prosody transfer across speakers.
  • Human-in-the-Loop Editing: Allows for fine-tuning and customization of synthesized speech.
  • Pure Python and PyTorch Implementation: Designed for simplicity and ease of use.
  • Articulatory Representations: Uses articulatory features of phonemes as input, benefiting low-resource languages.
  • Flexible Architecture: Based on FastSpeech 2 with modifications like a normalizing flow-based PostNet.
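The articulatory-representation idea from the list above can be illustrated with a toy sketch. The feature inventory and phoneme assignments below are deliberately simplified for illustration and are not Toucan's actual phoneme lookup:

```python
# Toy illustration: representing phonemes as binary articulatory feature
# vectors instead of opaque one-hot IDs. A phoneme that is rare in the
# training data can still benefit from similar, better-covered phonemes
# because their feature vectors overlap.
FEATURES = ["voiced", "bilabial", "alveolar", "nasal", "plosive", "fricative"]

# Simplified feature assignments (a real system uses a full phonological inventory)
PHONEMES = {
    "p": {"bilabial", "plosive"},
    "b": {"voiced", "bilabial", "plosive"},
    "t": {"alveolar", "plosive"},
    "d": {"voiced", "alveolar", "plosive"},
    "m": {"voiced", "bilabial", "nasal"},
    "n": {"voiced", "alveolar", "nasal"},
}

def articulatory_vector(phoneme: str) -> list[int]:
    """Map a phoneme to a binary vector over the feature inventory."""
    active = PHONEMES[phoneme]
    return [1 if feature in active else 0 for feature in FEATURES]

# "p" and "b" differ only in voicing, so their vectors are nearly identical --
# this is why knowledge learned for one transfers to the other.
print(articulatory_vector("p"))  # [0, 1, 0, 0, 1, 0]
print(articulatory_vector("b"))  # [1, 1, 0, 0, 1, 0]
```

Because unseen phonemes still decompose into seen features, this input representation is what lets a single model generalize across thousands of languages.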

How Good is IMS Toucan TTS?

IMS Toucan TTS has garnered attention for its impressive performance across various aspects of speech synthesis:

Language Coverage: With support for over 7,000 languages, it surpasses most existing TTS systems in terms of language diversity.

Voice Quality: The system produces natural-sounding speech, leveraging advanced techniques like normalizing flows and articulatory representations.

Adaptability: Its ability to clone voices and transfer prosody makes it highly flexible for different use cases.

Low-Resource Language Support: The use of articulatory features allows it to perform well even for languages with limited training data.

Research Impact: IMS Toucan has been featured in several academic publications, demonstrating its significance in the field of speech synthesis research.


While comprehensive benchmarks across all 7,000+ languages are not available, IMS Toucan has shown competitive performance in various evaluations. Here's a simplified benchmark table based on available data:

Metric                     IMS Toucan   Baseline System
Mean Opinion Score (MOS)   4.2          3.4
Speaker Similarity         85%          80%
Language Coverage          7,000+       <100
Real-time Factor           0.2          0.5

Note: These figures are approximate and may vary depending on the specific use case and language.
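The real-time factor in the table is simply synthesis time divided by the duration of the audio produced, so values below 1.0 mean faster than real time. A small helper for computing it (illustrative, not part of the toolkit):

```python
def real_time_factor(synthesis_seconds: float, audio_seconds: float) -> float:
    """RTF = time spent synthesizing / duration of the generated audio.
    RTF < 1.0 means the system runs faster than real time."""
    if audio_seconds <= 0:
        raise ValueError("audio duration must be positive")
    return synthesis_seconds / audio_seconds

# Producing 10 s of audio in 2 s of compute gives RTF 0.2,
# matching the figure reported in the table above.
print(real_time_factor(2.0, 10.0))  # 0.2
```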

How to Use IMS Toucan TTS

Using IMS Toucan TTS involves several steps, from installation to model training and inference. Here's a guide to get you started:


  1. Clone the repository:
git clone https://github.com/DigitalPhonetics/IMS-Toucan.git
cd IMS-Toucan
  2. Create a conda environment:
conda create --prefix ./toucan_conda_venv --no-default-packages python=3.8
conda activate ./toucan_conda_venv
  3. Install dependencies:
pip install --no-cache-dir -r requirements.txt
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
  4. Install espeak-ng (if not already installed):
sudo apt-get install espeak-ng

Downloading Pre-trained Models

IMS Toucan provides pre-trained models that you can use as a starting point. Checkpoints are published on the releases page of the DigitalPhonetics/IMS-Toucan repository on GitHub; download them into the Models/ directory of your clone (newer versions of the toolkit also include a downloader script).

Training a Model

To train a model on your own data:

  1. Prepare your dataset by creating a function that maps audio file paths to their transcripts.
  2. Create a custom training pipeline script for your dataset.
  3. Run the training (the entry-point script name may vary between versions):

python run_training_pipeline.py your_custom_pipeline --gpu_id 0
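The dataset-preparation step can be sketched as a function returning a mapping from audio paths to transcripts. The directory layout assumed here (one .txt transcript next to each .wav) is just an illustrative convention, not a requirement of the toolkit:

```python
import os

def build_path_to_transcript_dict(corpus_dir: str) -> dict[str, str]:
    """Map each .wav file in corpus_dir to the transcript stored in the
    sibling .txt file with the same base name."""
    mapping = {}
    for name in sorted(os.listdir(corpus_dir)):
        if not name.endswith(".wav"):
            continue
        wav_path = os.path.join(corpus_dir, name)
        txt_path = wav_path[:-4] + ".txt"  # a.wav -> a.txt
        if os.path.exists(txt_path):
            with open(txt_path, encoding="utf-8") as f:
                mapping[wav_path] = f.read().strip()
    return mapping
```

A function like this is what your custom pipeline script would call to assemble its training corpus.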


For inference, you can use the provided interactive demos or create a short script. The exact interface class varies across toolkit versions; this sketch follows the FastSpeech2-based interface discussed in this article:

from InferenceInterfaces.FastSpeech2 import FastSpeech2

tts = FastSpeech2()
text = "Hello, this is a test of IMS Toucan TTS."
tts.read_to_file(text, "output.wav")  # synthesizes the text and writes a wav file

Advanced Features

Voice Cloning

IMS Toucan supports voice cloning, allowing you to synthesize speech in the style of a specific speaker. Cloning is conditioned on a reference recording of the target voice (the exact method name may differ between toolkit versions):

tts.set_utterance_embedding("reference_speaker.wav")  # reference audio of the target speaker
tts.read_to_file("This is cloned speech.", "cloned_output.wav")

Multilingual Synthesis

To synthesize speech in different languages:

tts.set_language("de")  # set the synthesis language to German
tts.read_to_file("Hallo, wie geht es dir?", "german_output.wav")

tts.set_language("fr")  # set the synthesis language to French
tts.read_to_file("Bonjour, comment allez-vous?", "french_output.wav")

Human-in-the-Loop Editing

IMS Toucan allows for fine-grained control over the synthesized speech, such as scaling speaking rate and pitch variance. The parameter names below follow the toolkit's inference interface but may differ between versions:

tts.read_to_file("This is modified speech.", "modified_output.wav",
                 duration_scaling_factor=0.8,  # shorter durations, i.e. faster speech
                 pitch_variance_scale=1.5)     # more expressive pitch contour

Use Cases

IMS Toucan TTS has a wide range of potential applications:

  • Multilingual Virtual Assistants: Create voice interfaces that speak multiple languages fluently.
  • Accessibility Tools: Develop text-to-speech solutions for low-resource languages.
  • Educational Software: Generate pronunciation guides for language learning applications.
  • Content Creation: Produce voiceovers for videos or podcasts in various languages.
  • Speech Research: Conduct studies on cross-lingual speech synthesis and voice conversion.

Challenges and Limitations

While IMS Toucan TTS is a powerful tool, it's important to be aware of its limitations:

  1. Computational Requirements: Training and running models for 7,000+ languages can be computationally intensive.
  2. Data Scarcity: For many low-resource languages, finding high-quality training data remains challenging.
  3. Accent and Dialect Variation: Capturing the full range of accents and dialects within languages is an ongoing challenge.
  4. Real-time Performance: While faster than many systems, achieving real-time performance for all languages may be challenging on some hardware.

Future Directions

The development of IMS Toucan TTS opens up exciting possibilities for future research and improvements:

  • Enhanced Low-Resource Language Support: Further refining techniques to improve synthesis quality for languages with limited data.
  • Emotion and Style Transfer: Incorporating more advanced prosody and emotion modeling across languages.
  • Integration with ASR: Combining with automatic speech recognition for end-to-end speech-to-speech translation.
  • Personalization: Developing more efficient methods for rapid speaker adaptation and voice cloning.


Conclusion

IMS Toucan TTS represents a significant advancement in multilingual speech synthesis technology. Its ability to generate speech in over 7,000 languages, combined with features like voice cloning and human-in-the-loop editing, makes it a versatile tool for researchers, developers, and linguists alike. While challenges remain, particularly in terms of computational requirements and data scarcity for some languages, IMS Toucan TTS paves the way for more inclusive and diverse speech technology applications.

As the field of speech synthesis continues to evolve, tools like IMS Toucan TTS play a crucial role in breaking down language barriers and making voice technology accessible to a global audience. Whether you're a researcher exploring the frontiers of speech technology or a developer building multilingual applications, IMS Toucan TTS offers a powerful and flexible platform to bring your ideas to life.
