Transcription
Overview
The ElevenLabs Speech to Text (STT) API turns spoken audio into text with state-of-the-art accuracy. Our Scribe v2 model adapts to textual cues across 90+ languages and multiple voice styles. To try a live demo, please visit our Speech to Text showcase page.
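For orientation, a minimal transcription request over the raw HTTP API might look like the sketch below. The endpoint and header follow the public API reference; the model id and filename are placeholders to adapt to your setup.

```python
# A minimal sketch: upload an audio file and print the transcript.
# "scribe_v1" is used as a placeholder model id; check the API
# reference for the id of the Scribe model you want to use.
import requests

response = requests.post(
    "https://api.elevenlabs.io/v1/speech-to-text",
    headers={"xi-api-key": "YOUR_API_KEY"},
    data={"model_id": "scribe_v1"},
    files={"file": open("sample.mp3", "rb")},
)
response.raise_for_status()
print(response.json()["text"])  # the full transcript as plain text
```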
- Step-by-step guide for using speech to text in ElevenLabs.
- Learn how to integrate the speech to text API into your application.
- Learn how to transcribe audio with ElevenLabs in real time with WebSockets.
Companies requiring HIPAA compliance must contact ElevenLabs Sales to sign a Business Associate Agreement (BAA). Please ensure this step is completed before proceeding with any HIPAA-related integrations or deployments.
Models
- Scribe v2: state-of-the-art speech recognition model
- Scribe v2 Realtime: real-time speech recognition model
Example API response
The following example shows the output of the Speech to Text API using the Scribe v2 model for a sample audio file.
The output is classified into three category types:
- word: A word in the language of the audio
- spacing: The space between words; not applicable for languages that don't use spaces, such as Japanese, Mandarin, Thai, Lao, Burmese and Cantonese
- audio_event: Non-speech sounds like laughter or applause
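For illustration, a response might look like the following sketch. The field names mirror the API reference, but the values here are invented.

```python
# Illustrative response shape (values invented). Each item in "words"
# has one of the three category types described above.
example_response = {
    "language_code": "eng",
    "language_probability": 0.98,
    "text": "Hello world (laughter)",
    "words": [
        {"text": "Hello", "type": "word", "start": 0.00, "end": 0.42},
        {"text": " ", "type": "spacing", "start": 0.42, "end": 0.45},
        {"text": "world", "type": "word", "start": 0.45, "end": 0.92},
        {"text": "(laughter)", "type": "audio_event", "start": 1.10, "end": 1.80},
    ],
}
```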
Concurrency and priority
Concurrency refers to how many requests can be processed at the same time.
For Speech to Text, files that are over 8 minutes long are transcribed in parallel internally in order to speed up processing. The audio is chunked into up to four segments, which are transcribed concurrently.
You can calculate the concurrency used by a single file as min(4, ceil(duration in minutes / 8)).
For example, a 15-minute audio file will be transcribed with a concurrency of 2, while a 120-minute audio file will be transcribed with a concurrency of 4.
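Expressed as code, that calculation is a small min/ceil, as in this sketch; the closed form is inferred from the chunking rule and the two examples above.

```python
import math

def scribe_file_concurrency(duration_minutes: float) -> int:
    """Concurrency used to transcribe one file, per the rule above."""
    if duration_minutes <= 8:
        return 1  # short files are not chunked
    return min(4, math.ceil(duration_minutes / 8))

assert scribe_file_concurrency(15) == 2    # 15-minute file -> concurrency 2
assert scribe_file_concurrency(120) == 4   # 120-minute file -> capped at 4
```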
The above calculation is only applicable to Scribe v1 and v2. For Scribe v2 Realtime, see the concurrency limit chart.
Advanced features
Keyterm prompting and entity detection come at an additional cost. See the API pricing page for detailed pricing information.
Keyterm prompting
Highlight up to 100 words or phrases to bias the model towards transcribing them. This is useful for terms that are uncommon in the audio, such as product names, proper nouns, or other domain-specific vocabulary. Keyterms are more powerful than the biased keywords or custom vocabularies offered by other models, because the model relies on context to decide whether to transcribe each term.
To learn more about how to use keyterm prompting, see the keyterm prompting documentation.
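As a sketch, keyterms might be passed alongside the audio as shown below. The "keyterms" field name and the "scribe_v2" model id are assumptions here; defer to the keyterm prompting documentation for the exact request shape.

```python
import requests

response = requests.post(
    "https://api.elevenlabs.io/v1/speech-to-text",
    headers={"xi-api-key": "YOUR_API_KEY"},
    data={
        "model_id": "scribe_v2",  # assumed model id
        # assumed field name: terms to bias the model toward
        "keyterms": '["ElevenLabs", "Scribe", "BAA"]',
    },
    files={"file": open("earnings_call.mp3", "rb")},
)
print(response.json()["text"])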
Entity detection
Scribe v2 can detect several categories of entities in the transcript and provide their exact timestamps. This is useful for highlighting sensitive items such as credit card numbers, names, medical conditions, or SSNs.
For a full list of supported entities, see the entity detection documentation.
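Once entities are detected, flagging sensitive spans could look like the following sketch. The "entities" list and its "type"/"start"/"end" fields are hypothetical stand-ins for the shape documented in the entity detection documentation.

```python
# Hypothetical response shape: an "entities" list whose items carry a
# category ("type") and timestamps in seconds ("start", "end").
transcript = {
    "text": "My card number is 4242 4242 4242 4242.",
    "entities": [
        {"type": "credit_card_number", "start": 1.2, "end": 3.9},
    ],
}

for entity in transcript["entities"]:
    if entity["type"] in {"credit_card_number", "ssn"}:
        print(f"Sensitive: {entity['type']} at {entity['start']:.1f}-{entity['end']:.1f}s")
```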
Supported languages
The Scribe v1 and v2 models support 90+ languages, including:
Afrikaans (afr), Amharic (amh), Arabic (ara), Armenian (hye), Assamese (asm), Asturian (ast), Azerbaijani (aze), Belarusian (bel), Bengali (ben), Bosnian (bos), Bulgarian (bul), Burmese (mya), Cantonese (yue), Catalan (cat), Cebuano (ceb), Chichewa (nya), Croatian (hrv), Czech (ces), Danish (dan), Dutch (nld), English (eng), Estonian (est), Filipino (fil), Finnish (fin), French (fra), Fulah (ful), Galician (glg), Ganda (lug), Georgian (kat), German (deu), Greek (ell), Gujarati (guj), Hausa (hau), Hebrew (heb), Hindi (hin), Hungarian (hun), Icelandic (isl), Igbo (ibo), Indonesian (ind), Irish (gle), Italian (ita), Japanese (jpn), Javanese (jav), Kabuverdianu (kea), Kannada (kan), Kazakh (kaz), Khmer (khm), Korean (kor), Kurdish (kur), Kyrgyz (kir), Lao (lao), Latvian (lav), Lingala (lin), Lithuanian (lit), Luo (luo), Luxembourgish (ltz), Macedonian (mkd), Malay (msa), Malayalam (mal), Maltese (mlt), Mandarin Chinese (zho), Māori (mri), Marathi (mar), Mongolian (mon), Nepali (nep), Northern Sotho (nso), Norwegian (nor), Occitan (oci), Odia (ori), Pashto (pus), Persian (fas), Polish (pol), Portuguese (por), Punjabi (pan), Romanian (ron), Russian (rus), Serbian (srp), Shona (sna), Sindhi (snd), Slovak (slk), Slovenian (slv), Somali (som), Spanish (spa), Swahili (swa), Swedish (swe), Tajik (tgk), Tamil (tam), Telugu (tel), Thai (tha), Turkish (tur), Ukrainian (ukr), Umbundu (umb), Urdu (urd), Uzbek (uzb), Vietnamese (vie), Welsh (cym), Wolof (wol), Xhosa (xho), Yoruba (yor) and Zulu (zul).
Breakdown of language support
Word Error Rate (WER) is a key metric used to evaluate the accuracy of transcription systems. It measures how many errors are present in a transcript compared to a reference transcript. Below is a breakdown of the WER for each language that Scribe v1 and v2 support.
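Concretely, WER is computed from the word-level edit distance between the system transcript and the reference: WER = (S + D + I) / N, where S, D and I are the numbers of substituted, deleted and inserted words, and N is the total number of words in the reference transcript. A 5% WER therefore means roughly one error per twenty reference words.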
Excellent (≤ 5% WER)
Belarusian (bel), Bosnian (bos), Bulgarian (bul), Catalan (cat), Croatian (hrv), Czech (ces), Danish (dan), Dutch (nld), English (eng), Estonian (est), Finnish (fin), French (fra), Galician (glg), German (deu), Greek (ell), Hungarian (hun), Icelandic (isl), Indonesian (ind), Italian (ita), Japanese (jpn), Kannada (kan), Latvian (lav), Macedonian (mkd), Malay (msa), Malayalam (mal), Norwegian (nor), Polish (pol), Portuguese (por), Romanian (ron), Russian (rus), Slovak (slk), Spanish (spa), Swedish (swe), Turkish (tur), Ukrainian (ukr) and Vietnamese (vie).
High Accuracy (>5% to ≤10% WER)
Armenian (hye), Azerbaijani (aze), Bengali (ben), Cantonese (yue), Filipino (fil), Georgian (kat), Gujarati (guj), Hindi (hin), Kazakh (kaz), Lithuanian (lit), Maltese (mlt), Mandarin (cmn), Marathi (mar), Nepali (nep), Odia (ori), Persian (fas), Serbian (srp), Slovenian (slv), Swahili (swa), Tamil (tam) and Telugu (tel).
Good (>10% to ≤25% WER)
Afrikaans (afr), Arabic (ara), Assamese (asm), Asturian (ast), Burmese (mya), Hausa (hau), Hebrew (heb), Javanese (jav), Korean (kor), Kyrgyz (kir), Luxembourgish (ltz), Māori (mri), Occitan (oci), Punjabi (pan), Tajik (tgk), Thai (tha), Uzbek (uzb) and Welsh (cym).
Moderate (>25% to ≤50% WER)
Amharic (amh), Ganda (lug), Igbo (ibo), Irish (gle), Khmer (khm), Kurdish (kur), Lao (lao), Mongolian (mon), Northern Sotho (nso), Pashto (pus), Shona (sna), Sindhi (snd), Somali (som), Urdu (urd), Wolof (wol), Xhosa (xho), Yoruba (yor) and Zulu (zul).
FAQ
Can I use speech to text API with video files?
Yes, the API supports uploading both audio and video files for transcription.
What are the file size and duration limits for the Speech to Text API?
Files up to 3 GB in size and up to 10 hours in duration are supported.
Which audio and video formats are supported in the API?
The API supports the following audio and video formats:
- audio/aac
- audio/x-aac
- audio/x-aiff
- audio/ogg
- audio/mpeg
- audio/mp3
- audio/mpeg3
- audio/x-mpeg-3
- audio/opus
- audio/wav
- audio/x-wav
- audio/webm
- audio/flac
- audio/x-flac
- audio/mp4
- audio/aiff
- audio/x-m4a
Supported video formats include:
- video/mp4
- video/x-msvideo
- video/x-matroska
- video/quicktime
- video/x-ms-wmv
- video/x-flv
- video/webm
- video/mpeg
- video/3gpp
When will you support more languages?
ElevenLabs is constantly expanding the number of languages supported by our models. Please check back frequently for updates.
Does speech to text API support webhooks?
Yes, asynchronous transcription results can be sent to webhooks configured in the webhook settings in the UI. Learn more in the webhooks cookbook.
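For example, an asynchronous request might be made as in the sketch below. The "webhook" flag shown is an assumption; check the webhooks cookbook for the exact parameter and the payload delivered to your endpoint.

```python
import requests

# Sketch of an asynchronous transcription request. "webhook" is an
# assumed parameter name -- see the webhooks cookbook for the exact
# request shape and result payload.
response = requests.post(
    "https://api.elevenlabs.io/v1/speech-to-text",
    headers={"xi-api-key": "YOUR_API_KEY"},
    data={"model_id": "scribe_v1", "webhook": "true"},
    files={"file": open("podcast.mp3", "rb")},
)
print(response.json())  # returns immediately; the transcript is delivered to your webhook
```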
Is a multichannel transcription mode supported in the API?
Yes, the multichannel STT feature allows you to transcribe audio where each channel is processed independently and assigned a speaker ID based on its channel number. This feature supports up to 5 channels. Learn more in the multichannel transcription cookbook.
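As a sketch, enabling multichannel mode might look like the following; the "use_multi_channel" flag name is an assumption here, so follow the multichannel transcription cookbook for the exact request shape.

```python
import requests

# Sketch of a multichannel request. "use_multi_channel" is assumed to be
# the form field that enables per-channel transcription -- verify against
# the multichannel transcription cookbook.
response = requests.post(
    "https://api.elevenlabs.io/v1/speech-to-text",
    headers={"xi-api-key": "YOUR_API_KEY"},
    data={"model_id": "scribe_v1", "use_multi_channel": "true"},
    files={"file": open("stereo_call.wav", "rb")},
)
result = response.json()
print(result)  # each channel's words carry a speaker ID based on the channel number
```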
How does billing work for the speech to text API?
ElevenLabs charges for speech to text based on the duration of the audio sent for transcription. Billing is calculated per hour of audio, with rates varying by tier and model. See the API pricing page for detailed pricing information.