Streaming and Caching with Supabase

Generate and stream speech through Supabase Edge Functions. Store speech in Supabase Storage and cache responses via built-in CDN.

Introduction

In this tutorial you will learn how to build an edge API to generate, stream, store, and cache speech using Supabase Edge Functions, Supabase Storage, and ElevenLabs.

Prefer to jump straight to the code?

Find the example project on GitHub.

Requirements

- An ElevenLabs account with an API key.
- A Supabase account.
- The Supabase CLI installed on your machine.

Setup

Create a Supabase project locally

After installing the Supabase CLI, run the following command to create a new Supabase project locally:

$ supabase init

Configure the storage bucket

You can configure the Supabase CLI to automatically generate a storage bucket by adding this configuration in the config.toml file:

./supabase/config.toml
[storage.buckets.audio]
public = false
file_size_limit = "50MiB"
allowed_mime_types = ["audio/mp3"]
objects_path = "./audio"

Upon running supabase start this will create a new storage bucket in your local Supabase project. Should you want to push this to your hosted Supabase project, you can run supabase seed buckets --linked.
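
For reference, the two commands mentioned above look like this (the --linked flag assumes you have already linked the local project to a hosted one, which is covered in the deploy section below):

$ supabase start
$ supabase seed buckets --linked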

Configure background tasks for Supabase Edge Functions

To use background tasks in Supabase Edge Functions when developing locally, you need to add the following configuration in the config.toml file:

./supabase/config.toml
[edge_runtime]
policy = "per_worker"

When running with the per_worker policy, the function won’t auto-reload on edits. You will need to restart it manually by running supabase functions serve.

Create a Supabase Edge Function for Speech generation

Create a new Edge Function by running the following command:

$ supabase functions new text-to-speech

If you’re using VS Code or Cursor, select y when the CLI prompts “Generate VS Code settings for Deno? [y/N]”.

Set up the environment variables

Within the supabase/functions directory, create a new .env file and add the following variables:

supabase/functions/.env
# Find / create an API key at https://elevenlabs.io/app/settings/api-keys
ELEVENLABS_API_KEY=your_api_key

Dependencies

The project uses a couple of dependencies:

- The @supabase/supabase-js library to interact with Supabase Storage.
- The elevenlabs SDK to generate speech via the ElevenLabs API.
- The object-hash package to generate a hash from the request parameters.

Since Supabase Edge Functions use the Deno runtime, you don’t need to install the dependencies; rather, you can import them via the npm: and jsr: prefixes.

Code the Supabase Edge Function

In your newly created supabase/functions/text-to-speech/index.ts file, add the following code:

supabase/functions/text-to-speech/index.ts
// Setup type definitions for built-in Supabase Runtime APIs
import 'jsr:@supabase/functions-js/edge-runtime.d.ts';
import { createClient } from 'jsr:@supabase/supabase-js@2';
import { ElevenLabsClient } from 'npm:elevenlabs';
import * as hash from 'npm:object-hash';

const supabase = createClient(
  Deno.env.get('SUPABASE_URL')!,
  Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!
);

const client = new ElevenLabsClient({
  apiKey: Deno.env.get('ELEVENLABS_API_KEY'),
});

// Upload audio to Supabase Storage in a background task
async function uploadAudioToStorage(stream: ReadableStream, requestHash: string) {
  const { data, error } = await supabase.storage
    .from('audio')
    .upload(`${requestHash}.mp3`, stream, {
      contentType: 'audio/mp3',
    });

  console.log('Storage upload result', { data, error });
}

Deno.serve(async (req) => {
  // To secure your function for production, you can for example validate the request origin,
  // or append a user access token and validate it with Supabase Auth.
  console.log('Request origin', req.headers.get('host'));
  const url = new URL(req.url);
  const params = new URLSearchParams(url.search);
  const text = params.get('text');
  const voiceId = params.get('voiceId') ?? 'JBFqnCBsd6RMkjVDRZzb';

  const requestHash = hash.MD5({ text, voiceId });
  console.log('Request hash', requestHash);

  // Check storage for existing audio file
  const { data } = await supabase.storage.from('audio').createSignedUrl(`${requestHash}.mp3`, 60);

  if (data) {
    console.log('Audio file found in storage', data);
    const storageRes = await fetch(data.signedUrl);
    if (storageRes.ok) return storageRes;
  }

  if (!text) {
    return new Response(JSON.stringify({ error: 'Text parameter is required' }), {
      status: 400,
      headers: { 'Content-Type': 'application/json' },
    });
  }

  try {
    console.log('ElevenLabs API call');
    const response = await client.textToSpeech.convertAsStream(voiceId, {
      output_format: 'mp3_44100_128',
      model_id: 'eleven_multilingual_v2',
      text,
    });

    const stream = new ReadableStream({
      async start(controller) {
        for await (const chunk of response) {
          controller.enqueue(chunk);
        }
        controller.close();
      },
    });

    // Branch stream to Supabase Storage
    const [browserStream, storageStream] = stream.tee();

    // Upload to Supabase Storage in the background
    EdgeRuntime.waitUntil(uploadAudioToStorage(storageStream, requestHash));

    // Return the streaming response immediately
    return new Response(browserStream, {
      headers: {
        'Content-Type': 'audio/mpeg',
      },
    });
  } catch (error) {
    console.log('error', { error });
    return new Response(JSON.stringify({ error: error.message }), {
      status: 500,
      headers: { 'Content-Type': 'application/json' },
    });
  }
});

Code deep dive

There are a couple of things worth noting about the code. Let’s walk through it step by step.

1. Handle the incoming request

To handle the incoming request, use the Deno.serve handler. In this demo the request origin isn’t validated, but in production you could, for example, validate the request origin, or append a user access token and validate it with Supabase Auth.

From the incoming request, the function extracts the text and voiceId parameters. The voiceId parameter is optional and defaults to a preset ElevenLabs voice ID (JBFqnCBsd6RMkjVDRZzb).

Using the object-hash library, the function generates a hash from the request parameters. This hash is used to check for existing audio files in Supabase Storage.

Deno.serve(async (req) => {
  // To secure your function for production, you can for example validate the request origin,
  // or append a user access token and validate it with Supabase Auth.
  console.log("Request origin", req.headers.get("host"));
  const url = new URL(req.url);
  const params = new URLSearchParams(url.search);
  const text = params.get("text");
  const voiceId = params.get("voiceId") ?? "JBFqnCBsd6RMkjVDRZzb";

  const requestHash = hash.MD5({ text, voiceId });
  console.log("Request hash", requestHash);

  // ...
})

2. Check for existing audio file in Supabase Storage

Supabase Storage comes with a smart CDN built in, allowing you to easily cache and serve your files.

Here, the function checks for an existing audio file in Supabase Storage. If the file exists, the function returns the file from Supabase Storage.

const { data } = await supabase
  .storage
  .from("audio")
  .createSignedUrl(`${requestHash}.mp3`, 60);

if (data) {
  console.log("Audio file found in storage", data);
  const storageRes = await fetch(data.signedUrl);
  if (storageRes.ok) return storageRes;
}

3. Generate speech as a stream and split into two branches

Using the streaming capabilities of the ElevenLabs API, the function generates a stream. The benefit here is that even for larger text, you can start streaming the audio back to your user immediately, and then upload the stream to Supabase Storage in the background.

This allows for the best possible user experience, making even large text blocks feel magically quick. The magic happens in the stream.tee() call, which splits the ReadableStream into two branches: one for the browser and one for Supabase Storage.

try {
  const response = await client.textToSpeech.convertAsStream(voiceId, {
    output_format: "mp3_44100_128",
    model_id: "eleven_multilingual_v2",
    text,
  });

  const stream = new ReadableStream({
    async start(controller) {
      for await (const chunk of response) {
        controller.enqueue(chunk);
      }
      controller.close();
    },
  });

  // Branch stream to Supabase Storage
  const [browserStream, storageStream] = stream.tee();

  // Upload to Supabase Storage in the background
  EdgeRuntime.waitUntil(uploadAudioToStorage(storageStream, requestHash));

  // Return the streaming response immediately
  return new Response(browserStream, {
    headers: {
      "Content-Type": "audio/mpeg",
    },
  });
} catch (error) {
  console.log("error", { error });
  return new Response(JSON.stringify({ error: error.message }), {
    status: 500,
    headers: { "Content-Type": "application/json" },
  });
}

4. Upload the audio stream to Supabase Storage in the background

The EdgeRuntime.waitUntil call in the previous step uploads the audio stream to Supabase Storage in the background using the uploadAudioToStorage function. This allows the function to return the streaming response to the browser immediately, while the audio is being uploaded to Supabase Storage.

Once the storage object has been created, the next time your users make a request with the same parameters, the function will return the audio file from the Supabase Storage CDN.

// Upload audio to Supabase Storage in a background task
async function uploadAudioToStorage(
  stream: ReadableStream,
  requestHash: string,
) {
  const { data, error } = await supabase.storage
    .from("audio")
    .upload(`${requestHash}.mp3`, stream, {
      contentType: "audio/mp3",
    });

  console.log("Storage upload result", { data, error });
}

Run locally

To run the function locally, run the following commands:

$ supabase start

Once the local Supabase stack is up and running, run the following command to start the function and observe the logs:

$ supabase functions serve

Try it out

Navigate to http://127.0.0.1:54321/functions/v1/text-to-speech?text=hello%20world to hear the function in action.

Afterwards, navigate to http://127.0.0.1:54323/project/default/storage/buckets/audio to see the audio file in your local Supabase Storage bucket.
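
If you prefer testing from the command line, here is a minimal sketch of a Deno script (assuming the default local API port 54321 and the function name used in this tutorial; the file and variable names are illustrative) that fetches the generated audio and writes it to disk:

// try-it-out.ts — run with: deno run --allow-net --allow-write try-it-out.ts
const url = new URL('http://127.0.0.1:54321/functions/v1/text-to-speech');
url.searchParams.set('text', 'hello world');

const res = await fetch(url);
if (!res.ok) {
  console.error('Request failed', res.status, await res.text());
  Deno.exit(1);
}

// Buffer the streamed MP3 response and write it to disk
const audio = new Uint8Array(await res.arrayBuffer());
await Deno.writeFile('hello-world.mp3', audio);
console.log(`Wrote hello-world.mp3 (${audio.byteLength} bytes)`);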

Deploy to Supabase

If you haven’t already, create a new Supabase project at database.new and link your local project to it:

$ supabase link

Once done, run the following command to deploy the function:

$ supabase functions deploy

Set the function secrets

Now that you have all your secrets set locally, you can run the following command to set the secrets in your Supabase project:

$ supabase secrets set --env-file supabase/functions/.env
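
To confirm the secret was applied, you can optionally list the secrets of the linked project:

$ supabase secrets list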

Test the function

The function is designed so that it can be used directly as the source for an <audio> element.

<audio
  src="https://${SUPABASE_PROJECT_REF}.supabase.co/functions/v1/text-to-speech?text=Hello%2C%20world!&voiceId=JBFqnCBsd6RMkjVDRZzb"
  controls
/>
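
If you want to trigger playback programmatically instead, a small browser-side sketch like the following also works (the project ref constant and the playSpeech helper are illustrative, not part of the example project):

// Hypothetical helper: build the function URL and play the generated speech in the browser
const SUPABASE_PROJECT_REF = 'your-project-ref'; // replace with your own project ref

function playSpeech(text: string, voiceId = 'JBFqnCBsd6RMkjVDRZzb') {
  const url = new URL(`https://${SUPABASE_PROJECT_REF}.supabase.co/functions/v1/text-to-speech`);
  url.searchParams.set('text', text);
  url.searchParams.set('voiceId', voiceId);

  // The response is streamed, so playback can start before generation finishes
  const audio = new Audio(url.toString());
  return audio.play();
}

playSpeech('Hello, world!');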

You can find an example frontend implementation in the complete code example on GitHub.
