# Integration with AI SDK
Elysia supports response streaming out of the box, letting you integrate with the Vercel AI SDK seamlessly.
## Response Streaming
Elysia supports continuous streaming of both `ReadableStream` and `Response`, allowing you to return streams from the AI SDK directly.
```ts
import { Elysia } from 'elysia'
import { streamText } from 'ai'
import { openai } from '@ai-sdk/openai'

new Elysia().get('/', () => {
    const stream = streamText({
        model: openai('gpt-5'),
        system: 'You are Yae Miko from Genshin Impact',
        prompt: 'Hi! How are you doing?'
    })

    // Just return a ReadableStream
    return stream.textStream

    // UI Message Stream is also supported
    return stream.toUIMessageStream()
})
```

Elysia will handle the stream automatically, allowing you to use it in various ways.
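Any web-standard `ReadableStream` works the same way; the AI SDK's `textStream` is just one source. A minimal sketch of the kind of stream Elysia accepts, with a hand-rolled stream standing in for the SDK (no server involved, just the stream itself):

```typescript
// A hand-rolled ReadableStream standing in for stream.textStream.
// Elysia forwards each enqueued chunk to the client as it arrives.
const stream = new ReadableStream<string>({
    start(controller) {
        for (const chunk of ['Hello', ', ', 'world'])
            controller.enqueue(chunk)
        controller.close()
    }
})

// Draining it the way a consumer (or Elysia) would:
const reader = stream.getReader()
const chunks: string[] = []
while (true) {
    const { done, value } = await reader.read()
    if (done) break
    chunks.push(value)
}

console.log(chunks.join(''))   // prints "Hello, world"
```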
## Server-Sent Events
Elysia also supports Server-Sent Events for streaming responses: simply wrap a `ReadableStream` with the `sse` function.
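For reference, each chunk ends up on the wire in standard SSE framing: `event:` and `data:` fields, terminated by a blank line. A tiny formatter illustrating that framing; `formatSSE` is a hypothetical helper, not Elysia's internal implementation:

```typescript
// Hypothetical helper illustrating SSE framing per the HTML spec:
// each field is "name: value\n", and a blank line ends the event.
function formatSSE({ event, data }: { event?: string; data?: string }): string {
    let out = ''
    if (event !== undefined) out += `event: ${event}\n`
    if (data !== undefined) out += `data: ${data}\n`
    return out + '\n'
}

// An event-stream response is just these frames concatenated:
const frame = formatSSE({ event: 'message', data: 'hi' })
// frame === 'event: message\ndata: hi\n\n'
```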
```ts
import { Elysia, sse } from 'elysia'
import { streamText } from 'ai'
import { openai } from '@ai-sdk/openai'

new Elysia().get('/', () => {
    const stream = streamText({
        model: openai('gpt-5'),
        system: 'You are Yae Miko from Genshin Impact',
        prompt: 'Hi! How are you doing?'
    })

    // Each chunk will be sent as a Server-Sent Event
    return sse(stream.textStream)

    // UI Message Stream is also supported
    return sse(stream.toUIMessageStream())
})
```

## As Response
If you don't need type safety from the stream for further usage with Eden, you can return the stream directly as a response.
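Conceptually, a `*StreamResponse` helper wraps the stream in a plain web `Response`, which Elysia returns verbatim. A stand-in sketch of that idea; this mirrors, but is not, the AI SDK's actual implementation:

```typescript
// A stand-in text stream in place of the AI SDK's output.
const textStream = new ReadableStream<string>({
    start(controller) {
        controller.enqueue('Hello, ')
        controller.enqueue('world')
        controller.close()
    }
})

// Wrapping a byte stream in a Response is fundamentally what a
// *StreamResponse helper does; Elysia streams the body as-is.
const response = new Response(
    textStream.pipeThrough(new TextEncoderStream()),
    { headers: { 'content-type': 'text/plain; charset=utf-8' } }
)

const body = await response.text()
console.log(body)   // prints "Hello, world"
```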
```ts
import { Elysia } from 'elysia'
import { streamText } from 'ai'
import { openai } from '@ai-sdk/openai'

new Elysia().get('/', () => {
    const stream = streamText({
        model: openai('gpt-5'),
        system: 'You are Yae Miko from Genshin Impact',
        prompt: 'Hi! How are you doing?'
    })

    return stream.toTextStreamResponse()

    // UI Message Stream Response will use SSE
    return stream.toUIMessageStreamResponse()
})
```

## Manual Streaming
If you want more control over the stream, you can use a generator function to yield the chunks manually.
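The handler in the example below is an async generator; Elysia drains it and sends each yielded value as a chunk. A minimal sketch of that shape, with a plain array in place of `stream.textStream` (no server or AI SDK involved):

```typescript
// The same shape as an async generator handler, minus Elysia and the AI SDK.
async function* handler() {
    const fakeTextStream = ['Hello', ', ', 'world']

    // Yield one event per chunk, then a terminal event.
    for (const data of fakeTextStream)
        yield { data, event: 'message' }

    yield { event: 'done' }
}

// Elysia consumes the generator with for await, like so:
const events: { data?: string; event: string }[] = []
for await (const e of handler()) events.push(e)

console.log(events.length)   // prints 4
```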
```ts
import { Elysia, sse } from 'elysia'
import { streamText } from 'ai'
import { openai } from '@ai-sdk/openai'

new Elysia().get('/', async function* () {
    const stream = streamText({
        model: openai('gpt-5'),
        system: 'You are Yae Miko from Genshin Impact',
        prompt: 'Hi! How are you doing?'
    })

    for await (const data of stream.textStream)
        yield sse({
            data,
            event: 'message'
        })

    yield sse({
        event: 'done'
    })
})
```

## Fetch
If the AI SDK doesn't support the model you're using, you can still call the provider's API with `fetch` and stream the response directly.
```ts
import { Elysia } from 'elysia'

new Elysia().get('/', () => {
    return fetch('https://api.openai.com/v1/chat/completions', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            Authorization: `Bearer ${process.env.OPENAI_API_KEY}`
        },
        body: JSON.stringify({
            model: 'gpt-5',
            stream: true,
            messages: [
                {
                    role: 'system',
                    content: 'You are Yae Miko from Genshin Impact'
                },
                { role: 'user', content: 'Hi! How are you doing?' }
            ]
        })
    })
})
```

Elysia will proxy the fetch response with streaming support automatically.
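On the consuming side, the proxied chunks follow SSE framing with JSON payloads and a terminal `data: [DONE]` line; this sketch assumes the OpenAI chat-completions streaming format, and `extractDelta` is a hypothetical helper:

```typescript
// Hypothetical helper: pull the text delta out of one SSE "data:" line,
// assuming the OpenAI chat-completions streaming payload shape.
function extractDelta(line: string): string | null {
    if (!line.startsWith('data: ')) return null
    const payload = line.slice('data: '.length)
    if (payload === '[DONE]') return null
    return JSON.parse(payload).choices?.[0]?.delta?.content ?? null
}

const lines = [
    'data: {"choices":[{"delta":{"content":"Hello"}}]}',
    'data: {"choices":[{"delta":{"content":", world"}}]}',
    'data: [DONE]'
]

const text = lines
    .map(extractDelta)
    .filter((d): d is string => d !== null)
    .join('')

console.log(text)   // prints "Hello, world"
```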
For additional information, please refer to the AI SDK documentation.