Cloudflare Docs

Recipes

This section provides practical examples and recipes for common use cases. These examples use the Workers Binding, but they can easily be adapted to use the REST API instead.
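For reference, the REST variant of a search call is a single authenticated POST. The sketch below only builds the request; the endpoint path (`/autorag/rags/{rag}/search`) and the bearer-token handling are assumptions, so verify them against the Cloudflare API reference before use.

```typescript
// Sketch: build an AutoRAG search request for the REST API.
// The endpoint path and auth scheme are assumptions — check the
// Cloudflare API reference for the exact shape.
export function buildSearchRequest(
  accountId: string,
  ragName: string,
  query: string,
  apiToken: string,
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: `https://api.cloudflare.com/client/v4/accounts/${accountId}/autorag/rags/${ragName}/search`,
    init: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ query }),
    },
  };
}
```

You would then pass the result to `fetch(url, init)` and read the JSON body, instead of calling `env.AI.autorag(...).search(...)`.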

Bring your own model

You can use AutoRAG for search while leveraging a model outside of Workers AI to generate responses. Here is an example of how you can use an OpenAI model to generate your responses.

import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

export interface Env {
  AI: Ai;
  OPENAI_API_KEY: string;
}

export default {
  async fetch(request, env): Promise<Response> {
    // Parse the incoming URL
    const url = new URL(request.url);

    // Get the user query, or fall back to a default one
    const userQuery =
      url.searchParams.get('query') ?? 'How do I train a llama to deliver coffee?';

    // Search for matching documents in AutoRAG
    const searchResult = await env.AI.autorag('my-rag').search({ query: userQuery });

    if (searchResult.data.length === 0) {
      // No matching documents
      return Response.json({ text: `No data found for query "${userQuery}"` });
    }

    // Join all document chunks into a single string
    const chunks = searchResult.data
      .map((item) => {
        const data = item.content
          .map((content) => content.text)
          .join('\n\n');
        return `<file name="${item.filename}">${data}</file>`;
      })
      .join('\n\n');

    // Send the user query plus the matched documents to OpenAI for an answer
    const generateResult = await generateText({
      model: openai('gpt-4o-mini'),
      messages: [
        { role: 'system', content: 'You are a helpful assistant and your task is to answer the user question using the provided files.' },
        { role: 'user', content: chunks },
        { role: 'user', content: userQuery },
      ],
    });

    // Return the generated answer
    return Response.json({ text: generateResult.text });
  },
} satisfies ExportedHandler<Env>;

Simple search engine

Using the search method, you can implement a simple but fast search engine.

To replicate this example, remember to:

  • Disable rewrite_query, as you want to match the original user query
  • Configure your AutoRAG to use small chunk sizes; 256 tokens is usually enough
export interface Env {
  AI: Ai;
}

export default {
  async fetch(request, env): Promise<Response> {
    const url = new URL(request.url);
    const userQuery =
      url.searchParams.get('query') ?? 'How do I train a llama to deliver coffee?';

    // Search with query rewriting disabled so results match the original query
    const searchResult = await env.AI.autorag('my-rag').search({
      query: userQuery,
      rewrite_query: false,
    });

    return Response.json({
      files: searchResult.data.map((obj) => obj.filename),
    });
  },
} satisfies ExportedHandler<Env>;
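If you want to surface only the strongest matches, you can post-process the results before returning them. This is a sketch: it assumes each result item carries a numeric `score` field alongside `filename`, and the 0.5 threshold is an arbitrary assumption you should tune for your data.

```typescript
// Sketch: keep only results above a score threshold, best matches first.
// The `score` field and the 0.5 default threshold are assumptions —
// verify the response shape and tune the cutoff for your data.
interface SearchItem {
  filename: string;
  score: number;
}

export function topMatches(items: SearchItem[], minScore = 0.5): string[] {
  return items
    .filter((item) => item.score >= minScore)
    .sort((a, b) => b.score - a.score)
    .map((item) => item.filename);
}
```

You would then return `Response.json({ files: topMatches(searchResult.data) })` instead of the unfiltered filename list.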