
createAgent

The core factory function that creates an AI agent backed by a Cloudflare Durable Object.

Signature

TypeScript
function createAgent(config: AgentConfig): DurableObject

AgentConfig

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `name` | `string` | Yes | Unique identifier for this agent type |
| `model` | `string` | Yes | LLM model identifier (see supported models below) |
| `system` | `string` | No | System prompt / instructions for the agent |
| `tools` | `Record<string, Tool>` | No | Named tools the agent can invoke |
| `memory` | `MemoryConfig \| 'tiered'` | No | Memory tier configuration |
| `observability` | `ObservabilityConfig` | No | Logging and tracing configuration |
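Only `name` and `model` are required. As a sketch only, with a local interface that approximates the table above (honidev's actual exported types may differ):

```typescript
// Illustrative sketch: this local interface approximates AgentConfig from
// the table above. It is NOT honidev's real type definition.
interface AgentConfigSketch {
  name: string                // required: unique agent identifier
  model: string               // required: LLM model ID
  system?: string             // optional system prompt
  memory?: 'tiered' | object  // optional memory configuration
}

// The two required fields alone are enough for a working agent.
const minimal: AgentConfigSketch = {
  name: 'echo-bot',
  model: 'claude-sonnet-4-5',
}
```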

Supported Models

Honi routes each request to the right provider automatically based on the model ID prefix. All non-core providers are optional peer dependencies, so they add zero bundle cost unless installed.

| Provider | Prefix | Example model | Env var |
| --- | --- | --- | --- |
| Anthropic | `claude-*` | `claude-sonnet-4-5` | `ANTHROPIC_API_KEY` |
| OpenAI | `gpt-*`, `o1`, `o3-*` | `gpt-4o`, `o3-mini` | `OPENAI_API_KEY` |
| Google | `gemini-*` | `gemini-2.5-flash-preview` | `GOOGLE_AI_API_KEY` |
| Groq | `groq/*` | `groq/llama-3.3-70b-versatile` | `GROQ_API_KEY` |
| DeepSeek | `deepseek-*` | `deepseek-chat`, `deepseek-reasoner` | `DEEPSEEK_API_KEY` |
| Mistral | `mistral-*`, `codestral-*` | `mistral-large-latest` | `MISTRAL_API_KEY` |
| xAI | `grok-*` | `grok-3`, `grok-3-mini` | `XAI_API_KEY` |
| Perplexity | `sonar*` | `sonar-pro`, `sonar-reasoning` | `PERPLEXITY_API_KEY` |
| Together AI | `together/*` | `together/meta-llama/Llama-3.3-70B-Instruct-Turbo` | `TOGETHER_API_KEY` |
| Cohere | `command-*` | `command-r-plus`, `command-a-03-2025` | `COHERE_API_KEY` |
| Azure OpenAI | `azure/*` | `azure/gpt-4o` | `AZURE_OPENAI_API_KEY` + `AZURE_OPENAI_ENDPOINT` |
| Workers AI | `@cf/*` | `@cf/meta/llama-3.1-8b-instruct` | `AI` binding (wrangler.toml) |
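The routing rule above is plain prefix matching on the model ID. As an illustration only (this is a sketch, not honidev's internal dispatch code), it can be expressed as:

```typescript
// Illustrative provider routing by model-ID prefix, mirroring the table
// above. Sketch only; not honidev's actual implementation.
const prefixRules: Array<[RegExp, string]> = [
  [/^claude-/, 'anthropic'],
  [/^(gpt-|o1|o3-)/, 'openai'],
  [/^gemini-/, 'google'],
  [/^groq\//, 'groq'],
  [/^deepseek-/, 'deepseek'],
  [/^(mistral-|codestral-)/, 'mistral'],
  [/^grok-/, 'xai'],
  [/^sonar/, 'perplexity'],
  [/^together\//, 'together'],
  [/^command-/, 'cohere'],
  [/^azure\//, 'azure'],
  [/^@cf\//, 'workers-ai'],
]

function providerFor(modelId: string): string {
  const hit = prefixRules.find(([re]) => re.test(modelId))
  if (!hit) throw new Error(`Unknown model prefix: ${modelId}`)
  return hit[1]
}
```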

Non-core providers require their AI SDK package:

Shell
npm install @ai-sdk/google      # Google Gemini
npm install @ai-sdk/groq        # Groq
npm install @ai-sdk/deepseek    # DeepSeek
npm install @ai-sdk/mistral     # Mistral
npm install @ai-sdk/xai         # xAI
npm install @ai-sdk/perplexity  # Perplexity
npm install @ai-sdk/togetherai  # Together AI
npm install @ai-sdk/cohere      # Cohere
npm install @ai-sdk/azure       # Azure OpenAI
npm install @ai-sdk/cloudflare  # Workers AI

HTTP Endpoints

A Honi agent automatically exposes the following HTTP endpoints:

POST /chat

Send a message to the agent and receive a response.

JSON
// Request
{
  "message": "What's the weather today?",
  "sessionId": "user-123"  // optional
}

// Response
{
  "response": "I don't have access to weather data...",
  "sessionId": "user-123",
  "toolCalls": []
}
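A minimal typed client for this endpoint might look like the following. The types are inferred from the request/response shapes above; the helper names and error handling are illustrative, not an official SDK:

```typescript
// Sketch of a typed client for POST /chat. Shapes follow the docs above;
// helper names are hypothetical, not part of honidev.
interface ChatResponse {
  response: string
  sessionId: string
  toolCalls: unknown[]
}

function chatBody(message: string, sessionId?: string): string {
  // sessionId is optional; omit it to let the agent allocate one
  return JSON.stringify(sessionId ? { message, sessionId } : { message })
}

async function chat(baseUrl: string, message: string, sessionId?: string): Promise<ChatResponse> {
  const res = await fetch(`${baseUrl}/chat`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: chatBody(message, sessionId),
  })
  if (!res.ok) throw new Error(`chat failed: ${res.status}`)
  return res.json() as Promise<ChatResponse>
}
```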

DELETE /chat

Reset the agent's conversation history for a given session.

JSON
{ "sessionId": "user-123" }
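A reset call can be sketched the same way; the endpoint shape is from the docs above, while the function name and error handling are illustrative:

```typescript
// Sketch of a session reset against DELETE /chat. Helper names are
// hypothetical, not part of honidev.
function resetBody(sessionId: string): string {
  return JSON.stringify({ sessionId })
}

async function resetSession(baseUrl: string, sessionId: string): Promise<void> {
  const res = await fetch(`${baseUrl}/chat`, {
    method: 'DELETE',
    headers: { 'Content-Type': 'application/json' },
    body: resetBody(sessionId),
  })
  if (!res.ok) throw new Error(`reset failed: ${res.status}`)
}
```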

Streaming (SSE)

Add Accept: text/event-stream to your POST /chat request to receive Server-Sent Events. Tokens are streamed as they are generated by the LLM:

Shell
curl -N -X POST http://localhost:8787/chat \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{"message": "Hello"}'

Each SSE event has a `type` field: `token`, `tool_call`, `tool_result`, or `done`.
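A client can consume the stream by parsing `data:` lines. The `type` values below come from the docs above; the payload field names (e.g. `value` on token events) are assumptions for illustration:

```typescript
// Minimal SSE line parser sketch. Event `type` values are documented above;
// other payload fields (e.g. `value`) are assumed, not confirmed.
type AgentEvent =
  | { type: 'token'; value: string }
  | { type: 'tool_call'; [key: string]: unknown }
  | { type: 'tool_result'; [key: string]: unknown }
  | { type: 'done' }

function parseSSELine(line: string): AgentEvent | null {
  // SSE frames carry their payload on `data:` lines; skip everything else
  // (blank separators, comments, `event:` fields).
  if (!line.startsWith('data:')) return null
  return JSON.parse(line.slice('data:'.length).trim()) as AgentEvent
}
```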

Full Example

TypeScript
import { createAgent, tool } from 'honidev'
import { z } from 'zod'

export const agent = createAgent({
  name: 'support-bot',
  model: 'claude-sonnet-4-20250514',
  system: 'You are a customer support agent.',
  memory: 'tiered',

  tools: {
    lookupOrder: tool({
      description: 'Look up a customer order',
      input: z.object({ orderId: z.string() }),
      async run({ orderId }, ctx) {
        return ctx.env.DB.prepare('SELECT * FROM orders WHERE id = ?')
          .bind(orderId).first()
      }
    }),
  },

  observability: {
    logLevel: 'info',
    aiGateway: { accountId: '...', gatewayId: 'my-gw' }
  }
})