A TypeScript provider for the Vercel AI SDK that enables you to use Heroku's AI inference capabilities in your applications. This provider supports chat completions, embeddings, and image generation through Heroku's AI infrastructure.
- 🤖 Chat Completions: Support for Claude 4.5 Sonnet, Claude 4 Sonnet, Claude 3.7 Sonnet, Claude 3.5 Haiku, gpt-oss-120b, Nova Lite, Nova Pro, and other Heroku-hosted models
- 🧠 Embeddings: Generate vector embeddings using Cohere's multilingual embedding model
- 🖼️ Image Generation: Create images with Stability AI's Stable Image Ultra diffusion model
- 🔧 Tool Support: Function calling capabilities for building AI agents and assistants
- 📡 Streaming: Real-time streaming responses for chat completions
- 📘 TypeScript: Full TypeScript support with comprehensive type definitions
- ⚡ Dual Module Support: Compatible with both ESM and CommonJS projects
```bash
npm install heroku-ai-provider
# or
yarn add heroku-ai-provider
# or
pnpm add heroku-ai-provider
```

Before using this provider, you'll need:
- A Heroku account with access to AI services
- Heroku AI API keys for inference and/or embeddings
- The Vercel AI SDK installed in your project
```bash
npm install ai
```

Set your Heroku AI API keys as environment variables:

```bash
# For chat completions
INFERENCE_KEY=your_inference_api_key
# For embeddings
EMBEDDING_KEY=your_embedding_api_key
# For image generations
DIFFUSION_KEY=your_diffusion_api_key
# Optional: Custom API endpoints
INFERENCE_URL=https://us.inference.heroku.com
EMBEDDING_URL=https://us.inference.heroku.com
DIFFUSION_URL=https://us.inference.heroku.com
```

Use the default provider instance:

```typescript
import { heroku } from "heroku-ai-provider";

const model = heroku.chat("claude-4-sonnet");
```

Or create a configured client with explicit keys and endpoints:

```typescript
import { createHerokuAI } from "heroku-ai-provider";
const client = createHerokuAI({
  chatApiKey: "your_inference_api_key",
  embeddingsApiKey: "your_embedding_api_key",
  chatBaseUrl: "https://us.inference.heroku.com/v1/chat/completions",
  embeddingsBaseUrl: "https://us.inference.heroku.com/v1/embeddings",
  imageApiKey: "your_diffusion_api_key",
  imageBaseUrl: "https://us.inference.heroku.com/v1/images/generations",
});
```

Generate text from a prompt:

```typescript
import { generateText } from "ai";
import { heroku } from "heroku-ai-provider";
const { text } = await generateText({
  model: heroku.chat("claude-4-sonnet"),
  prompt: "What is the capital of France?",
});

console.log(text); // "The capital of France is Paris."
```

Stream responses token by token:

```typescript
import { streamText } from "ai";
import { heroku } from "heroku-ai-provider";
const { textStream } = streamText({
  model: heroku.chat("claude-3-haiku"),
  prompt: "Write a short story about a robot learning to paint.",
});

for await (const delta of textStream) {
  process.stdout.write(delta);
}
```

Set a system prompt to steer the model:

```typescript
import { generateText } from "ai";
import { heroku } from "heroku-ai-provider";
const { text } = await generateText({
  model: heroku.chat("claude-4-sonnet"),
  system: "You are a helpful assistant that explains complex topics simply.",
  prompt: "Explain quantum computing",
});
```

Define tools the model can call:

```typescript
import { generateText, tool, stepCountIs } from "ai";
import { heroku } from "heroku-ai-provider";
import { z } from "zod";
const { text } = await generateText({
  model: heroku.chat("claude-4-sonnet"),
  prompt: "What is the weather like in New York?",
  tools: {
    getWeather: tool({
      description: "Get the current weather for a location",
      inputSchema: z.object({
        location: z.string().describe("The city and state"),
      }),
      execute: async ({ location }) => {
        // Simulate weather API call
        return {
          location,
          temperature: 72,
          condition: "sunny",
        };
      },
    }),
  },
  stopWhen: stepCountIs(5),
});
```

Chain multiple tools in one conversation:

```typescript
import { generateText, tool, stepCountIs } from "ai";
import { heroku } from "heroku-ai-provider";
import { z } from "zod";
const { text, steps } = await generateText({
  model: heroku.chat("claude-4-sonnet"),
  prompt:
    "Check the weather in New York and then suggest appropriate clothing.",
  tools: {
    getWeather: tool({
      description: "Get the current weather for a location",
      inputSchema: z.object({
        location: z.string().describe("The city and state"),
      }),
      execute: async ({ location }) => {
        return {
          location,
          temperature: 45,
          condition: "rainy",
          humidity: 80,
        };
      },
    }),
    suggestClothing: tool({
      description: "Suggest appropriate clothing based on weather conditions",
      inputSchema: z.object({
        temperature: z.number().describe("Temperature in Fahrenheit"),
        condition: z.string().describe("Weather condition"),
        humidity: z.number().optional().describe("Humidity percentage"),
      }),
      execute: async ({ temperature, condition, humidity }) => {
        return {
          suggestions: [
            "Waterproof jacket",
            "Warm layers",
            "Waterproof shoes",
            "Umbrella",
          ],
          reasoning: `Given ${temperature}°F and ${condition} weather${humidity ? ` with ${humidity}% humidity` : ""}, you'll want to stay warm and dry.`,
        };
      },
    }),
  },
  stopWhen: stepCountIs(5),
});

console.log("Final response:", text);
console.log("Tool execution steps:", steps.length);
```

Generate a single embedding:

```typescript
import { embed } from "ai";
import { heroku } from "heroku-ai-provider";
const { embedding } = await embed({
  model: heroku.embedding("cohere-embed-multilingual"),
  value: "Hello, world!",
});

console.log(embedding); // [0.1, 0.2, -0.3, ...]
```

Embed multiple values at once:

```typescript
import { embedMany } from "ai";
import { heroku } from "heroku-ai-provider";
const { embeddings } = await embedMany({
  model: heroku.embedding("cohere-embed-multilingual"),
  values: ["First document", "Second document", "Third document"],
});
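// Embedding vectors are plain number arrays, so you can rank documents by
// cosine similarity (a hypothetical helper sketched here, not part of this
// provider):
const cosine = (a: number[], b: number[]) => {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
};
// e.g. cosine(embeddings[0], embeddings[1]) compares the first two documents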
console.log(embeddings.length); // 3
```

Or build a reusable embed helper:

```typescript
import { createEmbedFunction } from "heroku-ai-provider";

// Create a reusable embed function
const embedText = createEmbedFunction({
  apiKey: process.env.EMBEDDING_KEY!,
  model: "cohere-embed-multilingual",
});

const embedding = await embedText("Hello, world!");
console.log(embedding); // [0.1, 0.2, -0.3, ...]
```

Generate an image with Stable Image Ultra:

```typescript
import { experimental_generateImage as generateImage } from "ai";
import { heroku } from "heroku-ai-provider";
const result = await generateImage({
  model: heroku.image("stable-image-ultra"),
  prompt: "A watercolor illustration of a lighthouse at sunrise",
  size: "1024x1024",
});

const imageBytes = result.image.uint8Array;
console.log("Generated image bytes length:", imageBytes.length);
```

All provider settings are optional and fall back to environment variables:

```typescript
interface HerokuProviderSettings {
  // API keys (fall back to environment variables)
  chatApiKey?: string; // INFERENCE_KEY
  embeddingsApiKey?: string; // EMBEDDING_KEY
  imageApiKey?: string; // DIFFUSION_KEY

  // Base URLs (fall back to environment variables or defaults)
  chatBaseUrl?: string; // INFERENCE_URL
  embeddingsBaseUrl?: string; // EMBEDDING_URL
  imageBaseUrl?: string; // DIFFUSION_URL
}
```

Supported chat models:

- `claude-4-sonnet` - Claude 4 Sonnet by Anthropic
- `claude-4-5-sonnet` - Claude 4.5 Sonnet by Anthropic
- `claude-3-haiku` - Claude 3 Haiku by Anthropic
- `claude-3-7-sonnet` - Claude 3.7 Sonnet by Anthropic
- `claude-3-5-haiku` - Claude 3.5 Haiku by Anthropic
- `claude-3-5-sonnet-latest` - Claude 3.5 Sonnet by Anthropic
- `gpt-oss-120b` - gpt-oss-120b by OpenAI
- `nova-lite` - Nova Lite by Amazon
- `nova-pro` - Nova Pro by Amazon

Supported embedding models:

- `cohere-embed-multilingual` - Multilingual embedding model by Cohere

Supported image models:

- `stable-image-ultra` - Stable Image Ultra diffusion model by Stability AI

Use the provider in a Next.js App Router route:

```typescript
// app/api/chat/route.ts
import { streamText, stepCountIs } from "ai";
import { heroku } from "heroku-ai-provider";
export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: heroku.chat("claude-4-sonnet"),
    messages,
    stopWhen: stepCountIs(5), // Enable multi-step tool conversations
  });

  return result.toUIMessageStreamResponse();
}
```

Add tools to the route handler:

```typescript
// app/api/chat/route.ts
import { streamText, stepCountIs, tool } from "ai";
import { heroku } from "heroku-ai-provider";
import { z } from "zod";
export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: heroku.chat("claude-4-sonnet"),
    messages,
    tools: {
      getTime: tool({
        description: "Get the current time",
        inputSchema: z.object({
          timezone: z
            .string()
            .optional()
            .describe("Timezone (e.g., 'America/New_York')"),
        }),
        execute: async ({ timezone = "UTC" }) => {
          return {
            time: new Date().toLocaleString("en-US", { timeZone: timezone }),
            timezone,
          };
        },
      }),
    },
    stopWhen: stepCountIs(5),
  });

  return result.toUIMessageStreamResponse();
}
```

Use the provider with Express:

```typescript
import express from "express";
import { generateText } from "ai";
import { heroku } from "heroku-ai-provider";
const app = express();
app.use(express.json()); // Parse JSON request bodies

app.post("/chat", async (req, res) => {
  const { prompt } = req.body;

  const { text } = await generateText({
    model: heroku.chat("claude-3-haiku"),
    prompt,
  });

  res.json({ response: text });
});
```

The provider includes comprehensive error handling with user-friendly error messages:

```typescript
import { generateText } from "ai";
import {
  heroku,
  isConfigurationError,
  isTemporaryServiceError,
} from "heroku-ai-provider";
try {
  const result = await generateText({
    model: heroku.chat("claude-4-sonnet"),
    prompt: "Hello!",
  });
} catch (error) {
  if (isConfigurationError(error)) {
    console.error("Configuration error:", error.message);
    // Handle API key or URL configuration issues
  } else if (isTemporaryServiceError(error)) {
    console.error("Service error:", error.message);
    // Handle temporary service issues (retry logic)
  } else {
    console.error("Unexpected error:", error);
  }
}
```

Common issues and solutions:

- Issue: "Chat API key is required" or "Embeddings API key is required"
- Solution: Ensure your API keys are set in environment variables or passed directly to `createHerokuAI()`
- Issue: "Unsupported chat model" or "Unsupported embedding model"
- Solution: Check that you're using a supported model from the list above
- Issue: Connection timeouts or network failures
- Solution: Verify your internet connection and that Heroku's AI services are accessible
- Issue: "Invalid URL format" errors
- Solution: Ensure custom URLs are valid and use HTTP/HTTPS protocol
- Issue: Tools are called but the model doesn't provide a final response
- Solution: Ensure you configure `stopWhen` (for example, `stopWhen: stepCountIs(5)`) so the model can complete multi-step tool conversations
- Issue: "Unrecognized request argument" errors when using tools
- Solution: This provider automatically filters out problematic schema properties (like `$schema`) that some validation libraries add
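For temporary service errors, the retry logic can be a small wrapper with exponential backoff. Here is a sketch; `withRetries` and its delays are illustrative (not part of this provider), and in real usage the predicate would be `isTemporaryServiceError` from `heroku-ai-provider`:

```typescript
// Hypothetical retry helper for transient failures; pass
// `isTemporaryServiceError` as the `isRetryable` predicate in real usage.
async function withRetries<T>(
  fn: () => Promise<T>,
  isRetryable: (err: unknown) => boolean,
  maxAttempts = 3,
  baseDelayMs = 250,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts || !isRetryable(err)) throw err;
      // Exponential backoff: 250ms, 500ms, 1000ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}

// Example with a flaky function that fails twice, then succeeds.
let calls = 0;
const flaky = async () => {
  calls += 1;
  if (calls < 3) throw new Error("temporarily unavailable");
  return "ok";
};

withRetries(flaky, (err) => err instanceof Error, 5, 10).then((result) => {
  console.log(result, "after", calls, "attempts"); // ok after 3 attempts
});
```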
We welcome contributions! Please follow these steps:

1. Fork the repository
2. Create a feature branch: `git checkout -b feature/amazing-feature`
3. Make your changes and add tests
4. Ensure tests pass: `npm test`
5. Lint your code: `npm run lint`
6. Commit your changes: `git commit -m 'Add amazing feature'`
7. Push to the branch: `git push origin feature/amazing-feature`
8. Open a Pull Request
```bash
# Clone the repository
git clone https://github.com/julianduque/heroku-ai-provider.git
cd heroku-ai-provider

# Install dependencies
pnpm install

# Run tests
pnpm test

# Build the project
pnpm build

# Lint code
pnpm lint
```

The project uses Jest for testing. Run tests with:
```bash
# Run all tests
pnpm test

# Run tests in watch mode
pnpm test --watch

# Run tests with coverage
pnpm test --coverage
```

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
- 📚 Documentation: Heroku AI Documentation
- 🐛 Issues: GitHub Issues
- Vercel AI SDK - The AI SDK this provider integrates with
- Heroku CLI - Command line interface for Heroku
- Heroku AI Services - Official Heroku AI documentation