Key Features
- Unified AI SDK: Built on top of Vercel’s AI SDK for consistent AI interactions
- OpenRouter Integration: Access multiple LLMs through a single API
- Server Actions: Type-safe AI operations with Next.js server actions
- Streaming Support: Real-time AI responses with streaming capabilities
- Type Safety: Full TypeScript support for AI operations
- RAG Support: Built-in Retrieval Augmented Generation capabilities
- Embeddings: Text embeddings and semantic search functionality
- Custom Models: Extensible model registry for custom AI providers
Getting Started
To start using AI features in your application:
- Set up your environment variables
- Configure OpenRouter
- Use the AI SDK components
- Implement server actions for AI operations
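The first two steps above can be sketched as a small startup helper. This is an illustrative sketch, not part of the AI SDK: the `loadOpenRouterConfig` function, the `OPENROUTER_MODEL` variable, and the default model id are assumptions; only the base URL (OpenRouter's OpenAI-compatible endpoint) and the `OPENROUTER_API_KEY` variable name follow common convention.

```typescript
// Illustrative configuration helper; function and field names are
// assumptions, not part of the Vercel AI SDK or OpenRouter client.
interface OpenRouterConfig {
  apiKey: string;
  baseURL: string;
  defaultModel: string;
}

function loadOpenRouterConfig(
  env: Record<string, string | undefined>,
): OpenRouterConfig {
  const apiKey = env.OPENROUTER_API_KEY;
  if (!apiKey) {
    // Fail fast at startup rather than on the first AI request.
    throw new Error("OPENROUTER_API_KEY is not set");
  }
  return {
    apiKey,
    // OpenRouter exposes an OpenAI-compatible endpoint at this base URL.
    baseURL: "https://openrouter.ai/api/v1",
    // Hypothetical default; override via OPENROUTER_MODEL.
    defaultModel: env.OPENROUTER_MODEL ?? "openai/gpt-4o-mini",
  };
}
```

In a Next.js app this would typically be called once on the server with `process.env`, and the resulting config passed to the OpenRouter provider and to server actions.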
Available AI Features
Basic Features
- Chat interfaces
- Text generation
- Image generation
- Structured data extraction
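A chat interface ultimately manages a typed list of role-tagged messages. The sketch below mirrors the role/content shape used by most chat APIs; the type and helper names are illustrative, not the AI SDK's own types.

```typescript
// Minimal chat-message types mirroring the common role/content shape;
// a sketch, not the AI SDK's exported types.
type Role = "system" | "user" | "assistant";

interface ChatMessage {
  role: Role;
  content: string;
}

// Append a message immutably so React state updates stay predictable.
function appendMessage(
  history: ChatMessage[],
  role: Role,
  content: string,
): ChatMessage[] {
  return [...history, { role, content }];
}
```

A chat component would keep this array in state, append the user's message on submit, send the full history to the model, and append the assistant's reply.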
Advanced Features
- RAG (Retrieval Augmented Generation)
- Semantic search with embeddings
- Custom model integration
- Advanced streaming with progress tracking
- Circuit breaker and retry patterns
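Semantic search over embeddings reduces to comparing vectors by cosine similarity. The sketch below hard-codes toy vectors; in practice they would come from an embeddings API, and the `topMatch` helper is an illustrative name, not a library function.

```typescript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the id of the stored document most similar to the query vector.
function topMatch(
  query: number[],
  docs: { id: string; vector: number[] }[],
): string {
  return docs.reduce((best, d) =>
    cosineSimilarity(query, d.vector) > cosineSimilarity(query, best.vector)
      ? d
      : best,
  ).id;
}
```

A RAG pipeline applies the same comparison at scale: embed the query, rank stored chunks by similarity, and pass the top chunks to the model as context.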
Architecture
The AI integration follows a modular architecture.
Best Practices
- Model Selection
  - Choose models based on your use case
  - Consider cost and performance trade-offs
  - Use the most efficient model for your needs
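One way to keep model selection in one place is a small registry keyed by use case. The registry below is hypothetical: the use-case names, tiers, and model ids are illustrative placeholders, not a mapping this project ships.

```typescript
// Hypothetical model registry mapping use cases to OpenRouter model ids.
// The ids and cost notes here are illustrative, not recommendations.
type UseCase = "chat" | "summarization" | "extraction";

const modelRegistry: Record<UseCase, { model: string; note: string }> = {
  chat: { model: "openai/gpt-4o-mini", note: "low latency, low cost" },
  summarization: { model: "anthropic/claude-3-haiku", note: "cheap long-context" },
  extraction: { model: "openai/gpt-4o", note: "stronger structured output" },
};

function pickModel(useCase: UseCase): string {
  return modelRegistry[useCase].model;
}
```

Centralizing the mapping makes cost/performance trade-offs auditable and lets you swap a model in one place instead of hunting through call sites.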
- Error Handling
  - Implement proper error boundaries
  - Use retry logic for transient failures
  - Monitor API usage and errors
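Retry logic for transient failures can be a small generic wrapper around any async call. This is a sketch with illustrative defaults (three attempts, exponential backoff starting at 100 ms); production code would also distinguish retryable errors, such as rate limits, from permanent ones.

```typescript
// Retry an async operation with exponential backoff.
// Attempt count and base delay are illustrative defaults.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Exponential backoff: 100ms, 200ms, 400ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

A circuit breaker composes naturally on top of this: after repeated failures it stops issuing calls for a cooldown period instead of retrying, protecting both your app and the upstream API.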
- Performance
  - Use streaming for long-running operations
  - Implement proper caching strategies
  - Monitor response times and costs
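One caching strategy for repeated prompts is a small in-memory cache with a time-to-live, so identical requests within a window skip the API call entirely. The class below is a minimal sketch; the name and TTL are illustrative, and a real deployment might use a shared store instead of process memory.

```typescript
// Minimal in-memory cache with per-entry TTL; a sketch, not a library API.
class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();

  constructor(private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      // Lazily evict expired entries on read.
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}
```

Keying the cache on the full prompt (or a hash of it) plus the model id avoids serving one model's answer for another's request.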
- Security
  - Validate all inputs and outputs
  - Implement proper access controls
  - Monitor for abuse and rate limiting
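Input validation at the server-action boundary can start with a hand-rolled check like the one below. The function name and length limit are illustrative; in practice a schema library such as zod is a common choice for this, and outputs deserve the same scrutiny before they reach the UI.

```typescript
// Hand-rolled prompt validation sketch; the limit is an illustrative value.
const MAX_PROMPT_LENGTH = 4000;

function validatePrompt(input: unknown): string {
  if (typeof input !== "string") {
    throw new Error("Prompt must be a string");
  }
  const trimmed = input.trim();
  if (trimmed.length === 0) {
    throw new Error("Prompt must not be empty");
  }
  if (trimmed.length > MAX_PROMPT_LENGTH) {
    throw new Error("Prompt exceeds maximum length");
  }
  return trimmed;
}
```

Rejecting malformed input before it reaches the model both blocks abuse vectors and avoids paying for tokens on requests that could never succeed.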