# Supported LLMs
Vantage supports multiple large language model (LLM) providers through Intuidy AI. You can use the built-in default provider, or bring your own API key for any of the supported providers below.
## Provider Overview
| Provider | Models | API Key Required | Setup Complexity |
|---|---|---|---|
| Intuidy AI | Default model | No | None — works out of the box |
| OpenAI | GPT-4o, GPT-4o-mini, GPT-4 Turbo, GPT-3.5 Turbo | Yes | Low |
| Claude (Anthropic) | Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku | Yes | Low |
| Gemini (Google) | Gemini Pro, Gemini Ultra, Gemini 1.5 Pro | Yes | Low |
| DeepSeek | DeepSeek Chat, DeepSeek Coder | Yes | Low |
| Grok (xAI) | Grok-1, Grok-2 | Yes | Low |
| Mistral | Mistral Large, Mistral Medium, Mistral Small, Mixtral | Yes | Low |
## Provider Details
### Intuidy AI (Default)
The built-in AI provider that works with no configuration. Ideal for teams that want to start using AI immediately without managing API keys or provider accounts.
- Setup: None required
- Best for: Quick start, teams without existing AI provider accounts
- Token billing: Included in your Vantage subscription usage
### OpenAI
The most widely used AI provider, offering the GPT family of models.
- Website: platform.openai.com
- API Key URL: platform.openai.com/api-keys
- Recommended Model: `gpt-4o` for best quality, `gpt-4o-mini` for cost efficiency
- Strengths: Broad general knowledge, strong reasoning, excellent code generation
- Considerations: Per-token pricing varies by model; GPT-4o is more capable but costs more than GPT-4o-mini
| Model | Context Window | Best For |
|---|---|---|
| GPT-4o | 128K tokens | Complex analysis, nuanced reasoning |
| GPT-4o-mini | 128K tokens | Fast, cost-effective general use |
| GPT-4 Turbo | 128K tokens | Legacy compatibility |
| GPT-3.5 Turbo | 16K tokens | Budget-friendly, simpler tasks |
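Before entering a key in Vantage, you can sanity-check it directly against OpenAI's chat completions endpoint. The sketch below builds a minimal request using only the standard library; the endpoint and payload shape follow OpenAI's public REST API, but the environment variable name, prompt, and model choice here are illustrative assumptions.

```python
import json
import os
import urllib.request

def build_openai_request(api_key: str, model: str = "gpt-4o-mini") -> urllib.request.Request:
    """Build (but do not send) a minimal chat-completions request.

    OpenAI authenticates with a Bearer token in the Authorization header.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "Reply with the word: ok"}],
        "max_tokens": 5,
    }).encode()
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Key name is an example; use wherever you store your credentials.
openai_req = build_openai_request(os.environ.get("OPENAI_API_KEY", "sk-..."))
# Uncomment to actually send the request (needs a valid key and network access):
# with urllib.request.urlopen(openai_req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

A 200 response confirms the key is valid and ready to paste into Vantage; a 401 means the key was rejected.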
### Claude (Anthropic)
Known for detailed, thoughtful responses and strong performance on long-form analysis.
- Website: anthropic.com
- API Key URL: console.anthropic.com/settings/keys
- Recommended Model: `Claude 3.5 Sonnet` for the best balance of quality and speed
- Strengths: Long context handling, nuanced analysis, strong at following complex instructions
- Considerations: Excellent for compliance-focused use cases and detailed data interpretation
| Model | Context Window | Best For |
|---|---|---|
| Claude 3.5 Sonnet | 200K tokens | Best all-around performance |
| Claude 3 Opus | 200K tokens | Most capable, highest quality |
| Claude 3 Haiku | 200K tokens | Fast, cost-effective |
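Anthropic keys can be verified the same way. Note that the auth mechanism differs from OpenAI's: the Messages API uses an `x-api-key` header plus a required `anthropic-version` header. The model ID and environment variable name below are illustrative assumptions; check the Anthropic console for current model IDs.

```python
import json
import os
import urllib.request

def build_anthropic_request(api_key: str,
                            model: str = "claude-3-5-sonnet-20240620") -> urllib.request.Request:
    """Build (but do not send) a minimal Messages API request.

    Anthropic authenticates with an `x-api-key` header (not a Bearer
    token) and requires an `anthropic-version` header; `max_tokens`
    is a required field in the request body.
    """
    body = json.dumps({
        "model": model,
        "max_tokens": 10,
        "messages": [{"role": "user", "content": "Reply with the word: ok"}],
    }).encode()
    return urllib.request.Request(
        "https://api.anthropic.com/v1/messages",
        data=body,
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
    )

anthropic_req = build_anthropic_request(os.environ.get("ANTHROPIC_API_KEY", "sk-ant-..."))
# Uncomment to send (needs a valid key and network access):
# with urllib.request.urlopen(anthropic_req) as resp:
#     print(json.load(resp)["content"][0]["text"])
```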
### Gemini (Google)
Google's multimodal AI models with strong data analysis capabilities.
- Website: ai.google.dev
- API Key URL: aistudio.google.com/apikey
- Recommended Model: `Gemini 1.5 Pro` for best performance
- Strengths: Large context windows, strong at structured data tasks, multimodal capabilities
- Considerations: Tight integration with the Google ecosystem; ideal for teams already using Google services
| Model | Context Window | Best For |
|---|---|---|
| Gemini 1.5 Pro | 1M tokens | Large dataset analysis |
| Gemini Pro | 32K tokens | General-purpose tasks |
| Gemini Ultra | 32K tokens | Most capable |
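Gemini keys use a third auth style: the Gemini REST API passes the key as a `key` query parameter on the request URL rather than in a header. This sketch follows the public `generateContent` endpoint; the environment variable name and prompt are illustrative assumptions.

```python
import json
import os
import urllib.request

def build_gemini_request(api_key: str, model: str = "gemini-1.5-pro") -> urllib.request.Request:
    """Build (but do not send) a minimal generateContent request.

    The Gemini REST API authenticates via a `key` query parameter,
    not an Authorization header.
    """
    url = (
        "https://generativelanguage.googleapis.com/v1beta/"
        f"models/{model}:generateContent?key={api_key}"
    )
    body = json.dumps({
        "contents": [{"parts": [{"text": "Reply with the word: ok"}]}],
    }).encode()
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

gemini_req = build_gemini_request(os.environ.get("GEMINI_API_KEY", "AIza..."))
# Uncomment to send (needs a valid key and network access):
# with urllib.request.urlopen(gemini_req) as resp:
#     print(json.load(resp)["candidates"][0]["content"]["parts"][0]["text"])
```

Because the key rides on the URL, avoid logging full request URLs when using this provider.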
### DeepSeek
Emerging provider with strong performance on coding and analytical tasks.
- Website: deepseek.com
- API Key URL: platform.deepseek.com/api_keys
- Recommended Model: `DeepSeek Chat` for general use
- Strengths: Competitive pricing, strong code analysis, good at structured reasoning
- Considerations: Newer provider; continuously expanding capabilities
| Model | Best For |
|---|---|
| DeepSeek Chat | General conversation and analysis |
| DeepSeek Coder | Code-heavy tasks and technical analysis |
### Grok (xAI)
xAI's models designed for real-time, up-to-date information and analysis.
- Website: x.ai
- Recommended Model: `Grok-2` for latest capabilities
- Strengths: Real-time knowledge, conversational style, fast responses
- Considerations: Evolving model lineup; check for the latest available models
| Model | Best For |
|---|---|
| Grok-2 | Latest capabilities, general use |
| Grok-1 | Basic analysis, fast responses |
### Mistral
European AI provider offering efficient, high-quality models.
- Website: mistral.ai
- API Key URL: console.mistral.ai/api-keys
- Recommended Model: `Mistral Large` for best quality
- Strengths: Excellent performance-to-cost ratio, strong multilingual support, EU-hosted options
- Considerations: Great choice for European organizations with data residency requirements
| Model | Best For |
|---|---|
| Mistral Large | Complex analysis, highest quality |
| Mistral Medium | Balanced quality and speed |
| Mistral Small | Fast, cost-effective |
| Mixtral | Multi-expert architecture, diverse tasks |
## Choosing a Provider
| Priority | Recommended Provider |
|---|---|
| No setup needed | Intuidy AI (default) |
| Best general quality | OpenAI (GPT-4o) or Claude (3.5 Sonnet) |
| Long documents / large datasets | Gemini (1M token context) or Claude (200K context) |
| Cost efficiency | OpenAI (GPT-4o-mini), Mistral (Small), or DeepSeek |
| EU data residency | Mistral |
| Code & technical analysis | DeepSeek Coder or OpenAI (GPT-4o) |
## Switching Providers
You can switch providers at any time:
1. Go to Settings → AI Features → Intuidy AI
2. Select the new provider
3. Enter API credentials if required
4. Choose a model
5. Click Save AI Settings
All AI features — assistant, tile summaries, workflow nodes — will immediately use the new provider.