Updated Mar 2, 2026

LLM Provider APIs

This page explains how Vantage integrates with each AI provider's API, covering authentication, request methods, and the technical details of how data flows between Vantage and the LLM providers.


Architecture

When you use any AI feature in Vantage, the platform:

  1. Assembles context — gathers relevant data (tile data, context snippets, custom instructions)
  2. Constructs a prompt — builds a structured message array with system and user messages
  3. Sends the request — calls the selected provider's API with your credentials
  4. Processes the response — parses the result and renders it in the UI or workflow

```
Vantage Client → Vantage API → Provider API → LLM
      ↑                                        │
      └────────────── Response ◄───────────────┘
```
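
The four steps above can be sketched roughly as follows. Function and field names here are illustrative assumptions, not Vantage's actual internals; the response shape assumes an OpenAI-style provider.

```python
# Illustrative sketch of the four-step request flow described above.

def assemble_context(tile_data, snippets, instructions):
    """Step 1: gather relevant data into a single context string."""
    parts = [instructions] + list(snippets) + [f"Data: {tile_data}"]
    return "\n".join(p for p in parts if p)

def construct_prompt(context, user_message, history=()):
    """Step 2: build a structured message array."""
    messages = [{"role": "system", "content": context}]
    messages.extend(history)
    messages.append({"role": "user", "content": user_message})
    return messages

def handle_request(messages, call_provider):
    """Steps 3-4: send the request and parse the response."""
    raw = call_provider(messages)  # the provider API call happens server-side
    return raw["choices"][0]["message"]["content"]  # OpenAI-style response shape
```

Injecting `call_provider` keeps the sketch provider-agnostic: the same prompt-assembly code serves any backend.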

All communication between Vantage and LLM providers happens server-side. Your API keys and data never pass through the user's browser.


Authentication

Each provider uses API key-based authentication. Vantage stores your credentials encrypted and includes them in the Authorization header when making API requests.

| Provider | Auth Header Format |
| --- | --- |
| OpenAI | Authorization: Bearer sk-... |
| Claude | x-api-key: sk-ant-... |
| Gemini | API key as query parameter |
| DeepSeek | Authorization: Bearer sk-... |
| Grok | Authorization: Bearer xai-... |
| Mistral | Authorization: Bearer ... |
| Intuidy AI | Managed internally — no user credentials needed |
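
The table above can be sketched as a small dispatch function. Provider keys are lowercase labels chosen for illustration; the header values follow the table.

```python
# Hedged sketch: map each provider to its auth header, per the table above.

def auth_headers(provider: str, api_key: str) -> dict:
    if provider == "claude":
        return {"x-api-key": api_key}      # Anthropic uses a custom header
    if provider == "gemini":
        return {}                          # the key travels as a query parameter
    if provider == "intuidy":
        return {}                          # managed internally, no user credential
    # OpenAI, DeepSeek, Grok, and Mistral all use standard Bearer auth
    return {"Authorization": f"Bearer {api_key}"}
```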

API Methods

Chat Completion

The primary API method used by the Global AI Assistant, Popup AI Chat, and most workflow nodes.

How it works:

  1. Vantage constructs a message array (system prompt + user message + conversation history)
  2. The request is sent to the provider's chat completion endpoint
  3. The response is parsed and returned to the user

Key parameters sent:

| Parameter | Description | Default |
| --- | --- | --- |
| model | The selected model ID | Varies by provider |
| messages | Array of {role, content} message objects | Required |
| max_tokens | Maximum response length | 500–1000 depending on use |
| temperature | Randomness control (0 = deterministic, 1 = creative) | 0.7 |
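
Putting the parameters together, a request body might look like the sketch below. The defaults are taken from the table; the exact values Vantage sends vary by feature.

```python
# Illustrative request body combining the parameters in the table above.

def build_chat_request(model, messages, max_tokens=1000, temperature=0.7):
    return {
        "model": model,            # the selected model ID
        "messages": messages,      # array of {role, content} objects
        "max_tokens": max_tokens,  # caps the response length
        "temperature": temperature # 0 = deterministic, 1 = creative
    }
```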

Streaming Chat Completion

Used by the AI Assistant and Popup AI Chat for real-time, token-by-token responses. The response streams back as Server-Sent Events (SSE), displayed incrementally in the UI.
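
A minimal sketch of consuming such a stream, assuming the OpenAI-style SSE format where each event is a `data: <json>` line carrying a delta and the stream ends with `data: [DONE]`:

```python
# Parse Server-Sent Event lines into content tokens as they arrive.
import json

def stream_tokens(sse_lines):
    """Yield content tokens from an OpenAI-style SSE stream."""
    for line in sse_lines:
        if not line.startswith("data: "):
            continue                       # skip blank keep-alives and comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break                          # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]         # rendered incrementally in the UI
```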

Benefits:

  - First tokens appear almost immediately instead of after the full response is generated
  - Long answers can be read while they are still streaming in
  - A stalled response is visible right away rather than ending in a silent timeout

Summary Generation

Used by Tile Summaries. A specialized request format that includes tile metadata alongside the dataset.

The provider returns a concise, plain-language summary of what the data shows.
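
A hypothetical shape for such a request is sketched below. Every field name here (title, chart_type, axis keys) is an illustrative assumption about what "tile metadata" might include, not Vantage's actual schema.

```python
# Hedged sketch of building a tile-summary prompt from tile metadata.

def build_summary_request(tile):
    prompt = (
        f"Summarize the chart '{tile['title']}' ({tile['chart_type']}, "
        f"x: {tile['x_axis']}, y: {tile['y_axis']}) in plain language."
    )
    return [
        {"role": "system", "content": "You summarize dashboard tiles concisely."},
        {"role": "user", "content": prompt + f"\nData: {tile['data']}"},
    ]
```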


Provider-Specific Endpoints

| Provider | Base URL | Chat Endpoint |
| --- | --- | --- |
| OpenAI | https://api.openai.com/v1 | /chat/completions |
| Claude | https://api.anthropic.com/v1 | /messages |
| Gemini | https://generativelanguage.googleapis.com/v1beta | /models/{model}:generateContent |
| DeepSeek | https://api.deepseek.com/v1 | /chat/completions |
| Grok | https://api.x.ai/v1 | /chat/completions |
| Mistral | https://api.mistral.ai/v1 | /chat/completions |
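
Resolving the full chat URL from the table is mostly a lookup; Gemini is the one outlier, since the model name is embedded in the URL path rather than the request body:

```python
# Sketch of resolving each provider's chat endpoint from the table above.

ENDPOINTS = {
    "openai":   "https://api.openai.com/v1/chat/completions",
    "claude":   "https://api.anthropic.com/v1/messages",
    "gemini":   "https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent",
    "deepseek": "https://api.deepseek.com/v1/chat/completions",
    "grok":     "https://api.x.ai/v1/chat/completions",
    "mistral":  "https://api.mistral.ai/v1/chat/completions",
}

def chat_url(provider: str, model: str) -> str:
    # Only Gemini's template contains {model}; .format is a no-op for the rest.
    return ENDPOINTS[provider].format(model=model)
```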

Most providers follow the OpenAI-compatible API format, making it straightforward for Vantage to support them with minimal differences in request construction.


Data Sent to Providers

When Vantage makes an API request, the following data may be included:

| Data Type | When Included | Purpose |
| --- | --- | --- |
| User message | Always | The question or instruction |
| System prompt | Always | Establishes AI's role and behavior |
| Context snippets | When enabled | Company overview, industry, custom instructions |
| Tile data | Tile summaries, popup chat | The dataset being analyzed |
| Tile metadata | Tile summaries, popup chat | Title, chart type, axis configuration |
| Conversation history | Multi-turn conversations | Previous messages for continuity |
| Workflow row data | Workflow AI nodes | The data rows being processed |

Important: Only the minimum data necessary is sent. Vantage does not send your full database, account information, or credentials to AI providers.
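
One way to picture the "minimum data necessary" rule is an allow-list per feature, as sketched below. The feature and field names are assumptions chosen to mirror the table; only listed fields ever leave the server.

```python
# Hedged sketch: each feature opts into only the fields the table above lists.

ALLOWED_FIELDS = {
    "assistant_chat": {"user_message", "system_prompt", "context_snippets",
                       "conversation_history"},
    "tile_summary":   {"user_message", "system_prompt", "tile_data",
                       "tile_metadata"},
    "workflow_node":  {"user_message", "system_prompt", "workflow_rows"},
}

def select_payload(feature, available):
    """Keep only the fields this feature is allowed to send to a provider."""
    wanted = ALLOWED_FIELDS[feature]
    return {k: v for k, v in available.items() if k in wanted}
```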


Error Handling

Common API errors and how Vantage handles them:

| Error | Cause | Vantage Behavior |
| --- | --- | --- |
| 401 Unauthorized | Invalid or expired API key | Displays error message prompting re-authentication |
| 429 Rate Limited | Too many requests per minute | Retries with exponential backoff |
| 500 Server Error | Provider outage | Displays error with option to retry |
| Timeout | Provider took too long to respond | Displays timeout message; suggests reducing data volume |
| Context Length Exceeded | Request too large for model | Automatically truncates and retries, or suggests enabling data sampling |
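
The 429 handling above can be sketched as a retry loop with exponential backoff. The attempt count and delays here are illustrative, not Vantage's actual settings:

```python
# Retry a provider call with exponential backoff on rate-limit errors.
import time

class RateLimited(Exception):
    """Stand-in for a provider's 429 response."""

def with_backoff(call, attempts=4, base_delay=1.0, sleep=time.sleep):
    for attempt in range(attempts):
        try:
            return call()
        except RateLimited:
            if attempt == attempts - 1:
                raise                          # surface the error after the last try
            sleep(base_delay * 2 ** attempt)   # wait 1s, 2s, 4s, ...
```

Injecting `sleep` makes the backoff schedule testable without real delays.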

Model Listing

Vantage automatically fetches available models from each connected provider. You can refresh the model list at any time from Settings → AI Features → Intuidy AI by clicking the refresh icon next to the provider.
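
For OpenAI-compatible providers, fetching the model list typically means a GET to a /models endpoint that returns `{"data": [{"id": ...}, ...]}`. The sketch below assumes that convention and injects the HTTP call, so it stands in for whatever client Vantage actually uses:

```python
# Hedged sketch of refreshing the model list from an OpenAI-compatible API.

def list_models(fetch, base_url="https://api.openai.com/v1"):
    """Return sorted model IDs; `fetch` performs the GET and returns parsed JSON."""
    body = fetch(f"{base_url}/models")
    return sorted(m["id"] for m in body["data"])
```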