OpenAI Integration Documentation
Overview
The OpenAI Integration Service provides methods for interacting with the OpenAI API, facilitating chat completions, AI-driven summaries, and conversational interfaces that leverage the capabilities of OpenAI's language models. This enables users to tap into the rich conversational and analytical abilities of AI within the Vantage analytics and data platform.
API Documentation: OpenAI API Reference
Purpose
This integration primarily serves users looking to enhance their data analysis and customer interaction processes by generating contextually relevant AI responses, summaries, and insights derived from data displayed within dashboard tiles.
Settings
The OpenAI Integration has a variety of settings that dictate its behavior during API calls. Below are the details of each setting.
1. API Key
- Name: apiKey
- Input Type: String
- Description: This is the authentication key required to access the OpenAI API. It must be set to utilize the integration's functionality.
- Default Value: Not set (must be provided by the user).
2. Model
- Name: model
- Input Type: String
- Description: Specifies which OpenAI model to use for API requests (e.g., gpt-4o, gpt-4o-mini). If not explicitly set, the integration defaults to gpt-4o-mini. Selecting different models can affect the quality, speed, and capabilities of responses.
- Default Value: gpt-4o-mini
3. Max Tokens
- Name: maxTokens
- Input Type: Numeric
- Description: Defines the maximum number of tokens in the response generated by the AI. (Tokens are sub-word units of text, not whole words or characters.) The limit affects the length and detail of the response; higher values allow for more extensive answers, but increased usage can impact billing.
- Default Value: 500 for general calls, 1000 for streaming.
4. Temperature
- Name: temperature
- Input Type: Numeric (0-1)
- Description: Controls the randomness of the response. Lower values yield more deterministic outputs (less varied), while higher values introduce more creativity (greater variations). This can directly influence the tone and unpredictability of the responses.
- Default Value: 0.7
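Taken together, the four settings above might be gathered into a single configuration object. The sketch below is illustrative only: the field names mirror the documented setting names, but the merging and validation logic is an assumption, not the integration's actual implementation.

```python
# Illustrative settings object for the OpenAI Integration.
# Field names mirror the documented settings; defaults are the
# documented defaults.
DEFAULT_SETTINGS = {
    "apiKey": None,          # required; must be provided by the user
    "model": "gpt-4o-mini",  # default model
    "maxTokens": 500,        # 1000 when streaming
    "temperature": 0.7,      # 0-1; higher = more varied output
}


def validate_settings(settings: dict) -> dict:
    """Merge user-supplied settings over the defaults and enforce
    the constraints stated in the documentation (hypothetical helper)."""
    merged = {**DEFAULT_SETTINGS, **settings}
    if not merged["apiKey"]:
        raise ValueError("apiKey must be set to use the integration")
    if not 0 <= merged["temperature"] <= 1:
        raise ValueError("temperature must be between 0 and 1")
    return merged
```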
How It Works
The integration performs various API operations such as chat completions and generating AI summaries for dashboard tiles. Upon initialization, it sets a base URL for API requests. Key methods include:
- authorize(): Configures request headers for authentication using the API key.
- chatCompletion(): Sends user messages to OpenAI and retrieves completions based on the specified model and parameters.
- streamChatCompletion(): Provides real-time chat completions using streaming capabilities.
- generateTileSummary(): Constructs a summary from tile data by analyzing the provided visualization context.
- chatWithContext(): Supports continuous interaction allowing users to ask follow-up questions based on historical data.
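As a rough illustration of how authorize() and chatCompletion() might fit together, consider the sketch below. It is a hedged approximation: the service's real method signatures are not specified in this document, and only the payload-building helpers are exercised here (no network call). The endpoint path shown is the standard OpenAI chat completions route.

```python
import json
from urllib import request

# Base URL set at initialization (standard OpenAI API host).
BASE_URL = "https://api.openai.com/v1"


def authorize(api_key: str) -> dict:
    # Configures request headers for authentication using the API key.
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }


def build_chat_payload(messages, model="gpt-4o-mini",
                       max_tokens=500, temperature=0.7) -> dict:
    # Assembles the request body from the documented settings;
    # no network call is made here.
    return {
        "model": model,
        "messages": messages,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


def chat_completion(api_key: str, messages: list) -> dict:
    # Sends user messages to OpenAI and returns the parsed completion.
    req = request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_payload(messages)).encode(),
        headers=authorize(api_key),
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```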
Data Expectations
The OpenAI Integration expects a variety of structured data inputs, including:
- An array of messages, each consisting of a role (system, user) and content (text).
- Tile-specific data which may be an array or an object.
- Other configuration options such as titles, visual types, and additional context that enrich the responses generated by AI.
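For example, a minimal set of inputs matching the expectations above might look like the following. This is illustrative only; the field names follow the conventions used elsewhere in this document.

```python
# An array of messages, each with a role and content.
messages = [
    {"role": "system", "content": "You are a data analysis assistant."},
    {"role": "user", "content": "Summarize the sales trend in this tile."},
]

# Tile-specific data may be an array of records...
tile_data_rows = [
    {"month": "January", "sales": 10000},
    {"month": "February", "sales": 15000},
]

# ...or a single object.
tile_data_object = {"total_sales": 25000, "period": "Q1"}

# Additional configuration options that enrich the AI's responses.
options = {
    "tileTitle": "Quarterly Sales Overview",
    "tileType": "bar",
    "context": "Overview of growth trends",
}
```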
Use Cases & Examples
Use Case 1: Customer Support Automation
A company can utilize the OpenAI Integration to automate responses to customer inquiries about dashboard data. By integrating the chat functionality, customers can ask specific questions and receive insightful replies, improving response efficiency.
Use Case 2: Data Analysis & Reporting
Data analysts can leverage the summary generation feature to quickly summarize complex datasets displayed in their dashboards, translating them into actionable insights. This speeds up report creation and enhances understanding.
Example Configuration: Generating a Dashboard Summary
For a dashboard tile displaying sales data over the last quarter, the settings might be configured as follows:
{
"tileTitle": "Quarterly Sales Overview",
"tileType": "bar",
"tileData": [
{"month": "January", "sales": 10000},
{"month": "February", "sales": 15000},
{"month": "March", "sales": 20000}
],
"context": "Overview of growth trends",
"userQuestion": "What are the key takeaways from this data?",
"tileConfig": {}
}
In this scenario, the integration will analyze the sales data, consider the specific question, and generate a comprehensive summary with key insights, aiding the decision-making process for stakeholders reviewing the dashboard.
This would allow the user to receive clear, actionable insights from their data presentation without needing to deep-dive into the figures manually, optimizing both time and resources.
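A sketch of how generateTileSummary() might turn a configuration like the one above into a prompt. This is hypothetical: the actual prompt template used by the integration is not documented here, and build_tile_summary_prompt is an illustrative helper name.

```python
def build_tile_summary_prompt(config: dict) -> list:
    # Assembles a chat message array from a tile configuration,
    # combining the visualization context with the user's question.
    data_preview = "\n".join(str(row) for row in config["tileData"])
    system = (
        "You summarize dashboard tiles. "
        f"Tile: {config['tileTitle']} ({config['tileType']} chart). "
        f"Context: {config['context']}"
    )
    user = f"{config['userQuestion']}\n\nData:\n{data_preview}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]
```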
Billing Impacts
Utilizing the OpenAI API for chat completions, especially with varying max token limits and different models, can lead to significant billing differences based on usage. The API charges per token processed in both requests and responses, so users are encouraged to monitor token consumption closely to avoid unanticipated costs, particularly during high-volume periods, and to configure their settings accordingly for budget compliance.
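As a rough way to reason about costs, a per-call estimate can be computed from prompt and completion token counts. The sketch below takes the rates as parameters because actual OpenAI pricing varies by model and changes over time; the numbers used in any call to it are placeholders, not real prices.

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  rate_in: float, rate_out: float) -> float:
    # Rates are expressed in dollars per 1,000 tokens. Both request
    # (prompt) and response (completion) tokens are billed, which is
    # why maxTokens and prompt size both matter for budgeting.
    return (prompt_tokens / 1000) * rate_in + (completion_tokens / 1000) * rate_out
```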