5 min read · Updated Mar 2, 2026

Enrichment Logic Documentation

Overview

The Enrichment logic is a powerful feature of Vantage that leverages AI capabilities to enhance data records. It sends each row of input data to a large language model (LLM) for tasks such as classification, sentiment analysis, entity extraction, or summarization. The purpose of this functionality is to derive valuable insights from raw data automatically, allowing organizations to make data-driven decisions efficiently and effectively.

Configuration

Settings

| Setting Name | Input Type | Description | Default Value |
| --- | --- | --- | --- |
| promptTemplate | String | The template used to generate prompts for the LLM. Reference columns in the data with placeholders like {{column_name}}. Altering this changes how the model interprets the data, producing different outputs for the defined task. | "" (empty string) |
| outputColumn | String | The name of the column in the output data where the AI model's results are stored. Changing this directs the output of the enrichment task to a different column in the resulting dataset. | "ai_result" |
| batchSize | Numeric | The number of rows sent to the AI model in a single batch. Reducing this value leads to more frequent interactions with the model but can increase overall processing time. The maximum allowed size is 50. | 50 |
| task | String | The type of task the LLM performs. Acceptable values are "classify", "sentiment", "extract", and "summarize". Changing this affects the prompt used and, subsequently, the nature of the insights generated from the data. | "classify" |

Functionality Flow

  1. Input Data Handling: The component processes input data by extracting rows and validating them. If the input does not conform to the expected format, it returns an empty dataset.

  2. Prompt Template Resolution: The component evaluates the provided promptTemplate. If it is not specified, a default based on the selected task is used.

  3. AI Integration Retrieval: The component attempts to fetch the preferred AI integration for the user, which is then used to send requests to the LLM.

  4. Batch Processing: Data is processed in manageable batches based on the batchSize; in workflow builder mode, processing is limited to the first 100 rows.

  5. Prompt Interpolation: Each row’s data is incorporated into the prompt template using the interpolatePrompt function, which replaces placeholders with actual row values.

  6. Sending Requests to the AI: The concatenated prompts for each batch are sent to the selected AI integration, and the responses are parsed and written to the output.

  7. Error Handling: If an error occurs during the AI call, the function captures the exception and appends an error message to the appropriate output rows.

  8. Final Output Compilation: All processed data is compiled, including any rows that were not processed due to preview mode limitations.
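The interpolation and batching steps above can be sketched in Python. This is an illustrative sketch, not the actual implementation: the flow names an interpolatePrompt function, but its signature and the batching helper shown here are assumptions.

```python
import re


def interpolate_prompt(template: str, row: dict) -> str:
    """Replace {{column_name}} placeholders with values from the row (sketch)."""
    def replace(match: re.Match) -> str:
        column = match.group(1).strip()
        # Missing columns resolve to an empty string; behavior is assumed.
        return str(row.get(column, ""))
    return re.sub(r"\{\{(.*?)\}\}", replace, template)


def batch_rows(rows: list[dict], batch_size: int) -> list[list[dict]]:
    """Split rows into batches of at most batch_size (capped at the documented max of 50)."""
    batch_size = min(batch_size, 50)
    return [rows[i:i + batch_size] for i in range(0, len(rows), batch_size)]
```

For example, interpolating the template "Analyze the sentiment of the following customer feedback: {{feedback}}" against the row {"feedback": "Great!"} resolves the placeholder to "Great!", and 120 rows at a batchSize of 50 yield three batches.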

Expected Data Input

The component expects the input to be structured as rows of data, where each row can contain multiple columns, i.e. an array of row objects mapping column names to values (as in the examples below).
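Per the input-handling step above, input that does not conform to the expected shape yields an empty dataset. A minimal sketch of that check, with a hypothetical helper name:

```python
def validate_rows(data) -> list[dict]:
    """Return the rows if they match the expected shape, else an empty dataset.

    Hypothetical helper: the component expects a list of row objects,
    where each row maps column names to values.
    """
    if not isinstance(data, list):
        return []
    if not all(isinstance(row, dict) for row in data):
        return []
    return data
```

A string or a list of scalars would fail the check and come back as `[]`, matching the documented fallback behavior.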

Use Cases & Examples

Use Cases

  1. Customer Sentiment Analysis: A retail company can use the enrichment logic to analyze customer feedback from a dataset, extracting sentiments to gauge customer satisfaction levels.

  2. Entity Extraction for CRM: A business may leverage the enrichment logic to identify and extract entities such as customer names, organizations, and dates from a dataset of unstructured customer interaction notes.

  3. Industry Classification: A tech firm could classify various company records into defined industry categories, improving their targeting for marketing campaigns.

Example Configuration

Use Case: Customer Sentiment Analysis

To perform sentiment analysis on customer feedback records, the configuration for the enrichment logic can be set as follows:

Configuration:

```json
{
  "promptTemplate": "Analyze the sentiment of the following customer feedback: {{feedback}}",
  "outputColumn": "sentiment_result",
  "batchSize": 20,
  "task": "sentiment"
}
```

Sample input:

```json
[
  {"feedback": "I love the product, it's amazing!"},
  {"feedback": "I'm not happy with the service."},
  {"feedback": "The experience was okay, nothing special."}
]
```

Expected output:

```json
[
  {"feedback": "I love the product, it's amazing!", "sentiment_result": "positive"},
  {"feedback": "I'm not happy with the service.", "sentiment_result": "negative"},
  {"feedback": "The experience was okay, nothing special.", "sentiment_result": "neutral"}
]
```

AI Integration and Billing Implications

The enrichment logic involves interactions with AI services, where costs may vary based on the number of rows processed and the complexity of the tasks being performed. Each batch sent to the AI model counts towards billing, thus determining the economic impact. Users should monitor their batch sizes and the frequency of data enrichment operations to optimize their costs effectively. Additionally, utilizing preview mode helps to control expenses during development and testing phases.