# AiQuerySettings Documentation

## Overview
The AiQuerySettings component configures how artificial intelligence (AI) processes and analyzes data queries within the Vantage analytics and data platform. Its settings directly affect AI performance, cost, and the completeness of the insights generated: users can toggle between modes of dataset processing to suit their analytical needs, whether for quick insights or comprehensive analysis of larger datasets.
## Settings
### 1. `enableLargeDatasets`

- **Input Type:** Boolean
- **Default Value:** `false`
- **Description:** Determines whether complete dataset processing is enabled. When set to `true`, the system analyzes all records by partitioning large datasets into manageable chunks that are processed in parallel. Conversely, when set to `false`, the AI is restricted to processing a maximum of 100 records.
### 2. `onLargeDatasetsChange`

- **Input Type:** Callback function
- **Default Value:** None; the consumer must provide a function to handle changes.
- **Description:** Triggered whenever the `enableLargeDatasets` setting changes. The new boolean value is passed to the function, allowing additional logic or operations contingent on the change.
## How It Works
The AiQuerySettings component supports two primary modes of dataset processing:

- **Fast Mode** (active when `enableLargeDatasets` is `false`):
  - Processing capacity: the AI analyzes up to 100 records.
  - Time expectancy: approximately 5-10 seconds.
  - Cost implication: low, due to the limited dataset size.
- **Complete Mode** (active when `enableLargeDatasets` is `true`):
  - Processing capacity: the AI analyzes all records by dividing the dataset into chunks of 100 records, which are processed in parallel.
  - Time expectancy: significantly longer, typically 20 to 60 seconds.
  - Cost implication: higher, due to the increased computational resources required for larger datasets.
In Complete Mode, a "map-reduce" approach is employed:

- **Split:** The dataset is segmented into chunks of 100 records.
- **Process:** Each chunk is processed in parallel, sending multiple requests to the AI.
- **Merge:** Results from each chunk are combined to form the final output.
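The split/process/merge flow above can be sketched in a few lines. This is a minimal illustration, not the component's internal implementation: `processChunk` is a hypothetical stand-in for the per-chunk AI request (here it just sums numbers), and the 100-record chunk size matches the documented limit.

```typescript
const CHUNK_SIZE = 100; // documented chunk size for Complete Mode

// Split: segment the dataset into fixed-size chunks
function split<T>(records: T[], size: number = CHUNK_SIZE): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < records.length; i += size) {
    chunks.push(records.slice(i, i + size));
  }
  return chunks;
}

// Process: hypothetical stand-in for one AI request over a single chunk
async function processChunk(chunk: number[]): Promise<number> {
  return chunk.reduce((acc, n) => acc + n, 0);
}

// Split -> Process (in parallel) -> Merge
async function analyzeCompleteMode(records: number[]): Promise<number> {
  const chunks = split(records);
  const partials = await Promise.all(chunks.map(processChunk));
  return partials.reduce((acc, n) => acc + n, 0);
}
```

Because the chunk requests are issued concurrently via `Promise.all`, total latency is governed by the slowest chunk rather than the sum of all chunks.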
## Performance Comparison
The comparison below illustrates how the two modes differ for datasets of various sizes, enabling users to make informed decisions based on their needs. Fast Mode always issues a single request over at most 100 records, while in Complete Mode the number of API calls follows from the 100-record chunk size (one call per chunk):

| Dataset size | Fast Mode (records analyzed / API calls) | Complete Mode (records analyzed / API calls) |
|---|---|---|
| 100 | 100 / 1 | 100 / 1 |
| 500 | 100 / 1 | 500 / 5 |
| 1,000 | 100 / 1 | 1,000 / 10 |
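The request count per mode can be derived directly from the documented limits (a 100-record cap in Fast Mode, 100-record chunks in Complete Mode). A small sketch, with a hypothetical helper name:

```typescript
// Hypothetical helper: estimate how many AI requests a query will issue,
// based on the documented 100-record cap / chunk size.
function estimateApiCalls(recordCount: number, enableLargeDatasets: boolean): number {
  if (!enableLargeDatasets) {
    return 1; // Fast Mode: a single request over at most 100 records
  }
  return Math.ceil(recordCount / 100); // Complete Mode: one request per chunk
}
```

For example, a 1,000-record dataset in Complete Mode resolves to 10 parallel chunk requests.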
## Use Cases & Examples

### Use Case 1: Quick Data Insights for Dashboards
A business requires rapid insights from their data to be displayed on dashboards for real-time monitoring. Using Fast Mode allows the team to aggregate data quickly, ensuring low processing costs and timely updates.
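For a dashboard scenario like this, the component can simply be left in Fast Mode. A minimal configuration sketch (the handler body is illustrative, not prescribed by the component):

```jsx
<AiQuerySettings
  enableLargeDatasets={false}
  onLargeDatasetsChange={(value) => {
    // e.g. refresh dashboard widgets if the mode is later switched
    console.log(`Dataset processing mode changed to: ${value ? 'Complete' : 'Fast'}`);
  }}
/>
```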
### Use Case 2: In-Depth Report Generation
An analytical team needs to generate comprehensive reports for critical decision-making. This requires a thorough analysis of extensive datasets, making Complete Mode the optimal choice to ensure all data is considered in the final report.
### Detailed Example Configuration: Utilizing Complete Mode for Comprehensive Reporting
To configure the AiQuerySettings for generating an in-depth report from a dataset of 1,000 records, the following configuration would be applied:
```jsx
<AiQuerySettings
  enableLargeDatasets={true}
  onLargeDatasetsChange={(value) => {
    // Additional logic can be carried out when the processing mode changes
    console.log(`Dataset processing mode changed to: ${value ? 'Complete' : 'Fast'}`);
  }}
/>
```

In this example:

- `enableLargeDatasets` is set to `true`, indicating that analysis of all records is desired.
- The `onLargeDatasetsChange` function logs the change in processing mode, and could also trigger notifications or further actions depending on application requirements.
With this configuration, processing the dataset will likely take 20 to 60 seconds, but all 1,000 records will be included in the insights generated, providing the thorough analytical perspective crucial for decision-making.