dbWrite Logic Documentation
Overview
The dbWrite logic is a core component of the Vantage analytics and data platform. Its primary function is to write data to a connected database. Users can configure settings that specify how data should be written, including the database selection, write operation, target table, and data mapping. The node supports several database families, including SQL databases, MongoDB, and Firestore, making it suitable for a wide range of data management and analytics applications.
Settings
The dbWrite logic includes several settings, each of which determines how data is written to the target database. Each setting is described below:
1. Credential Reference (credentialRef)
- Input Type: Object
- Description: This setting defines the credentials required to connect to the database. It includes three sub-settings:
- strategy: The authentication strategy used (such as OAuth, API Key).
- credentialId: A unique identifier for the credentials.
- serviceKey: A key that defines the type of the database service connected.
- Effect of Changes: Modifying this setting affects the connection establishment to the database, determining which database and account are accessed.
- Default Value: None. Must be explicitly set.
2. Operation (operation)
- Input Type: String
- Description: Specifies the type of database write operation to execute. Supported operations include INSERT, UPDATE, DELETE, insertOne, insertMany, etc.
- Effect of Changes: Choosing a different operation alters the action performed on the database. For instance, switching from INSERT to UPDATE will attempt to modify existing records instead of adding new ones.
- Default Value: None. Must be explicitly set.
3. Target Table (table)
- Input Type: String
- Description: Specifies the name of the target table or collection into which data will be written.
- Effect of Changes: Changing this value directs the operation to a different storage location, for instance, pointing to a users table instead of an orders table.
- Default Value: None. Must be explicitly set.
4. Query (query)
- Input Type: String/JSON
- Description: Represents the write query that will be executed against the database in SQL or JSON format.
- Effect of Changes: Adjusting this value modifies the specific write command sent to the database, affecting how data is structured.
- Default Value: None. Must be explicitly set.
5. Column Mapping (columnMapping)
- Input Type: Object
- Description: Maps upstream columns to the corresponding columns in the target database. This is particularly useful for batch write operations.
- Effect of Changes: Changing mappings affects how data fields from the application correspond to database fields, thereby potentially changing the output of the write operations.
- Default Value: Empty object {}.
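To make the mapping behavior concrete, the following is a minimal sketch of how a columnMapping could be applied to upstream rows before a batch write. The function name and shapes here are illustrative assumptions, not the platform's actual internal API.

```python
# Hypothetical sketch: applying a columnMapping such as {"name": "full_name"}
# to upstream rows before a batch write. Names are illustrative only.

def apply_column_mapping(rows, column_mapping):
    """Rename upstream fields to their target database column names."""
    if not column_mapping:
        # Empty mapping (the default {}): rows pass through unchanged.
        return [dict(row) for row in rows]
    return [
        {column_mapping.get(key, key): value for key, value in row.items()}
        for row in rows
    ]

rows = [{"name": "John Doe", "email": "john@example.com"}]
mapped = apply_column_mapping(rows, {"name": "full_name", "email": "customer_email"})
print(mapped)  # [{'full_name': 'John Doe', 'customer_email': 'john@example.com'}]
```

Fields absent from the mapping keep their original names, which is a common convention for partial mappings.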
6. Where Column (whereColumn)
- Input Type: String
- Description: Defines the column used in the WHERE clause for UPDATE and DELETE operations. This is needed to identify which records to modify or delete.
- Effect of Changes: Changing this impacts which records are targeted for updates or deletions. If set incorrectly, it could lead to unintended data loss or modification.
- Default Value: None. Required for UPDATE and DELETE operations.
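As an illustration of how whereColumn shapes an UPDATE, the sketch below builds a parameterized SQL statement in which the whereColumn identifies the target row. This is an assumed simplification; actual identifier quoting and placeholder style depend on the target database driver.

```python
# Hypothetical sketch: whereColumn selects which rows an UPDATE targets.
# Uses %s placeholders (DB-API "format" paramstyle); driver-specific in practice.

def build_update(table, data, where_column):
    """Build a parameterized UPDATE using where_column to target rows."""
    set_cols = [c for c in data if c != where_column]
    set_clause = ", ".join(f"{c} = %s" for c in set_cols)
    sql = f"UPDATE {table} SET {set_clause} WHERE {where_column} = %s"
    params = [data[c] for c in set_cols] + [data[where_column]]
    return sql, params

sql, params = build_update("customers", {"id": 42, "email": "new@example.com"}, "id")
print(sql)     # UPDATE customers SET email = %s WHERE id = %s
print(params)  # ['new@example.com', 42]
```

This also makes the warning above tangible: if where_column pointed at a non-unique column, the same statement would update every matching row.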
How It Works
The dbWrite logic operates by utilizing the provided settings to establish a connection with the specified database using the credentials provided. It follows these steps:
- Credential Resolution: It first resolves the database credentials defined in the credentialRef setting to ensure valid connection parameters are available.
- Operation Execution: Based on the operation specified, it executes the relevant write operation, using SQL queries for relational databases or object queries for NoSQL databases like MongoDB.
- Data Handling: If batch processing is required (as specified through inputs and columnMapping), it processes the data into the corresponding SQL or NoSQL commands before execution.
- Result Normalization: After executing the write operations, it normalizes the output to return a consistent response format.
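The steps above can be sketched end to end as follows. Every name here (db_write, resolve_credentials, execute, the result shape) is an assumption made for illustration; Vantage's internal implementation is not exposed in this document.

```python
# Illustrative sketch of the dbWrite flow: credential resolution,
# data handling, operation execution, and result normalization.

def db_write(config, inputs, resolve_credentials, execute):
    # 1. Credential Resolution: turn credentialRef into connection parameters.
    conn = resolve_credentials(config["credentialRef"])
    # 2. Data Handling: apply columnMapping to the upstream rows.
    rows = inputs.get("input1", {}).get("data", [])
    mapping = config.get("columnMapping", {})
    mapped = [{mapping.get(k, k): v for k, v in row.items()} for row in rows]
    # 3. Operation Execution: hand the prepared rows to the backend.
    result = execute(conn, config["operation"], config["table"], mapped)
    # 4. Result Normalization: return a consistent response shape.
    return {"operation": config["operation"], "affectedRows": result}

# Stub backends so the sketch runs standalone.
def resolve_credentials(ref):
    return {"dsn": f"service={ref['serviceKey']}"}

def execute(conn, operation, table, rows):
    return len(rows)  # pretend every row was written

config = {
    "credentialRef": {"strategy": "API_KEY", "credentialId": "123456",
                      "serviceKey": "db/connector/postgres"},
    "operation": "INSERT",
    "table": "customers",
    "columnMapping": {"name": "full_name"},
}
inputs = {"input1": {"data": [{"name": "John Doe"}, {"name": "Jane Smith"}]}}
print(db_write(config, inputs, resolve_credentials, execute))
# {'operation': 'INSERT', 'affectedRows': 2}
```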
Data Expectations
The dbWrite logic expects the following types of data:
- Valid database credentials that match the connection requirements of various supported databases.
- A write operation type that matches the desired action (e.g., INSERT, UPDATE).
- A proper table name where the data will be written.
- A SQL string or JSON structure representing the write query.
- In batch operations, a list of objects corresponding to the rows being written, with optional column mappings indicating how these fields relate to the database schema.
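A simple pre-flight check can enforce the expectations listed above before any write is attempted. This validator is a hypothetical sketch mirroring the required settings named in this document, not a built-in platform feature.

```python
# Hypothetical pre-flight validation of a dbWrite config against the
# expectations above. Required keys mirror this document's settings.

REQUIRED = ("credentialRef", "operation", "table")

def validate_config(config):
    """Raise ValueError if required dbWrite settings are missing."""
    missing = [key for key in REQUIRED if not config.get(key)]
    if missing:
        raise ValueError(f"dbWrite config missing required settings: {missing}")
    # whereColumn is required only for UPDATE and DELETE operations.
    if config["operation"].upper() in ("UPDATE", "DELETE") and not config.get("whereColumn"):
        raise ValueError("whereColumn is required for UPDATE and DELETE operations")
    return True

validate_config({
    "credentialRef": {"strategy": "API_KEY", "credentialId": "123456",
                      "serviceKey": "db/connector/postgres"},
    "operation": "INSERT",
    "table": "customers",
})  # returns True; an UPDATE without whereColumn would raise ValueError
```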
Use Cases & Examples
Use Case 1: Customer Data Ingestion
A retail company wants to store incoming customer data from its e-commerce site in a relational database. Using the dbWrite logic, they can automatically insert new customer records whenever a new account is created, without manual intervention.
Use Case 2: Inventory Management
A logistics company requires updates to its inventory records each time a shipment arrives or departs, dynamically updating quantities available in its database during runtime.
Example Configuration for a Customer Data Ingestion Use Case:
To achieve the goal of automatically inserting new customer records, the following setup may be used:
{
"inputs": {
"input1": {
"data": [
{
"name": "John Doe",
"email": "john@example.com",
"phone": "123-456-7890"
},
{
"name": "Jane Smith",
"email": "jane@example.com",
"phone": "987-654-3210"
}
]
}
},
"config": {
"credentialRef": {
"strategy": "API_KEY",
"credentialId": "123456",
"serviceKey": "db/connector/postgres"
},
"operation": "INSERT",
"table": "customers",
"columnMapping": {
"name": "full_name",
"email": "customer_email",
"phone": "contact_number"
}
}
}

In this configuration:
- Inputs specify the list of customer records to be inserted.
- Credential Reference details the API key method for the PostgreSQL database.
- Operation indicates that new records should be inserted into the customers table with the specified column mappings.
This configuration would allow seamless integration of customer data into the database, enhancing operational efficiency.