Overview
The AI Query Agent Node enables automated decision-making in your workflows. It uses AI to analyze data, make decisions based on dynamic conditions, and perform multiple follow-up actions automatically.
Use Cases
- Post-Call Processing: Analyze call transcripts and determine next actions
- Smart Routing: Decide which team or agent should handle a case
- Ticket Generation: Create support tickets with AI-generated summaries
- Email Composition: Generate personalized emails based on context
- Lead Qualification: Score and categorize leads automatically
- Callback Scheduling: Intelligently determine if and when to call back
- Data Extraction: Pull specific information from unstructured text
- Sentiment Analysis: Determine customer satisfaction and urgency
- Action Orchestration: Coordinate multiple follow-up actions
How It Works
- Input: Provide a query/instruction and context (e.g., call transcript)
- Processing: AI analyzes the context using selected tools and knowledge base
- Decision: AI determines appropriate actions based on your query
- Execution: AI calls selected tools to perform actions
- Output: Results and tool outputs are passed to the next workflow blocks (see the sketch below)
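The sketch below illustrates these steps in Python. It is a conceptual sketch only, not the node's actual implementation: `call_llm`, `run_ai_query_agent`, and the dictionary shapes are hypothetical names used for illustration, and the real node handles all of this internally.

```python
# Conceptual sketch only -- the node performs these steps for you.
# call_llm stands in for the account-configured LLM; its output shape is hypothetical.

def call_llm(query, context, tool_names):
    """Analyze the context against the query and decide which tools to call."""
    return {
        "summary": "Customer asked about a damaged order; a follow-up email is needed.",
        "tool_calls": [{"name": "send_email", "args": {"subject": "About your order"}}],
    }

def run_ai_query_agent(query, system_context, enabled_tools):
    # 1. Input: query/instruction plus context (e.g., a call transcript)
    # 2-3. Processing and Decision: the LLM analyzes the context and picks actions
    decision = call_llm(query, system_context, list(enabled_tools))

    # 4. Execution: only tools enabled on the node can be called
    tool_outputs = {}
    for call in decision["tool_calls"]:
        tool = enabled_tools.get(call["name"])
        if tool is not None:
            tool_outputs[call["name"]] = tool(**call["args"])

    # 5. Output: LLM_Response and Tool Outputs flow to the next workflow blocks
    return {"llm_response": decision["summary"], "tool_outputs": tool_outputs}

outputs = run_ai_query_agent(
    query="If the customer reported a problem, send a follow-up email.",
    system_context="Customer: My order arrived damaged ...",
    enabled_tools={"send_email": lambda subject: {"email_sent": True}},
)
print(outputs)
```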
Inputs
Query (Required)
User input or query that instructs the AI on what to do.
Purpose:
- Define the task for the AI
- Specify conditions and logic
- Describe desired outcomes
Tools
Select the tools/functions the AI can use to perform actions.
Available Tools:
- send_email: Send emails to customers
- send_whatsapp: Send WhatsApp messages
- schedule_callback: Schedule follow-up calls
- create_ticket: Generate support tickets
- query_knowledge_base: Search knowledge base
- webhook: Send data to external systems
- Custom tools: Your configured LLM tools
Tips:
- Only enable tools needed for the task
- More tools = more processing time
- Test with minimal tools first
- Add tools as needed (see the sketch after this list)
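A minimal sketch of the "only enable the tools you need" tip, assuming a simple registry of tool callables; the stub functions are placeholders, not the product's real tool interfaces.

```python
# Hypothetical tool registry illustrating a minimal set for a post-call follow-up.
# The stub functions are placeholders, not the product's real tool implementations.

def send_email(to, subject, body):
    return {"email_sent": True}

def query_knowledge_base(question):
    return {"knowledge_base_results": []}

# Two tools are enough for a follow-up email task; add create_ticket,
# schedule_callback, webhook, etc. only when the query actually needs them.
enabled_tools = {
    "send_email": send_email,
    "query_knowledge_base": query_knowledge_base,
}
```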
System_Context
Provide additional context to help the AI make informed decisions; an example of a clearly structured context is sketched after this list.
What to Include:
- Call transcripts
- Customer details
- Previous interactions
- Order information
- Account status
- Any relevant data
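As an illustration of a clearly structured context, the sketch below labels each section; the section names and sample data are hypothetical.

```python
# Illustrative System_Context with labelled sections; names and data are made up.
system_context = """
CALL TRANSCRIPT:
Agent: Thanks for calling, how can I help?
Customer: My order #1042 arrived damaged and I'd like a replacement.

CUSTOMER DETAILS:
Name: Jane Doe | Plan: Premium | Account status: Active

PREVIOUS INTERACTIONS:
Two prior tickets, both resolved.
"""
```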
Knowledge_Base
Select a knowledge base to pull relevant information.
Use Cases:
- Product information lookup
- Policy clarification
- Technical documentation
- FAQ answers
- Company procedures
Notes:
- Select from your created knowledge bases
- The AI will automatically search when needed
- Results are included in decision-making (see the sketch after this list)
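Conceptually, knowledge-base results are folded into the context the AI reasons over. The sketch below shows that idea with a stubbed `query_knowledge_base` function; the retrieval is handled by the node itself, so the function and data here are purely illustrative.

```python
# Illustrative only: the node performs the search itself when the AI decides it
# needs knowledge-base information. The stubbed retrieval shows how results can
# end up alongside the rest of the context the AI reasons over.

def query_knowledge_base(question, top_k=3):
    documents = [
        "Replacement policy: damaged items are replaced free within 30 days.",
        "Shipping FAQ: replacements for damaged items ship expedited.",
    ]
    return documents[:top_k]

base_context = "CALL TRANSCRIPT:\nCustomer: My order arrived damaged ..."
results = query_knowledge_base("replacement policy for damaged orders")
context_for_decision = base_context + "\n\nKNOWLEDGE BASE RESULTS:\n" + "\n".join(results)
print(context_for_decision)
```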
Outputs
LLM_Response
The AI’s response based on your query and context.
Contains:
- Analysis summary
- Decisions made
- Actions taken
- Extracted information
- Recommendations
Tool Outputs
Results from tools that were called by the AI.
Available Outputs (based on tools used):
- email_sent: Boolean (true/false)
- whatsapp_sent: Boolean (true/false)
- callback_scheduled: Boolean (true/false)
- ticket_id: String (ticket number)
- webhook_response: Object (API response)
- knowledge_base_results: Array (search results)
Usage:
- Connect to subsequent workflow blocks
- Use for conditional logic (see the sketch after this list)
- Log for monitoring
- Send to external systems
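A short sketch of consuming these outputs in a downstream block for conditional routing; the output names follow the list above, while the routing targets are hypothetical.

```python
# Hypothetical downstream block: route based on the Tool Outputs listed above.
outputs = {
    "email_sent": True,
    "ticket_id": "TKT-1042",
    "callback_scheduled": False,
}

if not outputs["email_sent"]:
    print("Route to: manual follow-up queue")
elif outputs.get("ticket_id"):
    print(f"Log ticket {outputs['ticket_id']} and notify the support team")
else:
    print("No further action needed")
```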
Configuration Examples
Example 1: Post-Call Email Follow-up (shown as a configuration sketch after the examples)
Tools: send_email, query_knowledge_base
System_Context: [Call Transcript]
Example 2: Support Ticket Creation
Tools: create_ticket, send_email
System_Context: [Call Transcript + Customer Details]
Example 3: Lead Qualification
Tools: webhook, schedule_callback, send_email
System_Context: [Call Transcript + Lead Details]
Example 4: Smart Callback Scheduling
Tools: schedule_callback
System_Context: [Call Transcript + Call Metadata]
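For orientation, Example 1 could be pictured as a configuration object like the one below. The field names mirror the inputs described above; the query text and knowledge-base name are only illustrations, not values from the product.

```python
# Hypothetical shape of Example 1 as a configuration object. Field names mirror
# the inputs described above; the query text and knowledge-base name are made up.
post_call_email_node = {
    "query": (
        "Review the call transcript. If the customer asked for follow-up "
        "information, look it up in the knowledge base and send a summary "
        "email. Otherwise, take no action."
    ),
    "tools": ["send_email", "query_knowledge_base"],
    "system_context": "[Call Transcript]",
    "knowledge_base": "product-docs",
}
```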
Best Practices
Query Design
- Be Specific: Clearly define what you want the AI to do
- Use Numbers: List actions in numbered steps
- Set Conditions: Use if/then logic for decisions
- Define Criteria: Specify thresholds and rules
- Keep Focused: One clear objective per query (an illustrative query follows this list)
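An illustrative query that applies these guidelines, with one objective, numbered steps, explicit if/then conditions, and clear criteria; the wording is hypothetical, not a prescribed prompt.

```python
# Illustrative query following the guidelines above; wording is hypothetical.
query = """
1. Read the call transcript and classify sentiment as positive, neutral, or negative.
2. If sentiment is negative OR the customer mentioned cancelling, create a ticket
   with priority "high" and a two-sentence summary of the issue.
3. If the customer asked a product question, search the knowledge base and send a
   follow-up email with the answer.
4. Otherwise, take no action and explain why in your response.
"""
```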
Context Provision
- Include Relevant Data: Only what’s needed for the decision
- Structure Clearly: Use labels and formatting
- Update Regularly: Ensure context is current
- Validate Data: Check for completeness
Tool Selection
- Minimal Set: Only enable necessary tools
- Test Individually: Verify each tool works
- Monitor Usage: Track which tools are called
- Optimize: Remove unused tools
Knowledge Base
- Keep Updated: Regularly refresh content
- Organize Well: Structure for easy retrieval
- Test Queries: Verify search accuracy
- Monitor Relevance: Check result quality
Advanced Use Cases
Multi-Step Workflows
Scenario: Complex post-call processing
Conditional Routing
Scenario: Department-based routing
Data Enrichment
Scenario: Customer profile enhancement
Monitoring and Optimization
Track Performance
Key Metrics:
- Execution Time: How long AI takes to process
- Tool Usage: Which tools are called most
- Success Rate: Percentage of successful actions
- Error Rate: Failed tool calls or decisions (see the sketch after this list)
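A sketch of computing these metrics from logged node runs, assuming a simple log structure; the field names are illustrative, not an existing export format.

```python
# Sketch of computing the key metrics from logged node runs; the log structure
# is an assumption for illustration, not an existing export format.
runs = [
    {"duration_s": 2.1, "tools_called": ["send_email"], "succeeded": True},
    {"duration_s": 4.8, "tools_called": ["create_ticket", "send_email"], "succeeded": True},
    {"duration_s": 3.0, "tools_called": [], "succeeded": False},
]

avg_time = sum(r["duration_s"] for r in runs) / len(runs)        # Execution Time
success_rate = sum(r["succeeded"] for r in runs) / len(runs)     # Success Rate
tool_usage = {}                                                  # Tool Usage
for r in runs:
    for tool in r["tools_called"]:
        tool_usage[tool] = tool_usage.get(tool, 0) + 1

print(f"avg time: {avg_time:.1f}s, success rate: {success_rate:.0%}, tools: {tool_usage}")
```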
Analyze Outputs
Review:
- LLM response quality
- Decision accuracy
- Tool call appropriateness
- Context utilization
Iterate and Improve
Optimization:
- Refine queries based on results
- Adjust tool selection
- Improve context structure
- Update knowledge base
Troubleshooting
AI Not Calling Tools
Possible Causes:
- Tools not selected in configuration
- Query doesn’t clearly instruct tool usage
- Insufficient context for decision
Solutions:
- Verify tools are enabled
- Make query more explicit about tool usage
- Provide more detailed context
Incorrect Decisions
Possible Causes:
- Ambiguous query
- Insufficient context
- Outdated knowledge base
Solutions:
- Clarify query with specific criteria
- Include all relevant context
- Update knowledge base content
Slow Processing
Possible Causes:
- Too many tools enabled
- Large context size
- Complex query
Solutions:
- Reduce number of tools
- Optimize context length
- Simplify query
Next Steps
- Send Email Node: Configure email sending
- Schedule Callback Node: Set up callback scheduling
- Webhook Node: Integrate with external systems
- LLM Tools: Create custom tools
The AI Query Agent uses the LLM model configured in your account. Processing time depends on query complexity and context size.