Introduction
The Call AI action lets you connect a Large Language Model (LLM), such as ChatGPT or Anthropic's models, to your flow. You can use it to classify messages, summarize text, translate content, extract structured data, and route contacts automatically, without writing complex logic yourself.
Use AI to classify, extract, or translate (then route automatically)
If you just need the essentials, follow this quick path:
- Pick one clear AI task (classification, extraction, translation, summary)
- Add a Call AI action in the right node
- Write a strict prompt with a constrained output format (number/labels/JSON)
- Split on the AI result to route contacts reliably
- Add a human escalation path for sensitive or uncertain cases
- Test with varied examples in the Simulator and tighten prompts if needed
This pattern keeps your flow logic simple while enabling advanced routing and automation.
Step-by-Step Process
Before you add AI, define a single clear task. Common use cases:
- Sentiment detection (angry vs. neutral vs. happy)
- Intent classification (support, billing, sales, etc.)
- Data extraction (name, city, order number, complaint topic)
- Auto-translation (messages and flow content)
- Summaries for human agents (tickets, escalations)
[CAPTURE: Flow diagram showing a “Collect message” step → Call AI → Split paths.]
- Open Flows and enter the flow editor.
- Add a new node (or stack an action using the + button).
- Select Call AI from the actions list.
- Save the node once your settings are complete.
[CAPTURE: Action dropdown showing “Call AI” selected inside a node editor.]
Inside the Call AI action, write your command in plain English. Be explicit about:
- What the AI must do
- What input it should use (the user’s last message / a stored result)
- What output format you want
Example (sentiment scoring):
- “Read the customer complaint and return a number from 1 to 5 where 1 = calm, 5 = very angry. Output only the number.”
[CAPTURE: Call AI editor showing a prompt with a strict output instruction.]
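The same principle applies to data extraction: constrain the output so later steps can parse it. A sketch of an extraction prompt (the field names here are illustrative, not required by the platform):
```
Extract the following fields from the customer's message and return only valid JSON
with exactly these keys: name, city, order_number.
If a field is missing, use null.
Example: {"name": "Ana", "city": "Porto", "order_number": "48213"}
Output only the JSON object, with no extra text.
```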
The AI's response is saved to an output variable you can reference in later steps (for example, @locals_llm_output, depending on your environment).
- Add a Split by Expression node after the Call AI action.
- Evaluate the AI output variable (example: @locals_llm_output).
- Create categories that match the output you requested.
Example ranges:
- 1–2 → “Mild”
- 3 → “Needs Response”
- 4–5 → “URGENT”
[CAPTURE: Split by Expression evaluating @locals_llm_output with categories for ranges.]
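The exact expression syntax varies by platform, but conceptually the categories above map onto the score like this (a sketch, assuming the score is stored in @locals_llm_output):
```
Mild:            @locals_llm_output <= 2
Needs Response:  @locals_llm_output = 3
URGENT:          @locals_llm_output >= 4
```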
If your flow supports ticketing or handoff to agents:
- On the “Needs Response” or “URGENT” branch, add an Open Ticket (or similar) action.
- Include useful context for the agent, such as:
- The original user message
- The AI classification (score/label)
- Any structured fields you collected earlier
[CAPTURE: Open Ticket action showing a topic selection and a description including the complaint text.]
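A minimal description template might look like the sketch below. @locals_llm_output is the AI score from earlier; the angle-bracket placeholders stand in for whatever variables your flow actually stores:
```
AI-triaged complaint
Severity (AI score): @locals_llm_output
Original message: <stored user message variable>
Collected fields: <name / city / order number, if captured>
```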
- Open the Simulator in the flow editor.
- Try several realistic messages (calm, frustrated, very angry).
- Verify:
- The AI output is in the expected format
- The split routing works reliably
- The user-facing messages are appropriate
[CAPTURE: Simulator showing the action log with the AI output and the selected branch.]
Common Issues & Quick Fixes
The AI output is too long to split reliably
Problem: The model returns paragraphs, explanations, or multiple fields mixed together.
Fix:
- Update the prompt to require a strict output (one number, one label, or JSON only).
- Add “Output only …” and list the only allowed values.
- If you need multiple fields, request JSON and split on specific keys.
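For example, a multi-field prompt could look like this (the keys are illustrative):
```
Return only valid JSON with exactly these keys:
  "intent": one of "support", "billing", "sales"
  "urgency": a number from 1 to 5
Example output: {"intent": "billing", "urgency": 4}
```
Downstream, evaluate one key per Split by Expression node (intent first, then urgency, for instance) rather than matching the whole JSON string; how you reference a JSON key depends on your platform's expression syntax.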
My split rules don’t match what the AI returns
Problem: Routing goes to “Other” because the AI returns unexpected wording (e.g., “Urgent!” instead of “urgent”).
Fix:
- Constrain outputs to exact tokens (e.g., mild, needs_response, urgent), as in the example below.
- Update split rules to match the exact expected output (including case, if relevant).
- Add a safe fallback path that asks a clarifying question or routes to human review.
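A token-constrained prompt could read like this (a sketch; adjust the labels to your own categories):
```
Classify the message as exactly one of these labels: mild, needs_response, urgent.
Output only the label, in lowercase, with no punctuation or explanation.
```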
AI is correct sometimes, but inconsistent across similar messages
Problem: Slight wording changes cause different classifications or formats.
Fix:
- Add 2–4 short examples in the prompt (input → expected output) to stabilize behavior.
- Reduce the task scope (one decision per Call AI step).
- Use a “human review” path for uncertain outcomes (or when output isn’t in allowed values).
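A few-shot version of the sentiment prompt might look like this (the example messages are illustrative):
```
Rate the customer's anger from 1 (calm) to 5 (very angry). Output only the number.

Examples:
"Where do I download my invoice?" -> 1
"I asked about this last week and still have no answer." -> 3
"This is the third time it's broken. I want a refund NOW." -> 5
```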
