“Call AI” Action: Integrate ChatGPT or an LLM into Your Flows

Introduction: The Call AI action lets you connect a Large Language Model (LLM), such as ChatGPT or an Anthropic model, to your flow. You can use it to classify messages, summarize text, translate content, extract structured data, and route contacts automatically, all without writing complex logic yourself.

Use AI to classify, extract, or translate (then route automatically)

If you just need the essentials, follow this quick path:

  1. Pick one clear AI task (classification, extraction, translation, summary)
  2. Add a Call AI action in the right node
  3. Write a strict prompt with a constrained output format (number/labels/JSON)
  4. Split on the AI result to route contacts reliably
  5. Add a human escalation path for sensitive or uncertain cases
  6. Test with varied examples in the Simulator and tighten prompts if needed

This pattern keeps your flow logic simple while enabling advanced routing and automation.
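
For example, a minimal version of this pattern for intent classification might look like this (illustrative only; the exact wording is up to you):

  • Prompt: “Classify the customer’s last message as one of: support, billing, sales. Output only one of those three words.”
  • AI result: billing
  • Split: three categories that match support, billing, and sales exactly, plus an “Other” fallback that routes to a human.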

Step-by-Step Process

1
Choose what you want AI to do in your flow

Before you add AI, define a single clear task. Common use cases:

  • Sentiment detection (angry vs. neutral vs. happy)
  • Intent classification (support, billing, sales, etc.)
  • Data extraction (name, city, order number, complaint topic)
  • Auto-translation (messages and flow content)
  • Summaries for human agents (tickets, escalations)

[CAPTURE: Flow diagram showing a “Collect message” step → Call AI → Split paths.]

💡
Tip: Start with one AI task per action. If you need multiple outcomes (e.g., sentiment + category + summary), use multiple Call AI nodes or request a structured output.
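
For example, if you need sentiment, category, and a summary from one message, a structured-output prompt might look like this (illustrative sketch; adjust the fields to your use case):

  • Prompt: “Analyze the customer’s message and respond with JSON only, using exactly these keys: sentiment (calm, frustrated, or angry), category (support, billing, or sales), and summary (one sentence).”
  • Example response: {"sentiment": "frustrated", "category": "billing", "summary": "Customer was charged twice for the same order."}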

2
Add a Call AI action to your flow

  1. Open Flows and enter the flow editor.
  2. Add a new node (or stack an action using the + button).
  3. Select Call AI from the actions list.
  4. Save the node once your settings are complete.

[CAPTURE: Action dropdown showing “Call AI” selected inside a node editor.]

3
Write a clear AI instruction (prompt)

Inside the Call AI action, write your command in plain English. Be explicit about:

  • What the AI must do
  • What input it should use (the user’s last message / a stored result)
  • What output format you want

Example (sentiment scoring):

  • “Read the customer complaint and return a number from 1 to 5 where 1 = calm, 5 = very angry. Output only the number.”
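
Another example (auto-translation), shown here as an illustration only:

  • “Translate the customer’s message into English. Output only the translation, with no extra commentary.”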

[CAPTURE: Call AI editor showing a prompt with a strict output instruction.]

⚙️
Technical Detail: The result of the Call AI action is stored as a local variable that you can reference later in the flow (for example: @locals_llm_output, depending on your environment).

⚠️
Warning: If you don’t specify an output format, the AI may return long text that is harder to route on. Always constrain the response (number, JSON, short labels).

4
Route contacts using the AI result

  1. Add a Split by Expression node after the Call AI action.
  2. Evaluate the AI output variable (example: @locals_llm_output).
  3. Create categories that match the output you requested.

Example ranges:

  • 1–2 → “Mild”
  • 3 → “Needs Response”
  • 4–5 → “URGENT”

[CAPTURE: Split by Expression evaluating @locals_llm_output with categories for ranges.]

💡
Tip: If the AI returns text labels (e.g., “urgent”), make your split rules match those exact words to avoid misrouting.
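
For instance, a label-based setup might look like this (illustrative; the exact rule types depend on your platform):

  • Prompt: “Classify the complaint as exactly one of: mild, needs_response, urgent. Output only that word, in lowercase.”
  • Split categories: one category each matching mild, needs_response, and urgent, plus the default “Other” category as a fallback (for example, routing to human review).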

5
Escalate to a human when needed (optional)

If your flow supports ticketing or handoff to agents:

  1. On the “Needs Response” or “URGENT” branch, add an Open Ticket (or similar) action.
  2. Include useful context for the agent, such as:
    • The original user message
    • The AI classification (score/label)
    • Any structured fields you collected earlier
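
For example, the ticket description could combine these pieces of context (illustrative; the exact variable names depend on your environment):

  • AI classification: @locals_llm_output
  • Original message: the variable that holds the contact’s last message in your flow
  • Collected details: any stored fields you saved earlier, such as name, city, or order number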

[CAPTURE: Open Ticket action showing a topic selection and a description including the complaint text.]

⚠️
Warning: Don’t rely on AI alone for high-stakes decisions (medical, legal, safety). Use AI to assist routing, then keep a human review path.

6
Test and iterate in the Simulator

  1. Open the Simulator in the flow editor.
  2. Try several realistic messages (calm, frustrated, very angry); see the sample set below.
  3. Verify:
    • The AI output is in the expected format
    • The split routing works reliably
    • The user-facing messages are appropriate
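
For the sentiment-scoring example above, a small illustrative test set might be:

  • “Thanks, that answered my question.” → expected output 1, Mild branch
  • “I’ve been waiting three days for a reply.” → expected output 3, Needs Response branch
  • “This is unacceptable, I want a refund NOW.” → expected output 5, URGENT branch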

[CAPTURE: Simulator showing the action log with the AI output and the selected branch.]

💡
Tip: If routing is inconsistent, tighten your prompt: add examples, enforce “output only X”, and specify exact allowed values.

Common Issues & Quick Fixes

The AI output is too long to split reliably

Problem: The model returns paragraphs, explanations, or multiple fields mixed together.

Fix:

  • Update the prompt to require a strict output (one number, one label, or JSON only).
  • Add “Output only …” and list the only allowed values.
  • If you need multiple fields, request JSON and split on specific keys.
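
For instance (illustrative), a prompt such as “Respond with JSON only, using exactly these keys: category and urgency. No other text.” should produce output like {"category": "billing", "urgency": "high"}, and your split can then evaluate a single key (for example, urgency) instead of the whole response. The exact expression for reading a JSON field depends on your platform.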

My split rules don’t match what the AI returns

Problem: Routing goes to “Other” because the AI returns unexpected wording (e.g., “Urgent!” instead of “urgent”).

Fix:

  • Constrain outputs to exact tokens (e.g., mild, needs_response, urgent).
  • Update split rules to match exact expected output (including case, if relevant).
  • Add a safe fallback path that asks a clarifying question or routes to human review.

AI is correct sometimes, but inconsistent across similar messages

Problem: Slight wording changes cause different classifications or formats.

Fix:

  • Add 2–4 short examples in the prompt (input → expected output) to stabilize behavior; see the sample below.
  • Reduce the task scope (one decision per Call AI step).
  • Use a “human review” path for uncertain outcomes (or when output isn’t in allowed values).
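
For example, the short input → expected output examples mentioned in the first fix might look like this inside your prompt (illustrative):

  • “I still haven’t received my order.” → needs_response
  • “Thanks, all sorted now.” → mild
  • “This is the third time you’ve ignored me. Fix it today.” → urgent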