How to Enable and Manage LLM Models in Your Workspace

Introduction: RapidPro.app lets you connect a Large Language Model (LLM) provider in Workspace Settings so you can use AI-assisted features (like auto-translation for multi-language flows). Once your model is added, you can select it when building AI-enabled automations.

Enable AI models once (then reuse them in flows)

If you just need the essentials, follow this quick path:

  1. Open Settings → Artificial Intelligence in your workspace
  2. Choose your provider (OpenAI / Anthropic / etc.)
  3. Add your API key securely
  4. Select one or more models after key validation
  5. Name the model for easy reuse (Translation / Classifier / Assistant)
  6. Validate in a flow + Simulator before using at scale

Once configured, teammates can pick the right AI model from dropdowns when building AI-enabled automations.

Step-by-Step Process

1
Open the Artificial Intelligence settings
  1. Log in to your RapidPro.app workspace.
  2. Click the Settings (gear) icon.
  3. In the settings sidebar, click Artificial Intelligence (AI).

[CAPTURE: Workspace left sidebar showing the gear icon, then the Settings page with “Artificial Intelligence” highlighted.]

2
Choose your AI provider
  1. In the AI section, open the Provider dropdown.
  2. Select your provider (examples may include OpenAI, Anthropic, Google AI, or DeepSeek, depending on what your workspace supports).

[CAPTURE: AI settings page showing a provider dropdown with multiple providers listed.]

⚠️
Warning: Provider availability depends on your workspace configuration. If you don’t see a provider, it may not be enabled for your plan or environment.

3
Add your API key
  1. In the API Key field, paste the key from your provider’s dashboard.
  2. Click Save (or the equivalent confirmation) if the page requires it.

[CAPTURE: AI settings page showing an API Key input field (with the value blurred).]

⚠️
Warning: Treat API keys like passwords. Don’t paste them into messages, flows, or shared docs. Store them in a secure secret manager whenever possible.
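If you also use the same key in your own scripts or tooling, a minimal habit is to load it from an environment variable instead of hardcoding it. This is a generic sketch (the `LLM_PROVIDER_API_KEY` variable name is an assumption, not something RapidPro.app defines):

```python
import os

def load_api_key(var_name: str = "LLM_PROVIDER_API_KEY") -> str:
    """Read the provider key from the environment instead of hardcoding it."""
    key = os.environ.get(var_name, "").strip()
    if not key:
        raise RuntimeError(f"{var_name} is not set; export it before running.")
    return key

# Demo only: in real use, the variable comes from your shell or secret manager.
os.environ.setdefault("LLM_PROVIDER_API_KEY", " sk-demo-not-a-real-key ")
print(load_api_key())  # surrounding whitespace from the paste is stripped
```

Keeping the key out of source files means it never lands in version control or shared docs by accident.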

4
Select available models
  1. After saving your key, look for the Model selector.
  2. Choose one or more models that your provider exposes (the list appears after authentication).
  3. Confirm/save your model selection.

[CAPTURE: AI settings page showing a list/dropdown of available models after adding the API key.]

⚙️
Technical Detail: The model list is loaded from your provider once the API key is validated, so you typically won’t see model options until the key is accepted.
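Because the list is key-gated, you can also sanity-check a key outside the UI before pasting it in. As an illustration only (this is OpenAI's public model-listing endpoint, not a RapidPro.app feature), a stdlib-only check might look like:

```python
import json
import os
import urllib.request

def build_models_request(api_key: str,
                         base_url: str = "https://api.openai.com/v1") -> urllib.request.Request:
    """Build an authenticated request for the provider's model-listing endpoint."""
    return urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

if __name__ == "__main__":
    key = os.environ.get("OPENAI_API_KEY")
    if key:
        # A 200 response with a model list means the key validated.
        with urllib.request.urlopen(build_models_request(key)) as resp:
            data = json.load(resp)
            print([m["id"] for m in data.get("data", [])][:5])
    else:
        print("Set OPENAI_API_KEY to run the live check.")
```

If this call fails with a 401, the key itself is the problem, and no UI will be able to load models with it either.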

5
Name your model for easy reuse
  1. Enter a clear internal name (example: Support Classifier, Translation Model, Flow Assistant).
  2. Save the configuration.

[CAPTURE: A “Model Name” field filled with a readable label.]

💡
Tip: Use names that reflect the job the model will do (translation vs. classification). It helps teammates pick the right model later.

6
Validate your setup in a real flow (recommended)
  1. Open a flow where AI is used (example: auto-translation or Call AI).
  2. Select your configured model (if prompted).
  3. Run a quick test in the Simulator to confirm the model responds as expected.

[CAPTURE: Flow editor showing an AI-enabled node referencing the configured model + simulator run output.]

💡
Tip: Test at least one “happy path” and one edge case (empty input, unexpected language, short message) so you can tighten prompts or fallbacks early.
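Edge-case handling normally lives in the flow itself, but the idea behind the tip can be sketched in plain Python (a hypothetical helper, not a RapidPro.app API): decide up front which inputs should skip the model and take a fallback path instead.

```python
def needs_fallback(message: str, min_chars: int = 2) -> bool:
    """Route empty or too-short inputs to a fallback reply instead of the model."""
    return len((message or "").strip()) < min_chars

# One happy path plus the edge cases suggested above.
for case in ["Hola, necesito ayuda", "", "   ", "k"]:
    route = "fallback" if needs_fallback(case) else "send to model"
    print(f"{case!r} -> {route}")
```

Testing both branches in the Simulator tells you early whether your fallback wording and your prompts need tightening.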

Common Issues & Quick Fixes

I don’t see the Artificial Intelligence settings in my workspace

Problem: You can’t find the AI menu item or provider configuration screen.

Fix: Confirm your user role and workspace plan/environment supports AI configuration. If the option is restricted, ask a workspace Admin (or RapidPro.app support) to enable it for your environment.

I added the API key, but no models appear

Problem: The Model selector stays empty after entering credentials.

Fix: Re-check the API key (no extra spaces), confirm it’s from the selected provider, then save again. Models typically load only after the key is validated successfully.
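The "extra spaces" problem is common enough that it helps to normalize the key before pasting. A tiny illustrative helper (assumed behavior, not part of RapidPro.app) shows the usual culprits:

```python
def clean_pasted_key(raw: str) -> str:
    """Remove whitespace, newlines, and stray quotes that ride along on copy/paste."""
    return raw.strip().strip("'\"").strip()

examples = ['  sk-abc123\n', '"sk-abc123"', 'sk-abc123']
print([clean_pasted_key(e) for e in examples])  # all normalize to 'sk-abc123'
```

If a cleaned key still loads no models, double-check that it was issued by the provider currently selected in the dropdown.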

Teammates pick the wrong model when building flows

Problem: People choose an expensive model for translation, or a translation model for classification.

Fix: Rename models using task-based names (e.g., “Translation Model”, “Support Classifier”) and document which flows should use which model in a short note or internal guide.