FlowDot Recipes

Recipes are multi-step agentic orchestrations that run locally on your machine. Unlike server-side workflows, recipes execute via the FlowDot CLI with full access to your local files and the ability to pause for human approval.

TL;DR - Quick Summary

FlowDot Recipes are agentic workflows for complex, multi-step tasks:

  • Design: Visual step editor on the web platform
  • Execute: flowdot recipes run <alias> "task"
  • 6 Step Types: Agent, Loop, Parallel, Gate, Branch, Invoke
  • Human Gates: Pause for approval with rich input options
  • Stores: Typed data containers for state management
  • Interpolation: {{inputs.request}} and {{stores.key}}
  • Local Execution: Full access to your codebase and files

Recipes vs Workflows

Understanding when to use recipes versus workflows:

Feature        | Workflows                          | Recipes
Execution      | Server-side (FlowDot cloud)        | Local (your machine via CLI)
File Access    | Limited to uploaded files          | Full local file system access
Human Approval | Not supported                      | Gate steps with rich inputs
Duration       | Short-running (timeout limits)     | Long-running (hours/days)
Use Case       | API integrations, data processing  | Code tasks, multi-day projects

Visual Step Editor

The Recipe Editor provides a canvas-based interface for designing your recipe flow:

  • Drag-and-drop steps: Add steps from the sidebar
  • Visual connections: Draw lines between steps
  • Step configuration: Click to edit step properties
  • Store management: Define inputs and outputs
  • Entry point: Set which step runs first

6 Step Types

Recipes support six types of steps for different orchestration patterns:

Agent Step

An LLM agent with tools. Use user_prompt to define the task. The tools array can reference built-ins, local MCP servers, or installed toolkits — see Agent Tools below.

{
  "user_prompt": "Analyze {{inputs.request}} and write results to {{stores.result}}",
  "tools": ["read", "search", "edit", "web-search"],
  "output_store": "result",
  "max_iterations": 10
}

Loop Step

Iterate over an array, running a step for each item. Supports parallel execution.

{
  "loop_over": "files_to_process",
  "loop_variable": "current_file",
  "loop_step_id": "process-step-uuid",
  "parallel": true,
  "max_concurrent": 5
}

Parallel Step

Run multiple steps concurrently. All specified steps execute simultaneously.

{
  "parallel_step_ids": ["step-id-1", "step-id-2", "step-id-3"]
}

Gate Step

Human approval checkpoint with rich input options. Pauses execution until user responds.

{
  "requires_approval": true,
  "approval_prompt": "Review the analysis:\n{{stores.analysis}}",
  "condition": "{{status}} === 'ready'",
  "input_options": {
    "button_mode": "preset",
    "preset": "approve_deny"
  },
  "input_output_store": "decision"
}

Gate input options:

  • preset: "approve_deny", "yes_no", "continue_cancel"
  • buttons: Custom buttons with label, value, style
  • allow_comment: Enable text input field
  • comment_required: Require text input
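
For example, a gate offering custom buttons plus a mandatory comment might look like this. This is a sketch: the `"button_mode": "custom"` value and the button labels are assumptions extrapolated from the preset example above, not a confirmed schema.

```json
{
  "requires_approval": true,
  "approval_prompt": "Choose how to proceed with:\n{{stores.report}}",
  "input_options": {
    "button_mode": "custom",
    "buttons": [
      {"label": "Approve", "value": "approve", "style": "primary"},
      {"label": "Request Changes", "value": "edit", "style": "secondary"},
      {"label": "Reject", "value": "reject", "style": "danger"}
    ],
    "allow_comment": true,
    "comment_required": true
  },
  "input_output_store": "decision"
}
```

The chosen button's value lands in the store named by input_output_store, so a downstream Branch step can route on it.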

Branch Step

Conditional routing based on expressions. Routes to different steps based on conditions.

{
  "conditions": [
    {"expression": "{{severity}} === 'high'", "then": "urgent-step"},
    {"expression": "{{severity}} === 'medium'", "then": "normal-step"}
  ],
  "default": "low-priority-step"
}

Invoke Step

Call another recipe as a sub-routine. Maps inputs and outputs between recipes.

{
  "recipe_hash": "abc123xyz",
  "input_mapping": {
    "target_input": "{{source_store}}"
  },
  "output_mapping": {
    "target_output": "local_store"
  }
}

Agent Tools (Built-in + MCP + Toolkits)

The tools array on an Agent step is not restricted to built-ins. At execution time the CLI runtime resolves each tool reference against three sources and hands the merged list to the LLM:

  • Built-in tools — the 8 tools shipped with the CLI runtime
  • Local MCP servers — any MCP server the user has configured in their CLI environment (stdio or HTTP), e.g. schwab, robinhood, electron-qa, playwright, or a custom server they wrote
  • FlowDot toolkits — any agent toolkit the user has installed on their account, with credentials auto-injected from the encrypted vault

This is what makes recipes portable across users. A recipe that needs browser automation, stock trading, or calendar access doesn't have to hardcode a specific vendor — it references an MCP namespace (or a wildcard) and the runtime resolves whatever the user actually has running.

Built-in Tools

Name            | Purpose
read            | Read files from disk
search          | Search the local codebase
analyze         | Static analysis of code
find-definition | Jump to a function/class definition
web-search      | Search the web
edit            | Edit existing files
execute-command | Run a shell command
create-file     | Create a new file

Tool Reference Syntax

Source                          | Syntax                  | Example
Built-in                        | <name>                  | read
Specific MCP tool               | mcp__<server>__<tool>   | mcp__schwab__get_accounts
All tools from one MCP server   | mcp__<server>__*        | mcp__electron-qa__*
All MCP tools from every server | mcp__*                  | mcp__*
Specific toolkit tool           | toolkit__<name>__<tool> | toolkit__spotify__search_tracks
All tools from one toolkit      | toolkit__<name>__*      | toolkit__trading__*

Mixed Arrays

You can freely mix built-ins, MCP references, and toolkit references in a single tools array:

{
  "user_prompt": "QA the site at {{current_task.url}}: {{current_task.goal}}",
  "tools": [
    "mcp__electron-qa__*",
    "create-file",
    "web-search"
  ],
  "output_store": "qa_results"
}

Lazy Runtime Resolution

Tool references are stored as plain strings in the recipe definition. There is no design-time validation that mcp__schwab__* exists — the CLI expands wildcards at execution time against whatever MCP servers and toolkits are registered in that user's environment. If a required server is absent, the wildcard silently expands to nothing and the agent gets no tools from it.
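
The expansion behavior can be pictured with a small sketch. This is illustrative only: the function and registry shapes here are hypothetical, not the actual CLI internals.

```python
# Hypothetical sketch of wildcard tool resolution. The real CLI's
# internals are not documented here; this only illustrates the
# "silently expands to nothing" behavior described above.

def resolve_tools(refs, builtins, mcp_registry):
    """Expand tool references against the user's local environment.

    refs:         tool strings from the recipe's "tools" array
    builtins:     set of built-in tool names
    mcp_registry: dict of MCP server name -> list of tool names
    """
    resolved = []
    for ref in refs:
        if ref in builtins:
            resolved.append(ref)
        elif ref == "mcp__*":
            # every tool from every registered MCP server
            for server, tools in mcp_registry.items():
                resolved += [f"mcp__{server}__{t}" for t in tools]
        elif ref.startswith("mcp__") and ref.endswith("__*"):
            server = ref[len("mcp__"):-len("__*")]
            # absent server -> empty list: the wildcard expands to nothing
            resolved += [f"mcp__{server}__{t}"
                         for t in mcp_registry.get(server, [])]
        else:
            resolved.append(ref)  # specific mcp__/toolkit__ reference
    return resolved

builtins = {"read", "create-file", "web-search"}
registry = {"electron-qa": ["launch_app", "describe_screen"]}

print(resolve_tools(["mcp__electron-qa__*", "create-file"], builtins, registry))
print(resolve_tools(["mcp__schwab__*"], builtins, registry))  # schwab absent -> []
```

Note the last call: with no schwab server registered, the agent simply receives no tools from that reference and no error is raised.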

Consequences for recipe authors:

  • Document required MCP servers and toolkits in the recipe description so users know what to configure.
  • Design prompts generically where possible — "use the browser tool to visit the URL" — rather than naming a specific vendor's tool.
  • Apply prompt lecturing to MCP tools the same way you would to built-ins. Small models don't know what mcp__electron-qa__perform_action does from the name alone; the prompt must name the individual tools and show example invocations.

Worked Example: Generic Website QA

A recipe that takes a list of QA tasks, fans out across them in parallel, and writes results to disk — using whatever browser-automation MCP the user has configured:

{
  "name": "qa-websites",
  "description": "QAs a list of websites. Requires an electron-qa MCP server to be configured in the CLI environment.",
  "tools": ["mcp__electron-qa__*", "create-file"],
  "user_prompt": "You are QAing a single website task.\n\nTask: {{current_task}}\n\nMANDATORY PROCEDURE:\n1. Call 'mcp__electron-qa__launch_app' with the task's URL.\n2. Call 'mcp__electron-qa__describe_screen' to see what rendered.\n3. Use 'mcp__electron-qa__find_element' + 'mcp__electron-qa__perform_action' to exercise the feature.\n4. Call 'mcp__electron-qa__take_screenshot' and record the returned path.\n5. Call 'mcp__electron-qa__close_app'.\n6. Call 'create-file' to append a JSON line to ./qa-report.jsonl.\n\nDO NOT skip step 5. DO NOT invent selectors you haven't seen in describe_screen output."
}

The same recipe works for any user who has the electron-qa MCP server running — the recipe doesn't care which binary is providing those tools.

Stores Architecture

Stores are typed data containers that hold state throughout recipe execution.

Store Types

  • Input stores (is_input: true): Values provided when running the recipe
  • Output stores (is_output: true): Values returned after execution
  • Internal stores: Temporary values used during execution

The "request" Convention

When users run a recipe with a task argument:

flowdot my-recipe "analyze this codebase"

The CLI passes this as inputs.request. Name your primary input store request to receive it directly.
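
A store block following this convention might look like the sketch below. The is_input/is_output flags come from the Store Types list above; the surrounding field layout is an assumption.

```json
{
  "stores": {
    "request":  {"type": "string", "is_input": true},
    "analysis": {"type": "object"},
    "report":   {"type": "string", "is_output": true}
  }
}
```

Here `analysis` is an internal store (neither flag set), so it exists only during execution, while `report` is returned to the caller when the run finishes.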

Data Types

Stores can be typed as: any, string, number, boolean, array, object

Template Interpolation

Reference stores and inputs in your step configurations using double-brace syntax:

  • {{inputs.request}} - Access CLI task argument
  • {{stores.result}} - Reference a store value
  • {{step.step_id}} - Access output from a previous step

Example Usage

// In an agent step's user_prompt:
"Analyze the code in {{inputs.request}} and save findings to {{stores.analysis}}"

// In a branch condition:
"{{stores.severity}} === 'critical'"

// In a gate approval_prompt:
"Review the following changes:\n{{stores.proposed_changes}}"
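
The resolution rule can be approximated with a short sketch. This is hypothetical: the CLI's actual template engine is not shown here, only the dotted-path lookup behavior implied by the examples above.

```python
import re

# Hypothetical sketch of double-brace interpolation against a context
# of inputs and stores. The real template engine may differ.

def interpolate(template, context):
    """Replace {{path.to.value}} with values looked up in context."""
    def lookup(match):
        value = context
        for part in match.group(1).split("."):
            value = value[part]  # walk the dotted path
        return str(value)
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", lookup, template)

ctx = {
    "inputs": {"request": "src/main.py"},
    "stores": {"analysis": "3 issues found"},
}

print(interpolate("Analyze {{inputs.request}}: {{stores.analysis}}", ctx))
# Analyze src/main.py: 3 issues found
```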

Step Connections

Steps are connected using next and on_error pointers:

  • next: The step to run after successful completion
  • on_error: The step to run if an error occurs

The recipe must have an entry_step_id set to define which step runs first.

// Example flow:
analyze-step (entry)
  → next: review-gate
    → next: implement-step
      → next: test-step (output)
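
In JSON form, the same flow might be wired as below. This is a sketch: the exact field placement is an assumption based on the next/on_error pointers described above, and the cleanup-step target is hypothetical.

```json
{
  "entry_step_id": "analyze-step",
  "steps": {
    "analyze-step":   {"next": "review-gate", "on_error": "cleanup-step"},
    "review-gate":    {"next": "implement-step"},
    "implement-step": {"next": "test-step"},
    "test-step":      {}
  }
}
```

A step with no next pointer, like test-step here, ends the recipe.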

CLI Execution

Recipes are designed on the web but executed locally via the FlowDot CLI.

Link a Recipe

# Link a recipe hash to a friendly alias
flowdot recipes link abc123xyz my-recipe

Run a Recipe

# Using the full command
flowdot recipes run my-recipe "your task here"

# Or using the shortcut
flowdot my-recipe "your task here"

# With input JSON
flowdot recipes run my-recipe --input '{"key": "value"}'

Recipe Management

# List linked recipes
flowdot recipes list

# Unlink a recipe
flowdot recipes unlink my-recipe

Example Recipes

Code Review Recipe

Analyze code, generate review, get approval, apply fixes

Steps:
1. Agent: Analyze codebase → stores.analysis
2. Agent: Generate review → stores.review
3. Gate: Approve review (approve_deny)
4. Branch: If approved → Agent: Apply fixes
5. Agent: Run tests → stores.test_results
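
Those five steps could be wired roughly as follows. This skeleton is illustrative: the step IDs and the "type" field are assumptions, and only the routing-relevant fields are shown.

```json
{
  "name": "code-review",
  "entry_step_id": "analyze",
  "steps": {
    "analyze": {"type": "agent", "output_store": "analysis", "next": "review"},
    "review":  {"type": "agent", "output_store": "review", "next": "approve"},
    "approve": {
      "type": "gate",
      "input_options": {"button_mode": "preset", "preset": "approve_deny"},
      "input_output_store": "decision",
      "next": "route"
    },
    "route": {
      "type": "branch",
      "conditions": [
        {"expression": "{{stores.decision}} === 'approve'", "then": "fix"}
      ],
      "default": "test"
    },
    "fix":  {"type": "agent", "next": "test"},
    "test": {"type": "agent", "output_store": "test_results"}
  }
}
```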

Multi-File Processing Recipe

Process multiple files in parallel with final summary

Steps:
1. Agent: List files → stores.files (array)
2. Loop: For each file (parallel)
   → Agent: Process file → stores.results
3. Agent: Generate summary → stores.summary

Research & Report Recipe

Research topic, compile findings, get approval, publish

Steps:
1. Agent: Research topic (web-search) → stores.research
2. Agent: Compile report → stores.report
3. Gate: Review report (custom buttons: Approve/Edit/Reject)
4. Branch: Based on decision
   → Approve: Agent: Publish
   → Edit: Agent: Revise → Loop back to Gate
   → Reject: End