
    Agentic AI design patterns

    Four foundational patterns that make AI agents actually work in production. Each pattern can be used alone or composed with others. Start with the simplest one that solves your problem.

    Updated . 2026-05-02 . 8 min read

    Pattern 01

    Reflection

    The agent reviews its own output, identifies errors or gaps, and iterates. A critic prompt evaluates the work, and the generator prompt revises based on feedback.

    How it works

    1. Generator produces initial output
    2. Critic evaluates against criteria (accuracy, completeness, format)
    3. Generator receives critique and produces improved version
    4. Loop continues until quality threshold is met or budget is exhausted
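    The loop above can be sketched as follows. This is a minimal illustration, not a production implementation: `generate` and `critique` are hypothetical stubs standing in for real LLM calls, so the control flow runs end to end.

```python
def generate(task, feedback=None):
    # Stub: a real implementation would call an LLM with the task
    # (plus the prior critique, if any) and return a draft.
    return f"draft for {task!r}" + (" (revised)" if feedback else "")

def critique(output):
    # Stub: a real critic prompt would score the draft against criteria
    # (accuracy, completeness, format) and return (passed, feedback).
    passed = "revised" in output
    return passed, None if passed else "needs revision"

def reflect(task, max_iters=4):
    feedback = None
    for _ in range(max_iters):      # budget: each pass is one extra LLM call
        output = generate(task, feedback)
        passed, feedback = critique(output)
        if passed:                  # quality threshold met
            return output
    return output                   # budget exhausted: return best effort
```

    Note the explicit `max_iters` budget: without it, a critic that never passes the draft would loop forever.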

    Production example

    Code generation agent writes a function, runs unit tests, reads failures, and rewrites until tests pass. Each iteration costs one LLM call but dramatically improves first-attempt success rates.

    When to use

    When output quality matters and you have measurable success criteria. Code, writing, data extraction, structured output generation.

    When to skip

    When latency is critical and a 'good enough' first attempt is acceptable. Real-time chat, simple lookups.

    Pattern 02

    Tool Use

    The agent is given access to external tools (APIs, databases, calculators, code interpreters) and decides when to use them. The LLM generates structured tool calls, receives results, and incorporates them.

    How it works

    1. User provides a query that requires external data or computation
    2. LLM decides which tool to call and generates structured arguments
    3. Tool executes and returns results
    4. LLM incorporates results into its response
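    A single turn of this flow can be sketched like so. The tool registry and `fake_llm_tool_call` are hypothetical: in a real system, the model itself emits the tool name and JSON-shaped arguments.

```python
# Hypothetical tool registry: name -> callable.
TOOLS = {
    "get_order_status": lambda order_id: {"order_id": order_id, "status": "shipped"},
    "calculate_refund": lambda amount, rate: round(amount * rate, 2),
}

def fake_llm_tool_call(query):
    # Stub for the model's decision: which tool, with which arguments.
    if "order" in query:
        return {"name": "get_order_status", "arguments": {"order_id": "A123"}}
    return {"name": "calculate_refund", "arguments": {"amount": 80.0, "rate": 0.5}}

def run_turn(query):
    call = fake_llm_tool_call(query)                    # step 2: model picks a tool
    result = TOOLS[call["name"]](**call["arguments"])   # step 3: execute the call
    # Step 4: a real agent would feed `result` back into the LLM's context;
    # here we simply return it.
    return result
```

    Dispatching through a registry keyed by tool name mirrors how function-calling APIs work: the model only ever produces data (name plus arguments), and your code controls execution.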

    Production example

    Customer support agent checks order status via API, calculates the refund amount, and updates the CRM record, all within a single conversation turn.

    When to use

    When the agent needs real-time data, computation, or the ability to take actions in external systems.

    When to skip

    When all information is available in the prompt context. Adding tools to a pure knowledge-retrieval task adds complexity without benefit.

    Pattern 03

    Planning

    Before acting, the agent creates an explicit plan: a sequence of steps to achieve the goal. The plan can be revised as execution reveals new information.

    How it works

    1. Agent receives a complex goal
    2. Planner LLM decomposes goal into ordered steps
    3. Executor carries out each step, reporting results
    4. Planner revises remaining steps based on execution results
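    The plan-then-execute loop can be sketched as below. `make_plan`, `execute_step`, and `revise_plan` are hypothetical stand-ins for planner and executor LLM calls; only the control flow is real.

```python
def make_plan(goal):
    # Stub planner: decompose the goal into an ordered list of steps.
    return [f"step {i} of {goal!r}" for i in (1, 2, 3)]

def execute_step(step):
    # Stub executor: returns (success, observation).
    return True, f"done: {step}"

def revise_plan(remaining, observation):
    # Stub: a real planner would rewrite the remaining steps based on
    # what execution revealed (e.g. an unexpected schema).
    return remaining

def run(goal):
    plan = make_plan(goal)
    log = []
    while plan:
        step, *plan = plan              # pop the next step
        ok, obs = execute_step(step)
        log.append(obs)
        if not ok:
            plan = revise_plan(plan, obs)   # step 4: revise on surprises
    return log
```

    The key design point is that the plan is mutable state: execution results can rewrite the remaining steps rather than blindly running the original list to completion.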

    Production example

    Data migration agent: plans extraction order based on foreign key dependencies, executes table-by-table, revises plan if a table has unexpected schema or data quality issues.

    When to use

    Complex tasks with dependencies between steps. Workflows where doing steps in the wrong order causes failures or wasted work.

    When to skip

    Simple, single-step tasks. Planning overhead is not worth it for straightforward queries.

    Pattern 04

    Multi-Agent Collaboration

    Multiple specialized agents work together. A supervisor or orchestrator routes subtasks to the right specialist. Agents can hand off context, debate, or work in parallel.

    How it works

    1. Supervisor receives complex task
    2. Supervisor decomposes and routes subtasks to specialist agents
    3. Specialists execute and return results
    4. Supervisor synthesizes results into final output
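    A stripped-down version of the claims pipeline shape: the specialists here are hypothetical plain functions, where in production each would be its own agent with its own prompt, tools, and possibly its own model.

```python
# Hypothetical specialists, keyed by role.
SPECIALISTS = {
    "intake": lambda doc: {"claim_id": "C-1", "amount": 1200},   # extract claim data
    "fraud":  lambda claim: {"risk": "low"},                     # score risk
    "draft":  lambda claim, score: (
        f"Claim {claim['claim_id']}: risk {score['risk']}, "
        f"amount {claim['amount']}"
    ),                                                           # draft the reply
}

def supervisor(document):
    # Decompose and route: each subtask goes to the right specialist,
    # and outputs of earlier specialists feed later ones.
    claim = SPECIALISTS["intake"](document)
    score = SPECIALISTS["fraud"](claim)
    return SPECIALISTS["draft"](claim, score)   # synthesize final output
```

    Even in this toy form, the routing logic lives in one place (the supervisor), which is what makes multi-agent systems debuggable: you can inspect each handoff.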

    Production example

    Insurance claims pipeline: intake agent extracts claim data from documents, fraud detection agent scores risk, routing agent assigns to the right adjuster, communication agent drafts the policyholder response.

    When to use

    When the problem space is broad enough that a single prompt cannot handle all aspects well. When different steps need different tools, models, or system prompts.

    When to skip

    When a single well-prompted agent can handle the full task. Multi-agent adds coordination overhead and debugging complexity.

    Composing patterns

    In practice, you combine them.

    The most capable production agents compose multiple patterns. A multi-agent system where each specialist uses tool calling and the supervisor uses planning. A code agent that combines tool use (code interpreter) with reflection (test execution and self-correction).

    The key is to start simple. Add reflection when quality matters. Add planning when tasks get complex. Add multi-agent when a single agent can't handle the breadth. Each layer adds capability but also adds debugging surface area.
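    As a concrete (and deliberately toy) illustration of composition, here is reflection wrapped around tool use: the "tool" is a test runner, and a stub generator revises the code until the tests pass. All names are hypothetical.

```python
def run_tests(code):
    # Tool use: execute the candidate against a check.
    # Stubbed as a string test; a real agent would invoke a code interpreter.
    return "fixed" in code

def generate_code(task, saw_failure=False):
    # Stub generator: a real agent would call an LLM with the failure log.
    return "fixed version" if saw_failure else "first attempt"

def code_agent(task, budget=3):
    saw_failure = False
    for _ in range(budget):
        code = generate_code(task, saw_failure)  # reflection: revise on failure
        if run_tests(code):                      # tool use: run the test suite
            return code
        saw_failure = True
    return code                                  # budget exhausted
```

    The same budget discipline from the reflection pattern applies here: composed systems multiply LLM calls, so every loop needs an explicit cap.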

    Next step

    Ready to apply these patterns?

    In 30 minutes, we'll map your use case to the right pattern combination and give you an honest build estimate.