Notice: AgentCogito.com is an independent educational reference. We are not affiliated with any AI vendor, framework maintainer, or research group. This site does not offer products for sale, does not receive vendor sponsorship, and does not publish ranked product recommendations. Framework documentation links are provided for reader reference only.
II. Patterns
Last verified April 2026 - 9 sources

AI Agent Architectural Patterns: A Reference

Five canonical patterns drawn from the research literature and framework documentation. Patterns transfer across frameworks; frameworks come and go.

Why patterns matter

Frameworks turn over fast. The pattern underneath does not. A reader who has internalised ReAct, planner-executor, and the reflection family can read any new framework’s documentation in an afternoon and understand what it is doing. The patterns are the persistent vocabulary; the frameworks are the implementations of the moment.

The five patterns below cover most of what production agents do. They are not mutually exclusive; production systems combine them routinely (a supervisor multi-agent whose sub-agents each use ReAct internally, with an outer reflection loop that critiques the supervisor’s plan).

Yao et al. 2022, arXiv:2210.03629
ReAct

Interleaves reasoning and acting. The model alternates Thought and Action traces, observes the result, and continues until the goal is met. The most-cited LLM-agent pattern in the literature.

Read the reference
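The loop is simple enough to sketch in a few lines. Here `fake_model` is a hypothetical stand-in for a real LLM call, and the `lookup` tool is illustrative; a real agent would prompt a model for each Thought/Action pair.

```python
# Sketch of the ReAct loop: the model alternates Thought/Action text,
# the host parses the Action, executes it, and appends an Observation.

def fake_model(trace):
    # Hypothetical stand-in for an LLM call over the running trace.
    if not any(step.startswith("Observation") for step in trace):
        return "Thought: I need the capital.\nAction: lookup[France]"
    return "Thought: I have the answer.\nAction: finish[Paris]"

TOOLS = {"lookup": lambda arg: {"France": "Paris"}.get(arg, "unknown")}

def react(question, max_steps=5):
    trace = [f"Question: {question}"]
    for _ in range(max_steps):
        output = fake_model(trace)
        trace.append(output)
        action = output.rsplit("Action: ", 1)[1]     # e.g. "lookup[France]"
        name, arg = action.rstrip("]").split("[", 1)
        if name == "finish":                         # terminal action
            return arg, trace
        trace.append(f"Observation: {TOOLS[name](arg)}")
    return None, trace

answer, trace = react("What is the capital of France?")
```

Note that the host, not the model, executes every action; the model only ever emits text.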
LangChain, AutoGen documentation
Plan-and-Execute

Decompose-then-act. The model produces a full plan up front in one LLM call, then a separate executor (often using ReAct internally) runs each step. Cheaper and more auditable than ReAct, but less adaptive to mid-task surprises.

Read the reference
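The two-phase structure can be sketched as follows. `fake_planner` and `fake_executor` are hypothetical stand-ins for the planning and execution LLM calls; the step strings are illustrative.

```python
# Sketch of plan-and-execute: one up-front planning call, then a plain
# loop over the steps. The plan itself is the audit artifact.

def fake_planner(goal):
    # Stand-in for the single planning LLM call: returns the full step list.
    return ["fetch the data", "summarise the data", "draft the report"]

def fake_executor(step, context):
    # Stand-in for the executor (often a ReAct sub-agent in practice).
    return f"done: {step} (given {len(context)} prior results)"

def plan_and_execute(goal):
    plan = fake_planner(goal)          # planning happens exactly once
    results = []
    for step in plan:                  # execution never re-plans
        results.append(fake_executor(step, results))
    return plan, results

plan, results = plan_and_execute("produce the quarterly report")
```

Because the plan is fixed before execution starts, it can be logged, reviewed, or shown to a human before any step runs, which is where the auditability advantage comes from.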
Madaan et al. 2023, Shinn et al. 2023
Reflection / Reflexion / Self-Refine

Iterative self-correction. The agent produces a candidate, critiques it, revises, and repeats. Self-Refine is single-model improvement within one episode; Reflexion adds verbal reinforcement across episodes; the generic reflection pattern wraps a critique step around any inner loop.

Read the reference
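The Self-Refine variant reduces to a generate-critique-revise loop. The three `fake_*` functions below are hypothetical stand-ins for calls to the same model in three roles; a real critique would return natural-language feedback.

```python
# Sketch of Self-Refine: generate a candidate, critique it, revise with
# the feedback, and stop when the critique passes (or the budget runs out).

def fake_generate(task):
    return "draft v1"

def fake_critique(candidate):
    # Stand-in critic: returns feedback, or None when acceptable.
    return "too short" if candidate.endswith("v1") else None

def fake_revise(candidate, feedback):
    return candidate.replace("v1", "v2") + f" (fixed: {feedback})"

def self_refine(task, max_rounds=3):
    candidate = fake_generate(task)
    for _ in range(max_rounds):        # bounded critique budget
        feedback = fake_critique(candidate)
        if feedback is None:
            break
        candidate = fake_revise(candidate, feedback)
    return candidate

final = self_refine("write the introduction")
```

The `max_rounds` bound matters: each round is at least two extra model calls, which is the token-cost multiplier noted in the selection guide below.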
Wu et al. 2023, arXiv:2308.08155
Multi-Agent Orchestration

Specialist agents coordinated by a supervisor or peer network. Four canonical topologies: supervisor/hierarchical, peer/mesh, pipeline/sequential, blackboard/shared-memory.

Read the reference
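The supervisor topology, the most common of the four, can be sketched as a router over a registry of specialists. The agent names and the keyword routing rule here are illustrative; a real supervisor would itself be an LLM call deciding the route.

```python
# Sketch of supervisor orchestration: the supervisor only routes; each
# specialist (a lambda here, a full sub-agent in practice) does the work.

SPECIALISTS = {
    "research": lambda task: f"[research] notes on {task}",
    "code":     lambda task: f"[code] patch for {task}",
}

def supervisor(tasks):
    results = []
    for task in tasks:
        # Illustrative routing rule; a real supervisor would use a model.
        agent = "code" if "bug" in task else "research"
        results.append(SPECIALISTS[agent](task))
    return results

outputs = supervisor(["api rate limits", "fix the login bug"])
```

The peer/mesh, pipeline, and blackboard topologies differ only in who talks to whom: peers message each other directly, a pipeline chains specialists in sequence, and a blackboard replaces messaging with shared memory.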
Schick et al. 2023, OpenAI/Anthropic docs
Tool Use / Function Calling

The underlying capability all other patterns depend on. The model calls external tools by emitting structured arguments; the host code executes them. Two flavours: structured function-calling and freeform ReAct-style tool use.

Read the reference
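The structured flavour can be sketched as a dispatch over a tool registry. `fake_model_turn` hard-codes the JSON a real model would emit, and the `get_weather` tool and its return value are invented for illustration.

```python
# Sketch of structured function calling: the model emits a JSON tool call;
# host code validates the name and dispatches with the parsed arguments.
import json

TOOLS = {
    "get_weather": lambda city: {"city": city, "temp_c": 18},
}

def fake_model_turn(prompt):
    # Stand-in for a model response; real APIs return an equivalent object.
    return json.dumps({"name": "get_weather", "arguments": {"city": "Oslo"}})

def run_tool_call(prompt):
    call = json.loads(fake_model_turn(prompt))
    fn = TOOLS.get(call["name"])
    if fn is None:                       # never execute an unknown tool name
        raise ValueError(f"unknown tool: {call['name']}")
    return fn(**call["arguments"])       # the host executes, not the model

result = run_tool_call("What's the weather in Oslo?")
```

The validation step is the point: the model proposes, the host disposes, and anything outside the registry is rejected.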

Selection guide

Pattern choice follows from task characteristics. The questions below are the ones that, in practice, determine which pattern fits.

  • Is the plan stable? If the steps to reach the goal are knowable up front, plan-and-execute is cheaper and more auditable. If the plan must adapt to mid-task surprises, ReAct’s interleaved reasoning fits better.
  • Is tool latency high? If each tool call takes seconds or minutes (a long database query, a remote API), plan-and-execute amortises planning cost across slow execution. If tools are fast, ReAct’s extra LLM calls are not a bottleneck.
  • Is the task error-sensitive? Tasks where a single wrong action is hard to undo (sending an email, executing a trade) benefit from reflection or human-in-the-loop interrupts before action.
  • Are sub-tasks parallelisable? If the work decomposes into independent sub-tasks, a multi-agent topology with parallel execution beats a single ReAct loop.
  • Does the iteration budget allow critique? Reflection patterns multiply token cost; they fit reasoning-heavy tasks where quality matters more than latency.
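The checklist above can be condensed into a small decision helper. The boolean inputs mirror the five questions; the ordering and combinations are a rough heuristic, not an empirical rule.

```python
# Sketch of the selection guide as a function: answers to the five
# questions map to a (possibly stacked) list of pattern suggestions.

def suggest_patterns(stable_plan, slow_tools, error_sensitive,
                     parallel_subtasks, critique_budget):
    patterns = []
    if parallel_subtasks:                       # independent sub-tasks
        patterns.append("multi-agent")
    if stable_plan or slow_tools:               # knowable steps / latency
        patterns.append("plan-and-execute")
    else:                                       # mid-task surprises
        patterns.append("react")
    if error_sensitive and critique_budget:     # hard-to-undo actions
        patterns.append("reflection")
    return patterns
```

For example, `suggest_patterns(True, False, False, False, False)` yields `["plan-and-execute"]`, while a parallel, error-sensitive task with budget for critique stacks three patterns, which anticipates the point of the next section.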

Patterns combine

The most common production architecture as of 2026 is layered: a supervisor agent at the top whose only job is planning, specialist sub-agents below that each use ReAct (with tool calling) for their narrow task, and an outer reflection loop that critiques the supervisor’s output before returning to the user. This is not five separate patterns competing; it is five composable primitives stacked.
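That layering can be sketched end to end. All three functions below are hypothetical stand-ins for model calls; in a real system `specialist` would be a full ReAct loop with tool access and `reflect` a critique model.

```python
# Sketch of the layered stack: a planning-only supervisor, specialist
# sub-agents per step, and an outer reflection pass before returning.

def plan(goal):
    # Supervisor stand-in: its only job is producing the step list.
    return ["gather facts", "write summary"]

def specialist(step):
    # Sub-agent stand-in: would run ReAct with tool calling internally.
    return f"result of '{step}'"

def reflect(draft):
    # Outer critique stand-in: revises the draft if a check fails.
    return draft if "summary" in draft else draft + " [revised]"

def layered_agent(goal):
    results = [specialist(step) for step in plan(goal)]
    return reflect("; ".join(results))

final = layered_agent("brief the team on Q3")
```

Each layer is one of the five primitives above, swapped in wholesale; replacing the list-comprehension executor with a parallel multi-agent dispatch changes the topology without touching the planner or the reflection pass.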

Sources and Further Reading

  1. S. Yao et al., ReAct: Synergizing Reasoning and Acting, arXiv:2210.03629 (2022).
  2. N. Shinn et al., Reflexion: Language Agents with Verbal Reinforcement Learning, arXiv:2303.11366 (2023).
  3. A. Madaan et al., Self-Refine: Iterative Refinement with Self-Feedback, arXiv:2303.17651 (2023).
  4. T. Schick et al., Toolformer: Language Models Can Teach Themselves to Use Tools, arXiv:2302.04761 (2023).
  5. Q. Wu et al., AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation, arXiv:2308.08155 (2023).
  6. L. Wang et al., A Survey on LLM-based Autonomous Agents, arXiv:2308.11432 (2023).
  7. Z. Xi et al., The Rise and Potential of LLM-Based Agents, arXiv:2309.07864 (2023).
  8. Anthropic, Building effective agents (2024).
  9. LangChain blog, Planning Agents.