

Overview

The AdvisorSwarm implements the advisor strategy described in Anthropic’s research (April 2026). It pairs a cheaper executor model, which drives the task end-to-end and runs on every turn, with a more powerful advisor model that is consulted on demand between executor turns, when budget allows. Both agents read from and write to the same shared conversation context. The advisor never calls tools and never produces user-facing output. The swarm is provider-agnostic: any model supported by LiteLLM works for either role. It follows this workflow:
  1. User task goes into the shared conversation
  2. Before each executor turn, the advisor reads the full shared context and provides guidance (if budget allows)
  3. The executor reads the full shared context (including any advisor guidance) and produces output
  4. Both advisor guidance and executor output are added to the shared conversation
  5. Repeat for max_loops executor turns
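The steps above can be sketched in plain Python. Everything here is a stand-in: `advise` and `execute` are hypothetical callables representing the two model calls, and the real swarm uses LiteLLM-backed agents and a Conversation object rather than a list of strings.

```python
def run_advisor_swarm(task, advise, execute, max_advisor_uses=3, max_loops=1):
    """Sketch of the executor-driven loop with an on-demand advisor."""
    conversation = [f"User: {task}"]            # shared context both roles read
    advisor_budget = max_advisor_uses
    for _ in range(max_loops):
        if advisor_budget > 0:                  # consult the advisor only while budget remains
            context = "\n".join(conversation)
            conversation.append(f"Advisor: {advise(context)}")
            advisor_budget -= 1
        context = "\n".join(conversation)       # executor sees any fresh guidance
        conversation.append(f"Executor: {execute(context)}")
    return conversation
```

With `max_advisor_uses=1` and `max_loops=3`, this produces one advisor message followed by three executor messages, matching the budget behavior described below.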

Installation

pip install -U swarms

Key Features

  • Executor-Driven Loop: The executor runs every turn; it is the main driver
  • On-Demand Advisor: The advisor is consulted between turns, not in a fixed sequence
  • Shared Context: Both agents read from and write to the same conversation
  • Budget Control: max_advisor_uses caps advisor consultations per run
  • Provider-Agnostic: Any LiteLLM-supported model works for either role
  • Custom Agents: Pass pre-configured agents with tools, MCP, or any Agent settings

Attributes

  • name (str, default: "AdvisorSwarm"): Human-readable name
  • description (str, default: "An executor-advisor swarm..."): Description of the swarm’s purpose
  • executor_model_name (str, default: "claude-sonnet-4-6"): Model for the executor agent
  • advisor_model_name (str, default: "claude-opus-4-6"): Model for the advisor agent
  • executor_system_prompt (str, default: built-in): System prompt for the executor
  • advisor_system_prompt (str, default: built-in): System prompt for the advisor
  • max_advisor_uses (int, default: 3): Max advisor consultations per run(). 0 = executor runs alone.
  • max_loops (int, default: 1): Number of executor turns
  • output_type (OutputType, default: "dict-all-except-first"): Format for output (dict, str, list, final, json, yaml)
  • verbose (bool, default: False): Enable detailed logging
  • executor_agent (Agent, default: None): Pre-configured Agent for execution (e.g., with tools or MCP)
  • advisor_agent (Agent, default: None): Pre-configured Agent for advising
  • tools (List[Callable], default: None): Tools available to the executor agent only
Raises:
  • ValueError: If max_advisor_uses < 0, max_loops < 1, or model names are empty

Methods

run()

Execute the advisor-executor orchestration flow.
def run(self, task: str, img: str = None, imgs: List[str] = None) -> Any
Parameters:
  • task (str): The task to accomplish
  • img (str, optional): Optional single image input
  • imgs (List[str], optional): Optional list of image inputs
Returns: Formatted conversation history according to output_type

batched_run()

Run the swarm on multiple tasks sequentially.
def batched_run(self, tasks: List[str]) -> List[Any]
Parameters:
  • tasks (List[str]): List of task strings
Returns: List of results, one per task
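"Sequentially" here just means the tasks are processed one after another, with results returned in input order. A minimal sketch of that contract (`batched_run_sketch` is a hypothetical stand-in; the real method calls run() on self):

```python
def batched_run_sketch(run, tasks):
    # Apply run() to each task in order; the results list matches the input order.
    return [run(task) for task in tasks]
```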

Usage Examples

Basic Usage

from swarms import AdvisorSwarm

swarm = AdvisorSwarm(
    executor_model_name="claude-sonnet-4-6",
    advisor_model_name="claude-opus-4-6",
    max_advisor_uses=3,
    max_loops=1,
    verbose=True,
)

result = swarm.run(
    "Write a Python function that implements binary search on a sorted list. "
    "Include proper error handling, type hints, and edge cases."
)

print(result)

Multi-Turn with Advisor Guidance

Run the executor for multiple turns, with the advisor providing guidance before each:
from swarms import AdvisorSwarm

swarm = AdvisorSwarm(
    executor_model_name="claude-sonnet-4-6",
    advisor_model_name="claude-opus-4-6",
    max_advisor_uses=3,
    max_loops=3,
)

result = swarm.run("Design and implement a REST API rate limiter in Python")

Custom Executor with Tools

Pass a pre-configured executor agent with tools while keeping the advisor tool-free:
from swarms import Agent, AdvisorSwarm


def write_file(filename: str, content: str) -> str:
    """Write content to a file."""
    with open(filename, "w") as f:
        f.write(content)
    return f"Written: {filename}"


executor = Agent(
    agent_name="Executor",
    model_name="claude-sonnet-4-6",
    max_loops=1,
    tools=[write_file],
)

swarm = AdvisorSwarm(
    executor_agent=executor,
    advisor_model_name="claude-opus-4-6",
)

result = swarm.run("Create a Python module for string manipulation utilities")

Executor Only (No Advisor)

Set max_advisor_uses=0 to run the executor alone:
from swarms import AdvisorSwarm

swarm = AdvisorSwarm(
    executor_model_name="claude-sonnet-4-6",
    advisor_model_name="claude-opus-4-6",
    max_advisor_uses=0,
    max_loops=1,
)

result = swarm.run("Simple task that doesn't need advisor guidance")

Different Providers

The swarm is provider-agnostic. Use any models LiteLLM supports:
from swarms import AdvisorSwarm

# OpenAI models
swarm = AdvisorSwarm(
    executor_model_name="gpt-4.1-mini",
    advisor_model_name="gpt-4.1",
)

# Mix providers
swarm = AdvisorSwarm(
    executor_model_name="gpt-4.1-mini",
    advisor_model_name="claude-opus-4-6",
)

Architecture Details

Shared Context

Both agents read from and write to the same Conversation object. This mirrors the Anthropic diagram where the advisor reads the same context as the executor. On each turn:
  1. The advisor reads conversation.get_str() — sees everything so far
  2. The advisor’s guidance is added to the conversation
  3. The executor reads conversation.get_str() — sees the task, any prior output, and the advisor’s guidance
  4. The executor’s output is added to the conversation
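A toy stand-in for the shared Conversation object makes the read/write order concrete. The real class lives in swarms and does more; here `get_str` simply joins the log:

```python
class SharedConversation:
    """Append-only log that both the advisor and the executor read and write."""
    def __init__(self):
        self.messages = []

    def add(self, role, content):
        self.messages.append(f"{role}: {content}")

    def get_str(self):
        return "\n".join(self.messages)

conv = SharedConversation()
conv.add("User", "the task")       # the user task enters the shared log
advisor_view = conv.get_str()      # step 1: advisor reads everything so far
conv.add("Advisor", "guidance")    # step 2: guidance is appended
executor_view = conv.get_str()     # step 3: executor sees task + guidance
conv.add("Executor", "output")     # step 4: executor output is appended
```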

Advisor Budget

The max_advisor_uses parameter controls how many times the advisor is consulted:
  • max_advisor_uses=0, max_loops=1: Executor runs alone, no advisor
  • max_advisor_uses=1, max_loops=1: Advisor guides once, executor runs once
  • max_advisor_uses=3, max_loops=3: Advisor guides before each of the 3 executor turns
  • max_advisor_uses=1, max_loops=3: Advisor guides the first turn only; executor runs 3 turns
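The budget rows above follow one rule: the advisor is consulted before each executor turn, starting from the first, until the budget runs out. A small hypothetical helper expresses that rule:

```python
def advisor_turns(max_advisor_uses, max_loops):
    """Return the 1-based executor turns that receive advisor guidance:
    turns 1..min(budget, loops), since the budget is spent from the first turn on."""
    return list(range(1, min(max_advisor_uses, max_loops) + 1))
```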

Multi-Turn Execution

When max_loops > 1, the executor runs multiple turns. Each turn, it reads the full conversation — including its own previous output and any advisor guidance — so it can build on prior work. The advisor’s budget is distributed across turns: it is consulted before each executor turn until the budget is exhausted.

Source Code

View the source code on GitHub