Documentation Index
Fetch the complete documentation index at: https://docs.swarms.world/llms.txt
Use this file to discover all available pages before exploring further.
SequentialWorkflow exposes two streaming methods that yield tokens from each agent in pipeline order, in real time. Each agent’s tokens are streamed the moment the LLM produces them; once an agent finishes, its full output is handed off to the next agent — the same hand-off as `run()`, just streamed.
- `workflow.run_stream(task)` — sync generator
- `workflow.arun_stream(task)` — async generator
- Pass `with_events=True` to either to receive structured `agent_start`/`token`/`agent_end` events instead of plain token strings.
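A minimal consumption sketch. `fake_run_stream` below is a stand-in for `workflow.run_stream(task)` so the shape is runnable without the swarms package or an LLM; the real method yields token strings the same way, so only the generator source differs.

```python
from typing import Iterator

def fake_run_stream(task: str) -> Iterator[str]:
    # Stand-in for workflow.run_stream(task): two "agents" each emit
    # a few tokens, in pipeline order.
    for token in ["Research: ", "done. ", "Draft: ", "done."]:
        yield token

def collect(stream: Iterator[str]) -> str:
    # Plain-token mode: concatenate (or print) tokens as they arrive.
    return "".join(stream)

print(collect(fake_run_stream("Summarize Q3 results")))
```

With the real workflow you would simply replace `fake_run_stream(task)` with `workflow.run_stream(task)`.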
Building the Pipeline
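A hypothetical construction sketch, assuming the common swarms pattern of passing an `agents` list to `SequentialWorkflow`. The agent names, prompts, and constructor arguments here are illustrative, not part of the streaming API described on this page; check your installed version for exact parameters.

```python
# Pipeline spec as plain data; names and prompts are illustrative.
AGENT_SPECS = [
    ("Researcher", "Gather facts relevant to the task."),
    ("Writer", "Draft a response from the research notes."),
    ("Editor", "Polish the draft for clarity."),
]

def build_workflow():
    # Requires the swarms package and LLM credentials at runtime;
    # imported lazily so the spec above stays importable anywhere.
    from swarms import Agent, SequentialWorkflow
    agents = [Agent(agent_name=name, system_prompt=prompt)
              for name, prompt in AGENT_SPECS]
    return SequentialWorkflow(agents=agents)
```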
Sync Streaming
Plain token strings, yielded in pipeline order. Agent 1’s tokens stream first, then Agent 2’s, then Agent 3’s.

Async Streaming
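An async consumption sketch. `fake_arun_stream` is a stand-in for `workflow.arun_stream(task)` so the example runs without an LLM; the real async generator is consumed the same way with `async for`.

```python
import asyncio
from typing import AsyncIterator

async def fake_arun_stream(task: str) -> AsyncIterator[str]:
    # Stand-in for workflow.arun_stream(task).
    for token in ["Hello", ", ", "world"]:
        yield token

async def consume(task: str) -> str:
    chunks = []
    async for token in fake_arun_stream(task):
        chunks.append(token)  # render each token as it arrives
    return "".join(chunks)

print(asyncio.run(consume("demo")))
```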
Structured Events with with_events=True
By default the stream yields plain token strings. Pass with_events=True to receive event dicts instead — useful when you want to render a separate panel per agent, attribute every token to the emitting agent, or know exactly when each agent starts and finishes.
| Type | Fields | When emitted |
|---|---|---|
| `agent_start` | `agent` | Right before an agent begins streaming |
| `token` | `agent`, `token` | For every token the agent emits |
| `agent_end` | `agent`, `output` | After the agent finishes; carries the full output |
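A sketch of demultiplexing the event stream into per-agent panels, following the table above. It assumes each event dict carries a `type` key alongside the listed fields; the sample events are hand-written stand-in data, not real workflow output.

```python
from collections import defaultdict

# Stand-in for the events yielded by run_stream(task, with_events=True).
sample_events = [
    {"type": "agent_start", "agent": "Researcher"},
    {"type": "token", "agent": "Researcher", "token": "fact "},
    {"type": "token", "agent": "Researcher", "token": "found"},
    {"type": "agent_end", "agent": "Researcher", "output": "fact found"},
    {"type": "agent_start", "agent": "Writer"},
    {"type": "token", "agent": "Writer", "token": "draft"},
    {"type": "agent_end", "agent": "Writer", "output": "draft"},
]

def demux(events):
    panels = defaultdict(str)  # streamed text per agent, for live rendering
    finals = {}                # full output per agent, from agent_end
    for ev in events:
        if ev["type"] == "token":
            panels[ev["agent"]] += ev["token"]
        elif ev["type"] == "agent_end":
            finals[ev["agent"]] = ev["output"]
    return dict(panels), finals

panels, finals = demux(sample_events)
```

Because `agent_end` carries the complete output, `finals` lets you verify that the concatenated tokens match what each agent actually produced.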
`max_loops > 1` and `drift_detection` are not applied in streaming mode. Use `workflow.run()` if you need those.

Related
- SequentialWorkflow — the underlying architecture
- Agent Streaming — single-agent streaming with `run_stream`/`arun_stream`
- Streaming — full overview of every streaming mode