# Tutorial: Multi-Agent Workflows

Build workflows where multiple agents collaborate on complex tasks.
## What You'll Build

A multi-agent workflow where:

- A coordinator agent manages complex tasks
- Specialist agents handle specific subtasks
- Agents communicate and share results
- Parallel execution improves efficiency

Time: 25-30 minutes
## Prerequisites

- Fleet installed with an AI provider configured
- Familiarity with creating basic agents
## How Multi-Agent Workflows Work

Fleet supports two patterns for multi-agent workflows:
### 1. Task Spawning

A parent agent spawns child agents using the `spawn` tool. Children run independently and return results.
### 2. Inter-Agent Messaging

Agents communicate via `agent_message`, sending requests and receiving responses.
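Conceptually, the two patterns differ in lifecycle: a spawned child runs to completion and returns once, while messaging is request/response between long-lived agents. The sketch below uses hypothetical Python stand-ins for both tools (these are not Fleet's real signatures) just to contrast the shapes:

```python
# Hypothetical stand-ins for Fleet's spawn and agent_message tools,
# sketched only to contrast the two collaboration patterns; these are
# not Fleet's real signatures.

def spawn(agent: str, prompt: str) -> str:
    """Pattern 1: run a child agent to completion and collect one result."""
    return f"[{agent}] result for: {prompt}"

def agent_message(agent_id: str, message: str) -> str:
    """Pattern 2: send a request to a long-lived agent and await its reply."""
    return f"[{agent_id}] reply to: {message}"

# Task spawning: the child runs independently and hands back a single result.
report = spawn(agent="security-reviewer", prompt="Review src/api/")

# Messaging: one turn in an ongoing conversation with a running expert.
advice = agent_message(agent_id="database-expert", message="Which index helps email lookups?")
```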
## Part 1: Task Spawning

Create a coordinator that delegates to specialists.
### Step 1: Create Specialist Agents

First, create agents for specific tasks.
**Security Reviewer** (`security-reviewer`):

```
You are a security specialist. Review code for:
- Injection vulnerabilities
- Authentication issues
- Data exposure risks
- Insecure configurations

Be thorough and specific. Return a structured report.
```
**Performance Analyst** (`performance-analyst`):

```
You are a performance specialist. Analyze code for:
- Inefficient algorithms
- Memory leaks
- N+1 query patterns
- Caching opportunities

Provide specific recommendations with estimated impact.
```
**Documentation Writer** (`doc-writer`):

```
You are a documentation specialist. For the given code:
- Write clear function/class documentation
- Create usage examples
- Document edge cases and limitations

Output in the project's documentation format.
```
### Step 2: Create the Coordinator

Create a "Code Review Coordinator" agent:
```
You are a code review coordinator. For comprehensive reviews:

## Workflow
1. Analyze the scope of the review request
2. Spawn specialist agents in parallel:
   - security-reviewer for security analysis
   - performance-analyst for performance review
   - doc-writer for documentation gaps
3. Wait for all results
4. Synthesize findings into a unified report
5. Prioritize issues and recommendations

## Spawning Agents
Use the spawn tool to run specialists:
- spawn(agent: "security-reviewer", prompt: "Review security for: [files]")
- spawn(agent: "performance-analyst", prompt: "Analyze performance of: [files]")

Spawn agents in parallel when their tasks are independent.

## Report Format

### Executive Summary
Brief overview of findings.

### Critical Issues
Must-fix items from all specialists.

### Recommendations
Prioritized list of improvements.

### Detailed Reports
Full reports from each specialist.
```
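The coordinator's fan-out-and-synthesize flow can be sketched in plain Python. Here the blocking `spawn` function is a hypothetical stand-in for Fleet's tool, not its real API; what matters is that independent specialists run concurrently and their reports are merged afterward:

```python
from concurrent.futures import ThreadPoolExecutor

def spawn(agent: str, prompt: str) -> str:
    # Hypothetical stand-in for Fleet's spawn tool; a real call would
    # block until the child agent finishes and return its report.
    return f"[{agent}] findings"

SPECIALISTS = ["security-reviewer", "performance-analyst", "doc-writer"]

def coordinate(files: str) -> str:
    # Fan out: independent specialists run in parallel.
    with ThreadPoolExecutor() as pool:
        futures = {
            name: pool.submit(spawn, agent=name, prompt=f"Review: {files}")
            for name in SPECIALISTS
        }
        results = {name: f.result() for name, f in futures.items()}
    # Synthesize: merge the specialist reports into one document.
    sections = "\n".join(f"### {name}\n{body}" for name, body in results.items())
    return "## Executive Summary\n(synthesized overview)\n\n" + sections

report = coordinate("src/api/")
```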
### Step 3: Configure Subagent Access

In the coordinator's AGENT.md:
```yaml
---
name: code-review-coordinator
description: Coordinates comprehensive code reviews with specialist agents
allowed_subagents:
  - security-reviewer
  - performance-analyst
  - doc-writer
permissions:
  execution_tier: read_only
---
```
### Step 4: Test the Workflow

Ask the coordinator:

```
Perform a comprehensive review of src/api/
```

The coordinator will spawn the specialists and synthesize their results.
## Part 2: Parallel Research

Use multiple agents to research different aspects simultaneously.
### Step 1: Create Research Agents

**API Researcher**:

```
You research API documentation and best practices.
Given a topic, find relevant patterns, examples, and recommendations.
```

**Codebase Analyst**:

```
You analyze existing codebases for patterns and conventions.
Given a topic, find how it's currently implemented and suggest improvements.
```

**Standards Reviewer**:

```
You research industry standards and compliance requirements.
Given a topic, identify relevant standards and requirements.
```
### Step 2: Create the Research Coordinator

```
You are a research coordinator. For technical decisions:

## Workflow
1. Identify the key research questions
2. Spawn researchers in parallel:
   - api-researcher for external patterns
   - codebase-analyst for internal patterns
   - standards-reviewer for compliance needs
3. Synthesize findings into actionable recommendations
4. Present trade-offs clearly

## Output

### Question
Restate the technical question.

### Research Summary
Key findings from each researcher.

### Recommendation
Synthesized recommendation with rationale.

### Trade-offs
What we gain and lose with each option.
```
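The first two workflow steps, breaking the topic into questions and assigning them, amount to a routing table from question category to researcher. A minimal sketch (the researcher names come from this tutorial; the routing keys and logic are illustrative, not Fleet behavior):

```python
# Steps 1-2 of the workflow above reduce to a routing table: each kind of
# research question goes to the researcher whose focus matches it. The
# category keys here are illustrative, not Fleet behavior.

ROUTES = {
    "external": "api-researcher",
    "internal": "codebase-analyst",
    "compliance": "standards-reviewer",
}

def assign(questions: dict[str, str]) -> dict[str, str]:
    # Map each question to the researcher responsible for its category.
    return {ROUTES[kind]: question for kind, question in questions.items()}

plan = assign({
    "external": "How do mature public APIs version their endpoints?",
    "internal": "How does our codebase expose resources today?",
    "compliance": "Do any standards constrain the API surface?",
})
```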
## Part 3: Inter-Agent Communication

For ongoing collaboration, use `agent_message`.
### Step 1: Create a Domain Expert

Create a "Database Expert" agent:

```
You are a database expert. You can answer questions about:
- Schema design
- Query optimization
- Indexing strategies
- Migration planning

When other agents message you, provide concise, actionable advice.
```
### Step 2: Use from Another Agent

Another agent can consult the expert:

```
When you need database advice:
1. Use agent_list() to find available experts
2. Use agent_message(agent_id, question) to ask
3. Wait for the response
4. Apply the advice to your task
```

Example:

```
agent_message(agent_id="database-expert", message="What index would optimize this query: SELECT * FROM users WHERE email = ?")
```
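The discover-ask-wait sequence above can be sketched as one small helper. Both `agent_list` and `agent_message` are stubbed here as hypothetical stand-ins for Fleet's tools so the flow is runnable in isolation:

```python
# Hypothetical stand-ins for Fleet's agent_list and agent_message tools,
# stubbed so the discover -> ask -> wait flow is runnable in isolation.

def agent_list() -> list[dict]:
    return [{"id": "database-expert", "description": "Schema and query advice"}]

def agent_message(agent_id: str, message: str) -> str:
    return f"[{agent_id}] Add an index on users(email)."

def consult(topic: str, question: str) -> str:
    # Step 1: discover agents whose description matches the topic.
    experts = [a for a in agent_list() if topic in a["description"].lower()]
    if not experts:
        return "no expert available"
    # Steps 2-3: ask the first match and wait for the reply.
    return agent_message(agent_id=experts[0]["id"], message=question)

advice = consult("query", "What index would optimize: SELECT * FROM users WHERE email = ?")
```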
### Step 3: Configure Permissions

The calling agent needs access to the messaging tools:

```yaml
tools:
  - agent_list
  - agent_message
  - spawn
```
## Part 4: Validation Pipeline

Chain agents for sequential validation.
### Step 1: Define Pipeline Stages

Create agents for each stage.

**Code Generator**:

```
Generate code based on requirements.
Output clean, tested, documented code.
```

**Test Generator**:

```
Given code, generate comprehensive tests.
Cover the happy path, edge cases, and error conditions.
```

**Reviewer**:

```
Review code and tests for issues.
Approve or request changes.
```
### Step 2: Create the Pipeline Coordinator

```
You orchestrate a code generation pipeline:

## Pipeline Stages
1. **Generate**: Spawn code-generator with requirements
2. **Test**: Spawn test-generator with the generated code
3. **Review**: Spawn reviewer with code and tests
4. **Iterate**: If the reviewer requests changes, return to step 1 with the feedback

## Quality Gates
- Code must pass linting
- Tests must achieve 80% coverage
- The reviewer must approve

Continue iterating until all gates pass or the maximum number of iterations is reached.

## Output
Final approved code and tests, with a review summary.
```
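The iterate-until-approved loop with a cap might look like this in plain Python. The `spawn` stub and the `APPROVED` verdict convention are assumptions for illustration, not Fleet's actual behavior:

```python
# Sketch of the generate -> test -> review loop with a max-iteration cap.
# The spawn stub and the "APPROVED" verdict convention are assumptions
# for illustration, not Fleet's actual tool behavior.

MAX_ITERATIONS = 3

def spawn(agent: str, prompt: str) -> str:
    return "APPROVED" if agent == "reviewer" else f"[{agent}] output"

def run_pipeline(requirements: str) -> str:
    feedback = ""
    for _ in range(MAX_ITERATIONS):
        # Stage 1: generate (or regenerate with reviewer feedback appended).
        code = spawn(agent="code-generator", prompt=requirements + feedback)
        # Stage 2: generate tests against the produced code.
        tests = spawn(agent="test-generator", prompt=code)
        # Stage 3: review; approval exits the loop.
        verdict = spawn(agent="reviewer", prompt=code + "\n" + tests)
        if verdict.startswith("APPROVED"):
            return code
        feedback = "\nReviewer feedback: " + verdict
    raise RuntimeError("Quality gates not met within the iteration limit")

approved = run_pipeline("Build a CSV parser")
```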
## Best Practices

### Keep Specialists Focused

Each agent should do one thing well:

```
# Good: focused specialist
You are a security reviewer. Focus only on security issues.

# Bad: jack of all trades
You review security, performance, style, and documentation.
```
### Limit Subagent Access

Restrict which agents can be spawned:

```yaml
# Only these agents can be spawned
allowed_subagents:
  - security-reviewer
  - performance-analyst
```
### Handle Failures Gracefully

If a spawned agent fails:

1. Log the error
2. Retry once
3. Continue with the results that are available
4. Note the gap in your report
### Use Parallel Spawning

When tasks are independent, spawn in parallel:

```
# Good: parallel spawning
Spawn security-reviewer, performance-analyst, and doc-writer simultaneously.

# Bad: sequential spawning when not needed
First spawn security-reviewer, wait for results, then spawn performance-analyst...
```
### Manage Long-Running Subagents

Long-running subagents can slow down workflows. Configure compaction so long conversations stay manageable:

```yaml
compaction_strategy: summarization
```
## Complete Example: Research Team

### Coordinator AGENT.md
```markdown
---
name: research-coordinator
description: Coordinates multi-perspective research on technical topics
model:
  provider: anthropic
  model: claude-sonnet-4-20250514
allowed_subagents:
  - api-researcher
  - codebase-analyst
  - standards-reviewer
parameters:
  - name: topic
    type: string
    required: true
    description: Research topic or question
permissions:
  execution_tier: read_only
compaction_strategy: summarization
---

You are a research coordinator. Research the given topic:

## Process
1. Break the topic into research questions
2. Assign questions to the appropriate researchers
3. Spawn the researchers in parallel
4. Synthesize their findings
5. Present unified recommendations

## Researchers Available
- **api-researcher**: External patterns, documentation, best practices
- **codebase-analyst**: Internal patterns, existing implementations
- **standards-reviewer**: Compliance, industry standards

## Output Format

### Topic

### Key Questions
- Question 1
- Question 2

### Findings
Summary from each researcher.

### Recommendation
Synthesized recommendation with confidence level.

### Next Steps
Concrete action items.
```
### Running the Team

```shell
fleet run research-coordinator --param topic="Should we use GraphQL or REST for our new API?"
```