
Agentic AI Architecture

Engineering & Development

Multi-Agent Orchestration Design
Designing agentic AI systems with orchestrator and specialist agents
Design a multi-agent orchestration architecture for: [complex task or domain]

System context:
- Primary use case: [what the system does]
- Data sources: [APIs, databases, external services]
- User interaction model: [chat, dashboard, batch, API]
- Latency requirements: [real-time, near-real-time, async]

Design the architecture:
1. Lead/orchestrator agent — what model, what system prompt, what routing logic
2. Specialist agents — define each with role, scoped tools, model tier (capable vs fast)
3. Tool definitions — for each agent, what external tools it can call and with what parameters
4. Context passing strategy — how context flows between lead and specialists
5. Guardrails — what the lead is NOT allowed to do (e.g., no ad-hoc SQL, no hallucinated citations)
6. Fallback behavior — what happens when a specialist fails or returns empty
7. Answer trace / observability — how users see which agents contributed

Provide concrete system prompts for the orchestrator and at least two specialists. Include the tool schema definitions.
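When filling in the tool schema portion of this prompt, it helps to have a concrete target shape in mind. Below is a minimal sketch of one tool in Anthropic's tool-use format (`name`, `description`, `input_schema`), plus a small validation helper; the tool name, fields, and helper are illustrative assumptions, not part of any specific system.

```python
# Hypothetical "search_docs" tool in Anthropic's tool-use structure.
# The tool name, description, and parameters are illustrative.
SEARCH_DOCS_TOOL = {
    "name": "search_docs",
    "description": "Full-text search over the internal documentation index. "
                   "Returns the top matching passages with source IDs.",
    "input_schema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search terms"},
            "max_results": {"type": "integer", "minimum": 1, "maximum": 20,
                            "description": "Number of passages to return"},
        },
        "required": ["query"],
    },
}

def validate_tool_call(tool: dict, args: dict) -> list[str]:
    """Return a list of validation errors for a proposed tool call
    (an empty list means the call is well-formed)."""
    schema = tool["input_schema"]
    errors = [f"missing required field: {f}"
              for f in schema.get("required", []) if f not in args]
    for key in args:
        if key not in schema["properties"]:
            errors.append(f"unknown field: {key}")
    return errors
```

Validating arguments before dispatch is one simple way to implement the guardrail item above: the orchestrator can reject a malformed call instead of forwarding it to a specialist.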

Try this prompt in:

ChatGPT, Claude, and Perplexity will open with the prompt pre-filled. For Gemini, you'll need to paste the prompt manually.

Specialist Agent Prompt Engineering
Writing scoped sub-agent prompts for multi-agent pipelines
Create a focused specialist agent prompt for: [agent role/domain]

Agent context:
- Part of a multi-agent system for: [overall system purpose]
- This agent's specialty: [specific domain]
- Available tools: [list of tools this agent can call]
- Input: [what the orchestrator passes to this agent]
- Expected output format: [structured JSON, prose, table, etc.]

Write the specialist system prompt that:
1. Defines the agent's identity and expertise boundary
2. Lists specific responsibilities (what it MUST do)
3. Lists explicit constraints (what it must NOT do)
4. Specifies when and how to use each available tool
5. Defines the output schema with required fields
6. Includes error handling — what to return if tools fail or data is missing
7. Sets tone and precision level appropriate to the domain

Also provide:
- 3 example input/output pairs
- Edge cases the agent should handle gracefully
- A validation checklist for the agent's output quality
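The seven-part structure above can be assembled mechanically. As a sketch (the section wording and example values are assumptions, adapt them per agent), a small builder that enforces explicit MUST / MUST NOT sections, an output schema, and a defined error response:

```python
# Minimal assembler for a scoped specialist system prompt.
# Section phrasing is illustrative; adapt per agent and domain.
def build_specialist_prompt(role: str, must: list[str], must_not: list[str],
                            output_schema: str, on_error: str) -> str:
    """Compose a specialist prompt with identity, responsibilities,
    constraints, output schema, and error-handling sections."""
    lines = [f"You are a {role}. Stay strictly within this specialty.",
             "", "Responsibilities:"]
    lines += [f"- MUST: {r}" for r in must]
    lines += ["", "Constraints:"]
    lines += [f"- MUST NOT: {c}" for c in must_not]
    lines += ["", f"Output format: {output_schema}",
              f"If a tool fails or data is missing: {on_error}"]
    return "\n".join(lines)
```

Generating prompts from structured fields like this also makes the validation checklist easier to automate, since each section can be checked for presence independently.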


Tool-Augmented Agent Design
Designing agents that interact with external tools and APIs
Design a tool-augmented AI agent for: [task requiring external data]

Agent purpose: [what it helps users do]
External sources: [APIs, databases, search engines, file systems]

For each tool, define:
- Tool name and description (as it appears in the function schema)
- Parameters with types, descriptions, and validation rules
- Expected response shape and how the agent should interpret results
- Rate limits, caching strategy, and fallback behavior
- When the agent should vs. should not call this tool

Design the routing logic:
- How the agent decides which tool to call based on user input
- How it chains multiple tool calls for multi-step questions
- How it synthesizes results from parallel tool calls
- How it handles conflicting data from different sources

Include the complete function/tool definitions in OpenAI or Anthropic tool schema format.


Agent Validation & Testing Strategy
Quality assurance for multi-agent AI systems
Create a validation and testing strategy for this multi-agent system:

System: [description of the agentic system]
Agents: [list orchestrator + specialist agents]
Tools: [external tools/APIs the agents can call]
Critical requirements: [accuracy, latency, safety constraints]

Design tests for:
1. Individual agent behavior — does each specialist produce correct output for known inputs?
2. Orchestrator routing — does the lead route to the right specialist for each query type?
3. Tool call correctness — are tool parameters valid and results correctly parsed?
4. End-to-end flows — does the full pipeline produce correct answers for representative questions?
5. Guardrail enforcement — confirm agents refuse out-of-scope requests
6. Failure modes — test tool timeouts, empty results, malformed API responses
7. Answer quality — rubric for evaluating completeness, accuracy, and citation correctness

Provide:
- Test case templates with input, expected behavior, and pass/fail criteria
- A scoring rubric for answer quality (1-5 scale with anchors)
- Recommended eval tooling and CI integration approach
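A test-case template for item 2 (orchestrator routing) can be as simple as a table of (query, expected specialist, rationale) triples plus a runner. The cases, agent names, and `route` signature below are illustrative assumptions; `route` stands in for whatever routing function your orchestrator exposes.

```python
# Sketch of a routing test-case table: pass = the orchestrator routes
# the query to the expected specialist. All cases are illustrative.
ROUTING_CASES = [
    # (query, expected_specialist, rationale)
    ("What was Q3 revenue?", "analytics_agent", "numeric/BI question"),
    ("Summarize the refund policy", "docs_agent", "policy lookup"),
    ("Delete all user records", "refuse", "out-of-scope destructive request"),
]

def evaluate_routing(route, cases=ROUTING_CASES) -> dict:
    """Run each case through the routing function and report pass/fail
    counts plus details for every failure."""
    results = {"passed": 0, "failed": []}
    for query, expected, rationale in cases:
        actual = route(query)
        if actual == expected:
            results["passed"] += 1
        else:
            results["failed"].append({"query": query, "expected": expected,
                                      "actual": actual, "why": rationale})
    return results
```

The same table-plus-runner shape extends to guardrail tests (expected = "refuse") and failure-mode tests (inject a tool timeout, expect a defined fallback), which keeps the whole suite CI-friendly.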


Context Routing & Memory Strategy
Designing context flow between agents in orchestrated systems
Design a context routing and memory strategy for: [multi-agent system description]

Agents in the system: [list agents with roles]
Conversation model: [single-turn, multi-turn, long-running session]
Context sources: [user message, tool results, previous agent outputs, session history]

Define:
1. What context the orchestrator passes to each specialist (full vs. filtered vs. summarized)
2. How specialist outputs are accumulated (append, merge, replace)
3. Whether specialists can see each other's outputs (sequential accumulation vs. isolated parallel)
4. Session memory approach — what persists across turns and what is discarded
5. Context window management — how to handle prompts that approach token limits
6. Metadata and tracing — what audit fields travel with each context handoff

Provide concrete examples:
- A multi-turn conversation showing context flow at each step
- A scenario where context filtering prevents a specialist from being confused by irrelevant data
- A scenario where accumulated context from Agent A improves Agent B's output
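One lightweight way to realize items 1 and 6 together is to wrap every orchestrator-to-specialist handoff in an envelope that carries both the filtered context and the audit fields. The field names below are assumptions for illustration, not a standard.

```python
from dataclasses import dataclass, field
import time
import uuid

# Sketch of a context-handoff envelope for orchestrator -> specialist
# calls. Field names and the tag-based filter are illustrative.
@dataclass
class ContextHandoff:
    task: str                    # the specialist's scoped instruction
    context: list[str]           # filtered/summarized context items
    source_agent: str = "orchestrator"
    trace_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    created_at: float = field(default_factory=time.time)

def filter_context(items: list[dict], topic: str) -> list[str]:
    """Pass through only items tagged with the specialist's topic, so
    irrelevant session history never reaches it."""
    return [i["text"] for i in items if topic in i.get("tags", [])]
```

Because every handoff carries a `trace_id`, the answer-trace requirement from the orchestration prompt above falls out naturally: log the envelope at each hop and you can reconstruct which agents saw what.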
