You have come a long way. In Chapter 1, you saw the problem: your AI chat resets every session, loses context, and cannot run tools on its own. In Chapter 3, you defined two profiles — an SEO manager and a research specialist — and learned how profiles, skills, memory, toolsets, gateway, and cron combine into a working agent. In Chapter 8, you watched the research specialist run through the agent loop to find keywords, with human approval at the judgment gate. In Chapter 9, you wrote skills for both roles and learned what makes one agent sharper than another. In Chapter 10, you expanded to four profiles on a kanban board and learned how to coordinate multiple agents without file conflicts. In Chapter 11, you added memory — style rules that persist across sessions, competitive knowledge that compounds over time. In Chapter 12, you automated the weekly cycle with cron jobs that run keyword scans on Monday morning and deliver results to your Slack channel.
This chapter is the capstone. It brings everything together into a complete, practical agent team design. You will see which roles should be durable profiles and which should be subagents. You will understand how PR-only publishing works — agents draft, but only you decide what merges. And you will see two configurations side by side: a minimum viable setup that gets you started with one or two agents, and a mature setup that runs a full team with scheduled scans, shared memory, and publishing safeguards.
By the end, you will have a complete blueprint for your own SEO agent team — and the principles to apply the same design thinking to any other workflow.
Your SEO workflow needs seven distinct roles. Some you have already built in prior chapters. Some are new — but they follow the same design principles you have been applying since Chapter 3: narrow scope, clear output contracts, matched toolsets, and judgment gates where human decisions matter.
The research specialist finds keyword ideas using web search. It runs the keyword research skill — search, filter, score, save — and writes results to keywords.md. It carries memory about which sources work for your niche and which filters you prefer. Toolsets: web search, file read/write.
The SERP analyst reads the keyword file, then examines the top-ranking pages for each target keyword. It reports on what the top results cover, what gaps exist, and what formats rank well, and writes its findings to serp-analysis.md. It carries memory about competitor patterns and SERP features in your niche. Toolsets: web search, file read/write.
The content writer reads the keyword file and the SERP analysis, then drafts the article following the content brief skill and writes the draft to draft.md. It does not publish — it waits for review. It carries memory about your brand voice, word count preferences, and structural patterns. Toolsets: file read/write, web search (for fact-checking).
The editor reads the draft and checks it for factual accuracy, structural coherence, grammar, and adherence to the style rules stored in memory. It returns a clean version or a list of revision notes to the writer. It does not publish — it only reviews. It carries memory about recurring quality patterns and your editorial preferences. Toolsets: file read/write.
The SEO manager is the coordination and judgment layer. It creates tasks on the kanban board, reviews keyword selections, content briefs, and final drafts, and approves or blocks work. It holds your comprehensive style rules, publishing standards, and approval criteria in memory. Nothing publishes without the manager's sign-off — and the manager only signs off when you say so. Toolsets: file read/write, kanban.
Publishing support formats the approved draft for your publishing platform and creates a pull request with the formatted file — but does not merge it. The PR sits open for your final review. The role also handles metadata (title, description, tags), internal linking, and image placeholder insertion. Toolsets: file read/write, code tools (for PR creation).
The performance reviewer checks whether your published articles are ranking for their target keywords, reporting gains, drops, and new opportunities. It runs weekly (or on demand) and delivers a performance summary. It carries memory about which articles tend to fluctuate, which keywords are volatile, and what ranking patterns look like in your niche. Toolsets: web search, file read/write.
Notice the progression. In Chapter 3, you started with two profiles. In Chapter 10, you expanded to four. Now you have seven roles — but not all of them need to be profiles. Some should be subagents. Let us look at which is which.
In Chapter 10, you learned the rule: profiles for roles that repeat and compound, subagents for one-off tasks that need fresh context. Here is how that rule applies to your seven roles:
The practical principle: start with fewer profiles and add more as the workflow stabilizes. Every profile adds maintenance overhead — memory hygiene, skill updates, toolset configuration. A five-profile team is easier to manage than a seven-profile team. Add the sixth and seventh only when the first five are running smoothly and the gaps are visible.
In Chapter 11, you learned that each profile has its own isolated memory. The research specialist does not see the SEO manager's style rules. The content writer does not see the SERP analyst's competitive data. This isolation keeps each agent focused — but it also means some information needs to be duplicated across profiles, or shared through a different mechanism.
There are two ways to share knowledge across profiles:
Add the same rule to each profile's memory separately. If you want every agent to know that your niche is education consulting, you add that fact to the research specialist's MEMORY.md, the SERP analyst's MEMORY.md, the content writer's MEMORY.md, and so on. This is redundant but reliable — each agent always has the rule in context, regardless of whether the shared file loads correctly.
Create a context file in the working directory (like AGENTS.md or SOUL.md) that all profiles read at session start. This file holds project-level information: niche, brand voice summary, content calendar, and publishing standards. Each profile sees the same shared rules without you duplicating them. The trade-off: context files compete for the same context window space as skills and memory, so keep the shared file compact.
A practical split: put broad identity rules in a shared context file (your niche, your brand voice summary, your publishing platform). Put role-specific preferences in per-profile memory (the research specialist's filter rules, the editor's grammar checklist, the SEO manager's detailed approval criteria). The shared file gives every agent the same baseline. Per-profile memory gives each agent its own expertise.
One rule that should live in every profile's memory, not just the shared file: the publishing restriction. Every agent should carry the instruction "never publish or merge without explicit human approval" in its own memory. This is a safety net — even if the shared context file fails to load, the publishing restriction is still in each agent's context.
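This split can be sketched as a compact shared context file. The contents below are illustrative, using the education-consulting niche from the example above; your own file would carry your actual niche, voice, and platform:

```markdown
<!-- AGENTS.md — shared context, read by every profile at session start -->
# Project context
- Niche: education consulting
- Brand voice: practical, evidence-first, no hype
- Publishing platform: static site; content goes live only via merged PR
- Publishing rule: never publish or merge without explicit human approval
```

Each profile's own MEMORY.md would also repeat the last rule — the publishing restriction — so it stays in context even if this shared file fails to load.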
In Chapter 9, you wrote "Do NOT publish. Wait for user approval" into the SEO manager's content brief skill. In Chapter 10, the SEO manager became the judgment gate — nothing moves to production without its sign-off, and the manager only signs off when you tell it to. In Chapter 12, you learned that publishing is in the "never automate" category: merging code into protected branches is a human decision, not agent territory.
This chapter formalizes that principle into a concrete publishing workflow: PR-only publishing. Every piece of content your team produces goes through three gates before it reaches your live site:
Gate one: editorial review. The editor (or the SEO manager, in a minimum setup) reviews the draft for accuracy, style compliance, and structural quality. If the draft passes, it moves to publishing. If not, it goes back to the writer with revision notes.
Gate two: the pull request. The publishing support role (subagent or profile) formats the approved draft and creates a pull request. The PR contains the formatted content file, metadata, and any supporting changes. The PR is open — not merged.
Gate three: your merge. You review the PR. You check the formatting, the metadata, the title. You decide whether to merge. The agent never merges on its own — not through cron, not through the approval system, not through skill instructions. The merge button is yours.
Why not allow auto-merge? Because publishing is a one-way action. Once content is live, it is indexed by search engines, seen by readers, and cached by aggregators. A mistakenly published draft with factual errors, wrong metadata, or off-brand messaging is expensive to undo. The PR-only workflow makes every publish an explicit, deliberate decision by you. The agents handle everything up to the PR — formatting, metadata, file creation. You handle the merge.
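As a concrete sketch of what the publishing step produces, here is a hypothetical helper that builds the commands a publishing agent might run, assuming a git-based site and the GitHub CLI. The branch, file, and title are invented; the point is what is absent — no merge command appears anywhere:

```python
from pathlib import Path


def pr_commands(branch: str, article: Path, title: str) -> list[list[str]]:
    """Build the shell commands a publishing agent would run to open a PR.

    Deliberately stops at `gh pr create`: the PR is opened, never merged.
    Merging stays with the human reviewer.
    """
    return [
        ["git", "checkout", "-b", branch],
        ["git", "add", str(article)],
        ["git", "commit", "-m", f"Draft: {title}"],
        ["git", "push", "-u", "origin", branch],
        ["gh", "pr", "create", "--title", title,
         "--body", "Agent-drafted content. Human review required."],
    ]


cmds = pr_commands("content/ai-agent-teams", Path("content/draft.md"),
                   "AI agent teams: a field guide")
# No command in the list ever merges the branch.
assert not any("merge" in part for cmd in cmds for part in cmd)
```

The design choice is structural: the agent's toolset simply has no path to a merge, so the restriction does not depend on the agent remembering an instruction.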
This is the same pattern from Chapter 12: automate the gathering, not the judgment. Cron jobs collect keyword data and deliver reports. The kanban board coordinates tasks. But the merge — the point where content goes from draft to live — stays with you.
You do not need seven roles to get value from Hermes. The minimum viable setup is the two profiles you defined back in Chapter 3: an SEO manager and a research specialist. No kanban board. No cron jobs. No SERP analyst, no dedicated writer, no editor, no performance reviewer. Just you, two agents, and a messaging channel.
Here is what the minimum setup looks like in practice:
The research specialist handles discovery. You message it on Slack or through the CLI: "Find 20 keyword ideas for AI agent teams content." It runs its keyword research skill, saves results to keywords.md, and returns a summary. You review the keywords. If they are good, you move to the next step. If not, you ask for refinements.
The SEO manager handles everything else. You tell it: "Draft a content brief based on the keyword file." It reads keywords.md, drafts a brief, saves it, and waits for your approval. You review the brief, approve it, and the manager drafts the article. You review the article. If it passes, you format and publish it yourself — or ask the manager to open a PR.
In this setup, the SEO manager plays three roles combined: strategist, writer, and reviewer. That is fine for starting out — the output will not be as sharp as a dedicated specialist for each role, but you will learn what your workflow actually needs before investing in more profiles.
The key word is manual. You trigger every step. You review every output. You decide when to proceed. No cron jobs, no unattended work, no kanban board. The two agents do what you ask, when you ask it, and wait for your input between steps. This is the safest way to learn — and the fastest way to discover which parts of the workflow you want to automate next.
The mature setup is what you build toward once the minimum viable setup is running smoothly and you can see the gaps. It adds five things: more specialized profiles, kanban coordination, cron jobs for the weekly cycle, per-profile memory for each role, and PR-only publishing with a dedicated publishing step.
The mature setup is not more complex for the sake of complexity. Each addition solves a specific problem you encountered in the minimum setup:
Problem: one SEO manager playing strategist, writer, and reviewer produces uneven drafts. Solution: split into dedicated writer, editor, and reviewer profiles, each with narrow scope and focused memory.
Problem: you trigger every keyword scan by hand. Solution: cron jobs run the scan automatically every Monday at 9 AM and deliver results to Slack.
Problem: multiple agents working the same files step on each other. Solution: the kanban board coordinates task assignment — the editor only starts after the writer marks the task complete.
Problem: your preferences and corrections vanish between sessions. Solution: memory files carry your rules forward — USER.md and MEMORY.md persist across sessions.
Problem: a draft could go live without your review. Solution: PR-only publishing — the agent opens a PR, you review and merge. No auto-merge, ever.
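The Monday 9 AM schedule mentioned above is standard cron notation: 0 9 * * 1 (minute 0, hour 9, any day of month, any month, weekday 1 = Monday). A minimal sketch of the matching logic a scheduler applies, with a hypothetical function name:

```python
from datetime import datetime


def matches_monday_9am(ts: datetime) -> bool:
    """True when ts falls in the cron slot `0 9 * * 1` (Mondays at 09:00)."""
    # datetime.weekday() returns 0 for Monday.
    return ts.weekday() == 0 and ts.hour == 9 and ts.minute == 0


assert matches_monday_9am(datetime(2024, 1, 1, 9, 0))       # 2024-01-01 was a Monday
assert not matches_monday_9am(datetime(2024, 1, 2, 9, 0))   # Tuesday: no run
```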
Here is what your week looks like with the mature setup running. This combines every component from the prior chapters into one coherent cycle:
Monday morning, cron fires the keyword research job. The research specialist loads its profile, keyword research skill, and memory (which sources work, which filters you prefer). It runs web searches, filters results, saves keywords.md, and delivers a summary to your Slack channel. Toolsets restricted to web search and file operations.
Immediately after, cron fires the SERP analysis job with context_from set to the keyword research job ID. The SERP analyst receives the keyword list in its prompt, analyzes top-ranking pages, saves serp-analysis.md, and delivers findings to Slack.
The SEO manager drafts a content brief based on both outputs and delivers it to Slack for your review. This is where you step in: you read the brief, approve the top keywords and the content angle, or ask for changes.
After your approval, the SEO manager creates kanban tasks for the content cycle: writing → editorial review → publishing. The content writer picks up the writing task, drafts the article, and marks the task complete. The editor picks up the review task, checks the draft against style rules, and either commits corrections or returns notes. The writer revises if needed. The SEO manager does a final review.
The approved draft goes to publishing support. The publisher formats the content, creates metadata, and opens a pull request. The PR sits open. You review the PR, check the formatting and metadata, and merge when you are satisfied. The content goes live only when you click merge.
On Friday, cron fires the performance review job. The performance reviewer checks ranking positions for your published articles, compares against prior weeks stored in memory, and delivers a performance summary to Slack. It reports gains, drops, and new keyword opportunities for the following week. If there is nothing notable, it returns [SILENT] — and you get no message, which is the "everything is fine" signal.
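The [SILENT] convention is simple to sketch: the delivery step drops any report the agent marked silent, so the absence of a message is itself the all-clear. The function names below are illustrative, not the system's actual API:

```python
def deliver(report: str, send_to_slack) -> bool:
    """Deliver a report unless the agent marked it [SILENT].

    Returns True when a message was actually sent.
    """
    if report.strip() == "[SILENT]":
        return False  # nothing notable: no message is the "everything is fine" signal
    send_to_slack(report)
    return True


sent = []
assert deliver("3 keywords gained positions this week", sent.append)
assert not deliver("[SILENT]", sent.append)
assert sent == ["3 keywords gained positions this week"]
```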
Notice the rhythm. Unattended work gathers data and delivers reports — you wake up Monday morning to keyword lists and SERP analyses already waiting in Slack. Attended work produces content and requires judgment — you review the brief, approve the draft, and merge the PR. Unattended work monitors performance — the Friday report tells you whether the week's effort paid off. The agents handle the routine. You handle the decisions. That is the design.
Also notice the consistency with everything you learned in prior chapters. The agent loop (Chapter 8) runs inside every profile. The skills (Chapter 9) define each agent's procedure. The kanban board (Chapter 10) coordinates the attended cycle. Memory (Chapter 11) carries forward accumulated knowledge. Cron (Chapter 12) handles the unattended cycle. And PR-only publishing keeps the final decision with you. Every component has a job, and the jobs fit together.
Whether you run the minimum viable setup or the mature one, the same six principles apply. These are the design rules that hold across every chapter in this guide:
Narrow roles. Each profile does one thing well. The research specialist finds keywords — it does not draft content. The editor reviews drafts — it does not run keyword searches. Narrowness keeps each agent's context focused and its output consistent. A generalist agent tries to do everything and does nothing well.
Skills for procedures, memory for facts. Skills tell the agent how to do its work (step by step, with output contracts). Memory tells the agent what it has learned (preferences, facts, patterns). Keeping them separate — procedures in skills, facts in memory — makes each agent easier to debug, update, and improve.
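In practice the separation might look like this; the steps and facts below are invented purely for illustration:

```markdown
<!-- skill excerpt (procedure): HOW to do the work -->
1. Search for candidate keywords.
2. Filter by relevance and difficulty.
3. Save results to keywords.md. Do NOT publish; wait for approval.

<!-- memory excerpt (facts): WHAT the agent has learned -->
- Preferred sources: industry forums over generic listicles.
- The user rejects keywords above the difficulty threshold they set.
```

When a step is wrong, you edit the skill; when a preference changes, you edit the memory — each fix lands in exactly one place.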
Judgment gates where human decisions matter. Every workflow has points where a human decision matters: which keywords to pursue, whether a draft is good enough, whether to publish. Skills encode these gates ("wait for approval"). The approval system catches dangerous actions. Output contracts make results reviewable. You never lose control of the final decision.
Restricted toolsets. Each profile gets only the tools it needs. The research specialist cannot run shell commands. The editor cannot publish. Restricting toolsets is not just a security measure — it keeps each agent from drifting into another agent's territory. If an agent does not have the tool, it cannot step outside its role.
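A toolset restriction is just an allowlist check at the tool-call boundary. A minimal sketch, with hypothetical profile and tool names mirroring the roles above:

```python
# Hypothetical per-profile toolset allowlists.
TOOLSETS = {
    "research-specialist": {"web_search", "file_read", "file_write"},
    "editor": {"file_read", "file_write"},
}


def call_tool(profile: str, tool: str) -> str:
    """Refuse any tool that is not in the profile's allowlist."""
    if tool not in TOOLSETS.get(profile, set()):
        raise PermissionError(f"{profile} has no access to {tool}")
    return f"{tool} ok"


assert call_tool("editor", "file_read") == "file_read ok"
try:
    call_tool("editor", "web_search")  # the editor cannot search the web
except PermissionError:
    pass
```

Because the check sits outside the agent, no prompt or skill wording can talk the system into granting a tool the profile was never given.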
Automate the gathering, not the judgment. Cron jobs and no-agent scripts handle data collection, health checks, and periodic reports. But the decisions — what to write about, whether the draft is ready, whether to merge the PR — stay with you. Unattended work is for the routine. Attended work is for the judgment. That division is what makes agent teams safe and useful.
Memory hygiene. Every session adds to memory — new preferences, new findings, new corrections. Over time, the accumulated context makes each agent sharper. But memory that grows without maintenance becomes noise. Review memory periodically, delete outdated facts, resolve contradictions, and keep files compact. Memory is useful only when it is accurate and relevant.
These six principles work for any agent team — not just SEO. If you were building a customer support team, the same rules apply: narrow roles (triage, research, response drafting), skills for procedures, judgment gates for sensitive replies, restricted toolsets (the triage agent does not send emails), automate the monitoring but not the response, and keep memory clean. The SEO example in this guide is one application. The principles are universal.
Results from any agent team depend on many factors outside any tool's control — the quality of your instructions, the reliability of your model provider, the competitiveness of your domain, and the consistency of your review process. No agent team guarantees better outcomes by itself. But a well-designed team, built on these principles, gives you a structured, reviewable, repeatable workflow — which is the foundation that better outcomes are built on.
You have been running the minimum viable setup (SEO manager + research specialist) for a month. The keyword research works well, but the SEO manager's drafts are inconsistent — sometimes great, sometimes off-brief. You think splitting the writer and editor into separate profiles would help. Before creating new profiles, what should you try first — and why?