In Chapter 5, you saw that Hermes runs two kinds of services: on-demand (CLI sessions you start and stop) and continuous (gateway, cron, and API server that run in the background). In Chapter 8, you walked through the agent loop — how a prompt becomes a tool call, how tool calls become results, and where you step in with approvals. In both chapters, you were in the loop: you sent the message, you reviewed the output, you decided what happened next.
But some work does not need you in the loop every time. A weekly keyword scan that runs every Monday at 9 AM. A watchdog script that checks whether your site is online every 15 minutes. A webhook that triggers your agent when a pull request lands on GitHub. These tasks run on a schedule or in response to an external event — and they run whether or not you are at your desk.
This chapter explains how Hermes handles unattended work: the cron scheduler for time-based tasks, the webhook system for event-based triggers, no-agent mode for script-only jobs, and the delivery system that sends results to you through the messaging platforms you already use.
Important boundary: only cron jobs and no-agent mode support true unattended work. On-demand sessions — CLI, gateway messages, API requests — are always human-in-the-loop. The agent waits for your input, processes it, and responds. Unattended work is a separate category with its own patterns and its own safeguards.
The cron scheduler is a background process that fires jobs at scheduled times. It is one of the continuous services from Chapter 5 — it runs whether or not you are actively chatting with your agent. When a job is due, the scheduler picks it up, runs it, saves the output, and optionally delivers the result to you.
Cron jobs are stored in a single file — ~/.hermes/cron/jobs.json — with atomic writes so the file never gets corrupted by a partial update. Each job has an ID, a schedule, a prompt, and optional settings for delivery, toolsets, and context.
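To make the shape concrete, here is a sketch of what one jobs.json entry might look like. The field names follow the settings discussed in this chapter, but the exact schema is illustrative, not a copy of the Hermes source:

```json
{
  "jobs": [
    {
      "id": "a1b2c3d4e5f6",
      "schedule": "0 9 * * 1",
      "prompt": "Run the weekly keyword scan and summarize new keywords.",
      "deliver": "slack",
      "enabled_toolsets": ["web_search", "file"],
      "context_from": null
    }
  ]
}
```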
Hermes supports four kinds of schedules:
One-shot from now (example: "30m", "2h", "1d"). "30m" means run once, 30 minutes from creation. Good for reminders and deferred tasks.
Recurring from now (example: "every 30m", "every 2h"). "every 2h" means run every two hours, starting from creation time. Good for periodic checks and reports.
Standard cron syntax (example: "0 9 * * 1"). "0 9 * * 1" means every Monday at 9:00 AM. This is the most flexible format — any schedule a standard cron utility supports, Hermes supports too.
One-shot at a specific date and time (example: "2026-06-15T10:00"). Good for scheduled launches, deadline reminders, and calendar-based tasks.
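If you do want to read cron syntax yourself, the five fields of the Monday-morning example break down like this:

```
0 9 * * 1
│ │ │ │ └── day of week (1 = Monday)
│ │ │ └──── month (* = every month)
│ │ └────── day of month (* = every day)
│ └──────── hour (9 = 09:00)
└────────── minute (0 = on the hour)
```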
You do not need to memorize cron syntax. Hermes understands natural-language scheduling: you can tell your agent "set up a weekly keyword scan every Monday at 9 AM" and it translates that into the right cron expression. The agent also scans the prompt for injection attempts at creation time — if someone tries to sneak instructions into a cron prompt, the scanner catches it before the job is saved.
# Create a cron job through the agent
You: Schedule a weekly keyword scan every Monday at 9 AM
using the research specialist profile. Deliver the
results to me on Slack.
Agent: Created cron job "weekly-keyword-scan"
Schedule: every Monday at 09:00
Profile: research-specialist
Deliver: slack
Job ID: a1b2c3d4e5f6
The job will run automatically every Monday at 9 AM.
Results will be delivered to your Slack channel.

The agent translates your natural-language request into a structured cron job. You can also create and manage jobs directly through the CLI or the cron API — but most people find it easier to tell the agent what they want and let it handle the configuration.
Not every scheduled task needs an AI model. A site uptime check is just a bash script. A disk space alert is a shell command. A data backup is a file copy operation. These tasks are fast, cheap, and deterministic — running a model to check whether a server is online would be wasteful.
No-agent mode solves this. When a cron job has no_agent set to true, the scheduler skips the entire agent loop. It runs the specified script, captures the output, and delivers it. No model call. No tool selection. No token cost. Just a script on a timer.
Three rules govern no-agent jobs:
1. Without an agent, the script IS the job. If no_agent is true but no script is set, the job cannot run and Hermes rejects it at creation time.
2. If the script produces no output (empty stdout), the job is considered successful but nothing is delivered. This is the "nothing to report" pattern — the watchdog ran, everything is fine, no alert needed.
3. If the script exits with a non-zero code, the output is delivered as an error alert. This is the "something is wrong" pattern — the watchdog found a problem and you need to know about it.
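The result-handling rules can be sketched as a small decision function. This is an illustration, not Hermes source; classify_result is a hypothetical name. (Rule 1, the script-required check, happens at creation time, so it does not appear here.)

```shell
# classify_result EXIT_CODE OUTPUT
# Maps a no-agent script's result to a delivery action.
classify_result() {
  local code="$1" output="$2"
  if [ "$code" -ne 0 ]; then
    echo "error-alert"       # rule 3: non-zero exit delivers an error alert
  elif [ -z "$output" ]; then
    echo "silent-success"    # rule 2: empty stdout, success, nothing delivered
  else
    echo "deliver"           # normal case: output goes to the configured channel
  fi
}
```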
Here is a practical example — a watchdog script that checks whether your website returns a healthy HTTP response:
#!/usr/bin/env bash
# File: ~/.hermes/scripts/site-watchdog.sh
# Checks whether the target site returns HTTP 200.
# If healthy: no output (silent — nothing to report).
# If unhealthy: outputs an alert message (delivered to you).
STATUS=$(curl -s -o /dev/null -w "%{http_code}" \
  https://your-website.com)

if [ "$STATUS" != "200" ]; then
  echo "⚠️ Site alert: your-website.com returned HTTP $STATUS (expected 200). Check immediately."
fi

When the site is healthy, the script produces no output — no message is delivered. When the site is down, the script prints an alert — and the scheduler delivers it to your configured channel. This is the classic watchdog pattern: silent when things are normal, loud when something is wrong.
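As a job definition, the watchdog might look roughly like this. The field names follow the settings discussed in this chapter (the "script" key in particular is an assumed name):

```json
{
  "id": "site-watchdog",
  "schedule": "every 15m",
  "no_agent": true,
  "script": "~/.hermes/scripts/site-watchdog.sh",
  "deliver": "telegram"
}
```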
A cron job that runs in the background and saves output to a local file is useful — but only if you remember to check the file. Most of the time, you want the results to come to you. That is what delivery does.
In Chapter 5, you saw the messaging gateway — the background process that connects your agent to more than twenty platforms. Delivery uses the same gateway infrastructure. When a cron job finishes, the scheduler can send the result to any platform you have configured: Telegram, Slack, Discord, WhatsApp, Signal, Email, and the rest.
Every cron job has a deliver setting that controls where the output goes:
Save output to disk only. No message is sent to any platform. Good for jobs where you check results manually or where the output feeds into another job.
Send the result back to the channel where the job was created. If you created the job on Telegram, the result goes to the same Telegram chat. This is the default when a job is created through a messaging platform.
Send the result to a named platform: "telegram", "slack", "discord", "email", and so on. You can also target a specific chat or thread: "telegram:-1001234567:17" delivers to thread 17 in a specific Telegram group.
Comma-separated: "slack,telegram" delivers to both platforms. You can also use "all" to deliver to every platform with a configured home channel.
There is also a silent marker. If the agent decides there is nothing worth reporting — no new keywords, no changes, no alerts — it can respond with the exact text [SILENT]. The scheduler sees this marker and skips delivery. The output is still saved locally for audit purposes, but no message lands in your Slack or Telegram. This prevents noise: you only get messages when there is something to act on.
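The gate itself is a simple string check. A sketch, with a hypothetical function name (remember that the output is still written to disk either way; only the message send is skipped):

```shell
# should_deliver OUTPUT
# Returns success (0) when the output should be sent to a channel,
# failure (1) when the agent signaled "nothing to report".
should_deliver() {
  [ "$1" != "[SILENT]" ]
}
```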
Sometimes one job produces data that another job needs. The research specialist finds keywords; the SERP analyst needs those keywords to start its analysis. You could wait for the first job to finish and manually start the second — but that defeats the purpose of unattended work.
The context_from setting chains jobs together. When a job has a context_from value, the scheduler loads the most recent output from the referenced job and injects it into the current job's prompt before the agent starts. The agent sees the prior results as context — just like you would paste them into a chat session.
This is how multi-step workflows run end-to-end without human intervention:
Job 1: The research specialist runs its keyword research skill every Monday at 9 AM. It saves the keyword list and delivers a summary to Slack. Job ID: a1b2c3d4e5f6.
Job 2: The SERP analyst runs at 10 AM, with context_from set to the keyword research job ID. When it starts, the keyword list from Job 1 is already in its prompt. It does not need to read a file or wait for a handoff — the data is there.
Job 3: The SEO manager runs at 11 AM, with context_from set to the SERP analysis job ID. It reads the analysis results and drafts a content brief for the week. It delivers the brief to Slack for your review.
Each job in the chain runs independently on its own schedule. The scheduler handles the data handoff — you do not need to coordinate file writes and reads between profiles. The output of Job 1 is saved to ~/.hermes/cron/output/, and Job 2 reads it from there when context_from is set.
The timing between jobs matters. Job 2 runs at 10 AM, one hour after Job 1. That gap gives the research specialist time to finish. If Job 1 has not completed by the time Job 2 starts, Job 2 loads whatever output exists — which might be empty or stale. Always leave enough buffer between chained jobs.
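Sketched as job definitions (illustrative schema, not the exact Hermes format), the first two links of such a chain look like this. The one-hour gap between the two schedules is the buffer:

```json
[
  {
    "id": "a1b2c3d4e5f6",
    "schedule": "0 9 * * 1",
    "prompt": "Run the weekly keyword research skill and save the keyword list.",
    "deliver": "slack"
  },
  {
    "id": "f6e5d4c3b2a1",
    "schedule": "0 10 * * 1",
    "prompt": "Analyze the SERPs for this week's keyword list.",
    "context_from": "a1b2c3d4e5f6",
    "deliver": "slack"
  }
]
```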
Cron handles time-based schedules — "run this every Monday." But some events are not on a schedule. A pull request lands on GitHub. A Stripe payment comes through. A form submission arrives. These are event-driven — they happen when they happen, and your agent should respond immediately.
Hermes handles event-driven triggers through its webhook platform. The webhook system runs an HTTP server that receives POST requests from external services. When a webhook arrives, the system validates the request, transforms the payload into an agent prompt, and runs the agent with the configured settings. The agent processes the event and responds.
Each webhook route defines:
Which event types to accept. The webhook filters incoming requests by event type (read from the request headers), so you can accept only the events you care about — like "pull_request" from GitHub or "payment_completed" from Stripe.
An HMAC secret for signature validation. Every incoming request is checked against this secret. If the signature does not match, the request is rejected. This prevents unauthorized third parties from triggering your agent.
A template string that formats the webhook payload into an agent prompt. The template can reference fields from the incoming JSON — so the agent receives a structured, readable prompt instead of raw JSON.
Optional skills to load for the agent when this webhook fires. A GitHub pull request webhook might load the code review skill. A Stripe webhook might load the payment processing skill.
Where to send the agent's response — back to the source (a GitHub comment on the PR) or to a messaging platform (Slack notification about the payment).
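The HMAC validation step can be sketched with openssl. This assumes a GitHub-style "sha256=&lt;hex&gt;" signature format; Hermes's exact header name and encoding may differ:

```shell
# verify_signature SECRET BODY RECEIVED_SIGNATURE
# Recomputes the HMAC-SHA256 of the raw request body and compares it to
# the signature sent by the external service. Production servers should
# use a constant-time comparison to avoid timing attacks.
verify_signature() {
  local secret="$1" body="$2" received="$3"
  local expected
  expected="sha256=$(printf '%s' "$body" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')"
  [ "$received" = "$expected" ]
}
```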
There is also a deliver_only option that skips the agent entirely. When set to true, the rendered prompt template is delivered directly — no LLM call, no agent loop. This is useful for push notifications and inter-system alerts where speed matters more than interpretation.
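Pulling the route elements together, a route definition might look roughly like this. The field names and the {{...}} template syntax are illustrative assumptions, not the exact Hermes schema:

```json
{
  "path": "/hooks/github",
  "events": ["pull_request"],
  "hmac_secret": "env:GITHUB_WEBHOOK_SECRET",
  "template": "Pull request {{payload.number}} was {{payload.action}} in {{payload.repository.full_name}}. Review the change and summarize it.",
  "skills": ["code-review"],
  "deliver": "slack",
  "deliver_only": false
}
```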
In Chapter 8, you saw the approval system: manual mode (the default) requires your explicit permission for dangerous actions, smart mode delegates each decision to an auxiliary LLM, and YOLO mode skips approvals entirely. The same system applies to cron jobs and webhook-triggered runs. But there is a critical difference: when a job runs unattended, you might not be at your desk to approve a dangerous action.
This means you need to design unattended jobs carefully. Here is how to think about it:
The pattern from Chapter 8 holds: use skills to encode judgment gates, use the approval system to catch dangerous tool calls, and use output contracts to make results reviewable. For unattended work, add a fourth principle: restrict the toolsets available to the cron job so the agent cannot reach tools it does not need.
Every cron job has an enabled_toolsets setting that limits which toolsets the agent can use during that job. A keyword scan only needs web search and file writing — so you disable shell execution, code execution, and every other toolset. The agent cannot call a dangerous tool because the tool is not in its schema. This is safer than relying on the approval system to catch a bad call when you are not around to approve or deny it.
In Chapter 9, you wrote the keyword research skill for the research specialist. In Chapter 10, you put the research specialist on the kanban board as a profile that runs every week. In Chapter 11, you added memory — the specialist remembers which sources work and which filters you prefer.
Now you automate the weekly cycle. Instead of manually telling the research specialist to start each Monday, you set up a cron job that fires on schedule. The specialist runs its keyword research skill, saves the results, and delivers a summary to your Slack channel — all without you lifting a finger.
Here is what the cron job looks like:
# Weekly keyword scan — created through the agent
You: Set up a weekly keyword scan for AI agent teams
content. Run it every Monday at 9 AM using the
research-specialist profile. Load the keyword-research
skill. Only enable the web-search and file toolsets.
Deliver the results to my Slack channel.
Agent: Created cron job "weekly-keyword-scan"
Schedule: 0 9 * * 1 (every Monday at 09:00)
Profile: research-specialist
Skill: keyword-research
Enabled toolsets: web_search, file
Deliver: slack
Job ID: a1b2c3d4e5f6

When Monday at 9 AM arrives, the scheduler fires the job. The research specialist loads its profile (system prompt, memory), its keyword research skill, and the two enabled toolsets. It runs the research procedure: search for keywords, filter by relevance, save the results, and produce a summary. The summary is delivered to your Slack channel.
Here is what the delivered message looks like:
Cronjob Response: weekly-keyword-scan
(job_id: a1b2c3d4e5f6)
-------------
Weekly keyword scan complete for "AI agent teams"
content cluster.
Top 5 new keywords this week:
1. "AI agent team setup guide" — high relevance,
low competition
2. "multi-agent workflow examples" — medium volume,
growing trend
3. "how to schedule AI agent tasks" — informational,
new this week
4. "agent teams for content operations" — niche,
low competition
5. "persistent AI agents vs chat tools" — comparison
angle, stable volume
Full keyword list saved to keywords.md (22 keywords).
No changes from last week's top 5.
To stop or manage this job, send me a new message
(e.g. "stop reminder weekly-keyword-scan").

The scheduler wraps the agent's response with a header identifying it as a cron delivery. The footer tells you how to manage the job. The agent's output sits in the middle — the same summary it would produce in a manual session.
Notice two things. First, the enabled_toolsets restriction: the research specialist only has web search and file tools. It cannot run shell commands, execute code, or modify files outside its scope. This is the fourth guardrail from the previous section — restrict toolsets for unattended jobs.
Second, the judgment gate still applies. The research specialist finds keywords and delivers a summary — but it does not start writing content. That is the SEO manager's job, and the SEO manager requires your review before drafting. The unattended part is the research; the judgment part is the handoff between research and content production. Cron handles the first; your approval handles the second.
If you want the full Monday morning pipeline to run end-to-end, you chain three jobs together:
Job 1: Runs the keyword research skill. Saves results. Delivers a summary to Slack. Job ID: a1b2c3d4e5f6.
Job 2: context_from: a1b2c3d4e5f6. Receives the keyword list from Job 1 in its prompt. Runs SERP analysis. Saves findings. Delivers an analysis summary to Slack. Job ID: f6e5d4c3b2a1.
Job 3: context_from: f6e5d4c3b2a1. Receives the SERP analysis from Job 2. Drafts a content brief. Delivers the brief to Slack — and waits for your review before anything publishes.
By noon on Monday, you have a keyword list, a SERP analysis, and a content brief waiting for your review in Slack. The unattended part gathered the data. The attended part — your review of the brief and your decision to proceed with writing — still requires you.
This is the boundary of unattended work in Hermes. Cron and no-agent mode handle the gathering and reporting. The approval system, skill instructions, and your judgment handle the decisions. The agent runs on its own when the task is well-defined and low-risk. You step in when the task needs founder judgment — which is exactly where a human should be.
Here is the practical checklist for creating your first cron job. Each step ensures the job is safe, functional, and delivers results where you need them.
You set up a weekly keyword scan that runs every Monday at 9 AM and delivers results to Slack. The scan runs successfully for three weeks. On the fourth week, you get no message. What are the three most likely causes, and how would you diagnose each?
Hermes agents do not "work on their own" in the general sense. On-demand sessions — CLI, messaging, API — are human-in-the-loop. You send a message, the agent responds, you decide what happens next. That is the default mode, and it covers most workflows.
Unattended work is a separate category with specific mechanisms: cron for time-based scheduling, no-agent mode for script-only watchdogs, webhooks for event-driven triggers, and delivery for getting results to you. Each mechanism has its own safeguards — restricted toolsets, the approval system, skill-encoded judgment gates, and the silent marker for noise reduction.
The design principle: automate the gathering, not the judgment. Cron jobs collect data and deliver reports. No-agent scripts check health and alert on problems. Webhooks respond to events and route information. But the decisions — what to publish, what to change, what to pursue — those stay with you. The agent handles the routine. You handle the judgment. That division is what makes unattended work safe and useful.