In Chapter 3, you saw the mental model: profiles, skills, memory, toolsets, gateway, and cron. Each component has a clear role. But what actually runs those components? Where do the files live? How does the agent reach the model that powers it?
This chapter goes one layer deeper — into the technology that makes the mental model real. The good news: you do not need to understand most of this to use Hermes effectively. Think of it like driving a car. You use the steering wheel, the pedals, and the dashboard. The engine, transmission, and fuel system are underneath — and knowing they exist helps you reason about maintenance, but you do not need to rebuild them.
If you are the kind of person who likes to know what is under the hood, this chapter is for you. If not, skim it and come back when you need to debug something or configure a provider.
Hermes is a Python application. When you run the agent, you are running a Python program. It requires Python 3.11 or newer; 3.11 is the minimum supported version.
What does that mean in practice? It means the installer handles Python for you. You do not need to install Python yourself, pick a version, or manage a virtual environment. The Hermes installer sets up everything it needs: Python, a package manager, and all the libraries Hermes depends on.
Python is also relevant if you want to write custom tools later. Hermes tools are written in Python, and the tool registry discovers them automatically. But that is an advanced topic — you can use the 70+ built-in tools without writing any Python yourself.
When you install Hermes, it creates a folder called ~/.hermes on your machine. This is your agent's home. Everything that makes your agent unique — its configuration, its memory, its skills, its session history, its scheduled jobs — lives inside this one folder.
This matters for two reasons. First, it means everything is in one place. If you want to back up your agent, you back up this folder. If you want to move your agent to a different machine, you copy this folder. Second, it means you can inspect what the agent is storing. The memory files, skill documents, and configuration are all readable text — not a database you cannot look inside.
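If you want a concrete recipe, a single archive is enough. This is a minimal sketch assuming a Unix-like shell and the standard tar utility:

    # Back up the agent's entire state: config, memory, skills, sessions, jobs.
    tar czf hermes-backup.tar.gz -C ~ .hermes

    # Restore (or move to another machine) by extracting into the home directory.
    tar xzf hermes-backup.tar.gz -C ~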
Here is what lives inside ~/.hermes:
Configuration (config.yaml): your main settings file — model choice, provider, toolset preferences, and more.
Secrets (.env): API keys and other sensitive values. This file is separate from config.yaml so you can share your config without exposing keys.
Sessions: conversation history for each profile. The agent loads prior context from here when you resume a session.
Skills: procedural knowledge documents. Both bundled skills and skills the agent creates over time live here.
Memory: persistent memory files per profile — the facts and preferences the agent carries forward.
Cron jobs: scheduled job definitions and their state. The cron system reads and writes here.
Logs: session logs and gateway logs. Useful for debugging and auditing what the agent did.
Code: the program code itself. The installer clones the Hermes repository here.
The config.yaml file is where you define how your agent behaves. It covers the model selection, the inference provider (who serves the AI model), which tools are available, memory limits, session reset policies, and many other knobs.
You do not have to edit this file by hand. The hermes setup wizard walks you through the key choices when you first install. Later, you can change any setting with the hermes config command or by opening the file in a text editor.
The file is written in YAML, a plain-text format designed for readability: indentation and key-value pairs instead of brackets or commas. The installer creates this file from a template, so you start with sensible defaults and only change what you need.
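As a flavor of what that looks like, here is a hypothetical excerpt. The actual keys come from the installer's template; treat the names below as illustrative rather than the exact Hermes schema:

    # Hypothetical config.yaml excerpt -- indentation and key-value pairs, no brackets.
    provider: auto            # detect the provider from available credentials
    toolsets:
      - web                   # search and page extraction
      - file                  # read, write, patch, and search files
    memory:
      limit: 200              # example of a memory-limit knob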
The .env file is where Hermes stores API keys and other secrets. Your model provider key (for OpenRouter, OpenAI, Anthropic, or whichever provider you choose), your messaging platform tokens (Telegram bot token, Discord bot token, Slack app credentials), and any other sensitive values go here.
This file is deliberately separate from config.yaml. The reason: you might want to share your configuration with a colleague (so they can replicate your setup), but you should never share your API keys. Keeping secrets in a separate file means you can share one without the other.
The setup wizard creates this file from a template and prompts you to fill in your keys. Hermes also reads environment variables from your system shell, so if you already have an OPENAI_API_KEY set in your terminal, Hermes picks it up automatically.
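For example, an .env file is just one KEY=value pair per line. OPENAI_API_KEY appears above; the other variable name here is a hypothetical placeholder:

    # Secrets live here, one per line -- keep this file out of anything you share.
    OPENAI_API_KEY=sk-...            # your model provider key (value elided)
    TELEGRAM_BOT_TOKEN=...           # hypothetical name for a messaging token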
Hermes does not include its own AI model. Instead, it calls external model providers — services like OpenAI, Anthropic, Google, or OpenRouter — through their APIs (the programming interfaces these services expose). You bring your own API key, and Hermes routes your prompts to the model you choose.
The default setting is "auto," which means Hermes detects your provider from the credentials you have set up. If you have an OpenRouter key, it uses OpenRouter. If you have an Anthropic key, it uses Anthropic directly. You can also explicitly pick a provider using the hermes model command — no code changes required.
Here are the main provider options:
Nous Portal: Hermes's own provider. Uses OAuth login (no API key to manage). Good starting point.
OpenRouter: an aggregator that gives access to 200+ models from one key. Convenient for experimenting with different models.
OpenAI (direct): direct access to GPT models with your OpenAI API key.
Anthropic (direct): direct access to Claude models. Requires the Anthropic extra (installed automatically when you choose this provider).
Google (Gemini): direct access to Google's Gemini models with your Google AI Studio API key.
Local / Custom: any OpenAI-compatible endpoint — Ollama, LM Studio, vLLM, llama.cpp, or your own server. Runs models on your own hardware (see the sketch after this list).
There are also providers for NVIDIA NIM, Xiaomi, MiniMax, Hugging Face, and others. The key point: Hermes is not locked to any single model or provider. You switch by running one command, and the rest of your setup stays the same.
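To make the Local / Custom option concrete, here is a hypothetical config excerpt. The key names are illustrative, and the URL is simply Ollama's default OpenAI-compatible endpoint:

    # Hypothetical config.yaml excerpt for a local OpenAI-compatible server.
    provider: custom                        # key names here are illustrative
    base_url: http://localhost:11434/v1     # Ollama's default local endpoint
    model: llama3.1                         # whichever model your server hosts
    api_key: none                           # local servers often accept any key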
In Chapter 3, you saw toolsets as a concept — groups of related tools the agent can use. Now let us look at how they work technically.
Hermes includes over 70 built-in tools, organized into toolsets. A toolset is a group of tools that share a purpose and often share dependencies. For example, the web toolset includes web search and web page extraction. The terminal toolset includes command execution and process management. The file toolset includes reading, writing, patching, and searching files.
Tools are discovered automatically. Each tool file registers itself with the tool registry when Hermes starts up — no manual configuration needed. This means if you add a custom tool, it appears in the agent's available tools as soon as Hermes finds it.
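To give a feel for what discovery implies, here is a minimal Python sketch of a custom tool. The import path and the register_tool decorator are invented for illustration; the real registration API is defined by the Hermes source:

    # Hypothetical sketch of a custom tool. The import path and decorator
    # are invented for illustration; Hermes defines the real registration API.
    from hermes.tools import register_tool  # hypothetical import

    @register_tool(name="word_count", description="Count words in a text file")
    def word_count(path: str) -> int:
        """Return the number of whitespace-separated words in the file."""
        with open(path, encoding="utf-8") as f:
            return len(f.read().split())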
You control which toolsets are available per profile and per platform. Your CLI sessions might have full access, while your Telegram bot might be restricted to web search and file operations. This granularity lets you give each agent exactly the capabilities it needs — and nothing more.
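A sketch of how such a restriction might be expressed, with hypothetical key names:

    # Hypothetical per-profile toolset restrictions in config.yaml.
    profiles:
      cli:
        toolsets: [web, file, terminal]   # full access for local sessions
      telegram:
        toolsets: [web, file]             # the bot only searches and handles files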
MCP stands for Model Context Protocol. It is an open standard that lets AI tools talk to each other. If you have used tools like Claude Code or Cursor, you may have seen MCP in action — those tools can connect to external MCP servers to extend their capabilities.
Hermes plays two roles with MCP:
Hermes can expose its messaging conversations as MCP tools. This means another AI tool (like Claude Code) can list your Hermes conversations, read message history, send messages, and respond to approval requests — all from within the other tool's interface.
Hermes can connect to external MCP servers. This means you can add tools from the MCP ecosystem to your Hermes agent — filesystem access, GitHub integration, Notion access, database queries, or any server that speaks the MCP protocol.
In plain terms: MCP is the bridge between Hermes and the broader AI tool ecosystem. It lets your Hermes agent reach outward (adding capabilities from other tools) and lets other tools reach inward (interacting with your Hermes conversations and messages).
MCP is optional. You can use Hermes fully without ever configuring an MCP server. But if you already use tools that support MCP, the integration is straightforward — you add a few lines to your config.yaml pointing to the MCP server, and Hermes discovers its tools automatically.
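For instance, registering an MCP server might look like the following sketch. The Hermes key names are illustrative; the npx command shown is the standard way to launch the official filesystem MCP server:

    # Hypothetical config.yaml excerpt registering an external MCP server.
    mcp_servers:
      filesystem:
        command: npx
        args: ["-y", "@modelcontextprotocol/server-filesystem", "/home/me/docs"]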
Not every Hermes user needs the same dependencies. If you use OpenRouter as your provider, you do not need the Anthropic SDK installed. If you never generate images, you do not need the image generation library.
Hermes handles this with a system of optional extras. The base install includes only the packages every session needs. Provider-specific and feature-specific packages are installed automatically when you enable the feature that requires them — for example, when you run hermes model and choose Anthropic as your provider, Hermes installs the Anthropic SDK on the spot.
This approach keeps the initial install smaller and faster. It also reduces the surface area for supply-chain issues — fewer installed packages means fewer potential vulnerabilities.
Hermes runs on your machine and can execute commands, edit files, and make network requests. That power demands safeguards. The security model operates in layers:
Allowlists control who can talk to the agent: only paired users and approved channels get access.
Approval gates cover commands that could cause harm (deleting files, installing packages); these require your explicit approval before the agent proceeds. The default is manual approval — the agent asks, you decide.
Credential isolation keeps secrets out of subprocesses: when the agent runs a child process, credentials it does not need are withheld.
Injection scanning checks files injected into the agent's context (like project instructions) for prompt injection attempts — someone trying to sneak malicious instructions into your configuration.
Profile isolation keeps each profile's sessions, memory, and skills separate. One profile cannot read another profile's data.
These layers work together, not in isolation. A breach at one layer is caught by the next. Chapter 14 goes deeper into security, reliability, and operational practices.
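To give these layers a concrete shape, here is a hypothetical configuration sketch; every key name in it is invented for illustration:

    # Hypothetical sketch of security settings in config.yaml.
    security:
      allowlist:
        telegram: ["123456789"]    # only this paired user may reach the agent
      approval_mode: manual        # risky commands wait for your explicit yes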
You now know the tech stack: Python runtime, ~/.hermes data directory, config.yaml for settings, .env for secrets, model providers for AI, tool registry for capabilities, and MCP for external integrations. If your agent suddenly cannot connect to a model, which two files would you check first — and why?