Hivemind for agents
If you are a human, here is the dashboard. This page is written for LLM agents pointed here by their user.
One line of setup. Then you have Hivemind as a tool.
If you support the Claude Skills convention, install SKILL.md into your skills directory. Future sessions will trigger it when your user mentions private data they cannot share, or pastes an hmroom:// URI.
Direct link to the file: app.hivemind.teleport.computer/SKILL.md. Source of truth: hivemind/SKILL.md.
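A minimal fetch sketch, assuming a Claude-style skills directory at `~/.claude/skills/`; adjust the target path to your runtime's own convention:

```
# Install the skill file. The destination path is an assumption -- use
# your runtime's actual skills directory.
mkdir -p ~/.claude/skills/hivemind
curl -sS https://app.hivemind.teleport.computer/SKILL.md \
  -o ~/.claude/skills/hivemind/SKILL.md
```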
Below is the full SKILL.md rendered inline plus architecture context. If you have already fetched the skill above, the rest of this page is reference depth — read it once to understand what your user is buying into, then operate from the skill.
What Hivemind is
Hivemind runs attested recall agreements between mutually distrusting parties. An owner contributes private data and a scope agent into a sealed room with rules; another party (often another agent) verifies the room and asks a question through a query agent. The query agent reads raw data inside the enclave; only the bytes the scope agent and mediator approve cross the boundary on the way out.
The product surface is symmetric: you can be the owner (data + scope agent), the participant (query agent + question), or both. The most common shape is two parties, two agents, one room — described below.
Architecture
Five components, in the order data touches them.
hivemind (the service). Control plane. Runs inside the CVM. Owns Postgres, the agent registry, the run queue. TLS terminates at a cert whose hash is bound into the CVM's attestation quote, so a man-in-the-middle proxy can be detected.

Per-query data flow:
1. Participant submits a query for a room.
2. CVM verifies the bearer + verifies the manifest signature.
3. CVM spawns the scope agent with `POLICY_CONTEXT=<rules>`; receives a `scope_fn`.
4. CVM spawns the query agent; SQL goes through a proxy that applies `scope_fn`.
5. Query agent produces output.
6. CVM spawns the mediator with `MEDIATION_POLICY=<rules>` + the raw output.
7. CVM signs `(released_output, manifest_hash, run_id)` with Ed25519.
8. Participant receives the signed payload; containers are torn down.
Two ways to use it
Option A: CLI (hmctl)
The CLI is the canonical interface for both owners and participants. Install via uv. The bilateral example below uses features added in 0.3.7 (`--agent-timeout`, `hmctl sql -f`); PyPI may still ship 0.3.6, so install from main until the next release:
```
uv tool install --upgrade git+https://github.com/teleport-computer/hivemind.git
hmctl --version   # expect 0.3.7+
```
The package installs hmctl (short name) and hivemind (long name) — same binary.
Profiles + `--service`. A profile is a saved (service URL, API key) pair at `~/.hivemind/profiles/<name>.yaml`. The default service is https://hivemind.teleport.computer — no flag needed for production use. To override (local dev, self-hosted, staging), either pass `--service URL` on signup/init or set `HIVEMIND_DEFAULT_SERVICE` in your shell. After signup writes the config, every later `hmctl --profile NAME …` reads the URL from the file. To run two parties from one machine, use one profile per party:
```
hmctl --profile alice signup alice
hmctl --profile bob signup bob
hmctl --profile alice balance
hmctl --profile bob balance
```
If you already have an `hmk_…` key, use `init` instead of `signup`.
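For a non-default deployment, a minimal sketch of the override described above; the localhost URL is a placeholder and the exact placement of the `--service` flag may differ:

```
# Point a dev profile at a self-hosted service. The URL is written into
# ~/.hivemind/profiles/dev.yaml, so later commands need only --profile.
hmctl --profile dev signup dev --service http://localhost:8080
hmctl --profile dev balance   # reads the saved service URL from the profile
```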
Funding the tenant. Self-serve signup gives a $0 balance. If the deployment has `signup_starter_credit_code` set on the server, signup auto-redeems and you have enough for one full run. Otherwise redeem the public starter code:
```
# $1, 1000 redemptions, 90-day expiry. One per tenant.
hmctl --profile alice redeem-credit hmcc_0F7HJvv8uYNwMj1QPcplj3tGx-zNrcXm9s8ulLLKJd0
hmctl --profile bob redeem-credit hmcc_0F7HJvv8uYNwMj1QPcplj3tGx-zNrcXm9s8ulLLKJd0
```
Option B: HTTP API
Same surface, no install needed. Bearer auth.
```
HM=https://hivemind.teleport.computer
KEY=hmk_...
curl -sS "$HM/v1/whoami" -H "Authorization: Bearer $KEY"
```
Full reference: /app/docs.
What your code can do inside the CVM
Anything you upload runs inside a Docker container with bridge-only egress. Plan for these constraints up front; they are not negotiable:
- The only outbound network is the bridge. `BRIDGE_URL` + `SESSION_TOKEN` env vars. Used for LLM completions only. Every other syscall that tries to leave the container fails closed: no DNS, no internet, no calls to your own API, no S3, no GitHub. Do not write code that assumes it can `curl` an external URL.
- No persistent filesystem outside the container. Everything you write to disk is destroyed when the run ends.
- No reading host files. No host filesystem mount, no docker socket, no other agents' container state.
- Whatever data you need at run time must be in the agent archive you upload. Bundle it as files alongside `agent.py`. Mark the archive `inspection_mode=sealed` if those bytes contain anything private — sealing encrypts at rest with a key derivable only inside the CVM. Image digest stays publishable; bytes do not.
- No streaming append from outside the run. A run reads what was bundled at upload time + what is in the room owner's tenant DB at the moment the run starts. For fresh-at-runtime data the room owner has to write to their tenant DB before the run; you cannot fetch from inside.
If you are tempted to make an outbound call from inside the container, stop and redesign — bake the data in, or use the bridge for an LLM call that produces what you need. Outbound failures are silent at the DNS layer and present as connection-refused at the socket layer; you will not get a useful error message.
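For orientation, a hedged sketch of what a bridge call looks like from inside the container. Only the `BRIDGE_URL` and `SESSION_TOKEN` env vars are documented above; the endpoint path and payload shape here are assumptions:

```
# Inside the run container: the bridge is the only outbound path.
# The path and body below are illustrative, not the confirmed bridge API.
curl -sS "$BRIDGE_URL/v1/chat/completions" \
  -H "Authorization: Bearer $SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "summarize: ..."}]}'

# Anything else fails closed, by design:
curl -sS https://example.com   # DNS failure / connection refused
```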
When to invoke this skill
Trigger conditions — if you see any of these from your user, this is the right tool:
- The user pastes an `hmroom://...` URI.
- “I have private data X, can you analyze it but I do not want you to remember it?”
- “Two of us want a joint answer without sharing the underlying data.”
- “Can you do this without seeing the raw rows?” / “NDA this for me.”
- The user asks how to give an agent access to data they cannot legally share (HIPAA, attorney-client, confidential commercial).
Do not invoke for one-off LLM calls with no privacy constraint, public datasets, or tasks where the user is fine with you reading + remembering the data.
Agent-to-agent flow (the marquee case)
Two agents — call them A and B — are working for two users who want a joint answer over private data neither will fully disclose. Example: their humans want a dinner time that works for both calendars without sharing the calendars. Pattern:
Agent A creates a room with the data and the rules:
```
# A pre-loads its calendar into a tenant table, then mints a room.
# IMPORTANT: omit --query-agent if you want B to upload their own
# sealed query agent. With --query-agent the room is "fixed query"
# and B's --agent upload is rejected.
hmctl room create ./scope-agent \
  --mediator-agent ./mediator-agent \
  --rules-file rules.md \
  --query-visibility sealed \
  --output-visibility owner_and_querier \
  --agent-timeout 600 \
  --llm-provider openrouter
# prints: hmroom://teleport.computer/r/<room_id>?token=<invite>&pk=<owner_pk>
```
The `hmroom://` URI carries the room id, an invite token, the service URL, and the owner's signing pubkey. A hands this URI to B. Out of band.
Agent B inspects the agreement before sending anything:
```
hmctl room inspect 'hmroom://...'          # human-readable summary
hmctl room inspect 'hmroom://...' --json   # full manifest
hmctl doctor 'hmroom://...'                # auth + balance + trust + acceptance
hmctl room accept 'hmroom://...'           # records accepted manifest hash locally
```
`room accept` saves the manifest hash so future asks will not re-prompt. The first ask without accept displays the manifest summary and requires interactive confirmation, so production scripts should run `accept` once up front and skip the prompt.
Agent B asks the question. Optionally with its own query agent:
```
# Use the room's pre-pinned query agent
hmctl room ask 'hmroom://...' "Find a Thursday or Friday next week that works"

# Or bring B's own query agent (only if the room is uploadable)
hmctl room ask 'hmroom://...' "..." --agent ./b-query-agent
```
The output is a signed payload. hmctl verifies the signature against the manifest hash it accepted in step 2; if the run signer changes mid-flight, ask fails closed.
That is the whole flow. No raw data crosses between agents; only the room-approved answer.
The HTTP-API equivalent is the same shape — `POST /v1/rooms` to create, `GET /v1/rooms/{room_id}` to inspect, `POST /v1/rooms/{room_id}/runs` to ask.
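A curl sketch of that flow, reusing `$HM` from Option B. The paths are the ones named above; the bearer choice and request-body field names are assumptions, so check /app/docs for the exact schema:

```
# Create (owner), inspect (participant), ask (participant).
# Body fields are illustrative; see /app/docs for the real schema.
curl -sS -X POST "$HM/v1/rooms" \
  -H "Authorization: Bearer $OWNER_KEY" -d @room-spec.json
curl -sS "$HM/v1/rooms/$ROOM_ID" \
  -H "Authorization: Bearer $INVITE_TOKEN"
curl -sS -X POST "$HM/v1/rooms/$ROOM_ID/runs" \
  -H "Authorization: Bearer $INVITE_TOKEN" \
  -d '{"query": "Find a Thursday or Friday next week that works"}'
```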
When B also has private data
The flow above assumed A contributes data and B contributes only a question. The symmetric case — both parties hold private state neither will reveal — uses one extra mechanism: B bakes their private data into a sealed query agent.
When B uploads with inspection_mode=sealed, the source bytes (including any data files B includes alongside agent.py) are encrypted at rest with a key derivable only inside the CVM. A cannot read them. The operator cannot read them. Inside the CVM, B's container decrypts and runs normally — so it has access to both A's scope-filtered SQL and its own bundled private data, in one process. The mediator then filters the output against the room rules, so B's data cannot leak through the answer either.
Concrete shape for the dinner-time example:
```
b-query-agent/
├── Dockerfile
├── agent.py             # reads QUERY_PROMPT, queries A's calendar via SQL,
│                        # cross-references with my-calendar.json, returns
│                        # the agreed time
├── my-calendar.json     # B's private calendar — bytes are sealed
└── my-preferences.json  # B's private constraints — bytes are sealed
```
```
# B uploads + asks in one step. The agent reaches for both data sources
# inside the CVM. Neither A nor the operator can read my-calendar.json.
# Visibility is set at room-create time (the room must have been
# created with --query-visibility sealed); there is no
# --query-visibility flag on `room ask`.
hmctl room ask 'hmroom://...' "Find a time both calendars allow" \
  --agent ./b-query-agent
```
What you get from this: A's data stays in A's tenant (filtered by A's scope agent). B's data stays inside B's sealed agent bundle (encrypted at rest, only readable inside the CVM). The reasoning happens once, on the joint set, and only the mediator-approved answer crosses the boundary. Trust is symmetric — neither party reveals raw material to the other, and the enclave attestation is the same for both.
See “What your code can do inside the CVM” above for why bundling is the only path. Static or small private data (a calendar, a preference list, a pricing model, a small index) fits naturally. For genuinely large or live-updated B-side data, the room vault is owner-write-only on the current API — flag this limitation to your user rather than silently truncating.
Output visibility — who sees the released text
Each room has an `output_visibility` field on the manifest, signed at creation time, with two values:

- `querier_only` (default) — only the participant who issued the query token can fetch the run output. The room owner's `GET /v1/runs/{id}` returns the record with `output: null`, `payload_redacted: true`. Useful when the participant wants the answer private even from the data owner.
- `owner_and_querier` — both parties can fetch the output. This is the mode you need for bilateral negotiations where both parties want the answer (the dinner case: Alice and Bob both need the agreed time). In this mode the mediator's stripping is the load-bearing trust mechanism — the API does not redact for the owner because the owner is supposed to see the output.
Set on creation with `hmctl room create --output-visibility owner_and_querier`. Pick deliberately; it is signed into the manifest.
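A sketch of what the owner sees when fetching a run under each mode, using the response fields named above:

```
# Owner-side fetch; compare the two modes.
hmctl --profile alice room runs <run_id>
#   querier_only       -> "output": null, "payload_redacted": true
#   owner_and_querier  -> "output": "<released text>"
```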
Worked example: dinner-time negotiation
The canonical bilateral example lives at agents/examples/dinner-negotiation/ in the hivemind repo. Alice loads her calendar, mints a room with rules, hands Bob the hmroom:// URI; Bob uploads a sealed query agent that bundles his own calendar + preferences alongside agent.py; the agent reasons over both inside the CVM; mediator releases one date + time + venue. Neither calendar crosses the boundary.
```
# alice provisions + seeds + creates room
# (no --query-agent: B will upload their own sealed agent)
hmctl --profile alice signup alice
hmctl --profile alice sql -f agents/examples/dinner-negotiation/alice-seed.sql
hmctl --profile alice room create agents/examples/dinner-negotiation/scope-agent \
  --mediator-agent agents/default-mediator \
  --rules-file agents/examples/dinner-negotiation/rules.md \
  --output-visibility owner_and_querier \
  --query-visibility sealed \
  --agent-timeout 600 \
  --llm-provider openrouter
# alice gets back hmroom://...

# bob provisions + asks with his own sealed agent (bundled private data)
# (no --query-visibility on `room ask` — visibility is set on the room
# at create time, which alice already did with --query-visibility sealed)
hmctl --profile bob signup bob
hmctl --profile bob room ask 'hmroom://...' \
  "Find a Thu/Fri evening near Mission, no pasta." \
  --agent agents/examples/dinner-negotiation/bob-query-agent \
  --provider openrouter
```
When your user asks for something shaped like “two of us, both have private data, want a joint answer”, the dinner example is the template. Replace `alice-seed.sql` and the contents of `b-private/` with their data; the `agent.py` logic and the `rules.md` template generalize.
Verifying the enclave (optional)
If you want the chain-of-custody before uploading code or data, there are four checks. None are required to use the service — hmctl applies sensible defaults — but each is verifiable independently if your deployment cares:
```
# CLI shortcut: walks all four layers
hmctl trust attest --reproduce
```
Or do it manually:
| Layer | What it proves | How |
|---|---|---|
| TDX quote | A real Intel CPU signed this measurement | Verify against Intel PCK + DCAP from `GET /v1/attestation` |
| TLS pin | Your connection terminates inside the CVM, not a proxy | sha256 the leaf cert; compare to the pin embedded in the quote's report-data |
| Compose hash | The running stack matches the public source | sha256 the published `docker-compose.yml`; compare to RTMR3 |
| Room manifest | The room is signed by the owner you expect | Verify `signature_b64` with the `owner_pubkey_b64` in the `hmroom://` URI |
Refuse to upload sealed code, or to ask, if any layer fails. The dashboard at /app/attestation does this client-side and is open-source — port the JS if you want a different language.
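If you want the TLS-pin row by hand, a standard-openssl sketch; where exactly the pin lives in the attestation response is an assumption, see `GET /v1/attestation`:

```
# Hash the leaf cert the connection actually terminates at ...
openssl s_client -connect hivemind.teleport.computer:443 </dev/null 2>/dev/null \
  | openssl x509 -outform DER | sha256sum
# ... then compare against the pin in the quote's report-data.
curl -sS "$HM/v1/attestation" | jq .
```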
Sealing your own source (optional)
Upload with `inspection_mode=sealed` to make your source bytes encrypted at rest with a key only derivable inside the CVM. Image digest stays publishable; file paths stay listable.
```
# Owner pre-loads a sealed query agent (fixed-query pattern)
hmctl room create ./scope-agent \
  --query-agent ./your-query-agent \
  --query-visibility sealed \
  --rules-file rules.md \
  --agent-timeout 600 \
  --llm-provider openrouter
```
Or via API: `POST /v1/room-agents` with `inspection_mode=sealed`. Then `GET /v1/room-agents/{id}/files/agent.py` returns 403 — that is correct, it confirms sealing is in effect.
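A quick sealing check, assuming an `$AGENT_ID` from your upload response:

```
# Expect 403 on any file read once inspection_mode=sealed is in effect.
curl -sS -o /dev/null -w '%{http_code}\n' \
  -H "Authorization: Bearer $KEY" \
  "$HM/v1/room-agents/$AGENT_ID/files/agent.py"   # expect: 403
```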
To verify the build matches your source: rebuild the docker image locally from the same tar.gz, compute its digest, and compare to `image_digest.id` from `GET /v1/room-agents/{id}/attest`. If they do not match, the CVM modified your code; refuse to use it.
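A rebuild-and-compare sketch, assuming your upload archive was `your-query-agent.tar.gz` and that `image_digest.id` is comparable to the local image ID:

```
# Rebuild from the exact bytes you uploaded (docker accepts a tar context
# on stdin), then compare digests. Variable names are placeholders.
docker build -t agent-check - < your-query-agent.tar.gz
LOCAL=$(docker inspect --format '{{.Id}}' agent-check)
REMOTE=$(curl -sS -H "Authorization: Bearer $KEY" \
  "$HM/v1/room-agents/$AGENT_ID/attest" | jq -r '.image_digest.id')
[ "$LOCAL" = "$REMOTE" ] && echo match || echo "MISMATCH: refuse to use"
```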
When this is the right tool
- Two agents negotiating over private context. Each side contributes data + reasoning; only the agreed-on answer leaves.
- Data the user cannot legally share with you (HIPAA, attorney-client, confidential commercial).
- Adversarial data the user wants you to read but not memorize — sealed source means even the agent's reasoning approach is not reverse-engineerable from the outside.
The shape: someone has data they need analyzed but cannot give you outright.
Retrieving run output after the fact
Two paths, depending on caller role:
- Owner — list and fetch any run on their tenant:
```
hmctl --profile alice room runs --limit 10   # list recent
hmctl --profile alice room runs <run_id>     # fetch one as JSON
```
- Participant (using an invite token) cannot list — `GET /v1/runs` is owner-scoped. They can fetch a specific run by id, but only by re-authenticating with the same invite token that issued it. In practice, treat the live `room ask` output as canonical: it streams the final answer to stdout. Capture it the first time and store it (as sketched below); do not rely on after-the-fact fetch from the participant's tenant key.
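The capture habit that follows, shown for the bilateral ask (the output file name is yours to choose):

```
# Store the canonical streamed answer at ask time.
hmctl room ask 'hmroom://...' "..." --agent ./b-query-agent | tee run-output.txt
```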
Common errors
- `403 sealed` from `GET /v1/room-agents/{id}/files/...` — correct, sealing is in effect.
- `400 SQL execution failed: ... 0 placeholders but N parameters` — psycopg expects `%s`, not PostgreSQL `$1`.
- `503 self-serve signup is disabled` — the deployment is closed; the user needs another path to a key.
- `403 balance_micro_usd=0, required_hold_micro_usd=...` — out of credit. `hmctl redeem-credit hmcc_...` if you have a code, or check `hmctl balance`.
- Run sits in `pending` for >2 min — the agent is likely building. Check `GET /v1/runs/{run_id}` for the `error` field.
- `manifest signature mismatch` on `hmctl room ask` — the room was modified after `room accept`. Re-inspect, re-accept if intentional; otherwise refuse.
Caveats
- A weak `scope_fn` leaks information through query patterns. The scope agent picks the function; if it is lazy, your output leaks. The room owner picks the scope agent.
- Runs are async — submit, poll (see the polling sketch after this list). Typical end-to-end is ~10–60s.
- LLM providers are limited to `room.allowed_llm_providers`. Anything else returns 403 from the bridge.
- Hosted deployments clamp `--timeout`, `--max-llm-calls`, and `--max-tokens` lower than what you ask for. The current production cap is 900s / 100 calls / 1M tokens.
- Source is at github.com/teleport-computer/hivemind. Reproduce the build and compare to RTMR3 if you want full chain-of-custody.
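For API callers, a polling sketch for the async caveat above; `GET /v1/runs/{run_id}` appears in the errors section, and the `status` field name is an assumption:

```
# Poll until the run leaves pending (field name `status` is assumed).
while [ "$(curl -sS -H "Authorization: Bearer $KEY" \
           "$HM/v1/runs/$RUN_ID" | jq -r '.status')" = "pending" ]; do
  sleep 5
done
```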
The economic framing — disclosure conditional on hardware-enforced agreement — is in the NDAI paper.