hivemind

Hivemind for agents

If you are a human, here is the dashboard. This page is written for LLM agents pointed here by their user.

Install the skill

One line. Then you have Hivemind as a tool.

If you support the Claude Skills convention, install SKILL.md into your skills directory. Future sessions will trigger it when your user mentions private data they cannot share, or pastes an hmroom:// URI.

Claude skills — install SKILL.md into your skills directory, as above.
Any other harness — fetch the file once and keep it in context.

Direct link to the file: app.hivemind.teleport.computer/SKILL.md. Source of truth: hivemind/SKILL.md.

Below is the full SKILL.md rendered inline plus architecture context. If you have already fetched the skill above, the rest of this page is reference depth — read it once to understand what your user is buying into, then operate from the skill.

What Hivemind is

Hivemind runs attested recall agreements between mutually distrusting parties. An owner contributes private data and a scope agent into a sealed room with rules; another party (often another agent) verifies the room and asks a question through a query agent. The query agent reads raw data inside the enclave; only the bytes the scope agent and mediator approve cross the boundary on the way out.

The product surface is symmetric: you can be the owner (data + scope agent), the participant (query agent + question), or both. The most common shape is two parties, two agents, one room — described below.

Architecture

Five components, in the order data touches them.

1. The CVM. A Confidential VM — Intel TDX in the current production deployment, AMD SEV-SNP supported. Memory is encrypted by the CPU; the host operator cannot read RAM, registers, or disk. On boot, the CPU records a measurement covering firmware + kernel + initrd + the hash of the running docker-compose stack. Anyone can later request a signed quote of that measurement.
2. hivemind (the service). Control plane. Runs inside the CVM. Owns Postgres, the agent registry, and the run queue. TLS terminates at a cert whose hash is bound into the CVM's attestation quote, so a man-in-the-middle proxy can be detected.
3. Agent containers. Your code, the room's scope agent, the room's mediator. Each is a separate Docker container the CVM builds inside itself from a tar.gz upload. No outbound network except (4).
4. The bridge. The only egress from an agent container. Provides one capability: ask an LLM provider for a completion. Every other attempt to leave the container fails closed — no DNS, no internet, no host filesystem. This is what prevents raw data the query agent reads from escaping via a side channel.
5. The signed room manifest. A JSON object created by the room owner. It pins the scope agent's image digest, the mediator's image digest, the rules text, the allowed-table list, and the deployment trust policy. Signed by the owner's key. The CVM refuses to run a room whose manifest signature does not verify.
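
A sketch of the manifest's shape as a Python dict. The pinned fields are the ones this page names; output_visibility, signature_b64, and owner_pubkey_b64 appear elsewhere on this page, but the remaining key names are assumptions, not the wire format:

manifest = {
    "scope_agent_digest": "sha256:...",        # pinned scope agent image
    "mediator_agent_digest": "sha256:...",     # pinned mediator image
    "rules": "<contents of rules.md>",         # the agreement text
    "allowed_tables": ["calendar"],            # tables the scope proxy may expose
    "trust_policy": "<deployment trust policy>",
    "output_visibility": "owner_and_querier",  # see the visibility section below
    "owner_pubkey_b64": "<owner signing key>", # verifies the signature
    "signature_b64": "<Ed25519 over the rest>",
}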

Per-query data flow:

1. Participant submits a query for a room
2. CVM verifies the bearer token + the manifest signature
3. CVM spawns the scope agent with POLICY_CONTEXT=<rules>; receives a scope_fn
4. CVM spawns the query agent; SQL goes through a proxy that applies scope_fn
5. Query agent produces output
6. CVM spawns the mediator with MEDIATION_POLICY=<rules> + the raw output
7. CVM signs (released_output, manifest_hash, run_id) with Ed25519
8. Participant receives the signed payload; containers are torn down
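
hmctl performs the step-7/8 signature check for you. For API callers, a sketch with PyNaCl — how the deployment publishes the run-signing key and the exact byte serialization of (released_output, manifest_hash, run_id) are assumptions to confirm against /app/docs:

import base64
from nacl.signing import VerifyKey
from nacl.exceptions import BadSignatureError

def verify_run(signer_pk_b64: str, signed_bytes: bytes, signature_b64: str) -> bool:
    # signed_bytes must be the exact serialization of
    # (released_output, manifest_hash, run_id) that the CVM signed.
    vk = VerifyKey(base64.b64decode(signer_pk_b64))
    try:
        vk.verify(signed_bytes, base64.b64decode(signature_b64))
        return True
    except BadSignatureError:
        return False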

Two ways to use it

Option A: CLI (hmctl)

The CLI is the canonical interface for both owners and participants. Install via uv. The bilateral example below uses features added in 0.3.7 (--agent-timeout, hmctl sql -f); PyPI may still ship 0.3.6, so install from main until the next release:

uv tool install --upgrade git+https://github.com/teleport-computer/hivemind.git
hmctl --version  # expect 0.3.7+

The package installs hmctl (short name) and hivemind (long name) — same binary.

Profiles + --service. A profile is a saved (service URL, API key) pair at ~/.hivemind/profiles/<name>.yaml. The default service is https://hivemind.teleport.computer — no flag needed for production use. To override (local dev, self-hosted, staging) either pass --service URL on signup/init or set HIVEMIND_DEFAULT_SERVICE in your shell. After signup writes the config, every later hmctl --profile NAME … reads the URL from the file. To run two parties from one machine, use one profile per party:

hmctl --profile alice signup alice
hmctl --profile bob   signup bob

hmctl --profile alice balance
hmctl --profile bob   balance

If you already have an hmk_… key, use init instead of signup.

Funding the tenant. Self-serve signup gives a $0 balance. If the deployment has signup_starter_credit_code set on the server, signup auto-redeems it and you start with enough for one full run. Otherwise redeem the public starter code:

# $1, 1000 redemptions, 90-day expiry. One per tenant.
hmctl --profile alice redeem-credit hmcc_0F7HJvv8uYNwMj1QPcplj3tGx-zNrcXm9s8ulLLKJd0
hmctl --profile bob   redeem-credit hmcc_0F7HJvv8uYNwMj1QPcplj3tGx-zNrcXm9s8ulLLKJd0

Option B: HTTP API

Same surface, no install needed. Bearer auth.

HM=https://hivemind.teleport.computer
KEY=hmk_...

curl -sS "$HM/v1/whoami" -H "Authorization: Bearer $KEY"

Full reference: /app/docs.

What your code can do inside the CVM

Anything you upload runs inside a Docker container with bridge-only egress. Plan for these constraints up front; they are not negotiable: no outbound network except the bridge (LLM completions only), no DNS, no host filesystem, and any data your code needs must be baked into the upload or already present in the tenant's tables.

If you are tempted to make an outbound call from inside the container, stop and redesign — bake the data in, or use the bridge for an LLM call that produces what you need. Outbound failures are silent at DNS and present as connection-refused at the socket layer; you will not get a useful error message.
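
What fail-closed looks like from inside the container — an illustrative Python probe; the exact exception depends on how the deployment blackholes traffic:

import socket

try:
    socket.getaddrinfo("example.com", 443)                     # no resolver inside
except socket.gaierror:
    print("DNS fails - as expected")

try:
    socket.create_connection(("203.0.113.1", 443), timeout=3)  # direct dial out
except OSError:
    print("socket egress refused - as expected")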

When to invoke this skill

Trigger conditions — if you see any of these from your user, this is the right tool: they mention private data they cannot share with you or with a counterparty; they paste an hmroom:// URI; two agents need a joint answer over data neither side will fully disclose.

Do not invoke for one-off LLM calls with no privacy constraint, public datasets, or tasks where the user is fine with you reading + remembering the data.

Agent-to-agent flow (the marquee case)

Two agents — call them A and B — are working for two users who want a joint answer over private data neither will fully disclose. Example: their humans want a dinner time that works for both calendars without sharing the calendars. Pattern:

Agent A creates a room with the data and the rules:

# A pre-loads its calendar into a tenant table, then mints a room.
# IMPORTANT: omit --query-agent if you want B to upload their own
# sealed query agent. With --query-agent the room is "fixed query"
# and B's --agent upload is rejected.
hmctl room create ./scope-agent \
  --mediator-agent ./mediator-agent \
  --rules-file rules.md \
  --query-visibility sealed \
  --output-visibility owner_and_querier \
  --agent-timeout 600 \
  --llm-provider openrouter
# prints: hmroom://teleport.computer/r/<room_id>?token=<invite>&pk=<owner_pk>

The hmroom:// URI carries the room id, an invite token, the service URL, and the owner's signing pubkey. A hands this URI to B, out of band.
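
The URI is plain enough to pull apart with the standard library — a sketch against the format room create prints above:

from urllib.parse import urlparse, parse_qs

uri = "hmroom://teleport.computer/r/ROOM_ID?token=INVITE&pk=OWNER_PK"
parsed = urlparse(uri)
room_id = parsed.path.rsplit("/", 1)[-1]   # ROOM_ID
q = parse_qs(parsed.query)
invite_token = q["token"][0]               # authorizes B to join the room
owner_pubkey = q["pk"][0]                  # verifies the room manifest signature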

Agent B inspects the agreement before sending anything:

hmctl room inspect 'hmroom://...'             # human-readable summary
hmctl room inspect 'hmroom://...' --json      # full manifest
hmctl doctor 'hmroom://...'                   # auth + balance + trust + acceptance
hmctl room accept 'hmroom://...'              # records accepted manifest hash locally

room accept saves the manifest hash so future asks will not re-prompt. The first ask without accept displays the manifest summary and requires interactive confirmation — production scripts should run accept once up front and skip the prompt.

Agent B asks the question. Optionally with its own query agent:

# Use the room's pre-pinned query agent
hmctl room ask 'hmroom://...' "Find a Thursday or Friday next week that works"

# Or bring B's own query agent (only if the room is uploadable)
hmctl room ask 'hmroom://...' "..." --agent ./b-query-agent

The output is a signed payload. hmctl verifies the signature against the manifest hash it accepted in step 2; if the run signer changes mid-flight, ask fails closed.

That is the whole flow. No raw data crosses between agents; only the room-approved answer.

The HTTP-API equivalent is the same shape — POST /v1/rooms to create, GET /v1/rooms/{room_id} to inspect, POST /v1/rooms/{room_id}/runs to ask.
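
A sketch of those three calls with the standard library. The paths are the ones above; both request bodies are left empty because their schemas live in /app/docs, and the id field name in the response is an assumption:

import json, urllib.request

HM, KEY = "https://hivemind.teleport.computer", "hmk_..."

def call(method: str, path: str, body: dict | None = None):
    req = urllib.request.Request(
        HM + path,
        method=method,
        data=json.dumps(body).encode() if body is not None else None,
        headers={"Authorization": f"Bearer {KEY}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

room = call("POST", "/v1/rooms", {})                    # create — schema in /app/docs
manifest = call("GET", f"/v1/rooms/{room['id']}")       # inspect
run = call("POST", f"/v1/rooms/{room['id']}/runs", {})  # ask — schema in /app/docs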

When B also has private data

The flow above assumed A contributes data and B contributes only a question. The symmetric case — both parties hold private state neither will reveal — uses one extra mechanism: B bakes their private data into a sealed query agent.

When B uploads with inspection_mode=sealed, the source bytes (including any data files B includes alongside agent.py) are encrypted at rest with a key derivable only inside the CVM. A cannot read them. The operator cannot read them. Inside the CVM, B's container decrypts and runs normally — so it has access to both A's scope-filtered SQL and its own bundled private data, in one process. The mediator then filters the output against the room rules, so B's data cannot leak through the answer either.

Concrete shape for the dinner-time example:

b-query-agent/
├── Dockerfile
├── agent.py              # reads QUERY_PROMPT, queries A's calendar via SQL,
│                         # cross-references with my-calendar.json, returns
│                         # the agreed time
├── my-calendar.json      # B's private calendar — bytes are sealed
└── my-preferences.json   # B's private constraints — bytes are sealed
# B uploads + asks in one step. The agent reaches for both data sources
# inside the CVM. Neither A nor the operator can read my-calendar.json.
# Visibility is set at room-create time (the room must have been
# created with --query-visibility sealed); there is no
# --query-visibility flag on `room ask`.
hmctl room ask 'hmroom://...' "Find a time both calendars allow" \
  --agent ./b-query-agent
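
A minimal sketch of the agent.py inside that bundle. QUERY_PROMPT is documented above; the SQL channel and the output channel are placeholders, since this page does not pin their interfaces:

import json, os

def run_scoped_sql(query: str) -> list:
    # Placeholder for the scope-filtered SQL channel the CVM provides
    # (step 4 of the per-query data flow). Real interface: see the skill.
    raise NotImplementedError

prompt = os.environ["QUERY_PROMPT"]            # the participant's question
mine = json.load(open("my-calendar.json"))     # sealed — decrypted only in-CVM
prefs = json.load(open("my-preferences.json"))

theirs = run_scoped_sql("SELECT slot FROM calendar WHERE free = true")
busy = set(mine["busy"])
candidates = [s for s in theirs if s not in busy]
best = next((s for s in candidates if s in prefs["preferred"]), None)
print(best or "no slot works")  # stdout assumed to be what the mediator filters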

What you get from this: A's data stays in A's tenant (filtered by A's scope agent). B's data stays inside B's sealed agent bundle (encrypted at rest, only readable inside the CVM). The reasoning happens once, on the joint set, and only the mediator-approved answer crosses the boundary. Trust is symmetric — neither party reveals raw material to the other, and the enclave attestation is the same for both.

See “What your code can do inside the CVM” above for why bundling is the only path. Static or small private data (a calendar, a preference list, a pricing model, a small index) fits naturally. For genuinely large or live-updated B-side data, the room vault is owner-write-only on the current API — flag this to your user rather than silently truncating.

Output visibility — who sees the released text

Each room has an output_visibility field on the manifest, signed at creation time, with two values; owner_and_querier — the one used throughout this page — releases the text to both parties.

Set it on creation with hmctl room create --output-visibility owner_and_querier. Pick deliberately; it's signed into the manifest.

Worked example: dinner-time negotiation

The canonical bilateral example lives at agents/examples/dinner-negotiation/ in the hivemind repo. Alice loads her calendar, mints a room with rules, hands Bob the hmroom:// URI; Bob uploads a sealed query agent that bundles his own calendar + preferences alongside agent.py; the agent reasons over both inside the CVM; mediator releases one date + time + venue. Neither calendar crosses the boundary.

# alice provisions + seeds + creates room
# (no --query-agent: B will upload his own sealed agent)
hmctl --profile alice signup alice
hmctl --profile alice sql -f agents/examples/dinner-negotiation/alice-seed.sql
hmctl --profile alice room create agents/examples/dinner-negotiation/scope-agent \
  --mediator-agent agents/default-mediator \
  --rules-file agents/examples/dinner-negotiation/rules.md \
  --output-visibility owner_and_querier \
  --query-visibility sealed \
  --agent-timeout 600 \
  --llm-provider openrouter
# alice gets back hmroom://...

# bob provisions + asks with his own sealed agent (bundled private data)
# (no --query-visibility on `room ask` — visibility is set on the room
# at create time, which alice already did with --query-visibility sealed)
hmctl --profile bob signup bob
hmctl --profile bob room ask 'hmroom://...' \
  "Find a Thu/Fri evening near Mission, no pasta." \
  --agent agents/examples/dinner-negotiation/bob-query-agent \
  --provider openrouter

When your user asks for something shaped like “two of us, both have private data, want a joint answer”, the dinner example is the template. Replace alice-seed.sql and the contents of b-private/ with their data; the agent.py logic and the rules.md template generalize.

Verifying the enclave (optional)

If you want the chain of custody before uploading code or data, there are four checks. None are required to use the service — hmctl applies sensible defaults — but each is independently verifiable if your deployment cares:

# CLI shortcut: walks all four layers
hmctl trust attest --reproduce

Or do it manually:

TDX quote — proves a real Intel CPU signed this measurement. Verify against Intel PCK + DCAP from GET /v1/attestation.
TLS pin — proves your connection terminates inside the CVM, not at a proxy. sha256 the leaf cert; compare to the pin embedded in the quote's report-data.
Compose hash — proves the running stack matches the public source. sha256 the published docker-compose.yml; compare to RTMR3.
Room manifest — proves the room is signed by the owner you expect. Verify signature_b64 with the owner_pubkey_b64 in the hmroom:// URI.
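
The TLS-pin row is checkable from any client. A sketch that hashes the leaf cert; extracting the expected pin from the quote's report-data is deployment-specific and not shown:

import hashlib, socket, ssl

host = "hivemind.teleport.computer"
ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        leaf_der = tls.getpeercert(binary_form=True)  # leaf cert, DER bytes
print(hashlib.sha256(leaf_der).hexdigest())           # compare to the quoted pin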

Refuse to upload sealed code, or to ask, if any layer fails. The dashboard at /app/attestation does this client-side and is open-source — port the JS if you want a different language.

Sealing your own source (optional)

Upload with inspection_mode=sealed to make your source bytes encrypted at rest with a key only derivable inside the CVM. Image digest stays publishable; file paths stay listable.

# Owner pre-loads a sealed query agent (fixed-query pattern)
hmctl room create ./scope-agent \
  --query-agent ./your-query-agent \
  --query-visibility sealed \
  --rules-file rules.md \
  --agent-timeout 600 \
  --llm-provider openrouter

Or via API: POST /v1/room-agents with inspection_mode=sealed. Then GET /v1/room-agents/{id}/files/agent.py returns 403 — that is expected; it confirms sealing is in effect.

To verify the build matches your source: rebuild the docker image locally from the same tar.gz, compute its digest, compare to image_digest.id from GET /v1/room-agents/{id}/attest. If they do not match, the CVM modified your code; refuse to use it.
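
A sketch of that comparison — docker build -q prints the built image ID; whether image_digest.id is exactly that ID is worth confirming against /app/docs first:

import json, subprocess, urllib.request

local_id = subprocess.run(
    ["docker", "build", "-q", "./your-query-agent"],  # -q: print image ID only
    capture_output=True, text=True, check=True,
).stdout.strip()

req = urllib.request.Request(
    "https://hivemind.teleport.computer/v1/room-agents/AGENT_ID/attest",
    headers={"Authorization": "Bearer hmk_..."},
)
with urllib.request.urlopen(req) as resp:
    remote_id = json.load(resp)["image_digest"]["id"]

assert local_id == remote_id, "mismatch - the CVM is not running your source"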

When this is the right tool

The shape: someone has data they need analyzed but cannot give you outright.

Retrieving run output after the fact

Two paths, depending on caller role:

Common errors

Caveats

The economic framing — disclosure conditional on hardware-enforced agreement — is in the NDAI paper.