The thesis

Every prior identity layer assumed humans on one side and systems on the other. AI agents broke that. They hold credentials a human wouldn't, they make decisions a human wouldn't have time to make, and they delegate to other agents in chains we can't see from the outside.

The fix is not "smarter models." Models cannot be trained out of being misled by the corpus they retrieve, the memory they consult, or the tool description they read. Refusal training is a property of language; trust enforcement has to be a property of what runs alongside the language.

That layer didn't exist. We're building it.

What we believe

§ 01 · Enforcement at runtime

The model can't be the arbiter.

If the only thing standing between an agent and a destructive action is the model deciding to refuse, you have a security review, not a security control. Lupid runs outside the model's reasoning loop, in code paths the model can't reach.
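The idea of a control that lives outside the model can be made concrete with a small sketch. This is not Lupid's actual API; the names (`ToolCall`, `enforce`, the allow/deny rules) are hypothetical, and a real policy plane would be far richer. The point is structural: the check runs in ordinary code, after the model emits a tool call and before anything executes, so the model's willingness to refuse never enters the decision.

```python
# Illustrative sketch only: enforcement in a code path the model can't reach.
# All names and rules here are hypothetical, not Lupid's real interface.
from dataclasses import dataclass


@dataclass(frozen=True)
class ToolCall:
    tool: str
    args: dict


DENY = {"shell.exec", "db.drop"}            # destructive tools blocked outright
ALLOW_PREFIXES = ("fs.read", "http.get")    # read-only tools permitted


def enforce(call: ToolCall) -> bool:
    """Runs in the gateway between the model's output and the tool runtime.

    Deny rules win, then an explicit allow-list; anything unlisted is refused.
    The model's own reasoning is never consulted here.
    """
    if call.tool in DENY:
        return False
    return call.tool.startswith(ALLOW_PREFIXES)


assert enforce(ToolCall("fs.read", {"path": "/etc/hosts"})) is True
assert enforce(ToolCall("shell.exec", {"cmd": "rm -rf /"})) is False
assert enforce(ToolCall("email.send", {"to": "x"})) is False   # default-deny
```

Default-deny is the design choice that matters: a prompt-injected model can invent new tool calls, but it cannot invent a matching allow rule.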

§ 02 · Audit is the product

The record is what you sell.

A security tool that doesn't produce a tamper-evident, replayable record isn't a security tool — it's a vibe. We build for the post-incident question first: "what happened, in what order, and how do you prove it?"
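"Tamper-evident" has a standard construction: chain each record to the hash of its predecessor, so editing, deleting, or reordering any entry breaks verification from that point forward. The sketch below shows the technique in miniature; it is not Lupid's record format, and the field names are invented for illustration.

```python
# Illustrative hash-chained audit log. Field names are hypothetical;
# the technique (each record commits to its predecessor's hash) is standard.
import hashlib
import json


def append(log: list, event: dict) -> None:
    """Append an event, binding it to the hash of the previous record."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"event": event, "prev": prev, "hash": digest})


def verify(log: list) -> bool:
    """Replay the chain; any edit, deletion, or reorder fails verification."""
    prev = "0" * 64
    for rec in log:
        body = json.dumps(rec["event"], sort_keys=True)
        if rec["prev"] != prev:
            return False
        if rec["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True


log: list = []
append(log, {"seq": 1, "action": "tool_call", "tool": "fs.read"})
append(log, {"seq": 2, "action": "deny", "tool": "shell.exec"})
assert verify(log)

log[0]["event"]["action"] = "allow"    # rewrite history...
assert not verify(log)                 # ...and the chain no longer verifies
```

The same chain is what makes the record replayable: walking it from the genesis hash reproduces "what happened, in what order," and the final hash is the proof.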

§ 03 · Open source as verification

You can read what we ship.

Lupid is Apache 2.0. The runtime, the policy plane, the gateway, and the endpoint shield daemon are public on github.com/LupidAI. Our trust signal is "here is the code," not "trust the brand."

§ 04 · Self-host by default

Your data stays put.

Lupid runs in your cluster. PostgreSQL for the control plane, ClickHouse for audit, Redis for the hot path. There is no managed-data offering today. The agent records that prove what happened never leave your infrastructure.

§ 05 · No fake compliance theater

What we don't yet claim.

We don't have SOC 2, ISO 27001, or FedRAMP. When we do, the auditor's name and report date will be on the security page, not a logo wall. Until then: the source is open, the audit is tamper-evident, and the disclosure policy is real.

§ 06 · The brief is research

We write it, and we show our work.

The Lupid Brief covers real CVEs and disclosed attacks. EchoLeak, CurXecute, MINJA, browser-agent injection. Every post documents what the runtime catches and what it does not — published, not gatekept.

What we're working on

The runtime gateway, policy plane, and endpoint shield daemon are in active development. Public previews land on github.com/LupidAI as they stabilize.

If you are building autonomous agents at scale and want to compare notes on what an enforcement layer should do — we are looking for design partners. The fastest way to reach the team is the contact page or [email protected].

If you're a security researcher with an interesting class of agent attack — there's a coordinated disclosure policy and a public repository to file private advisories against.

Read what we ship.

The brief is the canonical record of what we believe and what we've reproduced against the runtime.