Engineering notes, research briefs, and field reports from the team building Lupid. We write when we have something we'd want to read ourselves — rarely, slowly, with the work in front of us.
OpenAI shipped ChatGPT Atlas and admitted prompt injection may never be fully ‘solved’ for browser agents. A reading of why that admission is the thesis statement for runtime enforcement.
In October 2025, OpenAI shipped Atlas — a browser where the agent is the user. Two months later, they wrote that prompt injection is unlikely to ever be fully solved for this class of product. They shipped the product first. They published the admission second. A reading of what changes when every webpage is an attack surface and where an enforcement layer outside the browser actually catches.
Memory poisoning lets an attacker plant a payload in February that fires in April. A reading of MINJA-shaped attacks and what re-classifying memory at retrieval time catches that classifying it at write time does not.
Security Research
A reading of CVE-2025-54135 (CurXecute), and why a permission list inside the agent's writable filesystem is a permission list with one footnote: or whoever else can write here.
Security Research
CVE-2025-32711 was the first known attack on an AI agent that needed no user action at all. The agent's retrieval layer was the vector. A reading of what changes when the prompt-injection surface is the corpus, not the prompt.
Security Research
How a governance plane in the agent's path turns the lethal trifecta from inevitable into observable. A step-by-step reading of the gemini-cli supply-chain disclosure.
Security Research
A walkthrough of an autonomous agent incident, the runtime that caught it, and what the record looks like twenty-four hours later.
Field Engineering
Why the audit log is not an artifact of security work. It is the work. A short essay on what changes when the record becomes the product.
Manifesto