AI safety shouldn't depend on trusting the AI.
Every existing approach — training, guardrails, evaluations — is software defending against software. All of it can be removed, circumvented, or outgrown. We're building the infrastructure that makes safety permanent — enforced by hardware, outside the model's control entirely.
The EU AI Act makes auditable alignment mandatory for high-risk systems starting August 2026. 98% of enterprises are already increasing their AI governance budgets. The demand for real safety infrastructure isn't theoretical. It's here, and the decisions are being made right now.
David spent a year as a personal caregiver for a profoundly autistic individual, helping him make real breakthroughs in his mental health. That experience led him to build an AI mental health companion for neurodivergent people, which placed in the top 3 at Pitch It!, Binghamton University's pitch competition.
About half a year in, he stopped: AI wasn't safe or reliable enough. ChatGPT gets medical diagnoses wrong more than half the time, and a third of its cancer treatment recommendations contain errors. That's what pulled him into AI safety.
He spent winter break 2025 going deep on interpretability, having first learned what a neural network was on December 15th. In roughly ten weeks, working solo, he built a full suite of interpretability and safety tools, created a defensive inversion of the leading AI safety-stripping tool, filed two patents, and uncovered findings with no prior art in the field.
He also launched easywheels.io, which serves ML engineers precompiled, easy-to-install Python wheels; built Noosphere, a 3D embedding visualization platform for exploring conceptual representations across models; and is a founding advisor at Duino AI, a student-founded startup where he built the AI backend for their agentic Arduino IDE.
We're looking for people who want to work on the hardest problem in AI. The team is small and the work is real — hardware enforcement, interpretability research, and building something that doesn't exist yet.
Open roles:
- Hardware enforcement: with HBM experience
- Interpretability research: 1–2 researchers, understanding what models know and how they represent it
Interested? Get in touch.