We build novel, verifiable automata (programs) for mathematics that leverage multiple strategies. Large language models propose; proof assistants dispose. Search is guided by models, but every claim is replayed and checked in a small trusted core, so it is auditable, reproducible, and boringly correct.
Not taking new clients: the laboratory is at capacity. Introduce yourself anyway with a short academic note or a phone number; we will call when a bench opens.
Language models propose tactics, rewrite hints, and high-level plans; the search space is pruned by cost functions derived from proof states and domain constraints.
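To make the guidance concrete, here is a minimal sketch of cost-guided best-first search, under stated assumptions: `propose_tactics(state)` stands in for a model call, `apply_tactic(state, tactic)` for a proof-assistant step, and the cost function is one plausible choice, not a description of ours.

```python
import heapq

def cost(state) -> float:
    """Hypothetical cost over a proof state: fewer, smaller open goals
    score lower, so the search expands the most promising states first."""
    return sum(1 + len(goal.hypotheses) for goal in state.open_goals)

def guided_search(root, propose_tactics, apply_tactic, budget=10_000):
    """Best-first search: the model proposes, the assistant disposes."""
    tiebreak = 0                               # keeps heap entries comparable
    frontier = [(cost(root), tiebreak, root)]
    while frontier and budget > 0:
        _, _, state = heapq.heappop(frontier)
        if not state.open_goals:               # all goals closed: candidate proof
            return state
        for tactic in propose_tactics(state):  # model proposes...
            budget -= 1
            child = apply_tactic(state, tactic)  # ...assistant disposes
            if child is not None:                # failed tactics prune themselves
                tiebreak += 1
                heapq.heappush(frontier, (cost(child), tiebreak, child))
    return None                                # budget exhausted, no proof found
```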
We synthesise small certified tactics and patch failure modes with counterexample-guided repair. No hallucinations: failing branches become training data.
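A sketch of the repair loop, with hypothetical stand-ins: `synthesize(spec, cexs)` is a model-backed candidate generator and `find_counterexample(candidate, spec)` is a solver or proof-assistant query; neither names a real API of ours.

```python
def cegis_repair(spec, synthesize, find_counterexample, max_rounds=32):
    """Counterexample-guided repair: each failing branch refines the next
    candidate, and the accumulated counterexamples double as training data."""
    cexs = []
    for _ in range(max_rounds):
        candidate = synthesize(spec, cexs)          # propose a fix
        cex = find_counterexample(candidate, spec)  # try to break it
        if cex is None:
            return candidate                        # checker is satisfied
        cexs.append(cex)                            # remember the failure
    raise RuntimeError("repair budget exhausted")
```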
Every successful path is replayed inside a proof assistant (Lean/Coq/Isabelle) with a tiny trusted core. Outputs are artefacts, not vibes.
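A minimal Lean 4 sketch of what "replay" means (illustrative, not from our codebase): however the proof was found, the shipped artefact is an ordinary proof term that the small trusted kernel rechecks from scratch.

```lean
-- However search found it, this is just a proof term; the kernel
-- rechecks it in full on every replay.
theorem add_comm_replayed (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- `#print axioms` reports exactly which axioms the proof relies on,
-- which is part of what makes the artefact auditable.
#print axioms add_comm_replayed
```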
A. At the moment the laboratory is in its early stages and we are working with only a few select partners. The benches are full; however, please feel free to introduce yourself via the waitlist and we will reach out when capacity increases.
A. No. Language models serve only as weak guidance for propose steps; proof assistants and solvers check every step and inform backtracking. If it does not replay, it does not ship.
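A sketch of that contract, with hypothetical `model`, `assistant`, and `search` objects: the model only orders candidates, only the checker can accept one, and every rejection feeds back into the search.

```python
def propose_step(state, model, assistant, search):
    """Weak guidance: the model ranks, the checker decides."""
    for tactic in model.rank(state.candidates()):      # guidance, not authority
        result = assistant.check(state, tactic)        # replay in the trusted core
        if result.ok:
            return result.new_state                    # ships only if it replays
        search.backtrack(state, tactic, result.error)  # failure informs backtracking
    return None                                        # nothing replayed here
```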
A. Yes, where licenses allow and require it; with private organisations, as contracts permit. We prefer results that can be inspected, replicated, and extended.
A. Because why ever not? Isn't what we're doing rather strange to begin with?
We are not onboarding clients right now. If you want to be notified when a bench frees up, leave a short academic note or a phone number. Minimal forms, maximal signal.