We see
what cameras can't.
A radar-camera foundation model built from the physics up. It learns without labels and improves with every mile driven.
Vision has a weather problem.
Self-driving cannot deliver in rain, fog, snow, or glare. That gap blocks 20–40% of the addressable market and creates compounding regulatory liability.
A foundation model for radar-camera fusion.
We integrate with any 4D imaging radar to deliver perception in conditions that blind vision-only and LiDAR systems, and we learn from every mile without labels.
All-weather
Near-zero disengagements where vision-only fails and LiDAR struggles.
Sensor-agnostic
Runs on any 4D imaging radar. No hardware lock-in.
Self-supervised
Every mile becomes training data. The model improves at near-zero marginal cost.
Software-only
Deploys as IP license on existing silicon. No new sensors, no manufacturing risk.
Active across automotive and defence.
- UK Government — CAM Pathfinder
- NVIDIA Inception
- PNNL / Battelle
- EnSilica
Built by founders from Nokia Bell Labs and GM, with an advisory board drawn from Ford and EPFL. Published at CVPR 2023–24. Four US patents pending.