In early 2024, an employee at a multinational firm in Hong Kong transferred *HK$200M (~$25.6M USD)* after a video call with people he believed were his CFO and colleagues. Every face on the call — except his — was a deepfake.
The Hong Kong police confirmed it. The targets were senior, the meeting was internal, and the imitations were convincing enough that the one real person on the call noticed nothing.
A registry does not prevent every deepfake. But the case for one writes itself in a paragraph: when there is no canonical answer to *is this AI authorised by this person/business?*, the answer defaults to *yes, probably.* That default is the attack surface.
The Likeness Reserve product we are building exists for exactly this. A real person registers their handle and explicitly declares which AIs they have authorised — and which they have not. A counterparty has somewhere to check before sending the wire. It is the boring layer that makes the exciting layer safe.
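The lookup at the heart of this is small enough to sketch. Below is a minimal, hypothetical illustration of the idea, not our actual API: the handle format, agent IDs, and in-memory store are all invented for the example. The one design point it demonstrates is that an unregistered or undeclared agent fails *closed*, flipping the default from *yes, probably* to *no*.

```python
from dataclasses import dataclass, field

@dataclass
class LikenessRecord:
    """One registered identity and the AI agents it has explicitly authorised."""
    handle: str                                   # hypothetical handle, e.g. "@cfo-acme"
    authorised_agents: set = field(default_factory=set)

# Hypothetical in-memory registry keyed by handle.
REGISTRY = {}

def register(handle, agents):
    """A real person declares, up front, which agents may act as them."""
    REGISTRY[handle] = LikenessRecord(handle, set(agents))

def is_authorised(handle, agent_id):
    """The counterparty's check before sending the wire.

    Fails closed: an unknown handle or undeclared agent returns False.
    """
    record = REGISTRY.get(handle)
    return record is not None and agent_id in record.authorised_agents

register("@cfo-acme", {"agent:acme-voice-assistant"})
print(is_authorised("@cfo-acme", "agent:acme-voice-assistant"))  # True
print(is_authorised("@cfo-acme", "agent:unknown-deepfake"))      # False
```

The production version involves identity verification, signatures, and revocation, but the counterparty-facing contract stays this simple: one question, one authoritative answer.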
We are not selling fear. The case is already made. We are selling the place to look it up.