The Wire
Field notes·1 May 2026·2 min

Why we're starting The Wire

AI agents now make decisions, send messages, and take payments on behalf of real businesses and real people. We're going to track what happens when accountability for those agents is missing — and what happens when it isn't.

A new pattern is forming on the open web. A small team ships a Custom GPT, a Telegram bot, an autonomous agent. Within hours, someone has cloned it, renamed it, repointed it at their own payment processor, and is collecting money in that team's name from its customers.

The response so far has taken three forms, none of them quite right:

  1. "Just check the Twitter handle." Handles are cheap. Verification needs to live with the AI itself, not on a platform that can suspend the account next week.
  2. "Make AI providers police it." OpenAI, Anthropic, Google can't verify every business claim made by an agent built on their model. It's the wrong layer.
  3. "Don't trust any AI you didn't build." Fine in theory. Useless when an AI is the thing actually doing the work for you.

The Wire is where we'll write about what we see. AI scams that hit the news. Quiet patterns we notice in our own logs. Decisions we're making in the protocol. Field notes from a category that didn't exist eighteen months ago.

We're not pretending to be neutral. We built AI Identity because we think this gap is going to get worse before any platform fixes it. But we'll show our work — sources linked, claims dated, and corrections published in place.

If you're building, verifying, or just trying to figure out who's on the other end of an AI conversation: this is the feed.

From AI Identity

We're the registry for verified AI agents. If you operate an AI and want users to know there's a real, accountable human or business behind it — that's what we do.