Manifesto

Tickets were a workaround. We're not going to keep them around.

For thirty years, the design of customer support software has been shaped by one constraint: humans are slow, and there are never enough of them. Every queue, every macro, every SLA timer exists to ration human attention across a flood of requests. The AI changes that constraint. The software hasn't caught up.

01 Support tools were built for the wrong actor.

Open any support tool and you'll see the same shape. A list of tickets on the left. An ID, a subject, a status — open, pending, on hold, solved. A reply box on the right. A panel for tags and macros so the agent can answer faster.

That shape isn't a design choice. It's a load-balancing algorithm dressed up as a user interface. The whole product is a system for distributing scarce human attention across a queue that's always longer than the team that works it. Every feature you can name — round-robin assignment, SLA breaches, escalation rules, canned responses, deflection bots, satisfaction surveys — is downstream of the same assumption: a person has to type the answer, and we need to make that person efficient.

For a long time, that assumption was correct. So the software optimized for it ruthlessly, and got very good at it. But the assumption is no longer correct. And so the software is no longer a fit.

When the actor changes, the interface should change too. Most support tools today are still drawn around the old actor.

02 The AI is good enough now. But it's bolted on, not built in.

The honest read on the current state of AI in support is this: for a meaningful slice of conversations — password resets, refund eligibility checks, integration questions, status lookups, common how-tos — a competent AI agent with access to your knowledge base, your product data, and a few internal APIs can produce a better reply than a junior human did six months ago. Faster, more consistent, available at 3am.

But almost every "AI support" feature shipped in the last two years has been bolted onto an existing ticket-shaped product. The chatbot lives at the front door. If it can't answer, it falls back to "create a ticket" — which puts the conversation right back into the same queue that existed before AI arrived. The AI was never given the permission, the tools, or the layout to actually finish the work.

This is what the industry means when it says "we added AI." It means: we put a deflection bot in front of the queue. The queue is unchanged.

A small example: look at any major support product's "AI Inbox." Every entry in the list is still a ticket. The AI's reply is a draft sitting inside that ticket, waiting for a human to approve. The product's center of gravity is the human's review queue, with AI assistance bolted to the side. The conversation hasn't moved.

03 The unit of work is a journey, not a ticket.

Real customers don't open one ticket. They open a series of them, often spread over weeks: I can't log in on Monday, can you refund this duplicate charge on Wednesday, where is my export on Friday. In a queue, those are three unrelated rows assigned to three different agents. To the customer, it's one experience of your product, getting worse.

If you re-organize around the customer instead of around the queue, two things become possible. The AI can see the whole arc — the prior conversations, the product events, the open jobs — and reason about what's actually going on. And the human, when they're needed, walks into a context that's already been built, instead of starting from a one-line subject and a stranger's name.

We call that arc a journey. It's the natural unit of work in customer support. It's the thing the customer experiences. It's the thing the team should be measured on.
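To make the idea concrete, here is a minimal sketch of the reorganization described above: the same conversations that a queue treats as unrelated rows, grouped by customer into a single journey. The names (`Conversation`, `Journey`, `group_into_journeys`) are illustrative, not our actual schema; the only assumption is that each conversation carries a customer id and an opened-at timestamp.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Conversation:
    customer_id: str
    subject: str
    opened_at: datetime


@dataclass
class Journey:
    customer_id: str
    conversations: list = field(default_factory=list)


def group_into_journeys(conversations):
    """Group individual conversations by customer into journeys,
    ordered by when each conversation was opened, so the whole arc
    is visible in one place."""
    journeys = {}
    for conv in sorted(conversations, key=lambda c: c.opened_at):
        journeys.setdefault(
            conv.customer_id, Journey(conv.customer_id)
        ).conversations.append(conv)
    return journeys
```

Run against the Monday/Wednesday/Friday example above, "I can't log in," "refund this duplicate charge," and "where is my export" stop being three rows for three agents and become one ordered arc for one customer.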

A queue measures whether you replied. A journey measures whether the customer got what they needed.

04 "AI handles resolution. Humans handle judgment."

This is the line we keep coming back to, because it's the only one we've found that holds up under pressure. Not "AI replaces humans." Not "AI assists humans." Something more specific.

Resolution is the high-volume, pattern-matching work that fills inboxes today. How do I do X. Why isn't Y working. Can you change Z. A well-instrumented AI agent with read and write access to your product can finish most of these without a person in the loop, and do it in seconds.

Judgment is the work that's actually hard. The escalation that needs a refund larger than policy allows. The angry email that's about to become a Twitter thread. The pattern across ten conversations that points to a bug nobody filed. The pricing exception that needs a sales call. This is what your best support people are great at — and instead of letting them do it, the queue model has been burying them in busywork.

The job of an AI-native support platform isn't to remove the human. It's to remove everything in front of the human that isn't judgment.

05 Confidence is a first-class signal.

The reason most teams won't let AI auto-send replies is that they have no way to know when the AI is going to be wrong. So they put a human in front of every reply, the AI's speed advantage evaporates, and the experiment ends with a memo that says "AI isn't ready yet."

The actual problem is the missing signal. An AI that doesn't tell you how confident it is, on what, and based on what evidence, is unauditable — and you can't deploy unauditable systems to customers. An AI that does tell you — that says "I'm 92% sure this is the answer, here are the three sources I used, and here's the policy I'm relying on" — is something you can put a threshold on. Above 85%, ship it. Below, route to the team. Adjust the threshold per topic, per customer tier, per time of day.
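A minimal sketch of what "put a threshold on it" means in practice, under the assumptions in the paragraph above: the AI reports a confidence score, and a small routing function compares it against a threshold that can be overridden per topic or per topic-and-tier. The function name and threshold keys are hypothetical, not a real API.

```python
def route_reply(confidence: float, topic: str, customer_tier: str,
                thresholds: dict, default_threshold: float = 0.85) -> str:
    """Auto-send only when the AI's reported confidence clears the
    threshold configured for this topic (or topic + customer tier);
    otherwise route the conversation to the team."""
    # Most specific override wins: (topic, tier), then topic, then default.
    threshold = thresholds.get(
        (topic, customer_tier),
        thresholds.get(topic, default_threshold),
    )
    return "auto_send" if confidence >= threshold else "route_to_team"
```

The same 92%-confident reply can auto-send for a password reset on the free tier and still land in front of a human for an enterprise refund, because the refund threshold was set higher. That per-topic tuning, not a blanket on/off switch, is what the missing signal makes possible.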

Confidence isn't a UX flourish. It's the operational primitive that makes AI-native support possible. Without it, you're guessing. With it, you're tuning a system.

06 The team's job changes. We should say so out loud.

If the AI is doing 70% of the resolution work, the team is doing something different from what it did before. We don't think this is a problem to hide. We think it's the most interesting part of the shift.

The team's day used to be: read ticket, type reply, repeat. The team's day in an AI-native shop is: review the AI's edge cases and judgment calls, spot patterns the AI is missing, write the policies and knowledge the AI uses, and have the high-leverage conversations the AI shouldn't be having. It's a more intellectually serious version of the job. And it's a smaller team — there's no honest way to say it isn't.

The teams who get the most out of this shift treat their AI like a new hire that needs onboarding, supervision, and gradually expanded scope. The ones who treat it like a deflection bot bolted to the front of the queue get the worst of both worlds — angrier customers and a confused team.

07 What we're building.

We're building a support platform where the journey is the unit of work, not the ticket. Where the AI does the resolution and exposes its reasoning. Where the team's interface is built around the judgment calls only they can make — and the work the AI handles silently doesn't show up at all unless something goes wrong.

It is not a chatbot bolted to the front of an inbox. It's the inbox, redrawn. Most of what's in a typical support tool today — queues, statuses, macros, round-robin — isn't there, because it's not what the work looks like anymore.

We are early. We are working with a small group of teams who want this to exist as much as we do. If that sounds like you, we'd like to talk.

— The support-arc team, April 2026