Building Agentic Systems That Don't Frustrate Users

There's a lot of excitement right now about fully autonomous AI agents that can complete entire tasks without human intervention. The vision is compelling: tell the agent what you want in natural language, and the work just gets done.

But after working on several agentic projects this year, we've noticed something interesting: in most real-world business scenarios, less autonomy produces better outcomes.

The problem with full autonomy

When you let an AI agent make decisions and act autonomously, a few problems tend to come up:

  • The agent makes wrong assumptions that send it down the wrong path
  • It does things the user didn't actually want it to do
  • When it gets stuck, the user has to reconstruct where it went wrong
  • The user feels out of control

We worked on a project where the client wanted an agent that could draft responses to customer support emails. The fully autonomous approach would have the agent read the email, find relevant information, and send the response—all on its own.

But when we tested this, we found that even when the response was mostly correct, people still wanted to read it before it went out. The risk of sending something wrong was too high. The end result? People were spending just as much time reviewing and correcting as they would have spent writing it themselves.

A better approach: human-in-the-loop agents

We redesigned the system to give the human control at every step. Now it works like this:

  1. The agent drafts a response based on the incoming email and the knowledge base
  2. The human reviews, edits, and approves it
  3. The human sends it

This doesn't sound as impressive as "fully autonomous," but it's actually much more useful in practice. The agent does the boring part (drafting the initial version and pulling in relevant information), and the human does the part that requires judgment (making sure the tone is right and the answer is correct).

Design principles for less frustrating agents

From our experience, here are some principles that work well when building agentic systems for business:

1. Start with the user's workflow, not the technology

Ask: where does AI actually help this person do their job better? Don't just stick an agent in because it's what everyone is talking about.

2. Give the user clear visibility

The user should always understand what the agent is planning to do before it does anything. Show your work.
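One way to enforce this principle in code: surface the whole plan and gate execution behind an explicit confirmation. A hypothetical sketch, assuming a `confirm` callback supplied by the UI:

```python
from typing import Callable

def run_with_visibility(plan: list[str],
                        actions: dict[str, Callable[[], str]],
                        confirm: Callable[[list[str]], bool]) -> list[str]:
    """Show the user every planned step up front; execute only on confirmation."""
    if not confirm(plan):   # the user sees the plan before anything happens
        return []           # declined: nothing runs, no surprises
    return [actions[step]() for step in plan]
```

Because the check sits in front of the whole plan rather than inside individual actions, "show your work" becomes a structural property of the system instead of a convention each action has to remember.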

3. Make it easy to correct course

If the agent goes in the wrong direction, the user should be able to step in and correct it quickly. Don't make them start over from scratch.
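"Don't make them start over" can be made concrete by checkpointing step results, so a correction only reruns the tail of the plan. Another hypothetical sketch with made-up names:

```python
from typing import Callable

def rerun_from(plan: list[str],
               results: dict[str, str],
               step: str,
               actions: dict[str, Callable[[], str]]) -> dict[str, str]:
    """Keep results from steps before `step`; rerun `step` and everything after it."""
    idx = plan.index(step)
    kept = {s: results[s] for s in plan[:idx]}  # earlier work survives the correction
    for s in plan[idx:]:                        # only the corrected tail reruns
        kept[s] = actions[s]()
    return kept
```

The user corrects one step, and the agent resumes from there instead of re-doing (and possibly re-botching) the work that was already fine.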

4. Prioritize predictability over surprise

It's more important for the agent's behavior to be predictable than it is for it to be "smart." Users need to be able to trust the system.

Wrapping up

We're not saying fully autonomous agents will never work. There are certainly domains where they make sense. But for most everyday business problems—where mistakes have consequences and context matters—we think the sweet spot is AI that works with people, not AI that tries to replace them.

The goal isn't to build systems that do everything automatically. The goal is to build systems that help people get their work done more easily.

Sometimes that means doing less automation, not more.