The dirty secret of AI: Most AI “agents” on the market today are not true agents.
AI companies know what the market wants – autonomous AI agents that can work independently on complex tasks. When AI doesn’t need step-by-step instructions, it will be able to take over entire human roles, making it far more valuable to the enterprise.
The problem is, real agents are hard to build and deploy. So AI companies have rebranded basic AI automations as “agents,” letting them charge premium pricing before the tech is ready.
So if you’re an AI leader wondering why you haven’t deployed “hundreds of AI agents” like your competitors, let’s set the record straight:
Most companies don’t have real agents deployed yet, and most aren’t ready to deploy them. Here’s what an agent really is, and what most companies are using instead.
The definition of an AI agent
A true AI agent is an autonomous, goal-driven system that senses triggers, selects and orchestrates tools, applies stored context, evaluates its own outputs, and decides how to achieve its goal without explicit step-by-step instructions.
That’s a long definition. So let’s break those pieces down:
- Sensing/triggering: The agent activates in response to real-world signals or events (e.g., a new report being published, an email coming in, a form being submitted, etc.)
- Decision-making and planning: The agent is given access to systems and a goal to accomplish, and it decides how to produce the assigned output with the access it has.
- Acting with tools and memory: The agent learns how to use a set of tools, applies the business logic it’s been given about those tools, and evaluates its own outputs against that business logic.
In short: rather than telling an AI system “search this news feed, summarize all articles related to AI, and send it in a bulleted list to this Slack channel,” you simply specify a goal: “Give me a daily update on AI news.” The agent is making choices about achieving an outcome, not executing on steps.
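The goal-driven loop described above can be sketched in a few lines. This is a minimal, illustrative skeleton, not a production agent: the names (`fetch_headlines`, `summarize`, `post_to_slack`) are hypothetical stand-ins, and the `plan` and `evaluate` functions are placeholders where a real agent would make LLM calls.

```python
# Minimal sketch of the sense -> plan -> act -> evaluate loop.
# All tool names are hypothetical stand-ins; a real agent would back
# plan() and evaluate() with LLM calls rather than hardcoded logic.

def fetch_headlines():
    # Sensing/triggering stand-in: new articles appear in a feed.
    return ["OpenAI ships new model", "Stocks rally", "AI chips in demand"]

def summarize(items):
    # Judgment stand-in: keep only AI-related items, then condense.
    ai_items = [item for item in items if "AI" in item]
    return " | ".join(ai_items)

def post_to_slack(text):
    # Action stand-in: deliver the result.
    return f"posted: {text}"

TOOLS = {"fetch": fetch_headlines, "summarize": summarize, "post": post_to_slack}

def plan(goal):
    # Planning stand-in: the agent chooses which tools to chain toward
    # the goal, rather than following a hardcoded recipe.
    if "AI news" in goal:
        return ["fetch", "summarize", "post"]
    return []

def evaluate(goal, output):
    # Self-check stand-in: did the output actually cover AI news?
    return "AI" in output

def run_agent(goal):
    result = None
    for tool_name in plan(goal):
        tool = TOOLS[tool_name]
        result = tool(result) if result is not None else tool()
    assert evaluate(goal, result), "agent output failed its own check"
    return result

print(run_agent("Give me a daily update on AI news."))
```

The point of the sketch is the shape, not the stubs: the caller supplies only a goal, and the system decides which tools to invoke and checks its own work.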
Most companies have built AI systems that just execute on steps, for three reasons:
- True agents are hard to build with a high level of accuracy, due to a problem called “error compounding”: every time an agent makes an LLM call or relies on a chained “sub-agent” in the system, the possibility of failure increases. Without step-by-step instructions and strict guardrails hardcoded in, you’re introducing many failure points that build on each other.
- Coding is the easy part; imbuing human judgment is the challenge. People are good at subtle synthesis – e.g., understanding a manager’s communication quirks and adapting to them. 90% of the work of building agents is architecting these heuristics and applying human standards.
- Agents require a high tolerance for risk. The complexities listed above make agents fragile and error-prone if not built right and carefully controlled. So giving them full autonomy is not something many companies are currently willing to do.
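The error-compounding point above is easy to make concrete with arithmetic. If each step in a chain succeeds independently, end-to-end reliability is the per-step rate raised to the number of steps. The 95% per-step figure below is an illustrative assumption, not a benchmark:

```python
# Error compounding: end-to-end reliability of a chain of steps,
# assuming each step succeeds independently at the same rate.
# (95% per-step success is an illustrative assumption, not a benchmark.)

def chain_reliability(per_step: float, steps: int) -> float:
    return per_step ** steps

for n in (1, 5, 10, 20):
    print(f"{n:2d} steps: {chain_reliability(0.95, n):.1%} end-to-end success")
```

Even at 95% per step, a 10-step chain succeeds only about 60% of the time, and a 20-step chain about 36% – which is why every extra LLM call or sub-agent matters.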
TL;DR: Most companies lack the budget or technical skills to build a truly autonomous agent that can satisfy leadership’s risk appetite. Instead, they usually pursue two other paths.
What most companies have instead of agents
Most companies take one step back and remove full autonomy from their AI workflows, opting instead for more fail-safe options. They’re lower risk and lower reward, but more practical given most companies’ resources and risk tolerance.
1. AI automations
This is classic “if this, then that” logic, now enhanced with LLMs that can read and summarize unstructured data.
Using the example above, this looks like: “If an article has AI in the headline, then send me the link in Slack.”
These automations follow deterministic, pre-set rules with virtually no independence. They may leverage LLM capabilities, like summarizing a news article, but only inside tightly defined steps.
These are low-risk and great for accomplishing repetitive tasks in a repeatable way, but not agentic.
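The “if this, then that” logic above can be sketched directly. In this illustrative example, `llm_summarize` and `send_to_slack` are hypothetical stand-ins; the key property is that the LLM is confined to one fixed slot inside deterministic rules:

```python
# A classic "if this, then that" automation, with an optional LLM step
# confined to one slot. llm_summarize and send_to_slack are hypothetical
# stand-ins for an LLM API call and a Slack integration.

def llm_summarize(text):
    # Stand-in for an LLM call used only inside a fixed, pre-set step.
    return text[:60]

def send_to_slack(message):
    return f"sent: {message}"

def automation(article):
    # Rule: IF the headline contains "AI", THEN summarize and post the link.
    if "AI" in article["headline"]:
        summary = llm_summarize(article["body"])
        return send_to_slack(f"{article['headline']} - {summary} ({article['url']})")
    return None  # no rule matched; the automation never improvises

print(automation({"headline": "New AI model released",
                  "body": "A lab announced a new model today.",
                  "url": "https://example.com/article"}))
```

Note what’s missing: there is no planning, no tool selection, and no self-evaluation. If no rule matches, nothing happens.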
Common example of hype: Some leaders refer to their Custom GPTs as “agents,” but they’re really automations with light agentic features. While they can connect to other tools via APIs to take actions, they are chat-initiated rather than automatically triggered, and follow a very specific set of custom instructions to produce an output.
2. Agentic workflows
Agentic workflows are a hybrid between an automation and a true agent, where an AI system is given one clearly defined goal and a constrained toolset to accomplish it with. The AI exercises some judgment on how to achieve the goal, but does not set its own objectives.
For example, an agentic workflow might route a customer service ticket to the right CS person by analyzing the level of urgency. It has some decision-making power over the outcome, but only when it comes to a specific variable.
With this moderate autonomy comes a moderate risk of inconsistency, but in a scope that is small and controllable. Agentic workflows do require more effort than automations, though: they must be wired carefully to ensure stable outputs.
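The ticket-routing example above shows the hybrid shape well: the model judges exactly one variable (urgency), while the routing itself stays deterministic. In this sketch, `classify_urgency` is a hypothetical stand-in for an LLM classifier constrained to two labels:

```python
# An agentic workflow: AI judgment over ONE variable (urgency),
# deterministic routing everywhere else. classify_urgency is a
# hypothetical stand-in for a constrained LLM classifier.

ROUTES = {"high": "senior-cs-queue", "normal": "cs-queue"}

def classify_urgency(ticket_text: str) -> str:
    # Stand-in judgment call; a real workflow would use an LLM here,
    # constrained to return only "high" or "normal".
    urgent_words = ("outage", "down", "urgent", "cannot log in")
    return "high" if any(w in ticket_text.lower() for w in urgent_words) else "normal"

def route_ticket(ticket_text: str) -> str:
    urgency = classify_urgency(ticket_text)
    # Guardrail: any unexpected label falls back to the default queue.
    return ROUTES.get(urgency, ROUTES["normal"])

print(route_ticket("Our whole site is down, urgent!"))
print(route_ticket("How do I change my billing email?"))
```

The guardrail on the last line is the “wiring” the paragraph above refers to: even if the classifier misbehaves, the workflow can only ever route to one of two known queues.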
Common example of hype: Microsoft Copilot Studio markets its outputs as agents … but they’re really agentic workflows. They are triggered by events, they can orchestrate a set of tools, and they have access to business logic through the Microsoft ecosystem, but they have limited autonomy inside a bounded business task.
How to choose the right AI solution
You don’t need agents to be competitive right now, because your competitors don’t have them either. Instead, grade your use case on the following criteria:
- Risk. How risky is it to let AI handle the decision-making in this task? For example, if you hand over data analysis to an agent, you’re setting strategy under the assumption that it has an accurate understanding of the numbers. It might be a lot less risky to have it simply summarize key trends.
- Effort. Building an agent will require deep technical expertise and/or a ton of money. On the other hand, you can start building automations for $20/month. So what level of investment are you willing to make?
- Reward. What do you stand to gain? Think bigger than reduced headcount. If automating a task will help you do more business or increase revenue/margins, you have a case for a big investment. If it’s going to save one person 2 hours a week, you don’t.
All of these pieces work together. If there’s a high reward, it’s likely worth a higher risk.
How leaders should be thinking about this
Agents are the “shiny object” of the moment, but automations and agentic workflows are the source of most ROI today. They can compress hours of work into minutes without exposing the company to high-stakes risk, and they’re faster, easier, and cheaper to deploy.
My advice to leaders trying to cut through the noise:
1. Beware the promise of agents. Anyone who tells you their company is run by agents doesn’t know what an agent is. Any software that claims to build you agents should be graded against the definition we spelled out. And any solution should be chosen because it meets your risk/reward/effort tolerance, not your board’s or PR team’s wishlist.
2. Don’t engage in magical thinking. Agents are too often talked about as ambiguously defined, smart human replacements. Every week, I get pitched to build multi-agent systems that can accurately handle whole parts of a business. That’s just not realistic with how fallible these systems are today. Root your ideation in existing capabilities, not promised ones.
3. Know that there are very few instances where you need an agent right now. High-autonomy agents are currently custom builds, and these only make sense to invest in when the ROI and risk profile align. Set your sights lower and win faster with well-designed automations and agentic workflows that produce immediate efficiency gains.