Published Mar 7, 2026

Correct, But Wrong: The AI Myopia Problem

Myopia is more insidious than hallucination. In myopia, the output passes checks that instill confidence.

Keith Brisson


[Image: An artificial being working at a table while humans talk in a separate room]

In this article, I propose adding a new class of AI tool error to our lexicon: myopia.

Myopia, the inability to see things that are far away, is meant to complement hallucination and slop, and to be distinguished from them. It is a symptom of systematic deficiencies in how AI tools are incorporated into workflows, and it is insidious. Myopia is frequently the result of "unknown unknowns," affecting both human workers and AI systems, and it is challenging to detect and diagnose. Solving myopia is one of the greatest challenges and opportunities in AI product development, perhaps the greatest.

An Example

Imagine a hypothetical feature being developed for an enterprise software product. A ticket created in Linear says "Add access control," and it's assigned to Cody, the team's coding agent. Suppose for the sake of argument that Cody is built on Claude Code or some other state-of-the-art stack; it doesn't matter which.

Cody is absolutely capable of implementing this ticket. It has access to the codebase in GitHub. It is powered by LLMs that have been trained on countless examples of auth systems in similar products. So Cody enthusiastically gets to work, choosing to implement an OAuth flow with role-based access control. It writes the code, runs its linter and compiler, and iterates. It tests the code in a sandbox and even loads the app in a browser. It works.

It works, but it's wrong. Cody built the wrong thing.

The team met a week prior and the PM decided that SAML and attribute-based access control are needed. So the code is worthless and must be redone. In fact it's not just worthless; the code has negative value because its existence is a distraction for the team and delays a proper implementation.

Now imagine that instead of Cody, the ticket had been assigned to a junior engineer—let's call her Cayla—who also wasn't in that meeting. She would have made the same choice. OAuth is the industry standard; it's what she used at her last company. She would have written the code, tested it, and felt confident. Looks good.

But then a senior engineer catches it in review. He was in the meeting. "We actually decided on SAML," he says, and shares the notes. Cayla immediately understands, not because she made an error, but because she was missing context she didn't know she was missing. With that context, she can do the job correctly.

Cody had no such senior engineer. No one shared the meeting notes. And so the rework loop began. The rework loop of AI is a real cost. It happens as developers use AI iteratively in tools like Cursor. It happens in asynchronous agentic coding workflows. It happens frequently enough to instill distrust in developers trying to get stuff done: they must constantly ask, "Should I roll the dice and try using AI for this, or just do it myself?" But Cody's failure was not a hallucination. Nor was it any of the other typical AI quality issues we find with individual models. It was myopia.

The Model is Just the Start of AI Quality Issues

"AI" products still produce poor-quality output frequently. This is no secret. In fact, it's become somewhat in vogue at this point in the hype cycle to be a vocal critic of AI quality and exude cynical wisdom. But the reality is that the definitions of quality and correctness are nuanced, and progress is being made in somewhat non-obvious ways.

AI quality issues arise from many sources: the models themselves, the data being fed to them, the model prompting and configuration, and the software systems that turn individual model calls into agentic behavior.

Over the past several years, the prominence and importance of errors in each of these components has varied. Initially, models were bad at even simple tasks, like repeating ID numbers found in their input. Then hallucinations became the biggest issue when models were asked to produce general knowledge and happily spat out plausible-sounding-but-wrong facts. Hallucination was jarring precisely because it had no human analogue: people don't confidently fabricate facts from thin air the way early models did. It felt alien.

Over time, the models themselves have become larger, better trained, and more reliable. Thus the components around the model have become more important. Hallucinations have not disappeared, but they have become far less common and far less severe. The bigger quality challenge has shifted.

Making Agents Work: The Power of Being Told You're Wrong

AI coding workflows lead the way in quality, impact, and adoption in the workplace because they give models something that other workflows frequently lack: the ability to check for correctness.

Leading-edge AI coding workflows are agentic; they give models the chance to plan their overall strategy, take actions one step at a time, and iterate towards improvement. Crucial in this loop are the linter and the compiler: tools that check syntax, formatting, and type correctness.

Agentic coding systems use the same workflow humans do. They spot their own errors and fix them. The results are extraordinary. But Claude Code and GitHub Copilot, despite producing amazing results compared to even six months ago, share a nasty habit: they produce output that is locally correct but globally incorrect.
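That check-and-iterate loop can be sketched in a few lines. Everything below is illustrative, not any real tool's API: `run_checks` and `revise` are stand-ins for a real linter/compiler pass and the model's fix step. Note what the loop guarantees and what it doesn't: passing means the artifact is locally correct, and says nothing about whether it was the right thing to build.

```python
# Minimal sketch of an agentic "generate, check, revise" loop.
# All names here are hypothetical stand-ins, not a real agent API.

def run_checks(code: str) -> list[str]:
    """Stand-in for a linter/compiler/test suite: returns error messages."""
    errors = []
    if "TODO" in code:
        errors.append("lint: unresolved TODO")
    if not code.endswith("\n"):
        errors.append("format: missing trailing newline")
    return errors

def revise(code: str, errors: list[str]) -> str:
    """Stand-in for the model's revision step: address reported errors."""
    code = code.replace("TODO", "done")
    if not code.endswith("\n"):
        code += "\n"
    return code

def agent_loop(code: str, max_iters: int = 5) -> tuple[str, bool]:
    """Iterate until local checks pass, or give up after max_iters.

    A True result means the output is *locally* correct; whether it was
    the right thing to build (global correctness) is outside this loop.
    """
    for _ in range(max_iters):
        errors = run_checks(code)
        if not errors:
            return code, True
        code = revise(code, errors)
    return code, False

result, ok = agent_loop("x = 1  # TODO")
```

The feedback signal here is entirely internal to the artifact, which is exactly why this loop cannot catch myopia: no check in it consults the team's decisions.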

What is Myopia?

In myopia, the input (to the model, the agent, or the agentic system) is incorrect for the underlying task the user or company needs to accomplish.

Myopia is more insidious than hallucination. In myopia, the output passes checks that instill confidence. The output compiles. It passes tests. It looks correct. And factually, when looking at it, the output generally is correct in isolation.

Myopia can really only be caught by critiquing the input itself for missing or incorrect context. And even those reviewing the input and output might not have sufficient knowledge or context to correctly make that judgment.

In the Cody and Cayla example, a junior engineer who wasn't in the meeting would see the PR and assume it's correct. The code looks like what she would have written. That junior engineer would have seen the tests pass, tried the code, and felt confident in the work. Looks good. But she didn't know what she didn't know.

The Unknown Unknowns

Myopia is the failure to see or be aware of facts that change the nature of the task at hand. In Cody's case, the AI was unaware of what it was unaware of.

This is the "unknown unknowns" problem. The agent didn't know about the meeting. It didn't know it was missing context, so it didn't think to ask. The error isn't visible from inside the frame; you can only see it from a vantage point that has the full picture. And in most organizations, no single person has the full picture either.

The resulting state is incredibly challenging: the agent produces work that is indeed correct given the information available at the time, but wrong given information beyond its field of view.

Myopia is a Human Problem, Too (And That’s Actually Good News)

Unlike hallucination, myopia is not alien. It is deeply, recognizably human. People don't always know what they don't know. Individual engineers are not in every meeting. No single human can read all the emails and the docs. So on their own, humans are very prone to producing work that looks correct when standing on its own but fails to fit into what the team and the broader organization needs.

The difference is that human teams have spent decades developing strategies to mitigate it. We train junior employees to ask more questions. We review each other's work. We have sync meetings to ensure alignment. We hire good managers who can look out for alignment issues between their employees, peer teams, and broader organizations. We bring people into the meetings where decisions are made. We share context before assigning tasks.

The senior engineer in Cayla's story isn't just a code reviewer, he's a context conduit. His value in that moment wasn't technical; it was organizational. He knew what had been decided, and he made sure the person doing the work knew too.

Agents like Cody need the same thing. Not a smarter model, a better manager. One who ensures that before work begins, the agent has access to the decisions, discussions, and context that shape what "correct" actually means.

In the AI world, myopia is a structural and system problem. Smarter models alone cannot overcome it. New infrastructure is required: ways of connecting agents and AI coworkers with the information and background they need to make better decisions.

The good news is that the solution space is not a mystery. It is, in many ways, already mapped. We just need to build it for AI.
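One shape such infrastructure could take is a briefing step that runs before work is assigned: gather the recorded decisions relevant to a ticket and hand them to the agent as context. The sketch below is entirely hypothetical; a real system would use retrieval or search over meeting notes and docs rather than keyword matching, and none of these names come from an actual product.

```python
# Hypothetical "context conduit": brief the agent before it starts work.
from dataclasses import dataclass, field

@dataclass
class Decision:
    topic: str     # what the decision was about
    summary: str   # what was decided

@dataclass
class Ticket:
    title: str
    context: list[str] = field(default_factory=list)

def brief_agent(ticket: Ticket, decisions: list[Decision]) -> Ticket:
    """Attach any recorded decision whose topic appears in the ticket title.

    Keyword matching is a stand-in; a real system would use retrieval
    over meeting notes, docs, and chat history.
    """
    for decision in decisions:
        if decision.topic.lower() in ticket.title.lower():
            ticket.context.append(decision.summary)
    return ticket

decisions = [
    Decision("access control",
             "Use SAML with attribute-based access control (per PM, last week's meeting)."),
]
ticket = brief_agent(Ticket("Add access control"), decisions)
```

In the Cody story, this step is the automated equivalent of the senior engineer sharing the meeting notes before the work begins instead of during review.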

Conclusion

My team and I have been studying AI coding workflows carefully. We will share our results more formally soon, but one thing is clear: AI fails in ways that closely mirror how humans fail when placed in similar situations.

Good human managers give their employees agency. They bring them into meetings where decisions are made. They share context and background before assigning tasks.

The next big breakthrough in AI quality won't come only from better models. It will come from better management: letting AI coworkers participate fully in teams, just as encouraging human employees to participate tends to improve their work. Both need a seat at the table.

Keith Brisson

Kinelo


Designing the future of work