As an industry we need to be bolder and more ambitious in bringing truly transformative AI products to market. Rather than hooking up language models to conventional SaaS products, we need to be thinking about how we really want to interact with technology in the workplace and build towards that.
Keith Brisson
Feb 21, 2026

There's been a marked shift in vibes in the software and AI industries over the past few months. I sense it in my colleagues, my friends, and in myself. There's a mixture of optimism and excitement, but at the same time fear and uncertainty.
I will write about the fear and uncertainty in a future post. It's a visceral feeling and deserves a deep dive and a nuanced take. But today I want to write about the optimism: an inflection point in AI software and in how we interact with technology.
January was a turning point, not in what AI could do, but in how people in the broader community started to imagine it should work.
OpenClaw, the fastest growing open source tool of 2026, captivated both technical and non-technical audiences with its ability to feel natural and, in many ways, magical. It has inspired new visions of working alongside computers in people who might have viewed AI software in a more restricted light. It was a "holy shit, AI might just become real" moment for many people as they watched it actually take action on real data in their lives.
OpenClaw is an example of a new kind of AI agent that is persistent, with memory and the ability to take action through connections to tools that humans use every day. Bold experimenters are giving it its own email address, login credentials to SaaS systems, and connections to business-critical applications. It is able to work across those connections to accomplish goals, both on demand and over time. The connected workflows this enables are substantially more impactful than the point solutions AI software has delivered so far.
Perhaps most importantly, OpenClaw is designed to be treated like a partner. It's a marked departure from the command-and-response chatbots deployed so far. OpenClaw starts to look like a coworker if you blur your eyes and imagine it working reliably and effectively (which it does not; not yet). It feels less like a tool and more natural.
Meanwhile, the big players in the space are shifting their terminology too, a sign of their shifting focus. When OpenAI released Frontier, a platform for building, managing, and deploying AI agents with shared context and workflows, it started using a different term in its marketing and docs than mere "agent": "AI coworker". Not only is OpenAI positioning Frontier as an orchestration layer for these AI coworkers (versus mere plumbing), it just hired the creator of OpenClaw. They're serious about AI coworkers and assistants, for both work and personal use cases.
So what’s going on? Where have we been and where are we going?
January 2026 did not bring a breakthrough in technology. OpenClaw is impressive in its impact, but its breakthrough is not technical. It demonstrates a UX far more closely aligned with how people naturally expect software labeled "artificial intelligence" to behave than anything they have experienced thus far.
I was having drinks with a friend of mine recently. We were discussing the state of the industry, this inflection point, and how a persistent, stateful, and goal-oriented assistant connected to data could actually gain traction where existing offerings haven’t.
My friend remarked that people wanted, and were promised, the "Computer" from Star Trek. Yet what they've been given as AI are SaaS apps with embedded chatbots. He suggested that OpenClaw is closer to the Star Trek Computer people have been yearning for than anything seen before, which is why it captures their imaginations.
I'm a nerd and know way too much about Star Trek, so I love using it for metaphors. But I disagreed with my friend that people want the Computer from the show. They want something more.
In Star Trek: The Next Generation, the ship's "Computer" can be spoken to naturally and solve incredibly complex problems with reasoning that far outstrips human capability. But alongside Computer is the character Data, an android who looks and acts human (aside from some quirks like refusing to use contractions). Data, like Computer, is brilliant and natural to work with. But he has agency in a way Computer does not. He is a member of the crew, and he has real impact.
I believe that if you asked the average person what form "AI" would take at work, you'd get answers a lot closer to Data than to Computer. True "artificial intelligence" looks like entities that communicate naturally with us, with memory, agency, and autonomy.
The reality is that AI has underwhelmed for several years. Partly this is due to the immaturity and underperformance of the underlying technology. But I'd argue that both large companies and the multitude of startups branding themselves as "AI" have lacked transformative vision and the willingness to push the boundaries of how we interact with computers and technology. They've been trying to meet the market where it is today because they recognize that they need to do something incremental to gain traction in a reasonable timeframe.
While the industry promised "AI transformation", its conservative vision led to incremental ambition. The result was overpromising in vision and underwhelming in execution.
In the past few years, what we've mostly received are systems that look like the Star Trek Computer. First were "AI copilots", natural language interfaces stuck in their boxes, with little agency, autonomy, or understanding of what's happening outside their isolated domain.
Then came "AI agents." The core insight behind AI agents is that if you repeatedly call an AI model in a loop, the model can break down tasks into achievable bits. Through this plan-and-loop strategy, AI agents become capable of multi-step tasks, and once able to connect to APIs and tools, show a sense of emergent behavior.
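The loop itself is simple enough to sketch. Below is a minimal, hypothetical illustration of the plan-and-loop pattern; `call_model` and `run_tool` are scripted stubs standing in for real model and tool calls, not any vendor's API:

```python
# Minimal sketch of the plan-and-loop pattern behind "AI agents".
# call_model and run_tool are deterministic stubs so the control
# flow is visible end to end without a real LLM or real tools.

def call_model(goal, history):
    """Pretend model: proposes one action per turn, then declares done."""
    steps = ["search_docs", "draft_summary", "DONE"]
    return steps[len(history)]

def run_tool(action):
    """Stand-in for a real tool or API call (search, file I/O, etc.)."""
    return f"result of {action}"

def agent_loop(goal, max_steps=10):
    """Repeatedly ask the model for the next step until it says DONE."""
    history = []
    for _ in range(max_steps):
        action = call_model(goal, history)
        if action == "DONE":
            break
        history.append((action, run_tool(action)))
    return history

print(agent_loop("summarize the design docs"))
```

The emergent behavior described above comes from the same structure: swap the stub for a real model, hand it real tools, and the loop can decompose tasks it was never explicitly programmed for.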
But "agent" is a misnomer. The word implies "agency." As of early 2026, AI agent systems have not performed well enough to be entrusted with agency on arbitrary tasks. They're having an impact in isolated cases, but haven't been ready to be truly unleashed in the workplace with abstract goals and real autonomy.
Modern AI agents are far from Data. They're much closer to Computer. And if we build AI copilots or agents into each software product but still rely on humans to serve as connective tissue between these tools, are we really utilizing human potential to its highest and best?
I run a very small startup, Kinelo, my second AI+ML company over the past decade. My tasks range from recruiting to paying invoices to product decisions. I could use help with every aspect, and I do use AI tools extensively, but I don't feel nearly as transformed as I should be.
Here's a simple example. Recently in a team meeting, we discussed our working agreements while preparing to onboard new hires. We verbally reached consensus on changes to a draft document, and I committed to making another revision and distributing it for async feedback.
Updating a document based on a meeting discussion and distributing the new version is a simple, common task; AI should help, right? The edits were clearly understandable from the transcript. "AI tools" exist for parts of this. Meeting transcription and summarization are ubiquitous. I can copy a transcript into Claude and have it produce updated text. But this simple task still requires many manual steps: connecting Claude to Google Drive, typing a prompt, copy/pasting into Google Docs, posting a link in Slack, and closing the Linear ticket. Not hard, but not automatic. And not that far from what I did before.
What I really want is my technology to be proactive. I want Data sitting in the meeting, ready to offer to take the task off my hands. I want technology in my workflow like a coworker who never sleeps or loses focus, always looking for opportunities to move the team forward.
The same problem exists everywhere. In product management, I use AI to consolidate user feedback, draft specs, and prototype UIs. But there's no artificial PM coworker I can tell: "Explore this feature area, prototype it, test it lightly, and spec it for the team."
Across the board, I'm still the connective tissue. There's no AI coworker I can delegate to completely the way I would with a new full-time employee. Instead there are many individual tools, each with some AI enhancement. But I'm still connecting the dots.
The closest we have today is in software development. It's now possible to take a task in Linear or Jira, assign it to something that appears next to humans in the list of assignees, and later receive a result. It feels like assigning a ticket to a remote junior engineer you've never met on video.
Autonomous agents like Cursor Cloud Agents, Devin, and GitHub Copilot spin up when you assign them an issue. They read the description, produce a PR, test their code, and ask for review. You can comment requesting changes. This is similar to working with an engineer or intern.
But even these don't feel like true coworkers. Software teams naturally discuss problems, get on calls, write and iterate on specs. As engineers work, they ask questions when things are unclear, reach out to teammates, and remember decisions and reasoning over time. Human workers have memory and communication ability, so their output becomes more consistent and requires less rework.
AI tools at parity with human engineers don't exist today. Even when they excel at isolated code or algorithms, they fall apart in ambiguity.
A true AI coworker would:
- join discussions and ask questions when things are unclear
- remember decisions and their reasoning over time
- work across the tools a team already uses instead of inside one box
- proactively look for opportunities to move the team forward
Most of all, a true AI coworker wouldn't feel like a tool. It would feel more natural, more human. Like Data.
Many of the building blocks exist, or are actively being worked on. I'll discuss this in a future post, but here’s a rundown:
Raw intelligence: Raw intelligence is largely here. LLMs can reason, judge, and transform data at or above human level for many isolated tasks. They absolutely fall short in some laughable ways (“should I walk or drive to the car wash 50 meters away?”), but on a broad array of benchmarks across a multitude of domains they consistently score above humans.
Communication: We're partway there on connecting people to the reasoning engines. Natural language text interfaces are ubiquitous, and voice interfaces are emerging. Connection to where people work (Slack, email, meetings) exists through integrations, point-in-time retrieval like RAG, and protocols like MCP. This is more true for distributed asynchronous teams that primarily communicate digitally, and certainly less so for in-person teams in meatspace.
Memory: This is crucially missing. Various AI services have bespoke memory systems, from ChatGPT to Claude to frameworks like the OG LangChain. There are even SaaS services dedicated to providing memory to AI agents. Yet memory remains largely unsolved, and it is not simple. Modeling knowledge is hard even for humans, and organizational knowledge bases are fragmented, unstructured, and never complete. In many ways, what is so compelling and magical about OpenClaw and related experiences is the memory it builds over time, which makes it appear to get smarter.
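To make the idea concrete, here is a toy sketch of session-spanning memory. It illustrates the principle only and reflects no product's actual design; naive keyword overlap stands in for the embedding-based retrieval real systems use:

```python
# Toy illustration of persistent agent memory: facts written in one
# "session" survive to the next because they live in a file, not in
# the model's context window. Not any product's actual design.
import json, os, re, tempfile

class MemoryStore:
    def __init__(self, path):
        self.path = path

    def _load(self):
        if not os.path.exists(self.path):
            return []
        with open(self.path) as f:
            return json.load(f)

    def remember(self, fact):
        facts = self._load()
        facts.append(fact)
        with open(self.path, "w") as f:
            json.dump(facts, f)

    def recall(self, query):
        # Naive keyword overlap; real systems use embeddings and ranking.
        tokens = set(re.findall(r"\w+", query.lower()))
        return [fact for fact in self._load()
                if tokens & set(re.findall(r"\w+", fact.lower()))]

path = os.path.join(tempfile.mkdtemp(), "memory.json")
store = MemoryStore(path)
store.remember("the team ships releases on Thursdays")

# A later "session" reopens the same file and still remembers:
later = MemoryStore(path)
print(later.recall("when do we ship releases?"))
```

The hard part, as noted above, isn't the storage; it's deciding what to write down, how to structure it, and how to retrieve the right memory at the right moment.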
Coordination: Coordination between agents is also crucially lacking. There are cases of agents being orchestrated together. Deep research tools in ChatGPT, Claude, and Gemini use a master agent that creates research topics, delegates to individual agents, then coalesces results.
On the more advanced side are coding orchestration agents. Recently there was hype around GasTown. Claude Code has Agent Teams. And various frameworks have introduced agent critics, students, and other collaborative inter-agent functions.
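The master/delegate/coalesce pattern these tools use can be sketched in a few lines. This is a hypothetical illustration with deterministic stubs in place of real model calls, not how any of these products are actually implemented:

```python
# Sketch of the orchestrator pattern: a "master" splits a goal into
# subtopics, fans them out to worker agents, and coalesces results.
# plan_subtopics and research_worker are stubs for real model calls.
from concurrent.futures import ThreadPoolExecutor

def plan_subtopics(goal):
    """Stand-in for a planning model call that decomposes the goal."""
    return [f"{goal}: background", f"{goal}: current state", f"{goal}: outlook"]

def research_worker(subtopic):
    """Stand-in for a sub-agent doing retrieval and summarization."""
    return f"findings on '{subtopic}'"

def coalesce(results):
    """Stand-in for the synthesis step that merges worker output."""
    return "\n".join(results)

def deep_research(goal):
    subtopics = plan_subtopics(goal)
    # Workers run concurrently; pool.map preserves subtopic order.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(research_worker, subtopics))
    return coalesce(results)

print(deep_research("agent coordination"))
```

Within a single tool this structure works well; the open problem is the next paragraph's point, coordinating agents across different tools and domains that share no common orchestrator.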
But at a higher level, agents in broader workflows aren't effectively coordinated today, especially when working across multiple domains or tools. The orchestration problem is very much unsolved, but that doesn't mean it's not solvable.
The need for connectivity between AI software and the human layer in organizations is now well understood, with efforts like MCP taking one approach to at least fetching point-in-time data. Meanwhile, the zeitgeist has adopted terms like "context graph" to describe a more sophisticated way of modeling data, process relationships, and context within workflows than the simple RAG of the past few years.
But putting together these building blocks into a coherent product with natural UX is both one of the biggest challenges and the biggest opportunities in the next wave of AI deployment. The trick is not merely improving AI models now. It's rethinking how software is deployed into organizations and building the connective tissue to give our future AI co-workers a seat at the table alongside us humans.
Despite the best marketing out there, an AI coworker does not exist. We have “copilots.” We have “agents.” We do not have coworkers.
But the direction is clear. OpenClaw showed millions of people what it feels like when AI actually participates in your life rather than waiting to be prompted, though it’s far too hard to set up, too unpredictable, and too rough to be a true solution today. OpenAI is reorganizing around the concept of AI coworkers. So is Anthropic if you read between the lines on their messaging. The industry is starting to reach for Data instead of settling for Computer.
What's changed is the understanding of where the bottleneck is. It's no longer primarily about making models smarter. The models are smart enough for an enormous range of real work. What's missing is everything around the model: persistent memory, coordination across tools and people, and the connective tissue that turns isolated intelligence into something that behaves like a colleague. That's a systems and software engineering problem, not a model problem. And software engineering problems are solvable.
As an industry we need to be bolder and more ambitious. Rather than hooking up language models to conventional SaaS products, we need to be thinking about how we really want to interact with technology in the workplace and build towards that. It’s what excites me, and it's the problem my team at Kinelo is working to solve; I hope and believe others will too.
We asked for Data. We got Computer. From an industry that wants to change the world yet builds incrementally. The distance between the two isn't a failure; it's an opportunity and a map.
Keith Brisson
Kinelo