When the Chat Rail Becomes the App
For the last couple of years, the default AI design compromise has been obvious:
Keep the product as it is. Add a chat rail on the side. Call it AI.
That compromise made sense when the model was still mostly an assistant. It could summarize, answer questions, maybe automate one short step. The center of gravity of the app still lived in the old interface: forms, dashboards, menus, filters, tabs.
I do not think that compromise survives the age of agents.
Once the AI can actually act, plan, inspect tools, explore options, and come back with choices, the chat rail stops feeling like a feature and starts working as an operating surface. At first it looks like a sidecar. Very quickly, it can become the cockpit.
You can already see this in how the labs describe their own products. OpenAI is no longer talking about Codex as a clever suggestion box. It talks about pairing with Codex or delegating work to Codex in the cloud. Anthropic is even more explicit: Claude Code is framed as doing the bulk of the code-writing while the human shifts toward architecture, product judgment, and orchestration.
That is not autocomplete language.
That is supervisor language.
And once the user becomes a supervisor of agentic work rather than the direct manipulator of every function, the old application shell is no longer the obvious primary interface.
The primary interface becomes the place where:
- intent is expressed
- progress is watched
- branches are compared
- interventions are made
- decisions are ratified
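One way to see why this is a different surface, not a bigger chat box: each of those five responsibilities is a distinct event type flowing through one control surface. The sketch below models that; every name in it is hypothetical, not an API from any real product.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class EventKind(Enum):
    INTENT = auto()        # user expresses what they want
    PROGRESS = auto()      # agent reports what it is doing
    BRANCH = auto()        # agent offers alternatives to compare
    INTERVENTION = auto()  # user redirects mid-run
    DECISION = auto()      # user ratifies or rejects a result

@dataclass
class RailEvent:
    kind: EventKind
    payload: str

@dataclass
class ControlSurface:
    """Hypothetical: one log behind the rail, holding all five event types."""
    log: list[RailEvent] = field(default_factory=list)

    def record(self, kind: EventKind, payload: str) -> None:
        self.log.append(RailEvent(kind, payload))

    def events_of(self, kind: EventKind) -> list[RailEvent]:
        # A sidebar only needs INTENT in and PROGRESS out; a primary
        # surface has to render and filter all five kinds.
        return [e for e in self.log if e.kind is kind]
```

The point of the model is the asymmetry: a chat widget only ever needs the first two event kinds, while a cockpit has to surface all five.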
In other words, the chat rail starts competing to become the main rail.
The side rail was always transitional. It was a good way to smuggle AI into existing applications without rewriting the whole product model. The dashboard stayed intact. The workflow stayed intact. The permissions model stayed intact. The AI sat on the right and waited politely to be summoned.
But behavior changes once users trust the agent with real work.
In the first project or two, the user uses AI.
Soon, in many workflows, the user is supervising AI.
That is the shift most product teams still understate.
In the first mode, the user is still driving and occasionally asking for help.
In the second, the AI is driving most of the route and periodically asking:
- which branch should I keep?
- do you want me to explore all of these options?
- should I do that sequentially or in parallel?
- should I merge these changes or keep them separate?
That is not a sidebar interaction.
That is the application’s main control loop.
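That control loop can be sketched in a few lines: the agent drives through its plan and pauses at checkpoints where it needs the human to pick a branch. Everything here is a hypothetical illustration, not any product's real agent API.

```python
def run_supervised(plan_steps, choose_branch):
    """Hypothetical supervisor loop.

    plan_steps: iterable of (description, branches) pairs, where
        branches is a list of alternatives (empty when the agent
        can proceed without asking).
    choose_branch: callback standing in for the human's answer
        in the rail, e.g. "which branch should I keep?"
    """
    transcript = []
    for description, branches in plan_steps:
        if branches:
            # The agent surfaces options instead of silently picking one;
            # the human's job is ratification, not execution.
            picked = choose_branch(description, branches)
            transcript.append((description, picked))
        else:
            transcript.append((description, None))
    return transcript
```

Note the inversion: the loop belongs to the agent, and the human appears only as a callback. That callback is the entire remaining UI for these steps, which is why it cannot stay in a sidebar.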
So the real design question is no longer “where do we put the chat panel?”
It is: what happens when this rail becomes the main surface of the app?
That question matters because the shift is not only technical. It is interface-level in a way that feels oddly Turing-shaped. Turing’s most famous move was to treat interaction itself as the meaningful boundary. We are now drifting into software where, for many tasks, talking to the system is the system.
That is also why so much fiction suddenly feels product-relevant. Jarvis, Alfred, Jeeves, Friday, Man Friday, Sancho Panza. The imagined role was never “small assistant hidden in the corner.” It was mediator, operator, interpreter, and planner. LLMs are not those characters, but the UX gravity is similar enough that software starts reorganizing around the same role.
The practical implication is simple.
If you are building a user-facing app for the next few years, do not design a static chatbot bolted to the side of a giant legacy UI. Design an adaptable rail that can begin as assistance and widen into the main workspace as trust and capability rise.
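Concretely, "begin as assistance and widen" means the rail's mode is a policy decision, not a fixed layout. A minimal sketch of such a policy follows; the mode names, signals, and thresholds are all invented for illustration, and a real product would tune them per workflow.

```python
from enum import Enum

class RailMode(Enum):
    SIDECAR = "sidecar"   # assistant pinned to the side of the legacy UI
    SPLIT = "split"       # rail and legacy UI share the screen
    COCKPIT = "cockpit"   # rail is the primary surface

def rail_mode(completed_tasks: int, approval_rate: float) -> RailMode:
    """Hypothetical widening policy: grow the rail as trust accumulates.

    completed_tasks: agent runs the user has let finish.
    approval_rate: fraction of agent results the user ratified (0.0-1.0).
    """
    if completed_tasks >= 20 and approval_rate >= 0.9:
        return RailMode.COCKPIT
    if completed_tasks >= 5 and approval_rate >= 0.7:
        return RailMode.SPLIT
    return RailMode.SIDECAR
```

The design choice worth copying is not the thresholds but the shape: the same rail component serves all three modes, so widening is a state change rather than a rewrite.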
My bet is that the sidecar phase will not last long in products where the agent can reliably do real work.
In a meaningful subset of products, the rail is going to become the app.
The next question is what, exactly, should live inside that rail once it takes over. I think the answer is not “more chat.” It is something more structured than that, which I will get into in the next post.
Next: The prompt is not the product
Sources and grounding: OpenAI’s current descriptions of Codex and ChatGPT agent, plus Anthropic’s descriptions of Claude Code and Claude Code workflows.