Interfaces That Build Themselves


For most people, the face of AI is a chat window. You type a prompt, the AI responds, and the cycle repeats. This conversational model—popularized by tools like ChatGPT—has made AI approachable and flexible. Yet as soon as your needs become more complex, the cracks start to show.

Chat excels at simple tasks. But when you want to plan a trip, manage a project, or collaborate with others, you find yourself spelling out every detail, reexplaining your intent, and nudging the AI toward what you actually want. The system doesn’t remember your preferences or context unless you keep reminding it. If your prompt is vague, the answer is generic. If you forget a detail, you’re forced to start over. This endless loop is exhausting and inefficient—especially when you’re working on something nuanced or ongoing.

The thing is, what most of us are dealing with right now is really just “Type 1” interfaces—conversational ones. They’re flexible, sure, but they suffer from what we call “prompt effectiveness fatigue.” When planning a complex project or working on something that requires maintaining context across multiple sessions, you have to explain your goals, constraints, and preferences over and over again. It’s functional, but it’s also exhausting.

This got us thinking: What if we could move beyond Type 1? What if interfaces could remember? What if they could think?

The Three Types of Interfaces We’re Actually Building

Here’s what we’ve noticed in our experiments with different AI tools: We’re seeing three distinct types of AI interfaces emerge, each with a different approach to handling complexity and shared context.

Type 1: Conversational Interfaces

This is where most of us live right now—ChatGPT, enterprise search systems built on retrieval-augmented generation (RAG), basically anything that requires you to capture your intent and context fresh in every prompt. The flexibility is great, but the cognitive load is brutal. Every conversation starts from zero.

We tested this recently with a complex data analysis project. Each time we returned to the conversation, we had to reestablish the context: what dataset we were working with, what visualizations were needed, what we’d already tried. By the third session, we were spending more time explaining than working.

Type 2: Coinhabited Interfaces

This is where things get interesting. GitHub Copilot, Microsoft 365 copilots, smaller language models embedded in specific workflows—these systems have ambient context awareness. When we’re using GitHub Copilot, it doesn’t just respond to our prompts. It watches what we’re doing. It understands the codebase we’re working in, the patterns we tend to use, the libraries we prefer. That ambient context awareness means we don’t have to reexplain the basics every time, which significantly reduces the cognitive load. But here’s the catch: When these tools misinterpret environmental cues, the misalignment can be jarring.

Type 3: Generative Interfaces

This is where we’re headed, and it’s both exciting and terrifying. Type 3 interfaces don’t just respond to your prompts or watch your actions—they actually reshape themselves based on what they learn about your needs. Early prototypes are already adjusting page layouts in response to click streams and dwell time, rewriting CSS between interactions to maximize clarity and engagement. The result feels less like navigating an app and more like having a thoughtful personal assistant who learns your work patterns and discreetly prepares the right tools for each task.
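
To make that concrete, here is a minimal TypeScript sketch of how dwell time and clicks could feed a layout decision. Every name in it (SectionStats, recordDwell, reorderByEngagement) is invented for illustration; a production system would more likely hand these signals to a model that rewrites layout or styling rather than rely on a simple sort.

```typescript
// Illustrative only: track per-section engagement and reorder sections.
interface SectionStats {
  sectionId: string;
  totalDwellMs: number; // cumulative time the user spent in the section
  clicks: number;       // interactions inside the section
}

const stats = new Map<string, SectionStats>();

function recordDwell(sectionId: string, dwellMs: number): void {
  const entry = stats.get(sectionId) ?? { sectionId, totalDwellMs: 0, clicks: 0 };
  entry.totalDwellMs += dwellMs;
  stats.set(sectionId, entry);
}

function recordClick(sectionId: string): void {
  const entry = stats.get(sectionId) ?? { sectionId, totalDwellMs: 0, clicks: 0 };
  entry.clicks += 1;
  stats.set(sectionId, entry);
}

// Rank sections by engagement so the most-used ones render first.
function reorderByEngagement(sectionIds: string[]): string[] {
  const score = (id: string): number => {
    const s = stats.get(id);
    return s ? s.totalDwellMs + s.clicks * 1000 : 0;
  };
  return [...sectionIds].sort((a, b) => score(b) - score(a));
}
```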

Consider how tools like Vercel’s v0 handle this challenge. When you type “create a dashboard with user analytics,” the system processes this through multiple AI models simultaneously—a language model interprets the intent, a design model generates the layout, and a code model produces the React components. The key promise is contextual specificity: a dashboard that surfaces only the metrics relevant to this analyst, or an ecommerce flow that highlights the next best action for this buyer.
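
We have no visibility into how v0 actually wires its models together, so treat the following TypeScript sketch as a guess at the general shape of such a pipeline: intent interpretation, layout generation, then component generation. Every function and type here is a stand-in, not Vercel’s API.

```typescript
// Hypothetical three-stage pipeline; each stage would call a different model.
interface Intent {
  kind: "dashboard" | "form" | "page";
  entities: string[];
}

interface LayoutSpec {
  regions: { name: string; widget: string }[];
}

async function interpretIntent(prompt: string): Promise<Intent> {
  // In practice: a language model parses the prompt. Stubbed here.
  const entity = prompt.includes("analytics") ? "user analytics" : "overview";
  return { kind: "dashboard", entities: [entity] };
}

async function generateLayout(intent: Intent): Promise<LayoutSpec> {
  // In practice: a design model maps intent to regions and widgets.
  return { regions: intent.entities.map((name) => ({ name, widget: "chart" })) };
}

async function generateComponents(layout: LayoutSpec): Promise<string> {
  // In practice: a code model emits React components from the layout spec.
  return layout.regions
    .map((r) => `<Panel title="${r.name}"><Chart /></Panel>`)
    .join("\n");
}

async function buildFromPrompt(prompt: string): Promise<string> {
  const intent = await interpretIntent(prompt);
  const layout = await generateLayout(intent);
  return generateComponents(layout);
}
```

In this toy version, buildFromPrompt("create a dashboard with user analytics") would return a single analytics panel; the interesting engineering lies in making each stage context-aware enough to deliver that contextual specificity.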

The Friction

Here’s a concrete example from our own experience. We were helping a client build a business intelligence dashboard, and we went through all three types of interfaces in the process. Here’s how each one played out:

Type 1 friction: We used a conversational interface to generate the initial dashboard mockups, and every time we came back to refine the design, we had to reexplain the business context, the user personas, and the key metrics we were tracking. The flexibility was there, but the cognitive overhead was enormous.

Type 2 context: When we moved to implementation, GitHub Copilot understood the codebase context automatically. It suggested appropriate component patterns, knew which libraries we were using, and even caught some styling inconsistencies. But when it misread the environmental cues—like suggesting a chart type that didn’t match our data structure—the misalignment was more jarring than starting fresh.

Type 3 adaptation: The most interesting moment came when we experimented with a generative UI system that could adapt the dashboard layout based on user behavior. Instead of just responding to our prompts, it observed how different users interacted with the dashboard and gradually reshaped the interface to surface the most relevant information first.

Why Type 2 Feels Like the Sweet Spot (for Now)

After working with all three types, we keep coming back to why Type 2 interfaces feel so natural when they work well. Take modern car interfaces—they understand the context of your drive, your preferences, your typical routes. The reduced cognitive load is immediately noticeable. You don’t have to think about how to interact with the system; it just works.

But Type 2 systems also reveal a fundamental tension. The more they assume about your context, the more jarring it is when they get it wrong. There’s something to be said for the predictability of Type 1 systems, even if they’re more demanding.

The key insight from Type 2 systems is that ambient context awareness can dramatically reduce cognitive load but only if the environmental cues are interpreted correctly. When they’re not, the misalignment can be worse than starting from scratch.

The Trust and Control Paradox

Here’s something we’ve been wrestling with: The more helpful an AI interface becomes, the more it asks us to give up control. It’s a weird psychological dance.

Our experience with coding assistants illustrates this perfectly. When they work, the effect is magical. When they don’t, it’s deeply unsettling. The suggestions look so plausible that we find ourselves trusting them more than we ought to. That’s the Type 2 trap: Ambient context awareness can make wrong suggestions feel more authoritative than they actually are.

Now imagine Type 3 interfaces, where the system doesn’t just suggest code but actively reshapes the entire development environment based on what it learns about your working style. The collaboration potential is enormous, but so is the trust challenge.

We think the answer lies in what we call “progressive disclosure of intelligence.” Instead of hiding how the system works, Type 3 interfaces need to help users understand not just what the system is doing but why it’s doing it. The UX challenge isn’t just about making things work—it’s about making the AI’s reasoning transparent enough that humans can stay in the loop.
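
One concrete way to practice that, sketched below in TypeScript, is to attach a plain-language rationale, the observed evidence, and an undo path to every adaptation the system makes. The AdaptationRecord shape is our own illustration, not an existing standard.

```typescript
// Every interface change carries its reasoning and can be reverted.
interface AdaptationRecord {
  id: string;
  change: string;      // what the interface changed
  rationale: string;   // why, in terms the user can evaluate
  evidence: string[];  // the observed signals behind the decision
  revert: () => void;  // the user keeps the final say
}

const adaptationLog: AdaptationRecord[] = [];

function applyAdaptation(record: AdaptationRecord): void {
  adaptationLog.push(record);
  // Surface the rationale instead of changing the UI silently.
  console.info(`Interface change: ${record.change}. Because: ${record.rationale}`);
}
```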

How Generative Interfaces Learn

Generative interfaces need what we think of as “sense organs”—ways to understand what’s happening that go beyond explicit commands. This is fundamentally observational learning: the process by which systems acquire new behaviors by watching and interpreting the actions of others. Think of watching a skilled craftsperson at work. At first, you notice the broad strokes: which tools they reach for, how they position their materials, the rhythm of their movements. Over time, you begin to pick up subtler cues.

We’ve been experimenting with a generative UI system that observes user behavior. Take Sarah, a data analyst who uses our business intelligence platform daily. The system noticed that every Tuesday morning, she immediately navigates to the sales dashboard, exports three specific reports, and then spends most of her time in the visualization builder creating charts for the weekly team meeting.

After observing this pattern for several weeks, the system began to anticipate her needs. On Tuesday mornings, it automatically surfaces the sales dashboard, prepares the reports she typically needs, and even suggests chart templates based on the current week’s data trends.
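
A toy version of that kind of pattern spotting might look like the TypeScript below; the event shape and the 75 percent threshold are assumptions for illustration, not how any particular platform does it.

```typescript
// Find actions a user repeats on the same weekday often enough to pre-load them.
interface UsageEvent {
  userId: string;
  action: string;    // e.g. "open:sales-dashboard" or "export:weekly-report"
  timestamp: Date;
}

function recurringWeekdayActions(
  events: UsageEvent[],
  weekday: number,        // 0 = Sunday ... 2 = Tuesday
  weeksObserved: number,
  threshold = 0.75
): string[] {
  const counts = new Map<string, number>();
  for (const e of events) {
    if (e.timestamp.getDay() === weekday) {
      counts.set(e.action, (counts.get(e.action) ?? 0) + 1);
    }
  }
  // Keep actions seen on most observed weekdays; these become candidates
  // to surface automatically the next time that weekday comes around.
  return [...counts.entries()]
    .filter(([, n]) => n / weeksObserved >= threshold)
    .map(([action]) => action);
}
```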

The system also noticed that Sarah struggles with certain visualizations—she often tries multiple chart types before settling on one or spends extra time adjusting colors and formatting. Over time, it learned to surface the chart types and styling options that work best for her specific use cases.

This creates a feedback loop. The system watches, learns, and adapts, then observes how users respond to those adaptations. Successful changes get reinforced and refined. Changes that don’t work get abandoned in favor of better alternatives.
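
The reinforce-or-abandon half of that loop can be surprisingly simple. In the sketch below, the signal is whether the user kept or undid a change; both that signal and the thresholds are assumptions on our part.

```typescript
// Track how users respond to each adaptation and decide whether to keep it.
interface AdaptationOutcome {
  id: string;
  trials: number;   // how many times the adaptation was shown
  accepted: number; // how many times the user kept it rather than undoing it
}

const outcomes = new Map<string, AdaptationOutcome>();

function recordOutcome(id: string, kept: boolean): void {
  const o = outcomes.get(id) ?? { id, trials: 0, accepted: 0 };
  o.trials += 1;
  if (kept) o.accepted += 1;
  outcomes.set(id, o);
}

// Reinforce adaptations users keep; abandon ones they consistently undo.
function shouldKeep(id: string, minTrials = 5, minRate = 0.6): boolean {
  const o = outcomes.get(id);
  if (!o || o.trials < minTrials) return true; // not enough evidence yet
  return o.accepted / o.trials >= minRate;
}
```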

What We’re Actually Building

Organizations experimenting with generative UI patterns are already seeing meaningful improvements across diverse use cases. A dev-tool startup we know discovered it could dramatically reduce onboarding time by allowing an LLM to automatically generate IDE panels that match each repository’s specific build scripts. An ecommerce site reported higher conversion rates after implementing real-time layout adaptation that intelligently nudges buyers toward their next best action.
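
We haven’t seen that startup’s code, but the basic move of turning a repository’s build scripts into panel definitions might look something like this TypeScript sketch; the PanelSpec format is invented, not any real IDE’s extension API.

```typescript
// Read package.json scripts and turn each one into a panel an IDE could render.
import { readFileSync } from "node:fs";

interface PanelSpec {
  title: string;   // label shown on the generated panel
  command: string; // shell command the panel's button would run
}

function panelsFromPackageJson(path: string): PanelSpec[] {
  const pkg = JSON.parse(readFileSync(path, "utf8")) as {
    scripts?: Record<string, string>;
  };
  return Object.entries(pkg.scripts ?? {}).map(([name, cmd]) => ({
    title: `Run: ${name}`,
    command: cmd,
  }));
}

// Example: panelsFromPackageJson("./package.json") might yield panels for
// this repository's "build", "test", and "dev" scripts.
```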

The technology is moving fast. Edge-side inference will push generation latency below perceptual thresholds, enabling seamless on-device adaptation. Cross-app metaobservation will allow UIs to learn from patterns that span multiple products and platforms. And regulators are beginning to weigh disclosure rules that could treat every generated component as a deliverable requiring provenance logs.

But here’s what we keep coming back to: The most successful implementations we’ve seen focus on augmenting human decision making, not replacing it. The best generative interfaces don’t just adapt—they explain their adaptations in ways that help users understand and trust the system.

The Road Ahead

We’re at the threshold of something genuinely new in software. Generative UI isn’t just a technical upgrade; it’s a fundamental change in how we interact with technology. Interfaces are becoming living artifacts—perceptive, adaptive, and capable of acting on our behalf.

But as we’ve learned from our experiments, the real challenge isn’t technical. It’s human. How do we build systems that adapt to our needs without losing our agency? How do we maintain trust when the interface itself is constantly evolving?

The answer, we think, lies in treating generative interfaces as collaborative partners rather than invisible servants. The most successful implementations we’ve encountered make their reasoning transparent, their adaptations explainable, and their intelligence humble.

Done right, tomorrow’s screens won’t merely respond to our commands—they’ll understand our intentions, learn from our behaviors, and quietly reshape themselves to help us accomplish what we’re really trying to do. The key is ensuring that in teaching our interfaces to think, we don’t forget how to think for ourselves.

