I’ve designed AI assistants — Here’s what actually works


Less than three years ago, OpenAI stunned the world with ChatGPT, its generative AI model. Seemingly overnight, millions of people, from individual consumers to large enterprises, began using it to summarize articles, draft emails, brainstorm ideas, and accelerate their work. It became a practical, smart tool that could take care of tedious tasks and boost our productivity.

That’s why AI assistants are quickly becoming central to product experiences. Users are tired of digging through endless documentation to find a single answer. They’re frustrated with manual data entry, starting from a blank page, and repetitive tasks like sending status updates and reminders. AI assistants can help with all of these things, and companies are recognizing the need for more automation and generative AI capabilities in their own products.

While there’s an obvious demand for AI assistants, designing them isn’t so straightforward. It’s a complex project that requires a dedicated design strategy, thoughtful user flows, and a UI system built specifically for AI’s unique characteristics. In this article, we’ll explore how to design a purpose‑driven AI assistant, from creating reusable UI components to applying best practices that make users want to engage with it.

Starting with the jobs-to-be-done

When you’re designing an AI assistant for a product, you don’t want it to do everything that ChatGPT does. Otherwise, you risk creating an unfocused, bloated feature that tries to solve everything, but doesn’t end up doing anything particularly well.

The assistant should be able to operate within the context of your product, rather than functioning as a general-purpose chatbot. Its goal should be to help your users become more efficient in their workflows and effective at using your product.

The best way to get there is to start by deeply understanding your users’ jobs-to-be-done (JTBD). These are the specific tasks they’re trying to accomplish, along with the pain points that are slowing them down.

 

Jobs To Be Done Framework

 

To understand their goals and frustrations, you can ask your users questions like:

  • Where are you repeatedly getting stuck or slowed down?
  • Which tedious, repetitive tasks could be automated or accelerated?
  • What decisions could be made faster or better with the help of AI?
  • Where do you need more context, clarity, or suggestions to move forward?
  • Which parts of the workflow are mentally taxing, frustrating, or high‑friction?

Answering these questions can help you define a focused scope for your assistant, so that it can deliver relevant value to your users without just becoming a ChatGPT clone.

Turning JTBD into AI features

Once you’ve identified your users’ core jobs-to-be-done, the next step is to translate them into targeted AI features. This is where you decide how your assistant will show up in the product and how it will help your users.

In my experience, it’s easy to go wrong here. If you give users a generic AI experience that they could get from any other assistant, they might not know how to apply it to their specific workflows. Alternatively, they might use the assistant extensively for unrelated tasks, which defeats the purpose of embedding an assistant within your product.



So instead of giving users a blank prompt area that says “Ask me anything” or “How can I help you,” design context-aware prompts and action buttons that support your user where they need help the most.

For example, when I worked on a data analytics product, the assistant could prompt the user to summarize trends they were already looking at, explain anomalies in a chart, or translate metrics into plain language insights. This made it clear to our users what types of tasks the assistant could help with, and it also removed the friction from getting started from scratch.
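One way to implement this pattern is to map the user’s current view to a small set of suggested prompts. Here is a minimal TypeScript sketch of that idea; the view names, labels, and prompt wording are illustrative assumptions, not from any specific product:

```typescript
// Hypothetical view contexts for an analytics product -- names are
// illustrative, not from any real codebase.
type ViewContext = "dashboard" | "chart" | "report";

interface PromptSuggestion {
  label: string;  // short text shown on the action button
  prompt: string; // full prompt sent to the assistant when clicked
}

// Each view gets suggestions tied to the jobs users do on that screen.
const suggestionsByContext: Record<ViewContext, PromptSuggestion[]> = {
  dashboard: [
    { label: "Summarize trends", prompt: "Summarize the key trends on this dashboard." },
  ],
  chart: [
    { label: "Explain anomalies", prompt: "Explain any anomalies in this chart." },
    { label: "Plain-language insights", prompt: "Translate these metrics into plain language." },
  ],
  report: [
    { label: "Draft a summary", prompt: "Draft an executive summary of this report." },
  ],
};

// Return context-relevant suggestions instead of a blank "Ask me anything".
function getSuggestions(view: ViewContext): PromptSuggestion[] {
  return suggestionsByContext[view];
}
```

The design choice here is that the mapping lives in one declarative table, so adding a new view or suggestion doesn’t touch the assistant’s rendering logic.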

By mapping JTBD to clearly defined AI features, you avoid bloating your product with functionality that your users may not necessarily find helpful. Adding AI to your product just for the sake of having it isn’t a good long-term strategy. Instead, understanding your users’ JTBD will lead to your AI assistant feeling like an extension of the main experience.

Creating reusable UI components and patterns

Once you’ve decided on your AI features and aligned on requirements with product management, it’s time to design your assistant. This is where reusable UI components and patterns become essential to your design process. Not only do they contribute to creating consistent AI experiences across your product, but they also make it easier for your team to maintain and scale over time.

Every time you modify a component or add a new capability to your assistant, you shouldn’t have to reinvent the wheel. You can use a well-documented library of components and patterns that users already understand and feel familiar with.

There are a few core components that are worth standardizing early on.

Prompt input fields and action buttons

A well-designed prompt input field encourages your users to engage with your AI assistant. Pair it with a welcome message, the equivalent of a concierge greeting you when you first enter a hotel.

Providing a few example prompts in the form of action buttons can offer inspiration while informing users of what the assistant can do. Tailor the button labels towards context-relevant actions that support the user’s jobs-to-be-done:

 

Source: Syncfusion

 

For example, a product management tool might include buttons to “draft a status update” or “summarize a recent update.” Embedding these targeted actions into the assistant’s interface removes the guesswork of figuring out what to ask and can help spark additional prompt ideas.

If it makes sense for your product, you can also add an option for voice input to help lower friction even further. Voice input provides another level of accessibility for users who may have a difficult time using a keyboard:

 

Source: Meta

 

Visual indicators of AI activity

Transparency is key when it comes to designing for AI. Since many users are still unfamiliar with, or skeptical about, how AI works, clear visual cues that the AI is active can help build trust and prevent confusion.

Some examples of AI visual indicators include subtle animations or effects that show the assistant is “thinking” or generating a response. This provides users with immediate feedback after prompting the assistant.

When content is AI-generated, using a distinct tag or designated color for AI can make this clear to users. Similarly, action buttons that trigger AI functions can use the same color coding to visually separate AI-driven actions from manual ones:

 

Source: Microsoft
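In code, this pattern often amounts to decorating every AI-authored item with the same badge and reserved accent color before rendering. A minimal sketch, with all type and field names as illustrative assumptions:

```typescript
// Who produced a given message in the conversation.
type Author = "user" | "human_agent" | "assistant";

interface Message {
  author: Author;
  text: string;
}

interface RenderedMessage extends Message {
  badge?: string;       // visible tag, e.g. "AI"
  accentColor?: string; // the one designated AI color used product-wide
}

// Example value only -- the point is that a single color is reserved for AI.
const AI_ACCENT = "#7B61FF";

function decorate(message: Message): RenderedMessage {
  if (message.author === "assistant") {
    // Every AI-generated message carries the same badge and accent color,
    // so users can tell at a glance which content came from the model.
    return { ...message, badge: "AI", accentColor: AI_ACCENT };
  }
  return { ...message };
}
```

Centralizing the decoration in one function keeps the AI tagging consistent everywhere the conversation is rendered.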

 

Response blocks

How you structure the assistant’s outputs can also have a big impact on the experience. Walls of unformatted text can discourage users from reading, making the assistant less appealing to use. That’s why it’s important to make response blocks easy to scan and read, with clear headings, bullet point lists, and consistent formatting:

 

Source: Gemini

 

Use the designated AI color as a subtle background tint to distinguish AI responses from human responses. Some conversations may involve other humans, like support agents, so differentiating AI responses from human ones will signal to the user which responses came from the assistant.

I also like to make responses actionable by including a context menu, either on hover or placed beneath each output. Common actions can include copy, share, and provide feedback. Not only does this make it easy for users to reuse or share the assistant’s responses, but gathering feedback helps your team continuously improve the assistant over time.

When it comes to providing feedback on the assistant, I’ve found that it can be confusing if the feedback UI looks like the user is responding to the assistant rather than submitting feedback to your team. Styling the feedback block as a system message, rather than an AI response, makes it clear that it’s meant for the company and not the assistant.

Once the feedback is submitted, the block should disappear so it doesn’t disrupt the flow of the conversation:

 

Source: Gemini
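The feedback-block lifecycle can be modeled as a small state transition: the block is a system-styled item in the conversation, and submitting a rating sends it to the team and removes the block. A sketch under those assumptions (all names hypothetical):

```typescript
// Conversation items are either AI responses or system-styled blocks,
// such as a feedback request. Names are illustrative.
type ItemKind = "ai_response" | "system_feedback_request";

interface ConversationItem {
  id: string;
  kind: ItemKind;
  text: string;
}

// Submitting feedback forwards the rating to the product team (not the
// assistant) and removes the block so it doesn't disrupt the conversation.
function submitFeedback(
  items: ConversationItem[],
  feedbackId: string,
  send: (rating: "up" | "down") => void,
  rating: "up" | "down",
): ConversationItem[] {
  send(rating);
  return items.filter((item) => item.id !== feedbackId);
}
```

Returning a new array rather than mutating in place fits the unidirectional state updates most UI frameworks expect.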

 

Inline actions and follow-up

Your assistant can keep conversations flowing by ending its responses with a follow-up question and inline action buttons. Instead of requiring users to type, offer simple answers like Yes or No for confirmations or specific context-relevant actions such as “Show alternatives” or “Apply changes.” This can help guide the conversation while reducing typing fatigue, making the experience feel more like a collaboration rather than just a text generator:

 

Source: Google
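Structurally, this means an assistant turn carries not just its body but an optional follow-up question plus a list of tappable actions whose values are sent back as if the user had typed them. A minimal sketch, with all names as illustrative assumptions:

```typescript
// An inline action the user can tap instead of typing.
interface InlineAction {
  label: string; // e.g. "Yes", "Show alternatives"
  value: string; // sent back to the assistant as if typed
}

// One assistant turn: body text, optional follow-up question, quick actions.
interface AssistantTurn {
  body: string;
  followUp?: string;
  actions: InlineAction[];
}

// Build a turn that ends with a confirmation question and quick replies.
function withConfirmation(body: string, question: string): AssistantTurn {
  return {
    body,
    followUp: question,
    actions: [
      { label: "Yes", value: "yes" },
      { label: "No", value: "no" },
      { label: "Show alternatives", value: "show_alternatives" },
    ],
  };
}
```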

 

Best practices and insights for designing AI assistants

Designing for AI isn’t the same as designing any other feature. It introduces complexities around trust, unpredictability, and balancing automation with user control. Based on my experience, I’ve noticed several key principles that consistently help create AI assistants that are helpful, transparent, and that users want to engage with.

Be transparent and build trust

Trust is the foundation of a good AI assistant. If users aren’t sure where information is coming from or whether they can rely on it, they’ll hesitate to act on the assistant’s responses. It’s important to clearly label AI-generated content so it’s instantly recognizable. Users told us this gave them confidence because they could easily distinguish the assistant’s contributions from human ones, allowing them to approach the content with appropriate caution.

Transparency also means being honest about what the assistant can and can’t do. Setting realistic expectations builds long‑term trust, even if it means admitting certain limitations upfront.

This can include confidence levels for AI responses, especially if answers involve facts, statistics, or historical data. You might also add short disclaimers reminding users not to treat responses as professional advice or assume that they’re 100 percent accurate. These cues help manage users’ expectations and encourage them to fact-check rather than rely on AI blindly:

 

Source: Yahoo
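One way to wire these cues together is to derive a confidence label from a backend-reported score and attach a disclaimer whenever the answer is factual or low-confidence. The thresholds, field names, and wording below are assumptions for illustration:

```typescript
// Raw answer as it might arrive from the backend (assumed shape).
interface ModelAnswer {
  text: string;
  confidence: number; // 0..1, as reported by the backend (assumption)
  factual: boolean;   // involves facts, statistics, or historical data
}

interface DisplayedAnswer extends ModelAnswer {
  confidenceLabel: "high" | "medium" | "low";
  disclaimer?: string;
}

function present(answer: ModelAnswer): DisplayedAnswer {
  // Thresholds are illustrative; tune them to your model and use case.
  const confidenceLabel =
    answer.confidence >= 0.8 ? "high" : answer.confidence >= 0.5 ? "medium" : "low";
  // Factual or low-confidence answers get a reminder to fact-check.
  const needsDisclaimer = answer.factual || confidenceLabel === "low";
  return {
    ...answer,
    confidenceLabel,
    disclaimer: needsDisclaimer
      ? "AI-generated content may be inaccurate. Please verify important facts."
      : undefined,
  };
}
```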

 

Design for unpredictability

One thing I’ve often observed is that users get frustrated when the assistant returns irrelevant or confusing responses. When the AI doesn’t meet their expectations, it can quickly lose their trust, leading them to stop engaging with the assistant altogether.

To prevent this, build in options to regenerate a response or refine a prompt to be more specific. This not only keeps users engaged but also helps them learn how to get better responses from the assistant over time. The key here is to make it easy for users to recover from mistakes or unexpected outcomes so that they can have more effective conversations:

 

Source: Dribbble
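Both recovery paths can be expressed as pure transformations on the last exchange: regenerate keeps the prompt and requests a fresh response, while refine appends the user’s clarification before retrying. A sketch, where `generate` stands in for an assumed call to your backend:

```typescript
// One prompt/response pair in the conversation.
interface Exchange {
  prompt: string;
  response: string;
}

// Same prompt, fresh response.
function regenerate(
  exchange: Exchange,
  generate: (prompt: string) => string,
): Exchange {
  return { prompt: exchange.prompt, response: generate(exchange.prompt) };
}

// Let the user make the prompt more specific instead of starting over.
function refine(
  exchange: Exchange,
  refinement: string,
  generate: (prompt: string) => string,
): Exchange {
  const prompt = `${exchange.prompt}\n\nRefinement: ${refinement}`;
  return { prompt, response: generate(prompt) };
}
```

Keeping the refined prompt visible to the user is what helps them learn, over time, how to ask better questions up front.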

 

Balance automation with user control

One of the most valuable applications of AI is automating manual, tedious tasks to boost efficiency and productivity. But users still want to feel in control of their workflow, in case outcomes aren’t what they want or if they change their minds.

I once worked on a project where the assistant automatically applied edits without asking. The user feedback was generally negative. Instead of feeling assisted, users lost their sense of control over their work and felt like the AI was too intrusive.

To avoid this, automation works best with approval checkpoints. This can mean asking users to confirm an action before applying changes, or showing them a preview of results before committing to them. This approach keeps control in the user’s hands while still speeding things up through automation.

If users change their minds after committing to an action, an easy undo option can give them peace of mind. Undo functionality lets users experiment with automation confidently, knowing they can quickly reverse any changes made by the assistant:

Source: Google
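Under the hood, one-click undo usually means recording each AI-applied change alongside its inverse. Here is a minimal sketch of that idea, with all names as illustrative assumptions rather than a specific product’s implementation:

```typescript
// A change the assistant applied, paired with its inverse.
interface AppliedChange<T> {
  apply: (state: T) => T;
  undo: (state: T) => T;
}

// A simple history stack: commit applies a change and records it;
// undoLast pops the most recent change and reverses it.
class ChangeHistory<T> {
  private stack: AppliedChange<T>[] = [];

  commit(state: T, change: AppliedChange<T>): T {
    this.stack.push(change);
    return change.apply(state);
  }

  undoLast(state: T): T {
    const change = this.stack.pop();
    // Nothing to undo: return the state unchanged.
    return change ? change.undo(state) : state;
  }
}
```

Because every automated edit goes through `commit`, users can experiment freely knowing `undoLast` will reverse the assistant’s most recent change.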

Final thoughts

AI is here to stay, and as a designer you need to become proficient at creating usable, trustworthy experiences that embed AI into every product. As you design your AI assistant, focus on your users’ needs and goals before jumping into designs. You should have a solid understanding of their current pain points to design an assistant that is truly helpful to them.

Then, create reusable UI components and patterns that can scale over time as your users’ needs evolve. The most important takeaway is to build trust and transparency into your designs. This is the foundation for all AI experiences, as users are still skeptical about AI’s applications and effectiveness.

And rightfully so, as AI can hallucinate and provide inaccurate responses. The more you can add transparency to your assistant, like through visual indicators, confidence levels, or reference links, the higher the confidence your users will have when using it.

Continue to test your assistant and use feedback to improve its responses and interactions. Although we’re still in the early stages of AI adoption, designers must take the lead in shaping how this technology integrates into our lives through the tools that we use. By ensuring that our users maintain a strong sense of control, we can empower them to feel confident and equipped with the tools that help them get things done more effectively.

LogRocket helps you understand how users experience your product without needing to watch hundreds of session replays or talk to dozens of customers.

LogRocket’s Galileo AI watches sessions and understands user feedback for you, automating the most time-intensive parts of your job and giving you more time to focus on great design.

See how design choices, interactions, and issues affect your users — get a demo of LogRocket today.

