Recently, our team was assigned to redesign a report dashboard for Course AI—an AI-driven learning management system (LMS). We put a lot of effort into understanding user needs through user research, which consistently pointed to a poor user experience. Of course, what can we expect from the very first version of a dashboard—one that tends to integrate every feature developers can easily implement, rather than what users actually need.

However, when it came to ideation, the messy menu and page structure pushed me to think about interaction in a clearer, more direct way – which in turn pushed me to simplify how I design it. The answer became clearer and clearer, especially as I had been using ChatGPT to generate results simply through dialogue. That is:

Instead of just designing how users manipulate the interface, designing how the system understands users’ desires should be the first priority for AI-driven products.

In other words, users just need to “tell” the AI product what they want, and the AI will implement the process and generate results continuously by learning from users’ feedback.

I am thrilled to see UX design evolving – from human-machine interaction design to human-machine coordination design. Many UX designers have probably already sensed this big shift when designing for AI products. Tia Clement pointed out that AI is reshaping UI. As she mentioned in her article (AI is reshaping UI – have you noticed the biggest change yet?), users articulate their goals, but instead of executing every step manually, they collaborate with the system – refining inputs and guiding the AI as it interprets, adjusts, and responds dynamically.

Jakob Nielsen argued that this shift is not just a technical trend, but a philosophical one:

“Users tell the computer what outcome they want instead of what to do to achieve that outcome.”

As for me – someone who has been in charge of several products, with a mixed role of product owner and UX designer – I’ve come to realise a mindset shift in how we as designers treat our users. That is, instead of treating users like a bunch of “idiots” by designing a series of “no-brainer” interaction steps, we should probably treat them like the boss, by designing a “touch-base” to coordinate with AI to support them better.

With that mindset, here I present three principles to help designers better serve users in this new AI-driven era.

Don’t ask how, just generate outcomes promptly

Try to imagine this: your boss hires you to deliver results. But if you keep asking how to achieve those results, your boss will probably lose their temper—and you might get fired. (Here, we’re excluding apprenticeships and internships.) The whole reason for your employment is that the boss assumes you already know how to get things done.

So, when it comes to a task, you don’t ask how to complete it step by step—you figure out what outcome they expect and why it matters. Likewise, you can’t expect users – your “bosses” – to tell the AI product how to reach the outcome, step by step.

Take Stable Diffusion as an example. It’s a well-known text-to-image diffusion model that produces amazing visuals you’ve probably seen online. Yet, few people around you are actually using it. Why? Because it requires users to guide every step of the generation process like an engineer—not to mention the complexity of setup.

In contrast, ChatGPT-4o only needs a few prompts, maybe an image or two, and it generates results that are already good enough for users to share on social media. This difference makes a huge impact on usability and adoption.

AI excels at providing answers promptly, which means it can also get user feedback quickly. That feedback loop allows AI to refine its output continually—until the result is acceptable to users. This mirrors how we respond to highly uncertain requests in real life—like designing a poster for an impatient boss who always has a hectic schedule.

We usually start by producing a quick draft—maybe not perfect, but grounded in our past experience and what we know of the boss’s preferences. Then, we gather feedback and refine. In such cases, bosses often appreciate the efficiency, even though some time should be spent adjusting the direction later.

Therefore, when designing AI products, make sure the output is generated fast enough that users are willing to give feedback. Treat users like bosses. Don’t ask how—just generate a result that shows understanding and serves as a starting point to reach alignment.

But what if the busy boss’s request is uncertain? Then we arrive at the second principle.

If the request is uncertain, reason based on previous behaviours

When OpenAI announced it was rolling out a new update to ChatGPT’s memory – allowing the bot to access the contents of all your previous chats—I thought it was heading in the right direction: treat users like they are the boss.

These days, many people have switched to ChatGPT for answering questions. Some requests are quite clear, such as:
“What is the difference between oregano, thyme, and rosemary?”
“Resize this image to 1080×1920.”

But others are vague, like:
“Fix this.” (What’s broken?)
“Make it better.” (In what way?)

You probably wouldn’t have entered such ambiguous commands into your computer before the rise of AI. But you have likely received these kinds of requests from a busy boss – and in those situations, saying “how?” too often might risk your job. To get the job done, you had to start working based on your boss’s preferences.

In fact, how well you understand unclear requests is often a reflection of how competent you are. It’s not just about knowing what’s being asked—but why. For example:

Your boss receives a presentation from another colleague and emails it to you, asking you to “make it better.” As a competent assistant, you wouldn’t reply with “How exactly?” Instead, you would follow this loop:

1. Review the presentation to understand what’s wrong

2. Identify the issues based on your experience and knowledge of your boss’s preferences

3. Email your boss to explain what you’ve improved or plan to improve

4. If no new feedback comes, proceed with the work; if feedback arrives, adjust accordingly

5. Repeat steps 3–4 until your boss is satisfied
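The five-step loop above can be sketched as a simple control flow. This is a conceptual sketch only, not a real API – `make_draft`, `get_feedback`, and `revise` are hypothetical placeholders for whatever model calls an AI product would actually make:

```python
def refine_until_satisfied(request, history, make_draft, get_feedback,
                           revise, max_rounds=5):
    """Iterate on a draft until feedback stops arriving (steps 1-5)."""
    # Steps 1-2: a first draft grounded in the request and what we
    # already know about the "boss" (prior behaviour, preferences).
    draft = make_draft(request, history)
    for _ in range(max_rounds):
        # Step 3: present the draft; None means no new feedback arrived.
        feedback = get_feedback(draft)
        if feedback is None:
            # Step 4: silence means satisfaction - proceed with the work.
            return draft
        # Steps 4-5: fold the feedback into the next draft and repeat.
        draft = revise(draft, feedback)
    return draft
```

The key design choice is that the loop starts by producing something, and only then asks for feedback – it never blocks on a “how exactly?” question before the first draft exists.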

AI products should follow a similar flow. One key point I want to emphasise is:

The ability to identify the real issues behind a request is essential.

Only when an AI product can reason and infer what the user truly needs can it deliver satisfying – and even unexpectedly great – outcomes. Because that’s how things work in the real world.

Search engines like Google used to be the most “AI-like” product. They provided results based on input and some understanding of user intent. But still, users often get frustrated when irrelevant answers show up. That’s because Google expects users to input the “right” keywords to get a relevant result within the first few lines of the results page.

But in reality, users are busy. Users are lazy. Users are sometimes even a bit clueless. They often type incomplete or imprecise keywords. And instead of refining the query, they just scroll again and again, hoping something relevant will show up.

That kind of vague, uncertain request requires reasoning, not just matching. And this is where traditional search falls short—leaving a gap in the user experience. This gap is what AI can fill—by identifying real user intent based on prior behavior.

That’s why I believe AI will eventually replace traditional search engines. Personally, I’ve already started using ChatGPT more often to search – especially since the newer version can remember my preferences and deduce intent from previous chats. That gives me more reason to “hire” ChatGPT as my AI assistant. (By the way, I’ve tried Google Gemini too, but the results still lack an outcome-oriented mindset—it still follows the traditional search process.) In most cases, knowing why the user is making a request matters more than simply outputting an answer, because answers that miss the intent are probably not that close to what users really want. Users today are switching tools for a better user experience. They want tools that respond like a good assistant, not just a machine.

But in terms of output, there are two types as well – predictable or unpredictable.

What if the output itself is unpredictable – something creative, ambiguous, or exploratory – where even the boss doesn’t know exactly what the answer should be? That’s where our third principle comes in.

If the output is unpredictable, provide options

Some outputs are predictable, as we’ve discussed. But others – even when the request is clear – are inherently unpredictable. For instance:
“Generate a Ghibli-style portrait using this photo”
“Write a poetic Instagram caption for this sunset”

I believe many people have already used the ChatGPT-4o model or similar tools to transform photos into different styles. Some of my photography clients have even shown me Ghibli-style versions of their family photos—and I was blown away. The results were stunning yet consistent with the original images, making them feel personal, shareable, and truly unique. I tried it with my own photos as well. While the output was often beautiful, I started noticing oddities: strange-looking fingers, off backgrounds, or slightly distorted facial features. I tried correcting them using text prompts, but quickly hit the limit of what the model could do. These results are understandable, since the outputs for such requests are bound to be unpredictable.

But as a designer with a sensitivity to emotional impact, I realised the problem goes beyond technical imperfections.

Since these generated images are often highly personal, even small distortions – like a weird face – can cause discomfort or emotional unease, especially for vulnerable users. The psychological reaction is similar to what’s known as the “uncanny valley” effect.

“The uncanny valley is a phenomenon where humanoid figures (like robots or AI-generated characters) become unsettling as they approach human realism but don’t quite get there. Subtle imperfections in their appearance or behavior make them feel unnatural, even creepy—because they’re close enough to evoke real human expectations, but not perfect enough to meet them.”

So what can we do to reduce the side effects of unpredictable outputs?

The answer lies – again – in treating users like they’re the boss.

Imagine this: you’re a designer, and your boss asks you to create a poster. If you want to make your boss happy, you probably won’t just submit one final version. Instead, you’d present at least two options—giving your boss choices, and more importantly, a sense of control in the process. This also helps you narrow down the direction for further refinement. Even if the boss dislikes both versions, you’ve at least ruled out what doesn’t work.

The same principle applies to AI product design:

When a user makes a request with an unpredictable output, the AI product should offer at least two options – not just to capture user preference, but also to give the user that same “boss-like” sense of control. And when there are outputs the user does not want – visually jarring, off-style, or awkward – the system should hide them automatically, so users don’t get distracted or disturbed. “Less is more” still applies here. What’s more, users should always be able to give feedback to correct the output. If the AI system is advanced enough, it should support a refinement loop, where the output improves continuously based on feedback.
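The “offer options, hide the jarring ones” behaviour can be sketched as a small filter loop. Everything here is hypothetical – `generate` stands in for a model call and `is_acceptable` for whatever quality check would hide distorted or off-style outputs:

```python
def present_options(generate, is_acceptable, n_candidates=4,
                    min_options=2, max_attempts=20):
    """Return at least min_options acceptable candidates to the user."""
    # Generate a batch, then hide candidates the user would not want.
    options = [c for c in (generate() for _ in range(n_candidates))
               if is_acceptable(c)]
    attempts = n_candidates
    # Regenerate until enough options survive the filter, with a cap
    # so a stubborn generator cannot loop forever.
    while len(options) < min_options and attempts < max_attempts:
        attempts += 1
        candidate = generate()
        if is_acceptable(candidate):
            options.append(candidate)
    return options
```

Capping the number of attempts is a deliberate trade-off: better to show the user fewer options than to keep them waiting indefinitely for a perfect batch.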

Here is a good example: let’s say you’re editing a photo in Photoshop. You want to change the Telstra Tower into something more fancy. So you select the tower area and enter a prompt like “make it a fancy one.” In just a second, Photoshop gives you three different versions of the tower – directly placed in your original photo. You can browse them using arrow buttons, choose one, or hit “Generate Again” to refresh, possibly adjusting your prompt along the way. I can’t say such an experience feels quite like working with a smart assistant, but at least you have a choice of outputs.

Final thoughts

As AI becomes more mature and powerful, I have a strong feeling that a new era of UX design is approaching. This is the era in which AI’s capabilities not only empower designers to create and present ideas more efficiently, but also push them to empathise with users on a deeper level. It’s worth revisiting current design methodologies when AI is involved. As UX designers, we can start by asking ourselves three questions:
1. If the AI product behaved like a talented assistant we helped cultivate, would it appear humble or arrogant?
2. What untapped opportunities exist in UX interaction that could better capture user desires?
3. How can we leverage AI to enhance the entire UX workflow—across research, problem definition, ideation, prototyping, and testing?
Design thinking isn’t just about following a set of methods; it’s about cultivating the right mindset—a mindset that is essential in adapting to the rise of AI. Here, I offer a simple metaphor for AI product design—one that I hope can help other designers, and serve as a reminder for myself as well: treat users like they are the boss.
