AI adoption best practices: How to integrate AI into your design

Abstract

Artificial intelligence is no longer a futuristic concept; it’s a tool on every designer’s desk. Whether you’re building with LLMs, experimenting with AI-generated visuals, or designing user interfaces that include AI-driven experiences, the question is no longer if you should use AI, but how to do it responsibly, effectively, and with your users in mind.


1. Start with the human, not the model

Before diving into model capabilities, define what your user actually needs. LLMs are powerful, but they shouldn’t be your product’s personality — they should be its invisible engine. Use jobs-to-be-done frameworks to find meaningful places where AI can reduce friction, unlock insight, or guide action.

Ask yourself → “What problem are we solving, and is AI genuinely the best solution for it?”


2. Anchor AI to a core JTBD (Job To Be Done)

Too many teams start with, “What can we do with this model?”

Instead, start with → What pain point can we help users solve faster or more easily with AI?

AI should accelerate a user’s progress toward value, whether that’s summarizing data, generating ideas, or guiding decisions. If it doesn’t contribute to your product’s core loop, it’s just noise.

Growth principle: Start where AI reduces friction in high-frequency or high-value moments.

Ask yourself → “What core user task does this AI support, and does it help shorten time to value?”


3. Design AI interactions to be exploratory, but nudged towards action

LLMs open up a wide surface of possible user inputs — but that ambiguity can kill activation. Users don’t want to start with a blinking cursor or a blank canvas. Instead, nudge the user towards action and provide suggestions to inspire them.

Growth strategies:

  • Use scaffolded prompting (e.g. suggested actions, templates, “Try this” chips); a sketch follows this list.

  • Reduce friction with micro-copy like: “Need ideas? Start with a goal.”

  • Show example outputs before they act. Trust grows when users can predict what they’ll get.
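
To make scaffolded prompting concrete, here is a minimal TypeScript sketch of “Try this” chips. The PromptSuggestion shape, the chip labels, and the prefillFromChip helper are all illustrative assumptions, not a real API:

// Hypothetical shape for a "Try this" chip; every name here is illustrative.
interface PromptSuggestion {
  label: string;  // short text shown on the chip
  prompt: string; // full prompt pre-filled into the input when the chip is clicked
}

const starterChips: PromptSuggestion[] = [
  { label: "Summarize my data", prompt: "Summarize the key trends in this dataset." },
  { label: "Brainstorm ideas", prompt: "Suggest five ideas for improving onboarding." },
  { label: "Draft a goal", prompt: "Turn this rough note into a measurable goal." },
];

// Pre-fill the input from a chip so the user never starts from a blank canvas.
function prefillFromChip(label: string): string {
  const chip = starterChips.find((c) => c.label === label);
  return chip?.prompt ?? ""; // fall back to an empty input if nothing matches
}

The point of the sketch: the user’s first click is a choice between concrete examples, not an open-ended prompt they have to invent.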

Growth principle: Help users move from curiosity to action with as little effort as possible.

Ask yourself → “What does the user see the very first time they meet this AI — and does it invite action, not confusion?”


4. Design trust loops, not just magic moments

Impressive outputs are great for first impressions. But retention comes from building trust loops: small, repeatable interactions where the user sees the AI do something useful, again and again.

Trust loops are built on:

  • Transparent language: “AI-generated suggestion based on your input.”

  • Controls: edit, undo, regenerate; users need to stay in control (sketched after this list).

  • Signals: show progress, uncertainty, or confidence scores.
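
Here is a minimal TypeScript sketch of the trust-loop surface; the AiSuggestion and SuggestionControls shapes and the low/medium/high confidence scale are assumptions for illustration, not a real API:

// Hypothetical shapes; field names are assumptions, not a real library API.
type Confidence = "low" | "medium" | "high";

interface AiSuggestion {
  text: string;           // the AI-generated suggestion itself
  source: string;         // transparent label, e.g. "AI-generated from your input"
  confidence: Confidence; // surfaced to the user as an uncertainty signal
}

interface SuggestionControls {
  edit: (next: string) => AiSuggestion;    // the user can rewrite the output
  undo: () => void;                        // the user can discard it
  regenerate: () => Promise<AiSuggestion>; // the user can ask for another attempt
}

// Keep the user in the loop: only auto-apply output the model is confident about.
function shouldAutoApply(suggestion: AiSuggestion): boolean {
  return suggestion.confidence === "high";
}

Every suggestion carries its provenance and confidence, and every control keeps the final decision with the user.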

Growth principle: Users return to what they trust and can control.

Ask yourself → “Does this feature build confidence over time, or does it just try to impress once?”


5. Respect limits: don’t over-promise or over-personalize

Yes, LLMs feel magical. But they’re fallible, and hallucinations or biased outputs can harm trust (and your brand).

Instead of pretending the model is flawless:

  • Set clear expectations up front

  • Explain how results are generated

  • Allow feedback, flagging, or opting out (a sketch follows this list)
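
As a sketch of the feedback and flagging item above, here is a hypothetical TypeScript feedback event; AiFeedback, recordFeedback, and the rating values are illustrative assumptions, not a real API:

// Hypothetical feedback event; all names here are illustrative.
interface AiFeedback {
  outputId: string;                  // which AI output the user is rating
  rating: "helpful" | "not-helpful"; // lightweight thumbs up / down
  flagged: boolean;                  // the user marked the output as wrong or harmful
  comment?: string;                  // optional free-text explanation
}

const feedbackLog: AiFeedback[] = []; // stand-in for an analytics or moderation pipeline

function recordFeedback(event: AiFeedback): void {
  feedbackLog.push(event);
  if (event.flagged) {
    console.warn(`Output ${event.outputId} was flagged for review.`);
  }
}

recordFeedback({ outputId: "sug-42", rating: "not-helpful", flagged: true });

Even this small loop gives users an exit when the AI gets it wrong, which is part of feeling safe trying it.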

Growth principle: Transparency increases trust. Over-promising kills it.

Ask yourself → “Are we honest about what this AI can and can’t do, and do users feel safe trying it?”


© Kirsten Swensen 2025