AI adoption best practices: How to integrate AI into your design


Artificial intelligence is no longer a futuristic concept; it’s a tool on every designer’s desk. Whether you’re building with LLMs, experimenting with AI-generated visuals, or designing user interfaces that include AI-driven experiences, the question is no longer if you should use AI, but how to do it effectively, and with your users in mind.
1. Start with the human, not the model
Before diving into model capabilities, define what your user actually needs. LLMs are powerful, but they shouldn’t be your product’s personality. They should be its invisible engine. Use jobs-to-be-done frameworks to find meaningful places where AI can reduce friction, unlock insight, or guide action.
Ask yourself → “What problem are we solving, and is AI genuinely the best solution for it?”
2. Anchor AI to a core JTBD (Job To Be Done)
Too many teams start with, “What can we do with this model?”
Start with → What pain point can we help users solve faster or more easily with AI?
AI should accelerate a user’s progress toward value, whether that’s summarizing data, generating ideas, or guiding decisions. If it doesn’t contribute to your product’s core loop, it’s just noise.
Growth principle: Start where AI reduces friction in high-frequency or high-value moments.
Ask yourself → “What core user task does this AI support, and does it genuinely help the user?”
3. Design AI interactions to be exploratory, but nudge users toward action
LLMs open up a wide surface of possible user inputs, but that ambiguity can kill activation. Users don’t want to start with a blinking cursor or a blank canvas. Instead, nudge the user toward action and provide suggestions to inspire them.
Growth strategies:
Use scaffolded prompting (e.g. suggested actions, templates, “Try this”).
Reduce friction with micro-copy like: “Need inspiration? Try this prompt.”
Show example outputs before they act. Trust grows when users can predict what they’ll get.
Growth principle: Help users move from curiosity to action with as little effort as possible.
Ask yourself → “What does the user see the very first time they meet this AI, and does it invite action, not confusion?”
4. Design trust loops, not just magic moments
Impressive outputs are great for first impressions. But retention comes from building trust loops: small, repeatable interactions where the user sees the AI do something useful, again and again.
Trust loops are built on:
Transparent language: “AI-generated suggestion based on your input.”
Controls: edit, undo, regenerate. (Users need to be in control).
Signals: show progress, uncertainty, or confidence scores.
Growth principle: Users return to what they trust and can control.
Ask yourself → “Does this feature build confidence over time, or does it just try to impress once?”
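The “controls” part of a trust loop (edit, undo, regenerate) can be sketched as a simple version history for AI output. This is an assumed, minimal implementation, not a prescription; the class and method names are invented for illustration:

```typescript
// Minimal sketch of keeping the user in control of AI output:
// every generated or edited version lands in a history the user can undo.
class OutputHistory {
  private versions: string[] = [];

  // Called when the AI produces a new suggestion (or a regeneration).
  accept(output: string): void {
    this.versions.push(output);
  }

  // A user edit becomes a new version, so undo can restore what came before.
  edit(newText: string): void {
    this.versions.push(newText);
  }

  // Step back one version; the very first version is never lost.
  undo(): string | undefined {
    if (this.versions.length > 1) this.versions.pop();
    return this.current();
  }

  current(): string | undefined {
    return this.versions[this.versions.length - 1];
  }
}

const history = new OutputHistory();
history.accept("AI-generated suggestion based on your input.");
history.edit("My tweaked version.");
history.undo(); // back to the AI's original suggestion
```

Pairing controls like this with transparent labels (“AI-generated suggestion based on your input”) is what turns a one-off magic moment into a loop the user is willing to repeat.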
5. When working with LLMs: Respect limits and expect to rework things again and again
Yes, LLMs feel magical. But they’re fallible: they hallucinate and produce faulty outputs. Make sure you start with a strong conceptual main prompt and iterate from there. Once the LLM understands the first steps, you can add more complexity if needed. Focus on functionality first, and polish the UI later.
Set clear expectations in your prompting up front; be as specific as possible.
Start with your initial thoughts and write them out as a concept combined with a user flow.
Explain through visual references: screenshot how it should look, or describe/upload the styling of your design system.
Polish the UI where needed, save back-ups of previous versions in GitHub, and download your code or share it with devs.
Make sure to import your design into Figma so you have the source there as well and can keep version history.
Ask yourself → “How can I serve as a director and lay the groundwork first?”
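The “lay the groundwork first” workflow above can be made concrete with a small helper that assembles a structured first prompt from a concept, a user flow, and optional styling notes. This is a hypothetical sketch; the field names and boilerplate text are assumptions, not a fixed template:

```typescript
// Hypothetical sketch: turn a written-out concept + user flow into a
// structured first prompt, before adding any further complexity.
interface PromptSpec {
  concept: string;    // one-paragraph description of the product idea
  userFlow: string[]; // ordered steps the user takes
  styleNotes?: string; // optional pointer to design-system styling
}

function buildFirstPrompt(spec: PromptSpec): string {
  const flow = spec.userFlow.map((step, i) => `${i + 1}. ${step}`).join("\n");
  return [
    `Concept: ${spec.concept}`,
    `User flow:\n${flow}`,
    spec.styleNotes
      ? `Styling: ${spec.styleNotes}`
      : "Styling: keep the UI minimal for now; polish later.",
    "Focus on functionality first. Ask me before adding features beyond this flow.",
  ].join("\n\n");
}

// Example: the director's first prompt for a note-taking concept.
const prompt = buildFirstPrompt({
  concept: "A note-taking app that summarizes long notes on demand.",
  userFlow: ["Open the app", "Write or paste a note", "Tap 'Summarize'"],
});
console.log(prompt);
```

Writing the spec out like this forces you to decide the concept and flow before the model does, which is exactly the director role the question above describes.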
Here are the tools I'd recommend for experimentation and building web + mobile apps: Lovable, Claude, v0.