from Vitaly Friedman’s masterclass session
Current interface(s) & phenomena
Prompt engineering guide
–> Too technical
AI Naivety
Language models are gullible.
They believe what we tell them— what’s in their training data, then what’s in the fine-tuning data, then what’s in the prompt.
Simon Willison, Stuff We Figured Out About AI
AI models are gullible: they lack criticism and judgement, so they believe everything in their training data and in the prompt.
Language models are gullible.
If you hired a personal assistant who believed everything that anyone told them, you would quickly find that their ability to positively impact your life was severely limited.
Simon Willison, Stuff We Figured Out About AI
AI Fatigue
Today, when somebody says that something is AI-generated, it’s usually not praise, but rather a testament to how poor and untrustworthy it actually is.
Nilay Patel, Ezra Klein Show
“AI-ish” means robotic; it is not praise, it is criticism.
AI Fatigue vs. AI Evolution
AI Fatigue | AI Excitement |
01 – Hallucinations | 01 – Giant leaps, every month |
02 – Noise and pollution | 02 – Automation of routine tasks |
03 – Ethical and legal concerns | 03 – Reduced cost of human error |
04 – Massive sustainability cost | 04 – Excels at synthesis and translation |
05 – Masses of AI-generated garbage | 05 – Excels at generating code/design |
06 – Poor training data -> poor results | 06 – Excels at writing and clustering |
07 – Cleansing stage always necessary | 07 – Cheap, relatively easy to integrate |
08 – LLMs are hard to tweak/customize | 08 – Autonomous work by AI agents |
09 – Slow and repetitive input | 09 – Recursive use of AI to train itself |
10 – Unpredictable, unreliable output | 10 – Response outlines and templates |
11 – Slow fine-tuning of output | 11 – AI presets: personas, roles, tasks |
12 – Human replacement and layoffs | 12 – Emerging AI design patterns |
Drawbacks of Text Prompts
We put the burden on the user to articulate good questions, but they may not know what exactly to ask. A good UI helps users incrementally explore the problem and solution space with guidance and nudges.
Austin Z. Henley, Natural Language Is The Lazy UI
Examples
Natural language is great at rough direction: teleport me to the right neighbourhood. But once ChatGPT has responded, how do I get it to take me to the right house?
Amelia Wattenberger, Why Chatbots Are Not The Future
What others are dreaming up
Daemons
Change the tone, and create your own, similar to customizing your profile.
Daemons are characters who sit in the background of your interface to help users explore AI output via the lens of different personalities.
Maggie Appleton, LM Sketchbook
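The daemon pattern above can be sketched as a set of background personas, each of which critiques the same AI output from its own angle. The persona names and prompt wording here are illustrative assumptions, not taken from Appleton's sketchbook:

```python
# A minimal sketch of the "daemon" pattern: background personas that each
# examine the same AI output through a different lens. The personas and
# their instructions below are hypothetical examples.
DAEMONS = {
    "skeptic": "Question the claims in this text and point out weak evidence.",
    "optimist": "Highlight the strongest ideas in this text and suggest next steps.",
    "editor": "Point out unclear or wordy passages in this text.",
}

def daemon_prompt(daemon: str, output_text: str) -> str:
    """Wrap an AI output in a persona instruction for a follow-up model call."""
    instruction = DAEMONS[daemon]
    return f"{instruction}\n\n---\n{output_text}"
```

Each daemon then becomes one extra model call over the existing output, so the user explores perspectives without writing new prompts.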
Branches
Branches help users explore cause-and-consequence chains, connections and next steps. See them as a discovery assistant.
Maggie Appleton, LM Sketchbook
Finetuning and Versioning
Users can refine output by interacting with it via a context menu, e.g. to critique, find evidence for claims, generate research questions, point out assumptions.
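The context-menu refinement idea can be sketched as a mapping from menu actions to prompt transformations applied to the selected output. The action names follow the examples above; the prompt wording is an illustrative assumption:

```python
# Sketch of context-menu refinement: each menu action maps to a prompt
# transformation applied to the user's selected text. The instruction
# wording is hypothetical.
ACTIONS = {
    "critique": "Critique the following passage:",
    "find_evidence": "List evidence supporting the claims in the following passage:",
    "research_questions": "Generate research questions raised by the following passage:",
    "assumptions": "Point out unstated assumptions in the following passage:",
}

def refine(action: str, selection: str) -> str:
    """Build the follow-up prompt for a context-menu action."""
    return f"{ACTIONS[action]}\n\n{selection}"
```

The benefit is that users never type these prompts themselves; the interface turns a click into a well-formed request.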
ClipDrop
Style Lenses
We help users adjust the output with a visual representation, according to the user’s interests – e.g. keywords, location, intention, etc.
Amelia Wattenberger, Style Lenses
Options to improve writing
Users not only interact at the sentence level but also get an overview of the style lenses used.
Bing CoPilot
Elsevier’s Scopus
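A style lens can be sketched as an axis between two poles, where the slider position is translated into a rewriting instruction appended to the prompt. The axis labels match the summary later in these notes (Concrete ↔ Abstract, Lengthy ↔ Short); the thresholds and wording are assumptions:

```python
# Sketch of the style-lens pattern: a slider position in [0, 1] along an
# axis between two stylistic poles becomes a rewriting instruction.
# Thresholds (0.33 / 0.66) are arbitrary illustrative choices.
def lens_instruction(left: str, right: str, value: float) -> str:
    """Translate a slider position into a rewriting instruction.

    value = 0.0 means fully at the left pole, 1.0 fully at the right pole.
    """
    if value < 0.33:
        return f"Rewrite to be strongly {left.lower()}."
    if value > 0.66:
        return f"Rewrite to be strongly {right.lower()}."
    return f"Balance {left.lower()} and {right.lower()}."
```

This keeps refinement visual and incremental rather than forcing users to describe a style in words.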
Human-Verified Badges
We address trust issues by adding an accuracy score for AI-generated responses, or adding a human-verified badge to lend an answer more credibility.
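The badge pattern can be sketched as a small data structure carrying an accuracy score and a verification flag, rendered into a badge label. The field names and label wording are illustrative assumptions:

```python
from dataclasses import dataclass

# Sketch of the trust-badge pattern: each answer carries an estimated
# accuracy score and an optional human-verified flag, shown as a badge.
@dataclass
class Answer:
    text: str
    accuracy: float          # model-estimated confidence, 0.0-1.0
    human_verified: bool = False

def badge(answer: Answer) -> str:
    """Pick the badge label shown next to an answer."""
    if answer.human_verified:
        return "Human-verified"
    return f"AI-generated · estimated accuracy {answer.accuracy:.0%}"
```

A human-verified label overrides the score, since human review is the stronger trust signal.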
Scoping
We help users scope their query to a specific topic, domain, level of expertise, timeframe or set of documents – similar to search within category.
Source: Luke Wroblewski
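The scoping pattern can be sketched as folding user-selected filters into the query before it reaches the model, much like search within a category. The filter names (domain, timeframe, and so on) are illustrative assumptions:

```python
# Sketch of the scoping pattern: user-selected constraints (domain,
# timeframe, expertise level, document set) are appended to the query
# before it is sent to the model. Field names are hypothetical.
def scoped_query(query: str, **scope: str) -> str:
    """Fold scope filters into a query, similar to search within category."""
    if not scope:
        return query
    constraints = "; ".join(f"{k}: {v}" for k, v in sorted(scope.items()))
    return f"{query}\n\nScope: {constraints}"
```

The user sets the filters once via the UI, and every subsequent query stays within that scope automatically.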
Prompt Presets & Templates
We can proactively suggest relevant prompts to help people refine output. It would also help users find meaningful insights from a doc or a large data set.
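The preset pattern can be sketched as a small library of prompt templates with placeholders, surfaced proactively next to a document or data set. The template names and wording below are illustrative assumptions:

```python
# Sketch of prompt presets: reusable templates with a placeholder for the
# document or data set in question. Names and wording are hypothetical.
PRESETS = {
    "summarize": "Summarize {doc} in five bullet points.",
    "contradictions": "List contradictions or inconsistencies in {doc}.",
    "key_insights": "Extract the three most important insights from {doc}.",
}

def apply_preset(name: str, doc: str) -> str:
    """Fill a preset template with the target document reference."""
    return PRESETS[name].format(doc=doc)
```

Surfacing these as one-click suggestions spares users from inventing a good prompt from scratch.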
Perceived Performance
AI is slow. We can show what’s happening step by step as the generation proceeds. We can also cache frequent AI responses to avoid expensive computations.
Perplexity, for example, surfaces progress along with time and cost while generating.
If lots of people are asking the same questions, we should be caching the answers.
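The caching idea can be sketched as hashing a normalized prompt and reusing the stored response for repeated questions, instead of re-running generation every time. This is a minimal in-memory sketch; the normalization rule is an assumption:

```python
import hashlib

# Sketch of response caching: hash the normalized prompt and reuse the
# stored answer for repeated questions. Normalization here (strip +
# lowercase) is a simple illustrative choice.
_cache: dict[str, str] = {}

def cached_generate(prompt: str, generate) -> str:
    """Return a cached response, calling the model only on a cache miss."""
    key = hashlib.sha256(prompt.strip().lower().encode()).hexdigest()
    if key not in _cache:
        _cache[key] = generate(prompt)   # the expensive model call
    return _cache[key]
```

In production this would be a shared store with expiry, but the principle is the same: popular questions should cost one generation, not thousands.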
Assistant Pattern
AI performs best when it guides users towards insights and explanations autonomously, rather than on request. It must provide sources to appear credible and trustworthy.
Insights are generated by AI without prompting, because much of the time people may not even know what to ask.
Temperature Knobs
Rather than using a text input to specify their intent, users could use temperature knobs to shape the outcome in a meaningful direction.
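The knob idea can be sketched as a simple mapping from a UI slider position onto a sampling-temperature range. The 0.0–2.0 range is an assumption (it matches what many LLM APIs accept), not something specified in the talk:

```python
# Sketch of the temperature-knob pattern: a knob position in [0, 1] is
# mapped linearly onto the sampling-temperature range of the model API.
# The default range 0.0-2.0 is an illustrative assumption.
def knob_to_temperature(position: float, t_min: float = 0.0, t_max: float = 2.0) -> float:
    """Map a clamped knob position to a sampling temperature."""
    position = min(1.0, max(0.0, position))
    return t_min + position * (t_max - t_min)
```

The same mapping works for any continuous generation parameter; the point is that users shape the output by turning a knob, not by describing randomness in words.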
Hallucination
As humans, we experience reality via spatial reasoning, sense of time, touch, culture, point of view, experiences, emotions, intuition. But AI only knows language: so it’s cold, fuzzy, unhinged, boundless. AI can’t access reality.
Maggie Appleton, Forest Talk
Don’t Treat AI As Oracle
AI output isn’t great as final output. Treat it as a temporary artefact, not a source of truth.
It is helpful to summarize, extract structured data, find contradictions, compare, group, discuss and generate research questions.
Maggie Appleton, Forest Talk
AI Strengths vs. Human Strengths
AI Strengths | Human Strengths |
01 – Rapid ideation and discovery | 01 – Critical thinking |
02 – Extracting structured data | 02 – Emotional intelligence |
03 – Comparing and contrasting | 03 – Long-term memory |
04 – Grouping and clustering | 04 – Understanding social contexts |
05 – Exploring and summarizing | 05 – Broader understanding of reality |
06 – Refining and adjusting output | 06 – Diversity of opinions and expertise |
07 – Role-playing identities and lenses | 07 – Rich personal experiences |
08 – Organize, synthesize vast data | 08 – Intuition and gut feeling |
09 – Translate/structure natural language | 09 – Conscience and beliefs |
10 – Generating research questions | 10 – Intrinsic motivation |
11 – Automating repetitive tasks | 11 – Legal and ethical boundaries |
12 – Assisting humans | 12 – Value of human connection |
Great sources
AI interaction patterns
Build services that earn trust
Summary
- Allow users to adjust the temperature of output with knobs.
- Allow users to ask for more context to highlight some areas.
- Suggest scopes to limit output to a level of detail or expertise.
- Allow users to scope their queries to a domain, timeframe.
- Require AI to provide proof for each conclusion or insight.
- Add structure with chapters, segments to navigate data faster.
- Suggest specific presets and templates to boost efficiency.
- Help users make sense of data by clustering or summarizing it.
- Cluster/cache AI responses to avoid expensive calculations.
- Suggest style lenses (Concrete → Abstract, Lengthy → Short).
Conclusion and questions
AI is always looking at the past instead of the future.
Considering there are so many different ML methods and techniques, with different levels of fuzziness and accuracy, would it make sense to be more precise about which type of AI/ML method we are talking about? Speaking from a background of working on functionality different from LLMs: does it make sense to state which method we are referring to when we ask each other questions, like pronouns, but for AI methods?
Need to run comparison
Ideas: design what I was thinking – prompting experience