Designing for trust in AI chat
AI chat naturally creates a sense of intimacy and trust. It is consistently available, responds patiently, and doesn't judge like a human might. Most AI chat feels trustworthy even if we can't verify that it is.
Once we notice how an AI chat UX can instill trust so easily, we have choices about how to build with it. Do we lean into that false intimacy to build audiences and drive engagement? Or do we design with some restraint and deliberateness about where we invite trust and where we maintain appropriate distance?
I wrote about how we're approaching this problem at Enrich Network.
Key points:
- emerging evidence of positive properties of AI chat: promising for sensitive topics and vulnerable populations
- BUT earned trust > vibed trust
- false intimacy is an emerging AI dark pattern; don't exploit it
- use accurate tools where data and logic fidelity count (see the sketch after this list)
- guided reasoning > prescriptive advice
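To make the "accurate tools" point concrete, here's a minimal sketch of the routing idea. None of this is from the original post: the function names and the mortgage example are hypothetical, and a real system would use the model's own tool-calling rather than a regex. The point is just that when a reply hinges on numbers or logic, the chat layer hands off to deterministic code instead of letting the model improvise.

```python
import re

def mortgage_payment(principal: float, annual_rate: float, years: int) -> float:
    """Deterministic amortization math: the one part the model must not improvise."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

def call_llm(message: str) -> str:
    """Stand-in for the conversational model (stubbed for this sketch)."""
    return "(conversational response from the model)"

def route_message(message: str) -> str:
    """Route fidelity-sensitive requests to an exact tool; everything else
    falls through to the conversational model."""
    m = re.search(r"\$?([\d,]+).+?([\d.]+)%.+?(\d+)\s*year", message)
    if m:
        principal = float(m.group(1).replace(",", ""))
        rate = float(m.group(2)) / 100
        years = int(m.group(3))
        payment = mortgage_payment(principal, rate, years)
        return f"Monthly payment: ${payment:,.2f} (computed, not generated)"
    return call_llm(message)

if __name__ == "__main__":
    # "$300,000 at 6.5% over 30 years" gets routed to the deterministic tool
    print(route_message("What's the payment on $300,000 at 6.5% over 30 years?"))
```

Whatever the routing mechanism, the division of labor is the same: the model handles the conversation, and exact code handles the arithmetic.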
In researching this, I found a number of helpful studies in this emerging area of human-chatbot interaction:
Fear of Judgment in Human-Chatbot Interaction — Oxford, 2024. People reported less fear of judgment when talking to chatbots than when talking to humans.
Trust Formation in Chatbot Interactions — PMC, 2023. Chatbots' consistent availability and non-judgmental presence contribute to building user trust in service contexts.
Companion Chatbots and Loneliness — MIT, 2024. Companion chatbot use doesn't directly predict loneliness; the relationship is mediated by individual factors like neuroticism and social attraction.
Daily AI Companion Use and Psychosocial Outcomes — 2025. In a study of ~1,000 participants, higher daily chatbot usage correlated with worse psychosocial outcomes across the board.
AI Mental Health Chatbot Ethics — Brown University, 2025. Many chatbots fail basic ethical standards for sensitive conversations, including poor crisis management and creating false impressions of empathy.
AI Chatbots vs. Licensed Therapists — University of Minnesota, 2025. AI chatbots responded appropriately to mental health scenarios only 60% of the time, compared to 93% for licensed therapists.