I did not set out to study trust. I wanted to understand why smart, capable people keep pushing back against AI systems that are, by every measurable standard, performing well. The gap between what the technology promises and what people actually experience at work kept appearing in the literature I was reviewing, and I could not ignore it.
What I found, after reviewing dozens of empirical studies on human-AI collaboration, is that trust in AI is one of the most misunderstood dynamics in the modern workplace. Organizations treat it like a technical problem. Roll out the tool, run a training session, measure adoption rates. But trust does not work that way. It never has, and AI does not change that.
〰️ The People Pushing Back Are Often the Best in the Room
One of the first things that struck me in the research was the relationship between expertise and skepticism. We tend to assume that resistance to AI is a knowledge problem. If people just understood the technology better, they would trust it more. The data tells a different story.
Experienced professionals, such as clinicians, senior analysts, and seasoned decision-makers, are consistently less likely to defer to AI recommendations than their less experienced colleagues. And when you sit with that finding for a moment, it makes complete sense. These are people who have spent years developing nuanced judgment. They know that context matters, that edge cases exist, and that the situation in front of them is rarely as clean as a dataset. When an AI delivers a recommendation without revealing its reasoning, an expert does not see efficiency. They see a system asking them to abandon the very thing that makes them good at their job.
The practical implication here is significant. If your organization's strategy for building AI trust is simply to increase familiarity with the tool, you are likely to win over the people who needed little convincing to begin with, while losing the ones whose buy-in actually matters most.

〰️ Updating Your AI Can Break Your Team
This one surprised me when I first encountered it in the research, and I think it will surprise most leaders too.
When a team works alongside an AI system over time, something quiet and important happens. They learn where it tends to be wrong. They build a mental map of its limitations and, without always realizing it, they start compensating. A doctor knows the AI underweights certain patient histories, so she pays extra attention there. An analyst knows the model struggles with volatile markets, so he applies more scrutiny in those moments. The human and the AI have, in an informal but functional way, become a team.
Then the AI gets updated. It is more accurate now. But it fails in different places. The mental map the team built is no longer valid, and they do not know it yet. They keep compensating for the old weaknesses while new ones go unnoticed. Performance drops, sometimes significantly, not because the AI got worse but because the update disrupted a working human-AI relationship.
The lesson for organizations is that AI updates need to be communicated and managed the way any significant change to a team's workflow would be. People need to know what changed, where the new limitations are, and how to recalibrate. Quietly releasing a more accurate model in the background is not a neutral act.
〰️ Transparency Is Doing a Lot of Heavy Lifting, and It Is Often Misunderstood
Almost every organization I read about in this research valued transparency in its AI systems. But there is a version of transparency that actually makes things worse.
Designing an AI to feel more human, by giving it a conversational tone, having it use first-person language, and presenting its outputs in warm and accessible ways, does increase comfort. People feel more at ease with it. But comfort and trust are not the same thing. What the research shows is that surface-level friendliness can tip people into what researchers call automation bias, a state in which they stop questioning the AI's outputs altogether. They become too trusting, and that is its own kind of failure.
Genuine transparency means showing people how the AI reached its conclusions. It means being honest about where the system is uncertain or limited. It means giving people enough information to make a real judgment rather than just a comfortable one. That kind of transparency is harder to design and less immediately satisfying, but it produces something far more valuable: people who are engaged, critical, and genuinely collaborating with the system rather than just deferring to it.
〰️ Onboard AI the Way You Would a New Colleague
The research draws a compelling parallel between how organizations bring new employees into a team and how they should, but rarely do, bring AI into the workplace.
When a new colleague joins, there is a process. Introductions are made, roles are explained, expectations are set, and trust is built gradually through shared experience. We understand instinctively that you cannot just place a new person in a team and expect seamless collaboration from day one.
AI deserves the same thoughtfulness. When respected leaders actively endorse a system, explain its purpose, and are clear about what it is designed to do and what falls outside its capabilities, employees respond differently. They are more willing to engage, more willing to experiment, and more willing to bring their own judgment to the collaboration rather than either blindly following or reflexively resisting.
Role clarity is especially important here. People need to understand that the AI is not competing with them. It handles pattern recognition, data aggregation, and probability at a scale no human can match. They bring context, judgment, ethical reasoning, and the kind of situational awareness that comes from being human. When those roles are clearly defined and both feel genuinely valued, the collaboration works.
〰️ Keeping Humans in the Loop Is Not Enough on Its Own
There is a widespread assumption that the safest and most trustworthy design for human-AI collaboration is one where a human reviews every output before anything happens. A human in the loop, checking and approving. It sounds sensible.
But the research points to a complication. When the human and the AI are essentially performing overlapping functions, both analyzing the same information and arriving at similar kinds of recommendations, the human begins to feel redundant. Their professional identity and sense of contribution start to feel threatened. Rather than producing good collaboration, this setup tends to produce quiet rivalry and disengagement.
What generates genuine trust is parallel specialization, where the human and the AI are doing meaningfully different things that complement each other. The AI does what it does best. The human does what they do best. Neither feels like a redundant version of the other. In this configuration, people do not just tolerate the AI. They rely on it in the same way you rely on a colleague whose strengths are genuinely different from your own.
〰️ Trust Has to Be Tended, Not Just Established
Perhaps the most important thing I took away from this research is that trust between humans and AI is not a problem you solve once. It is something that has to be actively maintained.
It shifts as the AI's performance changes. It shifts as people gain experience and their expectations evolve. It shifts as organizational pressures change, as team composition changes, as the nature of the decisions being made changes. A team that trusted an AI system six months ago may have a very different relationship with it today, for reasons that have nothing to do with the technology itself.
This means that organizations need mechanisms for ongoing evaluation. Not just "is the AI performing well?" but "how is our team's relationship with this system evolving, and is that relationship producing the outcomes we need?" Those are harder questions to ask, and they require a different kind of attention than a quarterly accuracy report. But they are the questions that actually matter.
What I keep coming back to, after all of this research, is something quite simple. The organizations that will get the most out of AI are not necessarily the ones with the most sophisticated systems. They are the ones that take the human side of the collaboration seriously: the ones that listen to their experts, manage change carefully, invest in genuine transparency, and understand that trust, once lost, is very hard to rebuild.
AI is becoming a genuine partner in how decisions get made. Whether that partnership actually works depends far less on the algorithm than most people think.
I wrote this piece as a reflection on one of my papers: Dynamics of Trust: Unpacking Trust in Human-AI Collaboration in Decision-Making.
