ARTIFICIAL INTELLIGENCE
Psychological Safety in Healthcare Drives High-Performance Teams — and AI Should Support It
Discover how psychological safety drives team performance and how the right AI tools can strengthen safety culture and communication.
May 8, 2025
Surgical Safety Technologies
Frontline healthcare workers in busy operating rooms, trauma bays, and acute care units face relentless pressure to perform - often without the psychological safety to speak up, ask questions, or challenge decisions that may impact patient safety. Many fear that admitting uncertainty or reporting near misses could lead to blame or consequences. But the evidence is clear: psychological safety in healthcare prevents harm, improves outcomes, and protects the well-being of care teams and patients alike.
Originally introduced by Harvard researcher Amy Edmondson,¹ psychological safety has become a foundation for high-performing, resilient, and error-aware teams. Psychological safety in healthcare is not a “nice-to-have” - it is a patient safety requirement, and as artificial intelligence (AI) becomes more integrated into clinical routines, it is increasingly imperative to protect.
Healthcare teams need assurance that these tools won’t replace human judgment or create a culture of surveillance. Instead, the right AI should support learning and communication that keeps everyone - patients and providers - safe.
Why Psychological Safety in Healthcare Is Vital
Healthcare is unpredictable, fast-paced, and often emotionally intense. No one knows this better than nurses, residents, surgeons, technicians, and attending physicians working under pressure. Even the smallest hesitation to speak up in these environments can lead to errors, missed opportunities, or harm.
The inability to raise concerns or learn from mistakes can take a serious toll on providers - eroding their confidence, increasing emotional strain, and making it harder to stay engaged and effective at the bedside.
A 2024 study published in the Joint Commission Journal on Quality and Patient Safety² found that psychological safety among healthcare workers was directly linked to lower burnout and greater engagement in quality improvement efforts. Teams that feel safe communicate more openly, recover from errors faster, and work more cohesively in high-pressure situations.
Psychological safety enables:
Open communication: Team members speak up about concerns, questions, or potential mistakes
Collaborative problem-solving: Diverse perspectives are welcomed, leading to better decisions
Faster error detection: Staff report near misses and adverse events, driving proactive change
Lower burnout and turnover: Supportive environments reduce stress and improve retention
Trust, mutual respect, and freedom from fear form the core of safe, high-functioning teams.
AI in Healthcare: Risk or Opportunity?
Artificial intelligence is rapidly becoming part of clinical workflows - from documentation support to surgical performance analytics and risk prediction. While these tools promise to improve efficiency and care quality, they also raise important concerns³ for frontline teams:
Will AI track and judge my performance?
Could it be used against me in peer review?
Will it replace my clinical judgment, or create expectations that it should?
Who sees this data, and what will they do with it?
Poorly implemented AI-driven technology can feel like surveillance, adding pressure and reducing autonomy. That’s why maintaining psychological safety in healthcare must remain front and center during technology development and deployment.
Research warns that poorly implemented digital systems may contribute to “moral distress”⁴ and disengagement among clinicians. As such, it’s critical to evaluate not only the type of AI being introduced, but also the organization’s commitment to using it responsibly, non-punitively, and in ways that genuinely support team learning and trust.
The Right AI Protects Providers and Strengthens Teams
AI tools can support rather than erode psychological safety in healthcare when they are thoughtfully designed and implemented. Key features of AI-driven technology should include:
1. Objective Feedback Without Blame
AI can analyze clinical events and identify trends that matter - from procedural delays to communication breakdowns. Tools like the Black Box Platform™ anonymize and aggregate data to inform system-wide learning, not personal punishment. This supports a just culture and encourages reporting.
Research has shown that teams using Black Box Platform insights improved their communication and reduced avoidable errors.⁵ This kind of feedback promotes curiosity, not defensiveness.
AI-driven technologies like the Black Box Platform also protect psychological safety by de-identifying captured data. The platform mutes spoken protected health information (PHI), changes voice pitch, and blurs facial images so that data review focuses on what is happening, not who is involved.
2. Structured Debriefs and Learning Loops
The right AI tools support structured debriefs, helping keep them clear, objective, and non-punitive. These conversations help teams learn from real events without placing blame.
A 2024 study in BMC Medical Education⁶ showed that such structured, peer-led debriefs increase learning and psychological safety.
AI can also flag patterns that manual reviews or anecdotal, data-free debriefs may miss, making learning faster, deeper, and more actionable.
3. Psychological Safety as a Metric
Advanced platforms now assess team dynamics directly. For instance, SST research has explored how non-technical skills and communication patterns influence outcomes and team cohesion.⁷
Monitoring indicators like speaking time, turn-taking, and tone allows leaders to proactively support psychological safety.
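To make these indicators concrete, here is a minimal sketch of how speaking-time share and turn-taking might be computed from a de-identified, timestamped utterance log. The log format, role labels, and function names are illustrative assumptions for this example only, not a description of the Black Box Platform or any vendor’s implementation.

```python
# Hypothetical sketch: simple team-communication indicators
# (speaking-time share and turn-taking counts) from a de-identified,
# timestamped utterance log. All names and data here are illustrative.
from collections import defaultdict

# Each entry: (anonymized role, start time in seconds, end time in seconds)
utterances = [
    ("surgeon", 0.0, 12.5),
    ("nurse", 13.0, 15.2),
    ("surgeon", 15.5, 30.0),
    ("anesthesiologist", 31.0, 34.5),
    ("nurse", 35.0, 38.0),
]

def speaking_time_share(log):
    """Fraction of total talk time attributed to each anonymized role."""
    totals = defaultdict(float)
    for role, start, end in log:
        totals[role] += end - start
    grand_total = sum(totals.values())
    return {role: t / grand_total for role, t in totals.items()}

def turn_taking_counts(log):
    """How often the conversational floor passes from one role to another."""
    handoffs = defaultdict(int)
    for (prev_role, *_), (next_role, *_) in zip(log, log[1:]):
        if prev_role != next_role:
            handoffs[(prev_role, next_role)] += 1
    return dict(handoffs)

print(speaking_time_share(utterances))  # e.g., surgeon dominating talk time
print(turn_taking_counts(utterances))   # e.g., few nurse-initiated handoffs
```

Even simple aggregates like these, tracked over time and reviewed without attribution to individuals, can surface patterns such as one role consistently dominating discussion, which leaders can then address through coaching rather than blame.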
4. Democratized Data Access
Sharing AI insights across the care team, rather than restricting them to administrators, reinforces trust and accountability. Transparent access flattens hierarchies and builds a culture of continuous improvement.
This aligns with Edmondson’s theory¹ that psychological safety requires equality in voice, where every team member, regardless of title, feels heard and valued. Transparency reduces the fear that “AI is watching” and replaces it with shared ownership of safety and quality.
Healthcare organizations that prioritize visibility, equity, and learning are more likely to maintain psychological safety in healthcare,⁸ particularly during digital transformation.
Aligning AI with Human-Centered Care
AI will never replace the relationships, skills, or empathy that define healthcare - but it can, and should, amplify them. AI-driven technology, designed with human dynamics in mind, empowers teams to learn, improve, and deliver safer care together.
Psychological safety in healthcare is not just a cultural issue - it's a clinical one. Choosing AI tools that support psychological safety is a strategic imperative, one that drives better performance, stronger teams, and safer outcomes.
Explore more research-backed insights by downloading the fact sheet, “AI in Healthcare: Maintaining Psychological Safety.”⁹
Recommended Reading
Edmondson, A.C. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–383. https://doi.org/10.2307/2666999
Rotenstein, L., Wang, H., West, C.P., et al. (2024). Teamwork Climate, Safety Climate, and Physician Burnout: A National, Cross-Sectional Study. Joint Commission Journal on Quality and Patient Safety, 50(6), 458–462. https://doi.org/10.1016/j.jcjq.2024.03.007
Surgical Safety Technologies. (2025). Top 10 Concerns of Leveraging AI in Healthcare [White Paper]. www.surgicalsafety.com/resources/top-10-concerns-of-leveraging-ai-in-healthcare
Naamati-Schneider, L., Arazi-Fadlon, M., Daphna-Tekoah, S. (2024). Navigating moral and ethical dilemmas in digital transformation processes within healthcare organizations. Digital Health, 10, 20552076241260416. https://doi.org/10.1177/20552076241260416
van Dalen, A.S.H.M., Jung, J.J., Nieveen van Dijkum, E.J.M., et al. (2022). Analyzing Human Factors Affecting Surgical Patient Safety Using Innovative Technology: Creating a Safer Operating Culture. Journal of Patient Safety, 18(6), 617–623. https://doi.org/10.1097/pts.0000000000000975
He, X., Rong, X., Shi, L., et al. (2024). Peer-led versus instructor-led structured debriefing in high-fidelity simulation: a mixed-methods study on teaching effectiveness. BMC Medical Education, 24, 1290. https://doi.org/10.1186/s12909-024-06262-9
Boet, S., Burns, J.K., Brehaut, J., et al. (2023). Analyzing interprofessional teamwork in the operating room: An exploratory observational study using conventional and alternative approaches. Journal of Interprofessional Care, 37(5), 715–724. https://doi.org/10.1080/13561820.2023.2171373
Lucian Leape Institute. (2019). Safety Is Personal: Partnering with Patients and Families for the Safest Care. https://www.ihi.org/resources/publications/safety-personal-partnering-patients-and-families-safest-care
Surgical Safety Technologies. (2025). AI in Healthcare: Maintaining Psychological Safety [Fact Sheet]. www.surgicalsafety.com/resources/ai-in-healthcare-maintaining-psychological-safety