With contributions from: Vish Kaushik and Todd Quartiere
In 2023, Artificial Intelligence (AI) capabilities from leading companies like OpenAI/Microsoft, Google, and others have grown dramatically thanks to advances in machine learning (ML) and large language models (LLMs). Health industry organizations are eager to take advantage of these capabilities to enhance productivity, foster creativity, and elevate the service experience for patients and customers. Use cases range from marketing to diagnostics, and from customer/member support to advanced analytics and prediction models.
Leaders across the health landscape are asking themselves the following questions:
- What is our vision for AI? How might it fundamentally change our business model and customer value proposition?
- What is our technology roadmap for embedding AI into our products, services, and internal operations?
- Does our talent pool have the right skill sets to enable and embrace the needed change?
What has received less attention in the press and in health industry boardrooms is the critical role an organization’s culture will play in its ability to harness the potential of AI.
We believe that an organization’s culture will be the key differentiator separating the health organizations that succeed in adapting to the AI revolution from those that are left behind.
At Vynamic, we witness every day the impact of culture on an organization’s ability to deliver its strategy. Organizations that are purposeful about defining and tending to their culture use it as a force multiplier, enabling them to move more quickly toward a shared vision and purpose. Conversely, organizations that have neglected their culture risk internal conflict and misalignment.
We have identified four cultural cornerstones that we expect to have a marked impact on an organization’s ability to adapt to the AI future, and that are necessary for any significant transformation.
1. Trust
Trust is the cornerstone of effective collaboration. To trust fundamentally means to make ourselves vulnerable to the actions of others because we believe they will do right by us. When we choose to trust someone, we willingly give them power over us, trusting that they will not abuse that power.1 Companies with high-trust cultures get things done faster and more cost-effectively, while those with low-trust cultures are burdened with a tax on business performance.2
In adapting to a new AI-powered world, health industry organizations will rely on a foundation of trust woven throughout the organization. Employees must trust not only the competence of their leadership to navigate this change but, perhaps more importantly, trust that their leaders have their best interests at heart. Conversely, leaders need to trust their employees.
In a world where AI-powered shortcuts are readily available, leaders must be able to trust their employees to use (or to not use) these emerging tools in ways that protect the company and its stakeholders’ interests.
High-trust organizations leverage this strength to accelerate decision-making and collaboration across teams. We expect high-trust organizations will experience lower turnover than their counterparts, as employees trust that their organization will find ways to adapt to AI and the changing external environment.
2. Psychological safety
Closely related to trust, psychological safety refers to a person’s perception of the consequences of taking an interpersonal risk.3 It is a precondition for trust.
The successful adoption of AI in the health industry will require experimentation and risk-taking. For these to happen productively, two characteristics of psychological safety must be present.
- First, individuals must feel free to challenge the status quo and suggest novel applications of AI, even in the face of long-held company practices or business models. One must feel confident that leading such an experiment will not harm their career prospects because, as with most innovation, many of these AI experiments will undoubtedly fail. In company cultures that promote psychological safety, mistakes are seen as learning opportunities. The same mentality promotes rapid skill acquisition and adaptation to AI.
- Second, a culture high in psychological safety empowers those with concerns to speak up without fear of retribution. Not every idea will be a good one. But this two-way, open dialogue will be essential if organizations are to develop sound AI strategies, make decisions quickly, and drive execution of the change forward.
Recognizing the need to democratize AI innovation in a safe space, Merck recently launched myGPT@Merck for its internal colleagues, bringing large language models and AI-powered data to as many internal users as possible around the world. To ensure colleagues feel supported in using this powerful tool, the solution was carefully vetted by IT security experts and the works council before being rolled out.4
Notably, organizations whose employees experience low psychological safety are characterized by both persistent anxiety about future prospects and a dearth of good ideas.
3. Accountability for quality
While much of the conversation about AI centers on the tremendous efficiency gains it is likely to deliver, performing a given task in a fraction of the time at lower labor cost, it’s critical not to lose sight of AI’s relationship to quality. We share the widely held view that, to a large extent, AI won’t replace people; people using AI will replace people who don’t use it.
Humans working with AI have already been shown to perform better than either humans or AI alone. In the health industry, this holds true across several domains, few more important than the role AI can play in helping providers make time-sensitive diagnoses. At Mount Sinai, where AI is being deployed to rapidly sift through patient records and give time back to staff, leaders have stressed that AI tools are informing medical professionals’ decision-making, not replacing it.5 This illustrates the alignment needed between the respective roles of AI and people, across many functions, to produce outcomes that are not only more efficient but also of higher quality.
Organizations that do not set clear cultural expectations about the role of AI risk their teams using AI tools as a crutch and accepting mediocrity as a result. We call this the “mediocrity trap”.6 Generative AI tools like ChatGPT make it easy to do “just enough”, in ways a novice or an uncritical eye may not notice are eroding quality. In contexts that call for critical thinking or creativity, relying too heavily on AI risks producing undifferentiated or uninformed results. This not only has implications for the quality of the immediate task, but also shortchanges the individual’s growth and understanding of the topic at hand, leaving them less able to bring informed perspectives to future work.
4. Empathy
Empathy plays a pivotal role as the fourth cultural cornerstone we have identified for organizations adapting to the AI future. At its core, empathy is the ability to understand and share the feelings of another. Applied within an organizational culture, it means recognizing and valuing the emotional experiences, perspectives, and well-being of everyone within the institution and among its stakeholders.
Empathy is so foundational to innovation that it is codified as the first stage of the Design Thinking framework (Empathize → Define → Ideate → Prototype → Test), and companies that express it across their organizations will be better positioned to identify the most effective and human-centered uses of AI.7 Many have feared that outsourcing customer interactions to AI will result in a loss of empathy, but early research suggests, somewhat counterintuitively, that AI has the potential to increase empathy. In one 2023 study, responses drafted by an AI chatbot to patient questions posted on a public social media forum were rated by blinded evaluators as more empathetic than the responses physicians drafted themselves.8
We recognize empathy as one of the most foundational components of any change management plan, which must account for how various recipients may experience a message. Tools like ChatGPT can help pull the thread of empathy through to desired outcomes across a number of areas, such as synthesizing survey results, customer service interactions, and social media, or serving as a sounding board for draft communications.
Vynamic’s experience leading transformation initiatives across the health industry suggests that companies that fail to embed empathy into their approach may struggle to adapt to an AI world.
So, what is a health industry leader to do? We recommend the following actions to begin to prepare your organization for the impacts of artificial intelligence:
- Cultivate trust: balance vision and transparency
Leaders play a critical role in establishing clear messaging and commitment from the top while fostering an environment for safe exploration and input from the grassroots. In these relatively early days, teams understand that their leaders don’t have all the answers. But they need line of sight into their leaders’ thinking on the implications of AI, to trust that the organization is taking the steps needed to adapt and that they won’t be blindsided.
- Commit to your culture
As we stand on the cusp of transformative change within the workplace and beyond, it is imperative to uphold the core values of company culture. As highlighted above, the use of AI should be aligned with human purpose, not the reverse. Anchoring your organization in its meaning and purpose, and defining behavioral expectations (such as accountability for quality), provides a north star to guide how work gets done in both the pre- and post-AI world.
- Demystify AI
While the prospect of AI may be exciting for some, the unknowns associated with it can create a profound sense of anxiety for many. It’s important to expose your organization to AI early and often, so people can wrap their heads around the changes to come. Education about AI shouldn’t be treated as a one-and-done didactic exercise, but as an ongoing conversation with the organization. That said, leverage empathy to recognize that not everyone is starting from the same place, and be prepared to meet stakeholders where they are. Simultaneously, start thinking about the skills and competencies needed for tomorrow, and invest in upskilling your teams for that future.
- Provide freedom within a framework
Be transparent about the areas you want to explore as an organization. Real progress will require learning by doing. Provide guidance that gives employees a framework within which to experiment with confidence, without running afoul of company policy. And, importantly, provide the tools for them to do so in a safe space that protects the organization’s intellectual property and confidential data.
If you or your team is grappling with navigating the impacts of AI on your organization, Vynamic can help. Our team is experienced in leading strategic change efforts across the health industry, bringing together intimate knowledge of health trends and challenges with expertise in culture, organization design, technology transformation, and change management. In future insights, we will dive deeper into the specific steps you should be taking to transform your organization for the AI future.
Agree or disagree with our perspective? Contact us to let the authors know, through the channels listed below.
End notes:
1. Sucher, Sandra J.; Gupta, Shalene. The Power of Trust: How Companies Build It, Lose It, Regain It. Hachette Book Group, 2021.
2. Covey, Stephen M.R.; Merrill, Rebecca R. The Speed of Trust: The One Thing That Changes Everything. Free Press, 2006.
3. Edmondson, Amy C. The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth. John Wiley & Sons, 2019.
4. Mehanna, Walid. “Announcing myGPT@Merck: A Key Step Towards AI Literacy At Scale”. LinkedIn, May 2023: https://www.linkedin.com/feed/update/urn:li:activity:7070275820874817537/
5. Jeffery-Wilensky, Jaclyn. “How NYC hospitals are using artificial intelligence to save lives”. Gothamist, June 27, 2023: https://gothamist.com/news/how-nyc-hospitals-are-using-ai-artificial-intelligence-to-save-lives
6. Bhattacharya, Shourov. “The Automation of Mediocrity”. Medium, February 2, 2023. Accessed August 23, 2023: https://medium.com/polynize/the-automation-of-mediocrity-c91111535103
7. Interaction Design Foundation. “Design Thinking”. Accessed August 23, 2023: https://www.interaction-design.org/literature/topics/design-thinking#:~:text=Empathize%3A%20research%20your%20users’%20needs,Test%3A%20try%20your%20solutions%20out.
8. Ayers, John W.; Poliak, Adam; Dredze, Mark; et al. “Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum”. JAMA Intern Med. 2023;183(6):589-596. Accessed August 23, 2023: https://jamanetwork.com/journals/jamainternalmedicine/article-abstract/2804309