HBR EDITOR AMY BERNSTEIN: Nitin, you’re a management consultant, you lead Deloitte’s global AI business. What’s the most interesting conversation you’ve had recently with a client?
DELOITTE PRINCIPAL NITIN MITTAL: A client, the CFO of the client, basically said, “If I apply generative AI in my company – and the use case that you, Nitin, articulated struck me, which is applying it in a call center for customer care. Why? Because the marginal cost of conversing with that customer using a virtual digital agent is zero, and because the marginal cost is zero, I know if I apply it, it’ll drop my cost structure by 60 to 70%. But what does it do to all the employees that I have who are from a disadvantaged part of society…” Now the CFO was white, “…disadvantaged part of society, who essentially are earning their daily living and have no other jobs?”
AMY BERNSTEIN: I mean, that seems like a perfectly reasonable question. How’d you answer?
NITIN MITTAL: I punted it to a certain degree because it’s a difficult one to answer.
AMY BERNSTEIN: Yeah.
NITIN MITTAL: The reality is, yeah, it’ll lead to job losses. And the only way that you’ll be able to overcome it is to reskill yourself for a different job, as opposed to being in a call center. Reskill yourself, get vocational training to be, for example, a prompt engineer who actually prompts and trains the models, rather than working in the call center. The pay is probably the same, but there has to be a willingness both by the individual to get retrained and by the employer to do the retraining.
AMY BERNSTEIN: Welcome to How Generative AI Changes Everything, a special series from the HBR IdeaCast. Read just about any business history or any case study, and you realize just how much success depends on company culture. The unwritten rules of behavior can make the difference between capitalizing on a big shift or missing it altogether. You can’t have successful innovation without the right culture, you can’t compete successfully without the right culture, you can’t thrive over the long term without the right culture. And it follows that if you want to bring your organization into a future that includes generative AI, you need to build the right culture for it.
This week, How Generative AI Changes Organizational Culture. I’m Amy Bernstein, editor of Harvard Business Review and your host for this episode. In this special series, we’re talking to experts to find out how this new technology changes workforce productivity and innovation, and we’re asking how leaders should adopt generative AI in their organizations and what it means for their strategy. Later on in this episode, you’re going to hear from Harvard Business School professor Tsedal Neeley. We’re going to talk through the known risks and how leaders can respond. But, first, I’m talking to Nitin Mittal. He runs the global AI business at Deloitte, and he helped develop the firm’s own implementation of generative AI. He’s also a coauthor of the book, All-in On AI: How Smart Companies Win Big with Artificial Intelligence. Nitin, thanks for coming on the show.
NITIN MITTAL: Thank you.
AMY BERNSTEIN: When you walk into a client’s organization, what are the signs that you look for that say, this organization is ready to move into AI?
NITIN MITTAL: First impressions don’t always tell the whole story in terms of what an enterprise may be doing. But, having said that, if an organization already has some kind of a setup, like a center of excellence or a group that is focused on AI and has been experimenting and working with different business units, that is a very positive sign. On the other hand, if they just have a data science group that has been conducting proof of concepts without the connectivity to business, they’re not thinking about the culture of the organization, and they would very likely not be progressing. Those are things to kind of look out for. The other aspect to look out for is the leadership and the human side.
AMY BERNSTEIN: Yes. How do you advise your clients to lead, to shape their organizations into cultures that will embrace AI rather than run from it in fear?
NITIN MITTAL: Yeah. So, what is being noticed and what is being observed in many of these organizations is that the pressure to move ahead at speed and with skill is coming from the employees themselves. If we don’t provide them these particular tools, and we don’t provide them all the ways of augmenting themselves through generative AI, they are going to find their own ways, and that could lead to unfortunate circumstances where they end up using, let’s say, open source models and start leaking an organization’s data through the usage of those open source models.
AMY BERNSTEIN: So, you just alluded to the need for guardrails, right?
NITIN MITTAL: That is correct.
AMY BERNSTEIN: So, I wonder then what the role of culture is in all of this. I mean, is there a way to communicate what’s okay and what’s not okay when you, an employee, are out there experimenting with ChatGPT, generative AI, which we want you to do within certain bounds, right? How does culture come into play here?
NITIN MITTAL: My view is that no AI system is going to magically somehow be responsible by itself without the culture of that organization being responsible unto itself to start with. It cannot be dictated by the CEO, it cannot be governed by the board, it cannot be mandated by the leadership. It is the prerogative, it is the sense of accountability of every single person to essentially always think about the right usages, at the right time, for the right areas where AI can be applied.
AMY BERNSTEIN: So, Nitin, as you talk to your clients, are you seeing alignment between the management team and boards, or misalignment? What’s going on there, on the generative AI front?
NITIN MITTAL: I would not necessarily say there’s misalignment. Rather, I would say it’s a lot about questions. The board certainly has a lot of questions of management, but management also has questions. And it’s all essentially around: what is the impact to our business? How fast would that impact materialize? How disruptive could this be? And ultimately, how do we need to respond, both culturally and from a safety and responsibility standpoint, to this phenomenon? That’s the set of questions being asked.
AMY BERNSTEIN: How are those questions being answered?
NITIN MITTAL: Frankly, they’re not necessarily being precisely answered. Everyone is trying to get their arms around it. We have a pretty good idea, but we also have to kind of learn. In Deloitte, for example, we have something called the trustworthy AI framework, and by its very nature, it’s a framework. It gives a set of guidelines, protocols, and methods in terms of what to think, when to apply those methods, and how to apply those methods. But, every organization also has to make sure that their employees are culturally sensitive to applying it in a responsible manner.
AMY BERNSTEIN: What does that mean?
NITIN MITTAL: The same way, the same way that every employee has a bond in terms of how they work with their coworkers, how they show up, what tasks they actually perform, and consequently, what team environment they want. Think of that bond extending beyond just the human coworker, extending to essentially a non-carbon, non-bipedal coworker that happens to be an intelligent machine.
AMY BERNSTEIN: So, how do companies that do it right tease out that bond you just described and turn it into a culture that can guide an organization forward on the use of AI?
NITIN MITTAL: There are perhaps not many companies who have perfected it. But there are certain elements that are critical to teasing this out. First and foremost, education on cultural fluency. What would it take for our employees to apply things in a responsible manner, in a safe manner, for the benefit of not only the business, but their customers and society at large?
AMY BERNSTEIN: Does any organization train on cultural fluency in a way that you would want to share with other organizations?
NITIN MITTAL: In pockets. I’ve seen it in pockets. I’ve seen it in pockets in a few organizations that we serve. I’ve also seen it in pockets in Deloitte as an example. But, that cultural fluency has typically extended to the realm of being culturally sensitive, particularly if you’re a multinational organization, not necessarily culturally sensitive in the context of the rise of intelligent machines.
AMY BERNSTEIN: So, it sounds as if leaders then have to start making room for these foundational questions. I mean, these are questions we’ve never had to ask ourselves before, right?
NITIN MITTAL: These are questions we have never had to ask ourselves before, because now, with generative AI, that concept of we, the people, also transcends to we, the people and machines. And that’s where the cultural boundaries have to be pushed. What would it mean for a factory worker to have a robot as a coworker? What would it mean for a professional consultant to have an AI model that is augmenting their job, augmenting and aiding the insights they bring, and consequently being a coworker on their team? What does it mean for a medical professional to essentially have an AI assistant that is aiding with diagnosis? What does it mean?
AMY BERNSTEIN: So, this will call on everyone’s powers of imagination, but also everyone’s commitment to accountability and trust.
NITIN MITTAL: Absolutely. This is where I was kind of going earlier. It has to be for everyone, by everyone.
AMY BERNSTEIN: Nitin, you’ve described what progressive organizations are starting to do, including Deloitte. Where have you seen organizations kind of miss the mark? Where do they go wrong?
NITIN MITTAL: Well, there are definitely telltale signs of it.
AMY BERNSTEIN: Yeah, what are they?
NITIN MITTAL: Frankly, the organizations that absolutely miss the mark are the ones with the viewpoint that, “Well, this is yet another technology, probably going through a hype cycle, and consequently we’ll just have this particular group in IT, or this data science function that we have, or this set of individuals, look into it and take it forward.” That is when they miss the mark. Rather, the organizations that view this as a moment in time where they have to question the basis of how they compete, how they thrive, what changes they need to make, both from a product or a service perspective, but more important from a culture and a people standpoint, are actually the ones who are able to progress forward. If that can be tackled first, you will be a learning organization, you will thrive in a digital economy, and you will redefine the market that you’re in.
AMY BERNSTEIN: Nitin, thank you so much.
NITIN MITTAL: Well, thank you.
AMY BERNSTEIN: Coming up after the break, I talk to Harvard Business School professor Tsedal Neeley about adopting generative AI in your organization, and the right ways to do that effectively and ethically. Stay with us.
AMY BERNSTEIN: Welcome back to How Generative AI Changes Organizational Culture. I’m Amy Bernstein. Joining me now to discuss how to adopt generative AI within your own company is Tsedal Neeley. She’s a professor at Harvard Business School, and she wrote the HBR Big Idea article, “8 Questions About Using AI Responsibly, Answered.” Tsedal, thanks for joining me.
HBS PROFESSOR TSEDAL NEELEY: I’m so happy to be with you, Amy. Thank you for having me.
AMY BERNSTEIN: I’m so happy you’re here. So, I have more than eight questions to ask you, all right?
TSEDAL NEELEY: Great!
AMY BERNSTEIN: In your research, you’ve studied how global companies and smaller organizations alike become leaders at digital collaboration, remote work, and hybrid work. What about generative AI? Are organizations set up for it?
TSEDAL NEELEY: Currently, organizations are neither set up for it, nor do they fully understand it, but the adoption and the curiosity around it has been extraordinary, and so I think people will start figuring it out very quickly.
AMY BERNSTEIN: What kinds of changes are needed? They’re cultural, they’re organizational, what kind?
TSEDAL NEELEY: I think the first thing that organizations need to ensure happens is that people understand these technologies fully. To really develop some form of fluency, a minimum level of fluency around what the technology is, what it isn’t, what are the limitations, what are the risks, and what are the opportunities. So, everyone needs to start experimenting with it, but it’s really important to do it very carefully.
AMY BERNSTEIN: Now, I have to raise the specter of change management. What does this mean for change management? It’s hard enough under what were, until now, normal circumstances.
TSEDAL NEELEY: Absolutely. You know what? Imagine change getting motivated by top-down imperatives or mandates. Here we have a scenario where there’s a lot of bottom-up activity.
AMY BERNSTEIN: With what you’re describing, so much of the change is bottom-up rather than top-down. What is leadership then?
TSEDAL NEELEY: Leadership, in this kind of scenario, means you need digital leaders with digital mindsets to very quickly mobilize and begin organization-led experiments and implementations of these tools, because otherwise, you’re going to have individuals just experimenting and playing with them, which is actually a very, very good thing, but not understanding how they work. You can easily and unwittingly make a mistake that is very consequential to an organization. An example of this is uploading proprietary information, organizational confidential materials, because anything you put into these systems gets fed into the overall model, which is why leaders have to guide the way these things are implemented. We need to think about these tools no differently than the internet, which all of us got access to 30 years ago. You can’t stop it, you can’t control it, unless you set the right boundaries and have these ethical codes that people follow, and even ways to protect the company.
AMY BERNSTEIN: So, do we need a new playbook to manage this change?
TSEDAL NEELEY: We need to take our playbook and add technology and speed and buy-in and learning to it. Organization-led opportunities and experiments become important, which means having some people start to work with these tools and document what they’re learning. Also, think about where we automate and where we can do different types of strategic, creative, interpersonal work. The third thing is you have to have a culture of responsible AI use from the start. This is not an afterthought, this has to be embedded in all that you do from the start. People need to be trained, and every single decision they make with their generative AI use has to have ethical considerations, because it’s easy to get in trouble around this. Then, you have to pilot, you have to iterate, you have to be open to continuous learning and constant adapting, and you have to have a communication plan where people are open and understand that these changes are happening so fast that we have to be attuned to them and be prepared to implement them. Then finally, the culture change. You have to encourage a culture of flexibility, of innovation, of continuous learning, rewarding people who are adopting the new technologies in the right ways, and you have to provide support and resources for those who are struggling and for the many people who are very afraid of these changes. So, you’ve got to make sure no one gets left behind. This type of change requires skill building and shifts to the nature of work in your organization. Many, many shifts.
AMY BERNSTEIN: So, Tsedal, talk a little bit about skill building, because that can be pretty challenging. You have people who are starting out with different skill levels, but also very different attitudes and levels of acceptance and fear. How do you do skill building in an organization with this nascent technology coming on so hard?
TSEDAL NEELEY: So, imagine a two by two. You know? You’re with an HBS professor, we have to come out with a two by two. You should have expected this.
AMY BERNSTEIN: Knew it, knew it.
TSEDAL NEELEY: And you knew it. Imagine a two by two, and imagine a framework called the Hearts and Minds framework. On one side, you have buy-in, where people have to believe that this is important. And another dimension is the belief that they could do it. Or another word for this is, do I have the self-efficacy for this? So, if you have high buy-in and high sense of efficacy, you are going to be inspired, excited. You are in a great, great spot. But, for those who do not, who may be struggling, it is incumbent on leaders of organizations to do the right type of messaging, to also build awareness and provide resources and support for people to learn these things.
AMY BERNSTEIN: How do you do this at scale over time? How do you sustain this?
TSEDAL NEELEY: It’s actually something that we’ve done many times when it comes to scaling. Number one, individual managers need to understand where their team members are. So, you’re bringing it down to the unit of analysis of a team. Team managers need to understand where people are in terms of their buy-in and their sense of efficacy. With that, there has to be an organized training guide, learning guide, tutorials. Continuous learning is a mandate in this era of dramatic technological shifts and changes.
AMY BERNSTEIN: It sounds like it’s sort of the actual learning along with the compassionate piece of leadership, helping people embrace it, the hearts and minds piece.
TSEDAL NEELEY: Absolutely. Do you need to help more with the mind part or do you need to help more with the heart part? The other thing I’ll say here is there’s a phenomenon in this type of change called contagion. We do this as a group, together we have collective efficacy, and together we get through it. You can’t let individuals flounder and get into a sense of job insecurity, et cetera. This is why the team level is so important for this.
AMY BERNSTEIN: That’s a great insight. It puts so much of the agency into the hands of the manager, the team leader, it doesn’t just happen from the top-down.
TSEDAL NEELEY: Absolutely.
AMY BERNSTEIN: Yes.
TSEDAL NEELEY: It needs to touch every member of the organization.
AMY BERNSTEIN: So, when we’re talking about hearts and minds, Tsedal, we really have to talk about the fear factor as well. A lot of tasks are going to get automated, and the natural conclusion for many of us to draw is that we will get automated out of our jobs. What do you say to that? What do you say, Tsedal, and what should a manager say to his or her team?
TSEDAL NEELEY: Listen, there’s no doubt that there are going to be changes to the nature of jobs. People’s jobs will shift. But, one thing we know is that every technological revolution has created more jobs than it has destroyed. Many writers are panicked, and I understand that completely, but it’s important to understand that these technologies work well with humans in the loop, meaning it’s human intelligence meeting artificial intelligence. Now, the reality is the long-term effects of generative AI are not fully known to us. We know they’re going to be complex, we know there are going to be periods of job displacement, there’s going to be a period of job destruction for some industries, and this is why I always come back to the notion of education, training, upskilling and reskilling, and thinking about the various areas where generative AI cannot replace us: interpersonal work, empathy, various forms of creativity. So, there’s a lot for us to continue to do, but it’s important to understand that, ultimately, we can’t even conceive of the new things that are going to come out of this. So, there will be many more opportunities, many more things, many more industries that we can’t even imagine that are going to be formed. Will things remain the same for individuals in terms of jobs, companies and industries? Unlikely.
AMY BERNSTEIN: So, it’s a very new technology, we don’t have a lot of guardrails around it, we don’t even really know what it’s capable of, we get a taste of it if we play with it. What are some of the risks ethically speaking here?
TSEDAL NEELEY: So, generative AI comes with many risks. The first one is it can perpetuate harmful biases, meaning deploying negative stereotypes or minimizing minority viewpoints, and the reason for this is the source of the data for these underlying models, the large language models, is the internet, documents, really everywhere and anywhere. As long as we have biases and stereotypes in our societies, these language models will have them as well. The second thing is misinformation from making stuff up, falsehoods, or violating the privacy of individuals by using data that is ingested, embedded in these models without the consent of people, so personal data can get into these. So, these are the ethical considerations that are important to both understand and to develop codes of ethics in your organizations to avoid them, and there are ways to avoid them. By the way, regulation is coming fast, the government is working on it at the state level, at the national level, but regulation still lags adoption.
AMY BERNSTEIN: Let’s talk about harmful bias a bit. How do we prevent it?
TSEDAL NEELEY: There are a couple of things to consider. One is to always understand the source of the data. Generative AI may not give you citations, or even the right citations, so if it spits out some information, it’s important to check it and to double check it and to triple check it, to triangulate, to try to find primary sources. It’s also important to have diversity in your company to vet these things. And if you’re building models, internal large language models, which is where I think this is going to go for many companies, you need diversity, you need women, you need people of color, helping design these systems, and you need to set strict rules around documentation, transparency, and understanding where all of this data is coming from.
AMY BERNSTEIN: Doing the legwork.
TSEDAL NEELEY: Absolutely, must do the legwork.
AMY BERNSTEIN: Yeah. Yeah. There’s a job that isn’t going away, huh?
TSEDAL NEELEY: Exactly. These tools, in my mind, get us started, and we need to do additional work before the output is ready for primetime.
AMY BERNSTEIN: So, you mentioned transparency. What about how you, your team, your organization, is using generative AI? What are the responsibilities there in terms of transparency?
TSEDAL NEELEY: It’s interesting because I don’t think everyone will be reporting that they’ve used ChatGPT for any and every little thing. I mean, that’s no different than do we go around telling people, “I Googled this, I Googled that. I went on this website, I went on that website,” the use of our browser? No way are we going to do that, and no way do we need to do that. It only matters if there are important consequences from the use of these tools.
AMY BERNSTEIN: Right, and I guess it goes back to what you were saying before about citations and double checking that you, as the individual using these tools, have to remember that you’re responsible for the truth-
TSEDAL NEELEY: Absolutely.
AMY BERNSTEIN: … that you’re putting out there. You cannot blame GenAI for your mistakes, because they’re your mistakes.
TSEDAL NEELEY: They’re your mistakes, and this is where cultural change is important. The responsible AI use culture is going to be crucially important. This is why this is such a big deal for companies. Each individual user has to be responsible for what they put out in the world in their organization by using these tools, which means they have to be extra thoughtful, they have to be extra careful, they have to verify. The oversight is incredibly important. But, is it a shortcut tool? Is it a cheating tool? Absolutely not. We need to celebrate these tools because they’re not going away, and we need to guide people on their best uses.
AMY BERNSTEIN: Right. So, the skills you need are both technical, and then it’s those timeless leadership skills around integrity and accountability and a sense of fairness, right?
TSEDAL NEELEY: A hundred percent. In fact, the timeless leadership skills will be more important than ever before, because right now we’re a bit in the wild, wild west, and we inside of our organizations need to determine what are the safeguards, what are the guardrails, what are the ways in which we’re going to advise people to use these, so that we get the best possible results from them without getting ourselves, as an organization, in trouble, or without any individuals unwittingly getting themselves in trouble.
AMY BERNSTEIN: So, then, given the kind of small ‘d’ democratic nature of these tools, how do organizational leaders instill those values to ensure that these tools are used in a way that is fair and equitable?
TSEDAL NEELEY: I love that question, because it takes me right back to one of the most powerful organizational characteristics: trust. A culture where there is trust, a culture where you have some rules to help people, but you trust people to make the right decisions because leaders are role modeling it. There’s learning and training to help people understand how to use it, and the belief that one of our shared values in our organization is trust. This is no different than hybrid work, where you trust people after you’ve equipped them. It’s that same characteristic. I am learning that the more digital we become, the more trust becomes one of the most important shared values that companies need to uphold.
AMY BERNSTEIN: You know, what I find so inspiring about your message, Tsedal, is that you’re saying you have got to do the work, you’ve got to understand this technology as a technology, but equally, you have got to pay attention and communicate as a leader, all those timeless leadership skills, the ones we just discussed, because in order to foster the kind of trust you are describing, you have to communicate not just your competence with the tool, but the values that you bring to its use, and that’s the contagion, right?
TSEDAL NEELEY: That’s exactly right. You can’t be a mediocre leader in the world of remote or hybrid work. You cannot be a mediocre leader in the world of generative AI, which is poised to transform every organization, every industry in ways that we can’t really understand today. So, your leadership fundamentals are incredibly important, and leaders have to lead. They can’t micromanage, you can’t micromanage your way out of generative AI. That’s impossible. People are using it whether you want it or not. The question is, how do you make sure that you lead the way on generative AI in your organization as opposed to reactively running around trying to do damage control? Because it can bring damage too.
AMY BERNSTEIN: So, what’s changed is the technology, but the leadership values remain as they’ve always been.
TSEDAL NEELEY: The leadership values remain with less flexibility on poor leadership. There’s no hiding on this one, you’ve got to be right ahead, and every leader has to work on becoming a digital leader with a digital mindset. This is it.
AMY BERNSTEIN: Tsedal, it was so interesting to talk to you. Thank you.
TSEDAL NEELEY: Thank you so much, Amy.
AMY BERNSTEIN: Anytime. That’s Tsedal Neeley, a professor at Harvard Business School. She wrote the article “8 Questions About Using AI Responsibly, Answered.” You can find it, and other articles by experts, at hbr.org/techethics. Before that, I talked to Nitin Mittal. He leads Deloitte’s global AI business and co-wrote the book All-in On AI: How Smart Companies Win Big with Artificial Intelligence.
AMY BERNSTEIN: Next episode, How Generative AI Changes Strategy. HBR editor in chief Adi Ignatius will talk to experts who take stock of the competitive landscape and share how to navigate it effectively for your organization. That’s next Thursday, right here in the HBR IdeaCast feed after the regular Tuesday episode. This episode was produced by Curt Nickisch. We get technical help from Rob Eckhardt. Our audio product manager is Ian Fox, and Hannah Bates is our audio production assistant. Special thanks to Maureen Hoch. Thanks for listening to How Generative AI Changes Everything, a special series of the HBR IdeaCast. I’m Amy Bernstein.