John P. Desmond, AI Trends editor
Engineers tend to see things in unequivocal terms, what some might call black and white terms, such as choosing between right or wrong and good and bad. The consideration of ethics in artificial intelligence is highly nuanced, with huge gray areas, making it difficult for AI software engineers to apply it to their work.
This was among the takeaways from sessions on the future of standards and the future of ethical AI at the AI World Government conference, held in person and virtually this week in Alexandria, Va.
The overall impression from the conference is that the discussion of AI and ethics is happening in nearly every quarter of the AI effort within the vast federal government enterprise, and the consistency of the points made across these many disparate and independent efforts stood out.

“We engineers often think of ethics as something obscure that no one has really explained,” said Beth-Anne Schuelke-Leech, associate professor of engineering management and entrepreneurship at the University of Windsor, Ontario, Canada, speaking at the Future of Ethical AI session. “It can be difficult for engineers who are looking for strong constraints to be told to be ethical. It gets really complicated because we don’t know what that really means.”
Schuelke-Leech started her career as an engineer, then decided to pursue a graduate degree in public policy, which gives her the ability to see things both as an engineer and as a social scientist. “I got a PhD in social sciences and I’m back in the engineering world, where I’m involved in AI projects, but I’m based in the mechanical engineering department,” she said.
An engineering project has an objective that describes its purpose, a set of required features and functions, and a set of constraints such as budget and schedule. “Standards and regulations become part of the constraints,” she said. “If I know I have to live up to them, I will. But if you tell me it’s a good thing, I may or may not accept it.”
Schuelke-Leech also serves as chair of the standards committee of the IEEE Society on Social Implications of Technology. She commented: “Voluntary compliance standards like those from the IEEE are important when people in the industry come together to say this is what we think we should be doing as an industry.”
Some standards, like those on interoperability, don’t have the force of law, but engineers comply with them so their systems will work. Other standards are described as good practice but are not required. “Whether it helps me achieve my goal or hinders me from achieving my goal, that’s how an engineer looks at it,” she said.
Pursuing AI ethics described as ‘messy and difficult’

Sarah Jordan, Senior Advisor at the Future of Privacy Forum, works with Schuelke-Leech on the ethical challenges of AI and machine learning and is an active member of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. “Ethics is messy and complex and loaded with context. We have a proliferation of theories, frameworks, and constructs,” she said, adding that “the practice of ethical AI will require iterative, rigorous thinking in context.”
Schuelke-Leech suggested: “Ethics is not the end result; it is the process being followed. But I’m also looking for someone to tell me what I need to do to do my job, to tell me how to behave ethically, what rules I need to follow, to remove the ambiguity.”
“Engineers shut down when they come across funny words they don’t understand, like ‘ontological.’ They’ve been learning math and science since they were 13,” she said.
She has struggled to engage engineers in efforts to develop standards for ethical AI. “Engineers are missing from the table,” she said. “Discussions about whether we can get to 100% ethical are conversations that engineers don’t have.”
She concluded: “If their managers tell them to figure it out, they will. We need to help the engineers get halfway across the bridge. It is important that social scientists and engineers don’t give up on it.”
A panel of leaders described the integration of ethics into AI development practice
The topic of artificial intelligence ethics is becoming more prevalent in the curriculum of the US Naval War College in Newport, R.I., which was created to provide advanced study for officers of the US Navy and now educates leaders of all the services. Ross Coffey, the institution’s military professor of national security affairs, participated in a Leadership Panel on AI, Ethics and Smart Policy at AI World Government.
“Students’ ethical literacy increases over time as they work with these ethical issues, which makes it an urgent matter, because it is going to take a long time,” Coffey said.
Panel member Carol Smith, a senior research fellow at Carnegie Mellon University who studies human-machine interaction, has been involved in integrating ethics into the development of AI systems since 2015.
“My interest is in understanding what kinds of interactions we can create where a person appropriately trusts the system they are working with, neither trusting it too much nor too little,” she said, adding: “In general, people have higher expectations of systems than they should.”
As an example, she cited the Tesla Autopilot features, which deliver some, but not all, of the capabilities of a self-driving car. “People assume that the system can do much more than it was designed to do. It is important to help people understand the limitations of the system. Everyone needs to understand the expected results of the system and what the mitigating circumstances might be,” she said.
Panel member Taka Ariga, the first chief data scientist appointed to the U.S. Government Accountability Office and director of the GAO Innovation Lab, sees a gap in AI literacy among the young workforce entering the federal government. “Data scientist training doesn’t always include ethics. Accountable AI is a laudable construct, but I’m not sure everyone buys into it. We need their responsibility to go beyond the technical aspects and be accountable to the end user we’re trying to serve,” he said.
Panel moderator Alison Brooks, Ph.D., research vice president of Smart Cities and Communities at market research firm IDC, asked whether the principles of ethical AI could be extended beyond national borders.
“We’re going to have limited opportunities to get every nation to agree on the same exact approach, but we have to somehow agree on what we won’t allow AI to do and what humans will also be responsible for,” said Smith of CMU.
Panelists praised the European Commission for confronting these ethical issues, particularly in the area of enforcement.
Coffey of the Naval War College acknowledged the importance of finding common ground around the ethics of artificial intelligence. “From a military point of view, our interoperability must go to a whole new level. We need to find common ground with our partners and our allies on what we will allow AI to do and what we will not allow AI to do.” Unfortunately, “I don’t know if that discussion is happening,” he said.
Discussions of AI ethics could perhaps be pursued as part of certain existing treaties, Smith suggested.
The many AI ethics principles, frameworks, and roadmaps proposed across federal agencies can be difficult to follow and to make consistent. “I’m hoping that in the next year or two, we’ll see some consolidation.”
For more information and access to recorded sessions, visit AI World Government.