How Accountability Practices Are Implemented by AI Engineers in the Federal Government

John P. Desmond, AI Trends editor

Two examples of how AI developers within the federal government are pursuing AI accountability practices were outlined at the AI World Government event, held virtually and in person this week in Alexandria, Va.

Taka Ariga, Chief Data Scientist and Director, US Government Accountability Office

Taka Ariga, Chief Data Scientist and Director, US Government Accountability Office, described the AI accountability framework he uses within his agency and plans to make available to others.

And Bryce Goodman, chief strategist for AI and machine learning at the Defense Innovation Unit (DIU), a Department of Defense unit established to help the U.S. military adopt emerging commercial technologies more quickly, described his unit’s work to translate the principles of AI development into terminology that an engineer can apply.

Ariga, the first chief data scientist appointed to the US Government Accountability Office and director of the GAO’s Innovation Lab, discussed an AI Accountability Framework he helped develop by convening a forum of experts from government, industry, and nonprofits, along with federal inspectors general and AI specialists.

“We are adopting an auditor’s perspective on the AI accountability framework,” Ariga said. “GAO is in the business of verification.”

Efforts to produce a formal framework began in September 2020 and included 60% women, 40% of whom were underrepresented minorities, for two days of discussion. The effort was driven by a desire to ground the AI accountability framework in the reality of an engineer’s day-to-day work. The resulting framework was first published in June as what Ariga described as “version 1.0.”

Striving to bring a “high-altitude posture” down to Earth

“We found the AI accountability framework was pitched at a very high altitude,” Ariga said. “These are laudable ideals and aspirations, but what do they mean to the day-to-day AI practitioner? There is a gap, while we see AI proliferating across the government.”

“We settled on a life-cycle approach,” he said, which moves through the stages of design, development, deployment, and continuous monitoring. The development effort rests on four “pillars”: governance, data, monitoring, and performance.

Governance reviews what the organization has put in place to oversee its AI efforts. “A chief AI officer might be in place, but what does that mean? Can the person make changes? Is it multidisciplinary?” At the system level within this pillar, the team will review individual AI models to see whether they were deliberately designed for their stated purpose.

For the data pillar, his team will examine how the training data was evaluated, how representative it is, and whether it is functioning as intended.

For the performance pillar, the team will consider the “societal impact” the AI system will have in deployment, including whether it risks violating the Civil Rights Act. “Auditors have a long-standing track record of evaluating equity. We grounded the evaluation of AI on a proven system,” Ariga said.

Emphasizing the importance of continuous monitoring, he said, “AI is not a technology you deploy and forget. We are preparing to continuously monitor for model drift and the fragility of algorithms, and we are scaling the AI accordingly.” The evaluations will determine whether the AI system continues to meet the need “or whether a sunset is more appropriate,” Ariga said.
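The article does not describe GAO’s actual monitoring tooling, but the kind of continuous model-drift check Ariga describes can be sketched in a few lines. The following is an illustrative example only, using the population stability index (PSI), a common drift statistic; the function names and the 0.2 review threshold are assumptions for the sketch, not GAO practice.

```python
# Illustrative drift-monitoring sketch (not GAO's actual tooling):
# compare a live feature distribution against its training-time baseline
# using the population stability index (PSI).
import math

def psi(baseline, live, bins=10):
    """Population stability index between two samples of one feature."""
    lo = min(min(baseline), min(live))
    hi = max(max(baseline), max(live))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        # small epsilon avoids log(0) for empty bins
        return [(c + 1e-6) / (len(sample) + 1e-6 * bins) for c in counts]

    b, l = bin_fractions(baseline), bin_fractions(live)
    return sum((li - bi) * math.log(li / bi) for bi, li in zip(b, l))

def check_drift(baseline, live, threshold=0.2):
    """Flag a model for human review when its input distribution has shifted."""
    score = psi(baseline, live)
    return {"psi": score, "action": "review" if score > threshold else "ok"}
```

In a monitoring loop, `check_drift` would be called periodically on production inputs; a `"review"` result feeds the kind of keep-or-sunset evaluation the framework calls for.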

Ariga is also part of discussions with NIST on an overall government AI accountability framework. “We don’t want an ecosystem of confusion,” Ariga said. “We want a whole-of-government approach. We feel this is a useful first step in bringing high-level ideas down to an altitude that is meaningful to the practitioners of AI.”

DIU assesses whether proposed projects meet ethical AI guidelines

Bryce Goodman, chief strategist for AI and machine learning, Defense Innovation Unit

At DIU, Goodman is involved in a similar effort to develop guidelines for AI project developers within government.

Projects Goodman has been involved with include the application of AI to humanitarian assistance and disaster response, predictive maintenance, counter-disinformation, and predictive health. He heads DIU’s working group on responsible AI. He is a faculty member at Singularity University, has a wide range of consulting clients inside and outside government, and holds a PhD in AI and Philosophy from the University of Oxford.

In February 2020, DOD adopted five areas of Ethical Principles for AI after 15 months of consultation with AI experts in commercial industry, government, academia, and the American public. These areas are: Responsible, Equitable, Traceable, Reliable, and Governable.

“They’re well thought out, but it’s not obvious to an engineer how to translate them into a specific project requirement,” Goodman said in a presentation on Responsible AI Guidelines at the AI World Government event. “That’s the gap we are trying to fill.”

Before DIU even considers a project, they run through the ethical principles to see whether it passes muster. Not all projects do. “There needs to be an option to say the technology is not there or the problem is not compatible with AI,” he said.

All project stakeholders, including those from commercial vendors and within government, must be able to test and validate, and to go beyond minimum legal requirements to meet the principles. “The law is not moving as fast as AI, which is why these principles are important,” he said.

Also, collaboration is going on across the government to ensure the values are being upheld and maintained. “Our intention with these guidelines is not to try to achieve perfection, but to avoid catastrophic consequences,” Goodman said. “It can be difficult to get a group to agree on what the best outcome is, but it’s easier to get the group to agree on what the worst-case outcome is.”

The DIU guidelines, along with case studies and supplemental materials, will be published on the DIU website “soon,” Goodman said, to help others leverage the experience.

Here are the questions DIU asks before development begins

The first step in the guidelines is to define the task. “That’s the single most important question,” he said. “You should only use AI if there is an advantage.”

Next is a benchmark, which needs to be set up front so the team knows whether the project has delivered.

Next, he evaluates ownership of the candidate data. “Data is critical to the AI system and is the place where many problems can exist,” Goodman said. “We need a certain contract on who owns the data. If ambiguous, this can lead to problems.”

Next, Goodman’s team wants to evaluate a sample of the data. Then they need to know how and why the information was collected. “If consent was given for one purpose, we cannot use it for another purpose without re-obtaining consent,” he said.

The team then asks whether the responsible stakeholders have been identified, such as pilots who could be affected if a component fails.

Next, the responsible mission-holders must be identified. “We need a single individual for this,” Goodman said. “Often we have a tradeoff between the performance of an algorithm and its explainability. We might have to decide between the two. Those kinds of decisions have an ethical component and an operational component. So we need to have someone who is accountable for those decisions, which is consistent with the chain of command in the DOD.”

Finally, the DIU team requires a process for rolling back if things go wrong. “We need to be careful about jettisoning the previous system,” he said.

Once all these questions are satisfactorily answered, the team moves on to the development phase.
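The pre-development questions above amount to a go/no-go gate. As a purely illustrative sketch, they could be encoded as a simple checklist; the question wording is paraphrased from the article, and the function and variable names are assumptions — DIU’s actual process is organizational, not code.

```python
# Illustrative sketch only: the DIU pre-development questions from the
# article, encoded as a go/no-go gate. Wording is paraphrased; names
# are hypothetical.
PRE_DEVELOPMENT_QUESTIONS = [
    "Is the task defined, and does AI offer an advantage over alternatives?",
    "Is there an up-front benchmark to judge whether the project delivered?",
    "Is ownership of the candidate data unambiguous?",
    "Was the data collected with consent that covers this use?",
    "Are the affected stakeholders (e.g. pilots) identified?",
    "Is a single responsible mission-holder identified?",
    "Is there a fallback process if the system fails?",
]

def development_gate(answers):
    """Proceed to development only when every question is answered yes.

    `answers` is a list of booleans, one per question, in order.
    """
    unresolved = [
        q for q, ok in zip(PRE_DEVELOPMENT_QUESTIONS, answers) if not ok
    ]
    return {"proceed": not unresolved, "unresolved": unresolved}
```

A single unanswered question blocks the gate, matching Goodman’s point that not every proposed project should go forward.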

Among the lessons learned, Goodman said, “Metrics are key. And simply measuring accuracy may not be adequate. We need to be able to measure success.”

Also, fit the technology to the task. “High-risk applications require low-risk technology. And when the potential harm is significant, we need to have high confidence in the technology,” he said.

Another lesson learned is setting expectations with commercial vendors. “We need vendors to be transparent,” he said. “When someone says they have a proprietary algorithm that they can’t tell us about, we’re very wary. We view relationships as partnerships. This is the only way we can ensure that AI is developed responsibly.”

Finally, “AI is not magic. It will not solve everything. It should only be used when necessary, and only when we can demonstrate that it will provide an advantage,” he said.

Learn more at AI World Government, at the Government Accountability Office, in the AI Accountability Framework, and at the Defense Innovation Unit website.
