John P. Desmond, AI Trends editor
The AI stack defined by Carnegie Mellon University is fundamental to the approach the US Army is taking to its AI development platform efforts, said Isaac Faber, chief data scientist at the US Army AI Integration Center, speaking at the AI World Government event held in person and virtually from Alexandria, Va., last week.

“If we want to move the Army from legacy systems through digital modernization, one of the biggest problems I’ve found is the difficulty of bridging application differences,” he said. “The most important part of digital transformation is the middle layer, the platform that makes it easier to be in the cloud or on-premises.” The goal is to be able to move software from one platform to another with the same ease with which a new smartphone carries over a user’s contacts and histories.
Ethics cuts across all layers of the AI application stack, which places the planning stage at the top, followed by decision support, modeling, machine learning, massive data management, and the device layer or platform at the bottom.
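As a rough sketch (the representation is ours, not CMU’s; the layer names follow the description above), the stack can be written as an ordered list with ethics applied at every level:

```python
# Illustrative only: the stack described above, top to bottom,
# with ethics treated as a concern that spans every layer.
AI_STACK = [
    "planning",
    "decision support",
    "modeling",
    "machine learning",
    "massive data management",
    "device layer / platform",
]

for layer in AI_STACK:
    print(f"{layer:<25} | cross-cutting concern: ethics")
```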
“I’m advocating that we think of the stack as a way of deploying the core infrastructure and applications, and not siloing our approach,” he said. “We need to create a development environment for a globally distributed workforce.”
The Army has been working on the Common Operating Environment Software (COES) platform, first announced in 2017, a design for DOD work that is scalable, flexible, modular, portable and open. “It’s suitable for a wide range of AI projects,” Faber said. “The devil is in the details.”
The Army is working with CMU and private companies on a prototype platform, including with Visimo of Coraopolis, Pa., which offers AI development services. Faber said he prefers to collaborate and coordinate with private industry rather than buy products off the shelf. “The problem with that is you’re stuck with the value that’s provided to you by that one vendor, which is typically not designed for the challenges of DOD networks,” he said.
The Army is training a number of technical teams in AI
The Army is engaged in AI workforce development efforts across several groups, including: leadership; professionals with graduate degrees; technical staff, who are put through training to be certified; and AI users.
The Army’s tech teams focus on a range of areas, including general-purpose software development, operational data science, deployment that includes analytics, and a machine learning operations team, such as the large team required to build a computer vision system. “As people come through the workforce, they need a place to collaborate, build and share,” Faber said.
Project types include diagnostic, which might be combing through historical data; predictive; and prescriptive, which recommends a course of action based on a prediction. “The farthest end is AI; you don’t start with that,” Faber said. The developer has to solve three problems: data engineering, the AI development platform, which he called a “green bubble,” and the deployment platform, which he called a “red bubble.”
“These are mutually exclusive and all interrelated. These teams of different people need programmatic coordination. Usually a good project team will have people from each of those bubble areas,” he said. “If you don’t have an operational need, don’t try to solve the green bubble problem. There is no point in pursuing AI unless you have an operational need.”
Asked by one of the participants which group is the most difficult to reach and train, Faber said without hesitation: “The hardest to get are the leaders. They need to know what the value of the AI ecosystem is. The biggest challenge is how to deliver that value.”
Panel discusses AI use cases with the most potential
In a panel devoted to the foundations of emerging AI, moderator Curt Savoie, program manager of Global Smart Cities Strategies at IDC, the market research firm, asked which AI use case has the most potential.
Jean-Charles Lede, autonomy technology advisor for the US Air Force Office of Scientific Research, said: “I’d point to decision advantages at the edge, supporting pilots and operators, and decisions at the back, for mission and resource planning.”

Krista Kinnard, the Labor Department’s chief of emerging technologies, said, “Natural language processing is an opportunity to open the doors to AI in the Department of Labor. At the end of the day, we are dealing with data on people, programs and organizations.”
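As a generic illustration of that kind of entry point (a sketch only, with made-up snippets; it does not reflect any actual DOL system), a first NLP project is often a simple text-classification pipeline that routes free-text records to program categories:

```python
# Hypothetical sketch: route free-text records to program categories.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up labeled snippets; a real system would train on far more data.
texts = [
    "claim for unemployment insurance benefits",
    "workplace safety inspection report",
    "wage and hour complaint against employer",
    "request for unemployment benefit extension",
]
labels = ["benefits", "safety", "wages", "benefits"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

print(model.predict(["employee filed an unpaid overtime complaint"]))
```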
Savoie asked what big risks and dangers the panelists see in implementing AI.
Anil Chaudhry, director of federal AI implementation at the General Services Administration (GSA), said that in a typical IT organization using traditional software development, the impact of a developer’s decision only goes so far. With AI, “you have to consider the impact on a whole class of people, constituents and stakeholders. With a simple change in an algorithm, you could be delaying benefits to millions of people or making incorrect inferences at scale. That is the most important risk,” he said.
He said he asks his contracting partners to have “humans in the loop and humans on the loop.”
Kinnard seconded this, saying, “We have no intention of removing humans from the loop. It’s really about empowering people to make better decisions.”
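A generic sketch of what “humans in the loop” can look like in software (the threshold and names are illustrative, not from any agency system): low-confidence model outputs are routed to a person instead of being acted on automatically.

```python
# Illustrative only: escalate low-confidence predictions to a human reviewer.
REVIEW_THRESHOLD = 0.90  # hypothetical cutoff, tuned per use case in practice

def route_decision(label: str, confidence: float) -> str:
    """Return who acts on a model output: the system or a human reviewer."""
    if confidence < REVIEW_THRESHOLD:
        return f"human review: model suggests '{label}' ({confidence:.0%})"
    return f"auto-apply: '{label}' ({confidence:.0%})"

print(route_decision("approve", 0.97))  # applied automatically
print(route_decision("deny", 0.62))     # escalated to a person
```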
She emphasized the importance of monitoring AI models after they are deployed. “Models can drift as the underlying data changes,” she said. “So you need a level of critical thinking to not only do the task, but also to assess whether what the AI model is doing is acceptable.”
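One common way to watch for the drift she describes (a minimal sketch; the feature, cutoff and data here are invented for illustration) is to compare the distribution of incoming data against the training baseline:

```python
# Minimal drift check: has a live feature's distribution shifted away from
# the training baseline? Data and cutoffs below are synthetic illustrations.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_ages = rng.normal(40, 10, 5_000)  # stand-in for training data
live_ages = rng.normal(46, 10, 1_000)      # incoming data, shifted upward

statistic, p_value = ks_2samp(training_ages, live_ages)
if p_value < 0.01:  # shift is larger than chance would explain
    print(f"Drift detected (KS={statistic:.3f}); review the model.")
else:
    print("No significant drift.")
```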
She added: “We’ve built out use cases and partnerships across government to make sure we’re implementing responsible AI. We will never replace people with algorithms.”
Lede of the Air Force said: “We often have use cases where no data exists. We cannot explore 50 years of war data, so we use simulation. The risk is that in training the algorithm you have a ‘sim-to-real gap,’ which is a real risk. You’re not sure how the algorithms will map to the real world.”
Chaudhry emphasized the importance of a testing strategy for AI systems. He warned of developers “who fall in love with the tool and forget the purpose of the exercise.” He advised development leaders to design an independent verification and validation strategy. “Your testing is where you have to focus your energy as a leader. A leader needs to have an idea in mind, before committing resources, of how they will justify whether the investment was a success.”
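In practice, the gate he describes can be as simple as a pass/fail check against a success criterion fixed before development starts (a sketch under assumed names and numbers; the 0.85 target and the iris data are placeholders):

```python
# Illustrative acceptance gate: evaluate on a holdout set against a
# threshold agreed before resources were committed. Numbers are made up.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.3, random_state=42
)

ACCEPTANCE_THRESHOLD = 0.85  # success criterion set up front

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = accuracy_score(y_holdout, model.predict(X_holdout))

# Once the criterion is fixed in advance, the go/no-go call is mechanical.
print(f"holdout accuracy={accuracy:.2%}: "
      f"{'PASS' if accuracy >= ACCEPTANCE_THRESHOLD else 'FAIL'}")
```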
Lede of the Air Force talked about the importance of explainability. “I am a technologist. I don’t make laws. The ability of the AI function to explain itself in a way a human can interact with is important. AI is a partner that we have a dialogue with, rather than the AI coming up with a conclusion that we have no way of verifying,” he said.
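One widely used explainability technique of the kind he alludes to is permutation importance, which asks how much a model degrades when each input is shuffled (a generic sketch on public data, not any Air Force tooling):

```python
# Permutation importance: shuffle each feature and measure the score drop.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

result = permutation_importance(
    model, data.data, data.target, n_repeats=10, random_state=0
)
for name, score in zip(data.feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")  # higher = model leans on it more
```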
Learn more at AI World Government.