The Promise and Perils of Using Artificial Intelligence for Hiring: Protect Against Data Bias

By AI Trends staff

While AI is now widely used in hiring to write job descriptions, screen candidates, and automate interviews, it poses a major risk of discrimination if not used carefully.

Keith Sonderling, Commissioner of the US Equal Employment Opportunity Commission

That was the message from US Equal Employment Opportunity Commission (EEOC) Commissioner Keith Sonderling, speaking at the AI World Government conference, held live and virtually in Alexandria, Va., last week. Sonderling is responsible for enforcing federal laws that prohibit discrimination against job applicants on the basis of race, color, religion, sex, national origin, age or disability.

“The idea that AI would become mainstream in HR departments was closer to science fiction two years ago, but the pandemic has accelerated the pace at which AI is being used by employers,” he said. “Virtual recruiting is now here to stay.”

It’s a busy time for HR professionals. “The great resignation is leading to the great rehiring, and AI will play a role in that like we haven’t seen before,” Sonderling said.

Artificial intelligence has been used in hiring for years; “it didn’t happen overnight,” Sonderling said. It is applied to tasks including screening applications, predicting whether a candidate will succeed in the job, projecting what kind of employee they will be, and mapping skill sets to promotion and training opportunities. “In short, artificial intelligence is now making all the decisions that were once made by HR staff,” which he did not characterize as good or bad.

“Carefully designed and used correctly, AI has the potential to make the workplace fairer,” Sonderling said. “However, if used carelessly, AI can discriminate on a scale never seen before by an HR professional.”

Training Datasets for AI Models Used in Hiring Need to Reflect Diversity

This is because AI models rely on training data. If a company’s current workforce is used as the basis for training, “it will replicate the status quo. If it’s primarily one gender or one race, it’s going to replicate that,” he said. Conversely, AI can help reduce the risk of hiring bias based on race, ethnicity, or disability status. He said he wants to see AI help reduce discrimination in the workplace.

Amazon began building a hiring application in 2014 and found over time that its recommendations discriminated against women, because the underlying model had been trained on the company’s hiring records from the previous 10 years, which were predominantly male. Amazon developers tried to fix the problem but eventually scrapped the system in 2017.
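The mechanism behind cases like Amazon’s is simple to demonstrate. A minimal sketch with hypothetical data (not Amazon’s actual system): any model that scores new candidates by their resemblance to past hires will reproduce whatever imbalance exists in the historical record.

```python
# Illustrative sketch with hypothetical data: a scoring model "trained"
# on a historically skewed hiring record simply replicates that skew.

# Hypothetical history: men were hired at a far higher rate than women,
# so gender leaks into anything learned from these records.
history = [{"gender": "M", "hired": True}] * 90 + \
          [{"gender": "F", "hired": True}] * 10 + \
          [{"gender": "M", "hired": False}] * 30 + \
          [{"gender": "F", "hired": False}] * 70

def hire_rate(records, gender):
    group = [r for r in records if r["gender"] == gender]
    return sum(r["hired"] for r in group) / len(group)

# A naive model scores candidates by the historical hire rate of
# "similar" past candidates, reproducing the imbalance exactly.
score = {g: hire_rate(history, g) for g in ("M", "F")}
print(score)  # {'M': 0.75, 'F': 0.125}
```

The fix is not in the scoring code at all; it is in the composition and auditing of the training data, which is Sonderling’s point.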

Facebook recently agreed to pay $14.25 million to settle civil claims filed by the U.S. government alleging that the social media company discriminated against American workers and violated federal recruitment rules, according to Reuters. The case centered on Facebook’s use of its PERM program for labor certification. The government found that Facebook declined to hire American workers for jobs reserved for temporary visa holders under the PERM program.

“Excluding people from the hiring pool is a violation,” Sonderling said. If an AI program “holds back the availability of employment opportunities for that class, so they can’t exercise their rights, or if it diminishes the status of a protected class, that’s within our purview,” he said.

Employment assessments, which became more popular after World War II, have provided high value to HR managers, and with the help of artificial intelligence, they have the potential to minimize bias in hiring. “At the same time, they are vulnerable to claims of discrimination, so employers need to be careful and not take a hands-off approach,” Sonderling said. “Inaccurate data will reinforce bias in decision making. Employers must be alert to discriminatory outcomes.”

He recommended exploring solutions from vendors who screen data for risks of bias based on race, gender and other factors.

One example is HireVue of South Jordan, Utah, which has built a hiring platform grounded in the U.S. Equal Employment Opportunity Commission’s Uniform Guidelines, designed specifically to mitigate unfair hiring practices.
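The Uniform Guidelines give a concrete test that such screening can apply: the “four-fifths rule,” under which a selection rate for any group below 80% of the highest group’s rate is generally regarded as evidence of adverse impact. A minimal sketch of that check, using hypothetical applicant counts:

```python
# Minimal sketch of the "four-fifths rule" from the EEOC's Uniform
# Guidelines on Employee Selection Procedures. Group names and counts
# below are hypothetical.

def selection_rates(applicants, selected):
    """applicants/selected: dicts mapping group name -> counts."""
    return {g: selected[g] / applicants[g] for g in applicants}

def adverse_impact(applicants, selected, threshold=0.8):
    rates = selection_rates(applicants, selected)
    best = max(rates.values())
    # Flag any group whose rate falls below threshold * highest rate.
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical screening outcome for two applicant groups:
# group A passes 60 of 100, group B passes 40 of 100.
flags = adverse_impact({"A": 100, "B": 100}, {"A": 60, "B": 40})
print(flags)  # {'A': False, 'B': True}
```

Here group B’s selection rate (0.4) is only two-thirds of group A’s (0.6), below the four-fifths threshold, so the outcome would be flagged for review.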

A post on AI’s ethical principles on its website reads in part: “Because HireVue uses AI technology in our products, we actively work to prevent the introduction or propagation of bias against any group or individual. We will continue to carefully review the datasets we use in our work and ensure they are as accurate and diverse as possible. We also continue to develop our capabilities to monitor, detect and mitigate bias. We strive to build teams from diverse backgrounds with diverse knowledge, experience and perspectives to best represent the people our systems serve.”

Also, “Our data scientists and IO psychologists build HireVue Assessment algorithms in a way that removes data from consideration by the algorithm that contributes to adverse impact, without significantly affecting the assessment’s predictive accuracy. The result is a highly valid, bias-mitigated assessment that helps enhance human decision-making while actively promoting diversity and equal opportunity regardless of gender, ethnicity, age or disability status.”

Dr. Ed Ikeguchi, CEO of AiCure

The problem of bias in the datasets used to train AI models is not limited to hiring. Dr. Ed Ikeguchi, CEO of AiCure, an AI analytics company working in the life sciences industry, noted in a recent account in HealthcareITNews: “AI is only as strong as the data it’s fed, and lately the credibility of that data backbone has been increasingly questioned. Today’s AI developers don’t have access to large, diverse datasets on which to train and validate new tools.”

He added: “They often need to draw on open-source databases, but many of these are built by volunteer computer programmers, who are mostly white. Because algorithms are often trained on single-origin data samples with limited diversity, technologies that appear highly accurate in research can prove unreliable when applied to real-world scenarios across a broader population of different races, genders, ages, and more.”
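The failure mode Ikeguchi describes is visible only when accuracy is disaggregated by subgroup. A sketch with hypothetical validation results: a model that looks accurate overall can be far less reliable for a group underrepresented in its training data.

```python
# Sketch with hypothetical data: disaggregating accuracy by subgroup
# reveals that a model which looks accurate overall is unreliable for
# the group underrepresented in its training data.
from collections import defaultdict

# (group, prediction, truth) triples from a hypothetical validation set.
results = [("majority", 1, 1)] * 90 + [("majority", 0, 1)] * 10 + \
          [("minority", 1, 1)] * 5 + [("minority", 0, 1)] * 5

overall = sum(p == t for _, p, t in results) / len(results)

by_group = defaultdict(list)
for g, p, t in results:
    by_group[g].append(p == t)
accuracy = {g: sum(v) / len(v) for g, v in by_group.items()}

print(round(overall, 2), accuracy)  # 0.86 {'majority': 0.9, 'minority': 0.5}
```

The headline 86% accuracy hides coin-flip performance on the minority group, which is why per-group evaluation belongs in the governance and peer review Ikeguchi calls for below.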

Also, “All algorithms must have an element of governance and peer review, because even the most solid and tested algorithm is bound to have unexpected results. An algorithm never stops learning; it needs to be constantly developed and fed more data to improve.”

And, “As an industry, we need to become more skeptical of AI’s conclusions and encourage transparency in the industry. Companies should be willing to answer basic questions like, ‘How was the algorithm developed? On what basis did it reach this conclusion?’”

Read the source articles and information at AI World Government, from Reuters and from HealthcareITNews.
