The goal of AI is to help simplify our lives. We’ve made significant progress in building this fascinating technology, and as machines get more advanced and learn at a deeper cognitive level, they will allow us to automate time-consuming and inefficient tasks.
A recent Lifewire article got us thinking about AI and its role in recruitment and hiring. One way AI can support hiring efforts is by taking on a time-consuming task: sifting through thousands of resumes to find the people who best fit certain roles. AI models used in hiring and recruiting speed up the process and have real potential to find the right people for the job. But the model needs to be trained correctly.
However, there is a real risk in using AI for hiring if the models are not properly monitored for bias, fairness, and other issues. While we largely agree with the recent University of Cambridge research paper criticizing AI's role in boosting diversity, we also see a solution: monitoring AI properly so that it becomes more productive and fair.
How do ML models learn biases?
AI algorithms enable machines to make decisions on various tasks based on the data points provided. But without proper monitoring, biases can unintentionally steer models into unethical territory. A monitoring system will help ensure that machine learning algorithms are doing what they are intended to do. With monitoring in place, a change in a model's predictions or in its input/output distributions is an early signal that the model has drifted, often because of poor data.
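To make the idea of watching input/output distributions concrete, here is a minimal sketch of one common drift check, the Population Stability Index (PSI), which compares the distribution of a score in production against the distribution seen at training time. The variable names, sample data, and the 0.2 alert threshold are illustrative assumptions, not part of any specific product.

```python
# Illustrative sketch: flag drift by comparing a model input's live
# distribution against its training-time baseline using the
# Population Stability Index (PSI). Data and thresholds are made up.
import numpy as np

def psi(baseline, current, bins=10):
    """Population Stability Index between two samples of one feature."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip with a small floor to avoid log(0) / division by zero
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
training_scores = rng.normal(0.50, 0.1, 10_000)  # e.g. resume match scores at training time
live_scores = rng.normal(0.65, 0.1, 10_000)      # scores observed in production

# A common rule of thumb: PSI above ~0.2 signals significant drift
if psi(training_scores, live_scores) > 0.2:
    print("Drift alert: input distribution has shifted; review the model.")
```

A check like this runs on a schedule against recent production traffic; when it fires, a human reviews whether the shift reflects a genuine change in the applicant pool or a data-quality problem.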
For example, if a company has not historically hired a diverse workforce, training models on that past data will lead the company to surface and hire candidates who closely mirror its existing team. This type of data-driven bias first made headlines when Amazon found that its AI recruiting tool was not recommending female candidates for software developer positions. The tool had been fed historical employee data that was largely male-dominated, "teaching" it to penalize resumes containing terms referencing "women," such as attendance at a women's-only college.
An AI system is only as good as the data entered into it as it learns to find patterns in the historical data set.
This potential for biased results does not mean AI cannot be trusted to assist with hiring. It means organizations must empower people to monitor and update AI models, providing regular feedback to ensure the AI is ingesting high-quality, real-world data in real time.
AI’s Role in Hiring
AI can also help remove a hiring manager's or recruiter's unconscious bias and discrimination when screening candidates. Deloitte's State of AI report found that 94% of business leaders agree that AI is critical to success over the next five years, leading us to believe AI will continue expanding into all aspects of an organization, especially hiring.
It is important to remember that AI tools can’t replace humans, but they can make our jobs much more manageable and support us with tasks that can be tedious or time-consuming. In the future, we will see the growth of human-centered AI bringing the power of human understanding – like judgment and collaboration – together with the best data and AI techniques. It will be up to us to build intelligent systems that can significantly improve hiring outcomes and processes.
We will see an increase in companies relying on AI, machine learning, and other technologies for recruiting and hiring, which speed up the process for both the company and the candidate. Well-suited candidates will move through the process more quickly, and candidates who are not aligned with a company can be spared a long, unnecessary interview process.
AI Bias, Monitoring Solutions
The key to decreasing bias is simplifying and automating AI system monitoring to make sure the models don’t drift. Many people do not understand the technology behind these AI systems, why specific models behave the way they do, and what information they rely on to deliver insights or recommend decisions. Even among the few who understand these systems and work with them regularly, there will always be turnover in an organization, and that historical knowledge can be lost.
To continue decreasing bias in AI and hiring, robust and scalable monitoring solutions are vital to minimizing the risks associated with models: drift, uncertainty in the data, lack of documentation, lack of clarity on lineage, and so on. Monitoring models in real time ensures that biases are quickly identified and fixed when they arise, and that models are automatically retrained if they hit certain risk thresholds.
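One way to picture a "risk threshold" for bias is the four-fifths rule of thumb used in employment analysis: if any group's selection rate falls below 80% of the highest group's rate, the model gets flagged for review. The sketch below is a simplified illustration; the function names, sample data, and threshold are assumptions, not a specific vendor's implementation.

```python
# Illustrative sketch: compare selection rates across groups and flag
# a hiring model for review/retraining when any group's rate falls
# below a threshold (here, the four-fifths rule of thumb).
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, advanced) pairs -> rate per group."""
    totals, advanced = Counter(), Counter()
    for group, passed in decisions:
        totals[group] += 1
        if passed:
            advanced[group] += 1
    return {g: advanced[g] / totals[g] for g in totals}

def needs_review(decisions, threshold=0.8):
    """True if any group's selection rate is below threshold * best rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return any(rate < threshold * best for rate in rates.values())

# Hypothetical screening outcomes: (demographic group, advanced to interview)
screened = [("group_a", True), ("group_a", True), ("group_a", False),
            ("group_b", True), ("group_b", False), ("group_b", False)]

if needs_review(screened):
    print("Risk threshold hit: queue the model for review and retraining.")
```

In practice a check like this would run continuously over production decisions, with alerts routed to the people responsible for the model rather than triggering retraining blindly.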
Equal Employment Opportunity Commission
There are federal protections in place to create equity in the workplace, especially around the use of AI and machine learning technologies. In 2021, the U.S. Equal Employment Opportunity Commission (EEOC) launched an agency-wide initiative to ensure that AI-driven hiring tools comply with federal civil rights laws and to deter discrimination.
The EEOC monitors promising emerging tech practices and provides guidance on algorithmic fairness, the use of AI in hiring, and more.
The government recently released its AI Bill of Rights, which is also relevant as the federal sector becomes more involved in AI initiatives and in protecting people from bias. Find out more about our thoughts on the Bill here.
Vian H+AI Platform
Our platform is built to empower the people behind AI models. Its easy-to-use interface allows our customers to optimize, deploy, monitor, and manage models in production, at scale and in real time.
How is your business using technology to help hire? We would love to hear about your successes or challenges with AI models in production, whether for finding applicants, screening resumes, or scheduling interviews.