How to Protect your Recruitment Chatbot from Falling Prey to AI Bias


Aida Fazylova
Jul 31, 2019 1:51:19 AM

A key thing many companies forget is that AI-powered chatbots learn by example. AI solutions are trained by humans and build patterns from experience, meaning the AI can only learn from the examples it is exposed to. If the data is biased, the machine will be too. And there are many human flaws, including social and cultural biases, that we definitely don't want a machine to reinforce.

Unfortunately, even recruiting AI built at the highest levels has faced problems with these biases. Take Amazon, for example. Its experimental recruiting tool taught itself to favor male candidates because its computer models were trained to screen applicants by observing patterns in resumes submitted to the company over a 10-year period. The problem is that during that decade, most resumes came from men, which is not surprising in the male-dominated tech industry. Amazon's system learned that male candidates were preferable, penalizing resumes that included the word "women's," as in "women's chess club captain," and downgrading graduates of women's colleges.
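
To make the mechanism concrete, here is a minimal, hypothetical sketch (toy data and scikit-learn, not Amazon's actual system) of a resume screener trained on skewed historical hiring outcomes. Because the gender-coded token appears only in rejected resumes, the model learns to assign it a negative weight:

```python
# Hypothetical sketch: a toy resume screener absorbing bias from skewed data.
# The "historical" outcomes below reject every resume containing a
# gender-coded term, so the model learns to penalize that term.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer python chess club captain",              # hired
    "backend developer java hackathon winner",                  # hired
    "data engineer sql chess club member",                      # hired
    "software engineer python women's chess club captain",      # rejected
    "frontend developer javascript women's college graduate",   # rejected
]
hired = [1, 1, 1, 0, 0]  # skewed historical outcomes

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: the gender-coded token gets a negative
# coefficient even though it says nothing about job performance.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for token in ("women", "chess", "python"):
    print(f"{token!r}: {weights.get(token, 0.0):+.3f}")
```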

Even with the brightest minds training AI tools, chatbots can still fall prey to human biases. Luckily, there are a few steps you can take to protect your AI chatbot from this phenomenon.

Review your data strategy and processes

Do your current resume screening processes include signals like names, ages, and universities attended? Do your current datasets represent the recruitment pool you normally pull from, or the pool you'd like to start reaching? These are important questions to consider as you decide which datasets to use to train your AI chatbot.
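
As a starting point for that review, here is a hedged sketch of a simple representation audit. The column name and target mix are illustrative assumptions; swap in whatever demographic fields you can ethically and legally use:

```python
# Hypothetical audit sketch: compare the demographic mix of the training
# data against the applicant pool you actually want to reach.
# The "gender" column and the 50/50 target are illustrative assumptions.
import pandas as pd

training = pd.DataFrame({
    "gender": ["M", "M", "M", "F", "M", "M", "F", "M"],
})
target_mix = {"M": 0.5, "F": 0.5}  # the pool you'd like to represent

actual_mix = training["gender"].value_counts(normalize=True)
for group, target in target_mix.items():
    actual = actual_mix.get(group, 0.0)
    print(f"{group}: training={actual:.0%}, target={target:.0%}, "
          f"gap={actual - target:+.0%}")
```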

Don't forget to also examine the language used in your job ads and remove any wording that could be biased, even unconsciously. For example, many companies like to use the word "ninja" to make their ads stand out and seem more fun or casual. But research has found that using "ninja" in job ads actually deters women from applying, because the word is coded as appealing to men.
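
A simple automated check can complement a human read-through. The sketch below flags gender-coded words in an ad; the word lists are short illustrative samples, loosely inspired by research on gendered wording in job ads, and you would want much fuller, vetted lists in practice:

```python
# Hypothetical sketch: flag gender-coded words in a job ad.
# These word lists are illustrative samples only, not a vetted lexicon.
import re

MASCULINE_CODED = {"ninja", "rockstar", "dominant", "competitive", "fearless"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal"}

def flag_coded_words(ad_text: str) -> dict:
    tokens = set(re.findall(r"[a-z']+", ad_text.lower()))
    return {
        "masculine": sorted(tokens & MASCULINE_CODED),
        "feminine": sorted(tokens & FEMININE_CODED),
    }

print(flag_coded_words("We need a JavaScript ninja to join our competitive team."))
# -> {'masculine': ['competitive', 'ninja'], 'feminine': []}
```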

To create a more representative and diverse dataset, you can write new screening questions and job ads, or seek out different data sources for those screening questions. By strategically building a new, wider range of data, you can make sure your bot learns how to screen and choose candidates without unfair preferences or defaulting to the same candidates every time.
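
One way to widen a skewed dataset, assuming you have a labeled group column, is to oversample underrepresented groups before training. This is a sketch of just one rebalancing technique; reweighting, or collecting genuinely new data, may serve you better:

```python
# Hypothetical sketch: oversample underrepresented groups so the training
# set better reflects the pool you want to reach. The DataFrame below and
# its "group" column are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "resume_id": [1, 2, 3, 4, 5, 6],
    "group": ["M", "M", "M", "M", "F", "F"],
})

largest = df["group"].value_counts().max()
balanced = (
    df.groupby("group", group_keys=False)
      .apply(lambda g: g.sample(largest, replace=True, random_state=42))
      .reset_index(drop=True)
)
print(balanced["group"].value_counts())  # both groups now equally sized
```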

Choose a diverse team to teach your chatbot

As previously mentioned, chatbots first learn from humans. So if you use a team of people who all share a similar gender, race, and background, you're likely to build a chatbot with some inherited biases. But if a well-rounded, diverse team builds that same chatbot, you'll be much less likely to bake new biases into the system.

Whether you're building the chatbot in-house or using a third-party vendor, do your research to make sure the teams building your recruitment chatbot are diverse, aware of these AI bias issues, and ready to do everything they can to ensure your chatbot is open to any and all candidates. Additionally, make sure your chatbot is advertised and available to all candidates, not just a select few. By making your chatbot easily discoverable by everyone, you will in turn attract a more diverse applicant pool.

"While [AI] is certainly about machines, the building of AI is very much about humans. When we are designing an artificially intelligent system, we are often making very human choices. If our bot learns by example, then we are responsible for setting a good example. By creating representational training sets, diversifying our development teams, and making our bot available to people of all backgrounds and demographics, we can reduce built-in biases and foster a new wave of egalitarian AI."

Ellen Kolstø, IBM design researcher

AI recruitment chatbots have incredible potential, but that doesn't mean they are free of flaws. By following the steps above, you can build better, less biased chatbots that improve your pool of candidates and expand diversity in your company. By eliminating these chatbot biases, you improve not only your company but also the world around you.

To learn how XOR’s recruitment chatbot eliminates recruiting biases and helps you hire more diverse talent, contact us today. 

Tags: AI, Recruitment, AI trends
