If you’ve been to any HR or recruitment conference in the past six months, or scrolled through LinkedIn at any point ever, you won’t have been able to avoid the rise of AI, big data and bots. People across all industries are worried that they’ll soon be replaced by AI, bots or robotics. And while bots probably aren’t going to write the next smash-hit Harry Potter novel (J.K. Rowling is pretty safe by the looks of it), technology is going to continue to change the recruitment landscape. Great insight, I know.
But the people who are creating this technology are human. And given how these products are going to change the way people across all industries work, it’s important to consider how they are made, who is making them, and how a lack of diversity in one industry might affect all of us.
The evidence
There’s already a whole host of concerning trends out there. If you do a Google image search for ‘doctor’ or ‘engineer’, the results are incredibly white and male. Voice recognition software has historically struggled if you have an accent or are female. Software used to predict criminal behaviour in the US may well be biased against black people. Skewed data sets feeding healthcare AI systems mean they’re likely to give better advice to white men. I could go on.
AI systems often function as black boxes, which makes it hard to identify inequality, bias or discrimination in any particular decision a system makes. This was the case when a university study found that significantly fewer women than men were shown online ads promising help getting jobs paying more than $200,000. It may have been how the advertisers had set up their ads, but it could equally have been an unintended consequence of the machine learning algorithms that power the recommendation engines. There was just no way of knowing.
All of this stems from a lack of diversity across software development and technology. Too many people share too similar a background. Because these systems are typically designed and built from the perspective of white men, the unintended consequence is that they favour white men. For a technology that is going to change the world so dramatically, this lack of diversity is something we should all be concerned about, not accept as the norm. And if, as predicted, these systems end up making hiring and firing decisions, or even just recommendations, diversity problems in one industry could snowball across many more.
So what can you do as a recruitment manager, an HR adviser or a CEO when looking at a system that uses AI, machine learning or big data?
Always ask what’s being done to assess, measure or correct for any bias in the system you’re investing in and implementing. If you don’t get a decent answer, keep digging. Ask about the diversity of their development teams. Make sure the data sets you start out with are balanced and fair.
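One concrete thing to ask a vendor for is the system’s selection rates broken down by group. To give a feel for what that check involves, here’s a minimal Python sketch of the ‘four-fifths rule’ adverse-impact test long applied to hiring procedures; the group labels and numbers are made up for the example, not taken from any real system.

```python
from collections import defaultdict

def selection_rates(candidates):
    """candidates: list of (group, was_selected) tuples."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in candidates:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(candidates):
    """Compare each group's selection rate to the best-performing group.
    A ratio below 0.8 is the conventional red flag."""
    rates = selection_rates(candidates)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Illustrative data: 40/100 men selected vs 25/100 women selected.
candidates = (
    [("men", True)] * 40 + [("men", False)] * 60
    + [("women", True)] * 25 + [("women", False)] * 75
)
print(adverse_impact_ratios(candidates))
# {'men': 1.0, 'women': 0.625}  -> women selected at 62.5% of men's rate: investigate
```

A check like this won’t tell you *why* the rates differ, but if a supplier can’t produce even this much, that’s your cue to keep digging.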
And what about the hiring process in general?
It’s not all on the software you’re buying in.
Look at whether your attraction campaigns appeal to a diverse audience: if you’re targeting Russell Group universities for a graduate campaign, you’ve already selected an unrepresentative audience.
Review your job ads using our Readability and gender bias tool. Research has found that feminine-coded ads have only a slight effect on how appealing a job is to men, and actively encourage women to apply. But ads full of stereotypically masculine words risk putting women off applying at all.
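If you’re curious how this kind of tool works under the hood, here’s a minimal sketch of a gender-coded wording check. The word lists below are short illustrative samples chosen for the example; real tools draw on much longer lists derived from the research.

```python
import re

# Illustrative samples only; real tools use fuller, research-derived lists.
MASCULINE_CODED = {"ambitious", "competitive", "dominant", "driven", "ninja", "rockstar"}
FEMININE_CODED = {"collaborative", "supportive", "nurturing", "committed", "interpersonal"}

def gender_coding(ad_text):
    """Return the masculine- and feminine-coded words found in an ad."""
    words = re.findall(r"[a-z]+", ad_text.lower())
    return {
        "masculine": [w for w in words if w in MASCULINE_CODED],
        "feminine": [w for w in words if w in FEMININE_CODED],
    }

ad = "We want a driven, competitive self-starter to join our collaborative team."
print(gender_coding(ad))
# {'masculine': ['driven', 'competitive'], 'feminine': ['collaborative']}
```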
And keep offline checks and balances in the recruitment process so you can be sure you’re seeing a range of candidates. Name-, age- and institution-blind recruiting can all help reduce unconscious bias from recruiters and hiring managers alike. Look at adopting a ‘Rooney Rule’, or diverse interview panels to avoid ‘groupthink’.
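In practice, blind screening can be as simple as stripping identifying fields from candidate records before reviewers see them. A minimal sketch, assuming hypothetical field names (adapt them to whatever your applicant tracking system actually exports):

```python
# Fields hidden from reviewers during blind screening (hypothetical names).
BLIND_FIELDS = {"name", "date_of_birth", "university", "photo_url"}

def redact(candidate: dict) -> dict:
    """Return a copy of the record with identifying fields removed."""
    return {k: v for k, v in candidate.items() if k not in BLIND_FIELDS}

applicant = {
    "name": "Jane Doe",
    "date_of_birth": "1990-05-14",
    "university": "Example University",
    "skills": ["Python", "SQL"],
    "experience_years": 6,
}
print(redact(applicant))
# {'skills': ['Python', 'SQL'], 'experience_years': 6}
```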
Society and businesses benefit when we’ve got a diverse workforce. And as we move towards the age of AI, it’s vital that we make sure everyone gets a fair shot.