Diversity and inclusion is a key priority for global businesses. Forward-thinking companies are actively hiring for cognitive diversity and embedding data-driven strategies to turbocharge their D&I agenda. Disruptive technologies such as artificial intelligence (AI) and machine learning are opening up new ways to derive novel insights and augment decision making for more diverse and inclusive workplaces. The digital age is no longer a 'thing'. It just is. But key questions remain. Can machines really replicate human behaviours in decision making and produce results that lead to a fairer and better society? What risks do they present? How must companies prepare for increasing regulation of AI across most sectors and geographies? In this first part of our AI blog series, we set out some key takeaways from our recent conversation with Dr Vivienne Ming, theoretical neuroscientist and prominent AI researcher in human-related applications, on the topic of AI and bias in business.
AI is not a magic wand
Western society is fundamentally better off than at any time in history. We live longer, we suffer less from disease and ill health, and we are generally more affluent. Yet diversity and inclusion remains a key issue for many CEOs and boards. We know from empirical studies that diverse and inclusive teams outperform their competitors. We also know that technology can be a critical enabler. Yet it can be very difficult for leaders to know where to start or how to make real change. Many companies are actively looking to widen the recruitment net by using AI and machine learning to predict the best candidates for hiring.
This area carries risk, and controls are needed to safely test possible new solutions. In trials, for example, AI tools used to predict the best candidates for roles have proved consistently biased against women. This was not a fault in the algorithms themselves: the tools were trained on historical internal data on appointments and promotions, which had predominantly favoured white, male candidates. The key lesson is that there is no AI magic wand that can be waved to deliver inclusive hiring without a data set that represents the population organisations wish to recruit from.
AI mirrors human flaws
Even when data is cleansed of any gender markers, an AI can still find patterns that recommend biased hiring, because this is what the data used to train the model suggests. One study found, for example, that a model favoured verbs more commonly used by male candidates, such as “executed”. As most global organisations are actively exploring how to automate portions of the hiring process, key aspects concerning bias, fairness, transparency and justifiability (or explainability) must be considered, and specialist support can be critical, given the potential consequences of getting it wrong. A company’s reputation may be quickly damaged by adopting AI systems that discriminate against, or treat unfairly, some groups of people.
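To make this mechanism concrete, here is a minimal, purely illustrative sketch in Python. The data is synthetic and the feature names and probabilities are invented for illustration (not drawn from any real study): it shows how a writing-style feature that merely correlates with gender can carry historical bias into a model, even after the gender column itself has been removed.

```python
import random

random.seed(0)

# Synthetic, illustrative data only. Each candidate has a gender (which the
# model never sees) and a writing-style feature (uses "executed" in their CV)
# that correlates with gender. Historical hiring decisions favoured men.
def make_candidate():
    male = random.random() < 0.5
    uses_executed = random.random() < (0.7 if male else 0.2)  # proxy feature
    hired = random.random() < (0.6 if male else 0.2)          # biased labels
    return uses_executed, hired

data = [make_candidate() for _ in range(10_000)]

def hire_rate(keep):
    """Historical hire rate among candidates matching the predicate."""
    outcomes = [hired for uses, hired in data if keep(uses)]
    return sum(outcomes) / len(outcomes)

with_word = hire_rate(lambda uses: uses)
without_word = hire_rate(lambda uses: not uses)
print(f"hire rate, CV uses 'executed':      {with_word:.2f}")
print(f"hire rate, CV avoids the word:      {without_word:.2f}")
# Gender was never in the feature set, yet the proxy word still predicts the
# biased outcome, so any model trained on these labels will learn to favour it.
```

The point of the sketch is that removing the protected attribute does not remove the bias: the correlation between the proxy feature and the biased labels survives, and a model fitted to this data would reproduce it.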
Only human beings can explore the unknown
So what are AI and machine learning good for? We know that they can be extremely valuable for automating specific, repeatable, complex and monotonous tasks. This is not to say that chatbots are a substitute for human capital, or that radiologists, actuaries or doctors, to name a few, no longer have jobs. It does, however, allow leaders to automate certain types of work to free up people to use their uniquely human abilities - judgment, empathy, creativity, communication and insight, which no AI can yet replace, despite the hype surrounding the potential opportunities - to generate greater and more meaningful value. In this sense, AI could mean 'augmented intelligence'. For instance, AI systems are very poor at judging human promises or intentions from facial expressions or body language. Indeed, most humans are not perfect at assessing each other's intentions or promises. We are better at this task when we know a lot of contextual information about the specific person making the promise, as in the case of close family members or long-term colleagues, and so can assess their reliability. AI may eventually become better at this assessment challenge, perhaps after several more decades, but not without an ability to incorporate contextual and historical information about the person making the promise.
This is not the industrial revolution
There are also profound opportunities for leaders to leverage technology to help solve one of the thorniest problems of the future of work - the war for talent. Most organisations are competing for the same elite pool of talent. If technology could enable organisations to make the talent pool bigger, the collective effect could be transformational. There are a number of exciting use cases where AI and machine learning can be used to test, across a broader population and agnostic of gender or race, for attributes that are predictive of inclusion (which studies show are similar to those predictive of innovation). Individual attributes such as personal resilience in the face of obstacles may be better predictors of eventual career success than what subjects a person studied at university, which university they attended, or even whether they went to university at all. Yet these human qualities - desirable for organisations competing in the inclusive and digital age, centred on small, ever-changing, multi-disciplinary and multi-cultural teams of diverse talent rapidly assessing and solving novel and complex problems - are either not captured in the majority of hiring processes or not fostered in traditional systems of people management, which have largely evolved from the Industrial Revolution.
The case for leadership
Actively finding and hiring people who are different is only one part of the problem. Organisations must also retain talent, and here there is an inherent paradox of innovation and diversity. Some studies have shown that the people most likely to offer creative or novel solutions are also the most likely to be regarded as the ‘outcrowd’ in established organisations, and the ones whose ideas or points of view are most likely to be excluded. It is here that real leadership is needed to build trust in teams where the outsider’s perspective is valued. Whilst AI can fundamentally change the economics of the D&I problem business leaders are looking to solve, it is still only a tool. Leaders need to hire creative people and enable a workplace culture where people can explore the unknown in a trusted environment. True innovation means we don’t know the answer yet - we need to test and see. Here, working with specialists to help teams understand the risks and navigate the opportunities, safely, can be hugely helpful.
Listen to the top 5 takeaways from our recent discussion with Dr Vivienne Ming here.
NRF Technology Consulting Practice is an innovative disruptive-technology advisory service pioneered by Norton Rose Fulbright to help our global clients navigate the risks and explore the opportunities that disruptive technologies present. We combine specialist knowledge of technology in one sector with fresh thinking to help clients solve problems in another. Our global platform means we can provide a holistic service to our clients.
This article was first published in People Management Magazine.