Will artificial intelligence need human rights training?
Despite all of the advances in the field of artificial intelligence (AI), experts caution that these technologies are not immune to some of the less-than-admirable tendencies that afflict humans.
As recently reported by the Financial Post, experts have noted growing bias in the decisions made by AI software. Specifically, AI outputs have been found to discriminate on the basis of race, ethnicity, gender and disability.
This phenomenon presents novel challenges in precisely the areas that have historically been susceptible to human lapses in judgement. One such area, as noted by Maya Medeiros, a patent and trademark lawyer, is employment and human resources management. Decisions regarding hiring, promotion and firing, Maya states, are often made or influenced by AI despite biases inherent in particular software, biases of which employers may not be aware. These discriminatory biases may manifest in various forms, from assigning negative scores in a cognitive emotional analysis of a video interview on account of an individual’s disability, to discounting an individual’s work ethic on account of a period of unemployment due to pregnancy.
In order to prevent decades of progress in the area of human rights from being undermined by what is perceived to be objective analysis on the part of AI, it is imperative that those who build, as well as those who use, the technology take accountability for its shortcomings. In that regard, Maya states that “employers need to ensure that AI embeds proper values, that its values are transparent and that there is accountability, in the sense of identifying those responsible for harm caused by the system.” One way to do this, Maya suggests, is to take advantage of AI’s ability to learn and evolve by providing it with training data that reflects diverse values early on, and to continue monitoring the data throughout the machine learning process.
Employers would indeed be well advised to take an active role in counteracting any potentially improper decisions made or influenced by AI to ensure compliance with employment and human rights laws, as it is employers themselves who are likely to be held ultimately responsible for any such violations. Additionally, it may be advisable for employers and others relying on AI to obtain contractual indemnities from the software’s developers to avoid liability for aspects of AI technology beyond their control.
The author would like to thank Alexandre Kokach for his assistance in preparing this post.