New research on the use of an AI “gaydar” raises legal as well as ethical questions

September 21, 2017

A recent article in the Guardian raises interesting ethical questions about the use of an AI algorithm which, according to recently published research, can determine whether someone is gay from their photograph with 91 per cent accuracy.

The article highlights concerns that the technology could be used to discriminate on grounds of sexual orientation. Given the billions of facial images stored on social media sites and in government databases, the reporter suggests that public data could be used to detect people’s sexual orientation without their consent; governments that continue to prosecute LGBT people, for example, could hypothetically use the technology to out and target those populations.

In addition to the specific concerns raised in the article, use of this technology could also have implications in the employment field, in particular in relation to discrimination on grounds of sexual orientation in recruitment. Best practice already dictates that, as a rule, photographs should not be used in the job application process unless necessary for the purposes of national security. However, applicants’ photographs could be accessed via social media and screened by AI. If the technology were used to filter out applicants on grounds of sexual orientation at this early stage of the process, without the applicants’ knowledge, on what basis could an applicant bring a claim?

As the article points out, building and publicising this kind of software is itself controversial, given concerns that it could encourage not only the harmful applications highlighted in the article but also those possible in the employment field. The authors of the research, however, argue that the technology already exists and that it is important to expose its capabilities so that governments and companies can proactively consider the risks and the need for safeguards and regulation.