Sonia Katyal in Slate on AI, Algorithms, and Sexual Orientation

10 Oct, 2017

Sonia Katyal is Chancellor’s Professor of Law at the University of California, Berkeley and co-director of the Berkeley Center for Law & Technology. She published an article in Slate about a study by two Stanford researchers claiming that artificial intelligence can detect a person’s sexual orientation. Katyal argues that we should approach the study with caution and does not hesitate to call out where she finds the work problematic. The main issue, as she points out, is the study’s exclusion of race and of sexualities beyond “gay” and “straight.”

She writes:

The study was deeply flawed and dystopian, largely due to its choices of whom to study and how to categorize them. In addition to only studying people who were white, it categorized just two choices of sexual identity—gay or straight—assuming a correlation between people’s sexual identity and their sexual activity. In reality, none of these categories apply to vast numbers of human beings, whose identities, behaviors, and bodies fail to correlate with the simplistic assumptions made by the researchers. Even aside from the methodological issues with the study, just focus on what it says about, well, people. You only count if you are white. You only count if you are either gay or straight.

Katyal goes on to explain how using artificial intelligence to determine and label sexuality risks stripping away a person’s safety and privacy, especially in countries where members of LGBTQ+ communities are prosecuted for being queer.

A line, she argues, should be drawn between civil rights and artificial intelligence.

Read Katyal's entire argument here.