Why Stanford Researchers Tried to Create a ‘Gaydar’ Machine

By Heather Murray.

Whether [Dr Michal Kosinski] has now created “A.I. gaydar,” and whether that’s even an ethical line of inquiry, has been hotly debated over the past several weeks, ever since a draft of his study was posted online.

Presented with photos of gay men and straight men, a computer program was able to determine which of the two was gay with 81 percent accuracy, according to Dr. Kosinski and co-author Yilun Wang’s paper.

The backlash has been fierce.

“I imagined I’d raise the alarm,” Dr. Kosinski said in an interview. “Now I’m paying the price.” He’d just had a meeting with campus police “because of the number of death threats.”

Advocacy groups like Glaad and the Human Rights Campaign denounced the study as “junk science” that “threatens the safety and privacy of LGBTQ and non-LGBTQ people alike.” …

How it works (“anyone” can do it):

Dr. Kosinski and Mr. Wang began by copying, or “scraping,” photos from more than 75,000 online dating profiles of men and women in the United States. Those seeking same-sex partners were classified as gay; those seeking opposite-sex partners were assumed to be straight.
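
That labeling rule is simple enough to sketch in a few lines. The field names below are assumptions for illustration only; the article doesn't describe the actual data schema.

```python
# Hypothetical profile records; "gender" and "seeking" are assumed field names.
profiles = [
    {"photo": "a.jpg", "gender": "male",   "seeking": "male"},
    {"photo": "b.jpg", "gender": "male",   "seeking": "female"},
    {"photo": "c.jpg", "gender": "female", "seeking": "female"},
]

def label_orientation(profile):
    """Label a profile 'gay' if the sought gender matches the user's own,
    'straight' otherwise -- the proxy the study relied on."""
    return "gay" if profile["seeking"] == profile["gender"] else "straight"

labeled = [(p["photo"], label_orientation(p)) for p in profiles]
```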

Some 300,000 images were whittled down to 35,000 that showed faces clearly and met certain criteria. All were white, the researchers said, because they could not find enough dating profiles of gay minorities to generate a statistically valid result.
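
The article doesn't spell out the filtering criteria, but the step is roughly "keep only photos with one clearly visible face." A rough sketch of that kind of filter, using OpenCV's stock frontal-face detector purely as a stand-in for whatever tooling the researchers actually used:

```python
import cv2  # OpenCV; a stand-in, not the researchers' actual tooling

def shows_one_clear_face(path, min_size=(80, 80)):
    """Keep an image only if exactly one sufficiently large frontal face is detected."""
    img = cv2.imread(path)
    if img is None:
        return False
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = detector.detectMultiScale(gray, scaleFactor=1.1,
                                      minNeighbors=5, minSize=min_size)
    return len(faces) == 1

# kept = [path for path, label in labeled if shows_one_clear_face(path)]
```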

The images were cropped further and then processed through a deep neural network, a layered mathematical system capable of identifying patterns in vast amounts of data.

Dr. Kosinski said he did not build his tool from scratch, as many suggested; rather, he began with a widely used facial analysis program to show just how easy it would be for anyone to pull off something similar. …
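
The article doesn't name the program, so the following is only a sketch of the general recipe such off-the-shelf tools make possible: extract a face descriptor with a pretrained model, then fit an ordinary classifier on top. The `face_recognition` library and the placeholder photo paths here are my assumptions, not the authors' actual pipeline.

```python
import numpy as np
import face_recognition                      # off-the-shelf, dlib-based face descriptors
from sklearn.linear_model import LogisticRegression

def face_descriptor(path):
    """Return a 128-d face embedding, or None if no face is found."""
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    return encodings[0] if encodings else None

# Placeholder data; in practice this would be the labeled, filtered photo set.
labeled = [("a.jpg", "gay"), ("b.jpg", "straight")]

X, y = [], []
for path, label in labeled:
    vec = face_descriptor(path)
    if vec is not None:
        X.append(vec)
        y.append(1 if label == "gay" else 0)

# A plain logistic regression on top of fixed embeddings -- nothing is trained
# from scratch, which is the point Dr. Kosinski was making about accessibility.
clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
```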

Much better gaydar than humans:

The authors were then ready to pit their prediction model against humans in what would become a notorious gaydar competition. Both humans and machine were given pairings of two faces — one straight, one gay — and asked to pick who was more likely heterosexual.
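
In classifier terms, the pairwise task reduces to comparing the model's scores for the two faces. A sketch of how that accuracy could be computed, reusing the hypothetical `clf` and `face_descriptor` from the sketch above:

```python
def gay_probability(clf, path):
    """The model's estimated probability that the face at `path` belongs to a gay person."""
    return clf.predict_proba([face_descriptor(path)])[0, 1]

def pick_straight_face(clf, path_a, path_b):
    """For a (gay, straight) pairing, guess that the lower-scoring photo is the
    straight one, mirroring the task given to the human judges."""
    if gay_probability(clf, path_a) < gay_probability(clf, path_b):
        return path_a
    return path_b

# Pairwise accuracy over held-out pairs, e.g. pairs = [("gay1.jpg", "str1.jpg"), ...]:
# accuracy = sum(pick_straight_face(clf, g, s) == s for g, s in pairs) / len(pairs)
```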

The participants, recruited through Amazon Mechanical Turk, a crowdsourcing marketplace for small digital tasks, were advised to “use the best of your intuition.” They made the correct selection 54 percent of the time for women and 61 percent of the time for men, only slightly better than flipping a coin.

Dr. Kosinski’s algorithm, by comparison, picked correctly 71 percent of the time for women and 81 percent of the time for men. When the computer was given five photos for each person instead of just one, accuracy rose to 83 percent for women and 91 percent for men. …
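
One plausible way to use several photos per person is to average the per-photo scores before making the comparison; the paper's exact aggregation rule isn't described in the article, so this is only an illustrative extension of the earlier sketch:

```python
import numpy as np

def person_score(clf, photo_paths):
    """Average the model's per-photo score across all of a person's photos."""
    vecs = [face_descriptor(p) for p in photo_paths]
    vecs = [v for v in vecs if v is not None]
    return clf.predict_proba(np.array(vecs))[:, 1].mean()

# With five photos per person, the pairwise guess becomes:
# guess = "a" if person_score(clf, photos_a) > person_score(clf, photos_b) else "b"
```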

Why does it work?

To account for a link between appearance and sexuality, Dr. Kosinski went further, drawing on what his study called “the widely accepted prenatal hormone theory (P.H.T.) of sexual orientation,” which “predicts the existence of links between facial appearance and sexual orientation” determined by early hormone exposure.