
My Robot Overlords Just Told Me I’m A Homosexual Liberal With A Low IQ!

Are you better than AI at identifying Republicans and Democrats?

Let’s play a game of Republican or Democrat. I’m going to show you the faces of eight city councillors. Your task: can you do better than artificial intelligence at identifying their political affiliation? Okay, cue the jingle-jangle gameshow music, because it’s time for My Robot Overlords Just Told Me I’m A Homosexual Liberal With A Low IQ!

The answers are at the bottom of the post.*

Three years ago, Stanford University professor Michal Kosinski had some of his research go viral when his algorithm was able to detect sexual orientation 91% of the time for men and 83% for women, just by reviewing a handful of photos. To put that into perspective, human judges performed much worse than the algorithm, accurately identifying sexual orientation only 61% of the time for men and 54% for women. Now that’s what I call Gaydar.

The Guardian wrote at the time:

It’s easy to imagine spouses using the technology on partners they suspect are closeted, or teenagers using the algorithm on themselves or their peers. More frighteningly, governments that continue to prosecute LGBT people could hypothetically use the technology to out and target populations. That means building this kind of software and publicizing it is itself controversial given concerns that it could encourage harmful applications.

In 2018, Professor Kosinski suggested that AI would soon be able to identify (with great accuracy) people’s political views, whether they have high IQs, whether they are predisposed to criminal behavior, whether they have specific personality traits, and many other private, personal details that could carry enormous social consequences. Obviously, despite what woke social constructionist academics say, a lot of these things do correlate and overlap, so you’d expect immense predictive power for other traits once an AI can reliably assess your IQ within a 10-point range.

Last year, Kosinski was back in the news with a new study, and it turns out his predictions were correct. His new algorithm was able to predict, with 72% accuracy, a person’s political orientation (liberal or conservative, for example) from a single social media profile picture. By contrast, a regular meat computer only gets it right about 55% of the time.

But even crazier: if I have you fill out a 100-item personality questionnaire, that only yields 66% accuracy in predicting your political orientation. So a questionnaire with items like “I treat all people equally” or “I believe that too much tax money goes to support artists” is significantly worse than an AI looking at a single photo.
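To make the mechanics a little more concrete, here’s a minimal sketch of how this kind of classifier is typically built. This is my illustration, not Kosinski’s actual pipeline: real studies reduce each face photo to a fixed-length numerical embedding using a face-recognition network, then fit a simple classifier on those embeddings. The embeddings and labels below are synthetic so the script runs on its own.

```python
# Minimal sketch of an image-to-politics classifier; NOT Kosinski's code.
# Real pipelines reduce each face photo to a fixed-length embedding with a
# face-recognition network; here the embeddings are simulated with numpy
# so the script is self-contained and runnable.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, dim = 10_000, 128                       # hypothetical sample and embedding sizes
y = rng.integers(0, 2, size=n)             # toy labels: 0 = conservative, 1 = liberal
signal = rng.normal(size=dim)              # pretend some embedding directions track politics
X = rng.normal(size=(n, dim)) + 0.25 * np.outer(y - 0.5, signal)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```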

Professor Kosinski’s study has a sample of one million participants, so it’s pretty robust. However, some readers might be thinking: ‘this isn’t that impressive, because political orientation clusters with demographics.’ In the U.S., for example, white people, older people, and men are more likely to be conservative, just as low-IQ individuals are more likely to be unattractive or to engage in criminality. Hence another part of the study tested the algorithm’s accuracy when the sample was restricted to people of the same age range, gender, and ethnicity. In this case, accuracy only dropped by about 3.5%. The facial features that correlated most with political orientation were head tilt, emotional expression (sadness or anger, for example), eyewear, and facial hair. Kosinski wrote:

Liberals tended to face the camera more directly, were more likely to express surprise, and less likely to express disgust.
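For readers curious what that demographic control amounts to computationally, here’s a hedged sketch (mine, not the study’s code): score the predictions separately inside subgroups matched on age band, gender, and ethnicity, then compare with overall accuracy. Every column and value below is a toy stand-in.

```python
# Sketch of the within-stratum control (my illustration, not the study's
# code): score predictions separately inside groups matched on age band,
# gender, and ethnicity, then compare with overall accuracy. Toy data only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 5_000
df = pd.DataFrame({
    "age_band":  rng.choice(["18-34", "35-54", "55+"], n),
    "gender":    rng.choice(["m", "f"], n),
    "ethnicity": rng.choice(["a", "b", "c"], n),
    "y_true":    rng.integers(0, 2, n),
})
# Fake predictions that are right ~72% of the time, mimicking the headline figure.
flip = rng.random(n) < 0.28
df["y_pred"] = np.where(flip, 1 - df["y_true"], df["y_true"])

overall = (df["y_true"] == df["y_pred"]).mean()
within = (df.assign(correct=df["y_true"] == df["y_pred"])
            .groupby(["age_band", "gender", "ethnicity"])["correct"]
            .mean()   # accuracy inside each matched stratum
            .mean())  # averaged across strata
print(f"overall: {overall:.3f}  mean within-stratum: {within:.3f}")
# If within-stratum accuracy stays close to overall (the study reports a
# drop of only ~3.5%), demographics alone can't be doing the predictive work.
```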

I'd be interested to hear what readers think the most worrying applications of this technology could be. Can you foresee a time when every dating app has a premium feature with this technology installed? If the accuracy gets to 80-90%, will employers really be able to help themselves? What about corrupt regimes? How will a future North Korea or Saudi Arabia use this to prevent future uprisings and dissent? Comments below, please.

Regulators will probably always be playing catch-up with the possibilities afforded by AI, especially this type of technology. For example, three weeks ago the US company Clearview AI was fined £7.5 million by a UK watchdog and ordered to delete all data about UK residents. Clearview offers law enforcement agencies access to the “largest known database” of faces in the world (around 20 billion images). The UK action is just one of many legal complaints brought by agencies across the numerous countries where the company collects data. Just last February, for example, Clearview was fined €20m by Italian regulators. Meanwhile, in the US, the company reached a legal settlement, agreeing not to sell its services to private companies and individuals there.

If it all sounds very murky and nefarious, the other side of the story is that Clearview found its niche by identifying victims (often children) of sex trafficking, the very people who aren’t going to be in any regular police database (driving licenses, for example), but who might have pictures online. There are a few independent articles about this secretive work (read the NYT’s one here), but none that I can find reporting how many children have actually been saved. That obviously makes any gritty utilitarian calculus impossible, but perhaps we can outsource that to AI too.

*Answers: 1R, 2D, 3R, 4D, 5D, 6R, 7R, 8D. If you want to guess some further faces, click here.

If you liked this, please do subscribe. Maybe give it a share to spread the gospel of rationality. And if you want to support my work, you can do so with a paid Substack subscription or using the following methods:

https://www.patreon.com/Ideas_Sleep

BTC wallet address: 1KHB3Mq7njTGfquABcREsiywaxmDbP2NPY

And subscribe to my YouTube Channel here.
