AI can predict your political ideology using just a brain scan

A group of researchers claims to have developed a system that can determine a person's political ideology by looking at their brain scans, using "state-of-the-art artificial intelligence algorithms."

Wow! Either this is the most powerful AI system in the known universe, or it's a complete ruse.

It's a ruse, of course: there's little cause to get excited. To refute the researchers' work, you don't even need to read their article. All you need are the words "politics change." We're done here.

But, just for kicks, let's look at the study itself and see how prediction models function.

How the experiment was conducted

A team of researchers from Ohio State University, the University of Pittsburgh, and New York University gathered 174 US college students (median age 21), the great majority of whom self-identified as liberal, and scanned their brains while they completed a brief battery of tests.

According to the research paper:

Each participant had 1.5 hours of functional MRI recording with a 12-channel head coil, which included eight tasks and resting-state scans.

The researchers gathered a group of young people, quizzed them about their political views, and then built a machine that flips a coin to "predict" a person's politics. Except instead of tossing a coin, it purports to do the same thing by using algorithms to analyze brainwave data.

What is accuracy? 

The AI must forecast either "liberal" or "conservative," and "neither" is not a possibility in these systems.

So, straight away, the AI isn't actually capable of predicting or recognizing political ideology. It has to pick between the data in column A and the data in column B.
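
To see how baked-in that forced choice is, here's a minimal sketch of a two-label classifier. This isn't the researchers' code; the weights and feature size are made up. The point is structural: argmax over two scores always returns one of the two options, no matter what you feed it.

```python
import numpy as np

LABELS = ["liberal", "conservative"]  # the only two answers the system can give

rng = np.random.default_rng(0)
# Hypothetical stand-in for a trained model: a weight matrix mapping
# 512 input features to exactly two output scores.
weights = rng.normal(size=(512, 2))

def predict(features):
    scores = features @ weights            # one score per label
    return LABELS[int(np.argmax(scores))]  # picking between two columns, always

garbage = rng.normal(size=512)             # any input at all, meaningful or not
print(predict(garbage))                    # still answers "liberal" or "conservative"
```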

Let's imagine I break into the AI center at Ohio State University and mash up all of their data. I replace all of the brainwaves with Rick and Morty memes, then cover my tracks so that the humans don't notice.

The AI will still predict whether the trial subjects are conservative or liberal as long as I don't modify the labels on the data.

You may either think that the computer has magical data abilities that allow it to arrive at a ground truth regardless of the data it is provided, or you can see that the illusion is the same no matter what sort of rabbits you put in the hat.
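
You can run a toy version of that thought experiment yourself. The sketch below assumes nothing about the actual study pipeline: it trains an off-the-shelf classifier on pure noise with arbitrary labels, and the pipeline still happily spits out "predictions" for every subject, at coin-flip accuracy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# 174 "subjects" whose features are pure noise -- our Rick and Morty memes.
X = rng.normal(size=(174, 512))
# The labels stay exactly as they were: an arbitrary liberal/conservative split.
y = rng.integers(0, 2, size=174)

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())  # hovers around 0.5: a coin flip
```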

The study's reported figure of roughly 70% accuracy is meaningless.

A machine that's 70% accurate at guessing a human's politics is always 0% accurate at predicting them. That's because human political beliefs don't exist as objective realities. There is no such thing as a conservative or liberal brain. Many people are neither, or a mix of the two. And plenty of liberals hold conservative attitudes and mindsets, and vice versa.

The first issue, then, is that the researchers don't define "conservatism" or "liberalism." They let the very subjects they're studying define those terms for them (and keep in mind, the students' median age is 21).

In the end, this means the data and the labels bear no necessary relation to each other. What the researchers actually built is a system that starts with a 50/50 chance of guessing which of two labels they've applied to any given record.
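
There's a standard sanity check for exactly this worry: a permutation test, where you scramble the labels, retrain, and see whether the "real" score stands out from the scrambled ones. Here's a sketch using scikit-learn's built-in version, on stand-in noise data (we obviously don't have the study's scans):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import permutation_test_score

rng = np.random.default_rng(7)
X = rng.normal(size=(174, 512))   # stand-in features (noise, for illustration)
y = rng.integers(0, 2, size=174)  # stand-in labels

score, perm_scores, pvalue = permutation_test_score(
    LogisticRegression(max_iter=1000), X, y,
    cv=5, n_permutations=100, random_state=0,
)
# If `score` sits comfortably inside the spread of `perm_scores`, the model
# does no better on the real labels than on scrambled ones.
print(f"real: {score:.2f}  shuffled mean: {perm_scores.mean():.2f}  p = {pvalue:.2f}")
```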

These algorithms all function the same way, whether they're looking for indicators of conservatism in brainwaves, homosexuality in facial features, or whether someone is likely to commit a crime based on their skin color.

They have little choice but to brute-force an inference, which they do. They're only allowed to pick from a limited set of labels, so they do. And because they're black-box systems, the researchers have no idea how any of it works, making it hard to figure out why the AI makes any particular inference.

Where does the accuracy number come from?

These tests don't pit humans against machines on anything like equal footing. The researchers construct two very different benchmarks and then conflate them.

The scientists will give the prediction task to a number of humans once or twice (depending on the controls). They'll then run the same prediction exercise hundreds, thousands, or even millions of times with the AI.

Because the scientists have no idea how the machine arrives at its conclusions, they can't simply program in the ground truth and call it a day.

They must train the AI. This entails assigning it the same task over and over (for example, analyzing data from a few hundred brain scans) and having it run its algorithms again and again.

If the machine got 100 percent on the first try for no apparent reason, they'd call it a day and declare it flawless, even though they'd have no idea why. Remember, everything takes place inside a black box.

More commonly, it fails to meet a given threshold, so they tweak the algorithm's parameters until performance improves. Imagine a scientist tuning in a radio station through static without ever glancing at the dial.
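
Here's roughly what that dial-twiddling looks like in code, under my own simplifying assumption that each tweak gets scored against the same small held-out set. Even on pure noise, the "best" score drifts well above chance, because with enough tries you eventually get lucky:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(174, 512))   # pure noise standing in for brain scans
y = rng.integers(0, 2, size=174)  # coin-flip labels

X_tr, X_ev, y_tr, y_ev = train_test_split(X, y, test_size=0.3, random_state=0)

best = 0.0
for trial in range(200):
    # Each "tweak" tries a different random subset of features -- a stand-in
    # for whatever knobs a real pipeline exposes.
    cols = rng.choice(512, size=64, replace=False)
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr)
    best = max(best, clf.score(X_ev[:, cols], y_ev))

print(best)  # drifts well above 0.5 on pure noise: luck, harvested by reuse
```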

BS in, BS out

Consider the fact that this machine only gets it right around 7 out of every 10 times. That was the best the team could come up with; if they could have made it better, they would have.

Its dataset contains fewer than 200 people, and the machine already has a 50/50 chance of guessing correctly without any data at all.

So feeding it all of this sophisticated brainwave data only improves its accuracy by 20 percentage points over chance. And that's after a group of academics from three major universities teamed up to develop "state-of-the-art artificial intelligence algorithms," as they put it.
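
For a sense of scale, here's the back-of-the-envelope arithmetic using the study's own headline numbers:

```python
n = 174                      # subjects in the study
coin_flip = 0.5 * n          # a coin gets roughly 87 of them "right"
reported = 0.7 * n           # 70% accuracy gets roughly 122 right
print(reported - coin_flip)  # ~35 people separate "state of the art" from a coin
```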

By contrast, if you gave a human a dataset of 200 symbols, each with a hidden label of 1 or 0, the average person could probably memorize the whole set in a fairly small number of repetitions, provided you told them after each guess whether they were right.

Consider the most ardent sports fan you know: how many players from the sport's history can they recall just by team name and jersey number?

Given enough time, a human could memorize the binary labels of a 200-item dataset with 100 percent accuracy.

However, hand either the AI or the human a new dataset and they'd both face the same problem: they'd have to start from scratch. It's almost certain that, given a completely fresh dataset of brainwaves and labels, the AI would struggle to achieve the same level of accuracy without further tweaking.
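
You can watch both halves of that play out with a deliberately dumb memorizer. A one-nearest-neighbor model is essentially a lookup table, so this sketch (on made-up noise data) shows perfect recall of the 200 items it stored and a coin flip on anything new:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
X_old = rng.normal(size=(200, 512))   # the dataset the model was tuned on
y_old = rng.integers(0, 2, size=200)

# 1-nearest-neighbor just memorizes: every training point is its own neighbor.
memorizer = KNeighborsClassifier(n_neighbors=1).fit(X_old, y_old)
print(memorizer.score(X_old, y_old))  # 1.0 -- perfect recall of what it stored

X_new = rng.normal(size=(200, 512))   # a genuinely fresh dataset
y_new = rng.integers(0, 2, size=200)
print(memorizer.score(X_new, y_new))  # ~0.5 -- back to the coin flip
```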

Benchmarking this prediction model is just as valuable as determining the accuracy of a tarot card reader.

Bad framing, good research

That isn't to suggest the research is worthless. I'd have nothing negative to say about a research group committed to exposing the shortcomings of AI systems; you don't resent a security researcher who uncovers a flaw.

Regrettably, that is not how this study is framed.

According to the paper:

Although the direction of causality is unclear – do people's brains reflect their political orientation, or do people choose their political orientation because of their functional brain structure – the evidence here warrants further investigation and follow-up research into the biological and neurological roots of political behavior.

This, in my opinion, borders on quackery. The implication is that, similar to homosexuality or autism, a person may not be able to choose their political viewpoint. Alternatively, it suggests that merely adhering to a given set of political ideas might rewire our brain chemistry, and at the tender age of 21, no less!

This experiment is based on a sliver of data from a small group of people who appear to be demographically similar. Furthermore, its findings cannot be verified in any meaningful sense of the scientific method: we'll never know why or how the machine predicted anything.

We need more studies like this to map out where the potential for exploitation lies with these prediction algorithms. But it's dangerous to imagine that this study has yielded anything more sophisticated than the "Not Hotdog" app.

This isn't science; it's data prestidigitation. And portraying it as a possible advance in our knowledge of the human brain simply serves to legitimize all the AI frauds that use the same technology to cause damage, such as predictive policing.