Shankar Vedantam: Big Data and The Human Brain

Vedantam giving a talk at the U.S. Coast Guard Academy in 2016. Photo by Cory J. Mendenhall.

CHRISTINA LUKE: A Google image search result showing brawny, shirtless white men flashed onto a large screen in the Healey Family Student Center. Standing in front of this projection of spray-tanned male models, by contrast, was Shankar Vedantam, a Harvard fellow and the host of the NPR podcast “Hidden Brain.”

In his guest lecture on Jan. 29, Vedantam shared the capabilities and caveats of big data, unveiling how the algorithms operating in computers are not too different from processes that take place inside the human brain. As a result, big data systems tend to reflect the biases of the people from whom they crowdsource their information, straying from the reality they claim to represent.

“This is the Google search for the word ‘men,’” Vedantam explained. “And this is what comes up. And you can ask yourself how representative this is of the larger world of men.”

This search result is one of many instances that show how the biases we hold as individuals get collected and consolidated in big data machines, such as Google Images, which churn out results that can be overwhelmingly unbalanced. “In some ways, [big data] is an extension of the problems — both the advantages and disadvantages — of what you see inside the mind.”

The effect of biases coded into data systems extends beyond Google searches. As big data becomes more integrated into the way professionals make decisions, disadvantaged individuals will encounter heightened difficulties in qualifying for certain statuses and privileges. Judges will use data in determining jail sentences, doctors in diagnosing patients and banks in granting loans — all with biases coded in.

When describing data-based decisions in law enforcement, such as whether or not to keep someone in jail, Vedantam noted how human error is amplified in data. “The computer might, in fact, be accurate at predicting that someone is going to get rearrested by the police, but what it doesn’t know is that the police have a bias in arresting that person. The person might not actually be a threat; all the algorithm is predicting is where the police are going to be acting.”

Commercial data practices also present some thorny implications. Vedantam revealed how people’s spending habits can be analyzed to determine loan risk. Data analysis has found that people who purchase a skull hood ornament, for instance, are less likely to repay their loans, while people who purchase scuff protectors for their furniture are more likely to do so. To laughs from the audience, Vedantam described how, at first glance, one might see why people who buy skull hood ornaments could be bigger risk takers. “But there’s something really disturbing about this,” he continued. “Do you really want decisions about whether you should get a loan to be made on whether you bought a skull car ornament or not?”

A similar question arises in how data is used in political life and campaigning. Vedantam raised concern not only over campaigns targeting advertisements to subsets of users and consumers, but also over malevolent political actors using personal data to more effectively exacerbate hostilities and pit groups against each other.

“If I convince you that I’m on your side — that I’m part of your tribe in some way — I actually have enormous influence over you in all kinds of different ways,” Vedantam said. “When we think about how we process information, we are highly skeptical of information that comes from our political opponents and we are largely gullible when it comes to information that comes from our friends.”

When a student asked how this problem can be combated, Vedantam joked that he was flattered the student would think he had the answer. But Vedantam does have an idea of where to start: “I think the real solution is for us to be skeptical of our own conclusions and how we reach conclusions. What worries me the most is that people are often not skeptical enough of what their friends are telling them. I don’t know how you solve that, but I would love to figure out another way.”

Christina Luke, AY 2019-2020