Science for Truth-Building on Social Media

Last year, when the Pew Research Center asked Americans how much they trusted different sources of information, about one-third said they trusted information on social media; by contrast, 72% said they trusted the national news media. Moreover, with reports about the spread of fake news leading up to the 2016 elections appearing almost daily, disinformation on social media may well have affected the democratic process. Given the dubious state of information on the Internet, it seems unlikely that social media could help build trust in anything. And yet, if factual information on social media is to prevail, isn’t science, with its penchant for seeking truth, both the perfect tool and a test subject for building trust?

Scientific tools could help promote truth on social media, while social media, in turn, could deepen public understanding of science. In this synergistic process, scientists could use social media to show the world not only who they are but also how they think and create knowledge.

If scientists are to use social media to promote trust in their enterprise, they must first help social media with its own trust problems. One way to do this might be through newly developed artificial intelligence (AI) algorithms. Aaron Edell, a machine learning researcher in San Francisco, recently created an app that assigns a probability that an article is a trustworthy news story. The app, called Fakebox, uses a set of machine learning techniques known as natural language processing (NLP) to analyze the text of a given article and determine whether it is a “real” news story. After being trained on a known set of real and fake news stories, Fakebox performed with 95% accuracy on a set of stories it hadn’t seen before.
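
Fakebox’s internals aren’t public beyond the description above, but the general recipe it follows (train a text classifier on labeled real and fake stories, then score unseen articles) can be sketched in a few lines. The snippet below is a minimal illustration, assuming scikit-learn’s TF-IDF features and logistic regression rather than whatever models Fakebox actually uses; the tiny corpus and every name in it are invented for the example.

```python
# Minimal sketch of a fake-news classifier in the spirit of Fakebox:
# train on labeled articles, then assign unseen text a probability of
# being a "real" news story. Illustrative only; Fakebox's actual models
# and training data are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy labeled corpus (1 = real, 0 = fake). A real system would train on
# thousands of full articles, not eight headlines.
articles = [
    "Senate passes appropriations bill after week of debate",
    "City council approves funding for new water treatment plant",
    "Study in peer-reviewed journal links exercise to heart health",
    "Central bank holds interest rates steady, citing inflation data",
    "Doctors hate this one weird trick that cures all disease",
    "Secret government memo proves moon landing was staged",
    "Miracle fruit melts belly fat overnight, no diet needed",
    "Celebrity reveals vaccines implant mind-control chips",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Hold out a test set so accuracy is measured on unseen stories.
X_train, X_test, y_train, y_test = train_test_split(
    articles, labels, test_size=0.25, stratify=labels, random_state=42
)

# Bag-of-words features weighted by TF-IDF, fed to a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

# Accuracy on the held-out stories (a corpus this small says nothing
# about real-world performance).
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Like Fakebox, report a probability that a new article is trustworthy.
new_article = ["Researchers publish replication study on crop yields"]
print("probability real:", model.predict_proba(new_article)[0][1])
```

Whether such a pipeline could approach the 95% accuracy Fakebox reports depends entirely on the size and quality of the labeled training corpus; the point here is only the shape of the approach.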

Fakebox is just one example of an individual researcher using AI to combat fake news. The major social media companies, which have already collectively spent billions of dollars on AI research, need to invest resources in creating robust AI that detects and labels dubious articles on their platforms. Just last year, Google pledged $300 million to combat fake news. Its initiative comprises a myriad of efforts, pairing AI solutions with more traditional methods such as empowering authoritative news organizations and educating young people to spot questionable news sources. Facebook, which was arguably ground zero for the spread of fake news, would do well to invest more in AI efforts to stop it. However, some have argued that it was AI that allowed fake news to spread on social media platforms in the first place. As with any technology, we must use AI with caution and careful oversight.

As scientists fight distrust on social media using AI, they could use social media to show the public what the scientific process actually is and who its practitioners are. Before delving into how social media could bolster trust in science, it makes sense to ask what trust problems science actually has. According to a recent Pew poll, the American public generally trusts scientists, with 76% saying they trust scientists “a fair amount” or more (compared with 79% and 27% who said the same of the military and elected officials, respectively). So, if the public generally trusts scientists, why does it sometimes not trust the science itself?

Some of the distrust might result from fake news on social media. A particular example is revealing: an earlier Pew poll found that 37% of U.S. adults think genetically modified foods are safe to eat, compared with 88% of scientists who think the same. Similarly large gaps in agreement were found on the topics of climate change, evolution, and the safety of pesticides.

Food bloggers often capitalize on this distrust. One popular blogger, known as the Food Babe, condemns the presence of many chemicals in various foods. Often, however, her attacks reveal a lack of understanding of basic nutrition and chemistry. As ignorant fearmongers like the Food Babe continue to spread misinformation online, scientists must fight back with the truth. For every misleading post the Food Babe puts up, scientists need to put up ten more, each more detailed and clearer than the last.

Social media has the potential to be an excellent platform for this kind of trust-building. Apps that visually depict not just the scientific process but also the people who engage in it could be an entry point for building trust in science. Eventually, similar efforts could help the entertainment, medical, judicial, and even political spheres. But trust won’t come from faith alone. The public needs to understand the methods scientists use, see how experiments are set up, have access to the data scientists collect, and be familiar with the individuals who make up the scientific community. People are smart enough to understand science; perhaps they just need the right sources of information.