AAAS 2019 Pt. 1: Fake News

Here’s the first part of a multi-part series on the things I learned from the AAAS 2019 conference. AAAS is the American Association for the Advancement of Science (not to be confused with the astronomers’ AAS) which works on science policy, education, advocacy, and diversity and inclusion issues. I primarily attended the conference as the co-leader of Penn State’s Women and Underrepresented Genders in Astronomy Group (W+iA) along with my colleague and fellow co-leader Emily Lubar.

Originally, I was going to have each day of the conference be a separate post, but there was far too much information in each day to make that in any way tractable. So instead, I’m breaking it down by topic – this particular topic only had one session, but future posts may combine multiple sessions on the same topic.

Here are some fun facts, resources, themes, solutions, and jargon that I learned at AAAS about…

Fake News

From the AAAS 2019 session Fighting Fake News: Views from Social and Computational Science 

Fun Facts That I Learned:

  • The most viral fake news stories were shared more before the 2016 election than the most viral real news stories.
  • Only 15-30% of people believe fake news at first glance, but that number doubles when the story resonates with their existing biases.
  • There is no measurable relationship between exposure to fake news articles and change in voting behaviour in the 2016 election.
  • 25% of highly educated Trump voters will say that the photo of Trump’s inauguration has more people in it than the photo of Obama’s – here, vocally denying a fact is another way to express an opinion.
  • There’s a low correlation between the quality of online information and its popularity, and the correlation weakens further when the quantity of information is high and your time to digest and fact-check it is low.
  • More partisan people are more vulnerable to fake news.
  • If you remove the top 10% of Twitter accounts by bot score (accounts deemed likely to be bots by the Bot-O-Meter mentioned in the next section), you get rid of almost all of the links to low-credibility sources.
  • Facebook is the social media platform that plays the biggest role in the spread of fake news, and the demographic aged 60 and above shares it at the highest rate.

Resources I Found:

  • Bot-O-Meter: An online tool for Twitter that estimates how likely an account is to be a bot, developed by the Network Science Institute (IUNI) and the Center for Complex Networks and Systems Research (CNetS) at Indiana University. You can also check an account’s followers and friends. (See the sketch after this list for one way to query it programmatically.)
  • Hoaxy: An online tool developed by the Network Science Institute (IUNI) at Indiana University that helps visualize the spread of certain claims and fact-checking across Twitter. You can see animations of the spread of certain claims over time, and which nodes in the Twitter network are likely bots (using Bot-O-Meter scores).
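
As a quick aside, Bot-O-Meter can also be queried programmatically through the botometer package on PyPI. The sketch below is my best reconstruction from the package’s documentation, so treat the credential parameter names and the result fields as assumptions that may have shifted across versions (you need your own Twitter app and RapidAPI keys), not a definitive recipe.

```python
# Hedged sketch: querying Bot-O-Meter via the `botometer` package
# (pip install botometer). Parameter names and the result structure
# follow my reading of the package README and may have changed.
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"  # placeholder, not a real key
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
    "access_token": "YOUR_ACCESS_TOKEN",
    "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",
}

bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key=rapidapi_key,
                          **twitter_app_auth)

# Score a single account; "cap" is the Complete Automation
# Probability that comes up again later in this post.
result = bom.check_account("@SETIPaperReacts")
print(result["cap"]["universal"])
```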

Overarching Themes:

  • To quote Twenty One Pilots, don’t believe the hype! Garden-variety misinformation (constant, intentional factual errors) is far more dangerous than news articles about made-up stories.
  • Fake news is more a reflection of our polarization than the cause.
  • The people who spread fake news are never exposed to the debunking material, because the two circulate on different sides of the algorithmically sorted social network (e.g. the shares of fact-checking sites on Twitter never interact with the shares of the original fake news).
  • If you put together a toy model with only two rules – social influence (each node nudges a connected node toward its position when they share across the link) and unfriending (each node has a small probability of disconnecting from a node whose position is too far from its own) – echo chambers emerge naturally. They are inherently built into the design of our current social information infrastructure. (A sketch of this model follows the list.)
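
To make that last point concrete, here’s a minimal sketch of the toy model as I understood it from the session. The parameter values, the initial random graph, and the rewiring rule are all my own illustrative assumptions, not the presenters’ actual simulation.

```python
import random

# Toy echo-chamber model: nodes hold opinions in [-1, 1]. Two rules:
# (1) social influence: a node shifts toward a neighbor's opinion when
#     they interact across a link;
# (2) unfriending: a node may cut a link to a neighbor whose opinion is
#     too distant, rewiring to a random node.
# All parameters below are illustrative assumptions.
N = 100            # number of nodes
K = 5              # initial links per node
INFLUENCE = 0.1    # fraction of the opinion gap closed per interaction
TOLERANCE = 0.5    # opinion gap beyond which unfriending can occur
P_UNFRIEND = 0.05  # chance of cutting a too-distant link
STEPS = 20000

opinions = [random.uniform(-1, 1) for _ in range(N)]
neighbors = [set() for _ in range(N)]
for i in range(N):
    for j in random.sample([x for x in range(N) if x != i], K):
        neighbors[i].add(j)
        neighbors[j].add(i)

for _ in range(STEPS):
    i = random.randrange(N)
    if not neighbors[i]:
        continue
    j = random.choice(tuple(neighbors[i]))
    gap = opinions[j] - opinions[i]
    if abs(gap) > TOLERANCE and random.random() < P_UNFRIEND:
        # unfriend: drop the link and rewire to a random new contact
        neighbors[i].discard(j)
        neighbors[j].discard(i)
        k = random.choice([x for x in range(N) if x != i])
        neighbors[i].add(k)
        neighbors[k].add(i)
    else:
        # social influence: i moves toward j's position
        opinions[i] += INFLUENCE * gap

# Echo-chamber signature: surviving links overwhelmingly connect
# like-minded nodes, so the mean opinion gap across links is small.
gaps = [abs(opinions[i] - opinions[j])
        for i in range(N) for j in neighbors[i] if i < j]
print("mean opinion gap across links:", sum(gaps) / len(gaps))
```

Run it a few times: the average opinion gap across surviving links shrinks well below what you’d expect between random pairs, and that’s the echo chamber appearing with no ranking or recommendation algorithm involved at all.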

Short-Term Solution:

  • In order to fight the fake news epidemic, we need to convince Facebook to make all social media ads and their microtargeting information public.

Long-Term Solution:

  • We need to redesign our social information infrastructure to make it harder for disinformation to propagate.

New Jargon I Learned:

  • Selective Exposure: Seeking out facts to confirm pre-existing biases.
    • Most people don’t actually do this, but those who do engage in it heavily. These people tend to sit at the extreme ends of the political spectrum (most liberal and most conservative), though those on the conservative end are doing it more in the current political climate.

Best Moment: According to Bot-O-Meter, my heavily themed Twitter account (@SETIPaperReacts) is probably a bot – 45% “Complete Automation Probability”. I only wish I could automate my paper reading!

I’m probably a bot – sad!

Other Uncategorized Thoughts:

  • When trying to understand “fake news”, we have to understand the interplay between factual truth and authenticity: they are not the same thing. It’s hard for us, as scientists, not to equate the two. It’s certainly hard for me to understand why someone in power would tell an easily disproven untruth. But telling an untruth signals that you’re flouting a norm of truth-telling, which displays your contempt for the establishment and thereby demonstrates authenticity. Authenticity wins out when the legitimacy of the system is questioned and when the elites have abandoned the public. The legitimacy of the system is questioned both for valid reasons and because of deliberately propagated disinformation (hence the fake news problem).
  • Fake news stories hang around – you have to trace their effects as long, complex network cascades that unfold over years.
  • Bots coordinating with a fake news source will retweet it within a few seconds, and they systematically reply to high-popularity accounts… but most of the retweets of that content are done by humans. Bots are like viruses, and they’re effective!

***

It’s very hard to keep track in my notes of exactly who said what, but I want to give credit to the three presenters in this session, listed below!

Presenters:

  • Brendan Nyhan, Dartmouth College, Government and Quantitative Social Science
  • Stephan Lewandowsky, University of Bristol, Cognitive Psychology
  • Fil Menczer, Indiana University Bloomington, Informatics and Computer Science
