In the aftermath of last week’s election, the media has been actively looking for someone to blame for President-elect Donald Trump’s victory.  The cultural blowback against Grubhub CEO Matt Maloney, who sent a company-wide email suggesting that Trump supporters resign their positions, demonstrated that a large share of Americans reject the idea that Trump won solely on the strength of racist and misogynistic voters.  Yet rather than acknowledging that Hillary Clinton may not have been a particularly electable candidate, a number of outlets have taken to blaming Facebook, and specifically the fake news stories frequently shared by its users.

The theory is not without precedent.  Forty-four percent of Americans say they get at least some of their news from the platform, and Facebook has acknowledged conducting experiments to gauge the effect different content has on its user base.  In 2010, Facebook ran an experiment in which it showed sixty-one million users one of two messages encouraging Americans to vote in that year’s congressional elections.  One group saw a simple message suggesting that they go vote, while the other saw the same message accompanied by profile pictures of friends who had indicated that they had voted.  By matching user information against publicly available voting records, Facebook determined that the people shown the version of the “go vote” message that included their friends were significantly more likely to go vote themselves.
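The comparison behind that finding is, at its core, a standard two-sample proportion test.  Below is a minimal sketch in Python of how such a test might be run; the turnout numbers are invented purely for illustration and are not Facebook’s actual data.

```python
from statistics import NormalDist

def two_proportion_z_test(votes_a, n_a, votes_b, n_b):
    """Test whether group A's turnout rate differs from group B's.

    Returns the z statistic and two-sided p-value for the
    difference between two observed proportions.
    """
    p_a = votes_a / n_a
    p_b = votes_b / n_b
    # Pooled proportion under the null hypothesis of no difference.
    p_pool = (votes_a + votes_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical numbers, loosely shaped like the experiment described
# above: a "friends voted" message group vs. a plain-message group.
z, p = two_proportion_z_test(votes_a=2_100_000, n_a=5_000_000,
                             votes_b=2_080_000, n_b=5_000_000)
print(f"z = {z:.2f}, p = {p:.6f}")
```

At sample sizes in the millions, even a fraction-of-a-percent difference in turnout comes out as highly significant, which is why an effect that small could still be detected.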

In 2012, Facebook conducted another experiment in which it deliberately filtered one large group’s news feeds toward negatively toned content and another group’s toward “upbeat” posts.  It found, once again, a significant effect: users shown more negative content became more likely to post negative content themselves, and vice versa.  Earlier this year, Facebook came under fire over allegations that its editors were suppressing conservative news stories in the site’s trending news feature.  Though it disputed the claims, Facebook replaced the human curation team, which was capable of intentional or unintentional bias, with an algorithm for selecting trending topics.
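Facebook has not published the mechanics of that algorithm, but the basic failure mode is easy to see in any engagement-driven ranking.  The sketch below is a hypothetical, purely illustrative trending score based on share velocity alone; note that nothing in it checks whether a story is true.

```python
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    shares_last_hour: int
    shares_prior_hour: int

def trending_score(story: Story) -> float:
    """Hypothetical score: reward raw volume and acceleration of shares.

    A purely engagement-based ranker like this has no notion of
    accuracy, so a fast-spreading hoax outranks a slower true story.
    """
    velocity = story.shares_last_hour
    acceleration = story.shares_last_hour - story.shares_prior_hour
    return velocity + 2 * max(acceleration, 0)

stories = [
    Story("Fabricated celebrity endorsement", 90_000, 20_000),
    Story("Verified wire-service report", 40_000, 38_000),
]

# The hoax wins the "trending" slot on engagement alone.
for s in sorted(stories, key=trending_score, reverse=True):
    print(f"{trending_score(s):>9.0f}  {s.headline}")
```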

That algorithm, however, has made it easier than ever for fake news stories to appear in users’ trending sections.  But is Facebook really to blame for the number of fake news stories shared across its site?  At what point does the responsibility to exercise a critical eye shift away from the user and onto the platform itself?