Fake news is a real thing. It has been a real thing on the internet since the first bulletin boards popped up in the late '70s. Knowing human nature, I'd be willing to bet you could even find some fake news on the precursor to bulletin boards, known as Community Memory, but since that was WAY before I was born and I don't have any evidence of it, I'll stick with my original statement.
In the age of social media, especially with the ease with which Facebook lets grandma share stories, fake news has become nearly an epidemic, and fake science news in particular has done a lot of damage. This is the number one reason I decided to start the A Dash of Science podcast. The only way I can see to counter this issue is to increase the number of quality sources on the internet filled with real science, and to educate the internet community on how to properly sort real news from fake.
In that vein, I found myself reading a rather interesting article in Science News dated August 4, 2018, titled "Computer programs call out lies on the Internet" by Maria Temming.
One thing I found interesting is that the beginning of the article discusses how prevalent false news items are, and notes that many are easy to spot, such as the claim that the First Lady wanted to hire an exorcist to rid the White House of Obama-era demons. I laughed briefly until I realized that I have actual friends and family who would completely believe this and pass it along without a second's hesitation. It made me realize that unless someone is already aware of how easily fake news articles sneak into the mix, this issue is going to be significantly harder to solve than I originally thought. All I can do, however, is continue to provide tools and information. You can teach a horse to check for valid sources, as they say. OK, no one says that, but you get the point.
So how do we stop this? Over the last year or two, Facebook has been publicly making a big push to weed out fake news, which on the surface sounds awesome. However, I recently read an article in the Guardian that casts doubt on the effectiveness, and even the ethics, of this kind of self-governance: claims of passing the buck, of promoting rumors that Facebook's own fact checkers had marked as false, and even of prioritizing false news that affects advertisers. For me, one of the truest tells that Facebook doesn't actually care is its advertisers themselves. Within five minutes you can find scam sites pretending to sell items and stealing money, running as legitimate advertisers. Or my personal favorite: the commercials advertising a game that doesn't even exist, to get you to download a different, crappy money-grabbing game. Time and again I report these ads under the "scam" option. Every single time I get the same response from Facebook: "You did the right thing by letting us know about this. We looked at the ad you reported, and though it does not go against our Ad Policies we understand that you may not want to see ads like it." How does your Ad Policy not cover advertising items that don't exist?
But this article isn't about Facebook. It is about how to recognize fake news. To that end, I have pulled some rather interesting findings from the Science News article. First, while not something the average user can rely on, it is worth noting that the majority of traffic to real news sites comes from direct URL entry (the user typed the URL into the browser) or from search engine results (48.7% and 30.6%, respectively), while only 10.1% comes from social media referrals. On the flip side, false news sites get upwards of 41.8% of their traffic from social media links.
Another method, still being tested to teach bots how to distinguish reality from lies, essentially plays on the idea of six degrees of separation. Basically, the programs take nouns gathered from Wikipedia's quick-facts panel (the infobox) and see how long it takes to get from one noun to another through their relations. The idea is that the longer the path between them, the less likely the claim connecting them is true. While this might be a good rough heuristic, I am not a fan of this method. First, it requires taking Wikipedia as an unquestioned source of truth; while I trust Wikipedia over the long run, at any given moment it can be extremely wrong. Second, it seems more like a popular-belief check than a factual one. Third, if there really IS some sort of conspiracy, this would not be a way to find it. We have to be cognizant that while most conspiracy theories are not true, and the existence of one real conspiracy does not lend legitimacy to all of them, there are conspiracies that have been factually verified and even admitted in the past, so in general they do exist.
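To make the path-distance idea concrete, here is a minimal sketch in Python. The graph below is a tiny hand-built stand-in for relations mined from Wikipedia infoboxes (the entities and edges are my own illustrative assumptions, not the researchers' actual data), and a breadth-first search counts the hops between two entities:

```python
from collections import deque

# Toy knowledge graph: nodes are entities, edges are relations.
# In the real method these would be harvested from Wikipedia infoboxes;
# this hand-built graph is purely illustrative.
GRAPH = {
    "Barack Obama": ["United States", "Democratic Party"],
    "United States": ["Barack Obama", "Washington, D.C."],
    "Democratic Party": ["Barack Obama"],
    "Washington, D.C.": ["United States", "White House"],
    "White House": ["Washington, D.C."],
}

def path_length(graph, start, goal):
    """Breadth-first search: number of hops from start to goal,
    or None if no path exists."""
    if start == goal:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor == goal:
                return dist + 1
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None

# The heuristic: a short path between the two entities in a claim
# suggests the claim is more plausible; a long or missing path is a red flag.
print(path_length(GRAPH, "Barack Obama", "Democratic Party"))  # 1 hop
print(path_length(GRAPH, "Barack Obama", "White House"))       # 3 hops
```

Note that this captures exactly the weakness mentioned above: the score measures how well-connected two things are in the encyclopedia, not whether the specific claim linking them is true.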
One of the most interesting methods being employed, and the one I really wanted to share, is comparing the frequency of word use. Articles that tend to be true use words like "think," "know," and "consider" to express insight, whereas false news articles lean on words like "always," "never," and "proven" to express certainty. I've included the chart as published in Science News to give you a better idea. True articles are on the left; false are on the right.
| Words in true articles | Category (true articles) | Category (false articles) | Words in false articles |
|---|---|---|---|
| think, know, consider | Words that express insight | Words that express certainty | always, never, proven |
| work, class, boss | Work-related words | Social words | talk, us, friend |
| not, without, don't | Negations | Words that express positive emotion | happy, pretty, good |
| but, instead, against | Words that express differentiation | Words related to cognitive process | cause, know, ought |
| percent, majority, part | Words that quantify | Words that focus on the future | will, gonna, soon |
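As a rough illustration of how a program might use such word lists, here is a minimal Python sketch. The two word sets come from the first row of the chart above; the scoring rule itself is my own simplification, not the researchers' actual model:

```python
import re

# Word lists taken from the first row of the Science News chart.
INSIGHT = {"think", "know", "consider"}    # more common in true articles
CERTAINTY = {"always", "never", "proven"}  # more common in false articles

def certainty_score(text):
    """Crude red-flag heuristic: a positive score means the text leans
    on certainty words more than insight words (a possible warning sign)."""
    words = re.findall(r"[a-z']+", text.lower())
    insight = sum(w in INSIGHT for w in words)
    certainty = sum(w in CERTAINTY for w in words)
    return certainty - insight

print(certainty_score("Scientists think the results suggest a link."))      # -1
print(certainty_score("This is proven and it always works, never fails."))  # 3
```

A real classifier would weigh many such word categories together rather than a single difference, but the basic signal is the same: hedged, reflective language tilts toward true articles, while absolute language tilts toward false ones.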
Lastly, it was shown that on Twitter, real news tends to spread from a single central source (or a small handful of sources), whereas fake news spreads more through reposts of other people's reposts.
So what can we take from this? Besides learning to check the sources of our articles, and besides understanding our own biases and those of our news sources, we can check the articles themselves. These checks won't give us a definitive answer, but they should help raise a red flag that further checking is needed. Of course, none of this matters if we can't get people to read past the sensational headline before they share, but that is an entirely different hurdle.