An FBI agent investigating Secretary Clinton’s email leaks has been found dead in his apartment in an apparent murder-suicide. That’s according to the Denver Guardian, a website created for the sole purpose of disseminating fabricated, distasteful, clickbait news stories.
During the last ninety days of the American presidential campaign, eight million fake news stories circulated, while only seven million real stories were shared during the same time frame, according to DCU journalism professor Steven Knowlton.
Fake news is now more prominent and harder to ignore than ever before, having possibly influenced the results of the American presidential election. A fake story claiming that the Pope himself had endorsed Trump had over one million shares. Should it not be the responsibility of those at the top to fight this phoney ‘journalism’? Those who create these stories are not concerned about veracity; they just want to “get viewers and make some money”, as one fake news writer worryingly told The New York Times.
A fact is only a fact if it has been obtained through honest research, and that has never been truer than now. People want their prejudices confirmed, and that is why the readers of fake news don’t rigorously fact-check. Facebook’s algorithms have been built to show us material we agree with. They do this by monitoring what we click on, what we post about, what we search for on Google, who we follow, what we buy and many other aspects of our internet activity. We are being provided with a plethora of agreeable information.
With big organisations like Google and Facebook concerned only with what gets clicked and what doesn’t, you will only be shown what you already engage with, not what is truthful and not what challenges you or forces an open mind. Fake news, real news, who cares? One million readers didn’t care when they shared the story of Trump and Pope Francis.
So far, the likes of Facebook and Google have done little to impede the circulation of fake news, having only blacklisted the offending sites from their advertising networks.
Tackling the issue is no easy task.
Algorithms themselves currently lack the ability to assess content the way a human would: they don’t see the full picture, nor do they seek the truth in a particular piece of data. The algorithm simply takes into account what you already engage with and takes it from there. The issue with this, aside from its inability to decipher truth and accuracy, is that it does not show the user information that may disagree with their views. Looking at both sides plays a huge part in accurate decision-making, whatever it might involve.
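The engagement-driven ranking described above can be sketched in a few lines of code. This is a deliberately simplified, hypothetical illustration of the filter-bubble mechanism, not any real platform’s algorithm; the `rank_feed` function, the scoring weights and the sample stories are all invented for the example.

```python
# A toy engagement-based feed ranker, illustrating the filter-bubble
# mechanism: content is ordered by predicted engagement, and truthfulness
# is never consulted at any point.

def rank_feed(stories, user_interests):
    """Order stories so that popular, agreeable content rises to the top.

    A story scores higher the more its topics overlap with topics the
    user has already clicked on, multiplied by how widely it is shared.
    """
    def score(story):
        overlap = sum(user_interests.get(t, 0) for t in story["topics"])
        return overlap * story["shares"]  # agreeable + viral wins
    return sorted(stories, key=score, reverse=True)

# A user whose click history leans heavily toward one viewpoint.
user_interests = {"conspiracy": 5, "politics": 3, "science": 0}

stories = [
    {"title": "Pope endorses Trump (fake)",
     "topics": ["conspiracy", "politics"], "shares": 1_000_000},
    {"title": "Fact-checked election analysis",
     "topics": ["politics", "science"], "shares": 50_000},
]

feed = rank_feed(stories, user_interests)
print(feed[0]["title"])  # the fake but agreeable story ranks first
```

Nothing in the scoring function asks whether a story is true; under these assumed weights, the fabricated story outranks the fact-checked one purely because it matches the user’s existing interests and has more shares.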
So if not algorithms, then what? Many people have argued that humans should be the ones to decide what is truthful and what is not. Apart from the volume of information currently in need of assessment – far too high a pile to sort through – this option has a fatal flaw. Humans naturally have biases, political or otherwise. When did truth move from being absolute to subjective? We can’t afford to live in a society where truth is based on an individual’s personal biases and not on absolute proven fact. And if that is where we are, can we trust our own human filters to decide what is factual truth and what is post-truth?
People alone will not solve the problem, but neither will algorithms. The Oxford English Dictionary named ‘post-truth’ its word of the year, defining it as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief”. Fact is fact, that’s set in stone; however, the post-truth era and those influencing it have managed to make fake news, blurred lines and agreeable but untrue information the president of their republic.
Algorithms have not yet been imbued with human characteristics and humans are not made up of ones and zeros. If big corporations cannot tackle that issue, then it is up to us all to learn to discern the difference between factual truth and ‘post-truth’ truth.