In this blog post, we’ll explore whether fake news blocking systems can strike a balance between freedom of information and accuracy. Join us as we analyze the effectiveness and limitations of blocking systems.
Introduction
Over the past few years, fake news has become a major social issue around the world. After the 2016 U.S. presidential election, in which the influence of fake news became a major point of controversy, social media giants such as Facebook and Google introduced fake news moderation systems. To filter out fake news, these systems rely on user reports and the companies' own algorithms to identify and block stories. However, debate continues over how effective these systems really are and what side effects they bring.
The impact of fake news
In the final months of the 2016 U.S. presidential election, the 20 most-engaged fake news stories on Facebook generated about 8.7 million engagements (likes, shares, and comments), surpassing the roughly 7.36 million engagements of the 20 most-engaged stories from major news outlets. This sparked intense discussion about the impact of fake news on the outcome of the election, with Facebook and Google singled out as its main conduits. In response, both companies introduced systems to block fake news and began taking other steps to restore public trust.
After the U.S. election, the impact of fake news was also highlighted in elections in other countries. For example, ahead of the French presidential election, Facebook and Google focused on curbing the spread of fake news, and subsequent analyses showed that fake news was less prevalent than it had been during the U.S. election. While these data suggest that fake news suppression systems have been somewhat effective, they also show that the effectiveness of suppression can vary with each country's media environment and how its citizens consume news.
The effectiveness of fake news blocking systems
These systems were initially reactive, responding to user reports, but they have since evolved toward proactive measures that stop fake news before it spreads. Facebook has suspended more than 30,000 accounts for spreading fake news, and Google has adjusted its search algorithms to reduce the visibility of fake news sites.
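To make the reactive-versus-proactive distinction concrete, here is a minimal Python sketch of how such a pipeline might combine the two signals. Every name and threshold below is hypothetical and chosen purely for illustration; neither Facebook nor Google has published its actual decision logic.

```python
from dataclasses import dataclass

@dataclass
class Story:
    story_id: str
    report_count: int         # reactive signal: user reports received
    classifier_score: float   # proactive signal: model's estimate that the story is fake, 0..1

# Hypothetical thresholds for illustration only; a real platform
# would tune these continuously and use many more signals.
REPORT_THRESHOLD = 50
DOWNRANK_SCORE = 0.6
BLOCK_SCORE = 0.9

def moderate(story: Story) -> str:
    """Return a moderation action for a story.

    Proactive path: a high model score downranks or blocks the story
    before it spreads widely.
    Reactive path: enough user reports sends the story to human review.
    """
    if story.classifier_score >= BLOCK_SCORE:
        return "block"                  # proactive: stop before it spreads
    if story.classifier_score >= DOWNRANK_SCORE:
        return "downrank"               # proactive: reduce visibility
    if story.report_count >= REPORT_THRESHOLD:
        return "send_to_human_review"   # reactive: reports drive review
    return "leave"

print(moderate(Story("s1", report_count=3, classifier_score=0.95)))   # block
print(moderate(Story("s2", report_count=120, classifier_score=0.2)))  # send_to_human_review
```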
However, there is still debate about how effective these blocking systems actually are. For example, fake news blocked on Facebook can spread to other social media platforms such as Twitter, as well as covertly through private messages and private groups. Furthermore, if the criteria for determining fake news are vague, legitimate news or opinions could also be blocked, raising concerns that the free flow of information could be impeded.
It has also been pointed out that even after Facebook and Google introduced their blocking systems, fake news hasn’t completely disappeared and is still spreading through other channels. This suggests that blocking systems are not perfect, and that simply suspending accounts or blocking news is not enough to solve the problem of fake news.
Limitations and side effects of blocking systems
The biggest problem with blocking fake news is that it’s difficult to instantly determine whether a story is true or false. Even traditional media outlets often publish misinformation or write stories without clear evidence, and there is a risk that these stories will be labeled as fake news. Furthermore, if the content of the news is controversial, the blocking system may reflect a one-sided viewpoint and suppress certain opinions.
For example, some users tend to report news that doesn't align with their political leanings as fake news. If these reporting systems are abused, the opinions of certain groups can be disproportionately suppressed and dissenting voices underrepresented. Facebook founder Mark Zuckerberg has commented on this, saying that "people often tend to report things they disagree with as fake news." This shows that users' subjective judgments can skew the blocking system, as the sketch below illustrates.
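Here is a small, hypothetical sketch of the abuse problem: if the system acts on raw report counts, a single coordinated group can push any story over a fixed threshold. Weighting reports by how diverse the reporters are is one conceivable mitigation; it is invented here for illustration, not a documented platform feature.

```python
from collections import Counter

def raw_report_score(reports):
    """Naive signal: every report counts equally, so a coordinated
    group can trip any fixed threshold on its own."""
    return len(reports)

def diversity_weighted_score(reports):
    """Hypothetical mitigation: discount reports that all come from one
    cluster of like-minded users. `reports` holds a coarse, inferred
    community label per reporting user."""
    counts = Counter(reports)
    largest_share = max(counts.values()) / len(reports)
    # If one cluster supplies most of the reports, scale the score down.
    return len(reports) * (1.0 - largest_share)

# 100 reports, all from one political cluster, versus 100 organic reports
# spread across several clusters: the naive score treats them identically,
# the diversity-weighted score does not.
brigade = ["cluster_a"] * 100
organic = ["cluster_a"] * 40 + ["cluster_b"] * 35 + ["cluster_c"] * 25

print(raw_report_score(brigade), diversity_weighted_score(brigade))  # 100 0.0
print(raw_report_score(organic), diversity_weighted_score(organic))  # 100 60.0
```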
In addition, preemptive blocking can suppress issues that require social discussion before that discussion can happen, depriving the public of important debate. In the absence of a clear definition of fake news, blocking stories in advance can run contrary to the basic principles of freedom of information and democracy.
Conclusion and recommendations
Current systems for blocking fake news still have many problems, and their effectiveness is limited. It is difficult to prevent fake news completely, and blocking can be biased by the subjective judgments of the system's operators. To be more effective, blocking systems need more objective criteria and algorithms for determining the authenticity of news. Moreover, blocking alone will only go so far, so it is also important to educate the public and empower people to judge the truthfulness of information for themselves.
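One way to read "more objective criteria" in practice is to combine independent signals, such as third-party fact-checker verdicts and a source's historical accuracy, rather than reacting to reports alone. The scoring rule below is a simplified, hypothetical sketch with made-up weights, not any platform's published method.

```python
def authenticity_score(fact_check_verdicts, source_accuracy, has_named_sources):
    """Combine independent signals into a 0..1 authenticity estimate.

    fact_check_verdicts: list of True/False outcomes from independent
        fact-checkers who reviewed this story (may be empty).
    source_accuracy: the outlet's historical share of accurate stories, 0..1.
    has_named_sources: whether the story cites identifiable sources.

    The weights are illustrative; a real system would calibrate them
    against labeled data.
    """
    if fact_check_verdicts:
        fact_signal = sum(fact_check_verdicts) / len(fact_check_verdicts)
    else:
        fact_signal = 0.5  # no verdict yet: stay neutral
    sourcing_signal = 1.0 if has_named_sources else 0.3
    return 0.5 * fact_signal + 0.3 * source_accuracy + 0.2 * sourcing_signal

# A story rated false by two independent checkers, from a low-accuracy
# outlet, with no named sources:
print(authenticity_score([False, False], source_accuracy=0.4,
                         has_named_sources=False))  # 0.18
```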
Furthermore, Facebook and Google need to rebuild trust by increasing the transparency of their blocking systems and giving users a clear explanation of why a story was blocked. Fake news is not just a technical problem; it is also a social and political one. Accordingly, fake news blocking systems will need to evolve to preserve freedom of information while minimizing the negative impact of misinformation on society.
Facebook and Google's blocking systems need constant improvement. They must go beyond simply screening and blocking fake news and earn users' trust in a more comprehensive and sophisticated way. To do this, platform operators will need to work with external experts to verify the veracity of news, provide clear notifications about the reasons for blocking, and set technical and ethical standards that effectively curb fake news.
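As a final sketch, here is one hypothetical shape a "clear notification about the reasons for blocking" could take: a structured, machine-readable notice that names the action, the reason, the evidence reviewed, and a way to appeal. All field names are invented for illustration.

```python
import json
from datetime import datetime, timezone

def build_block_notice(story_id, action, reason_code, evidence_urls, appeal_url):
    """Produce a transparent, user-facing notice explaining a moderation
    decision. Every field name here is hypothetical."""
    notice = {
        "story_id": story_id,
        "action": action,                 # e.g. "downrank" or "block"
        "reason_code": reason_code,       # e.g. "disputed_by_fact_checkers"
        "evidence": evidence_urls,        # links to the fact-checks reviewed
        "decided_at": datetime.now(timezone.utc).isoformat(),
        "appeal_url": appeal_url,         # lets users contest the decision
    }
    return json.dumps(notice, indent=2)

print(build_block_notice(
    story_id="s1",
    action="block",
    reason_code="disputed_by_fact_checkers",
    evidence_urls=["https://example.org/fact-check/123"],
    appeal_url="https://example.org/appeal/s1",
))
```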