How the battle over online content could be won
Posted 10th December 2024 | Category: News
Governments around the world are wrestling with how to regulate the harmful and toxic content that appears on social media and poisons the hearts and minds of their populations and civil society. Big tech and social media companies are investing millions of dollars in content moderation, quite apart from the toll the work takes on moderators' mental health, the energy needed to power the supercomputers used for the task and the water needed to cool them. According to former Facebook content moderator turned social activist Chris Gray, even with the increasing use of AI to assist online content moderation, the army of human moderators grew ten-fold between 2017 and 2024.
AI can reduce the number of humans required to watch the ever-mounting daily quota of content, but the AI systems need constant updating, and that updating itself relies on an army of human moderators around the world to train and retrain them. Humans will still be needed to watch and categorise content for the AI moderators, and to exercise the nuanced judgements that AI cannot yet manage.
While companies are failing to keep up with the burgeoning volume of content generated every day, governments are equally struggling to find ways to regulate them, or to compel them to take responsibility for the harmful and disturbing content that appears on their platforms.
For example, last month Australia took the bold step of requiring social media companies to take 'reasonable steps' to prevent users under the age of 16 from holding accounts with them. While this may help parents limit the harmful material their children are exposed to, one cannot help drawing parallels with King Canute and wondering how effective such efforts can be.
Yet we should not despair. Regulation, while never perfect, has worked in other spheres with powerful actors and huge financial interests. It has held to account the excesses of the print media in the UK, and governments have built testing and regulatory regimes that stop big pharma from making unsubstantiated claims for its products and from producing and promoting those found to do more harm than good.
Perhaps regulators should approach the problem of harmful online content by looking at it differently. Social media is a business, and its income depends on driving ever more viewing and engagement from us, its users. Tech companies deliberately appeal to our worst selves precisely because doing so drives that engagement. They pour vast resources into developing and deploying algorithms that manage, forward and personalise our news and entertainment feeds, deliberately amplifying content we will find controversial, alarming and divisive, because that creates a spiral of increased viewer activity and engagement.
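To make that mechanism concrete, here is a deliberately simplified, hypothetical sketch in Python of the kind of engagement-optimised ranking described above. The post attributes, weights and names are illustrative assumptions invented for this example; they do not describe any real platform's system.

```python
# A deliberately simplified, hypothetical illustration of engagement-optimised
# feed ranking. The attributes and weights below are assumptions made up for
# this example; they do not describe any real platform's algorithm.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_click_rate: float     # how likely the user is to open the post
    predicted_share_rate: float     # how likely the user is to reshare it
    predicted_outrage_score: float  # how divisive the model predicts it to be

def engagement_score(post: Post) -> float:
    """Rank purely by predicted engagement: divisive content that drives
    clicks and shares rises to the top, regardless of potential harm."""
    return (
        1.0 * post.predicted_click_rate
        + 2.0 * post.predicted_share_rate
        + 1.5 * post.predicted_outrage_score  # outrage is rewarded, not penalised
    )

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort candidate posts so the highest-engagement items appear first.
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("calm-news", 0.20, 0.05, 0.10),
        Post("divisive-rant", 0.25, 0.30, 0.90),
    ])
    print([p.post_id for p in feed])  # the divisive post ranks first
```

Regulation of the kind proposed below would target the incentives built into a function like this, for instance by forbidding signals that predict division or harm from boosting a post's ranking, rather than policing each individual post after it has been published.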
Rather than trying to control the output we see on social media (and other platforms) through content moderation, regulators should focus on the input: the machinery of the online systems themselves.
Therefore, to counter these tendencies in big tech companies and in ourselves, regulation should turn its gaze from the Sisyphean task of controlling the content itself to regulating the algorithms companies use to drive engagement, the algorithms that appeal to our worst selves. In short, we need regulation that reverses the process by which harmful content is deliberately incentivised by corporate algorithms. That may be the solution we need to tackle the exponential rise of harmful content on big tech platforms.