Beyond the violent footage of the respective Sydney stabbings, X is awash with pornography, Facebook is filled with AI “trash”, and TikTok has repeatedly been found to “algorithmically supercharge” anxiety. It feels like social media is deteriorating rapidly. But this content is a symptom of a broader problem.
Social media is getting worse in large part because companies are stripping cautionary investments and shifting resources away from user safety and user protections. Without serious, legally enforceable incentives, the trend toward safety minimalism will only continue, and disturbing footage like what we’ve seen this week will continue to circulate.
The eSafety Commissioner’s focus on content take-downs is like catnip for free-speech champions like X owner Elon Musk. But any critique he or like-minded users may have of these requests would be better directed at the Online Safety Act and the government’s proposed misinformation bill, which was shelved last year but now looks set to be revived.
Both instruments are positive steps forward, but not enough to confront the issue at its root. The act and the bill share elements of an increasingly outdated approach to digital platform regulation, where well-meaning policymakers have carried across principles from traditional broadcasting to digital media distribution that cannot scale, burden the wrong players, and may inadvertently stoke institutional mistrust.
As it currently stands, tech accountability amounts to regulators tailing global multinationals and issuing letters or threats of hefty fines once the harm has already happened. But it can be so much more than this.
Social media companies have deep knowledge of how their platforms work and access to real-time, granular data on operating conditions. And yet, despite this information asymmetry and capability gap between the tech giants and the government, the industry still enjoys self-regulation through an industry-crafted voluntary code, the Code of Practice on Disinformation and Misinformation.