Are Facebook and other social sites doing enough to stop hate online?

As Facebook announced it was purging the profiles of Louis Farrakhan, Milo Yiannopoulos, InfoWars and others from its platforms, designating them ‘dangerous’, the question should be asked: why didn’t it remove these accounts at the time they were found to be in violation of Facebook’s rules, rather than in one big announcement?

Or was this designed to generate positive publicity for Facebook, given its history of slow action and mounting public pressure?

Facebook recently announced that “Going forward, while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and white separatism.”

However, seven weeks after the Christchurch mosque attack, parts of the livestream video of the attack were, as of Thursday, still available on Facebook and Instagram. CNN Business reported yesterday that it had obtained nine versions of the livestream from Eric Feinberg of the Global Intellectual Property Enforcement Center, which tracks online terror-related content.

Feinberg said he had identified 20 copies of the video live on the platforms in recent weeks, and Facebook policy director Brian Fishman reportedly told the United States Congress that the company’s livestream algorithm didn’t detect the massacre because there wasn’t “enough gore”.

Clearly, in this case the algorithms need further enhancement. And while Facebook, Twitter and YouTube employ tens of thousands of content moderators, whose job is in part to enforce their codes of conduct by removing statements, videos and images that don’t comply, the process continues to be reactive and slow.
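To see why edited re-uploads keep slipping through, consider the simplest possible matching approach: exact file hashing. The toy Python sketch below is not Facebook’s actual system, and the data and names in it are hypothetical; it simply shows that a blocklist of cryptographic hashes catches byte-identical copies but misses a copy altered by even a single byte. Real platforms use fuzzier perceptual matching and classifiers, which still struggle with heavily edited variants, and that gap is part of why enforcement stays reactive.

```python
import hashlib

def exact_fingerprint(data: bytes) -> str:
    # Cryptographic hash: identical bytes in, identical fingerprint out.
    return hashlib.sha256(data).hexdigest()

# Stand-in for the raw bytes of a video already flagged for removal.
original_upload = b"frame-data-of-flagged-video"
blocklist = {exact_fingerprint(original_upload)}

# A re-encoded, trimmed or watermarked copy differs by at least one byte...
edited_reupload = original_upload + b"\x00"

# ...so an exact-hash blocklist no longer matches it.
print(exact_fingerprint(original_upload) in blocklist)   # True
print(exact_fingerprint(edited_reupload) in blocklist)   # False
```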

Further, the codes of conduct of the various providers, Google and Facebook for example, are not consistent, and the ways in which they enforce them and manage hate speech online differ.

Three weeks ago, the U.K. government released a detailed proposal for new internet laws that would dramatically reshape the ways in which social media companies like Facebook operate. While the proposal remains preliminary, the plan includes setting up an independent social media regulator and giving the U.K. government sweeping powers to fine tech companies for hosting content such as violent videos, hate speech and misinformation. As with Australia’s new Criminal Code Amendment, passed early last month, social media executives like Zuckerberg could even be held personally responsible if their platforms fail to comply.

Australia’s new laws require platforms anywhere in the world to notify the Australian Federal Police (AFP) of any “abhorrent violent conduct” being streamed once they become aware of it. Failing to notify the AFP can result in fines of up to $168,000 for an individual or $840,000 for a corporation. The laws also make it a criminal offence for platforms not to remove abhorrent violent material “expeditiously”.

In the modern world, social media has become fundamental to how we communicate. Global social media users are estimated to number more than 3 billion, with around 2 billion active Facebook accounts, and both the usage of social media and the challenges it brings are only going to grow.

The issue is how to control hate speech and incitement to violence across all mainstream social media platforms.

Ultimately, in the world of social media, legislation cannot adapt quickly enough to the ever-changing conditions of online publication and distribution of content. The answer, I believe, is not to rely on the companies themselves to do all the work; what is urgently needed is a model of regulation that can deal with the myriad challenges social media creates.

While individual countries’ laws and proposals will help, I believe the best way to ensure genuine protections and a transparent approach is an independent regulator, of the kind used to promote accountability and ethical standards in the traditional print media.

Voluntary regulation at the industry level, including the adoption of a universal code of conduct and the creation of a body to ensure its application, would provide a much more effective system for addressing these challenges. I think it would also help drive improvements in the technology used to enforce and manage the removal of hate speech and sexual and violent content across all social platforms.