

The Christchurch terrorist attack of 15 March 2019 changed New Zealand forever. The premeditated attack on two Christchurch mosques took the lives of 51 New Zealanders and severely harmed many more. It was a direct assault on the country’s cherished ideals of multiculturalism.
By capturing an act of terror live on social media and by using the internet as a tool to boost exposure to the killings, the gunman ensured his hateful agenda was maximally amplified.
In response, New Zealand’s government-owned investors, supported by 105 local and global investors representing approximately NZD$13.5 trillion (as at 31 December 2019), began a collaborative effort to engage the world’s three largest social media companies (Facebook, Alphabet and Twitter) with a single focus: to strengthen controls to prevent the livestreaming and dissemination of objectionable content.
Over two and a half years later, we are winding up the engagement (see RI’s coverage here). We are supportive of the improvements the platforms have made to strengthen controls against the spread of objectionable content similar in nature to the Christchurch terror attack. However, the companies’ success or failure in preventing the spread of harmful content is likely to determine whether users stay on the platforms or move to alternatives. Boards and Executives must continue to evolve crucial preventative safeguards in an environment where the goalposts keep moving. Continued improvement in this area is fundamental to the basic viability of the platform businesses and to their ability to respond to a crisis.
Accountability
At the heart of the problem is the question of where accountability for objectionable content lies among platform users, platform owners and governments. Every instance of abuse or spread of objectionable content across the platforms is different, with its own contextual circumstances. Yet a common set of policies, designed by the companies themselves, applies to them all.
Regulation is emerging because the companies haven’t moved fast or far enough in taking accountability for objectionable content on their platforms. However, producing regulation of this nature is anything but simple. Every measure to find and prevent the spread of objectionable content involves trade-offs with human rights implications for both users and viewers.
Striking the right balance is extraordinarily complex. We need protection from abuse by those with intent to use the platforms maliciously, including those with decision-making powers, but, fundamentally, we also need the ability to share our views without restraints that impinge on freedom of expression. For some forms of objectionable content, such as the Christchurch attack, there are clear boundaries defining what is illegal. But many other areas involve ‘grey’ distinctions between content that is harmful but lawful, and these are extremely hard to navigate while ensuring human rights are protected.
The platforms have a major role to play, and because they can intervene far faster than any government body could, they always will. This is one of the reasons why we called for clear (and ultimate) lines of governance and accountability to senior Executives and Board members in an open letter released on behalf of the Social Media Collaborative Engagement on the one-year anniversary of the Christchurch attacks.
It is also the reason we believe our engagement has made good inroads, at least with Facebook, which has strengthened its Audit and Risk Oversight Committee charter to explicitly include a focus on the sharing of content that violates its policies. The charter now also includes a commitment not just to monitor and mitigate such abuse, but to prevent it. This notable improvement is directly attributable to the work of the engagement and represents a real strengthening of the Board’s governance and accountability on this issue. It puts the company on the front foot, working towards prevention rather than just fire-fighting problems as they arise.
However, the companies are only at the start of their journey. They must keep this issue elevated as a core focus of the Executive and the Board, with considerable resourcing and open, honest reporting on progress between boards and investors.
Independent scrutiny is vital in informing solutions
The issue of content moderation is becoming one of the defining legal and socio-political issues of our time. It deserves its own body of specialist expertise stretching across academia, law and policy. In our view, it is vital that emerging regulation be anchored in the language and law of human rights, and that any restrictions put in place be constantly assessed for balance and refined over time.
We urge the companies to open up the platforms to allow independent scrutiny of policies and related decisions and actions. This will, in due course, drive effectiveness, improvement and accountability.
Katie Beith is a Senior Investment Strategist within the Responsible Investment arm of the New Zealand Super Fund