A major social media-focused collaborative engagement initiative led by the New Zealand Super Fund (NZSF) is being wound up after the investors said the platforms are “highly unlikely to install measures to absolutely prevent the spread” of objectionable content.
According to an external review of the programme, highlighted in NZSF’s annual report last week, social media platforms have made “reasonable efforts” and “material progress” to reduce the spread of offensive or dangerous content, but there is an unavoidable time delay before they can classify content as objectionable.
An additional report notes that Facebook, Twitter and Alphabet – the main targets of the initiative – have continued to decline investor requests for a meeting with a board member.
Following the 2019 Christchurch terrorist attack, NZSF set up a shareholder campaign with fellow state-backed investors the Accident Compensation Corporation, the Government Superannuation Fund, National Provident Fund and Kiwi Wealth.
The initiative, which counted 103 investor members with $13.5trn under management and included big players like Aviva, HSBC, Nomura and Northern Trust, aimed to ensure the tech giants strengthened controls to prevent the live-streaming and dissemination of objectionable content.
On its winding down, NZSF's report said:
“As a collective, we take great comfort in the fact that the measures introduced by the platforms have a high likelihood of significantly mitigating the scale of the spread of future objectionable content… We also note and understand the reality that the platforms are highly unlikely to install measures to absolutely prevent the spread of content documenting events similar to that of the Christchurch attack.”
“Therefore, as we move to wind up this collaborative engagement, we do so with the message to Facebook, Twitter and Alphabet that we expect them to avidly continue to make efforts to reduce the classification time-delay, evolve crucial safeguards and remain focused on this issue,” it added.
“Each shareholder will continue to monitor this as a serious business risk.”
NZSF commissioned the think-tank Brainbox Institute to conduct the external review of its efforts, explaining that it is a "difficult job… for investors to assess whether or not these changes are appropriate for the scale of the problem."
In its conclusions, Brainbox found that the measures taken by the platforms are highly likely to significantly mitigate the scale of dissemination of future objectionable content. But it also highlighted that “all measures to find and prevent the spread of objectionable content have trade-offs between the human rights of those exposed to the objectionable content and those using the platforms to share content, whether objectionable or not.”
For example, Brainbox said that the use of automated classification in content moderation can lead to inaccuracies and either allow objectionable content onto platforms or block non-objectionable content. Such AI may also be biased and have discriminatory effects.
Facebook, Twitter and Google/Alphabet have been contacted for comment.