Holding tech titans accountable amidst global uncertainty

Can tech giants create sustainable and accountable growth models?

RI New York 2019 featured a plenary titled: "Investors and tech giants: creating accountable and sustainable growth models."

Can tech giants create sustainable and accountable growth models in today’s fast-moving, uncertain regulatory environment? 

The answer is yes, they can. But investors must first demand that tech behemoths' business models are designed and operated in a way that respects and protects human rights, including the right to privacy. 

Investors should be asking these companies to do this heading into the 2020 proxy season. 

Investors should also be working with leading tech companies to ensure the frameworks and operational systems required to achieve compliance with basic legal and human rights standards are in place.

The 2019 Ranking Digital Rights Corporate Accountability Index, released in May, evaluated 24 of the world’s most powerful internet, mobile and telecommunications companies on their commitments, policies and practices affecting users’ freedom of expression and privacy. 

While some companies have aligned some of their policies and practices with RDR’s standards, which are based on the UN Guiding Principles on Business and Human Rights, implementation is selective and inconsistent. 

The UNGPs stress the importance of conducting human rights due diligence to determine and then prevent or mitigate human rights harms connected to a company’s business. Unfortunately, most major multinational tech sector companies have failed to acknowledge — let alone take responsibility for — human rights risks and other negative social impacts associated with business models and product designs. 

Companies will be better prepared for regulatory change and uncertainty if they work proactively to align their policies and practices with international human rights standards. 

The EU’s General Data Protection Regulation (GDPR) was the first step in a wave of tougher privacy measures: a new “e-privacy” regulation is under negotiation for the coming year, and a number of countries around the world have recently enacted or are working on new data protection laws in response to recent failures at Facebook, Google, Equifax, and other companies. 

A national privacy law in the United States, long overdue, now appears inevitable in some form. Even if the scope of federal privacy protection in the U.S. remains subject to debate, such a law is coming, and the European regime has set expectations for what this type of standard should look like. 

For now, California’s new privacy law will be the de facto U.S. standard in 2020. 

While Elizabeth Warren’s campaign promises to break up the tech giants may or may not be realized, stronger antitrust scrutiny of future acquisitions and mergers is a clear global trend. 

Meanwhile, the outcome of a volatile election year in the U.S. may determine how the tech giants will be regulated. Industry lobbyists are as thick on the ground as ever in Washington DC, attempting to contain the scope of regulations within bounds that will not fundamentally challenge companies’ business models.

As our societies continue to grapple with the consequences of violent extremist speech and strategic disinformation campaigns on social media, it is time for investors to step up and act. 

Divergence between public statements on privacy and human rights and lobbying actions

In public, some U.S. executives including Apple’s Tim Cook and Microsoft’s Brad Smith have called for more stringent privacy regulations, which would level the playing field and ease pressure from European regulators. 

Both Microsoft and Amazon’s Jeff Bezos have asked for clear rules around the use of facial recognition software. As far back as April 2018, Facebook’s Mark Zuckerberg even said he’d welcome more regulation of political advertising.

Investors should be demanding better alignment between public statements and the lobbying activities of the global tech players.  

A transparent globally-aligned regulatory regime would reduce the costs of setting up elaborate systems for self-policing online content, and remove at least some of the business risk of acting ethically at the expense of maximizing revenue. 

The political moment, unfortunately, does not favor decisive legislative action in the U.S. With Washington leaving a leadership vacuum, Brussels may come to drive the global tech regulatory regime, as it seeks to do on sustainable finance. 

For the foreseeable future, the U.S. tech giants will continue to operate without clear rules in their home country for respecting the public interest, the right to privacy and related human rights. 

Meanwhile, international human rights standards—particularly the UNGPs—provide the best framework around which to align tech companies’ policies and practices that affect public discourse as well as individuals’ freedom of expression and privacy. 

The Ranking Digital Rights Corporate Accountability Index sets forth a clear roadmap for what policies and disclosures companies must put in place in order to meet these human rights standards. 

Investors need to demand accountability from tech sector laggards

Some companies have taken positive steps, mainly in addressing risks posed by governments that force companies to assist with surveillance and censorship in ways that violate internet users’ rights to privacy and freedom of expression.  

The 2019 RDR Index found that Google, Facebook, Microsoft, and a number of European telecommunications companies showed strong commitment to respect and protect users’ privacy and expression rights in the face of government censorship and surveillance demands. 

Most companies that are members of the Global Network Initiative earned relatively high marks for governance in the RDR Index because they conduct human rights impact assessments when governments require them to assist with surveillance, censor content, or shut down networks. 

However, our researchers found that these same companies failed to address the human rights implications and risks related to their product design and business models. 

The human rights implications of companies’ content rules and policing mechanisms, algorithms and machine learning systems, and targeted advertising business models are evident to anybody who follows the news. Examples include deletion of Syrian war crimes documentation on YouTube, discriminatory career and housing advertising practices, incitement to genocide in Myanmar on Facebook, violent extremist grooming and recruitment by white supremacists in the United States, and the strategic use of targeted advertisement to spread disinformation in elections across the world. 

An opportunity for tech sector investors to close the corporate accountability gap

Yet accountability on these issues among companies evaluated by the 2019 RDR Index was found to be scant or non-existent. RDR examined whether companies disclosed any evidence that they conduct human rights impact assessments on how their content-policing rules are set and how their enforcement mechanisms are deployed. 

None of Google/Alphabet (which owns YouTube), Facebook, or Twitter—companies whose content and advertising issues dominate the headlines and regulatory debates—showed any evidence of such due diligence. Nor did they disclose any evidence of human rights due diligence around their deployment of algorithms, machine learning and artificial intelligence. 

Not a single company in the RDR Index was found to conduct human rights due diligence in relation to targeted advertising business models. 

Investors looking for signs that a company is positioned to get in front of regulatory uncertainty should look for improvements in the scope of company oversight and due diligence over the coming year. Ask companies for evidence that they are working to understand and mitigate all potential human rights harms stemming from every aspect of their operations, product design, and business models.  

Publicly ranking corporate conduct while we wait for legislative action

In early 2019 the Investor Alliance for Human Rights published an open letter to companies, calling on them to adhere to the human rights standards set by the RDR Index. 

Since the 2019 RDR Index was released in May, IAHR members have been using RDR Index data and indicators to develop company engagement strategies, and to draft resolutions for the 2020 proxy season. 

The number of shareholder resolutions concerned with tech giants’ impact on digital rights, and governance of related risks, rose from less than a handful in 2018 to over a dozen in 2019. 

Given the level of shareholder activity over the past year, that number is likely to increase even further in 2020 – unless the SEC slows the momentum by implementing a proposed rule raising the thresholds for submitting and re-submitting shareholder resolutions.

Data-driven corporate accountability

The next RDR Index will be published in early 2021 with more detailed indicators examining companies’ disclosure and practice around their deployment of algorithms, machine learning, and targeted advertising. 

Regardless of how long it takes for regulatory consensus to emerge, investors can use the RDR Index as a tool to push companies to take credible and verifiable steps to prevent and mitigate the human rights risks of their designs, technologies, and business practices.

Rebecca MacKinnon is Director, Ranking Digital Rights