How can tech investors make sure they are not funding human rights abuses?

With the recent indictment of tech executives for ‘complicity in torture’, investors must wake up to growing risks for the sector, argues Elizabeth Chiweshenga

With four executives of French surveillance companies Amesys and Nexa Technologies indicted last month for ‘complicity in torture’ over the sale of surveillance technology to the Libyan and Egyptian governments, the potential for technology to infringe human rights is clear. What’s more, tech companies are now being held accountable for it.

Amnesty International has said: “These indictments send a clear message to surveillance companies that they are not above the law and could face criminal accountability for their actions.” Clearly, the indictment has wide-ranging implications for technology companies and their investors.

Sustainable and ESG-focused funds are particularly likely to invest in technology companies, and to bear additional reputational risk if human rights are violated. It is therefore vital that investors understand where the risks lie, and how to engage with companies developing technology in order to fully understand and mitigate the potential for harm.

While emerging technologies have brought significant social benefits and conveniences over the last few decades, trust in technology companies is eroding. High-profile scandals such as Cambridge Analytica’s harvesting of Facebook user data and the controversial use of facial recognition by US law enforcement agencies have demonstrated that tech companies know an alarming amount about us and can use that information in ways we may never have imagined.

Technology designed to carry out surveillance and automated decision-making is particularly controversial, and can be used across a number of sectors. These technologies rely heavily on the use and manipulation of personal data, raising serious questions about how they work in practice and the quality of the conclusions they draw. Given the wide range of applications, inaccuracies or inappropriate use of these technologies can have severe consequences – from enabling stalking or discriminatory recruitment practices, to causing the arrest and detention of innocent people.

For example, Apple’s AirTag and the Tile tracker, both designed to help people keep track of personal belongings, have been accused of facilitating stalking and surveillance. When used in recruitment, AI can give rise to discrimination: Amazon abandoned its attempt to develop a CV-screening tool after it was found to discriminate against women. Some facial recognition technology has failure rates as high as 46% for people with darker skin, increasing the risk of false accusations; as a result, Amazon, Microsoft and IBM have suspended sales of the technology to US law enforcement agencies.

Given the speed of change and adoption, regulators are often playing catch-up, locking the stable door long after the tech and AI companies have bolted. Regulation is now, however, seeking to restore protections that are being rapidly eroded. The UN has called for a ban on sales of private-sector surveillance equipment until human rights safeguards are in place, and some US states, along with Luxembourg, Belgium and Morocco, have banned facial recognition technology. The EU is drafting a legislative framework to govern the use of AI; if adopted, it could act as a blueprint for regulation in other regions, much as GDPR has. Updates to existing legislation on ‘dual-use’ equipment, which has both civilian and military applications, are also being considered.

So how can investors in technology companies protect themselves against any involvement in human rights abuses? The key to avoiding risk, and to being prepared for future legislation, is ensuring that investee companies undertake robust end-use due diligence and customer vetting. This includes asking any prospective investee how it understands and addresses the harms its products and services could cause in use. Advice from independent, credible experts and stakeholders is essential, and where high human rights risks are identified, enhanced due diligence and safeguards should be put in place as far as possible. Tech companies must ‘know and show’ how they address the adverse impacts that could arise from the use of their products and services.

There are, beyond doubt, extremely valuable investment opportunities to be found in the tech sector, but the reputational and regulatory risks must be carefully weighed. Sustainability funds need to be particularly alert: as major tech investors, they face greater reputational damage should any human rights controversies occur.


Elizabeth Chiweshenga is a Senior Responsible Investment Analyst at Aberdeen Standard Investments