Is artificial intelligence the next frontier for responsible investment?

‘Roboethics’ must become a consideration in ESG, argues Alexander Paladino

As the news media swarms around the epidemic of sexual misconduct in the workplace, many of us have been shocked to see so many of the well-known faces we’ve welcomed into our living rooms exposed as predators. Hopefully, that jolt is enough for those of us working in the technology industry to start taking a hard look at the inherent biases and insensitivities we may be building into the algorithms that increasingly power our world.
In 2016 alone, companies invested upwards of $39bn in artificial intelligence (AI)-powered technologies, and analysts anticipate the technology will contribute up to $15.7trn to the global economy by 2030, making AI one of the brightest spots in tech for the foreseeable future. It is also a field that requires a new way of thinking about the role of corporate social responsibility and ethics in the programming process.

AI requires a new way of thinking about the role of corporate social responsibility

What makes AI so exciting, of course, is the idea that it is technology that understands nuance and learns as more inputs are collected. An AI programme designed to help doctors treat cancer, for example, can analyse thousands of medical journal publications, scores of electronic health records, and individual patient histories to recommend a course of treatment. As it accumulates more and more information, the software learns which protocols work best and which do not. That unique capability makes the technology "smart." But it also allows it to be influenced by the humans who build it.
Take the recent experience of University of Virginia computer science professor Vicente Ordóñez, who, while building an AI-powered image recognition programme, noticed that the software was absorbing the unconscious biases of the researchers who built it. Studying the phenomenon further, he found that the machine-learning software was more likely to associate women with images of kitchens, shopping and washing, while images of coaching and shooting were associated with men. A similar pattern emerged in a separate study by researchers at Boston University, which used a natural language processing programme to analyse text collected from Google News and found that the programme amplified male/female gender stereotypes. In one example, when the researchers asked the programme to complete the statement "Man is to computer programmer as woman is to X," it replied, "homemaker."
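For readers who want a feel for how such stereotypes are surfaced, the sketch below shows the general word-analogy technique in Python. It is not the Boston University researchers' code; it assumes the open-source gensim library and the publicly available Google News word2vec vectors, and the actual words returned will depend on the vectors used.

```python
# Illustrative sketch only: probing pretrained word embeddings for
# gender-stereotyped analogies, in the spirit of the study described above.
# Assumes gensim is installed and the public Google News word2vec file
# 'GoogleNews-vectors-negative300.bin' has been downloaded locally.
from gensim.models import KeyedVectors

# Load the pretrained word vectors (the file is roughly 3.5 GB).
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# Solve the analogy "man is to programmer as woman is to ?" via vector
# arithmetic (programmer - man + woman), then list the nearest words.
for word, similarity in vectors.most_similar(
    positive=["woman", "programmer"], negative=["man"], topn=5
):
    print(f"{word}\t{similarity:.3f}")
```

If the training text carries gendered associations, stereotyped completions tend to rank highly in exactly this kind of query, which is how the amplification effect is measured.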
The fact is, AI learns from humans, absorbing any number of unconscious biases stemming from the gender, racial, ethnic or socioeconomic backgrounds of its programmers. In the case of the technology sector in the US, that means AI is learning primarily from a workforce that is 68.5% white and 64.3% male, according to the country's Equal Employment Opportunity Commission.
These basic facts, paired with the revelations of widespread workplace harassment, should catapult the topic of ‘roboethics’ to the forefront of tech companies’ AI agendas. As technology becomes more human, it also needs to be subjected to the type of ethical scrutiny that ensures we are not building algorithms that expose our companies to new legal, operational, and social risks. That means building internal ethics committees into the technology function and committing to rigorously testing the technology not just for its utility, but also for its neutrality.
As AI comes to be relied upon for all manner of data processing and screening tasks, the stakes of getting the ethics part of the formula right are high.
For example, a recent investigation conducted by ProPublica tested for machine bias in risk-assessment software called COMPAS, which is used by the US Department of Justice and National Institute of Corrections to conduct criminal risk assessments that help inform sentencing decisions for felony offenders. The analysis found that the system disproportionately assigned lower risk scores to white defendants than to black defendants. That kind of example demonstrates just how serious an impact machine bias can have if left unchecked.
There’s an old adage in the world of computer science that says simply: garbage in, garbage out. It means that the quality of any output is determined by the quality of the inputs.
As an industry, the tech community needs to heed that warning when building the algorithms that are central to so many exciting new breakthroughs. The alternative is a future in which our digital counterparts are no less flawed than their human creators.

Alexander Paladino is Global Managing Director of the Technology Practice Group at Thomson Reuters