From Terminator to Blade Runner and The Matrix, stories of a world where computers break free from their human creators and take over the world have always captured our imagination.
But reality could soon surpass those tales of the tyranny of almighty cyborgs.
Some limited relief can be found in the new book by James Lovelock, the legendary scientist behind the Gaia hypothesis.
Coinciding with his 100th birthday, Lovelock has just published Novacene: The Coming Age of Hyperintelligence.
His vision of the future is this: the current ‘Anthropocene’ epoch is nearly over. The Novacene is about to begin, a new epoch dominated by intelligent electronic beings that will have designed and built themselves from our current artificial intelligence systems.
The code of life would no longer be written just in DNA, but also in other codes based on digital electronics and instructions yet to be invented.
He writes: “As a chemist, I would love to see how life in the Novacene constructs itself from the Earth’s array of elements. Instead of solar cells, think of trees connected directly to the electricity grid.”
A European Coalition of AI Investors
Inadvertently, the world of finance is already paving the way for the Novacene. Signs of how to channel capital to such a new epoch in a responsible fashion are emerging under ESG considerations, as well as ethical investing red lines.
Alongside its sister group, the High-Level Expert Group (HLEG) on Sustainable Finance, the European Commission set up an HLEG on AI.
Just a couple of weeks ago, it published a report with policy and investment recommendations for “trustworthy” AI, which followed its ethics guidelines released last April.
The HLEG-AI says investments in the EU should increase by at least €20bn per year over the next decade so as not to miss out on the promised benefits of AI.
That figure could arguably overlap with the annual €180bn over the same period that the EC estimates is needed to decarbonise the economy, especially if AI technologies can contribute to delivering on the Paris Agreement targets.
One HLEG-AI recommendation is for the creation of a European Coalition of AI Investors. It also calls on Europe to “champion the use of AI towards sustainable development in line with the UN Agenda 2030”, using the Sustainable Development Goals to measure AI’s societal impact.
The HLEG-AI’s main principles (“AI is not an end in itself, but a means to enhance human wellbeing and freedom”) are closer to the current ‘Anthropocene’ than to Lovelock’s Novacene.
Nonetheless, there are already some cutting-edge examples of AI in the investment industry.
Japan’s Government Pension Investment Fund (GPIF), the world’s largest pension pot, commissioned research from Sony CSL on the use of AI for asset manager selection, wary of the imbalance between high fees and low returns.
Naori Honda, a spokesperson for GPIF, tells Responsible Investor that the initial findings of the study’s first phase have not so far been used in manager selection.
“We are now in the second phase [of the research]. So far the team at Sony CSL analysed the trading data for domestic equities and they are working on foreign equities,” Honda says.

But could AI be used not just to select managers but to replace them altogether?
Yuval Noah Harari, the best-selling author of Sapiens and Homo Deus, wrote in the latter book about the case of an algorithm that was appointed to the boardroom of a venture capital firm.
This algorithm, called VITAL (Validating Investment Tool for Advancing Life Sciences), helped Deep Knowledge Ventures to evaluate investments in biotech start-ups back in 2014.
Dmitry Kaminskiy, co-founder and managing partner, tells RI that VITAL is by today’s standards a very basic AI system, but at the time, it served its purpose.
Given that nine out of 10 biotech start-ups fail, VITAL showed good results in detecting red flags and even displayed some levels of “over protection”, according to Kaminskiy.
“It helped to understand the practical limitations and applications of AI, and allowed us to very clearly define hype versus reality in the AI sector itself,” he says.
Kaminskiy’s subsidiary firm Deep Knowledge Analytics also produces research on the AI friendliness of publicly traded companies.
He says that by 2022, AI should be able to allow corporations to analyse financial parameters, deliver insights and audit reports to board members, shareholders and government authorities.
By 2024, Kaminskiy foresees such systems being appointed “independent entities to the board of directors” of progressive corporations that “wish to be transparent and maximally responsible towards their shareholders”.
He adds: “Perhaps in some progressive technocratic countries it will be required by law that government organisations, financial institutions and pension funds have such AI systems, which will report to the public the current situation inside the organisation.”
A common objection to AI board members centres on who has responsibility for machines’ decisions.
However, Lord Hodge, Justice of the UK Supreme Court, has entertained the idea of giving a machine separate legal personality, similar to the way that English law allows for an office occupied by a natural person to be a ‘corporation sole’.
Delivering a lecture on the potential perils of FinTech last March at the University of Edinburgh, he said:
“The law could confer separate legal personality on the machine by registration and require it or its owner to have compulsory insurance to cover its liability to third parties in delict (tort) or restitution. And as a registered person, the machine could own the intellectual property that it created.”
Responsible AI engagement and investment themes
When it comes to ESG investing and AI, there are two main considerations to bear in mind. First, AI can be an investing theme in its own right, identifying the companies that are in the business of developing such technologies. Second, there is the exposure of any given portfolio company to AI.
“Most companies do not have a good understanding of their overall AI footprint,” Christine Chow, Director at Hermes Equity Ownership Services, tells RI.
Hermes EOS has recently published a report on investors’ expectations on responsible artificial intelligence and data governance, which summarises a year’s worth of engagement with companies on this topic.
“The application of AI is not specific to tech companies. The value chain of AI could be very long and we are asking companies to understand at the group level where AI is deployed,” Chow says.
As demand for AI applications will only continue to increase, Chow says, companies need to understand their exposure both in the business-to-business and business-to-consumer contexts.
In that respect, Hermes EOS supported a shareholder proposal filed at Alphabet (Google) by Boston-based multi-family office Loring, Wolcott & Coolidge asking for the creation of a societal risk oversight committee of the board.
Chow told Alphabet’s AGM in June: “Today, I’m speaking in favour of proposal number 6. In our view, there is a gap in the necessary skills in the board to provide the required oversight.”
Such skills to oversee risks associated with AI include statistical analysis and social sciences (to understand the probabilistic nature of AI and its societal impact), as well as neuroscience (in order to grasp the functioning of, for example, artificial neural networks).
When it comes to AI as an investment theme, it is now five years since research and investment advisory firm ROBO Global launched a proprietary index of the “most promising” predominantly small- and mid-cap robotics, automation and AI companies.
ROBO Global, which is a signatory to the Principles for Responsible Investment, followed this up a year ago with a more focused AI index, which as at April 2019 had 69 constituents representing the “global value chain of AI technologies”.
On July 2, Legal & General Investment Management announced the launch of the L&G Artificial Intelligence UCITS ETF, which tracks the index.
Richard Lightbound, CEO EMEA and Asia at ROBO, tells RI that the AI theme has a “very natural tilt towards sustainable investing” with technologies that, for example, improve health care or help farmers to increase productivity while reducing water consumption.
“What is interesting is that our [ESG] policy hasn’t really had a lot of impact on the index. We probably removed a few companies that were previously in our strategy. We didn’t set out to design something that was so aligned behind ESG — it is more of an outcome.”
The ESG filter, however, screens out technologies or products being used to create weapons of mass destruction and excludes other violations of ethical norms.
From an ethical investment perspective, this is a critical risk that emerges from the AI theme. According to a report by PAX, a Dutch NGO and founder of the Campaign to Stop Killer Robots, a significant amount of AI-related investment is linked to its military applications.
PAX’s report analyses the state of AI in seven key countries, raising concerns about a new “Sputnik moment” among competing countries that could lead to a potential AI “arms race”.
In particular, PAX takes issue with Lethal Autonomous Weapons Systems, which can select and attack targets without human control. PAX offers the UK example of the Ministry of Defence-funded Taranis armed drone, which is being developed by BAE Systems.
To what extent are institutional investors already allocating capital to the AI theme?
ROBO’s Lightbound estimates that $30bn globally could now be tracking the theme in one way or another. “The $2.5bn that tracks our index globally absolutely includes pensions money,” he says.

“We were very early in the market and back then a lot of the investors were entrepreneurial high net worth individuals or private banks. For bigger institutional money it takes longer to get comfortable with the theme; it needs to be more prudent and have a bit of established choice in the market,” Lightbound says.
Deep Knowledge Ventures’ Kaminskiy says the lack of investment choices is holding back expansion.
“This situation creates a barrier preventing the conservative investment community from getting involved, who are very interested, but would prefer not to deal with venture funds and angel investments due to issues related to liquidity.”
As such, Kaminskiy says his firm will launch in 2020 an AIM-listed fund called Longevity.Capital, which aims at combining “the profitability of venture capital and the liquidity of hedge funds”.
Another way to look at AI is from the perspective of data providers.
Truvalue Labs uses AI to provide alternative ESG data insights to institutional investors, such as the UK’s £30bn Brunel Pension Partnership, State Street and GPIF. It uses natural language processing and machine learning to mine data from over 100,000 non-company sources.
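To make the idea of NLP-driven ESG mining concrete, here is a deliberately minimal sketch of scoring news snippets for ESG signal. The keyword lists, function name and headlines are all invented for illustration; production systems like Truvalue’s rely on trained language models over vast news corpora, not keyword counting.

```python
# Toy ESG signal mining: score a news snippet by counting
# hypothetical positive and negative ESG keywords.
# All keywords and headlines below are illustrative only.

NEGATIVE = {"spill", "fine", "lawsuit", "breach", "strike"}
POSITIVE = {"renewable", "diversity", "recycling", "transparency"}

def esg_score(snippet: str) -> int:
    """Crude net score: +1 per positive keyword, -1 per negative."""
    words = snippet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

headlines = [
    "Regulator issues fine over chemical spill",
    "Company expands renewable energy and recycling programme",
]
print([esg_score(h) for h in headlines])  # [-2, 2]
```

The point of the sketch is the pipeline shape, not the scoring rule: unstructured external text goes in, a per-company signal comes out, independent of what the company self-reports.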
Truvalue considers that corporations are no longer the sole authors of their own narratives, and therefore, self-reported and unaudited information has severe limitations.
According to Tom Kuh, the industry veteran and former head of ESG indices for MSCI who is now heading a new index arm for Truvalue, the use of AI to harness growing mountains of unstructured data from external stakeholders has not been properly explored.
Kuh tells RI: “If you are not looking at the volumes of unstructured data, you are missing out on what’s going on. There is a broad sense of dissatisfaction with traditional ESG research and the investment applicability of ratings.”
Kuh has just published a white paper, ESG Research in the Information Age, which provides some estimates of the “superabundance of unstructured data”.
Based on figures from the International Data Corporation, Kuh writes that 90% of the data in the world was generated over the last two years. By 2020, every person on earth will be creating 1.7MB of data every second. By 2025, there will be 163 zettabytes of data in the world; in the meantime, 2.5 quintillion bytes are being newly created every day.
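A back-of-envelope conversion puts those quoted figures on one scale, using decimal (SI) units (1 EB = 10^18 bytes, 1 ZB = 10^21 bytes):

```python
# Sanity-check the IDC-derived figures quoted above (SI units).
BYTES_PER_DAY = 2.5e18   # "2.5 quintillion bytes ... every day"
ZETTABYTE = 1e21

exabytes_per_day = BYTES_PER_DAY / 1e18
zettabytes_per_year = BYTES_PER_DAY * 365 / ZETTABYTE

print(f"{exabytes_per_day:.1f} EB created per day")      # 2.5 EB
print(f"{zettabytes_per_year:.2f} ZB created per year")  # 0.91 ZB

# At that daily rate held constant, reaching 163 ZB would take
# well over a century, so the 2025 projection implies the daily
# rate itself growing steeply.
print(f"{163 / zettabytes_per_year:.0f} years to 163 ZB at constant rate")
```

In other words, the 2019 run-rate of roughly 2.5 exabytes a day amounts to just under one zettabyte a year, which underlines how aggressive the 163 ZB projection is.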
While Lovelock’s vision may seem today exaggerated, it is clear that AI systems are doing more than just brutally devouring our data.
AI has no human rival at Go, one of the oldest and most complex board games. After beating the South Korean world champion, AlphaGo was itself outplayed by its AI successors, AlphaGo Zero and AlphaZero. Behind these projects is Google DeepMind.
Just last week, Pluribus, a Facebook AI system co-financed by the US Army Research Office, outplayed five top poker players.
AI is not only able to call a bluff but is also cutting its teeth in the arts. Earlier this year, Huawei-powered AI technology completed Schubert’s Unfinished Symphony.
Even an article like this could have been written by an algorithm which would, no doubt, draw it to a close by saying: “I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate…”