Comment: The curious case of the wet signature

AI can help sustainable investors assess company positions on complex topics such as modern slavery, writes Will Martindale.

In tackling modern slavery, asset manager CCLA is a world leader. Its modern slavery benchmark, compiled by experts, assesses companies on a range of indicators in both policy and practice.

The question in the benchmark that I remember best – and I’ll explain why later – is whether the modern slavery statement is signed by the CEO, with what CCLA calls “a wet signature”.

The test is composite, as follows:

1. Is there a signature?
2. Is it just text?
3. Is it a signature, but clearly a computer-generated one?
4. Is it a physical signature?
5. Has it been printed out, physically signed and then scanned?
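As a loose illustration (this is my own sketch, not CCLA's actual methodology or weights), the tiers of such a test can be encoded as an ordered scale, with the score rising from no signature at all to a scanned wet signature:

```python
from enum import IntEnum

class SignatureTier(IntEnum):
    """Ordered tiers of signature evidence; names and weights are illustrative."""
    NONE = 0                # no signature at all
    TEXT_ONLY = 1           # the name appears as plain text
    COMPUTER_GENERATED = 2  # a signature image, but clearly computer-generated
    PHYSICAL = 3            # a physical (handwritten) signature
    WET_SCANNED = 4         # printed out, physically signed, then scanned

def signature_score(tier: SignatureTier) -> float:
    """Normalise the tier to a 0-1 score."""
    return tier.value / max(SignatureTier).value
```

On this illustrative scale a scanned wet signature scores 1.0 and a plain-text name 0.25, reflecting the assumption that each step up the scale signals deeper CEO involvement.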

The test is a proxy for CEO involvement in reviewing the statement.

Previously, analysts would first review a company’s disclosures – the modern slavery statement if there was one, the sustainability policy or perhaps the human rights policy, due diligence disclosures, annual reports and any relevant website disclosures.

But with 50 questions (the wet signature being just one) and hundreds of companies, it is not a straightforward (or, historically, a cheap) activity. And although modern slavery is a priority sustainability theme for CCLA, it is one of many.

Even if the analyst knows the company well, the task involves reviewing potentially hundreds or even thousands of pages of text, across multiple formats and in multiple languages, and recording answers to questions, many of which are subjective.

The best analogy I have is that our ability to understand and retain knowledge of any one company is equivalent to a half-pint of water, but the content we need to review the company is equivalent to a pint of water. Once the half-pint glass is full, if we assess something new, it’s likely we’ll forget something old. If we pour more water into the half-pint glass, it simply overflows.

CCLA, in partnership with Canbury, trialled a hybrid approach: LLMs used in combination with human-in-the-loop review. The work is still a pilot, but the results are encouraging.

CCLA’s questions were first translated into prompts, accompanied by scoring criteria and guidance. The prompts, many of which were composite, were then run through a Python pipeline that returned, for each question, a score, referenced quotes and reasoning (the LLM’s explanation of the proposed score).
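As a hedged sketch of what such a pipeline might look like (the function and field names here are my own assumptions, not Canbury's code), each answer can be held as a structured record of score, quotes and reasoning, parsed from the LLM's JSON reply:

```python
import json
from dataclasses import dataclass
from typing import Callable

@dataclass
class QuestionResult:
    question_id: str
    score: int         # against the question's scoring criteria
    quotes: list[str]  # referenced passages from the company's disclosures
    reasoning: str     # the LLM's explanation of the proposed score

def assess(question_id: str, prompt: str, disclosure: str,
           call_llm: Callable[[str], str]) -> QuestionResult:
    """Send one prompt plus the disclosure text to an LLM and parse its JSON reply."""
    reply = call_llm(f"{prompt}\n\nDocument:\n{disclosure}")
    payload = json.loads(reply)
    return QuestionResult(question_id, payload["score"],
                          payload["quotes"], payload["reasoning"])
```

In practice, `call_llm` would wrap a commercial LLM API, and a human-in-the-loop analyst would review each record before it feeds the benchmark.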

Over the course of a few weeks, the prompts were refined and refined again. The results moved closer and closer to CCLA’s own analyst-led scoring, first on a handful of companies, then a few more, until accuracy was high enough (and in some cases higher than analyst-led scoring) to scale across all the companies.

If you’ve experimented with the free versions of LLM tools – OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude – the paid-for versions are significantly more powerful.

The skill is in understanding the strengths and weaknesses of the tools, using them in combination, and through in-house sustainability expertise, understanding and refining the results.

It marks the start of a welcome shift in ESG data. With the advent of LLMs, investors have – for the first time – the ability to access esoteric data points at scale, assessing companies on criteria specific to their investment strategy at a cost-effective price.

Like any tool, the LLM is an extension of the user’s expertise, be it modern slavery or nature risks – or moving beyond traditional ESG issues, companies’ cashflow management.

Which gets us back to the “wet signature”.

The use of LLMs is typically considered binary. Do you use LLMs or not? Do LLMs make mistakes or not?

The hybrid approach is the most effective. The LLMs can identify if there’s a signature, they can posit whether it’s computer-generated or physical, and they can have a go at whether it’s scanned.

But better still, they can share a link to the document or documents, the page number and a view on the format of the signature as an Excel output. It takes an analyst an hour or two to review 100 companies. Previously it would take days.
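A minimal sketch of that kind of output (the company names and URLs below are invented for illustration) is a spreadsheet-friendly table of document link, page number and signature format per company:

```python
import csv

# Illustrative rows: company, document URL, page number, signature format
rows = [
    ("Example Co plc", "https://example.com/ms-statement-2023.pdf", 12, "scanned wet signature"),
    ("Sample Ltd", "https://example.com/sample-annual-report.pdf", 8, "computer-generated"),
]

with open("signature_review.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["company", "document", "page", "signature_format"])
    writer.writerows(rows)
```

A CSV like this opens directly in Excel; the analyst can then spot-check each flagged page rather than reading every statement end to end.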

LLMs are not the answer, but they are a large part of the answer, opening doors that were previously closed to responsible investors.

When it comes to CCLA’s benchmark, companies review the disclosures and, where there is room for improvement, CCLA is finding that many respond positively. Not all, but many welcome the insights, learning from the benchmark and CCLA’s in-house experts and taking steps to address the gaps.

The analysis that underpins the benchmark is making that possible.

Will Martindale is co-founder and managing director of Canbury Insights