As we gradually emerge from the pandemic with technology more deeply embedded in our daily lives, questions about the vast amounts of data being collected, including its scope, its security, and its use in predictive technologies and artificial intelligence (AI), are becoming increasingly important.
With this rise in prominence has come greater awareness of the ‘darker’ side of AI, highlighted by mainstream Netflix hits such as Coded Bias, and specifically of the ways in which machine learning algorithms have deepened discrimination and inequality.
Technology itself, and the data we collect on a daily basis, is not inherently ‘good’ or ‘bad’. However, as humans, we can program our biases into the technology we create. Existing racial, social, and gender biases can be built into the algorithms we develop, often unconsciously, or the data available to train those algorithms can be biased in ways that inevitably skew the outputs. In the US healthcare system, for example, an algorithm used to determine healthcare risk and the need for extra medical care was found to have a racial bias, favouring extra care for white patients over Black patients. This stemmed from the algorithm being trained on previous patients’ healthcare spending, a very poor indicator of actual healthcare needs in the US, given its privatised healthcare system, unequal distribution of financial wealth, and structural racism.
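The mechanism behind that healthcare example, often called proxy-label bias, can be illustrated with a minimal sketch. The numbers and the access gap below are entirely hypothetical, invented for illustration: two groups have identical medical need, but one group's historical spending is lower because of unequal access to care, so a model that ranks patients by spending under-selects that group for extra care.

```python
# Hypothetical sketch of proxy-label bias. Each patient is a
# (need, spending) pair. Group A's spending tracks need exactly;
# group B spends only 60% as much for the same need (an assumed
# access gap, not real data).
group_a = [(n, float(n)) for n in range(1, 11)]   # needs 1..10
group_b = [(n, 0.6 * n) for n in range(1, 11)]    # same needs, lower spend

def flag_for_extra_care(patients, spend_threshold=5.0):
    """Flag patients whose *spending* (the biased proxy) exceeds a
    threshold, and return their true underlying need levels."""
    return [need for need, spend in patients if spend > spend_threshold]

flagged_a = flag_for_extra_care(group_a)
flagged_b = flag_for_extra_care(group_b)

# Despite identical needs, group A has 5 patients flagged for extra
# care while group B has only 2.
print(len(flagged_a), len(flagged_b))  # 5 2
```

The model never sees a "race" variable; the disparity enters entirely through the proxy target, which is why auditing what a label actually measures matters as much as auditing the input features.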
The concept of equitable AI reflects the notion that AI systems should be designed in a way that ensures human or social biases do not translate into algorithms. This ethical approach to designing and implementing AI systems is vital, given the significant risks of leaving biased AI algorithms unchecked. The AI market is projected to grow significantly over the coming years. According to research by Fortune Business Insights, the industry is expected to experience compound annual growth of 33%, from $47.47 billion in 2021 to $360 billion in 2028. In addition, recent research by McKinsey finds that two thirds of companies plan to increase investment in AI over the next three years. As such, deeper attention to equality in data analysis is essential to avoid biases being further embedded and magnified.
‘We can work towards developing equitable and more ethical machine learning algorithms and ensure that the historical biases and prejudice we are seeking to eradicate in society do not continue to be replicated by AI’
The risk of basing decisions on biased data is also increasingly relevant for the investment community. Responsible investors are expected to possess an intimate understanding of portfolio companies’ impacts across all levels of the supply chain, with the incorporation of complex social issues a crucial component of this. Concurrently, heightened scrutiny means that to avoid claims of ‘impact-washing’ or ‘greenwashing’, due diligence processes must be watertight in their assessment of how company activities impact a diverse selection of stakeholders. On top of this, for impact investors, determining the real-world impact of their investment decisions is vital to meeting their performance targets and securing long-term success.
While AI is often characterised as critical to business success and scaling, what is less talked about in the tech and AI sector is how we can leverage data and machine learning to stimulate and accelerate social change, justice, and equity.
We recognise that there is huge untapped potential in the vast amounts of text data related to social change and impact around the world. At ImpactMapper, we have developed analytical software tools to surface trends from evaluation and research reports, grantee or investment reports, stories of change, interviews, and Corporate Social Responsibility briefs to create aggregate-level quantitative indicators that measure not only social change but also progress on climate change, sustainability, social justice, and human rights.
We are seeing many biases built into AI, with algorithms trained on datasets that do not represent the diversity of perspectives in our communities. We are therefore working to align data collection with social justice principles, which will inevitably lead to alternative, more equitable outputs than traditional data analysis.
Imagine a world where voices from underrepresented groups, such as human rights activists, social justice activists, people of colour, girls, adolescent youth, members of the LGBTQI+ community, and people living with disabilities, among so many others, were prioritised. By creating databases that are pro-social, pro-equality, and pro-diversity, we can shift power imbalances and harness the power of machine learning and AI to mobilise social good and equity. When we train our databases on the vast amounts of social change data that exist in the form of evaluations, research reports, and reports on progress from nonprofit organisations and social movements around the world, we start to leverage and harness the power of data for social change and good in a way that we have never seen before. In doing so, we can work towards developing equitable and more ethical machine learning algorithms and ensure that the historical biases and prejudice we are seeking to eradicate in society do not continue to be replicated by AI.
ImpactMapper partners with like-minded foundations, nonprofit organisations and social justice activists, UN agencies, networks, and corporates that have deep commitments to equity and rights around the world. And there are many other exciting emerging initiatives to bring more equitable insights into the AI space being undertaken by researchers and data scientists around the world. These include the A+ Alliance (Alliance for Inclusive Algorithms), AI for neurodiversity, and the Data and Feminism Lab at MIT. These will be important places to watch and fund.
Examining our relationship with data and aligning it with social justice principles will be vital as we move forward. Taking this approach could be transformative not only for the development sector, which represents a group of largely unheard voices, but also for the future of AI and how it affects all aspects of investment and funding.
Alexandra Pittman, PhD, is the Founder and CEO of ImpactMapper, a software tool that helps corporations, donors, and nonprofits track, visualise, and optimise the real-world effects of their social impact activities.