RIAA launches investor guide on AI and human rights
Potential harm to people, reputational and operational risk to companies, and regulatory and financial risk: all reasons why investors should care about AI and human rights.
Wednesday, May 1st 2024, 6:48AM
by Andrea Malcolm
The Responsible Investment Association of Australasia (RIAA) has launched a toolkit to help investors address these rapidly evolving issues.
AI has immense potential benefits but carries risk when it is inadequately designed, inappropriately used or maliciously deployed, says RIAA co-CEO Estelle Parker.
Factors including the magnitude of data processed by AI systems, the opacity of those systems, pressure to increase shareholder returns, and uneven access to technology can lead to breaches of human rights.
AI can increase system vulnerability to cyber-attacks, resulting in widespread privacy violations, and can facilitate targeted attacks on vulnerable groups, particularly children, as well as other human rights abuses. This includes racial or cultural bias that compounds systemic discrimination against individuals, particularly those from historically marginalised communities.
Companies failing to effectively manage the risks posed by AI can face significant reputational, legal, and financial impacts.
The RIAA toolkit outlines issues, provides case studies, covers methodologies for understanding risks, and details strategies for investor engagement. It also looks at internationally recognised human rights frameworks, as these may shape the future regulatory framework around AI globally.
Stewardship guide
RIAA research shows that stewardship is now the most used responsible investment approach and an important part of reducing the risks associated with AI.
“Once investors understand their exposure to adverse human rights impacts and flow-on risks, they are better-placed to prioritise engagement based on their portfolio’s most salient human rights issues,” says Parker.
The toolkit includes an AI-related human rights risk matrix, a generic AI-related human rights due diligence and stewardship guide for engaging with companies (including example questions for use in engagement), and a specific human rights engagement guide that focuses more closely on people’s rights.
A need for the toolkit was identified after attendees at RIAA’s 2023 conference raised a range of concerns related to digital technology.
Investor concerns about digital technology impacts on human rights were polled at RIAA’s 2023 conference in Australia. The top five concerns were privacy and data protection; political participation (disinformation, polarisation and barriers to democracy); online safety; discrimination; and conflict and security.
While the poll question was broader than AI, each issue can be linked to the use, and potential misuse, of AI.
As highlighted by Mint Asset Management, the rapid adoption of AI by New Zealand’s financial organisations is occurring without AI-specific laws. AI use is instead covered by the Privacy Act 2020, which places responsibility on agencies and organisations to protect personal information.
Last year the Privacy Commissioner offered supplementary guidance on how obligations can be met under the Privacy Act 2020 when using AI tools.