A jigsaw puzzle for AI ethics owners

Friday 29 January 2021


Author: Kate Lavrinenko, CFA

Professional Ethics and AI

It is widely acknowledged that high standards of professional conduct in the investment industry allow investors to trust in fair and transparent capital markets, where participants are adequately rewarded for the risks they take. As capital moves across borders, a shared ethics for the global investment industry supports market integrity beyond local customs and culture. Laws, regulations, and enforcement should not be underestimated, but it is individual behaviours that make the market.

Recent technological advances allow organisations to benefit from AI solutions at scale. In finance, AI is used in many applications, such as asset allocation, trading, surveillance, client onboarding and KYC checks, compliance, risk management, and research. Confidence in these technical solutions rests on three main pillars of trustworthiness:

  • AI solutions should be robust and solve the problems posed to them;
  • They should be compliant with applicable laws and regulations; and
  • They should be applied ethically.

In the investment industry, accounting fraud, market manipulation, insider-trading scandals, and other misdeeds erode trust and create the need for a behavioural standard. Similar processes are unfolding in the AI Ethics space, where algorithms can cause physical, material and non-material harm to different AI stakeholders. These include the business owners of a model, employees, customers, and other groups of people directly or indirectly affected by the solution. For example, customers may be subject to unfair discrimination and suffer material loss because a model predicts poorly for under-represented groups of the population.
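As a minimal illustration of how this kind of harm can be surfaced in practice, the sketch below compares a model's error rate across population segments; the dataset, column names and tolerance are hypothetical and not drawn from the article.

```python
import pandas as pd

# Hypothetical scored customer data: "group" marks a population segment,
# "actual" the true outcome and "predicted" the model's decision.
# All column names and values are illustrative.
scores = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "B"],
    "actual":    [1, 0, 1, 1, 1, 0, 1, 0],
    "predicted": [1, 0, 0, 1, 1, 0, 1, 0],
})

# Error rate per group: a large gap suggests the model serves one
# segment worse and may expose it to unfair outcomes.
error_by_group = (
    scores.assign(error=scores["actual"] != scores["predicted"])
          .groupby("group")["error"]
          .mean()
)
print(error_by_group)

# Flag the disparity if the gap exceeds an illustrative tolerance.
TOLERANCE = 0.10
gap = error_by_group.max() - error_by_group.min()
if gap > TOLERANCE:
    print(f"Review needed: error-rate gap of {gap:.0%} between groups")
```

In practice, an organisation would choose fairness metrics and tolerances appropriate to the decision being made rather than the illustrative values used here.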

It is tempting to borrow existing approaches to managing ethical issues from other areas and apply them within an organisation’s risk management and governance frameworks in order to innovate ethically and confidently. However, it can be particularly complex to navigate the associated conflicts of interest and tensions, and to generalise an AI ethical framework so that it works across business functions, contexts and geographies.


Building AI ethical risk frameworks

To develop a general risk framework in a top-down approach, organisations usually start by defining the scope of the problem and how it can be categorised into different types of risk. Any reasonable objective or expectation of how a system should work creates a corresponding risk of not meeting that expectation. For example, if the assumption is that an AI solution does not harm society as a whole, the corresponding risk is that it fails to meet this objective. This dual approach, together with learning from industry best practice, allows us to develop risk frameworks for a given context.
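A minimal sketch of this dual approach, assuming a purely illustrative set of objectives: each stated expectation of the system is paired with the corresponding risk of not meeting it, forming the seed of a risk register.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    objective: str  # what we expect of the AI solution
    risk: str       # the corresponding risk of not meeting it

# Illustrative objectives only; a real framework would derive them from
# principles, regulation and industry best practice.
objectives = [
    "The solution is robust and solves the business problem posed to it",
    "The solution complies with applicable laws and regulations",
    "The solution does not harm society as a whole",
]

# The dual approach: every objective implies the risk of failing to meet it.
risk_register = [
    RiskEntry(objective=obj, risk=f"Failure to meet the expectation that: {obj.lower()}")
    for obj in objectives
]

for entry in risk_register:
    print("-", entry.risk)
```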

Ethical frameworks can be inspired by values-based principles, human rights frameworks, privacy principles, corporate statements of principles, supervisory guidelines and regulations, and so on. Once an AI ethical framework is formulated, organisations can capture each dimension of an AI solution and manage the associated risks.

However, with ethics, things can get particularly complicated because it may be hard to generalise beyond the original context. For instance, the geography of AI use can create a point of divergence in an ethical framework with respect to personal data processing. Attitudes towards personal data differ between China and the West, and so do views on the ethical use of that data in AI solutions. In practice, this means that a large company operating across regions and products may need more than one framework, or a framework formulated at a rather high level.


Operationalising Ethics

Most large organisations have functions such as data protection, compliance, strategy, corporate ethics, data science and research, and risk management, and any of these can serve as a starting point for operationalising AI Ethics. But, as with the geographical disparities above, it can be challenging to generalise Ethics across adjacent business areas and to embed and synchronise it throughout the firm’s practices.

For example, Data Ethics builds on personal data protection and naturally extends into automated data processing and decision-making. AI Ethics and Data Ethics overlap and complement each other, but their focus is different: Data Ethics protects the rights of data subjects whose data is being processed (for example, an organisation’s customers or employees), while AI Ethics is concerned with the broader societal benefits of using automated systems, whether or not they process personal data.

In a recent report on possible models for ethical responsibility in large technology companies, Emanuel Moss and Jacob Metcalf draw on interviews and observations to define ‘Ethics Owners’, the people responsible for addressing ethics-related issues. The authors note that firms can start operationalising Ethics in different parts of the organisation, and that the voice of Ethics Owners should be heard on the most critical and uncomfortable issues through effective governance processes and supporting infrastructure.

While we have been discussing generalisation problems across geographies and business areas, the authors of the report extend this thinking. They argue that the inability to identify all the impacts and all groups of direct and indirect stakeholders, and to evaluate organisational rather than individual behaviours, means there is no mechanism for effectively owning Ethics today. Conflicts of interest and values are inevitable, and no single solution exists to unambiguously quantify or resolve the tensions, especially given the scale at which technology can operate. For example, the personal ethical views of AI Ethics representatives may conflict with their corporate duties and end up in the news.

As long as many of these tensions have no clear-cut resolution, Ethics Owners should navigate, rather than resolve, the conflicts that arise within an organisation or with external parties. Promising directions for joint work among ethics representatives include developing public use cases, openly discussing AI ethical failures and successes, promoting ethical business, and supporting colleagues in broader society.

One ethically sensitive use case of AI in retail finance is detecting potentially vulnerable customers: individuals who face a higher risk of financial detriment if financial organisations do not treat them with special care. People can find themselves in vulnerable circumstances due to health issues, life events, low financial resilience, or low financial capability (that is, limited knowledge of and confidence in managing their finances). Organisations are encouraged to use AI to detect vulnerability in their customer base to ensure good customer outcomes, but this can create risks for the people involved: misuse of information about vulnerability can lead to financial exclusion, mis-selling or fraud. As an example of an ethical public use case, the Financial Conduct Authority has held public consultations with industry, research institutions, charities and the public in order to develop guidelines for the treatment of vulnerable customers in the UK.
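As a purely illustrative sketch (the indicators, field names and thresholds are invented for this example and are not the FCA’s criteria), a simple rule-based flag might combine several vulnerability drivers; in practice such a flag would need strict access controls to prevent the misuse described above.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    customer_id: str
    recent_life_event: bool      # e.g. bereavement or job loss (illustrative)
    reported_health_issue: bool  # self-reported or advised health condition
    months_of_savings: float     # crude proxy for financial resilience
    financial_literacy: int      # 1 (low) to 5 (high), e.g. from a survey

def is_potentially_vulnerable(c: Customer) -> bool:
    """Simple rule-based flag; a real system would be more nuanced,
    reviewed with domain experts and tested for fairness."""
    drivers = [
        c.recent_life_event,
        c.reported_health_issue,
        c.months_of_savings < 3,    # low resilience (illustrative cut-off)
        c.financial_literacy <= 2,  # low capability (illustrative cut-off)
    ]
    return sum(drivers) >= 2        # flag when two or more drivers are present

customers = [
    Customer("C001", True, False, 1.5, 2),
    Customer("C002", False, False, 8.0, 4),
]
flagged = [c.customer_id for c in customers if is_potentially_vulnerable(c)]
print(flagged)  # ['C001']; the flag should only feed better-care processes
```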


Conclusion

To innovate with confidence, organisations can build AI ethical frameworks for different contexts of technology use and start embedding them in different parts of the organisation. On this journey, it may be hard to generalise ethical requirements across contexts, geographies and business areas because of the tensions that inevitably arise. The scale and variety of modern technology applications make resolving these conflicts even more complicated, and existing tools for ethical oversight can prove ineffective when balancing interests within a large organisation or dealing with the scale of a product with millions of users.

This makes it crucial to address AI ethical risks openly and to improve the capability to recognise ethical issues. The latter can be achieved through new forms of ethics-related collaboration, the inclusion of diverse perspectives on ethical risks, and a commitment to case studies that provide deep dives into particular contexts of technology use.


Kate Lavrinenko, CFA, is a specialist in AI Risk, working on the financial services side of Deloitte Risk Advisory. She ensures that models developed outside of immediate regulatory oversight work as expected and solve the business problems posed to them. Kate is passionate about AI Ethics and data protection, while on the other end of the spectrum she designs Data Science solutions and supervises postgraduate projects in Operational Research.

