All Party Parliamentary Group APPG – AI – Corporate Decision Making and Investment

Ross Edwards – IORMA Technology Director

Corporate Decision-making and Investment: best-practice guidelines for AI adoption is a Parliamentary Brief based on the All-Party Parliamentary Group on Artificial Intelligence’s (APPG AI) online Evidence Meeting on 11th May 2020.
Stephen Metcalfe MP and Lord Clement-Jones CBE chaired the Evidence Meeting.
It was organised in collaboration with the Corporate Finance Faculty of ICAEW (the Institute of Chartered Accountants in England and Wales).

APPG AI Corporate Decision Making

David Petrie, Head of Corporate Finance, ICAEW

David spoke about how AI can be used to smooth the process of helping companies receive financial support through the Covid-19 crisis, and how they will recover from it.

Boosting public and private funding is vital to this recovery, and care must be taken in choosing the AI used to support it. He also spoke of the use of AI in reviewing contracts and legal documents, and in financial analyst modelling.

He spoke of how KPMG has worked with IBM on identifying key risks using AI, and how PwC has analysed thousands of documents to see whether companies are aligned with their responsibilities on climate change.

David mentioned that there should be a recommended level of disclosure of corporate requirements and an explanation of how AI is deployed, and that there should be clarity over the respective responsibilities of executive and non-executive directors.

Jan Chan, Associate Partner and UK & Ireland Transaction Advisory Services Chief Innovation Officer, Ernst & Young (EY)

Jan spoke about how we should be clear on what AI can and cannot do: for example, it can identify good targets for private and public investment and simulate how investments might turn out.
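As a hedged illustration of the "simulate how investments turn out" point, the sketch below runs a simple Monte Carlo simulation of an investment's value. The initial amount, return and volatility figures are assumptions for illustration only, not a description of EY's actual tooling.

```python
import random

def simulate_investment(initial=1_000_000, years=5, mean_return=0.07,
                        volatility=0.15, runs=10_000):
    """Monte Carlo sketch: compound a randomly drawn annual return for
    each year, repeated many times to build a distribution of outcomes."""
    outcomes = []
    for _ in range(runs):
        value = initial
        for _ in range(years):
            value *= 1 + random.gauss(mean_return, volatility)
        outcomes.append(value)
    outcomes.sort()
    return {
        "median": outcomes[runs // 2],
        "5th percentile": outcomes[int(runs * 0.05)],
        "95th percentile": outcomes[int(runs * 0.95)],
    }

print(simulate_investment())
```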

He made the point that we should make the most of the assets we have in this country: the UK has three of the top ten universities in the world, and 45% of their students are international. He asked how we can encourage them to stay and drive our post-Covid economy.

He also pointed out the need to build trust and to understand how machine-learning algorithms work, and stressed how important it is to appreciate the time and effort that goes into preparing and cleaning data for AI, which should be subject to peer review.

On the subject of how AI can help the workforce return in an appropriate manner after the Covid-19 lockdown, he put forward two ideas:

1.   Retraining the existing workforce in the new data skills needed.

2.  Encouraging best practice along the lines of the Macpherson report.

Jan mentioned that they find and check for bias in data sets through reverse engineering, but that the trouble is, much of the time, the quality of the data being used is very poor.
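A minimal sketch of one simple way bias in a data set might be surfaced: comparing positive-outcome rates across groups. The field names and data are hypothetical, and this is a basic disparity check rather than the reverse-engineering approach Jan described.

```python
from collections import defaultdict

def selection_rates(records, group_key="gender", outcome_key="approved"):
    """Report the rate of positive outcomes per group so that large
    gaps between groups can be flagged for human review."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in records:
        totals[row[group_key]] += 1
        positives[row[group_key]] += int(bool(row[outcome_key]))
    return {group: positives[group] / totals[group] for group in totals}

sample = [
    {"gender": "F", "approved": 1}, {"gender": "F", "approved": 0},
    {"gender": "M", "approved": 1}, {"gender": "M", "approved": 1},
]
print(selection_rates(sample))  # {'F': 0.5, 'M': 1.0}
```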

Charles Radclyffe, Head of AI, Fidelity International

Charles spoke about the importance of looking at the ethics of technology.

He suggested that part of the reason the public's relationship with technology has broken down is that the level of governance has not been appropriate to the task.

He compared the issues facing the implementation of AI to those faced by the environmental sustainability movement, which is very much in vogue with the investment community at the moment.

Charles also traced the rise of Facebook, from the praise for social media's role in the Arab Spring to its partial downfall with the Cambridge Analytica scandal.

Charles made the point that technology is not the problem; its ethical application is.

In terms of governance, he noted that GDPR is only four years old and is just the start of a legislative programme the European Commission will be implementing. He also called for an ethics board and a feedback loop for stakeholders; this would help companies handle risk and show what best practice should look like.

Dr Zoë Webster, Director – AI and Data Economy, Innovate UK

Dr Zoë Webster noted that around 80% of an AI project is engineering, i.e. cleaning and preparing the data.
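A hedged sketch of the kind of data preparation that typically absorbs that engineering effort, using pandas; the file and column names are hypothetical and the steps are only representative.

```python
import pandas as pd

# Hypothetical raw file and column names, for illustration only.
df = pd.read_csv("transactions_raw.csv")

# Representative preparation steps that dominate project effort:
df = df.drop_duplicates()                                     # remove duplicate rows
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")   # coerce mixed types to numbers
df["date"] = pd.to_datetime(df["date"], errors="coerce")      # normalise dates
df = df.dropna(subset=["amount", "date"])                     # drop unusable records
df["region"] = df["region"].str.strip().str.title()           # tidy category labels

df.to_csv("transactions_clean.csv", index=False)
```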

She also mentioned that AI could be used to develop more personalised financial products.

Zoë said that AI should reflect human and societal values, be reliable and robust, and that deployment should be both effective and available to everyone.

Four key areas that Zoë thought important were:

1. Data  –  obtaining the right data sets

2. Skills  

3. Innovation

4. Balance between the speed of deployment and ethics.

Dr Christine Chow, Head of Asia and Global Emerging Markets, Hermes Investment Management

Dr Christine Chow spoke of how AI can improve due diligence, and noted that investment into UK sectors including fintech, clean energy and AI soared to £10 billion in 2019, securing a third of the £30 billion raised in Europe. Christine mentioned three areas where AI's potential can make a difference:

1.   Helps to recognise key opportunities

2.   Improves due diligence in science

3.   Strengthens scenario analysis

She spoke about interactive data helping analysts to see past the noise, noted that IBM has been publishing ethical AI and user reports since 2014, and that HSBC published ethical AI principles in 2020.

On the subject of managing AI-related risks, she asked: what does 90% accurate actually mean?
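A small, hypothetical illustration of why that question matters: on an imbalanced data set where only 10% of cases are of real interest, a model that never flags anything still reports 90% accuracy.

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical data set: 90 routine cases (0) and 10 cases of interest (1).
labels = [0] * 90 + [1] * 10

# A "model" that never flags anything is 90% accurate
# yet misses every case that actually matters.
do_nothing = [0] * 100
print(accuracy(do_nothing, labels))  # 0.9
```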

Naomi Climer CBE, Co-Chair, Institute for the Future of Work

Naomi looked at data-driven technology and AI, and at how technology affects work.

Naomi mentioned that the Institute for the Future of Work had released a paper on AI in recruiting and equality. She stressed the need to be very clear on what AI is trying to achieve: is the aim to make money quickly, or to create growth in a certain area? She also emphasised the importance of consulting all stakeholders.

Naomi spoke about not losing sight of the importance of equality, fairness, accountability, transparency and data protection, but cautioned that, while implementing these, care must be taken to ensure the data is not accidentally biased.

Naomi mentioned she was surprised by companies' failure to act when AI used in an audit was found to be biased, and she stressed the importance of having a human in the loop.

Lord Clement-Jones asked: "Are we turning engineers into philosophers so that they create ethics by design?"

Sanu de Lima, Deputy Director, Corporate Governance, Responsibility & Diversity Business Frameworks Directorate, Department for Business, Energy & Industrial Strategy

Sanu talked about the need to maintain trust in corporate governance and corporate decision making. One use he sees for AI is in market supervision, which is very time-sensitive, such as flexibilities around pre-emption rights when raising equity. He also mentioned that AI would excel as a process-management tool around shareholder intermediation.

Another area of use would be in monitoring board effectiveness, applying AI to the process of decision making and to the diversity of the board. Practical uses of AI include mitigating risk and the benefits of analysis, but he also noted the need for risk taking and innovation, and for governance around the technology itself.

