On 14 March 2022, an APPG (All-Party Parliamentary Group) AI Evidence Meeting was held. The topics were AI, Policy & Regulation, and feedback on the National AI Strategy White Paper.
Led by APPG AI Chairs: Stephen Metcalfe MP and Lord Clement-Jones CBE
Ross Edwards, IORMA Technology and iLab Director
Louis Sztayer-Edwards, IORMA Researcher
First Panellist – Dame Wendy Hall – Regius Professor of Computer Science at the University of Southampton.
In this talk, Wendy Hall first addressed the main “pillars” of the AI strategy and what needs to be worked on for it to be incorporated successfully. These included: long-term needs, the ecosystem around AI, governing AI, and ensuring it benefits all aspects of society, with diversity and public trust at the centre. She then addressed career prospects involving AI and how the strategy aims to make the field more accessible. She mentioned Centres for Doctoral Training (CDTs) for both students and those in industry, numerous fellowships from the Alan Turing Institute (with more coming in later phases of the plan), and funding for many scholarships for AI-related conversion courses, 50% of which went to underrepresented groups with non-science backgrounds; £23 million was announced in February for 2,000 more scholarships. Her final point concerned greater integration of AI into primary and secondary schools to build understanding at junior levels.
Second Panellist – Sara Al-Hanfy – Head of AI and Machine Learning at Innovate UK
In this talk, Sara addressed how rapidly AI is growing around the world, what we are already doing, and what we still need to do to stay competitive. The main aim is to keep the UK a global AI superpower, and to do this we need to address our current weaknesses, such as the systemic issues that prevent AI from being adopted and the skills gaps that hold back supply and demand. While existing measures (fellowships, CDTs and conversion courses) help with these issues, none of them fix the short-term problems. Fixing those requires more communication and transdisciplinary networking to solve issues, and through this holistic approach a “culture” around AI can be created. There must also be more investment, but it must be made intelligently; using it to commercialise AI could help reap benefits.
Third Panellist – Michael Wooldridge – Professor of Computer Science at the University of Oxford
He believes that machine learning skills are key and need to be developed alongside high-level mathematical skills, and that it is preferable to have top grades in further mathematics rather than base mathematics. He stressed the importance of a skills pipeline, saying that “we will know we’ve succeeded when AI practitioners meet the market price of University professors”. He also stressed the need to maintain the UK’s status in the AI world, which can be done by emphasising long-term needs. A satisfactory form of regulation must also be put in place: there is scepticism around the EU’s approach, as it is too focused on how people are doing things with the technology rather than what it is being used for. He also believes that the Turing Institute should be the central institute for data science and AI in the UK, and that it can help with the “joining up” approach to implementing the AI strategy.
Fourth Panellist – Dr Scott Steedman – Director of Standards at the British Standards Institution (BSI)
In the fourth talk, Scott Steedman focused on AI’s place within society and how to raise awareness of it. He talked about standards being founded on the consensus of stakeholder viewpoints, and noted that BSI runs many communities with an open participation policy so that consumers can be represented. He also stated that governance is very important for AI, and that there should be a progressive, risk-based approach with minimal regulation, as that approach benefited innovation in nanotech. Increasing stakeholder awareness will be key to “bringing the public with you”, and this could be done by piloting an AI standards hub, which would help bring greater public awareness and understanding to the topic of AI. Regulation, he argued, should be risk-based, identifying the minimal level required to protect the consumer.
Fifth Panellist – Caroline Gorski – Group Director of R² Data Labs at Rolls-Royce
In the final panellist talk, Caroline Gorski talked about Rolls-Royce’s successes through R² Data Labs, for example the Aletheia Framework and Aletheia 2.0, a safety framework they developed, and how they have begun integrating skills across the company: in 2021 they delivered holistic digital training at every level, with more than 75,000 hours of skills development for more than 35,000 workers. She stressed, however, that there is still much more to do for the progression of AI to continue smoothly: funding through the Treasury, skills development from primary to tertiary settings in education, systems development through transport, export opportunities through trade, defence capability through the MOD, and energy security through BEIS and the Department for Levelling Up. She also said there should be more representatives from the industrial sector in policy making, as more questions need to be asked about safety in that area.
Is it possible that public funding becomes tight? – It’s difficult to tell, but the government seems committed; the research and development funding for Covid vaccines is one example. However, there is said to be a gap in commercial R&D funding, and this, combined with the skills gap, could waste a lot of AI’s potential.
What about the potential for AI to take over human jobs? – It was stated that rather than being displaced, jobs are being “augmented” instead, and we will likely see both AI and humans collaborating, so that the AI can cover human errors and vice versa; an example of this can be seen in radiology. R² Data Labs already involves union workers in the Aletheia Framework, and the Consumer and Public Interest Network already provides representation of consumers, so that there can be more discussion of these issues at a public level.
How much space and resources should we invest in upskilling and supporting existing regulators to refine and articulate their remit in the context of AI? – Regulatory structures are already being relied upon as AI is pushed into different areas of work, and there should be investment in them; at Rolls-Royce, for example, there were no AI-specific regulators in the aerospace market, though AI was covered by safety-critical regulation.
How can we ensure responsible use of AI across different sectors and industries? – There must be discussion over which toolkits are best for start-ups to use, and increased awareness of which ones are trustworthy; a collaboration between social scientists and businesses could help with this. There should be more consideration of data: making sure it isn’t controlled by big business and is spread fairly, so that SMEs get a fair chance and there aren’t power imbalances, while also treating data more like an asset and giving developers more confidence in the standards and assurance of the AI tools they build. There should also be an assurance system in place for the public, which could take the form of audits or a conformity assessment.
AI-for-everyone education – is it specific to Computer Science courses, or does it have a broader range? – There are two types of courses: the industry-funded ones, which are based more around machine learning and thus require mathematical skills, and the AI conversion courses, where any degree is welcome. While machine learning jobs are in heavy demand right now, this will eventually diminish, and as AI becomes more integrated, AI-focused jobs may become more vital.