All Party Parliamentary Group | Designing fair and robust AI-based assessment systems
EVIDENCE SESSION (online)
Designing fair and robust AI-based assessment systems
Monday 19th October 2020
Ross Edwards | IORMA Technology Director and IORMA iLab Director
This APPG AI evidence session focused on the algorithm used to determine the grades of students taking A levels this year, and on what can be learnt from it.
The first speaker was Simon Buckingham Shum, Professor of Learning Informatics at the University of Technology Sydney (UTS).
Simon spoke about adaptive AI tutors that coach students at their own pace until they master core skills and knowledge. He acknowledged that this works better for certain topics, such as STEM subjects, than for others, with teacher training and teachers' familiarity with the technology being key factors.
He also spoke of its benefits, such as timely support for students in large classes: AI tutors can continually analyse and assess students over time, compared with traditional exams that test students for a few hours under artificial conditions.
Teachers and students must be equipped to question and overrule an AI diagnosis.
Simon also spoke about students perceiving chatbots as less judgemental than a human when receiving feedback; students from minority groups in particular have shown a preference for this type of interaction.
Human contact, and human perception of how students are coping, will still be needed and remains an essential part of the learning process.
He finished his talk with a question: what sort of education do we want?
Victoria Sinel from Teens in AI
Victoria spoke about her experience of taking A levels this year and the way they were graded. The algorithm lowered her grade below the ones she received in her mocks, downgrading her because of her college's record over the previous three years.
Cori Crider, Co-Founder, Foxglove
Cori, a lawyer involved in a number of cases around the A level algorithm, pointed out that the main aim of the algorithm and the strategy around it was to 'limit a one-time spike' in the yearly figures. She said it was both a policy and a digital failure, and was also, in her view, unlawful.
Cori said it did not produce fair grades: the algorithm clearly favoured private schools, one example being the marks given in subjects that are not widely taught, such as Latin. Cori called for a more transparent system to be put in place.
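The mechanism the speakers describe — individual grades pulled down to fit a school's historical record, while small classes such as Latin kept their teacher-assessed grades — can be illustrated with a toy sketch. This is a hypothetical simplification, not the actual Ofqual model; the function name, student names, and cohort threshold are all invented for illustration.

```python
SMALL_COHORT = 5  # illustrative cut-off below which teacher grades stand

def moderate(teacher_ranking, teacher_grades, historical_profile):
    """Return final grades per student (toy model).

    teacher_ranking    -- students ordered best-first by the teacher
    teacher_grades     -- teacher-assessed grade per student
    historical_profile -- grades this school achieved in prior years,
                          best-first, one per student in the cohort
    """
    if len(teacher_ranking) <= SMALL_COHORT:
        # Small classes (e.g. Latin) keep teacher-assessed grades.
        return dict(teacher_grades)
    # Larger cohorts inherit the school's past grade distribution, so a
    # strong student at a historically weaker school is pulled down.
    return {student: historical_profile[i]
            for i, student in enumerate(teacher_ranking)}

# A historically weaker college whose past results top out at a B:
ranking = ["F1", "F2", "F3", "F4", "F5", "F6"]
teacher = {s: "A" for s in ranking}          # teacher predicts As for all
history = ["B", "B", "C", "C", "D", "E"]     # school's prior grade profile

final = moderate(ranking, teacher, history)
print(final)  # F1 drops from A to B despite being ranked top
```

The sketch also shows why small cohorts favoured private schools in the speakers' account: a class of five or fewer bypasses moderation entirely and keeps its (typically more generous) teacher-assessed grades.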
Priya Lakhani, OBE
Priya asked the question 'What is assessment for, and what do we count as success?' She spoke very eloquently on the subject of digitisation, saying that too often it is about simply placing the current way of doing something directly online; digitisation should instead be used to re-evaluate the process and make full use of the benefits that technology offers.
Priya also referred to an earlier APPG report suggesting that both hard and soft skills need to be developed, and that care is needed in how things are implemented. She then quoted: 'If any measure becomes a target, it ceases to be a good measure.'
She then spoke about combining learning and assessment so that the two run hand in hand, adding that teacher buy-in is essential and that teachers must be given the time and training to make the most of the new technology.
One of the uses put forward for AI was to identify deficiencies in marking and to find areas of weakness a student may have in a subject during term time, so that corrective steps can be taken, rather than these weaknesses only coming to light in an exam at the end of the year. Priya mentioned a group looking into this area: rethinkingassessment.com.
She also mentioned that it was important to remember that 1 in 10 families do not have a computer or laptop at home.
Laurence Moroney, Lead AI Advocate, Google
Laurence spoke about there being, at present, a negative public view regarding the use of AI in education. He asked the question: how do we educate people about AI and create experts in the field?
He mentioned the 'trough of disillusionment' that follows high expectations of what AI is capable of when they are not matched by a realistic view of what it can actually do. As an example, in Indonesia Google is working to help and fund people to understand AI. He also predicted the possibility of a massive growth in prosperity similar to the tech boom of a few years ago, and said it could be even bigger.
Laurence spoke of some issues in universities, such as making sure what is being taught is current enough; one problem being that if a university wants to bring in a new technology class, it has to retire another, which brings its own issues.
He also quoted a developer saying 'Nothing works until version 3', and said that this has to be avoided at all costs in a sector like education.
Prof Janice Gobert, Professor of Learning Science & Educational Psychology, Rutgers University
Prof Gobert's work looks at ways that teachers and algorithms can work together using AI assessment tools, drawing on two pieces of software made by her company: Inq-ITS, for students, and Inq-Blotter, a teacher dashboard. Students are not assessed on exams or multiple-choice questions; instead they carry out virtual science experiments which are then written up. She commented that because students use science simulations to do science, the tasks are authentic and closer to real-life experiences.
She mentioned that conventional assessment based only on writing can give false negatives (students who can do science but are not good at communicating it) and false positives (students who can copy and memorise but do not grasp the basics); some data shows that 30 to 60% of results under current methods can be mis-assessed. Her system also allows teachers to monitor larger classes more effectively, and when it is unsure of a student's competency in a particular skill it flags this to the teacher.
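The false-negative/false-positive distinction Prof Gobert describes can be made concrete with a minimal sketch. The student labels and data below are entirely made up for illustration; the point is only how the two error types are defined.

```python
# Each tuple: (description, truly competent at science?, passes written test?)
students = [
    ("lab-strong, writer-weak", True,  False),  # false negative
    ("memoriser",               False, True),   # false positive
    ("all-rounder",             True,  True),   # correctly assessed
    ("struggling",              False, False),  # correctly assessed
]

# False negative: can do science, but the writing-based test says otherwise.
false_negatives = [name for name, able, passed in students
                   if able and not passed]

# False positive: passes by copying/memorising without grasping the basics.
false_positives = [name for name, able, passed in students
                   if not able and passed]

print("Mis-assessed as not competent:", false_negatives)
print("Mis-assessed as competent:", false_positives)
```

In this toy example, half of the four students are mis-assessed, which is the kind of error rate (30–60% in the data she cites) that performance-based assessment through simulations aims to reduce.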