Navigating the AI ‘Wild West’
While AI has extraordinary potential to transform English language education, it must be used sensibly and guided by robust ethical frameworks, writes Dr Evelina Galaczi, Director of Research, English, at Cambridge University Press & Assessment. Dr Galaczi says an ethical approach to AI is critical to avoiding an AI ‘Wild West’ where anything goes!
The rapid rise of AI has been likened to a modern-day gold rush. Nowhere is this more apparent than in education, where automated tools promise speed, efficiency, and scalability. The benefits of AI in English language education were the topic of a talk I recently gave at Cambridge University’s Clare College to a group of international professionals involved in delivering English language education. I asked the audience what they thought English assessment would look like in ten years, and two overriding themes emerged: convenience and personalisation.
Integrated Learning and Assessment
I then made a prediction about the future of assessment. I predicted that AI-powered tests will cover new skills and criteria for success and will help us deliver what we call ‘Integrated Learning and Assessment’ – an approach that assesses students at regular touchpoints throughout their journey to inform next steps and build a picture of learning. The rise of AI will make this easier because, in many contexts, it will allow learners to be assessed behind the scenes without having to learn-stop-test. Every time they use English (or any other language or subject they are studying), that use will generate information about what they can or can’t do in English. This will form part of their individual profile as learners, which can become the basis for assessments that are hidden and take place in the background. I also predicted a future that uses AI to test skills beyond language, such as the ability to use AI skilfully to write an essay.
But, perhaps most importantly, I didn’t describe a future with no human involvement.
A word of caution
Despite this exciting future, we must sound a note of caution: without an ethical framework in place, we run the risk of AI losing credibility in English language learning and assessment or, worse, damaging assessment by causing the loss of important competencies.
So how should we approach AI to ensure it’s delivered ethically?
What do we need to consider for ethical AI?
It’s critical to take a human-centred approach to AI in English language education, one which acknowledges the vital role of educators and assessment experts in both language learning and quality assessment. At classroom level, language learning must remain a human endeavour with a teacher in the driving seat. Whilst AI can enhance learning experiences, it cannot replace the uniquely human experience of acquiring and using a language. So it’s essential that AI in education supports and empowers learners without overshadowing the human touch.
This is also critical outside the classroom, such as when AI is used for high-stakes purposes like English tests for admissions or immigration. This is an area we must get right, as there is growing public concern about the use of AI in assessment: a YouGov poll from earlier this year found that 39% of people were concerned that AI-based tests might not assess relevant language skills, potentially disadvantaging those taking exams to work, live and study in the UK. At the heart of AI-based assessment there must be human involvement. This helps establish accountability on the part of test providers, and allows a person to step in where oversight, clarification or correction is needed for quality control. To monitor and achieve this, test providers must collect robust evidence showing that AI scores meet the same standards as highly skilled and experienced human examiners.
Fairness isn’t optional, it’s foundational
Fairness is another critical principle: AI-based language learning and assessment systems must be free from bias. To achieve this, AI systems must be trained on inclusive and diverse data, and a robust process must be in place to continuously monitor for bias.
Don’t forget consent!
It’s also essential that data is collected in an ethical way to ensure any systems developed are trustworthy. For example, all parties must give consent and be clearly informed about what data is collected, how it’s stored, and what it’s used for. This creates a big responsibility behind the scenes, and organisations developing AI systems for English language education must implement robust encryption, secure storage protocols, and safeguards against hacking.
Transparency and explainability are key
Learners need to know when and how AI is used to determine their results. To achieve this, AI systems must be developed and deployed transparently, with robust oversight and governance. Providers must be able to clearly articulate the role AI plays, as well as the frameworks in place to ensure test integrity and accuracy.
Sustainability is an ethical issue
Is a specific AI system necessary, or are there more ecologically friendly and sustainable options available? These are big questions that must be considered, especially given that AI isn’t just a digital tool – it’s a physical one, with real-world environmental costs. AI systems crunch vast amounts of data and have massive energy demands, which places a big responsibility on everyone, including language education providers. This must be kept in mind when choosing which type of AI to use in different contexts.
And finally
So how do we get the AI ‘Wild West’ under control? We must take a human-centred approach to AI, driven by ethical principles such as fairness, privacy, transparency and sustainability.
The ‘gold rush’ may already be underway, but it’s not about who gets there first; it’s about who builds something effective, safe and trustworthy. This is the golden thread running through our new paper from Cambridge, which sets out six key principles for the ethical use of AI – principles we urge others to follow. The future is ours to shape, but let’s make sure it’s one we can be truly proud of!
Dr Evelina Galaczi, Director of Research, English, at Cambridge University Press & Assessment
Evelina’s previous article was: Does AI Assessment Benefit English Language Learners?