I have been a mixture of excited, fascinated, and terrified by the sudden explosion of generative AI tools. We are all currently finding our feet and dipping our toes in a new world. It is exciting and innovative, but as with all innovation, it brings risk as well as opportunity. Initially nervous, but now intrigued, I have been having a play around with a range of platforms to see what happens. As a result, I thought I would share some of my initial thoughts around the use of generative AI tools in the context of end-point assessment, and ways EpAOs can harness the potential as well as manage the risk.
1: Don’t be scared
I have spoken to many people who have not ventured into generative AI at all, either through nerves or simply not knowing where to start. I look at it as an advanced internet search: you are used to typing questions or phrases into a search engine and it producing a list of websites to look at; generative AI goes one step further in that it doesn’t just link you to websites, it starts to provide you with answers. However, the most critical point is that you should never assume these answers are correct, just as you would never assume that every website an internet search pulls up is relevant to your search, or safe (I will talk more about risk later). If you are not sure where to start, or need that extra little nudge to give it a go, Erica Farmer from Quantum Rise has been running a set of fabulous podcasts, “AI for the average Joe”, which are chatty, user friendly, insightful and, most importantly, make you want to take a look.
2: AI is not just ChatGPT
There are several generative AI platforms out there (free and paid-for). ChatGPT seems to get the most press, but there are many others, such as Claude, Bard, Midjourney, NotionAI, Canva and Elicit. Some are based on live data; others are based on what was on the internet at a fixed point in time, an important factor to remember if you want access to the latest information. My advice would be to experiment with several platforms, perhaps using the same question or search on each so that you can compare the results (always experiment in a secure manner, as explained later).
3: Using AI to support the development and review of policies and procedures
Generative AI can support research when developing or reviewing organisational policies and procedures. For example, it may help you spot things you have overlooked, or identify different ways of doing things. This concept is not new: I have seen many examples of EpAOs using another EpAO’s policy to help them develop or review their own, and of organisations that provide EpAOs with a template which is then tailored to the EpAO’s business. Generative AI is simply another route for supporting the development and/or review of policies and procedures. The key is that, where it is used, it should support and inform, and not be treated as the final product.
4: Using AI to support assessment design
As an EpAO you will be developing and maintaining question banks and/or assessment tasks for a range of assessment methods, but do you have a policy in place for the use of AI across design and development? This is a critical area; as much as AI is an incredible tool for supporting the research that informs design and development, it does not guarantee accuracy, nor does it guarantee that what it comes up with will be free of bias (remember, generative AI draws on unfiltered / unaudited internet content).
Why not experiment for yourself: type in a KSB from one of the standards and ask it to design a question or assessment task, then ask it to tailor what it produces based on length, level and so on.
Once you experiment you will start to see how critical it is to have an organisational policy / strategy for its use, comprehensive review and scrutiny processes to ensure the ongoing accuracy of what is being designed, and clear security arrangements. Don’t forget, if a designer / developer can use AI to support the design of a question, a learner can equally use AI to try to predict what questions may come up in an assessment, or to produce stock answers for likely question areas.
5: Using AI to support delivery
Technology supporting assessment has been developing for a while, but it has advanced considerably over the past year. For example, remote proctoring has been around for some time, but AI brings the potential for virtual proctoring, using technology rather than a person to monitor a learner. If you are considering exploring such approaches, have you established the ethics, principles and security behind them to ensure integrity?
AI could be used to assess the answers given to long and short answer questions, using techniques such as Natural Language Processing (NLP); it will be able to process information quickly and consistently, but will it cope with things such as spelling errors, industrial language and geographical colloquialisms / slang, and will it be able to differentiate between views, opinions and facts? I have often heard of its use in job-applicant screening, but not so much in the context of qualification assessment.
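To make the limitation concrete, here is a deliberately simple sketch of automated answer scoring using surface-level text similarity (Python's standard `difflib`). The function name and example answers are hypothetical; a real NLP marking engine would use far richer semantic models. The point of the toy is that surface matching tolerates a spelling slip but cannot recognise a correct answer expressed in different words:

```python
from difflib import SequenceMatcher

def score_short_answer(learner_answer: str, model_answer: str) -> float:
    """Toy similarity score between 0 and 1, tolerant of minor spelling slips.

    This only compares surface wording, which is exactly why real marking
    needs semantic NLP: paraphrases, slang and colloquialisms score poorly
    here even when the meaning is right.
    """
    a = learner_answer.lower().strip()
    b = model_answer.lower().strip()
    return SequenceMatcher(None, a, b).ratio()

model = "Safety checks must be recorded"

# A minor misspelling still scores highly...
print(score_short_answer("Safty checks must be recorded", model))
# ...but a semantically equivalent paraphrase scores much lower.
print(score_short_answer("You have to log every safety inspection", model))
```

Running this shows a near-perfect score for the misspelt answer and a markedly lower one for the valid paraphrase, illustrating why human scrutiny of any AI-assisted marking remains essential.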
Generative AI could be used to support reasonable adjustments, but it would need clear rules around how it can be used and what can be produced. For example, you would not want a generative AI tool amending content to suit a reasonable adjustment in a way that undermines the integrity of the assessment.
6: Managing the risk of AI in terms of learner malpractice
Generative AI, whilst providing a great platform for research and development, also brings a risk of plagiarism and/or malpractice. For example, how would you identify if a learner’s project report was their own or generated via AI?
Cheating and plagiarism have always been a risk, so EpAOs will already have tools in place, such as invigilators and plagiarism checkers. The key will be to check and review these procedures and tools to make sure they can cope with increasingly advanced technology. For example, can web browsers be locked down during an assessment so that the learner cannot access any other webpages, and do your plagiarism checkers take account of generative AI content?
Your quality assurance mechanisms may also be able to detect potential AI-related malpractice. For example, you will already be looking at data trends and answer patterns; if a large number of answers have the same or very similar content, could this indicate that the answers have been AI generated?
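The pattern-checking idea above can be sketched very simply. This is a hypothetical illustration, not a malpractice detector: it flags pairs of answers whose wording is suspiciously similar so a human can review them (high similarity across a cohort does not prove AI use). The learner IDs, answers and threshold are all invented for the example:

```python
from difflib import SequenceMatcher
from itertools import combinations

def flag_similar_answers(answers: dict[str, str],
                         threshold: float = 0.85) -> list[tuple[str, str, float]]:
    """Flag pairs of learner answers with suspiciously similar wording.

    Returns (learner_id_a, learner_id_b, similarity) for every pair whose
    similarity meets the threshold. A flag is only a trigger for human
    review, never evidence of malpractice in itself.
    """
    flagged = []
    for (id_a, text_a), (id_b, text_b) in combinations(answers.items(), 2):
        ratio = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()
        if ratio >= threshold:
            flagged.append((id_a, id_b, round(ratio, 2)))
    return flagged

cohort = {
    "L001": "The gateway review confirms all knowledge, skills and behaviours are met.",
    "L002": "The gateway review confirms that all knowledge, skills and behaviours are met.",
    "L003": "I check my work against the standard before gateway.",
}
print(flag_similar_answers(cohort))  # flags the near-identical L001 / L002 pair
```

In practice an EpAO's quality assurance data will be far larger and messier than this, but the principle is the same: unusual clustering of near-identical answers is a signal worth investigating.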
7: Security and confidentiality
One of the most important areas informing how you use generative AI is security and confidentiality. It is critical that this underpins all assessment design / development and delivery. Never paste any confidential assessment information, or business information subject to intellectual property rights, into a generative AI platform, as there is no guarantee of security. As a business you should agree which AI platforms your employees are permitted to use and how they can be used, and I would back this up with training on how to use them.
I have developed this article based on my EPA and EpAO experience, conversations with EpAOs, and experimentation on a range of generative AI platforms. Although only a snapshot of thoughts, my main piece of advice for EpAOs is to take a whole-organisation approach to its use, and to establish a set of rules, ethics and principles before it is used within the business. From there you can adjust to, and embrace, the exciting opportunities that AI, in all its guises, brings. And on a final note: just as the AI platforms are still learning, I am also learning that the better the prompt and/or context you give the AI platform, the better the response.
By Jacqui Molkenthin