If you’ve had the chance to look through some of the new apprenticeship assessment plans, you’ll have seen some assessment methods that are new, and others that are familiar. Even where they are familiar, though, the high-stakes nature of end-point assessment means the way you conduct the assessment, or prepare your apprentices for it, will be different.
Over the last few weeks, we’ve published a series of articles looking at different assessment methods specified in the new assessment plans, and started to unpick what they look like in practice in the context of high-stakes end-point assessment.
(If you’ve missed our earlier posts, here’s our introduction on how end-point assessment is shaping up, and insights into the professional discussion, presentation/showcase and practical assessments. The Institute for Apprenticeships is also starting to release early guidance on assessment methods too.)
So, this week, let’s tackle the Multiple Choice Test. Welcome to Article #4 ...
56% of all assessment plans include written or online knowledge tests, of which half are multiple choice questionnaires. If you’ve been a trainer or assessor in the sector for a while, multiple choice tests will be nothing new. But let’s quickly cover off the basics…
Multiple choice tests comprise a set of questions where the candidate needs to select the correct response from a range of presented answers. The number and complexity of possible answers is used to increase the degree of difficulty for the assessment.
Multiple choice tests form only part of the end-point assessment, and sit within a suite of assessments to measure the competence of the apprentice overall. Multiple choice tests are used to measure breadth and depth of knowledge. They can also be useful in validating (but not directly assessing) practical skills, and can help to inform judgements on behaviour.
The validity, reliability and robustness of the test depends on the quality of questionnaire design. High quality tests have a range of plausible responses in a narrow range of possible answers, and require the apprentice to think through and carefully distill their knowledge of the subject, to select the right answer. Tests will be significantly less valid where low quality or leading response options are given.
Advantages of this assessment method
Multiple choice tests generate numerical scores that can be mapped to a grading profile. They provide a rapid “check” score that can rank candidates across a peer group, or supply supplementary evidence of performance as part of a range of assessment methods.
For those conducting the on-programme training, multiple choice tests can be a helpful way to measure the apprentice’s progress and underpinning knowledge, as well as preparing them for the end-point assessment.
Once developed, these tests are easy and cost effective to administer, particularly when using technology platforms that allow you to automate grading and moderation.
Risks of this assessment method
With these forms of assessment, question banks can be time-consuming to develop, and will also need to be updated regularly. It is also difficult to construct a valid and reliable suite of questions: the range of responses needs to be plausible, and the correct responses should not be immediately obvious.
Multiple choice tests can be susceptible to random chance. Even in complex questionnaires, up to 15% of the final score can be attributable to chance, which should be accounted for in pass and grading thresholds.
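To see why chance matters when setting pass thresholds, the probability of an apprentice reaching a given mark purely by guessing follows a binomial distribution. Here is a minimal sketch; the figures (30 questions, 4 options, a 60% pass mark) are illustrative assumptions, not values from any assessment plan:

```python
from math import comb

def chance_pass_probability(n_questions, n_options, pass_mark):
    """Probability of scoring at least pass_mark purely by guessing,
    assuming each question has one correct answer out of n_options."""
    p = 1 / n_options  # chance of guessing any single question correctly
    return sum(
        comb(n_questions, k) * p**k * (1 - p)**(n_questions - k)
        for k in range(pass_mark, n_questions + 1)
    )

# Illustrative example: 30 questions, 4 options each, pass mark 18 (60%)
print(chance_pass_probability(30, 4, 18))
```

Sketches like this can help an EPAO check that a proposed pass mark keeps the probability of passing by guesswork alone acceptably low, and show why fewer response options per question (e.g. true/false) push that probability up sharply.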
Those who have conducted some of the first end-point assessments have reported that apprentices are often anxious, and sometimes unprepared, when they come to sit the test. End-point assessment is high-stakes, and multiple choice tests may be invigilated by an assessor the apprentice has never met, sometimes in unfamiliar environments. Training providers have an important role in preparing apprentices for the end-point assessment, and making sure they are familiar and comfortable with such tests. They may also have a role to play in helping to invigilate the end-point assessment test (particularly where it is conducted remotely by the EPAO).
What it means in practice…
- EPAOs will need to put in sufficient time and resource to design high quality and robust questionnaires. Where done well, this assessment can be highly sophisticated. Where done badly, it can be the weakest form of assessment and lead to inaccurate grading. Assessment specialists, like SDN or others, can work with you to help get this right. If you are a training provider, the EPAO may not provide practice question banks, so you may need to carefully design your own practice assessment questions too.
- This form of assessment is prone to statistical error and inherent chance, so it’s important that your grade thresholds and moderation processes reflect this.
- Multiple choice tests lend themselves to remote testing – looking at ways to automate and conduct tests online can help to make the assessment cost effective.
- Compared with other assessment methods, end-point assessors will have very little opportunity to put the apprentice at ease, so it’s important that the training provider prepares the apprentice well, ensuring they are comfortable and familiar with this form of assessment.
Tim Chewter, Head of Research and Communications, Strategic Development Network (SDN)
To get to grips with end-point assessment in more depth, places are now available on our Level 3 Award course in Undertaking End-Point Assessment. SDN are also producing a set of recorded presentations covering the main end-point assessment methods and critical areas of practice.
These will be available at the beginning of March. Find out more here: www.strategicdevelopmentnetwork.co.uk/sdnevent
Let us know if you have any comments at the bottom of this page.