“We aren’t allowed to assess apprentices on programme, because this only happens during end-point assessment.”
I can’t recall any conversations in recent years in my work with delivery teams where this hasn’t been raised as an issue.
So how have we come to this point?
A significant proportion of apprenticeship trainer-assessors have a background in assessing learners to achieve NVQ certification.
Consequently, the word ‘assessment’ has become a catchall for measuring learner competence against the requirements of units, which make up a qualification.
Is it any wonder then that there's anxiety when managers and delivery team members are informed that 'what is assessed or completed on-programme cannot be assessed for end-point assessment'?
So, to move beyond this misconception, let’s identify the various ways in which assessment is used to support apprentices.
Simply put, the purpose of assessment in our context can be summarised as:
- Formative assessment – informs teaching and learning
- Summative assessment – establishes what the learner knows and can do (attainment) at a point in time
- Standardised summative assessment – evaluates what a learner knows and can do in order to achieve a qualification/apprenticeship
So, returning to "We aren't allowed to assess apprentices on programme as this happens during end-point assessment…", the trailblazer guidance has changed at various points over the course of the reforms.
It states “As a general rule anything that is assessed at the end-point must have been completed after the apprentice has passed the gateway review. Therefore, neither a portfolio of work nor a showcase completed during the apprenticeship can be used as assessment methods by themselves, and so cannot be individually weighted or contribute to the overall grade”.
It's odd that this point is made only in the context of portfolios and showcases; the same consideration should be given to other methods, such as projects.
However, let's be clear here: the only scenario in which your on-programme assessment activity and end-point assessment activity risk conflicting relates to standardised summative assessment; that is, when you are assessing the apprentice's work on-programme towards qualification achievement (whether the qualification is mandated or not is irrelevant).
Good end-point assessment organisations have created firewalls between what you do on-programme, which is completed and assessed to achieve a qualification, and what is expected to be produced after the gateway, which they will assess.
This avoids double counting and the associated risk of an apprentice receiving two different outcomes for the same piece of work.
This has no bearing on how you choose to support learning through effective use of formative assessment techniques (including how you help apprentices prepare for end-point assessment), nor on the summative assessment methods you put in place internally to check what an apprentice knows or can do at various points during the programme.
This is most prevalent where you are not using a qualification, or where the qualification(s) don't cover the full occupational requirements of the Standard.
Let's look at this another way: if you aren't conscientiously planning to support the apprentice to learn as an outcome of formative assessment, or checking progress during the programme to reset the next step in the apprentice's journey, then you won't be helping them to reach their full potential.
It results in a gamble every time you agree with the employer to put an apprentice through the gateway.
Reforms or not, Ofsted reports are littered with statements on good and poor practice relating to the effectiveness of assessment in driving the programme of learning. It's grade-altering stuff, and the reforms have brought it into sharper focus.
The good news is that when you unpick this in practical detail with trainer-assessors, formative assessment is often an integral part of what's already underway (and no, it doesn't have to be all written down).
The larger gap in some occupational areas lies in determining valid and reliable summative assessment methods at key milestones along the apprentice journey (including initial assessment).
It’s unfair and unwise to leave this to individual trainers to determine. Why? Because it limits the potential for your learning as an organisation as you look for trends and patterns to help you make curriculum and resourcing improvements.
Don't leave these things to chance: the curriculum development stage is a crucial step in determining your planned use of assessment strategies.
Louise Doyle, Director, Mesma