Measuring the Quality of EpAOs

Jacqui Molkenthin

In this article, Jacqui discusses ideas and options around developing public facing metrics to measure the quality of EpAOs and to support employer / provider choice.

For years, a significant number of end-point assessment organisations (EpAOs) have been calling for a national measure of quality, and as far back as 2019 there was an intention to publish reports on the quality of EpAOs (page 23 of the 2019 EQA framework: “The Institute can share the outcomes of reviews with relevant bodies and plans to publish reports or elements of them at some point in the future”). Well, four years on, a lot has been learnt and a few evaluation reports have been published, but there is still no indication of a set of public facing measures of the quality of individual EpAOs.

The external quality assurance (EQA) providers for EPA are Ofqual and the OfS. The Institute for Apprenticeships and Technical Education (IfATE) EQA framework states that EpAO compliance with the Ofqual / OfS Conditions provides a “first indication of assurance on each EPAO”. Compliance essentially means that an organisation is delivering in accordance with the rules (Conditions) set by Ofqual / OfS, which provides a level of assurance to customers and supports public confidence.

Compliance supports consistency, but it does not assure quality. To assure quality, tangible measures are required that allow benchmarks to be established and performance to be compared across EpAOs. Quality measures can arguably lead to greater customer focus, innovation and better outcomes, so should these be the second indication of assurance on each EpAO? In this article, I explore some ideas around potential measures.

However, before I begin, I would like to add a few caveats to my thoughts:

  1. It could be argued that some of these measures are not strictly about quality, but they could still be used by customers to help make an informed choice;
  2. Some measures would only be effective on an apprenticeship-by-apprenticeship basis, whereas others would be effective at an EpAO level;
  3. There will always be times when things are outside the control of the EpAO, so no measure can be perfect;
  4. No single measure will guarantee quality: data can be ‘manipulated’, the perception of quality may be subjective, and there will always be ‘exceptions to the rule’.

But we should not shy away from using a range of measures to indicate the quality of EpAOs if it is in the interests of customers, stakeholders and EpAOs.


Both the ESFA and Ofqual collect a lot of EPA-related data but publish very little of it, despite there being lots of data publicly available on training providers. To get a true data picture of apprenticeships, you should be able to see the entire lifecycle of an apprenticeship, which includes both the training and the end-point assessment. For example:

  • The ILR collects a range of data on EpAO selection, dates, price, outcomes and achievements (refer to my later points around value for money and outcomes)
  • Ofqual collects data on EPA outcomes, appeals, complaints, malpractice, and other key areas
  • Ofqual carries out thematic monitoring (looking at specific issues across all EpAOs), and technical evaluations (review of EpAO EPA materials), all of which are valuable indicators of quality.

Value for Money

I do not believe that price is an indicator of quality; however, I do believe there is a lack of pricing transparency in the marketplace. For example, I have seen hidden extras for things that I believe should be part of the core price, and EpAOs that are not clear in their public pricing about who is responsible for providing tools, equipment and facilities for EPA (an EpAO may appear cheaper, but when you get into the detail they expect the employer or provider to supply the materials, equipment and facilities for the EPA).

Should there be an agreed framework for publishing EPA prices, detailing clearly what is and is not included, and perhaps including details of the triggers for negotiation or discounts? The Ofqual Conditions require the publication of a standard qualification fee, and that the information is clear to a potential purchaser (F1.1 and F1.2). Such a framework may also be a mechanism to identify bundling, an activity undertaken by some EpAOs which is against the ESFA conditions for EpAOs (a previous article I wrote on bundling can be accessed here).
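To illustrate what such a framework might look like in data terms, here is a minimal sketch of a standardised price disclosure. All field names, the standard reference and the figures are hypothetical; they are not drawn from any actual ESFA or Ofqual framework:

```python
from dataclasses import dataclass, field

# Hypothetical standardised price disclosure; the fields are illustrative
# of what an agreed framework could require EpAOs to publish.
@dataclass
class EpaPriceDisclosure:
    epao: str
    standard: str
    core_price_gbp: float
    included: list = field(default_factory=list)   # covered by the core price
    excluded: list = field(default_factory=list)   # employer / provider supplies
    resit_fee_gbp: float = 0.0
    discount_triggers: list = field(default_factory=list)

def total_cost(d: EpaPriceDisclosure, resits: int = 0) -> float:
    """Headline cost including any resits, for like-for-like comparison."""
    return d.core_price_gbp + resits * d.resit_fee_gbp

disclosure = EpaPriceDisclosure(
    epao="Example EPAO Ltd",                      # hypothetical organisation
    standard="ST0000 Example Technician",         # hypothetical standard
    core_price_gbp=1200.0,
    included=["remote professional discussion", "results within 10 working days"],
    excluded=["workshop facilities and tools for the practical observation"],
    resit_fee_gbp=300.0,
    discount_triggers=["cohorts of 20+ apprentices"],
)

print(total_cost(disclosure, resits=1))  # core price plus one resit
```

Making the excluded items an explicit, published field is what would expose the “hidden extras” described above, because an apparently cheaper core price could be read alongside what the employer or provider must still supply.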


Timeliness

I think response timeframes could be a good indicator of quality, and a useful comparator for the customer when selecting an EpAO. With the use of electronic systems for managing EPA, such data should be readily available to EpAOs. For example, what are the EpAO timeframes from:

  • Initial / general enquiry to response
  • Selection to arranging gateway and / or assessments
  • Application for reasonable adjustments to decision
  • The gateway to assessment
  • The completion of the last assessment to the issuing of results
  • The issuing of results to the submission of the certificate request (taking account of timeframes for appeals)

And how do they compare to their customer service policies / pledges? Ofqual could also make use of such data as an indicator around capacity.
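As a sketch of how such timeframes could be turned into a comparable public measure (the stage names, dates and record structure here are entirely hypothetical, not from any real EpAO system), an EpAO’s electronic records could be reduced to a median turnaround per stage:

```python
from datetime import date
from statistics import median

# Hypothetical EPA records: each maps a stage to the date it was reached.
records = [
    {"gateway": date(2023, 3, 1), "assessment": date(2023, 3, 20),
     "results": date(2023, 4, 3), "certificate": date(2023, 4, 17)},
    {"gateway": date(2023, 3, 5), "assessment": date(2023, 4, 2),
     "results": date(2023, 4, 12), "certificate": date(2023, 4, 30)},
]

# Consecutive stage pairs matching some of the intervals listed above.
stages = [("gateway", "assessment"), ("assessment", "results"),
          ("results", "certificate")]

def median_turnaround_days(records, start, end):
    """Median days between two stages across all apprentices with both dates."""
    gaps = [(r[end] - r[start]).days for r in records if start in r and end in r]
    return median(gaps) if gaps else None

for start, end in stages:
    print(f"{start} -> {end}: {median_turnaround_days(records, start, end)} days")
```

A median (rather than a mean) keeps the measure from being distorted by the occasional case that is genuinely outside the EpAO’s control, which matters given the caveats listed earlier.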


Flexibility

Linked to my comments around timeliness, could the flexibility of an EpAO be an indicator of quality? For example, when delivering face-to-face, how flexible are their locations and dates; do they provide a service outside of the normal 9–5; how quickly can a cancelled EPA be rearranged? However, at this stage, I am not sure how it could be measured on a consistent and fair basis.


Outcomes

Apprentice outcomes are often used as an indicator of quality. However, I would argue that for EpAOs, outcomes are not an indicator of quality; they are merely a sales tool (providers are more likely to choose an EpAO with the best outcomes because part of their funding is based on achievement). Apprentice outcomes and results are an indicator of the hard work of the apprentice and training provider, not the EpAO, and it is the employer’s decision whether an apprentice is ready to enter EPA. The EpAO is there to independently assess. If, for example, significant numbers are gaining a distinction, does it mean that the apprentices are fantastic, that the EpAO assessment is not rigorous enough, or that the assessment plan is not fit for purpose?

Satisfaction measures

These are well-recognised measures of quality, and many EpAOs use them. There may therefore be merit in exploring a consistent set of satisfaction measures that could be used across all EpAOs. For example, it could look at satisfaction with timeframes, booking processes, courtesy of staff, responsiveness to queries, ease of access to systems, access to resources, whether the service was what was expected at the point of selection, and so on. IfATE and other bodies do run satisfaction-based surveys, but these are not specific to individual EpAOs and as such only provide a sector overview.
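A minimal sketch of how a consistent set of satisfaction measures might be aggregated so that EpAOs can be compared like-for-like. The categories and scores below are hypothetical illustrations, not an actual survey instrument:

```python
from statistics import mean

# Hypothetical standard satisfaction categories, each scored 1-5 by respondents.
CATEGORIES = ["timeframes", "booking", "staff_courtesy",
              "responsiveness", "systems_access", "met_expectations"]

def satisfaction_summary(responses):
    """Mean score per category across all survey responses."""
    return {c: round(mean(r[c] for r in responses), 2) for c in CATEGORIES}

# Two illustrative survey responses for one EpAO.
responses = [
    {"timeframes": 4, "booking": 5, "staff_courtesy": 5,
     "responsiveness": 3, "systems_access": 4, "met_expectations": 4},
    {"timeframes": 3, "booking": 4, "staff_courtesy": 5,
     "responsiveness": 4, "systems_access": 3, "met_expectations": 4},
]

print(satisfaction_summary(responses))
```

The point of fixing the category list centrally is that every EpAO would report against the same headings, which is what would turn individual satisfaction surveys into a sector-wide comparator.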

Skills of the EpAO

Every assessment plan has a different set of skills and qualification requirements for assessors, so there is no single benchmark in this area. But is there merit in all EpAOs being explicit about the skills and qualifications of their assessors, and potentially their CPD requirements, to ensure ongoing industry expertise?

Quality marks

Would other standards be a useful indicator of quality, such as some of the ISOs, or Investors in People? Personally, I do not think that additional quality indicators or marks would provide any further assurance of quality beyond being a ‘nice to have’, but an organisation’s willingness to be reviewed by external bodies may indicate its commitment to quality.

On a final note, Ofqual’s Corporate Plan makes statements such as (a) “We want to improve the information available to those that purchase qualifications”, (b) “the choice must be clear, and the market navigable”, (c) “improve public awareness and understanding of the range of qualifications available, to support clarity of choice in the market”, (d) “secure that qualifications are provided efficiently and that their price represents value for money”, and (e) “innovation in qualification design, delivery and awarding – supporting standards, public confidence, efficiency and international competitiveness”.

I do not believe that all of this is possible without public facing benchmarks and measures of quality. But we must always balance the impact of measures against the bureaucracy they create: if a range of public facing quality measures were developed, would employers / providers use them, or would they continue to be led by price and convenience?

By Jacqui Molkenthin, Consultant and researcher in EpAO start-up, quality assurance, compliance (Ofqual), & growth at JEML Consulting
