How DfE shares personal data

Data we collect

The Department for Education (DfE) and its executive agencies have legal powers to collect data about individuals in the children’s services, education, apprenticeships, and wider skills training sectors, in England.

This data forms a significant part of our evidence base, which we use to inform our work.

DfE shares personal data:

where there is a clear benefit to the education or children’s services sector
to inform debate
where it can benefit a sizeable section of the target sector and is not solely for commercial gain
to encourage the research community to work collaboratively with the department and build the evidence base together – where the research is likely to have a significant impact, DfE will ensure third parties use appropriate methodologies and make good use of peer review
for secondary research, where:
it is commissioned, funded, sponsored or supported by DfE or the wider education and children’s services sector
it drives behaviour which is consistent with DfE policy
the output does not clash with or duplicate DfE official statistics, publications or other services offered by DfE

You can search DfE external data shares to find all ongoing personal level data sharing delivered through data sharing agreements. This includes updates on police, Home Office and Family Court order use of limited parts of our data where there is clear evidence of criminal activity.

DfE and its executive agencies will ensure that any projects that are permitted to work with our data are fully compliant with data protection legislation and are subject to the 5 safes framework. Together, we ensure that safe people access our safe data for safe projects in safe settings to produce safe outputs.

DfE will only share data with a third party where we have a lawful basis for the data share. That lawful basis will depend on the specifics of each data request and on the personal data the third party is seeking to use. For example, DfE may use article 6(1)(e) ‘public task’ as the lawful basis where the task or function has a clear basis in law.

The following are some examples of legal powers we have used to share personal data in support of the public task basis.

The Education (Individual Pupil Information) (Prescribed Persons) (England) Regulations 2009 allow us to share pupils’ personal data with certain third parties, including:

local authorities
organisations connected with promoting the education or wellbeing of children in England
organisations fighting or identifying crime
other specified crown and public bodies

The Childcare (Provision of Information About Young Children) (England) Regulations 2009 permit the sharing of individual child information from early years providers with persons who are conducting research into the educational achievements of children.

The Education (Information About Children in Alternative Provision) (England) Regulations 2007 permit the sharing of data about children in alternative provision with persons who are conducting research into the educational achievements of children.

The Apprenticeships, Skills, Children and Learning Act 2009 permits the sharing of learner data to enable or facilitate the exercise of any function of the DfE relating to education or training.

The Education and Skills Act 2008 covers the sharing of learner data in connection with the exercise of an assessment function defined as:

evaluating the effectiveness of training or education
assessing policy in relation to the provision of training or education
assessing policy in relation to social security or employment as it affects the provision of, or participation in, training or education

The Education (Student Information) (England) Regulations 2015 permit the sharing of a subset of data for learners in further education with persons who, for the purpose of promoting the education or well-being of students in England, are conducting research or analysis, producing statistics, or providing information, advice or guidance.

The Children Act 1989 covers the sharing of children’s services data to assist other persons in conducting research into any matter connected with a number of specified functions of the department or local authorities.

The Education (Supply of Information about the School Workforce) (No.2) (England) Regulations 2007, section 8, made under section 114 of the Education Act 2005, permits the sharing of data with persons conducting research relating to qualifying workers or qualifying trainees which may be expected to be of public benefit.

Chapter 5 of Part 5 of the Digital Economy Act 2017 facilitates the linking and sharing of de-identified data by public authorities for accredited research purposes in the public good. It is designed to support the UK research community, both within government and beyond. Currently, DfE shares longitudinal education outcomes (LEO) data under this lawful basis through the Office for National Statistics (ONS) Research Accreditation Service.

Vision for sharing data

DfE’s vision for sharing personal data with external organisations is two-fold.

Where data can be shared under DEA, project approval will be managed through the ONS Research Accreditation Service (RAS).
Where data cannot be shared under DEA, project approval will be managed by the DfE data sharing service under DfE legislation.

DfE will only share personal data under DEA which has already been de-identified for disclosure by ONS, as service provider and DEA-accredited processor.

You can find out more about how the ONS shares and uses personal data. All research projects under DEA are consistently accredited using the Research Code of Practice and Accreditation Criteria which was approved by the UK Parliament in July 2018. As the statutory accrediting body, the UK Statistics Authority has also established a Research Accreditation Panel to oversee the independent accreditation of processors, researchers and research projects.

Researchers can apply for LEO data through the ONS Research Accreditation Service (ONS RAS).

Researchers must apply for access to all other DfE data through the DfE data sharing service.

Five safes

All DfE data, whether accessed via ONS RAS or the DfE data sharing service, will be subject to the 5 safes framework for how we protect data:

safe settings
safe people
safe projects
safe outputs
safe data

Safe settings

Our default route for sharing personal data for research purposes is through the ONS Secure Research Service (SRS) physical and virtual datalabs (including remote access). This is a safer way to access data compared with transferring data files to individual organisations.

It’s not always suitable to get data through the SRS. If you’re receiving data directly from us, we make sure that data is only provided to your organisation and held in a safe setting by checking:

your organisation’s IT and building security
that you don’t keep the data for longer than allowed

Safe people

We only share our data with people we trust to use it safely and responsibly.

To receive personal data directly from us, you have to:

provide a copy of a ‘basic disclosure’ certificate that is no more than 2 years old
sign an individual declaration form to confirm that you abide by our data sharing agreements
complete recognised data protection and information security training

To access personal data via ONS SRS, you have to:

be approved by us
sign an individual declaration form to confirm that you abide by our data sharing agreement
complete the ONS approved researcher scheme

Safe projects

We have a senior board, the data sharing approval panel (DSAP), which makes sure all external requests for personal data meet our data sharing principles.

The board includes senior internal and external data experts who meet regularly to consider cases and approve or reject requests.

See Data sharing approval panel (DSAP): terms of reference (PDF, 189 KB, 9 pages) for more information.

Safe outputs

When applying to receive our data, you have to:

make it clear how you intend to use the data
follow the relevant agreement and schedule for the data share

When working through the SRS, any results from your analysis that you want to use outside of the service will be checked by ONS. They’ll make sure the outputs protect data confidentiality and can’t be used to identify any specific individuals or organisations.

Safe data

We now classify all personal data leaving us against 2 criteria:

the risk that an individual could be identified
how sensitive the data item is

This makes it easier for us to be transparent about:

what kind of data we share with third parties
our decision making

Safe data classification framework

When applications for personal data are made, we use these classifications to scrutinise the data request to make sure that:

we only share data proportionate to the intended purpose
we are comfortable with the level of protection around the individual’s identity that is built within the dataset we are allowing the third party to access

We also use these classifications to check the additional conditions of processing, which is a legal requirement.

We publish the risk of identification and sensitivity classifications in DfE external data shares.

Assessing the risk of identification

We use 6 levels of identification risk to describe data.

Level 1: instant identifiers

Examples of personal level data that instantly identify an individual within a dataset include:

full names
full addresses
email addresses
phone numbers
IP addresses

Level 2: meaningful identifiers

These are identifiers that are assigned to people such as a:

NHS number
national insurance number

In education, pupils have identifiers such as:

unique pupil numbers
unique learner numbers
national candidate numbers

We call these meaningful identifiers because they:

directly identify the individual
are often known by the individual
can easily be used to link other educational data

A meaningful identifier could be combined with other data, increasing the chance of identification.

Where possible, we’ll:

avoid sharing instant or meaningful identifiers
aim to limit data-sharing to data with a risk of identification set at level 3 or below

If there’s a need to identify an individual, we’ll ensure that:

it’s justified
it’s proportionate to the intended purpose
we build an adequate level of protection into each instance of data-sharing

We provide awarding organisations with personal level data containing meaningful identifiers so that they can link up the current year’s exam results.

The classification of all data extracts with a risk of identification at level 1 or 2 will be published as ‘identifiable personal level data’.

Level 3: meaningless identifiers

A lot of research is interested in how individual pupils progress over time. To achieve this whilst safeguarding the individual’s identity, we make use of identifiers that have no meaning outside of our data.

These are less risky than meaningful identifiers as they can’t be used to join our data to non-DfE data.

The national pupil database (NPD) uses a data variable called the pupil matching reference, which allows users to identify the same pupil across different parts of the NPD but cannot be used by a third party to link other data sources.

Level 4: non-identifiers with higher identification risk

Within our personal level data, there are data variables that do not fall into level 1, 2 or 3 but can still be joined together to identify individuals.

Even if the names, addresses and meaningful reference numbers have all been taken out of the data, we know there is still a risk that certain variables could result in an individual being identified. This is what we class as ‘re-identification risk’.

Assessing re-identification risk is not an exact science. We’ve consulted experts in the field and have found that certain combinations are riskier than others. For example, the risk increases if we include:

number of siblings
the school a child attends
postcode of home address

We identify these combinations within the data requested and then question whether they are essential to the project purpose or research.

Level 5: non-identifiers with lower identification risk

This is the level of identification risk we give to data variables that do not meet any of the above criteria.

The classification of all data extracts with a risk of identification at level 3, 4 or 5 will be published as ‘de-identified personal level data (with re-identification risk)’.

Level 6: aggregate or suppressed data

We use these terms to describe the method of aggregating data. These data shares do not come to DSAP.

Where there are small numbers of individuals within the aggregated data, the appropriate levels of suppression are applied to make sure there is only an extremely remote risk of identification.

If a data cell only has 5 children in it, you may be able to infer things from what we have published if you had prior information about that group, for example if you knew 4 of them personally.
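The small-number suppression rule described above can be sketched in code. This is an illustrative sketch only: the threshold of 5 and the function name are assumptions chosen for the example, not DfE’s published suppression methodology.

```python
# Illustrative sketch of small-number suppression applied to an aggregate
# table before publication. The threshold is an assumption for this example,
# not DfE's actual rule.
SUPPRESSION_THRESHOLD = 5  # assumed minimum cell size

def suppress_small_cells(counts):
    """Replace any cell with fewer individuals than the threshold with a
    suppression marker, so small groups cannot support inference."""
    return {
        group: (count if count >= SUPPRESSION_THRESHOLD else "suppressed")
        for group, count in counts.items()
    }

# Example aggregate: the cell with 3 children would be suppressed,
# while larger cells are published unchanged.
aggregate = {"school A": 42, "school B": 3, "school C": 17}
print(suppress_small_cells(aggregate))
```

In practice, publishers also apply secondary suppression so that a suppressed cell cannot be recovered by subtracting published cells from a published total; that refinement is omitted here for brevity.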

Assessing the sensitivity of data

We use 5 categories to describe the sensitivity of data.

A. Public commitment that this data will never leave the department

There are a few data variables that we have publicly committed will only be used for internal departmental purposes. This category is used to make sure that those commitments are embedded into all data governance processes.

Any request including sensitivity A data would be rejected by DSAP.

B. Children’s services data

We collect data about the interactions some children have with children’s services, such as being:

looked after

We consider this as highly sensitive. Sharing this data for research purposes (using appropriate levels of data safeguarding) helps us to understand more about the children’s experience of these interventions to improve children’s services outcomes.

Sensitivity B data undergoes an additional level of scrutiny by the children’s services teams on top of DSAP scrutiny.

C. Education data treated as equally sensitive to special category data

The law defines areas of personal data that are particularly sensitive for individuals as ‘special categories’.

Within education, we believe that there are variables that citizens would treat as equally sensitive, but are not covered in GDPR, such as free school meal eligibility.

We use this category to make sure such variables are thought about in the same way as GDPR special category data during our decision-making processes, even if legally there are differences.

Sensitivity C data will undergo the same level of scrutiny as if they were sensitivity D data.

D. GDPR special category data

GDPR special categories are clearly set out in law. Most relevant in the context of education data are:

elements of special educational need (SEN) that have a health context

Sensitivity D data requests require additional conditions of processing to be justified, as set out in law, before DSAP can consider it for data sharing.

E. Other

Data that does not fit into any of the other 4 categories, such as exam results.
