1. Methodology background


National Statistic: Yes
Survey name: Public service productivity: total
Frequency: Annual
How compiled: Based on third party data
Geographic coverage: UK

2. About this Quality and Methodology Information report

This quality and methodology report contains information on the quality characteristics of the data (including the European Statistical System five dimensions of quality) as well as the methods used to create it. The information in this report will help you to:

  • understand the strengths and limitations of the data

  • learn about existing uses and users of the data

  • understand the methods used to create the data

  • decide suitable uses for the data

  • reduce the risk of misusing data

3. Important points

  • The estimate for public service productivity is displayed as an index, showing the change over time of the amount of output provided for each unit of input.

  • To remove the effect of price changes over time, public service output and inputs are measured in quantity terms (also referred to as volume terms), instead of expenditure terms.

  • Some public service area outputs are also adjusted for changes in the quality of activities and services provided, as recommended by the Atkinson Review (PDF, 1.08MB). This is so the outcome of a public service can be observed, rather than the output alone.

  • Public Service Productivity estimates are multi-factor productivity estimates as opposed to labour productivity estimates (a single-factor productivity measure), and so are not comparable with our headline measures of whole-economy labour productivity.

  • These estimates are produced to measure the productivity of total UK public services, but do not measure value for money or the wider performance of public services.

4. Quality summary

Overview

Total public service productivity is estimated by comparing growth in the total output provided with growth in the total inputs used. If the growth rate of output exceeds the growth rate of inputs, productivity increases, meaning that more output is being produced for each unit of input. Conversely, if the growth rate of inputs exceeds the growth rate of output, then productivity will fall, indicating that less output is being produced for each unit of input.
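As a minimal illustration of this relationship (a Python sketch with invented figures, not ONS production code), dividing an output volume index by an inputs volume index gives the productivity index:

```python
# Minimal sketch (invented figures): productivity is the ratio of an output
# volume index to an inputs volume index, rebased so the reference year is 100.

def productivity_index(output_index, inputs_index):
    """Productivity as an index: output per unit of input."""
    return [100 * o / i for o, i in zip(output_index, inputs_index)]

# Output grows faster than inputs here, so productivity rises.
output_idx = [100.0, 103.0, 106.1]
inputs_idx = [100.0, 101.0, 102.0]
print(productivity_index(output_idx, inputs_idx))  # [100.0, ~102.0, ~104.0]
```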

Output, inputs and productivity for total public services are estimated by combining growth rates for individual services using their relative share of total government expenditure as weights.

Public Service Productivity estimates are multi-factor productivity estimates as opposed to labour productivity estimates (a single-factor productivity measure), and so are not comparable with our headline measures of whole-economy labour productivity. This is because the inputs for public service productivity include goods and services and capital inputs, in addition to labour input. The public service productivity measures included in this article are also not directly comparable with our market sector multi-factor productivity estimates because of differences in the methodology used. For further information, see How to compare and interpret ONS productivity measures and A simple guide to multi-factor productivity.

These estimates are produced to measure the productivity of total UK public services. They do not measure value for money or the wider performance of public services. They do not indicate, for example, whether the inputs have been purchased at the lowest possible cost, or whether the desired outcomes are achieved through the output provided.

The methodology for calculating these statistics is based on the recommendations of the Atkinson Review (PDF, 1.08MB) on the measurement of government output and productivity for the national accounts. Estimates are published on a calendar year basis to be consistent with the UK National Accounts, and estimates are available both for total and for an individual service area breakdown. These are included in Public service productivity: total, UK, 2017.

More information on the methodology and sources used can be found in Sources and methods for public services productivity estimates.

Uses and users

Users of our public service productivity measures include:

  • departments within UK government such as the Cabinet Office, HM Treasury and regulatory bodies

  • the National Audit Office

  • press and general public

  • the Office for Budget Responsibility

  • Institute of Fiscal Studies (IFS)

  • the Nuffield Trust

  • academia

  • international statistical bodies

These organisations use the productivity estimates in a number of ways. Total public service productivity estimates have been used to inform previous IFS Green Budgets, are directly used by the Nuffield Trust, and are regular inputs into briefings for Cabinet Office’s ministers and permanent secretaries. We have, similarly, given advice to government departments on how to incorporate the general methodology of the estimates into their own work.

Feedback from users is received via user surveys and consultation events. Acting on such feedback, we are undertaking a development programme to improve public service productivity statistics across all service areas. As well as the annual estimates that are the focus of this release, we also publish experimental estimates of quarterly public service productivity and economic commentary – allowing for statistics to be provided on a more timely basis.

Strengths and limitations

Strengths of Public service productivity: total, UK, 2017 include:

  • The majority of data we use is administrative data and as such we are not reliant on surveys.
  • The dataset is decomposed in multiple ways, allowing us to provide comparable but different insights; for example, decomposing by service area shows how the quality adjustment for education can affect the total productivity figure, while decomposing by component could show how an increase in spending on a factor of inputs can change the higher level numbers.
  • The estimates account for the impact of quality change on public service output.
  • The dataset experiences continuous improvement because of the open revisions policy – the estimates are not constrained by Blue Book procedures.

Limitations of Public service productivity: total, UK, 2017 include:

  • There is a two-year time lag in producing the estimates – this year, the dataset time series covers the period 1997 to 2017 because of data availability; to account for this, we are now also producing experimental quarterly estimates of public services productivity.
  • Several different ways of measuring output are used in producing the statistics: some service areas are quality adjusted, some are directly measured, and for the remainder output is assumed to be equal to inputs, giving zero productivity growth for these service areas; work will continue to develop the output measurements.
  • There is no geographical breakdown of the estimate – the numbers given are for the UK as a whole.

Recent improvements

Several major improvements have been made to the statistics as part of Public service productivity: total, UK, 2017:

  • For education: improvements to data sources for the quality adjustment and development of the cohort splits methods for education attainment data.

  • For healthcare: improvements to inputs and output, adjusting output for the number of days available to carry out activities during any year, introducing a new deflator for intermediate goods and services consumption, and introducing NHS "bank" staff data.

  • For public order and safety: development of the quality adjustment and improvement of the labour deflator.

  • For police: improvements to labour inputs.

  • Use of UK National Accounts data sources consistent with Blue Book 2019.

Further information on the changes can be found in Improved methods for total public service productivity: total, UK, 2017 and Sources and methods for public service productivity estimates.

Education: improvements to data sources for the quality adjustment

Following Atkinson's recommendations, various developments to the quality adjustment have been applied to education. The latest changes in this area include improvements to educational attainment data sources for England, Scotland, Wales and Northern Ireland, as summarised in this section. Differences between countries, and therefore country-specific data, are important for education services, as devolved administrations have significant control over education policy.

England

In our first estimates, an average points score series was used. We then included the threshold measure from Department for Education (DfE) data, defined as the proportion of students achieving five or more passes at GCSE (grades A* to C) at the end of Key Stage 4.

After recent policy changes, this threshold measure is no longer being produced as a headline measure of GCSE attainment. The structure of the qualifications has changed and so has the marking system, with a shift from lettered to numbered grades. Details on Key Stage 4 headline measures (PDF, 799KB) have been published by DfE.

To account for this change, we now use the DfE's new headline measure, Attainment 8 (PDF, 274KB), from the academic year 2016 to 2017. It counts performance in eight slots, two of which are reserved for English and Maths and are double weighted to demonstrate their importance. Three of the other slots are for English Baccalaureate subjects, and the last three are for any GCSE or equivalent subject. To reflect the importance of English and Maths for students, we have also added a pass in English and Maths to the threshold measure, which is used in academic years ending 2009 to 2016.
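The slot structure can be illustrated with a short, hedged Python sketch; the function and the worked figures are illustrative assumptions, not DfE code:

```python
# Hedged sketch of the Attainment 8 structure described above. Under the
# reformed grading, each slot scores the numbered grade achieved; English
# and Maths are double-weighted.

def attainment8(english, maths, ebacc_top3, other_top3):
    """Eight slots: English and Maths double-weighted, plus three EBacc
    subjects and three other GCSE-or-equivalent subjects."""
    assert len(ebacc_top3) == 3 and len(other_top3) == 3
    return 2 * english + 2 * maths + sum(ebacc_top3) + sum(other_top3)

# A pupil with grade 7 in English and Maths and grade 6 elsewhere:
print(attainment8(7, 7, [6, 6, 6], [6, 6, 6]))  # 64 points
```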

Scotland

For Scotland, we previously used a forecast of the average point score series, which was provided by the Scottish Government until the academic year 2013 to 2014.

We now use data from the Scottish Qualifications Authority (SQA). SQA publishes attainment by qualification type on their website. Different qualifications are for different Scottish Common Qualifications Framework (SCQF) levels. In the SCQF, Level 5 is considered equivalent to GCSE passes of grade C or above.

The three Level 5 National qualifications we consider are: National 5s (the equivalent of GCSEs); Skills for Work and Personal Development (SWPD); and Intermediate 2s. The latter have been phased out, with the last exams sat in the academic year 2014 to 2015. The majority of students sit National 5s.

Data on the average attainment per student for each type of qualification are provided by SQA – for example, for National 5s, the average number of grades A to C per student. We use the number of students sitting each type of qualification (also provided by SQA) to weight attainment together. This means that an overall Level 5 attainment index can be constructed.
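A hedged Python sketch of this weighting, with invented attainment and entrant figures, is shown below:

```python
# Illustrative sketch: average attainment per student for each Level 5
# qualification type is weighted by the number of students sitting it.
# All figures are invented.

def overall_attainment(avg_attainment, students):
    total = sum(students.values())
    return sum(avg_attainment[q] * students[q] for q in avg_attainment) / total

avg_attainment = {"National 5": 3.8, "SWPD": 1.1, "Intermediate 2": 2.5}
students       = {"National 5": 280_000, "SWPD": 30_000, "Intermediate 2": 5_000}
print(round(overall_attainment(avg_attainment, students), 2))  # ~3.52 grades A to C per student
```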

Please note that the Scottish education system is different from those in England, Wales and Northern Ireland. As a consequence, the data from the Scottish Government to measure student attainment are not directly comparable with those produced by the other countries.

Wales

Welsh attainment data were previously the "average wider points score" for Year 11 students. This measure grew strongly from the academic year 2010 to 2011 onwards, before declining steeply in the academic year 2016 to 2017. As attainment is a proxy for quality, the conclusions drawn from these data seem unrealistic, and the Welsh Government is currently reviewing its attainment measures.

We now use the "average capped wider points score" (Note 1) from academic years ending 2011 to 2017. Both these data series and other Key Stage 4 indicators are available online. The capped version of the average wider points score considers the students' best eight qualifications, whereas the original measure has no restriction on the number of qualifications obtained. The Wolf Review (PDF, 2.9MB) discussed the growing contribution to average points score measures in England from vocational qualifications, as more students were studying for and passing them. There were concerns that not all the qualifications were as useful as the attainment data suggested. It appears likely that the same issue has been observed in Wales, resulting in the Welsh Government's decision to review its attainment data.

For the academic year 2017 to 2018, we have used a new headline measure, called "Capped 9" (Note 2), as recommended by the Welsh Government.

Northern Ireland

In all previous estimates of public service productivity, Northern Ireland attainment was assumed to be equal to that of England, as a result of the absence of known data sources.

We now use a threshold measure for Northern Ireland, which is publicly available and starts from the academic year 2005 to 2006. Through collaboration with the Northern Ireland Department of Education, we have been able to use data from as early as the academic year 1996 to 1997. This means that we have an attainment adjustment specifically for Northern Ireland, helping to ensure good representation for all four countries in the UK. The measure is the percentage of Year 12s achieving five or more GCSEs at grade C or above and it is available online.

As for England, we now use the threshold measure inclusive of English and Maths beginning from the academic year 2009 to 2010 (also available online).

Education: cohort splits for education attainment data

The Atkinson Review recognised that "The GCSE results are the outcome of 11 years of compulsory schooling" and not only of the last year. In line with this, a new attainment measure was implemented to reflect the quality of teaching across the entire five years (or equivalent) of secondary education provision.

For example, GCSE attainment data published for the academic year 2017 to 2018 reflect the effectiveness of teaching from Year 7 to Year 11. There is limited evidence on the contribution of different years of schooling to attainment. However, based on the available literature and the structure of secondary education, the new "cohort split" approach applies specific percentages of the new attainment data back to previous years, according to the contribution each year is deemed to make, as shown in Table 1.
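The mechanics can be sketched in Python; the weights below are placeholders rather than the actual percentages, which are given in Table 1:

```python
# Sketch of the cohort split: attainment growth measured for an exam cohort is
# attributed back across the five contributing school years. The weights are
# placeholders, not the published Table 1 percentages.

COHORT_WEIGHTS = {0: 0.30, 1: 0.25, 2: 0.20, 3: 0.15, 4: 0.10}  # lag in years before the exam year

def apply_cohort_split(exam_year, attainment_growth):
    """Spread a cohort's attainment growth over the years that produced it."""
    return {exam_year - lag: weight * attainment_growth
            for lag, weight in COHORT_WEIGHTS.items()}

print(apply_cohort_split(2018, 1.0))
# {2018: 0.3, 2017: 0.25, 2016: 0.2, 2015: 0.15, 2014: 0.1}
```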

Further information on the rationale and impact of these weights can be found in Improved methods for total public service productivity: total, UK, 2017.

Healthcare: improvements to inputs and output

Public service healthcare productivity includes three changes:

  • The introduction of a "number of days adjustment" to output to account for the effects of leap years and year-to-year changes in the number of working days.
  • The introduction of a new deflator for intermediate goods and services consumption.
  • The incorporation of expenditure on NHS "bank" staff into labour inputs.

Number of days adjustment

As healthcare output is calculated using a cost-weighted activity index, annual healthcare output may vary according to the number of days available to carry out activities during any year. The total annual number of days varies with leap years, while the number of bank holidays and how weekends fall during the year influence the annual number of working days.

Both the total number of days and the number of working days can affect the annual output of different parts of the healthcare service. The effect is particularly notable when using financial year (April to March) data, such as those that form the basis of our healthcare productivity measure, as some financial years may contain four Easter bank holidays and others none.

We have therefore introduced an adjustment to output to remove the effect of changes in the annual number of total days and number of working days on healthcare output.
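A minimal Python sketch of one possible adjustment of this kind is shown below; the scaling rule and figures are illustrative assumptions, not the production method:

```python
# Sketch of a number-of-days adjustment: measured activity is scaled by the
# ratio of a standard year to the days actually available, so calendar
# effects do not show up as output change.

def adjust_for_days(activity, days_available, standard_days=365.25):
    return activity * standard_days / days_available

print(adjust_for_days(10_000, 366))  # leap year: scaled down slightly
print(adjust_for_days(10_000, 365))  # common year: scaled up slightly
```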

New deflator for intermediate goods and services consumption

In common with inputs for other services sectors in public service productivity, healthcare inputs are produced in volume terms by deflating expenditure by the most relevant available deflator. In previous editions of the Office for National Statistics (ONS) public service healthcare productivity publication, the main deflator used to calculate goods and services inputs was the Health Service Cost Index (HSCI). The HSCI was produced by the Department of Health, but was discontinued in 2017, with a final data point of March 2017.

To replace the HSCI, a new NHS Cost Inflation Index has been developed by the Department of Health and Social Care (DHSC), in conjunction with NHS England and NHS Improvement, the Centre for Health Economics at the University of York, and the ONS. The NHS Cost Inflation Index uses a range of NHS and ONS data and different components of the index will be used to deflate appropriate elements of our healthcare productivity inputs.

We publish the NHS Cost Inflation Index in an annex alongside the public service healthcare productivity release.

NHS "bank" staff

In the NHS, "bank" staff fulfil a similar role to agency staff, working variable hours in response to demand. However, unlike agency staff, bank staff are NHS employees. We now include bank staff in our labour inputs for England using deflated expenditure data from the NHS, starting from financial year ending (FYE) 2016.

Public order and safety: improvements to the quality adjustment and labour deflator

There have been two changes to the quality adjustment for public order and safety (POS), outlined in this section. There is also an improvement to the labour deflator.

Quality adjustment

Prisons are adjusted for quality using three metrics: recidivism (severity-adjusted re-offending rates), escapes from prison, and safety in prison. For safety, we construct an index from growth rates in the annual number of slight incidents, serious incidents, and fatalities. The source data, from the Ministry of Justice, are split between self-inflicted incidents and assaults for the two incident types. We have been able to extend the assaults series back to 2000, from the previously used start year of 2004.

For 2017, re-offending source data were unavailable from the Ministry of Justice as a result of data collection updates. We have forecast the final quarter based on data since 2012, which were collected and reported on a consistent basis. Sensitivity checks suggest that the quality adjustment is robust to various forecasting approaches, and that the method we have used produces sensible results. These data will be available for the 2018 estimates, removing the need for forecasting in future publications.

Labour deflator

We have also improved how we deflate POS labour inputs. We previously used the Index of Labour Costs per Hour (ILCH) for the public sector to deflate expenditure on labour in POS. The POS service area covers activities that employ a distinctive set of occupations, which makes a more specific and accurate deflator possible. For example, a large component of POS is the Fire and Rescue Services; firefighters might be expected to have wage growth that differs from the public sector as a whole, given their employment conditions.

To improve the POS labour deflators, we have used published data from the Home Office, Ministry of Justice and HM Prison and Probation Service on the workforces of the POS service area. The breakdown of employment by occupation, role, grade or job title has been used to weight the growth rates in wages of specific occupations, as recorded in the Annual Survey of Hours and Earnings (ASHE). We have implemented these new deflators for the fire and prisons elements of POS, which collectively make up around 60% of labour expenditure in POS.
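The weighting step can be sketched in Python with invented wage growth rates and workforce shares:

```python
# Sketch of an occupation-weighted labour deflator: ASHE wage growth for
# specific occupations is combined using workforce shares. All figures are
# invented; the real shares come from the published workforce data.

def labour_deflator_growth(wage_growth, workforce_share):
    assert abs(sum(workforce_share.values()) - 1.0) < 1e-9
    return sum(wage_growth[occ] * workforce_share[occ] for occ in wage_growth)

wage_growth = {"firefighters": 0.010, "prison officers": 0.025, "support staff": 0.020}
shares      = {"firefighters": 0.40,  "prison officers": 0.45,  "support staff": 0.15}
print(f"{labour_deflator_growth(wage_growth, shares):.2%}")  # ~1.8% weighted wage growth
```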

Analysis indicated that the new deflators better reflect the trends in wages and the volume of labour in these areas, and thus will produce more accurate estimates of productivity. The remainder of POS, covering courts and other legal activities, continues to be deflated by the ILCH public sector index.

Police: improvements to labour inputs

Previously, labour expenditure on local government and central government policing was deflated to approximate the volume of labour inputs. For local government expenditure, ASHE data and police workforce data were used to calculate an appropriate deflator.

We now estimate the volume of local government labour directly from data on full-time equivalent employees (FTEs) and relative salaries for different groups. The main advantage of this change is to allow for the incorporation of more recent data from Her Majesty's Inspectorate of Constabulary and Fire and Rescue Services (HMICFRS), which disaggregates FTEs by policing function (for example, neighbourhood policing, road policing or investigations). Were we to introduce direct output measures for certain areas of policing in the future, more accurate estimates of inputs for certain policing functions would now be possible. This is consistent with principles set out in the Atkinson Review (PDF, 1.08MB), where it is recommended that the measurement of inputs should be as comprehensive as possible.
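A hedged Python sketch of this direct volume approach, with invented groups, FTE counts and salary ratios, is shown below:

```python
# Sketch of direct volume measurement for police labour: full-time equivalents
# for each group are weighted by relative salaries, so changes in workforce mix
# change the measured volume of labour input. Groups and ratios are invented.

def labour_volume(ftes, relative_salary):
    return sum(ftes[g] * relative_salary[g] for g in ftes)

ftes_2016 = {"officers": 120_000, "PCSOs": 10_000, "staff": 60_000}
ftes_2017 = {"officers": 118_000, "PCSOs": 9_500, "staff": 61_000}
rel_pay   = {"officers": 1.00, "PCSOs": 0.65, "staff": 0.75}

growth = labour_volume(ftes_2017, rel_pay) / labour_volume(ftes_2016, rel_pay) - 1
print(f"{growth:.2%}")  # ~-0.9%: the volume of labour input fell slightly
```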

National accounts data sources

In the latest release, we have used government expenditure data consistent with Blue Book 2019. Previously, we used the European System of Accounts (ESA) Table 11 (general government annual expenditure) data. ESA Table 11 is published earlier in the year but is based on the previous year's Blue Book and has some slight accounting differences from the UK National Accounts.

The Office for National Statistics (ONS) has, along with other developments, improved its estimation of capital consumption and of government purchases of goods and services. These updates have increased the consistency and accuracy of inputs data. These and other methodological changes to Blue Book 2019 are detailed in an impact article. Blue Book 2019 itself was published in October 2019.

For capital consumption, asset lives have been reviewed (and found to be shorter in most cases, increasing consumption of fixed capital). For goods and services, the method used to estimate the cost of Value Added Tax (VAT) – which the government does not pay, but which is estimated for consistency with private purchases in the UK National Accounts – has been widened in scope.

We also made some more general systems improvements to maintain best practice and improve consistency across the different aspects of our processing. In particular, we improved the consistency of the construction of our deflators in our processing system, leading to minor revisions to other deflators (as well as the deflator development described elsewhere in this methods article).

Notes for: Quality summary

  1. The "wider" in "average wider points score" simply refers to the inclusion of various non-GCSE qualifications in the average points score measure, to differentiate it from the standard "average points score" measures used elsewhere.

  2. There are two versions, the original and the interim, both of which will be produced for the foreseeable future. We plan to use the original measure as the interim Capped 9 is only available for academic year 2018 to 2019.

5. Quality characteristics of the data

Relevance

(The degree to which the statistical product meets user needs for both coverage and content.)

The UK Centre for the Measurement of Government Activity (UKCeMGA) was launched in 2005 to take forward the recommendations of the Atkinson Review (PDF, 1.08MB), with the aim to improve the measurement of government output, inputs and productivity, and to establish a regular reporting schedule.

In the years since the publication, we have developed estimates of healthcare and education output, inputs and productivity. These estimates are updated annually, and any methods changes are explained in prior papers and articles that we’ve published. We also periodically update estimates of output, inputs and productivity for the remaining areas of government final consumption, based on the Classification of Functions of Government (COFOG) expenditure. These are:

  • adult social care

  • children’s social care

  • social security administration

  • public order and safety

  • police

  • defence

  • other government services (includes economic affairs, general public services, recreation, housing and environmental protection)

There are three different statistical outputs published in Public service productivity: total, UK:

  • a volume index of total public services output and indices of output by service area

  • a volume index of total public services inputs and indices of inputs by service area

  • a derived index for total public services productivity and by service area (output per unit of inputs)

Accuracy and reliability

(The degree of closeness between an estimate and the true value.)

Both the output and inputs series for each service area are constructed using a variety of administrative and national accounts data. The accuracy of the derived series therefore depends on the accuracy of the source data. Unless we have introduced substantial methodological changes, the main source of revisions to each service area’s productivity estimates will be changes in source data and expenditure weights.

As there is no other source of public service productivity estimates that is comparable in methodology, validating our results is difficult. Validation is instead supported through regular triangulation articles, as set out in the Atkinson Review.

It is difficult to provide a confidence interval around our estimates given the multiple sources of data on which the estimates are based. There will inevitably be some margin for error from a “true” measure of productivity, which is unknown. We collate triangulation evidence from other government departments and independent sources, which provides additional context to inform the interpretation of the public service productivity statistics.

Coherence and comparability

(Coherence is the degree to which data that are derived from different sources or methods, but refer to the same topic, are similar. Comparability is the degree to which data can be compared over time and domain, for example, geographic level.)

Assessing coherence of the data in Public service productivity: total, UK is difficult, as there are currently no comparable measures published. We convert some source data from financial year to calendar year and aggregate results to a UK level, which makes it difficult to make comparisons at a country level. Service areas are also defined by Classification of the Functions of Government (COFOG) rather than by administrative department or devolved administration. The different methodology developed for healthcare and education, and the "output=inputs" treatment of three service areas (police, defence and other), mean that direct comparisons between service areas should not be made.

The three different methods by which we measure output, and their distribution between service areas, can be seen in Figure 1.

The estimates cover the UK and, where possible, are based on data for England, Scotland, Wales and Northern Ireland. Where data are not available for all four countries, the assumption is made that the available data are representative of the UK. This can happen for quality adjustment, output or inputs data.

Finally, in instances where the data are available for all four countries of the UK, there may be slight variations in definitions or reporting conventions that introduce additional, largely unquantifiable effects on our estimates.

Accessibility and clarity

(Accessibility is the ease with which users can access the data, also reflecting the format in which the data are available and the availability of supporting information. Clarity refers to the quality and sufficiency of the release details, illustrations and accompanying advice.)

Our recommended format for accessible content is a combination of HTML web pages for narrative, charts and graphs, with data being provided in usable formats such as CSV and Excel. We also offer users the option to download the narrative in PDF format. In some instances, other software may be used, or may be available on request. For further information, please refer to the contact details at the beginning of this page.

In addition to this Quality and Methodology Information, basic quality information relevant to each release is available in the Quality and methodology section of the relevant article.

Notification of changes in methodology is published on the public service productivity topic-specific methodology page, with historic changes available in the guidance and methodology area of our archive website.

Timeliness and punctuality

(Timeliness refers to the lapse of time between publication and the period to which the data refer. Punctuality refers to the gap between planned and actual publication dates.)

Estimates of output, inputs and productivity in the total public sector are published on a calendar-year basis, and generally refer to the period (t-2), with t being the current year of publication. If the reference period were to be moved, for example, to (t-1), there would be a significant increase in the use of estimation to fill data gaps in the productivity articles, in advance of the publication of these datasets.

For more details on related releases, the GOV.UK release calendar provides 12 months’ advance notice of release dates. In the unlikely event of a change to the pre-announced release schedule, public attention will be drawn to the change and the reasons for the change will be explained fully at the same time, as set out in the Code of Practice for Statistics.

To date, each Public service productivity: total, UK article has been published as scheduled.

Concepts and definitions

(Concepts and definitions describe the legislation governing the output and a description of the classifications used in the output.)

Our analysis of productivity in UK public services represents internationally pioneering work. Measurement of outputs follows the guidance in the System of National Accounts (SNA) 1993 and subsequent SNA 2008, as well as the European System of Accounts (ESA) 1995 and subsequent ESA 2010. Measurement of outputs (including the need to measure the change in quality), inputs and productivity follows the principles in the Atkinson Review. The estimates presented in the article are for service areas classified by the Classification of the Functions of Government (COFOG).

Geography

Estimates are published on a UK geographic basis, with no further geographic breakdown provided.

Output quality

This statistic is a National Statistic and so meets the quality requirements of this status. It measures total productivity and the productivity of nine different service areas, offering comprehensive coverage of the data required by users.

Why you can trust our data

The Public service productivity: total, UK statistic is produced in accordance with the best practices set out in the Statistics Authority’s Code of Practice and the ONS’s Data Policies.

Any revisions to the data are clearly identified as such and limitations are made known to all users.

6. Methods used to produce the data

Main data sources

A range of data sources are used to provide a comprehensive picture of UK public services. A summary of these data sources is documented in Sources and methods for public service productivity estimates.

How we process the data

The following section outlines the main statistical methods used to compile estimates of public service inputs, output and productivity. A detailed explanation of the methods used is given in Sources and methods for public service productivity estimates. Significant methods changes are published in advance on the topic specific methodology page to inform users of both the nature and the likely impact of methodological changes.

Measuring output

The methods of measuring output vary between and within service areas. Table 2 provides a breakdown of these, as well as a definition of output measure.

The output measures used are based on, or taken in chained volume terms from, the Blue Book. Given that most public services are supplied free of charge or at cost price, they are considered non-market output. The output of most services is measured by the activities and services delivered; these are usually referred to as "direct output" measures. The activities are measured and aggregated into a single volume of output according to their relative cost or share of service area expenditure. This is referred to as a Cost-Weighted Activity Index.
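A minimal Python sketch of a cost-weighted activity index, using invented activities and unit costs, is shown below:

```python
# Sketch of a cost-weighted activity index (CWAI): growth in each activity is
# weighted by its share of base-period expenditure. Activities, volumes and
# unit costs are all invented.

def cwai_growth(activity_t0, activity_t1, unit_cost):
    """Base-period cost weights applied to activity growth (Laspeyres-style)."""
    base_cost = {a: activity_t0[a] * unit_cost[a] for a in activity_t0}
    total = sum(base_cost.values())
    return sum((base_cost[a] / total) * (activity_t1[a] / activity_t0[a])
               for a in activity_t0) - 1

t0   = {"operations": 1_000, "outpatient visits": 5_000}
t1   = {"operations": 1_050, "outpatient visits": 5_100}
cost = {"operations": 4_000.0, "outpatient visits": 150.0}
print(f"{cwai_growth(t0, t1, cost):.2%}")  # ~4.5%: growth dominated by the costlier activity
```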

For “collective services” — those that are not provided to an individual, such as defence — it is difficult to define and measure the nature of their output. It is assumed for such services that the volume of output is equal to the volume of inputs used to create them. This is referred to as the “output-equals-inputs” convention.

In addition, a quality adjustment factor is applied to the volume of activity index of several service areas. The purpose of these quality adjustment factors is to reflect the extent to which the services succeed in delivering their intended outcomes and the extent to which services are responsive to users’ needs. This results in estimates differing from those used in the national accounts.
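In minimal terms (a sketch with invented series, not the production calculation), the adjustment enters as a multiplicative factor:

```python
# Minimal sketch: a quality adjustment factor scales the volume of activity
# index, so improvements in quality register as additional output. Both
# series are invented.
activity_index = [100.0, 102.0, 104.0]   # hypothetical cost-weighted activity index
quality_factor = [1.000, 1.005, 1.008]   # hypothetical quality index (1.0 = no change)
quality_adjusted = [a * q for a, q in zip(activity_index, quality_factor)]
print([round(x, 2) for x in quality_adjusted])  # [100.0, 102.51, 104.83]
```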

There are currently four service areas that include such an adjustment:

  • healthcare

  • education

  • public order and safety

  • adult social care

Healthcare

The healthcare productivity quality adjustment is a compound measure made up of five components:

  • short-term post-operative survival rates

  • estimated health gain from procedures

  • waiting times

  • primary care outcomes achievement under the Quality and Outcomes Framework

  • National Patient Surveys scores

This quality adjustment process is applied from 2001 onwards. In the national accounts series, no quality adjustment is applied to healthcare output at present.

Further detail can be found in Source and Methods Public Service Productivity Estimates: Healthcare (PDF, 328.6KB).

Education

Output in primary and secondary schools, CTCs and academies is quality adjusted using different GCSE-level attainment measures for each country of the UK. As exam performance varies across geographical areas, and because education is a devolved policy area that affects the courses studied and exams taken, a different quality adjustment is applied to output in each country.

Different attainment measures are used over different years, in keeping with changes to the headline measures over time. The current attainment measures and data sources are listed below for each country:

  • England, Attainment 8, Department for Education (DfE)
  • Scotland, National 5s and Skills for Work and Personal Development courses pass rates, Scottish Qualifications Authority (SQA)
  • Wales, Capped 9, Welsh Government (WG)
  • Northern Ireland, Threshold measure including English and Maths, NI Department of Education (DENI)

Education attainment measures adopt a “cohort split” approach, whereby a new attainment measure is proportionally applied to the contributing years.

Further detail can be found in Sources and methods for public service productivity estimates.

Public order and safety

Quality adjustments are applied to the criminal justice system elements of public order and safety output. This includes output associated with Crown Courts, magistrates' courts, legal aid, the Crown Prosecution Service, and prison and probation services. There are two main elements. The first adjusts the whole series by a severity-adjusted measure of total reoffences per offender. The second looks more closely at the different service areas: for prisons, this includes escapes from and safety inside prisons, using the number of incidents and their severity; for courts, it uses the timeliness with which courts process cases passed on to them by the police.

Further detail can be found in Quality adjustment of public service public order and safety output: current method.

Adult social care

A new quality adjustment for adult social care (ASC) was introduced, applying the concept of adjusted social care-related quality of life using data from the Adult Social Care Survey. Respondents are asked to rank how well their care needs are met in eight domains, such as food and nutrition, accommodation and safety. Each level of response is then weighted by its importance to quality of life, using weights derived from another survey of community care users.

Note that the quality adjustment is produced separately for working-age adults with learning disabilities, other working-age adults, and older adults, for each of residential and nursing care, and community care. The resulting six components are then weighted together using the same measure of public expenditure as used in the inputs and output. The quality-adjusted output is obtained by taking the rate of change in the aggregate quality adjustment for each year and applying it to the corresponding year of the output index. More information on the methodological developments can be found in Public service productivity: adult social care QMI.
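A hedged Python sketch of this aggregation, showing three of the six components with invented figures, is given below:

```python
# Hedged sketch of the ASC aggregation: component quality scores are weighted
# together by expenditure, and growth in the aggregate adjustment scales the
# output index. Only three of the six components are shown; all figures are invented.

def aggregate_quality(component_quality, expenditure):
    total = sum(expenditure.values())
    return sum(component_quality[c] * expenditure[c] / total for c in component_quality)

spend = {"older adults, residential": 5.0, "older adults, community": 3.0, "working-age adults": 4.0}
q_t0  = {"older adults, residential": 0.80, "older adults, community": 0.78, "working-age adults": 0.82}
q_t1  = {"older adults, residential": 0.81, "older adults, community": 0.79, "working-age adults": 0.82}

growth = aggregate_quality(q_t1, spend) / aggregate_quality(q_t0, spend)
quality_adjusted_output = 103.0 * growth  # apply to the unadjusted output index for that year
print(round(quality_adjusted_output, 2))  # ~103.86
```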

Measuring inputs

The input measures used are based on or taken from a mixture of expenditure, deflator and administrative data sources. They consist of compensation of employees, intermediate consumption and consumption of fixed capital of each service by central government and local government.

Central government expenditure data are sourced in current prices from HM Treasury’s public spending database – Online System for Central Accounting and Reporting (OSCAR) – which collects financial information from across the public sector. Annual estimates are derived from monthly profiles of spending for the current financial year and modified to meet national accounts requirements.

Most local government expenditure data are sourced from financial year returns by local authorities, apportioned across calendar years.

Expenditure data are subsequently adjusted for price changes (deflated) using a suitable deflator (price index); the purpose of this is to measure input volumes indirectly.
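A minimal sketch of this indirect volume measurement, with invented expenditure and deflator series:

```python
# Sketch of indirect volume measurement: current-price expenditure is deflated
# by a price index so that only quantity change remains. Figures are invented.

expenditure = {2015: 10.0, 2016: 10.6, 2017: 11.2}     # £ billion, current prices
deflator    = {2015: 100.0, 2016: 102.0, 2017: 104.5}  # price index, 2015 = 100

volume = {year: expenditure[year] / (deflator[year] / 100) for year in expenditure}
print({year: round(v, 2) for year, v in volume.items()})
# {2015: 10.0, 2016: 10.39, 2017: 10.72} -- volume grows more slowly than spending
```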

For a number of inputs – in particular most healthcare and education labour inputs – volume series are measured directly using administrative data sources (that is, full-time equivalent staff numbers from NHS staff resources).

Deflator or price indices

A suitable deflator (price index) – or composite deflator – is applied to each current price expenditure to estimate a volume series. The Atkinson Review (PDF, 1.08MB) recommends that deflators are applied separately for each factor and that the price indices should be specific for each service. Price indices for labour and procurements should be sufficiently disaggregated to allow for changes in the compositions of the inputs. Currently, deflators are taken from a range of different sources to best represent changes in prices for each service input. Where suitable data are unavailable, the gross domestic product (GDP) implied deflator (acting as a generic price index) is used instead.

These series are aggregated to form an overall estimate of the volume of inputs used to provide each of the public services identified in the total public services.

Further detail can be found in the Sources and Methods for Public Service Productivity Estimates: Total Public Services (PDF, 111.4KB).

Aggregating service area inputs and output

The expenditure shares of each public service component are calculated using a breakdown of general government final consumption expenditure (GGFCE) by Classification of the Functions of Government (COFOG). The ONS publishes this breakdown in ESA Table 11 and provides the data to Eurostat for the Excessive Deficit Procedure (EDP) in accordance with the Maastricht Treaty.

The EDP source is used for the following reasons:

  • consistent time series are available for all public service components

  • the data are published on a regular basis

  • a detailed breakdown is available, allowing us to separate, for example, adult social care and children’s social care from social protection

Aggregating output

Estimates of total public sector output are produced by weighting and then aggregating the volume of output in each service area. The weights used in this process are the service area COFOG expenditure weights, and are applied to form a chain-linked Laspeyres volume index of total public service output, calculated using the following equation:

$$O_t = O_{t-1} \times \sum_j \left( \frac{e_{j,t-1}}{\sum_k e_{k,t-1}} \times \frac{O_{j,t}}{O_{j,t-1}} \right)$$

Where:

  • O is a Laspeyres index of output, with O_j denoting the output volume series of service area j

  • e is expenditure

  • t and j index time and service areas respectively

  • O_{t=0} is set equal to 100

To identify the impact of including three sectors in which an "output-equals-inputs" convention is used, Public service productivity: total, UK, 2017 reports the headline results of a sensitivity test that excludes these sectors.
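A hedged Python sketch of this calculation, with invented data for two service areas, is shown below; fed with inputs volumes and the same expenditure weights, the same function yields the inputs index described in the next subsection:

```python
# Sketch of the chain-linked Laspeyres aggregation above: each year's growth in
# the total index applies the previous year's expenditure shares to each service
# area's volume growth. Data are invented; this is not the ONS production system.

def chained_laspeyres(volumes, expenditure):
    """volumes[j] and expenditure[j] are per-year lists for service area j."""
    years = len(next(iter(volumes.values())))
    index = [100.0]
    for t in range(1, years):
        total_e = sum(expenditure[j][t - 1] for j in volumes)
        growth = sum((expenditure[j][t - 1] / total_e) * (volumes[j][t] / volumes[j][t - 1])
                     for j in volumes)
        index.append(index[-1] * growth)
    return index

vols  = {"healthcare": [100, 103, 105], "education": [100, 101, 102]}
spend = {"healthcare": [150, 155, 160], "education": [90, 92, 95]}
print([round(x, 2) for x in chained_laspeyres(vols, spend)])  # [100.0, 102.25, 103.87]
```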

Aggregating inputs

Estimates of total public sector inputs are produced in a similar manner. This involves weighting and then aggregating the volume of inputs in each service area, using the same COFOG expenditure weights as in the calculation of aggregate output. This produces a chain-linked Laspeyres volume index of inputs for total public services, which is calculated using the following equation:

$$I_t = I_{t-1} \times \sum_j \left( \frac{e_{j,t-1}}{\sum_k e_{k,t-1}} \times \frac{I_{j,t}}{I_{j,t-1}} \right)$$

Where:

  • I is a Laspeyres index of inputs, with I_j denoting the inputs volume series of service area j

  • e is expenditure

  • t and j index time and service areas respectively

  • I_{t=0} is set equal to 100

Measuring productivity

Estimates of total public sector productivity are calculated using the aggregate output and inputs indices produced using the approach discussed previously.

Including the police, defence and other government services in the calculation of productivity will limit the growth in total public service productivity, pushing estimates of productivity growth towards zero. The extent to which they affect growth of total public service productivity is proportional to their share of total expenditure. During periods when productivity in other sectors is positive, the “output-equals-inputs” convention will reduce productivity growth. During periods when productivity in other sectors is negative, the inclusion of the police, defence and other sectors will tend to raise productivity growth estimates.
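A two-sector worked example (invented figures) makes the dampening effect concrete:

```python
# Invented two-sector illustration: a sector measured under the
# output-equals-inputs convention has zero productivity growth by
# construction, so it pulls the expenditure-weighted total towards zero.
measured_growth = 0.02   # productivity growth in directly measured sectors
share_measured  = 0.70   # their share of total expenditure
total_growth = share_measured * measured_growth + (1 - share_measured) * 0.0
print(f"{total_growth:.2%}")  # 1.40%, closer to zero than 2.00%
```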

How we analyse and interpret the data

The contributions of each service area to total growth in output, inputs and productivity are calculated, as are the levels of revisions. These different findings are shown in a series of charts for stakeholders within Office for National Statistics (ONS), and the reasons behind changes in the figures are identified as far as possible.

The data are then published for use by various external stakeholders, who are welcome to provide feedback, show us how they use the statistics and provide guidance on where we should focus future work in public service productivity.

How we quality assure and validate the data

A number of procedures are followed to quality assure the data. These processes are applied at all stages of the production process – at both granular and aggregate levels.

Internal quality assurance is carried out at all main stages of processing. This is followed by a larger-scale quality assurance involving stakeholders and key individuals. A new addition to this year's quality assurance was a parallel run of two aggregation systems (by service area and by component). This made it possible to check the accuracy of the data and the processing system simultaneously.

Visual presentations are created from the processed data. These presentations are used for an internal analysis to highlight significant data points or patterns that may warrant further investigation.

How we disseminate the data

The Public service productivity: total, UK releases are published free of charge on the ONS website. They are published once a year, within the Public services productivity section of the ONS website. Supporting documents are clearly linked and accessible to users. Additional data can be provided on request.

How we review and maintain the data processes

Further revisions to the estimates may be required, for example, in accordance with changes to source data. This follows the ONS revisions policy. A Guide to statistical revisions is also available.

7. Other information

Assessment of user needs and perceptions

(The processes for finding out about uses and users, and their views on the statistical products.)

Our productivity releases have a range of users, as given in Section 4.

We have developed two main ways of obtaining information on users and uses of our public service productivity estimates:

  1. User consultation meetings and regular healthcare and education functional board meetings. These meetings allow the exchange of information on data sources, development issues and methods changes that affect our public service productivity estimates.
  2. A user feedback questionnaire is circulated to those who make enquiries about public service productivity.
