Independent Statistics
In 2007, a Bill was passed to make official statistics in the UK independent of Government. The Statistics and Registration Service Act followed a long period of arm's-length oversight from a Statistics Commission, and a formal consultation on the proposals. Many bodies had campaigned for this change, both to prevent direct interference in statistical production and to establish statutory regulation supporting the use of accurate statistics by Government. In that sense, the perception of misuse was stronger than the evidence of direct abuses, but mistrust undermined public confidence in public policy decisions based on public statistics.
The legislation was prepared by HM Treasury, and so statistics of this kind were referred to as a 'public good', in contrast to a 'private good': valuable because available to all and not depleted by reuse. Although the Minister introduced the legislation in these terms, the debate in the Commons quickly changed the logic so that statistics were to be produced to "serve the public good", reflected in clause 7, which sets the objective. The legislation specifies two ways of doing this:
informing the public about social and economic matters
assisting in the development and evaluation of public policy
Although the public might be informed by headline statistics reported by mass and specialist media, something more than that is implied, and more detailed data had been published for some years already. However, decisions about what to collect data on had always been driven by the budget holders in government departments, so there was a long history of Government coming first.
Valuing Independence
In 2023, there is a review of how well the independent statistics organisation is working, not so much to reconsider its independence as to ask what influences should be in place (including the obvious financial accountability). So the questions being asked are about whether the public good is being realised in meeting the needs of users from all sectors, including internationally. And there is a more direct question about the independence of regulation, which ought to examine how this is managed, and whose values are being protected, if political machinations are not at the heart of choices made about statistical production.
If ministerial directives are not the driving force in statistical production, then something else should be, without setting aside democratic motivation and becoming purely technocratic. But the objectives of serving the public good, informing the public and assisting policy do not make immediately clear what ought to be done when there are conflicting views. One risk is that public policy is prioritised above public information, or that some sectors attract more priority than others, e.g. economics over the environment. But a more insidious issue is where things which are easily measured within established processes are prioritised.
Political Interference
Independence made a strong case for decisions about measurement, publication and interpretation being made by professional statisticians in government, not other civil servants or ministers. So early clashes with departments were about breaches of this kind: the text of press releases, the need for simultaneous publication of statistical releases, and selective cutting of the data. Other issues arose when the austerity agenda led to a number of statistical series being cut to save money, including a challenge to the 2011 census, which has led to a long programme of work on developing new sources of data.
After long-running disagreements about the trustworthiness of the Home Office to collect and compile statistics on crime impartially, that responsibility was transferred to the ONS, so the public might feel better informed. But Government users also had frustrations about whether the statistics were suitable for their purposes, with an independent review of economic statistics making extensive recommendations for improvement. Independence meant this sort of review was a credible solution to concerns, but the need to have it did speak to a gap in engagement with users: were independent statisticians now too distant from public policy?
Beyond Arm's Length
Instead of being accountable to Ministers for their production, statistics were now subject to independent assessment by the regulatory part of the UK Statistics Authority. Many of these assessments criticised the presentation and accessibility of the information, finding that releases reported the numbers without explaining them. It was also often observed that too little information was provided about other official sources of data on the same topic, but the biggest consistent finding was a need for more engagement with users: producers should know more about them and what was important to them.
Similarly, there was very little engagement with politicians now that Ministers had been removed from the picture. When the Act was passed, residual responsibility was transferred from HM Treasury to the Cabinet Office, meaning that parliamentary scrutiny was assigned to the corresponding committee. Although the Treasury select committee was only really interested in economic statistics, it did have substantial interest in technical issues and some of the underpinning ideas, like census counts. Statistical matters are very different from the other, more constitutional, administrative issues for which the Cabinet Office has responsibility.
Developing Public Policy
The National Data Strategy was published with a consultation in 2020 and an action plan followed, acknowledging that the work to produce analysis was fundamentally statistical. Indeed, as analysis, in the form of models of various kinds, came in for criticism, it was determined that standards for analytical functions should be developed independently. And alongside that, everyone agreed that evidence in support of public policy should be published transparently: even if the availability of the evidence does not necessarily make clear the extent of the support for a decision, requiring evaluation will elicit objectives.
But the objective stated that the evaluation of public policy should be assisted, and to some extent that requires Government to want to do that: to find out if its policies are effective. Although this evaluation has not been happening, critical reports have led to a new requirement in funding arrangements to plan and to publish evaluations. The infrastructure is in place to make it possible, but it will take some time to see what emerges, as evaluation requires well-specified objectives and data designed in anticipation. Nonetheless, the general pursuit of one aspect of the main objective, that statistics serve the public good, is apparent.
Power Struggle
Meanwhile, in the Cabinet Office, technical teams have developed the Government Digital Service, to make more services accessible to the public electronically, including open publication of data. While estimation of social and economic patterns in society was specified in the Act, management information on the activity of government, as opposed to evaluation, was not. So quite sensibly work was being done to make more information available, both on processes and assets, and to make some of this open, encouraging market innovation using government information to realise better services for the public.
For some datasets the challenge really is one of making things consistent and accessible, but in other cases the definitions, and the alignment with political priorities, are not immediate. And that is before we inevitably consider the quality of data and the difficulty of making inferences from several overlapping sets of data with different kinds of errors, as the sketch after this paragraph illustrates. So there were both statistical and digital dimensions to the data challenge, and although everyone now just calls this data science, it took some time to settle who covered which bits; and although it is not articulated, independence seems to be associated with ethics.
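That inference difficulty is easy to understate. As a purely illustrative sketch in Python (the figures and the inverse-variance weighting scheme are assumptions for exposition, not any official method), pooling two overlapping sources is straightforward only when both are unbiased:

```python
# Minimal sketch (hypothetical figures): pooling two unbiased estimates
# of the same population total by inverse-variance weighting.
# Real administrative sources often carry bias as well as sampling
# variance, in which case this simple pooling is no longer valid.

def pool(estimates, variances):
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, estimates)) / total
    return pooled, 1.0 / total  # pooled estimate and its variance

# A survey estimate (large variance) and an administrative count
# (smaller variance) of the same population total.
est, var = pool([1_020_000, 1_000_000], [4e8, 1e8])
print(f"pooled estimate {est:,.0f}, variance {var:,.0f}")
```

The pooled figure lands nearer the lower-variance source, which is the point of weighting; the unresolved difficulty in practice is characterising the error structure of each source well enough to justify any weights at all.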
Informing the Public
The legacy of ethical approaches to data collection requiring informed consent has meant many attempts to explain that data is being collected for particular purposes. But repeated exercises showed that analytical uses of data, as opposed to direct identification and service provision, were not familiar to most members of the public, especially older generations. Similar exercises to determine public understanding of core concepts in economics established only a hazy appreciation of the definition of economic growth, never mind exactly what was included in the official employment figures.
However, an unspoken problem with the notion of public good formulated in the Act was that it was otherwise undefined, and unless it was self-evident, no one really knew what it meant. Informing the public could not in itself necessarily serve the public good, especially as the information might be partial or otherwise limited in view. Research with the public fifteen years after the Act was passed did show greater understanding about data, both of its potential for better understanding society and of some drawbacks. The main public concern was that data should serve the public good or interest by mitigating structural inequalities in society.
Ethical Standards
Statisticians have been concerned about ethics for some time, not just in the digital era, and protection of respondent privacy in official statistics was enshrined in the UN principles in the 1990s. Similarly, the accurate reporting of scientific research and adequate statistical design were concerns beyond the fair treatment of research subjects and their informed consent. So there were professional standards to draw on for new work using data, and even now the belief is that established principles are suitable for the new era of data science. Digital professions are new, however, and have an individualist streak which militates against professionalisation.
In 2016 the Government Digital Service published a data science ethical framework, a leading example of its kind, but a precursor to many arid corporate statements. Around the same time, the National Statistician created his own ethics committee, to consider requests for access to data for statistical innovations and external research uses. One agreed criterion in that committee was that access to the data should be expected to bring a demonstrable public benefit; but exactly what a public benefit is, and how it is determined, has proved vexed enough that substantial guidance has been produced to support researchers.
Committee Accountability
Notwithstanding the wrangling with other parts of government about data, or international professional standards for statistics, there is a need for external stakeholder input without compromising independence. Advisory committees have been established on some contentious topics like price inflation, but they have shifted towards more strategic attention to economics, inclusion, population and other issues such as ethics. In response to criticism, the chairs of these are now convened as a more strategic user engagement committee, to extend accountability beyond the formal board of non-executive directors who have ultimate audit responsibility.
Advice from users or anyone else carries limited accountability, especially in terms of the regulation function, so parliamentary select committees have the supervening role. While specific inquiries may well be better informed for engaging with statisticians from government (and beyond), overarching oversight left to a board of non-executives is awkward. Yet Parliament lacks the will and the technical capability to put in the work to evaluate statistical production comprehensively, even though that capacity was needed during the pandemic. The formal public body review therefore needs to consider how to restore any checks and balances that are missing.
Codifying Practices
Despite the considerable success of the independent model over time, specific limitations, beyond the general user engagement point, are slated for investigation. One concerns how well different groups are served: international users, business, academia, government and citizens. But the more vexed question is whether independent regulation, housed in the same body responsible for a substantial proportion of independent production, is tenable. Although there is a formal Code of Practice, most of its enforcement is by way of informal discussions between regulators and producers, which is sometimes perceived as not pressing hard enough for improvements to statistics.
The Code sets out three pillars, trustworthiness, quality and value, which capture both the aspects of pressure on producers and the tensions between them: prioritising one may limit the others, but all are important to serving the public good. Each pillar speaks to the need for focused accountability, with trustworthiness best facilitated through parliament, supported by sufficient expertise, and users engaged under the auspices of value through various means, including roles on advisory committees and in reviews. But on quality, there can be a block where there are disagreements, and one model which seems relevant is a chief scientific advisor.
A CSA for UKSA
The role of a chief scientific advisor (CSA) in most government departments is to oversee research and analytical programmes, as well as to lead the science and engineering professionals. Statistics is a separate profession, and most of the other researchers sit within the broader analysis function, which is led by the National Statistician and supported by a team within UKSA. However, CSAs also facilitate a network of connections to external expertise, particularly in academia, about innovation in the department, and that is certainly something new developments in data science and statistical design rely on, to see beyond standard practices.
Another function of the CSA is to offer independent challenge, sensitive to the priorities and pressures of the department, and to facilitate resolution of technical and evidential limitations of policy. So it is not that expertise cannot exist within the statistics profession in government, but when its practices are challenged, there is considerable difficulty in finding reconciliation. And a CSA would normally be a time-limited appointment, bringing in new expertise every few years and so refreshing knowledge of the technical frontier, as well as academic experience of cutting-edge training programmes.
A CSA would also facilitate participation in cross-government networks where that has so far been lacking, such as those for research strategy and for science in emergency planning and response. Departments are expected to produce and renew statements of their areas of research interest (ARIs), which explain their approach and priorities for research. A CSA would coordinate and sign off the development of such a statement, but would also share it with other departments and understand synergies across government. Similarly, a CSA would feed into resilience planning for data needs, and be a representative on emergency advisory bodies.
Staying in Touch
The challenge of independence for statistics seems to be about how to maintain effective stakeholder relations without compromising the political separation from ministers. Separating completely from all interests can lead to isolation, where analytical insights are not sought until a crisis demonstrates their neglect. And it can also lead to competition, where other bodies duplicate functions; such bodies can also have political priorities in sight, even as they have only an incomplete view of technical issues.
Similarly, without formal methods for including stakeholders, some can be given priority attention without necessarily representing views on different issues. But that ought to be soluble with a formal stakeholder mapping (categorising on concepts like interest, impact and influence) and matching that against different activity, as sketched below. However, it has taken many years for UKSA to recognise the need for a stakeholder strategy, and to review whether existing, longstanding arrangements are comprehensive.
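As an illustration of what such a mapping might look like, here is a minimal sketch in Python; the scoring bands, the composite priority and the register entries are all hypothetical assumptions, not UKSA's actual framework:

```python
from dataclasses import dataclass

# Hypothetical scoring bands; a real strategy would define its own scales.
LEVELS = {"low": 1, "medium": 2, "high": 3}

@dataclass
class Stakeholder:
    name: str
    sector: str     # e.g. government, academia, business, citizens, international
    interest: str   # how closely they follow statistical outputs
    impact: str     # how much statistical decisions affect them
    influence: str  # how far they can shape priorities

    def priority(self) -> int:
        # Crude composite: sum the three scores to rank engagement effort.
        return sum(LEVELS[v] for v in (self.interest, self.impact, self.influence))

# Illustrative entries only, not a real stakeholder register.
register = [
    Stakeholder("HM Treasury", "government", "high", "high", "high"),
    Stakeholder("Local charity", "citizens", "medium", "high", "low"),
    Stakeholder("University research group", "academia", "high", "low", "medium"),
]

# Matching the mapping against activity: highest composite priority first,
# so engagement effort (committees, consultations, newsletters) can be
# allocated and reviewed rather than defaulting to whoever shouts loudest.
for s in sorted(register, key=lambda s: s.priority(), reverse=True):
    print(f"{s.name} ({s.sector}): priority {s.priority()}")
```

The value of even a toy structure like this is that the allocation of attention becomes explicit and reviewable, rather than implicit in longstanding relationships.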
Although the relations may now be suitable, the obvious way to review that would be to set out these responses and their basis in discussion with parliamentarians. However difficult the technical matters are, the management of complaints is a familiar issue to administrative departments, and the management of stakeholders also. Whether the structure of technical advisory functions is suitable may yet be something science and technology committees in parliament could add to their remit, though that seems less clear.
Working with Others
The lingering question is whether the overarching aims of providing data, that the public are informed and that government policy is more strongly connected to evidence, have been achieved. Fundamentally, some evidence is limited, some social patterns are complex, and the analytical tools used to resolve these problems are not easy to understand. So the public may remain poorly informed because the whole system faces a substantial challenge in communicating ideas for which most people have no educational background.
So, for policy to be developed and for societal progress to be described, more effort needs to be made in building capability in all parts, and in bridging the gap where things are unfamiliar. But this is a much bigger challenge than for one part of government, and many other bodies outside the civil service also aim to support the use and communication of evidence. Fostering that partnership, anchored to the values of the public good of using data, seems the big challenge of independence: facilitating shared ambition.
And this seems the ultimate risk of independence: that it makes it easy to exclude others, and to reserve judgements without discussing them in public as they are being developed. Independence from ministers ought not to be seen as independence from all, which is really tawdry isolation, but as a commitment to impartiality achieved by effective and inclusive engagement. On this point, the work on inclusive data which is slowly emerging in statistical plans is really very encouraging, and challenging to all.
