QCovid® case study: Lessons in commanding public confidence in models

Methods expert Jo Mulligan gives an insight into the lessons learned from reviewing the QCovid® risk calculator, and what it tells us about commanding public confidence in models

I re-joined OSR around 15 months ago in a newly created regulator role as a methods ‘expert’ (I struggle with the use of the word ‘expert’ – how could anyone be an expert in all statistical methods? Answers on a postcard please). Anyway, with my methods hat on, several colleagues and I have been testing the lessons that came out of our review of the statistical models designed to award grades in 2020. That review looked at the approach taken to developing statistical models to award grades in the absence of exams, which were cancelled because of the pandemic. Through this work, OSR established key factors that affected public confidence and identified these lessons as useful for those developing models and algorithms in the future.

Applying our lessons learnt to QCovid®

We wanted to see if the lessons learnt from our review of the grading models in 2020 could be applied in a different context, to a different sort of algorithm, and to test whether the framework stood up to scrutiny. We chose another model developed in response to the pandemic to carry out the testing: the QCovid® risk calculator.

In 2020, the Chief Medical Officer of England commissioned the development of a predictive risk model for COVID-19. A collaborative approach was taken, involving members from the Department of Health and Social Care (DHSC), NHS Digital, NHS England, the Office for National Statistics, Public Health England, the University of Oxford, researchers from other UK universities, NERVTAG, Oxford University Innovations, and the Winton Centre for Risk and Evidence Communication. The approach was agreed UK-wide and included academics from Wales, Scotland and Northern Ireland.

The original QCovid® model that we reviewed calculates an individual’s combined risk of catching COVID-19 and dying from it, taking account of various risk factors. It calculates both the absolute risk and the relative risk of catching and dying from COVID-19. The model also calculates the risk of catching COVID-19 and being hospitalised, but these results were not used in the Population Risk Assessment.

What is an absolute risk? This is the risk to an individual based on what happened to other people with the same risk factors who caught COVID-19 and died as a result.

What is a relative risk? This is the risk of COVID-19 to an individual compared to someone of the same age and sex but without the other risk factors.
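
To make the distinction concrete, here is a minimal illustrative sketch. This is not the QCovid® methodology itself – the cohort sizes, counts and function names below are invented purely for the example:

```python
# Illustrative sketch only: not the QCovid(R) model, just the idea behind
# the two risk measures described above. All numbers are made up.

def absolute_risk(deaths_in_matched_group: int, size_of_matched_group: int) -> float:
    """Risk for an individual, based on what happened to other people
    with the same risk factors: deaths as a share of that group."""
    return deaths_in_matched_group / size_of_matched_group

def relative_risk(risk_with_factors: float, risk_same_age_sex_without_factors: float) -> float:
    """How many times higher the risk is compared with someone of the
    same age and sex but without the other risk factors."""
    return risk_with_factors / risk_same_age_sex_without_factors

# Example with invented numbers:
risk_with = absolute_risk(deaths_in_matched_group=60, size_of_matched_group=100_000)
risk_baseline = absolute_risk(deaths_in_matched_group=12, size_of_matched_group=100_000)  # same age and sex, no other risk factors

print(f"Absolute risk: {risk_with:.4%}")                                 # 0.0600%
print(f"Relative risk: {relative_risk(risk_with, risk_baseline):.1f}x")  # 5.0x the baseline
```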

The academic team, led by the University of Oxford, developed the model using the health records of over eight million people. It identified certain factors, such as age, sex, BMI, ethnicity and existing medical conditions, that affected the risk of being hospitalised or dying from COVID-19. The team then tested the model to check its performance using the anonymised patient data of over two million other people. Once these risk factors had been identified, NHS Digital applied the model to the medical records of NHS patients in England, and those identified as being at an increased risk of dying from COVID-19 were added to the Shielded Patient List (SPL).

This approach was ground-breaking as there was no precedent for applying a model to patient records to identify individuals at risk on such a scale. Before the development of QCovid®, the SPL had been based on a nationally defined set of clinical conditions and local clinician additions. People added to the SPL through application of the QCovid® model were prioritised for vaccination and sent a detailed letter by DHSC advising them that they may want to shield.

The QCovid® model was peer-reviewed and externally validated by trusted statistical bodies such as the ONS, and the results and the QCovid® model code were published.

What we found from reviewing QCovid®

In testing the lessons from the review of the grading models in 2020, we found that some lessons were not as relevant for QCovid®. For example, the lesson about the need to be clear and transparent about how individuals could appeal any decisions the algorithm might have made automatically was less relevant in this review. This is because, although individuals were added to the SPL through the model, shielding was advisory only, and individuals (or GPs on their behalf) could remove themselves from the list. Finding lessons that were less relevant in a different context is to be expected, as every algorithm or model will differ in its development, application and outcomes.

As part of this review, we did identify one additional lesson. This concerned how often the underlying data should be refreshed to remain valid in the context of the algorithm’s use and appropriateness, especially if the algorithm is used at different points in time. This was not relevant for the review of grading models as they were only intended to be used once. However, in a different situation, such as the pandemic, where new information is being discovered all the time, this was an important lesson.

What do we plan next?

We found that the framework developed for the review of grading models proved to be a useful tool in helping to judge whether the QCovid® model was likely to command public confidence. It provided assurance about the use of the model and stood up well under scrutiny. Additionally, working on this review has helped us to understand more about QCovid® itself and the work behind it. QCovid® provides a great example that models and algorithms can command public confidence when the principles of Trustworthiness, Quality and Value (TQV) are considered and applied. In terms of how we will use these findings going forward, we have updated our algorithm review framework and this example will feed into the wider OSR work on Guidance for Models as it continues to be developed this year. 

I really hope this work will be useful when we come across other algorithms that have been used to produce statistics and also that when we incorporate it into our Guidance for Models that others will benefit more directly too. So, this concludes my first blog in my Methods role at OSR, and in fact, my first blog ever!

Guest blog: Improving reporting and reducing misuse of ethnicity statistics

Richard Laux, Deputy Director of Data and Analysis at the Equality Hub, discusses his team’s work in improving reporting and reducing the misuse of ethnicity statistics in our latest guest blog, as part of the 30th anniversary of the United Nations’ Fundamental Principles of Official Statistics.

In my role as the Head of Analysis for the Cabinet Office’s Equality Hub I am in the privileged position of leading the team that analyses disparities in outcomes between different ethnic groups in the UK.

The reasons for disparities between ethnic groups are complex, and include history, relative levels of deprivation, the different age profiles of some ethnic groups and many other factors. Despite the complexity of the issues, my team and I do all we can to prevent misuse of the data and help ensure that robust and clearly explained data are furthering the debate on race and ethnicity, which is an emotive topic for many people in this country.

My team’s responsibility for this is firmly rooted in UN Principle 4, on preventing the misuse of statistics. We do this in a number of ways that align with this principle.

One way we do this is by bringing several analyses together to paint a broad-based picture of a topic of interest. For example, when supporting the Minister of State for Equalities on her reports on progress to address COVID-19 health inequalities, we synthesised a large body of research describing the impact of the pandemic on ethnic minority groups. Much of this work involved my team reconciling and reporting on different sources and drawing robust conclusions from different analyses that didn’t always entirely agree.

A second way we try to prevent misuse of data is through the clear presentation of statistics, an example being Ethnicity facts and figures. This website was launched in October 2017 and since then it has been a vital resource to inform the debate about ethnicity in the UK. It gathers together government data about the different experiences of the UK’s ethnic groups and is built around well-established principles, standards and practices for working with data like the Code of Practice for Statistics.

We try to make the content on the website clear and meaningful for people who are not experts in statistics and data. It also contains detailed background information about how each item of data was collected and analysed to help those users with more interest or expertise in statistics draw appropriate conclusions.

The Commission on Race and Ethnic Disparities report recommended that the Race Disparity Unit (RDU) lead work to further improve both the understanding of ethnicity data and the responsible reporting of it (thereby helping to prevent its misuse). As part of this work, we will consult on how to improve the Ethnicity facts and figures website, including whether to increase the amount of analysis on the site to help users better understand disparities between ethnic groups. Some of this might be in a similar vein to Office for National Statistics (ONS) work during the pandemic on ethnic contrasts in deaths involving COVID-19. This modelling work showed that location, measures of disadvantage, occupation, living arrangements, pre-existing health conditions and vaccination status accounted for a large proportion of the excess rate of death involving COVID-19 in most ethnic minority groups.

Of course, there can be some difficulties with data that might lead to its misuse: datasets can vary greatly in size, consistency and quality. There are many different ways that ethnicity is classified in the datasets on Ethnicity facts and figures, and these classifications can differ widely depending on how and when the data was collected. For example, people might erroneously compare the outcomes for an ethnic group over time, thinking its definition has remained the same when in fact it has changed. This might happen if someone is looking at data for the Chinese, Asian or Other groups over a long time period, as the Chinese group was combined into the ‘Other’ ethnic group in the 2001 version of the aggregated ethnic groups, but into the Asian group in the 2011 version for England and Wales.

We also try to minimise misuse and misinterpretation by promoting the use of established concepts and methods, including information on the quality of ethnicity data. Our quality improvement plan and significant contribution to the ONS implementation plan in response to the Inclusive Data Taskforce set out our ambitions for improving the quality of ethnicity data across government. We will also be taking forward the Commission on Race and Ethnic Disparities’ recommendation that RDU should work with the ONS and the OSR to develop and publish a set of ethnicity data standards to improve the quality of reporting on ethnicity data. We will consult on these standards later this year.

Finally, we raise awareness and knowledge of ethnicity data issues through our ongoing series of published Methods and Quality Reports and blogs. For example, one of these reports described how the overall relative stop and search disparity between black people and white people in England and Wales can be misleading if geographical differences are not taken into account.

We have significant and ambitious programmes of analysis and data quality work outlined for the future. I would be grateful for any views on how we might further help our users in interpreting ethnicity data and preventing misuse.

Why migration statistics matter

Like many out there, I wake up every morning hoping that a way that protects life has been found to bring peace in Ukraine. As it says in one of my young daughter’s books, “The world’s already far too full of cuts and burns and bumps”[1].

Unfortunately, the conflict continues and people from Ukraine are fleeing. On 10 March 2022, the UN Refugee Agency (UNHCR) estimated that just over 2.3 million people had fled Ukraine since 24 February 2022. Across the UK there has been an outpouring of public sympathy for Ukrainian people forced to flee. The Government has introduced some new visa routes for Ukrainians, and debate continues among the public, in the media and in Parliament about whether the UK is doing enough to help.

As with any crisis, lots of decisions will need to be made. Decisions by individuals, by governments, and by agencies and organisations helping people to flee Ukraine and build new lives. Data and statistics are a key part of this decision-making process. For example, to inform local and national emergency response planning, the Office for National Statistics (ONS) has published new data about the number of Ukrainian nationals by local authority, and the Home Office has published the number of people applying for these new Ukrainian visa routes.

Here at the Office for Statistics Regulation (OSR) I lead on OSR’s work on migration. At the heart of my role is ensuring that data and statistics serve the public good. What does this mean in this context? It means ensuring that the best possible data are available to inform decision-making. And it also means ensuring data are publicly available to help the public understand the impact of decisions made, for example to evaluate the impact of new visa routes for Ukrainians and the impact this has on the make-up of society in the UK.

Earlier this month, we published the first in a series of reports looking at how the Office for National Statistics (ONS) is transforming the way it measures international migration. These statistics provide estimates of how many people are flowing into and out of the country from across the world and what the impact is on the number of migrants in the UK. The previous methods, based on the International Passenger Survey (IPS), had limitations, so it’s great to see new robust methods being developed in a credible way and in discussion with expert users. Our report welcomes the ambitious and collaborative approach being taken by the ONS to transform the way it measures international migration and recommends some ways ONS can build on this good work. I would like to thank all those who have engaged with us as part of this work for their openness and time. I look forward to continuing this work to ensure that the transformed migration statistics are trustworthy, high quality and support society’s needs for information.

More widely we also engage with other government bodies responsible for the production and publication of statistics and data on migration. For example, we regularly engage with the Home Office, which is responsible for publishing a wide range of statistics about migrants. We have recently written to the Home Office about the publication of data on migrants arriving in Small Boats. In our letter we welcomed the Department’s plans to regularly publish additional data about this topic.

At the OSR we want our work to have an impact. That means ensuring that data and statistics are there to inform decision-making across society, the public, private and third sectors and to help hold organisations to account. This is at the heart of what I do as a statistical regulator at the OSR and at the core of our migration work. I just hope in a small way this can have a positive impact on what is happening out there in the world today.


If you would like to feed into any of our work on migration statistics please get in touch with Siobhan Tuohy-Smith.


[1] Donaldson J & Scheffler A, 2010, Zog, Published in the UK by Alison Green Books.


Why I love evaluation

In our latest blog, Director General Ed Humpherson looks at the importance of government evaluation in light of the Evaluation Task Force ‘Policy that Works’ conference.

There are lots of good reasons to love evaluation. It provides evidence of what works; it supports good policy; it builds the skills and reputation of analysts; it helps scrutiny. 

But I love evaluation for another reason too. Evaluation, done well, is fundamental to changing the way government in the UK works. 

This week the Evaluation Task Force is running one of the biggest ever conferences on government evaluation. So it’s a great time to celebrate evaluation, and my reasons for loving it.  

In a recent speech, Bronwen Maddox, the Institute for Government’s Director, set out a compelling case for why government needs to change. She highlighted the challenge of officials rotating roles, moving on for career advancement so that they don’t build grounded expertise in their subject matter. She talked of a lack of accountability. And she said these things combined to create an air of “unreality” to the way Government approached key challenges.

In some ways this critique is exaggerated. There are lots of examples of officials with grounded expertise, taking responsibility for their decisions and implementation, and understanding the realities of the policy problems they are addressing. But there are enough cases where the critique is fair for us all to take it seriously. I saw it when I was looking at the value for money of Government programmes when I was at the National Audit Office. And I see it now in our work at the Office for Statistics Regulation. 

Evaluation is for me a great antidote to these problems. By committing to evaluation, as it is doing through the Evaluation Task Force, Government is investing in corporate memory. It lays the groundwork for a commitment to gathering and retaining evidence of what works, and, crucially, how and why it works. By committing to evaluation, Government is creating an intelligent form of accountability – not the theatre of blame and finger-pointing, but a clear-headed consideration of what has actually happened as policy is implemented. And by a relentless focus on real-world evidence, evaluation combats Maddox’s air of unreality.

It aligns with a lot of what we champion at the Office for Statistics Regulation. We have emphasised the importance of analytical leadership in Government – how Government and society benefit when the analytical function is not simply a passive provider of numbers, but a key partner in the work of Government. And this requires the analytical function to be full of leaders, who can articulate the benefits of analysis, and make it relevant and useful – not just to policy makers but to a wider public.

And we champion public confidence in data produced by Government. This public confidence is secured by focusing on trustworthiness, quality and value. 

Analytical leadership, and the triumvirate of trustworthiness, quality and value, are central to securing the benefits of evaluation. Analytical leadership matters because to do great evaluation requires clarity of vision, strong objectives, and long-term organisational commitment. 

And trustworthiness, quality and value are central to good evaluation: 

  • Trustworthiness is having confidence in the people and organisations that produce statistics and data – being open to learning what works and what doesn’t, and open about the use of all evaluation, giving advance notice about plans and sharing the findings. The commitments to transparency that the Evaluation Task Force is making are crucial in this regard.
  • Quality means data and methods that produce assured statistics – it ensures the evaluation question is well defined and the right data and robust methods are selected, tested, and explained.
  • Value supports society’s needs for information – it means evaluation can be impactful, addressing the right questions and ensuring the correct understanding of the evidence.

Of course, I don’t claim for a second that evaluation is the sole or perfect panacea for the challenges of government. That too would be an exaggeration. But I do think it has tremendous potential to help shift the way government works. Led in the right way, and adhering to the principles of TQV, evaluation can make a big difference to the way government operates. 

That is why I applaud the energy and focus of the Evaluation Task Force, which has galvanised interest and attention. It’s why I like the Evaluation Task Force’s website and why I celebrate this week’s conference. 

And it is why I love evaluation. 


Insight: seeing the bigger statistical picture

Insight sounds self-explanatory…

Many of us have a clear idea of what we mean when we refer to insight – for example, having a good understanding of a potentially complicated issue. However, there is no clear, one-size-fits-all approach to gathering organisational insight. What insight looks like for your organisation can vary wildly depending on factors like your purpose, size and target audience.

Insight is invaluable for any organisation – identifying trends across your work programme, encouraging big-picture thinking from your team and even helping shape your strategy. An analogy for this could be a dot-to-dot picture (bear with me). You know there will be a clear picture after connecting the dots but can’t quite see what it is yet. But these hypothetical dots aren’t numbered so there are multiple ways to connect them which you must test in order to uncover the final masterpiece. This flexibility is exciting and gives lots of room for innovation but can also seem daunting.

How is OSR approaching insight?

This is where my new role as Insight and Evaluation Manager comes in. When I joined OSR four months ago, I was impressed at the quality and breadth of our work programme for such a small team. I saw immediately that team members were always encouraged to get involved in work outside their area where possible. There was a clear focus on the importance of highlighting cross-cutting themes and building insight across the UK statistical system, including the state of the statistical system report, data gaps and promoting greater coherence and transparency in the communication of statistics. By bringing these outputs together we can build something bigger and interconnected – how we do this is what makes up a large part of my role.

Before I joined OSR, as part of our insight development we had conversations with other organisations to learn about how they define and gather insight. Thank you to those who took the time to share their experiences. Now we are using what we learned through that work to shape our insight programme around the following themes:

Information

Our biggest aim for this year is to develop a more sophisticated use of evidence. We will identify what data we need to gather at different project stages and capture this in a way that can be easily restructured, analysed and visualised to show key themes.

Audience

Once we have strengthened our internal capability, we will be in a stronger position to build quickly and share insights across the statistics system. The greatest value lies in this next step – providing the big picture to help producers learn from each other. We must communicate effectively with our audience through a variety of channels, share good practice and areas of concern, and listen to their needs.

Embedding

Both themes above depend on an embedded approach to insight across OSR, both in our mentality and processes. Because we are a small team, it’s down to everyone to play their part. If we all see insight as inherent in our work, not a separate afterthought, it is much easier to build a self-sustaining system to generate insight. This is the ultimate goal!


We may be small, but we can provide powerful insights which impact across the whole UK statistical system and beyond. I’m excited about the year ahead and will see you in the summer for the state of the statistical system report 2021/22.


If you would like to speak further, feel free to contact Grace Pitkethly.

The new knife crime methodology making police analysts’ jobs easier

In our latest blog, OSR Regulator and former Police Information Analyst Ben Kendall Ward discusses how the Home Office’s National Data Quality Improvement Service (NDQIS) is improving the way police statistics are recorded.

Prior to joining OSR I worked for the police as an Information Analyst for seven years. One of my duties was to collate knife crime data and send it to the Home Office on a quarterly basis. Police officers would input crimes onto a system, flagging whether a knife or sharp instrument was involved in some form, but the quality was poor and officers often forgot to add these markers to the crime records.

As a result, I would often manually read through all the relevant crimes for each financial quarter to determine if a knife or sharp object had been used and, if there was a threat, how likely the threat was.  Reading through hundreds of records and marking them before collating them to send to the Home Office was a laborious process.  

I started at OSR seven months ago, and one of my first projects was working on a review of the Office for National Statistics (ONS) and Home Office’s knife-enabled crime statistics for England and Wales.  

Realising that the quality of police recorded data on knife crime and other so-called ‘flagged’ offences was poor, the Home Office set up the National Data Quality Improvement Service (NDQIS), which looked at using computer-aided classification to tackle this issue, starting with knife crime as a guinea pig. Unfortunately, this tool wasn’t fully developed until shortly before I left the police. I would have loved to have had it available when I started, since it would have made my job so much easier.

The NDQIS tool works by first checking whether the offence is one which the Home Office considers when looking at knife-enabled crime (for example, if the crime was burglary, the tool wouldn’t check any further, since it’s not an offence the Home Office considers for knife crime). The tool then scans the details of the crime, including the free-text fields that record what occurred, as written up by the call handler or police officer, looking for key terms like “stabbed the victim” or “threatened with a knife”.

Once the NDQIS tool has done this, it assigns each crime to one of three categories (a simplified sketch of this logic follows the list):

  • High confidence: there is a high degree of certainty that a knife/sharp object was used in the crime and the record doesn’t need further review.
  • Low confidence: the tool is uncertain whether a knife/sharp object was used in the crime and the record therefore requires manual review.
  • Rejected: the tool has determined that a knife/sharp object was not used in the crime, or that it is a possession-only offence, which are excluded.
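
As a rough sketch of this kind of rules-based triage – and only a sketch: the offence list, key phrases and category names below are invented for illustration and are not the actual NDQIS implementation – the logic might look something like this:

```python
# Illustrative sketch of keyword-based triage of crime records.
# Not the actual NDQIS tool: the offence list, phrases and thresholds are
# invented, based only on the description above.

KNIFE_ENABLED_OFFENCES = {"assault", "robbery", "threats to kill"}   # offences in scope for knife-enabled crime
STRONG_PHRASES = ("stabbed the victim", "threatened with a knife")   # high-confidence indicators
WEAK_PHRASES = ("knife", "blade", "sharp object")                    # weaker indicators needing review
POSSESSION_PHRASES = ("possession of a bladed article",)             # possession-only offences are excluded

def classify_record(offence_type: str, free_text: str) -> str:
    """Return 'high confidence', 'low confidence' or 'rejected' for a crime record."""
    text = free_text.lower()
    # Only consider offence types in scope for knife-enabled crime
    if offence_type.lower() not in KNIFE_ENABLED_OFFENCES:
        return "rejected"
    # Possession-only offences are excluded from knife-enabled crime
    if any(p in text for p in POSSESSION_PHRASES):
        return "rejected"
    if any(p in text for p in STRONG_PHRASES):
        return "high confidence"   # no further review needed
    if any(p in text for p in WEAK_PHRASES):
        return "low confidence"    # needs manual review
    return "rejected"

print(classify_record("robbery", "Suspect stabbed the victim in the arm."))   # high confidence
print(classify_record("assault", "A blade may have been present."))           # low confidence
print(classify_record("burglary", "Entry gained through rear window."))       # rejected (out of scope)
```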

The Impact of NDQIS

This is a really useful development that reduces administration and burden for police officers, removes the need for Information Analysts to manually input data, and ensures the statistics are as accurate and as valuable as possible.

We think that other organisations can really learn a lot from this work, so we’ve asked ONS and Home Office to publish a development plan to share publicly how and when the NDQIS tool will be rolled out more widely.  

Knife crime is a serious crime that often affects the young and disadvantaged and often has tragic consequences. It’s a high priority policy area for the UK Government, featuring prominently in the Government’s Beating crime plan. To fully understand and tackle the nature of the problem posed by knife crime, data collected by police forces must be of high quality and accurately reflect trends in this type of crime to inform public policy. 

TQV: support for Analytical leadership!

Statistics Regulator Oliver Fox-Tatum explores what we mean by effective analytical leadership and how our TQV (Trustworthiness, Quality and Value) framework supports this.

So… what do we mean by Analytical leadership? 

Effective analytical leadership ensures that the right data and analyses are available, and that analysts are skilled and resourced to answer society’s most important questions.  

Crucially, it ensures that data and analyses are used at the right time to inform key decisions, and that they are communicated clearly and transparently. When done well, this supports confidence in the analyses themselves, but also in the decisions based on these analyses.  

We began exploring this concept in our review of statistical leadership last year and see the review’s findings as relevant for all government analysis produced across the UK. 

But I’m not an analyst… is this relevant for me?  

Yes! Everyone in government has an important role in championing the use of analytical evidence and being confident in engaging with analytical experts.   

Whether as a senior minister communicating analysis publicly; an official drawing on analytical evidence for a policy decision, or for an external communication; or an analyst showing leadership in the provision of new data to answer the most important question of the day, strong analytical leadership needs to be demonstrated at all levels and parts of government.  

We all have a stake in ensuring that the data and analyses produced and used across the UK can realise their full potential in supporting: 

  • vital operational and policy decision making 
  • confidence in the analyses and in the decisions based on them 
  • citizens’ and society’s broader information needs – the wider public good. 

How does the TQV (Trustworthiness, Quality and Value) framework support this? 

The usefulness of TQV lies in its simplicity as a framework – thinking about Trustworthiness, Quality and Value as a set of prompts challenges individuals, teams and organisations about how they approach their work and achieve their goals.

Stopping to reflect can be a powerful means to think again, to see what works and what else can be done. It is helpful for everyone using data and analysis in their work – not just for analysts, but non-analysts too – and is a particularly valuable tool as a culture of TQV evolves. 

When considered together (and they always should be!) the TQV pillars form a pyramid of strength that ensures that:  

  • the Value (V) of analysis for decision making and the information needs of citizens and wider society is maximised;  
  • the Quality (Q) of the data and methods used is assured;  
  • and the Trustworthiness (T) of both the data and decisions based on them, is supported.  

So, when a government minister invests in the analytical capability of a department by providing additional resources for training, or new IT infrastructure to support automation … they are thinking T and Q. And when they choose to publish management information around a key policy area of wider public interest for transparency, that is thinking T and V! 

Or when a press officer checks the accuracy of the text alongside a chart to be used in a Tweet with an analytical colleague before posting – that is thinking Q. Or if a policy colleague reaches out to an analytical team when developing a new performance measure for a key policy – that is thinking Q and V!  

And not least, when an analyst pushes to attend a key policy meeting to develop their skills and knowledge in an emerging policy area – they are thinking V. Or when their permanent secretary asks them to provide an account of the latest published evidence at a press briefing because they value their objectivity, professionalism, expertise and insight as an organisational asset – that is thinking TQV!

It’s true to say that T, Q and V are equally important and shouldn’t be considered in isolation, as each supports and reinforces the others.

But crucially, if we all take time to stop and think TQV when working with data and analysis, we can ensure we are truly supporting confidence in those analyses and the range of important decisions that they inform, as well as ensuring that they serve the public good.  


Guest Blog: Challenges and Opportunities for Health and Care Statistics

The COVID-19 pandemic has thrust health and social care statistics into the headlines. Never has there been more scrutiny or spotlight on health statistics – they are widely quoted, in Number 10 briefings, in the news, across social media, on Have I Got News for You… and everyone (it seems) is an expert. Nearly two years on from the first news reports of the ‘coronavirus’, the public appetite for data and statistics has continued to grow. This has created new challenges for health and care statistics producers, as well as highlighting existing areas for improvement, as set out in the Office for Statistics Regulation’s recent COVID-19 lessons learned report. The report noted the remarkable work of statistics producers, working quickly and collaboratively to overcome new challenges.

I joined the Department of Health and Social Care early in the pandemic, first leading the Test & Trace analytical function and, for the last year, as the department’s Head of Profession for Statistics. I have experienced these challenges first-hand and have been impressed throughout by the professionalism and commitment of colleagues across the health sector to produce high quality and trustworthy statistics and analysis.

One of the recommendations of the OSR report (lesson 7) calls for us to build on the statistical achievements of the last two years and ensure stronger analytical leadership and coordination of health and social care statistics. I reflected at the beginning of the pandemic that it was hard to achieve coherence, given the number of organisations in England working rapidly to publish new statistics. We have made substantial improvements as the pandemic has gone on – the COVID-19 dashboard being one of many notable successes – but I want to go further and apply this to other areas of health and social care.

To address this, I have convened a new Health Statistics Leadership Forum alongside statistical leaders in the Office for Health Improvement and Disparities, NHS England/Improvement, NHS Digital, NHS Business Services Authority, Office for National Statistics, and the newly formed UK Health Security Agency. The forum is chaired by the Department of Health and Social Care in its overarching role and brings together Heads of Profession for statistics and lead statisticians from across the health statistics system in England.

We will use this monthly forum to ensure collaboration across all our statistical work. And we have a broader and more ambitious aim to build a culture (that transcends the complex organisational landscape) which values analytical insights, supports innovation and ensures there is a clear, joined up narrative for health statistics in the public domain.

We have set five immediate priorities:

  1. Coherence in delivery of advice and statistics
    We will work collaboratively to ensure that our statistical portfolios are aligned and that we provide complementary statistical products – working in a joined-up way across the system.
  2. Shared understanding of priorities
    Ensuring health statistics address the highest priority areas, are relevant and useful for public debate and provide clear insight to inform decision making at the highest level.
  3. Consistent approach to transparency
    We will ensure alignment of both our internal and external reporting so that the right data are quoted in statements and policy documents – clearly sourced and publicly available in line with the Code of Practice for Statistics.
  4. Shared methodologies and definitions
    We will have clear principles for coherence of methodologies and definitions, an expectation of common definitions where it makes sense to do so, and an escalation route via the forum for disagreements.
  5. Build a joined-up statistics community
    We will build a joined-up health statistics community through sharing our guidance on good practice, our approaches to induction, a shared seminar programme and annual town hall event, joint recruitment, managed moves, and secondments or loans.

Government statisticians have achieved so much as a community to provide statistics and analysis in really challenging times over the last two years, but there are lessons to learn and things we can do better.  I am confident that our Leadership Forum will ensure that we maintain this collaborative approach to delivery, and bring health statistical leaders together to make that happen.

What do we mean by ‘statistics that serve the public good’?

‘The public good’ is a phrase which you might not have come across before. When I first joined the Office for Statistics Regulation (OSR) nearly two years ago, I had no real idea what it meant, but I knew that it was something very important to OSR; something which was mentioned in nearly every meeting I went to. What I know now is that OSR’s vision – that statistics should serve the public good – is fundamental to all that my colleagues and I do.  

So what is serving the public good? It means that statistics should be produced in a trustworthy way, be of high quality, and provide value by answering people’s questions: providing accountability, helping people make choices, and informing policy. As statistics are part of the lifeblood of democratic debate, they should serve a very wide range of users. When they meet the needs of these users, they serve the public good. 

But needs can change quickly, and statistics can be used in ways that also do not serve the public good – precise numbers can be used to give a misleading picture of what the statistics actually say, too much weight can be put on statistics, or they can be described incorrectly.   

At OSR it is our job to support confidence in statistics. Having a really strong understanding of what it means for statistics to serve the public good is crucial to this.   

Over the last 22 months, I’ve been leading a research programme aimed at doing just this, as existing research on understanding public good is relatively sparse.  

My first step in exploring public good was to publish a literature review which explores the current evidence on public good. We have also analysed how researchers think their research will provide public benefits and we are currently running studies to explore what members of the public and our own team think about the public good.  

A key theme coming from this research is the importance of being able to communicate statistics in a way that is understandable to everyone who is interested and needs to be informed by them. This is not an easy thing to do. Statistics potentially have many different audiences – some people may confidently work with statistics, whereas others may not have much experience of statistics, but want to be able to understand them to help make decisions about their lives.  

Differences in how people understand statistics are often attributed to an individual’s literacy or numeracy abilities – we often hear the term “statistical literacy” when this type of understanding is being talked about.    

We think it is wrong though to think of statistical literacy purely in terms of a deficit in knowledge. Rather, we think that producers of statistics need to understand what people find easy to grasp and what they find counterintuitive and think, “How do we work with that to make sure that the real message of the statistics lands properly?” It is our role in OSR to guide producers to do this.  

To help us with this, we will be kicking off some new projects this year aimed at increasing our understanding of how different segments of the public regard statistics. 

The public good may seem like a mysterious concept but, by working to build the evidence sitting behind the phrase ‘public good’ and understand how the statistical system needs to respond to meet it, we are hoping to make it much less so.  

We hope that our research work, which we are undertaking in collaboration with others, will not only highlight the role that statistics play in the choices that people make and the risks to the public value of statistics in a changing environment, but also that publicly researching this area will stimulate others with the capacity and expertise to work in this area. 

It’s beginning to look a lot like Census…

It may be too early to talk about Christmas for some – not for me. I have decided on the design theme, got my advent calendar, am close to finalising the Christmas menu and have started ordering gifts. I am all over it! And getting more excited by the day. 

Christmas means different things to different people, but it is certainly my favourite census-related celebration. Weren’t Mary and Joseph off to Bethlehem to register as part of a census? Timely then, as part of my (possibly a bit too keen) Christmas preparations, I am pleased to say we have published our phase 2 assessment reports for the 2021 Census in England and Wales and the 2021 Census in Northern Ireland.

If you read my previous blog, you’ll know I have been leading OSR’s work on the Censuses in the UK, and today is a bit of a milestone moment for me as part of this assessment. The publication of these reports is the culmination of a range of work which kicked off three or four years ago, and it has been so interesting and rewarding to speak with users and stakeholders of Census data and statistics throughout – it’s been an absolute gift!

Our reports recognise the efforts of the Office for National Statistics and the Northern Ireland Statistics and Research Agency in delivering Census live operations and their continuing work to produce high quality, extremely valuable Census data and statistics. I wanted to take the opportunity to specifically thank all of the individuals and teams who have taken the time to engage with me and my colleagues as part of this assessment process. You have been open with us, kept us up to date with developments and have taken on board our feedback throughout the process. All the while getting on with the more important job at hand, working on Census itself. 

As we get closer to the festive season, I wish you all a well-deserved break and raise a glass of sparkling elderflower pressé to one and all.  

Related links:

Assessment of compliance with the Code of Practice for Statistics – 2021 Census in England and Wales

Assessment of compliance with the Code of Practice for Statistics – 2021 Census in Northern Ireland