Theme: Law & justice | Content Type: Digested Read

How to Build Progressive Public Services with Data Science and Artificial Intelligence

Helen Margetts, Cosmina Dorobantu and Jonathan Bright


10 min read

There is a widespread assumption that the new Labour government will want to improve public services. By the time of the election, over three-quarters of Britons considered that most public services, including the NHS, the schools and the courts, were ‘in a bad state’ and had ‘got worse’ over the past five years. Labour’s 2024 manifesto contained a modest pledge to ‘deliver data-driven public services, whilst maintaining strong safeguards and ensuring all of the public benefit’. This article adds substance to that statement.

How technology has not transformed the public sector in the past and why it might be different this time

Why would the latest generation of digital technologies be any different from the disruption, organisational failure and digital insecurity associated with previous systems? Legacy computer systems, built from the 1970s onwards, were modelled on paper filing systems. Data-intensive technologies such as AI, by contrast, can empower government to move towards a model in which large-scale transactional data is put to the service of improving public services and policy design.

Data-intensive technologies have tremendous potential. They can analyse large quantities of data to understand complex economic and societal trends, such as benefit take-up. They can detect things we want to avoid, from cancer to online harms; predict adverse events, from extreme weather to financial crises; and simulate in great detail the effects of policy interventions.

Interactive large language models (LLMs) such as ChatGPT have major implications for the public sector. Around one in five public sector professionals in the emergency services, schools, hospitals and social care in the UK already use generative AI in their work, and these figures are likely to increase rapidly.

Can AI improve public sector productivity?

Productivity is the main benefit that we expect from AI. Major productivity gains from past waves of technology have proved hard to demonstrate. However, AI can increase ‘services productivity’. A great deal of government work involves decision making that confers rights on individuals, such as granting the right to residency, issuing a passport or establishing entitlement to a benefit. These micro-transactions are potentially automatable. Research by the Alan Turing Institute suggests that the UK central government completes 1 billion such micro-transactions each year. Around 120 million of these are complex, repetitive tasks that are highly automatable with AI. Automating them would speed up processes and make public service workers’ roles more rewarding.

Another way to think about public sector productivity is in terms of ‘policy productivity’. For the 2024–25 financial year, the UK government is forecast to spend over £1.2 trillion. Data science techniques can improve how policy decisions are made and how budget lines are allocated. They can capture the interdependencies between policy domains, such as the relationships between schooling outcomes and children's health services. Research on Colombia shows that optimising the allocation of only 10 per cent of public funds using data science techniques can improve the government's performance against its policy targets by up to 50 per cent. Indeed, the Tony Blair Institute (TBI) and the AI company report that ‘the UK stands to gain £40 billion per year in public-sector productivity improvements by embracing AI, amounting to £200 billion over a five-year forecast’.

The equity challenge of AI in public services

However, inequity is one of the most feared harms. Six years ago, Political Quarterly published a blog post warning that automation risks bringing about a new ‘paradox of plenty’, where ‘society is likely to be far richer overall … but for many individuals and communities, technological change could reinforce inequalities of power and reward as the benefits are narrowly shared.’ In the US, replacing workers with technology, without mitigating measures geared at training and skills development, explains 50–70 per cent of the increase in wage inequality between 1980 and 2016. AI is very likely to reinforce that trend.

Inequity caused by lack of digital access is a concern. The pandemic period revealed that over a million children and their families did not have adequate access to a device or connectivity at home.

AI has also been shown to produce biased outcomes. Humans’ conscious or unconscious biases and discriminatory practices can, and do, creep into AI systems and the data used to train them.

A ‘pro-human’ path to AI-powered public services

Growing inequity is not an inevitable outcome of AI advancement. We can chart what Acemoglu calls a ‘pro-human’ direction for AI, with a ruthless focus on tackling inequities.

First, for ‘services productivity’, a pro-human approach would mean thinking about AI as a way of augmenting public sector work so that services are ‘better’, rather than automating entire decision-making processes and replacing humans altogether. Indeed, public services are in no state for job cuts. AI should focus on tasks that humans find unfulfilling and time-consuming.

Second, a pro-human approach to ‘policy productivity’ would mean developing AI and data science technologies that do ‘what humans cannot do well: addressing interrelated problems that require the harmonization of data, knowledge and expertise from different domains.’

Third, government would need to re-establish a central effort on digital inclusion. Research undertaken for the Good Things Foundation found that investing £1.4 billion in digital skills and inclusion could bring economic benefits adding up to £13.7 billion.

Fourth, a pro-human approach to AI in public services would involve tackling head-on the issues of fairness and bias in AI technologies and in the data sets needed to train them, through an ethical framework of the kind laid out in the UK government's official guidance on the ethical use of AI in the public sector.

Finally, a pro-human approach to public sector AI and a focus on equity could have a multiplying effect. Feedback from citizens could be systematically solicited, collected and analysed centrally by LLMs, which could ask citizens simple questions as they interact with services, for example: what is your biggest problem in accessing GP services?

How to achieve progressive public services

Achieving this vision would require a break with the past. National Audit Office (NAO) reports in 2021 and 2023 identified capability building as a core challenge for digital government, pointed to ‘legacy systems’ as another, and noted that ‘most digital change decisions in government are made by generalist leaders who lack the expertise to comprehend fully and tackle digital challenges.’

Core capacity needs to be built up from a public sector perspective. There will always be a role for the private sector, but if the public sector is to reap the benefits of AI, developing in-house expertise is a must.

Our first recommendation for building AI capacity is to capitalise on the centralisation of digital, data science and AI initiatives in the Department for Science, Innovation and Technology (DSIT), drawing in the Central Digital and Data Office (CDDO), the Government Digital Service (GDS) and the Incubator for Artificial Intelligence (i.AI). The new Secretary of State for Science, Innovation and Technology, Peter Kyle, has already promised to create a ‘digital centre of government’ that includes these agencies. This Digital Government Centre, as we label it here pending its naming, should act as a centre of expertise, headed by a civil servant at director-general level, with a dedicated DSIT minister for public services innovation.

Second, individual departments will also need in-house AI expertise and capabilities. We recommend a digital appointment at the highest level, such as director-general, for every large department and agency.

Third, citizens also need digital capability. There should also be, within the Digital Government Centre, a dedicated unit to ‘own’ the issue of digital access and connectivity. Digital access should be considered a ‘critical public service’.

Fourth, although LLMs are being used widely by public sector professionals, worryingly, only a minority of those surveyed (32 per cent) felt there was clear guidance on generative AI usage. The Digital Government Centre therefore needs to work across agencies to roll out, and support departments in applying, the ‘Generative AI Framework for HMG’ already published by CDDO.

Finally, research and development in public sector data-intensive digital technology is needed, so the new government should bring in new partners to carry out research and offer independent, neutral advice.

Progressive public services will not emerge overnight. Because of AI's real risk of deepening inequality, improving government with AI needs a holistic treatment of, and a path forward for tackling, AI's equity challenge if public services are to warrant the ‘progressive’ label.

Read the full article on Wiley


Latest Journal Issue

Volume 95, Issue 3

This issue features a collection, 'Policing the Permacrisis', edited by Ben Bradford, Jon Jackson and Emmeline Taylor, in which academic experts, senior police officers, both current and former, and commentators offer a diverse set of ideas for changing policing for the better. Other articles include 'Back to the Future? Rishi Sunak's Industrial Strategy' by James Silverwood and Richard Woodward, and 'The Case for a Scottish Clarity Act' by Steph Coulter. There are a host of book reviews, including reviews of 'The Inequality of Wealth' by Liam Byrne and 'The Eye of the Master: A Social History of Artificial Intelligence' by Matteo Pasquinelli.

Find out more about the latest issue of the journal