Theme: Public Policy | Content Type: Interview

"The Risk of Inequity is Always There, Wherever You've Got AI": Interview with Helen Margetts

Anya Pearson


12 min read

Anya Pearson interviews Helen Margetts OBE, Professor of Society and the Internet at the Oxford Internet Institute, University of Oxford. She is also Visiting Professor and Senior Adviser at the LSE Data Science Institute, and until June 2025 was Director of the Public Policy Programme at the Alan Turing Institute for Data Science and Artificial Intelligence.

How did you become interested in the relationship between digital technology and government, politics and public policy?

My first degree was in maths, and before I became an academic, I had a brief career as a computer scientist in the private sector. But I was very interested in politics, so I decided to do a master's degree at The London School of Economics. I was studying political ideas, public policy and public administration, and I was very struck by the fact that nobody ever mentioned computers. I assumed there must be large-scale computer systems in government, but nobody ever talked about them. Really, I just wanted to find out where the systems were, how they were shaping government, how they might improve public administration and policymaking. It became a lifelong curiosity.

What was your most important takeaway from your career as a computer programmer and systems analyst?

That large-scale bureaucracies are supposed to be rational, but they quite often do irrational things. I used to observe this in the large private-sector organisations in which I worked, but then I would go and tell someone about it, and they would say: ‘Yes, Helen, very interesting, but when will the new accounting system be ready?’

You recently closed a chapter as Director of the Public Policy Programme at The Alan Turing Institute. What do you think Turing would have made of this age of AI?

Well, Alan Turing was a mathematician and a revolutionary thinker about digital technology. So I think he would be pleased that the technologies he gave his life to are evoking so much interest in terms of the good they could do for the world and how they could increase societal well-being. But I think he might also think that there was a lot of hype about AI. He would find some of the wilder discussions of AI utopia and dystopia completely muddled.

One of the things Turing is most famous for is developing a technology to tackle a very specific problem, which was to crack the Enigma code. I think he'd probably like to see more use of AI to tackle explicit problems. Maybe he would have been fascinated by how we can use AI to reduce inequity; that might have been an interesting mathematical problem for him.

Is social media an easy scapegoat for societal problems, for example, in the TV series Adolescence?

In the commentary about Adolescence, social media gets the blame for everything, but there was a lot else going on: a rubbish, out-of-control school; a slightly violent father; an inherited temper. There is a danger that we turn social media into a social worker. My mother was a social worker, and when I was little, I asked her: ‘What do you do?’ The way she explained it to me was: ‘In any bad situation, for instance where a child has been injured, society always looks for somebody to blame. That's what a social worker is for’. And I feel social media has become the thing we blame, rather than people or failing institutions. It’s worrying, because then we tend to feel that nothing can be done, whereas change is always possible. But Adolescence was a really great piece of television, and the programme itself didn’t put all the blame on social media; it was the commentary that frustrated me.

How do you think Labour is doing in terms of its 2024 manifesto to ‘deliver data-driven public services, whilst maintaining strong safeguards and ensuring all of the public benefit’?

Well, they have recognised the need for central expertise to drive good digital public services across government. They have created a strong digital centre of government. Although it keeps the name of the former Government Digital Service, it is based in the Department for Science, Innovation and Technology and has more wide-ranging responsibilities.

I think it's difficult, because Labour came to power amid very wide dissatisfaction with public services. And if you're not getting the winter fuel allowance any more, or you have had a benefit cut, you probably won’t notice how digital the service delivering it is. It's a challenge to change people's perception that public services are broken.

I do think they've taken a good hard look at the state of digital government and the opportunities of AI. But there is a challenge in thinking through how this can happen and how they can drive adoption across the public sector. They must also think about where they need to build expertise, across the public sector and across regulators. The focus is always on the productivity gains, but what about what you need to invest?

How should we address the equity challenge of AI in public services, particularly when the public sector is such a big employer? Surely if AI does jobs that are unfulfilling and time-consuming, that will lead to job losses?

Of course it can lead to job losses, but the effect is usually much more long-term. The main way that AI would make the public sector more productive is by automating the bits of people's jobs that they don't want to do. Doctors and nurses are not trained to be administrators, and it's not necessarily what they're best at, but a high percentage of their time is spent on paperwork. There are so many people working in the NHS that freeing up even a small proportion of their time would be valuable.

Having said that, you're absolutely right that there are equity concerns. Lower-income, lower-skilled groups are less likely to be trained to do something else or to take on additional things. But it's much more likely that bits of people's jobs will disappear, rather than the jobs themselves.

Large language models introduce other equity questions. People who are digitally skilled, who have the right connectivity, and who have the income to pay for the best versions of these models will tend to see AI as an opportunity, while those in low-income groups or with low levels of digital access are more likely to view all applications of AI negatively. So we need to bear those people in mind when we build AI into public services. The risk of inequity is always there, wherever you've got AI.

I'm really interested in how politics should balance privacy and other principles against progress when it comes to technological advances, and in what kind of regulatory bodies could do such a job. Do you have any thoughts on that?

These technologies can be used in ways that cause us to worry about key values and principles. One of them is privacy, and we do have regulation about privacy, overseen by the Information Commissioner's Office. And then there are questions of transparency. These systems are very opaque. If a system is rules-based, you can scrutinise the rules. But if a system is data-driven, it's like a black box. Famously, Sam Altman of OpenAI said we don't really understand how ChatGPT works.

Another issue is accountability, because AI systems tend to diffuse accountability across lots of actors. If I'm using a chatbot on my phone and it gives me bad legal advice, who's responsible for that? Is it the company that developed the chatbot, the company that deploys it, the company that developed the large language model behind it, or some other actor in the supply chain of my legal advice? The UK has nearly 100 regulators, and they are all having to grapple with these kinds of issues. In the UK we have a sector-specific approach to regulating AI, whereas the EU has the AI Act, which looks at it from a central perspective.

Do you think there are advantages to being centralised, or do you think specialism is quite good, given how complex the issue is?

It is a good question. I think in the end, you need both. AI is a horizontal and a vertical technology. It is horizontal in the sense that it gets into everything and makes a difference everywhere, to every market and every area of society, but at the same time it is vertical in the sense that it has quite distinct implications in different sectors. So you need a combination of the UK and EU approaches.

What do you think about the term ‘algorithmic governance’?

I don't really like the use of the phrase ‘algorithmic governance’. Bureaucracy itself is an algorithm; there is a set of rules for how an organisation works and behaves. In that sense, it's very similar to the very early computer systems, which were built on symbolic code: rules of the form ‘if this, then do that’. But modern machine learning technologies look for patterns in data, for example by detecting patterns associated with fraudulent behaviour. In that sense, they're less algorithmic, because they're driven by the data; they're more inductive than deductive. So when they are used for sentencing, for example, or for working out bail conditions, they're telling you about the risk of somebody doing something, like reoffending. A data-driven system is actually less algorithmic than a rules-based system, which has more in common with bureaucracy. And that's why I get a bit annoyed when people talk about ‘algorithmic government’, because it's actually less algorithmic than it was before. When people use this phrase, it's as if they have just discovered digital government for the first time.
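
To make that contrast concrete, here is a minimal Python sketch added for illustration; the fraud-screening rules, the toy threshold learner and the data are all invented, not drawn from the interview or any real system. The rules-based check can be read and audited line by line, while the data-driven check produces only a number induced from labelled examples, with no written rule to point at.

```python
# A toy contrast between the two kinds of system described above.
# Everything here is hypothetical: the fraud-screening rules, the
# threshold learner, and the data are invented for illustration.

# 1. A rules-based system: the logic is written down in advance,
#    so it can be read and scrutinised like a bureaucratic rulebook.
def rules_based_flag(amount: float, country: str) -> bool:
    if amount > 10_000:                    # explicit rule: large transfers flagged
        return True
    if country not in {"GB", "FR", "DE"}:  # explicit rule: unfamiliar origin flagged
        return True
    return False

# 2. A data-driven system: no rule is written down; a decision
#    boundary is induced from labelled examples, so the "why"
#    lives in the data rather than in scrutinisable code.
def fit_threshold(amounts: list, labels: list) -> float:
    """Learn the cut-off that best separates fraud from non-fraud."""
    def accuracy(cutoff):
        # How many historical examples does this cut-off classify correctly?
        return sum((a > cutoff) == y for a, y in zip(amounts, labels))
    return max(sorted(set(amounts)), key=accuracy)

if __name__ == "__main__":
    # Hypothetical labelled history: (transaction amount, was it fraud?)
    history = [(120.0, False), (80.0, False), (9500.0, True),
               (40.0, False), (15000.0, True), (7000.0, True)]
    amounts, labels = zip(*history)
    cutoff = fit_threshold(list(amounts), list(labels))

    print(rules_based_flag(12_000.0, "GB"))  # True, and you can point at the rule
    print(8000.0 > cutoff)                   # True, but this 'rule' was induced from data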