AI? Yes, Minister
Delivering a solid foundation of modern systems might well be a necessity before government can take advantage of the latest tech
UKGovCamp is part of the cadence of my year. It feels like the year’s really started when you’re with a couple of hundred other folk, grappling with the same issues in different contexts. You can follow the ebb and flow of what’s exercising digital folk’s minds in the sheets of session descriptions going back to 2008. No surprise that five session titles included AI this year, drawn from nine pitched suggestions.
I’m going to draw some themes from those sessions, aided by the campmakers who were typing furiously to get some notes down.
1: Let’s get on with it.
What I noted most was a theme of pragmatism. The mood seemed to me to be “This is happening, how do we make sure it’s done well?”. Folk drew parallels between AI and other sea changes in government IT. Was AI a change like “Digital”, which would make us work differently? Or was it like “Blockchain”, a solution in search of a problem? The consensus: yes, this is a real change and digital folk need to work with the flow, to add AI to their list of tools to deliver on user needs.
My favourite comment all day: “Is this what it feels like to be disrupted?”.
2: But what do you mean by AI, specifically?
The blanket term AI is doing a lot of work. For users, it might mean anything from a rules-based chatbot to image generation. Under the hood, it covers the machine learning algorithms that excel at tasks like image recognition, data analysis and pattern identification, as well as the Generative AI that’s driving much of the public perception of what AI is. Without that clarity, it’s much harder to delve into the detail of how AI might solve a problem. It was widely understood that AI is a set of tools, not a single service, and it is the service which delivers value to those who use it.
3: Might we have an evidenced reality check?
There’s a consensus that AI has serious implications for both carbon consumption and water; bodies such as the UN Environment Programme have been flagging this for some time. Newer Generative AI models will be more efficient, but given so much of the use of AI is driven by private companies, there’s a lack of openly available data about what the impact is right now.
There’s a deal of suspicion as to how irresponsible use of AI might deepen structural inequalities. At the very least, only the digitally included could conceivably have their data available to be used by AI models, whose benefits would then be less felt by the excluded. Poorly governed and explained demand for data might lead to citizens opting out of the very services that could help deliver better experiences, lessening their impact still further.
When the Government is looking for “£45 billion in productivity savings”, some of which is promised to include the use of AI, will the promised “Responsible AI Advisory Panel” provide credible evidence of impact – both positive and negative? Their remit is currently unclear.
4: Can we get the basics in place first?
I guess folk outside the Civil Service might expect this, but there were some solid calls for caution. The consensus was that without training, clean data, a user-centred approach and a capacity to learn from failures and successes, AI wouldn’t deliver the promise of its more optimistic advocates. Yes, there will be business cases for AI-powered services – but there will also be solid business cases for decommissioning legacy IT. Delivering a solid foundation of modern systems might well be a necessity before government can take advantage of the latest tech and deliver headlines too.
What next?
One of the final sessions of the day posed the question “What should central gov do?”. It used a technique called 1-2-4-all. 1-2-4-all is designed to help every voice in a group get heard. Everyone considers a question for a minute or so, turns to their neighbour and pools those thoughts, then forms a group of four and produces a collective response. Our session said that central government should:
- offer training for AI literacy, and in the use of specific tools
- give guidance on the sorts of experiments in AI it is looking for first
- support efforts to avoid over-relying on GenAI to solve problems Large Language Models aren’t best placed to solve
- list example use cases that provide cost saving across the board
- issue guidance on AI service design so inaccuracies delivered by AI are identified and dealt with before causing harm
- update the Service Manual for the AI era, with a checklist of hygiene factors for AI-powered services
And finally…
I know through experience that naming things is hard; but “Humphrey” as the name for a suite of AI tools for the Civil Service? I can’t help but think that fellow fans of the 80s political satire “Yes, Minister” will also be amused. I hope Jonathan Lynn hears of it.
I hope civil servants who are told to deliver badly thought through AI solutions, simply so a minister can ride a hype train, feel able to say “No, Minister”.
But I also hope that we get a stack of ministers proudly saying they’ve saved the public purse millions, and improved the experience of contact with government, by using AI.
While the civil servants who did the work of delivering quality services, using AI thoughtfully and safely, get to pride themselves on their good service to the people of the UK.
PS. Thanks to the organisers and helpers at UKGovCamp 2025 for the inspiration to try and summarise such a strong part of the event.
If you’d like to read the verbatim notes, they’re all here:
- How can AI work for government & what is Parliament’s role in Government
- AI Bullshit spotting & how to spot AI failures