What is the UK Government actually doing with AI?
AI is, on the whole, a promising fit for a lot of public sector organisations
Having bumped along at the bottom of most people’s techno-anxiety lists over the past decade or so, artificial intelligence suddenly leapt to the top with the release of ChatGPT in November 2022. We now face a world, we are told, in which ChatGPT is likely to throw millions of workers out of their jobs, destroy the notion of higher education, and take over the stock market.
Such speculation becomes particularly alarming when applied to government. The notion of hyper-intelligent robots running the state readily evokes Orwellian nightmares. Military authorities predict an increased risk of nuclear war in the wake of enhanced AI capabilities, while the historian, philosopher and author Yuval Noah Harari warns darkly of impending AI-fuelled tyranny and the UN seeks a moratorium on further AI development.
Dismissing these dystopian visions out of hand would be reckless. After all, the technology is powerful, opaque, and advancing exponentially. On the other hand, they sit oddly next to the small handful of public sector AI applications I’ve been exposed to over the past couple of years. These include a proposal to use machine learning to classify plant cover and promote biodiversity in the UK countryside, and a smattering of initiatives involving smart meters and electricity consumption. While there are ethical complexities and legitimate concerns that can be raised about these sorts of projects, they’d nevertheless make for a singularly dull science fiction novel.
GovTech AI: more advanced than you might think…
For people who are alarmed by the prospect of an AI-enhanced government, there’s some bad news: that future is, to a large extent, already here. ChatGPT has been sending shockwaves through the tech world and wider society in the months since its release – but the transformer architecture and the GPT Large Language Models (LLMs) it’s built upon have been available since 2018. ChatGPT is better, cheaper, and in some respects easier to access and use than earlier GPT LLM releases. But for organisations that already had well-defined AI requirements, these are (admittedly substantial) differences of degree rather than of kind.
And AI is, on the whole, a promising fit for a lot of public sector organisations. It’s a common complaint of policy specialists that government is only ever called upon to confront so-called ‘wicked problems’ – society-wide complex issues with high stakes, a wide diversity of stakeholders, and a substantial number of unknown quantities. And it’s precisely this kind of multi-parameter scenario that AI is uniquely well designed to address. It’s hardly surprising, then, that the UK Civil Service already treats AI as a potentially standard part of the public-sector problem-solving toolkit. And while looking to AI as a magic bullet for intractable problems can sometimes smack of desperation, the UK generally enjoys a strong comparative advantage in this area – one the government can be expected to exploit.
…less than you might fear
On the other hand, the public sector also operates under a unique set of constraints:
- AI applications typically require large volumes of standardised, high-quality data to work well. But government often lacks sufficient data – and often quite basic data – for the wide-ranging, society- and nationwide problems it is called upon to solve
- fears of bias and concerns about data privacy are particularly acute in public sector applications, dampening enthusiasm for large-scale AI rollouts
- the use of complex or ‘black box’ AI models raises difficult questions about governmental and public-sector accountability that remain, for the most part, unanswered
- public sector technology is expected to be exceptionally robust, reliable, and resilient. While AI is maturing fast, many of its applications remain experimental and relatively untested
- there have been a fair number of high-profile, algorithmically fuelled controversies and catastrophes in the public sector that ought to, and generally do, fill civil servants with fear and trembling when implementing AI in their own departments
One might reasonably expect, then, that public sector AI projects will be focused on uncontroversial, non-mission-critical applications fairly remote from the daily lives and concerns of ordinary citizens.
Methodology
Unfortunately, falsifying or confirming this assumption is difficult. There’s no central registry for AI projects – the Office for Artificial Intelligence is concerned with national strategy, not implementation – and even full-time researchers who are paid to find this sort of thing out complain about the difficulty of doing so.
My attempt to find out more about UK Government AI projects involved:
- random googling
- searching for the terms “artificial intelligence” and “machine learning” on various departmental websites – for example, the Cabinet Office and Defra
- poking around on the Alan Turing and Ada Lovelace Institute websites to get a sense of strategic focus
- searching the Contracts Finder and Find a Tender services for all UK public sector awards containing the phrases ‘artificial intelligence’ or ‘machine learning’ (a minimal sketch of this kind of query appears after this list)
- sanity checking against reviews of public sector AI projects carried out by AI Watch for the European Union and by Deloitte.
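To give a sense of how unsophisticated that last step really is, here’s a rough sketch of the kind of keyword query involved. The endpoint, parameters, and response shape are assumptions made for illustration only – the real interface is described in the Contracts Finder API documentation – but the underlying logic (fetch notices, count those mentioning either phrase) is genuinely this simple.

```python
# Rough sketch of the keyword search described above.
# NOTE: the endpoint, parameters, and response shape are assumptions made for
# illustration -- consult the Contracts Finder API documentation for the
# actual interface.
import requests

SEARCH_URL = "https://www.contractsfinder.service.gov.uk/api/notices/search"  # assumed
PHRASES = ("artificial intelligence", "machine learning")

def notices_mentioning(phrase: str) -> list[dict]:
    """Fetch a page of published notices whose text mentions the given phrase."""
    response = requests.get(SEARCH_URL, params={"keyword": phrase, "size": 100})
    response.raise_for_status()
    return response.json().get("results", [])  # assumed response shape

if __name__ == "__main__":
    for phrase in PHRASES:
        print(f"{phrase}: {len(notices_mentioning(phrase))} notices (first page only)")
```

Anything much more rigorous than this – deduplication across the two services, full-text matching, historical trends – quickly runs into exactly the discoverability problems described above.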
As a result, findings are highly skewed towards:
- successful projects people want to boast about
- areas where the Civil Service doesn’t feel it has the internal expertise to adopt AI/ML solutions
- unclassified projects. While it’s clear that AI has a significant role to play in cybersecurity and national defence, the kinds of organisation that are focused on this are understandably reluctant to share details
Findings
Even given this skew, however, this ad hoc querying allows an indicative map of public sector AI involvement to be sketched out.
TL;DR: AI is growing fast, from a very low base. And it’s being used for… well, a bit of everything… and this diversity of application is in fact one of the interesting things about it.
There’s not really much AI work going on
I had to delve fairly deep into various government websites to find any evidence of AI-related activity – and often came up empty handed. As for external work, of the 269,620 contracts listed on Contracts Finder as of 20 April 2023, 209 (roughly 0.08%) feature the phrases “artificial intelligence” or “machine learning”. For the Find a Tender service, the figures are 162 of 80,603 (0.2%).
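(The percentages are nothing more than straight ratios of those counts, as the quick check below shows.)

```python
# Quick check of the shares quoted above.
contracts_finder_total, contracts_finder_ai = 269_620, 209
find_a_tender_total, find_a_tender_ai = 80_603, 162

print(f"Contracts Finder: {contracts_finder_ai / contracts_finder_total:.2%}")  # 0.08%
print(f"Find a Tender:    {find_a_tender_ai / find_a_tender_total:.2%}")        # 0.20%
```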
…but that’s changing fast
A fair number of those contracts and tenders, however, turn out to really be prep work for procuring more AI technology. The Cabinet Office, the National Nuclear Laboratory, and HealthTrust Europe are all looking for Dynamic Purchasing Systems to make contracting future AI work easier. Many more organisations are looking for gateways, frameworks, and educational platforms to help connect them with, evaluate, and work with AI providers more effectively. For departments less ready to take this step, there’s a steady flow of tenders for reports on AI, its potential application to various sectors, and the ethical and practical limitations to such applications.
In cases where AI is found to have some chance of being ethically applicable to a domain, the path to funding approval will likely be smoothed considerably by the recent announcement of a £100m fund for a ‘Foundation Models Taskforce’, much of which is to be spent on public sector procurement. In the world of foundation models, £100m is nowhere near as much as it sounds. But the Taskforce reports directly to the Prime Minister, and as a statement of government intent, it doesn’t get much clearer than that.
…across almost all domains
Leaving aside all the AI-scaffolding projects, the diversity of problems to which AI is being applied is immense.
This makes the task of summarising what exactly government is using AI for a difficult one, because the distribution of use-cases is extremely flat. I’ve attempted an informal grouping below. But when you read them, bear in mind that there are more cases that fall outside of all these clusters than within them. The ‘misc’ file, in other words, is much larger than all the other files combined.
Current applications
Medical imaging
This was by far the best-defined group of applications, and with good reason: diagnostic medical imaging is (at this point in time) a reasonably mature and well-studied AI domain, and it’s unsurprising that the NHS and associated bodies are procuring systems capable of it. There were roughly a dozen tenders out for radiology and pathology AI applications. I expect these were not the first and will not be the last that UK health organisations issue.
Collating, interpreting, and analysing ‘big data’
This cluster comprises a variety of areas in which government is wrestling with data volumes too large for humans to successfully parse – sensor data, web analytics, etc.
Classifying plant cover from satellite data
Defra has a difficult time assessing the state of British nature and agriculture – in particular, in determining what plants are growing where. So the department is interested in using machine learning to identify crop cover and biodiversity metrics from aerial images across a range of geographies and ecosystems.
On a related note, Cefas (the Centre for Environment, Fisheries, and Aquaculture Science, a Defra subdepartment) is attempting to do the same thing for fisheries, using a range of underwater technologies and seabed samples.
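To make the shape of such a system a little more concrete, here’s a minimal sketch of one common approach: a per-pixel classifier trained on spectral band values from labelled aerial or satellite imagery. The bands, classes, and choice of a random forest are illustrative assumptions, not anything Defra or Cefas has committed to.

```python
# Minimal, illustrative land-cover classifier: per-pixel spectral bands -> class.
# The data, band names, and classes are made up for this sketch; a real system
# would work from labelled aerial/satellite imagery and field survey data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Pretend each pixel is described by four bands (red, green, blue, near-infrared)
n_pixels = 5_000
X = rng.random((n_pixels, 4))
# Pretend labels: 0 = cereal crop, 1 = grassland, 2 = woodland
y = rng.integers(0, 3, size=n_pixels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```

In practice, of course, most of the work lies in assembling the labelled training data – field surveys, existing land-use maps – rather than in the classifier itself.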
Getting more accurate traffic counts
Counting traffic is apparently more difficult than one would think – and a number of local councils hope to apply artificial intelligence to the problem.
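For illustration only, the sketch below counts vehicles in a single camera frame using an off-the-shelf object detector (torchvision’s Faster R-CNN pretrained on COCO). The image path is a placeholder, and a real system would process video streams and track vehicles across frames to avoid double counting – but it gives a flavour of why ‘just counting traffic’ now shows up as an AI procurement.

```python
# Sketch of counting vehicles in a single camera frame with an off-the-shelf
# object detector (torchvision's Faster R-CNN pretrained on COCO).
# The image path is a placeholder; a real deployment would process video
# streams and track objects across frames to avoid double counting.
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

VEHICLE_LABELS = {3, 6, 8}  # COCO category ids: car, bus, truck
CONFIDENCE = 0.6

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

frame = to_tensor(Image.open("junction_frame.jpg").convert("RGB"))  # placeholder image
with torch.no_grad():
    detections = model([frame])[0]

count = sum(
    1
    for label, score in zip(detections["labels"].tolist(), detections["scores"].tolist())
    if label in VEHICLE_LABELS and score >= CONFIDENCE
)
print(f"vehicles detected in frame: {count}")
```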
Making sense of large text archives
Historic England, the Intellectual Property Office, and the National Archives all intend to use artificial intelligence to help cluster, classify, link, and/or search their extensive archives of documents.
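As a rough indication of what ‘clustering and classifying an archive’ can mean in practice, the toy example below groups a handful of invented catalogue descriptions by vocabulary, using TF-IDF features and k-means. It’s a deliberately simple baseline, not a description of what any of these organisations is actually buying.

```python
# Toy example of clustering archive documents by vocabulary.
# Real archive projects involve OCR, metadata, and far richer models;
# this just shows the basic TF-IDF + k-means pattern on invented text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "Grant of arms to the borough, with an illuminated charter",
    "Patent application for an improved steam valve mechanism",
    "Survey of a listed Victorian railway station building",
    "Trade mark dispute concerning a textile manufacturer",
    "Conservation report on a medieval parish church",
    "Patent for a signalling apparatus used on early railways",
]

vectoriser = TfidfVectorizer(stop_words="english")
features = vectoriser.fit_transform(documents)

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

for doc, cluster in zip(documents, clusters):
    print(cluster, doc[:60])
```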
Optimising networks
The public sector is responsible for a large number of networks, ranging from the National Grid to sewage and plumbing infrastructure to the rail network. Locating bottlenecks, estimating risk, and investigating routing issues are all multi-parameter problems well suited to machine-learning and artificial intelligence solutions.
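As a concrete (and deliberately classical) illustration of the ‘locating bottlenecks’ part of that problem, edge betweenness centrality scores how much of a network’s shortest-path traffic is forced through each link – a reasonable first proxy for where congestion or failure hurts most. The toy graph below is invented; real grid, water, or rail networks are vastly larger, and the machine learning sits on top of this kind of structural analysis in the form of demand forecasting and risk models.

```python
# Classical baseline for spotting bottleneck links in a network:
# edge betweenness centrality (share of shortest paths crossing each edge).
# The toy graph is invented; a real utility or rail network would be far larger.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("B", "C"), ("C", "D"),   # one side of the network
    ("E", "F"), ("F", "G"), ("G", "H"),   # the other side
    ("D", "E"),                           # the single link joining them
])

scores = nx.edge_betweenness_centrality(G)
for edge, score in sorted(scores.items(), key=lambda item: -item[1])[:3]:
    print(edge, round(score, 3))  # the D-E bridge should come out on top
```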
Anomaly detection
A large handful of government organisations want to use AI for anomaly detection – typically for cybersecurity reasons, but also on occasion to prevent fraud.
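To make the term concrete for non-specialists: ‘anomaly detection’ here mostly means flagging events that don’t fit the usual pattern – logins, payments, network traffic. The sketch below runs an isolation forest over made-up two-feature data; the features and thresholds are invented, and production systems in security or counter-fraud settings are considerably more involved.

```python
# Minimal anomaly-detection illustration: flag points that sit far outside
# the bulk of the data. Features and data are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend features: (transaction amount, transactions per hour)
normal = rng.normal(loc=[50.0, 3.0], scale=[10.0, 1.0], size=(500, 2))
odd = np.array([[5000.0, 40.0], [1.0, 200.0]])  # obviously unusual activity
X = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)  # -1 = anomaly, 1 = normal

print("flagged rows:", np.where(flags == -1)[0])
```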
‘Grab-bag’ AI
In addition, in many tenders, AI was referred to simply as one of a range of technologies – included alongside, for example, the Internet of Things and Immersive Technology – that organisations are willing to consider in relation to a bid. It’s tempting to dismiss this as nothing more than buzzword boilerplate, but the presence of this sort of list is a good indication of areas where government feels problems have been both difficult and persistent. It’s presumably on this basis that CRM and document and case management systems all end up with ‘AI’ attached to them.
Looking around the world…
If we compare the above, rather haphazard, list of UK public sector AI applications to those found in international surveys of the EU and beyond, we find a similarly mixed bag.
Some overlaps are clear – medical use-cases are frequent everywhere, while a preoccupation with traffic and utilities is common to both the EU and the UK. This isn’t to say local variations can’t be found – for example, the European Union seems to have discovered an enthusiasm for chatbots the UK currently lacks, while Canada’s geological-prospecting projects make sense given its resource-driven economy.
In general, though, the main impression one comes away with is simply the incredible diversity of the problems to which AI is being applied, not just globally but within each country individually. The scale, complexity, and uncompromising ‘wickedness’ of the problems that the public sector addresses make AI an appealing tool for at least some subset of the problems almost any government department can be expected to face – regardless of whether the domain is medical or military, the focus is service provision or policy, or the national AI strategy is relatively laissez-faire or regulatory.
…and into the future
The speed with which the power of AI has been growing, and the fall in implementation costs over the past couple of years, have been driving a boom in attempted predictions about the technology – including well-considered contemplation of the public sector use-cases that are most suitable for AI solutions.
When considered alongside both the range of problems the public sector is already looking to address with AI and trends in government procurement – not least the recent announcements of significant central funding for AI and AI infrastructure – the increased availability and affordability of the technology suggests that we may have already hit an inflection point in the public sector.
We either are, or will soon be, at the point where it stops making sense to think of AI as a distinct technology with a given set of potential uses in government, and starts making sense to treat it as a standard part of the technical toolkit, with the potential to play a role in almost any solution. In the same way that we tend not to refer to ‘digital government services’ – by this point government services are often assumed to be digital by default – it may in the near future seem redundant to speak of ‘public sector AI applications’.
…but first, focus on the present
That said, there are significant brakes on this process. As noted above, questions of data availability and quality, legality and ethical status, and application reliability and transparency all pose significant challenges to public sector implementations of AI. On the evidence of the Contracts Finder and Find a Tender services, these problems are currently being approached piecemeal, on a project-by-project basis. So there’s a substantial risk of important lessons being lost, or at any rate having to be learned many times over, because of the fragmentation of the AI landscape. If the UK public sector ends up being delayed in its AI adoption, it may be because of too much enthusiasm for the technology rather than not enough – with different departments and directorates bounding off in pursuit of solutions already achieved elsewhere, or following data strategies that others have found to be flawed.
There needs to be a way for civil servants to learn about public sector AI projects other than from catastrophising headlines about them. And perhaps more immediately, there ought to be a way to find out about government AI that’s more reliable than hunting and pecking one’s way through the Contracts Finder. These shortcomings may be addressed as part of the government’s £100m AI fund – and if so, the investment is likely to repay itself several times over. Making predictions about a technology as fast-moving and exponentially growing as AI can be dangerous, but its potential for the public sector of the future is clearly immense.
To get there, though, we’ll need to first understand its present.