Paul Zimmerman
INV Group Chief Communications Officer
November 14, 2024
This article is drawn from an INV Group podcast interview between Paul Zimmerman and Professor Alan Brown, originally posted on YouTube on November 4, 2024.
Alan has been named one of the world’s 100 most influential people in digital government by Apolitical. The author of six books, he has been an IBM Distinguished Engineer, a Fellow of the British Computer Society and a Fellow of The Alan Turing Institute. He was recently named Director of AI for UK-based Digital Leaders.
As AI adoption accelerates across the public sector, the hardest problem is not choosing a model or launching a pilot. It is building the governance, data foundations and institutional confidence needed to use AI at scale. Drawing on a recent conversation with Professor Alan Brown, this piece explores why public-sector AI success depends as much on trust, accountability and service redesign as it does on technical capability.
AI in government is no longer a future question
Government is already using AI.
Sometimes that use is formal and visible: fraud detection, border technologies, eligibility systems, decision support, workflow automation. Sometimes it is quieter: productivity tools with AI built in, summarisation features in everyday software, informal experimentation by staff under operational pressure.
The question is no longer whether AI has a role in government.
The real question is whether public institutions can use AI at scale in ways that are trusted, explainable, auditable and genuinely useful to citizens.
That was one of the strongest themes in our recent Think Tank interview with Professor Alan Brown, Professor of Digital Economy at the University of Exeter and Director of AI at Digital Leaders.
His central argument was clear: government’s challenge with AI is not simply one of access to technology. It is a challenge of organisational design, public accountability, data quality, incentives and governance. As he puts it:
“That’s not a technological issue anymore. That’s a human issue. That’s an organisational issue. That’s a communication issue.”
That is the conversation the public sector needs now.
The public sector does not just need AI adoption. It needs AI at scale
There is no shortage of pilots, proofs of concept and demos. Most large organisations have now seen what AI can do in controlled settings.
But that is not the same as scale.
Alan Brown makes an important distinction here. Small teams, working in contained environments, can often generate impressive results quickly. The difficulty comes when institutions try to move from isolated use cases to broad operational deployment across services, departments and teams.
That is when the real questions begin.
How does AI fit into existing decision-making structures? How does it interact with legacy systems? How should risk be assessed? Who is accountable for the outcome? What is the audit trail? What data underpins the output? How should human judgement be retained or redesigned?
As Brown put it in the interview:
“We can understand things on a small scale… But when we say now do that for a hundred people, now do that for a thousand people, now do that for a hundred thousand people, we say we don’t know how to do that.”
For government, this matters enormously. Public institutions are not start-ups. They operate under statutory obligations, audit regimes, procurement rules, political scrutiny and public expectations that private firms simply do not face in the same way.
That means scaling AI in government is not just a matter of rolling out tooling. It is a matter of building the operating conditions in which AI can be used safely, consistently and credibly.
AI is entering government in three different ways
One of Brown’s most useful observations is that AI is not arriving in government through a single route.
He describes three broad modes of adoption.
The first is through large formal programmes: major AI initiatives such as border systems, fraud analytics or service eligibility platforms. These are procured, funded and governed in recognisably public-sector ways. They are visible, controlled and strategically important.
The second is through informal use at the edge: officials and practitioners using general-purpose AI tools to help summarise, prepare, interpret or generate content in the flow of everyday work. This is where utility often appears before governance has fully caught up.
The third is through embedded features in commercial platforms: AI arriving through office software, workflow tools and enterprise systems already in widespread use. In many organisations, this may be the fastest-growing route of all.
This matters because government cannot treat AI as a single programme with a single governance pattern.
Different uses carry different levels of risk, different needs for oversight, and different implications for data handling, accountability and trust.
Risk, trust and value: the three sliders government must manage
One of the clearest ideas in the conversation was Brown’s framing of AI decision-making through three factors: risk, trust and value.
That is a useful lens for any organisation. In government, it is essential.
A public body considering AI is rarely asking just one question. It is weighing several at once:
What is the risk of using this tool?
What is the risk of not using it?
What value could it create?
Is there enough trust in the output, the process and the controls around it?
Brown describes these almost as sliders that move depending on the context.
“Constantly we’re trying to balance those three things: risk, trust and value.”
That balance is particularly difficult in government because the cost of getting something wrong is not only operational. It may also be legal, political, ethical or reputational.
At the same time, the cost of not acting can be significant too: backlog, delay, poor citizen experience, avoidable friction, staff burden and wasted public money.
This is why AI governance in government cannot be reduced to a compliance checklist. It has to be a practical framework for decision-making, one that helps institutions make better judgements about where AI should be used, how, by whom, and with what safeguards.
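To make that concrete, here is a minimal sketch of what such a decision framework might look like in practice. Everything in it is illustrative: the field names, the 1-to-5 scales and the triage thresholds are assumptions made for the purpose of the example, not a prescribed methodology and not a framework Brown himself proposes.

```python
from dataclasses import dataclass

@dataclass
class UseCaseAssessment:
    """Illustrative record for triaging a proposed AI use case."""
    name: str
    risk: int   # 1 (low) to 5 (high): legal, political, ethical, operational exposure
    trust: int  # 1 (low) to 5 (high): confidence in outputs, process and controls
    value: int  # 1 (low) to 5 (high): expected benefit to citizens and staff

def triage(a: UseCaseAssessment) -> str:
    """Map the three sliders to a coarse governance route."""
    if a.risk >= 4 and a.trust <= 2:
        return "hold: strengthen controls or rethink the use case"
    if a.value >= 4 and a.trust >= 3:
        return "pilot with oversight: named owner, audit trail, escalation route"
    return "monitor: revisit as assurance and data quality mature"

print(triage(UseCaseAssessment("summarising casework notes", risk=3, trust=3, value=4)))
# pilot with oversight: named owner, audit trail, escalation route
```

The point is not the scoring itself. It is that the three sliders are weighed together, explicitly, and leave a record of the judgement that can be defended later.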
Governance is not friction. It is what makes scale possible
There is still a tendency in some AI conversations to treat governance as the thing that slows innovation down.
That is the wrong frame for government.
In regulated, high-accountability environments, governance is not the enemy of scale. It is the precondition for it.
Without clear governance, AI remains stuck in pilots, informal workarounds and isolated experiments. With the right governance, institutions can move with more confidence because they understand the boundaries, the controls, the escalation paths and the basis on which decisions can be defended.
Brown’s reflections on audit and accountability are especially relevant here. Public bodies do not simply ask whether something works. They also have to ask whether they can justify how it was done.
“The last thing you want to be doing is standing in front of a Public Accounts Committee trying to justify why you did or didn’t do something.”
That is why public-sector AI needs more than enthusiasm and procurement. It needs governance that is operationally embedded: clear ownership, explainability, documentation, data lineage, oversight controls, escalation routes and assurance mechanisms proportionate to the context.
This is exactly where many institutions now need to focus. Not on AI theatre, but on the infrastructure of trust.
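As one sketch of what “operationally embedded” could mean, the fragment below models a single entry in a hypothetical AI governance register. The field names are assumptions chosen to mirror the elements listed above; a real register would be shaped by the institution’s own assurance regime.

```python
from dataclasses import dataclass, fields

@dataclass
class GovernanceRecord:
    """Hypothetical register entry for one AI use in live operation."""
    use_case: str
    accountable_owner: str     # a named role, not a diffuse team
    data_sources: list[str]    # inputs, with lineage documented alongside
    explainability_note: str   # how an output can be explained to a reviewer
    escalation_route: str      # who decides when an output is challenged
    assurance_review: str      # date and scope of the last assurance check

def deployable(record: GovernanceRecord) -> bool:
    """Deployment is blocked until every governance field is actually filled in."""
    return all(bool(getattr(record, f.name)) for f in fields(record))
```

The design choice worth noticing is that the governance data travels with the use case itself, rather than living in a separate compliance document that drifts out of date.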
Provenance matters as much as performance
Another critical point Brown makes is that a good-looking answer is not enough.
Even if a generative AI system produces something useful, that does not mean it can be relied on in a public-sector setting.
Where did the answer come from? What data informed it? Can its reasoning be understood? Can it be challenged? Can it be audited? Who is responsible for using it? What liabilities might sit behind it?
These are not peripheral concerns. In government, they are central.
Brown captures this precisely:
“Even if that tool gave you an answer which you said ‘that’s a great answer’, I just can’t use it because I don’t know where it came from.”
That line gets to the heart of the issue.
In public service, provenance matters. So does explainability. So does accountability. A plausible output is not the same as a trustworthy one.
This is especially true where AI affects decisions, entitlements, case handling, policy interpretation, safeguarding, vulnerability or public communication. In those settings, traceability is not optional.
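One way to read that requirement is that every AI-assisted output should carry its own audit metadata. The sketch below is an illustration under stated assumptions, not a standard: the record structure and the usability rule are invented here to show the shape of the idea.

```python
import datetime
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenanceRecord:
    """Illustrative metadata stored alongside each AI-assisted output."""
    output_id: str
    model_version: str                 # which system produced the answer
    source_documents: tuple[str, ...]  # what data informed it
    generated_at: datetime.datetime
    reviewed_by: str | None            # the named human accountable for using it

def usable_in_service(p: ProvenanceRecord) -> bool:
    """A plausible output is not a trustworthy one: require traceable
    sources and a named reviewer before the output is relied on."""
    return len(p.source_documents) > 0 and p.reviewed_by is not None
```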
The real prize is not digitising the form. It is redesigning the service
A particularly sharp part of the discussion focused on service design.
Government has spent years digitising forms, portals and transactions. Some of that work has been valuable. But too much of it has focused on moving old processes online rather than rethinking how services should work in the first place.
Brown’s example of a highly complex pensioner fuel allowance form lands the point well. If government already holds relevant data about the citizen, why is the burden still on the individual to navigate a lengthy process to prove eligibility?
His answer is memorable:
“The digitisation of ‘we’ve digitised the form’ is sort of missing the point. What we want you to do is say we don’t need a form.”
That should be read as a challenge to the public sector’s AI agenda more broadly.
The real opportunity is not just to automate existing processes. It is to redesign services around better data use, clearer decisions, reduced friction and more proactive support.
That is where AI can create the most meaningful public value. But that is also where governance, assurance and service design discipline become even more important.
Data foundations are the real enabler
There is no serious path to AI at scale in government without better data foundations.
Brown is direct about the current state of play. Data quality is variable. Access is inconsistent. Ownership is often unclear. Provenance, age and accuracy are not always well understood.
“We know the data quality isn’t always high. It isn’t always accessible. We’re not sure where it came from, who owns it, what its provenance is, what its age is, what its accuracy is.”
This is not a secondary technical issue. It is one of the main blockers to trustworthy AI in the public sector.
Too much public discussion about AI still centres on the model layer. In practice, the harder and more important work is often beneath that: data quality, data architecture, classification, metadata, access control, lineage, retention, interoperability and governance.
If those foundations are weak, AI will struggle to move beyond narrow, low-stakes use cases. If those foundations are improved, a much wider range of useful, defensible and scalable applications becomes possible.
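A minimal sketch of what acting on those questions might look like, assuming a simple dataset descriptor (the fields and the one-year staleness threshold are illustrative assumptions, not a standard):

```python
import datetime
from dataclasses import dataclass

@dataclass
class DatasetDescriptor:
    """Illustrative metadata a dataset should carry before it feeds an AI system."""
    name: str
    owner: str | None                   # who is accountable for it
    provenance: str | None              # where it came from
    last_updated: datetime.date | None  # how old it is
    accuracy_checked: bool              # whether accuracy has been assessed

def foundation_gaps(d: DatasetDescriptor, max_age_days: int = 365) -> list[str]:
    """List the unanswered questions that should block high-stakes AI use."""
    gaps = []
    if not d.owner:
        gaps.append("ownership unclear")
    if not d.provenance:
        gaps.append("provenance unknown")
    if d.last_updated is None or (datetime.date.today() - d.last_updated).days > max_age_days:
        gaps.append("age unknown or stale")
    if not d.accuracy_checked:
        gaps.append("accuracy not assessed")
    return gaps
```

An empty list is a precondition for scale; every entry in it is a reason an application stays narrow and low-stakes.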
Public-sector AI needs institutional confidence, not just technical ambition
There is another thread in Brown’s argument that deserves attention: the tension between those closest to the service problem and those responsible for institutional risk.
Frontline teams often understand very well where better use of data or automation could make a difference. But management structures, understandably concerned with accountability and exposure, may be reluctant to support experimentation or change.
That tension is familiar across government.
The answer is not reckless innovation. Nor is it permanent caution. The answer is building institutional confidence: clear guardrails, explicit accountability, better governance, better data practice and enough shared understanding for responsible action to take place.
In other words, government needs the confidence to move forward safely, rather than the false choice between paralysis and overreach.
The INV Group view: the future of public-sector AI will be won through governance
At INV Group, we believe this is where the public-sector AI conversation now needs to mature.
The challenge is no longer simply access to models or interest in automation. The challenge is building the compliant AI layer that regulated organisations can depend on.
That means:
governance that is designed into workflows, not added afterwards
data foundations strong enough to support trustworthy use
service redesign that focuses on outcomes, not just transactions
auditability, oversight and explainability that stand up in real operating environments
adoption patterns that respect the realities of public accountability
The institutions that will succeed with AI in the next few years will not necessarily be those with the flashiest pilots. They will be the ones that can combine technical capability with governance credibility.
That is what will allow them to move from experimentation to scale.
And in government, scale without trust is not transformation. It is a risk.