Service delivery is broken – it’s time to join it up
The Service Standard phases stop teams from working in the right way
Back in August, Ben (one of our Senior Product Managers) wrote that the Service Standard phases were a challenge for digital teams in government. I think he softballed it. Phases, government’s way of framing iterative agile delivery, prevent teams from working the right way.
Ben and I have been thinking more about the problems phases cause, and what government could do to make things better.
Going through a phase
GDS popularised modern software development practices in the Civil Service a decade ago. The phase diagram was an early attempt at showing the journey a new product would go on. The phases – Discovery, Alpha, Beta and Live – represent stages in that journey.
It’s modelled on the agile delivery lifecycle: an idea is tested as an MVP (Minimum Viable Product) before it’s built into an increasingly full-featured product. Many major software successes of the last 20 years have gone through a journey like this.
For people new to agile ways of working, the diagram provided reassurance. “Products are on a journey,” it says, “and that journey ends in a service everyone can use.”
But government loves a Gantt chart. Gantt charts provide a false sense of certainty (‘it’s written down here, so it will happen’) that may satisfy a risk-averse financial approval process, but doesn’t support real-world delivery. The phase diagram laid the foundation for 2 things that have plagued service delivery since.
Firstly, it enabled people who work in traditional waterfall ways to say ‘actually, we do this too – we’re agile’.
Secondly, it established the specific gates through which services have to pass to get funding.
Right now, both of these things stop the public from getting good value services.
Chasing waterfalls
Today, a team can be commissioned to conduct a Discovery for a new service. Contracts for work like this will pin a team to a fixed window of time and commit them to certain deliverables. These deliverables are often predicated on the idea that a service will be built, regardless of what the Discovery reveals.
Experimentation is kept within strict bounds. The Service Manual states that you should not start building your service in Discovery. Teams will often be told they can’t show members of the public even the most speculative service. A Discovery should establish whether there’s “a viable service you could build that would make it easier for users to do the thing they need to do”. It’s difficult to do that if you can’t put things in front of people to see what might work. It’s also difficult if a business case already exists presupposing what the service will look like.
Decisions won’t be made as part of a Discovery. User value won’t be delivered.
Similar pitfalls exist for every phase. At the other end of the product life cycle, there don’t seem to be many Live services that get reassessed – or even Beta services that go live. This implies that services are commissioned on the assumption they’ll go on a one-way journey towards wide release. For many of them, an open Beta is seen as enough.
Iterative software development is meant to accommodate pivots and shifts of focus. These are the kind of things that confident, empowered teams do when they’re sure of what their users need. The contracts we see written with the phases in mind don’t leave room for that, and lead to a lack of ownership for services once they’re out in the open.
Stay on target
Service assessments are one of the reasons teams are locked into developing these phase-based services. These meetings (in which services are reviewed by a small team of experts from elsewhere in government) use the Service Standard to mark the quality of a service.
As Laura, one of our Delivery Leads, wrote in October, all eyes are on passing the assessment. This means that teams are encouraged to treat the rest of the Service Standard as a checklist. There’s a world of difference between taking accessibility seriously and ‘meeting the sub-conditions of point 5 on the Standard’.
The reason for this is that assessments are essentially funding gates. In the commercial world, funding for growth, especially for agile software projects, is unlocked by teams who prove the value of a service. Teams zig and zag towards that goal, and use agile methods to understand what approaches will best unlock value for users.
A Service Assessment doesn’t prove value. It’s a capability guardrail. Using it to gate funding is the wrong approach, one that’s far too common across government. That’s what leads to teams being commissioned to ‘build an Alpha’ instead of ‘work out how to help people get support’. It’s another thing that keeps teams at arm’s length from outcomes.
Get to value faster
We want to help teams solve problems, and move thinking and quality forward. We want to do that in a way that gives good value for money (it’s part of our mission). So what do we need to change about phases to deliver value quicker?
Put an end to the current phases. Acknowledge that understanding the right approach to a service or feature, and building and operating a service, often happen in parallel. Develop a structure around commissioning that reflects this. That would let teams unlock value far faster, and ensure they aren’t prevented from testing their assumptions with users early on. Funding a team and holding it accountable would help here, rather than funding a project and stripping the team of decision-making power. Including decision makers – for example, policy team members – as part of the funded delivery team would help too.
Assessments in this world would have to be different. There are a few approaches that could be used, but the one that makes most sense to us is an accreditation-based model. It’s an approach government is very comfortable with: we already have to be accredited for certain things to be on particular procurement frameworks. That feels like a sensible way of ensuring that delivery is done by experts who embody the spirit of the principles, rather than treating them as a hurdle. After all, it’s the team that will understand users and their needs, not the service itself.
Done together, these changes would make trusted teams – able to understand and solve a whole problem for users – the default.
A phase is not an outcome
As Dave, our CEO, argued last month, there’s a lack of accountability in government for outcomes. The use of phases to describe projects typifies this. If teams were on the hook for actually supporting a policy, they wouldn’t simply try to meet the Standard. If commercial teams were experts in delivery, they’d stop departments from commissioning phase-based builds.
The Service Standard itself is a powerful tool. We’ve worked on projects for local government and arm’s-length bodies, where we’ve used the Standard to say ‘this is what good looks like’. We wouldn’t want that ambition to be watered down. But phases have become barriers for teams who want to deliver value fast, and who work in a truly agile way.