In this new podcast episode, the second in our AI for Industry series, our host Adam Kobeissi explores the potential of artificial intelligence in the justice system with special guest Matthew Cain.

As the interim Chief Digital and Information Officer at the Crown Prosecution Service (CPS), Matthew brings unique insights into how AI is supporting their critical work. With a background in public sector digital transformation, he shares valuable perspectives on how technology is enhancing efficiency and supporting decision-making at the CPS. His experience leading innovative projects makes for an engaging and informative discussion on the future of AI in justice.

The role of the CPS is to make sure that the right person is prosecuted for the right offence, and to bring offenders to justice wherever possible. It prosecutes criminal cases that have been investigated by the police and other investigative organisations in England and Wales, and it is an independent organisation, making decisions independently of the police and government.

From managing large volumes of evidence to improving case management, this episode covers how AI could transform the justice system while addressing challenges around public trust and data security:

  • How is AI transforming the CPS and its workflows?
  • What are the challenges of integrating AI into decision-making?
  • How can AI support case management in the justice system?
  • What AI tools or technologies are most promising for the CPS?
  • How is the CPS balancing efficiency with the need for human oversight?


Transcript

Adam Kobeissi: Hello everyone, and welcome to another episode of our AI for Industry podcast here at CGI. As those of you who've listened in before will know, this is an opportunity for us to discuss how AI is either playing, or could play, a pivotal role in the changing industries that we work in. And as ever, I'm joined by a fantastic guest this morning, this time to talk about how we can put AI at the forefront of digital innovation, potentially within the UK's justice system.
So joining me is Matthew Cain, the chief digital and information officer at the Crown Prosecution Service. Matthew has been with the CPS since 2022 and has been a driving force behind the organisation's digital transformation. He's got a rich background in public sector digital strategy, research and communications, including roles at the London Borough of Hackney and Buckinghamshire County Council.
He brings a wealth of experience and a visionary approach to digitalisation and information management. Under his leadership, the CPS has freed up more than 100,000 hours of staff time through a workflow management modernisation programme that puts job satisfaction, efficiency and automation at its heart, and the team is now exploring how AI could further support and improve their work.
In today's episode, we'll explore Matthew's journey, the innovative projects at the CPS, and his vision for digital transformation within the public sector. So it gives me great pleasure to welcome you to the pod today, Matthew, thank you for joining us.

Matthew Cain: Yeah, it's a pleasure. I'm really pleased to discuss this exciting emerging area of technology.

Adam: Yeah brilliant, thank you. Maybe you could just give the listeners a bit of a rundown on your own journey, Matthew. How did you come to be the CDIO of the CPS, and what has the journey been like so far?

Matthew: There are kind of three things about my work experience which have led to this point and which I find particularly useful in my role at the CPS.
I began my career in policy research, often quite close to the heart of government, or at least it felt like it at the time. I then spent a period of time running a social media research agency, right in the early days of social media; I think we conceived of the business before Twitter was even created. And then I came into strategy consulting and ultimately got frustrated when my job finished with 'save as PDF', so I was really keen to find out about this digital transformation stuff. How difficult could it really be? That's brought me to here, and of those experiences, the one that is most helpful for me is the experience of running a small startup.

Adam: Amazing. And I guess the inquisitive mind is what drives the change, right? One of the things you've just talked about there is the ability to reflect, at a point in time, on your own career and how you could do things differently, not just for yourself but for others as well, which I guess naturally leads you to the job you do now, right?

Matthew: Yeah. And it's one of those things that actually, ironically perhaps, AI isn’t so good at, right: joining the dots between disparate experiences in order to create something new. And fundamentally I hope that's a working definition of innovation. But, what I've always sought, through jobs which have been more or less fulfilling, is the ability every day to believe that I'm doing something which ultimately is going to make life better for citizens.

Adam: Well, let's talk about that for a second, in terms of the CPS particularly. As we were saying a little before, the Crown Prosecution Service is not an area that I've had much interaction with through my professional career or personal life, thankfully. My experiences are predominantly based around what I see on TV. But just to set the agenda for the conversation we can have, particularly around AI, could you expand a little on what the Crown Prosecution Service actually does, over and above prosecuting cases itself? Because it's got a wide and diverse supply chain, right?

Matthew: Yeah. So, much of what you have seen about the role of the CPS in a crime documentary or a drama won't actually be untrue. We make decisions on whether or not someone can be charged in a particular set of cases, based on information that the police send us, and we then prosecute those cases in court. And we do that over 400,000 times each year for criminal activity in England and Wales.

Adam: Brilliant. Thank you for that amazing overview of what the CPS does. There are a number of things within that sphere of influence that either present a positive opportunity for technology such as AI or raise some interesting challenges about how we use the technology in the most pragmatic way.
So maybe we should start exploring a little of that together. I think in my introduction I mentioned that you've been able to save 100,000 hours of time. Is that right?

Matthew: Yeah. So, ultimately, from today's vantage point, I'd argue that AI will probably have a broadly neutral impact on the CPS. Obviously, in terms of where it is a threat to us, like every other organisation we're looking at the potential for AI in cybersecurity threats, and that makes us stiffen our resolve to manage our cyber threats really well.
And secondly, in terms of simply the volume of evidence: we have already seen a significant increase in the amount of evidence, and the range of that evidence, when we're prosecuting any individual case. I presented a statistic recently to my colleagues on the executive committee that we could expect, conservatively, the amount of data per case to increase by at least 50% in the next five years. And I do think that was a conservative estimate. So both of those things are on the cost side of the ledger, if you will.
But as you've suggested, we're also really interested in the way that we can use AI as a force for good. So if you look across our value chain, it all starts with the quality of the information that the investigating officer from the police provides us, and then our ability to make a timely and accurate decision on whether or not to charge that case. So there's potential for AI application in simply making sure that that evidence is coming to us in good quality, and that it's labelled in a way that enables our caseworkers and prosecutors to get to the heart of that case really quickly.
So it's important to stress that we aren't using AI in decision making. We don't use any form of technology to assist us in decision making. But there is a significant number of activities that we do in common with anyone in the legal sector, and I'd hazard a guess in most sectors too, which are currently done by people but could be done by computers: making sure that documents are well labelled, that they are sent to the right people at the right time, that information is in structured data formats, and that, where we are seeing particularly changing levels of demand, we're getting the right cases to the right people at the right time to make great decisions. So it's about being really clear, throughout our value chain, about the things that we do which we're uniquely good at and which we need people to lead, and where we can use machine learning and artificial intelligence to free up those people to have the most time possible to make great decisions.
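
For technically minded listeners, here is a minimal sketch in Python of the kind of labelling-and-routing automation Matthew describes. Everything in it, the document categories, the keywords and the team names, is a hypothetical illustration of the pattern, not a real CPS system; a production version would use trained models rather than keyword rules.

```python
# Hypothetical sketch: keyword-based labelling and routing of incoming
# case documents. The categories, keywords and teams below are invented
# for illustration; they do not reflect any real CPS workflow.

# Invented label -> keyword rules for classifying a document.
LABEL_RULES = {
    "witness_statement": ["witness", "statement", "i saw"],
    "forensic_report": ["forensic", "laboratory", "dna"],
    "cctv_log": ["camera", "footage", "timestamp"],
}

# Invented label -> team routing table.
ROUTING = {
    "witness_statement": "casework-team",
    "forensic_report": "disclosure-team",
    "cctv_log": "evidence-review-team",
}

def label_document(text: str) -> str:
    """Return the first label whose keywords appear in the text."""
    lowered = text.lower()
    for label, keywords in LABEL_RULES.items():
        if any(keyword in lowered for keyword in keywords):
            return label
    return "unclassified"

def route_document(text: str) -> dict:
    """Label a document and decide which (hypothetical) team receives it."""
    label = label_document(text)
    return {"label": label, "route_to": ROUTING.get(label, "triage-team")}

if __name__ == "__main__":
    sample = "Witness statement: I saw the vehicle at roughly 22:40."
    print(route_document(sample))
    # -> {'label': 'witness_statement', 'route_to': 'casework-team'}
```

The point of a toy like this is the shape of the workflow: classification happens upstream, routing is a lookup, and anything the rules cannot place falls through to human triage rather than being guessed at.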

Adam: As ever with these things, you've raised both some concrete points around the use of the technology and some really interesting questions. On the concrete side, for me, it's quite clear, as you rightly said, that you've got an industry and a supply chain that's incredibly information rich but perhaps time poor, as you put it. The ability to use these types of technologies to recognise patterns in data, to label items and to file them in the right places is no doubt going to help your caseworkers release some of their time and be much more productive or efficient in the way that they go through their job. But on the other side, as you rightly said, where we start to look at things like judgment or semantics, which is something that within law you have to deal with every day, clearly it's an area of concern for you, in terms of how you use or don't use this type of technology.
I suppose if we talk about that for a minute, an interesting dynamic for me would be if we took something like video analytics within the justice system. A lot of the CCTV footage that you would now be reviewing as evidence will be digital, and you could pass that through an AI tool to find bits of information that may be relevant, and therefore speed up the time to gather that evidence out of the footage. At the same time, though, by not making sure that that footage is watched end to end, do you miss something? And how do you balance the efficient processing on one side with making sure that someone has taken the time to review the holistic evidence in front of you? Does that make sense?

Matthew: Yeah, absolutely. All successful transformations succeed because an organisation is able to consider where they're really adding value. So let's bring that into our own lives and think about something that you have recently done which involves an element of case management. A good example is if you've been unfortunate enough to have to make an insurance claim, or maybe you've been through a mortgage process; that's a form of case management. And even just getting someone to repair something in your home too often turns into an act of case management. Those things are difficult, but not because of the moment when you actually get the thing you wanted. In a remortgage process, getting the money into your account is not difficult; the moment when the plumber fixes the tap, that's not the difficult bit; the moment when your car's towed away and replaced with an alternative because it was smashed, that's not the difficult bit. It's arranging the time to speak, having all of the information you need available in the right format at the right time, the time that's taken to verify the information you send, and so on and so forth.
And just as all of those experiences in our own lives are difficult, so much of the criminal justice system is also about a process by which we get the right people together at the right time with the right evidence. That isn't always straightforward, and it becomes more complex as, obviously, we're often talking about particularly complex forms of crime. That core piece of value, making sure that the right people are looking at the right evidence and assessing it comprehensively, accurately and fairly, is super important.
So on the one hand, we're looking at AI in order to make sure that we are removing from people the routine, the mundane, the stuff which only enables good decision making. And then secondly, being really thoughtful about how we can use AI and ML to make sure that we're giving people the best possible prospects of being able to look at evidence in the most comprehensive way, to come up with a good decision.
And so we've certainly got more questions than answers at the moment. I think it's really important to go into this without any grand claims for what is going to be achieved, and with that really curious mind, like you would have as an entrepreneur trying to find an opportunity.
But then also, as a convener of a system, to have that kind of responsibility: to know that we're asking the right questions, to know that we are fundamentally bringing the public with us on that journey, and to have the confidence of knowing where we need to continue to have people making the right decisions. There's a phrase that I'm sure your listeners will have heard before about having a human in the loop, and it's an easy thing to talk about. But we also know from our own lives how easy it is just to assume that the output from a computer is right. How often is it that if you're driving from A to B and your preferred mapping technology says 'actually, this is a quicker route', you just click accept, right? You just believe that it's got that right, because it's convenient to do so. So technically you're still in the loop because you're driving the car, but you just relentlessly follow the directions being set out by a disembodied voice.
So we need to be really mindful about how we make sure that a human is genuinely in the loop, that they're still actively making decisions, and that they're aware of their own biases and also those that a computer might be presenting to them.

Adam: So you've talked a little bit today around the use of automation and AI technology to support the review and case management arena for your operators, to try and create that efficiency when building case files and managing evidence. Are there any specific projects, AI tools or technologies that have been particularly effective so far? How far have the CPS got? What's next on the agenda?

Matthew: There are a few things I'm really excited about. So what are some examples? In common with lots of organisations, we are exploring the kind of general-purpose technologies that plug into our core productivity tools: what are the benefits of using AI to produce better meeting notes, of converting long documents into shorter summaries, of turning a 40-page business case into a PowerPoint. So all of that kind of general-purpose stuff.
There are then some specific use cases we're looking at in terms of how we can make sure that we've got high quality evidence that's been correctly labelled, that's been redacted in the right way, that's been disclosed in the right way. And all of that is being recorded.
And then, when we're helping a prosecutor build a picture of what's happening in a case from the various evidence that they receive, which might be body-worn footage, witness statements or summaries from the investigating officer, how can we give them the tools to get to the heart of that case?
So, how might we use a ChatGPT-like experience to enable a prosecutor to look at a case that's just been sent from the police, to get to the heart of the matter in order to make a good charging decision? I'm putting that as a question because we don't fully know the answer yet. And it's also possible that in technology terms you can do this supremely well, but we also need to be mindful of cost, obviously, and also the environmental impacts of this stuff.
If you assume that finding out an answer from a large language model takes four times the processing power of a search engine, is it right that we use general-purpose technologies to summarise meeting notes? Is it a climate impact price worth paying? I don't know; it's important we work these things out.
So all of those are things that we're actively looking at at the moment. We don't fully understand all of the implications yet. So we need to do that work to make sure we are driving productivity wherever we can, to make sure we are bringing stakeholders with us, and to give our staff the confidence in how we're using technology to drive the best possible outcomes in the criminal justice system.
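
To make the trade-off Matthew raises concrete, here is a back-of-envelope calculation. Only the four-times ratio comes from his framing; the per-query energy figure and the usage volume are invented placeholders purely for illustration.

```python
# Back-of-envelope sketch of the energy question raised above. The
# per-query figure and query volume below are hypothetical placeholders,
# not measurements; only the 4x ratio comes from the conversation.
SEARCH_WH_PER_QUERY = 0.3                    # assumed energy per search query (Wh)
LLM_WH_PER_QUERY = 4 * SEARCH_WH_PER_QUERY   # the "four times" assumption
QUERIES_PER_DAY = 10_000                     # assumed organisation-wide usage

extra_wh_per_day = QUERIES_PER_DAY * (LLM_WH_PER_QUERY - SEARCH_WH_PER_QUERY)
extra_kwh_per_year = extra_wh_per_day * 365 / 1000

# With these invented inputs: 10,000 queries/day x 0.9 Wh extra is about 3,285 kWh/year.
print(f"Extra energy vs search: {extra_kwh_per_year:,.0f} kWh per year")
```

Whether a few thousand kilowatt-hours a year is a price worth paying for better meeting notes is exactly the kind of question Matthew says the CPS still has to work out.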

Adam: That's a fascinating viewpoint on how you're not only exploring the use cases but also some of the questions that you're trying to answer as the CPS, as a public body: not just what is the best implementation of this technology, but should we be using the technology at all, given, as you rightly said, some of its sustainability impacts?
I'm going to start from the beginning rather than the end of that chain of thought. You talked extensively about traditional AI, but also more of the generative AI aspects, and posed a bit of a question there: could we get to a personalised case file driven by generative AI to support a prosecutor? At a technological level, certainly those things are possible through the combination of publicly trained large language models and a private data set brought in via a retrieval-augmented generation (RAG) approach. You could see that happening, but maybe not yet; I think there's a lot of work that needs to be done, both on your side and the technology side, to bring that to a really, truly personalised case file.
But certainly in the legal profession, and I'm sure you're seeing it too, the ability to translate quite heavy legal language into plain English, or even to bring out the subtext or nuances of legal contracting for a prosecutor or a lawyer or whoever it might be, is technology that's being driven today through generative AI. The first step is definitely getting individuals and organisations such as the CPS engaged in those research projects, to make sure that what comes out the other end of those large language models is not just succinct but also correct, because, as you said earlier, we can't get the law wrong. The law has to be right. If a human fails, we accept that we aren't perfect; if the technology fails, we question why it wasn't perfect. Is that not still the case?
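
For readers curious about the RAG approach mentioned above, here is a deliberately tiny sketch of the pattern in Python: retrieve the passages from a private corpus most similar to a question, then assemble them into a prompt for a language model. The corpus is invented, the similarity measure is a toy bag-of-words cosine rather than real embeddings, and no actual model is called; it illustrates the architecture, not any CPS system.

```python
# Toy illustration of retrieval-augmented generation (RAG): retrieve the
# most relevant passages from a private corpus, then pass them to an LLM
# as context. The corpus below is a hypothetical placeholder.
from collections import Counter
import math

CORPUS = [
    "Witness statement describing the incident outside the station.",
    "Forensic summary covering fingerprint analysis of the recovered item.",
    "Officer's report noting the timeline of events on the evening in question.",
]

def bag_of_words(text: str) -> Counter:
    """Crude stand-in for an embedding: word counts of the lowercased text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list:
    """Return the k corpus passages most similar to the query."""
    q = bag_of_words(query)
    ranked = sorted(CORPUS, key=lambda doc: cosine(q, bag_of_words(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble retrieved context plus the question for a language model."""
    context = "\n".join(f"- {passage}" for passage in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    # In a real deployment the prompt would be sent to an LLM; here we print it.
    print(build_prompt("What does the forensic summary say?"))
```

The grounding step is the whole point: the model is asked to answer from the retrieved private material rather than from whatever it absorbed in training, which is what makes the "succinct but also correct" requirement Adam describes at least testable.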

Matthew: So firstly, this isn't the sort of thing that we can simply resolve on a theoretical level, and I stress we're continuing to learn through doing in the CPS, because that's the smartest way that we can come to the right answers. Secondly, we need to do that in a way which continues to build public confidence. But we also know that our public isn't a single group. Historically, there have been all sorts of groups within British society that have had lower levels of confidence in the criminal justice system, and we will need to be particularly mindful of their experiences and their concerns about the kinds of biases that could potentially be amplified through the careless use of technology. Nor is this something that we can simply hope doesn't happen and adopt a wait-and-see position. Why? Because AI is already having an impact on the criminal justice system. It's simply about whether we can also, in my view, explore it as a force for good, as well as recognising the dangers that it will bring to the system.

Adam: 100%. And from an industry perspective, I find that your role at the CPS puts you on a tightrope that maybe others don't have to walk. It's great to hear that you're considering both sides of that coin. But as you said, that force for good has also got to be something worth exploring. You talked quite a lot about the victim support and witness support sides of that, where some parts of the public not only lack an understanding of the law but also need to have trust in the legal system. We also work in one of the most multiculturally diverse countries in the world, so the opportunity to communicate in a manner, and in a language, that brings common understanding has surely got to be an opportunity for the CPS?

Matthew: Yeah, absolutely. The criminal justice system is also one that spans all ages. And it's important not just that justice is done, but that justice is seen to be done. So having the confidence of all participants in the criminal justice system, in the evidence that's being presented and in the veracity of that evidence, and making sure that applies to people with varying levels of confidence in, and understanding of, digital technology, will be vitally important. And that's as true for a citizen sitting on a jury as it is for an investigating officer in a police force, or indeed for one of our own prosecutors. So it's about giving people the tools to think critically about this, in order to harness it where it is an enabler of effective decision making and of greater productivity, but making sure that we truly understand the risks, not least of hallucination, which we've seen play out in various stories over the last year, or of models that can free themselves of the guardrails that they've been given and then give a different answer to the one intended by the organisation. All of those issues will, I'm sure, be worked through over the coming years.
It's important we actively look at these, because otherwise we will be failing in our duty to serve the public in the best possible way; but it's also important that we don't march ahead and simply accept everything with an AI label on it as something that must therefore be smarter, better, cheaper.

Adam: Yeah, makes perfect sense. I suppose in your role as a CDIO, and tying this back a little to what you said about sustainability as well, given everything that we've talked about today, how are you looking at the training and development side for your own internal staff? How do you make sure that they're effectively using the AI tools you've developed so far, and how do you continue that ongoing education around what they should be considering when looking at AI?

Matthew: So I hope that what's coming across in our conversation is that I'm trying, firstly, to create a space. I'm trying as CDIO not to have all the answers, not to have preconceived use cases that we're applying this to, and not to have preconceived areas where we simply couldn't apply AI. That's really important, because the IT function can't simply be a supplier to the business, particularly in the exploration and utilisation of AI; but nor can it be the provider that the organisation outsources this thinking to. Sitting around me are people far smarter than I am, far better trained in the law than me, and so I'm trying to hold a space of uncertainty so I can show my colleagues where we could be applying AI and machine learning, to demonstrate why this could be a good thing, but not to do that in a way which makes any of this a fait accompli, or removes our collective need to think critically and look at this holistically.
In terms of training and skills particularly, we need to start, as we were discussing previously, with the value that we provide as an organisation and our purpose. One of the fantastic things about having people making decisions on cases is that those people can be challenged on those decisions. We have an internal process, and there are external processes too, where our decision-making is challenged, and that's all well and good.
Imagine a scenario in which a decision was made, and in order to work out why that decision was made we would have to raise a support ticket with a multinational company. That would be pretty bizarre, right? It would significantly dent public confidence in the process, because we would have effectively outsourced that element of our decision making. And so as CDIO, I'm trying to hold both these opportunities, the things that I do and don't know, and the strengths and weaknesses of the technology and of how our business works, so that we can come together, not just across the organisation but with our citizens and with the wider criminal justice system, to make sure we are exploring this at pace and responding to the opportunities that have been set out, but doing so thoughtfully, responsibly and, it goes without saying, in a way entirely consistent with the law.

Adam: You explained that perfectly. I think we've seen throughout the history of technology that when technology is implemented best, it's the interlinking of technical expertise and business leadership that pushes it forward. So the fact that you're trying to create that safe space, as you say, that allows your team and the business to collaborate around the use of these types of technologies sounds fascinating, and I'd love to be part of one of your workshops, to be honest, Matthew, because it sounds like a really interesting conversation.

Matthew: So obviously some of this is, as you'd expect, whiteboards and Post-it notes, but at the moment we are having a significant impact by being able to talk about this through working prototypes, using those both to identify the risks and to have the right conversation with stakeholders about what it means. It's through the shared language that working prototypes help you develop that you can make sure you're having high-quality conversations across an organisation.

Adam: Yeah, it's much easier to see it if you can touch it and feel it as well, right, and have that debate? Thinking about that, and you talk about working prototypes, we've covered today where you are now and some of the pitfalls you've also got to consider. But if we look at that AI for good, and maybe as a closing question, if you think about the future of AI in the CPS, what are your goals for integrating it into the operation, and is there any future advancement out there that you're most excited about in terms of what it might do for the CPS going forward?

Matthew: It's a great question, and I fear I might match it with an incredibly prosaic response. I want the CPS to be able to continue to be the best of itself. It was an organisation created in the midst of the 1980s, not unlike myself, and it does a really important, though probably less visible, part of the criminal justice system. It does that because it's got really smart people who can look objectively at the case for a prosecution and then present the right case to a jury, and it will always be an organisation that relies on the skills, knowledge and experience of our staff. So with the developments in crime, the position facing UK plc in terms of what we're likely to have to invest in public services over the next three to five years, and, as I said, the driver of the increasing amounts of data associated with an individual case for prosecution, using AI to make sure we can be the best Crown Prosecution Service that we can be is my goal. I'm not, I hope, carried away with what the technology can achieve. We're not doing this simply to be able to say that we have leveraged one technology or another. This is all about making sure that our teams, and by extension the public, can have confidence in the information that they've got, and that they can work through it efficiently, productively, safely and to a high quality, in order to make sure we're prosecuting the cases that the country expects.

Adam: Thank you Matthew, I think that was very eloquently put, and prosaic, as you say, but it tells a very interesting story. The CPS, as you rightly said, relies on its experts, its prosecutors, to deliver criminal justice excellence to the public. And to do that, you need supporting technology that augments that expertise rather than trying to replace it, and I think what you've outlined today is a way forward for the CPS to do exactly that, in a playground, or a safe playground I should say, where you can try these things out before you use them with the public. So for me, it's been a fantastic and fascinating conversation. I really look forward to seeing where this goes and where your journey takes you. By all means, if you're interested, come back and see us in six, nine, 12 months' time and tell us how things are progressing.

Matthew: Yeah, like I say, it's a really interesting conversation. And I think everyone listening today will have their own questions and their own thoughts on the topic; it's such a hot topic at the moment. I don't have answers. I don't believe that the Crown Prosecution Service has answers, but I believe that's a good position to be in, because we will continue to explore this together. We'll continue to respond to public expectation, and to the opportunities afforded to us by technology.

Adam: Thank you. And thank you to everybody out there listening. Please stay tuned for our next pod in a few weeks' time.

[END OF AUDIO]