In this new podcast episode in our AI for Industry series, our host, banking expert Sean Devaney, explores the potential of artificial intelligence in the financial services industry.
He’s joined by Charlotte Wood, Global Head of AI and Innovation at leading asset manager, and CGI client, Schroders. She discusses the huge opportunity they see with AI, how they are using it, and her reflections on AI productivity at Schroders.
To find out more about applying AI in the real world, listen to other podcasts in our AI for Industry series or visit our AI page.
If you’d like to discuss AI, get in touch with our AI experts or Sean Devaney.
- Transcript
Sean Devaney: Hello and welcome to this CGI podcast looking at AI in industry. Today we'll be looking at AI in financial services. My name is Sean Devaney and I look after our banking and financial market strategy in the UK for CGI. We're joined today by Charlotte Wood from Schroders, who is the Global Head of AI and Innovation. Charlotte, if you'd like to just give us a little bit of background about yourself.
Charlotte Wood: Yeah, of course, and thank you for having me to talk to you today. So I head up the AI and innovation teams at Schroders. The AI part of the remit I've taken on in the last two and a half years or so, coinciding with the advent of ChatGPT and all of the excitement around generative AI. In that part of my role, I'm responsible for bringing AI technologies into Schroders. I have a team that's building central AI tools and solutions for our employees to use, and I also work with leaders across different parts of our business to think about how they could use generative AI to gain more value in the different parts of the company.
I've actually been at Schroders for around seven years. Prior to taking on the AI remit, and still as part of my role, I focused on innovation, which is really looking at what emerging technologies are going to be relevant for our industry, how they might affect different parts of Schroders, and how we want to respond to that. So when ChatGPT came along and everyone was so excited about AI, it fit really naturally into my existing role, so I've taken that on as well.
Sean: I think today it'd be great if we could focus on some of the areas where, collectively, our organisations are using generative AI; give some real-world examples of how it's being used and where it's delivering benefits inside our businesses, and then look a little at some of the challenges we've seen in implementing AI as we go through.
We went on a journey of looking at where in the business we could get the most benefit out of it. One of the initial use cases was around code development and the more IT-focused DevOps squads, things like that. We've had real success in deploying it to those groups, who are seeing anywhere between 30 and 50% improvements in their productivity over time. We've then looked at how we expand the use of those tools into softer areas of the business. And we had one piece of great feedback from someone in our organisation who is dyslexic and has ADHD, and they said that using those generative AI tools was a game changer for them in their workplace. They were much more confident in the type of things they were producing. It took away some of the challenges they had, particularly around their dyslexia, and made a real difference to the way they work. And it's not about the final product coming out of these tools; it's about having a starting point, something to keep going with and to add that human quality to. Is that similar to Schroders' journey? How have you gone through that?
Charlotte: Yeah, so I think we think about our use of AI almost in levels, and it does map to some of what you were talking about there. Probably at the bottom, and the most widely consumed level, is the use of AI in productivity tooling. That could be ChatGPT or Microsoft Copilot. We have an internal AI assistant called Genie, which means we can keep our data secure and deploy the tool in the way that we want to. So that's where users have AI at their fingertips; they don't have to be technical, and they can use it in their day-to-day work to help chip away at the tasks they have to do. Someone in marketing might use it to help them create content. Someone in technology might use it to help them code. Somebody in our investment teams might use it to pull information out of documents more quickly. So it's very user directed, but it can be massively helpful, and I think that's some of what you mentioned there, in terms of helping to write content, for example. Then we have a layer up from that, which does require a little bit of technical build, which is where we're putting AI into our business processes and our workflows.
So in different parts of the company, we looked at which workflows generative AI actually gives us a really good opportunity to improve. Maybe we're moving a lot of content around, or we're pulling lots of information out of documents; that presented an opportunity to use AI to do it instead of humans and massively increase the efficiency of those tasks. An example of that is our ESG team, where we're looking at sustainability metrics for companies: does a company have a net-zero target, what is that target, what are the diversity levels on their board, all those sorts of things. Having an analyst go through and gather all of that information for all the companies that we cover is very labour intensive.
So now we have a tool that uses AI to produce a first draft of those scorecards for those companies, which has halved the amount of time it takes an analyst to create them. And exactly to your point, it still takes them half the time, rather than none, because we're still asking them to review it. They've still got to go through, look at the sources and make sure that the answers are sensible. But it's much, much faster than them doing it from scratch. So we've got quite a few examples like that where we're already seeing benefits delivered across the company.
And then the final level, which again I think maps back to what you were talking about, is how AI could help us really differentiate our services and the products that we produce. We're an investment company, so that's maybe thinking about our investment processes, or about how we interact with our clients and provide investment advice to them. All of that is quite future facing; we're definitely not about to use AI for all of those processes, but we're doing some research in those areas to see how far it could go. That goes to the core of our business and to thinking that there could actually be quite a significant industry impact from this technology. And you, as an IT company, are already doing that in terms of how you provide services to your clients. So I think, whatever the industry, there's a huge opportunity there for the tech.
Sean: And it's interesting. We're an IT company, so our first place to deploy this was in a technical environment, and we've seen some really good benefits come out of that. But we're also looking at it in more of our customer-facing environments. We run a service centre in Bridgend in Wales. It's a five-star rated service centre, so it's a pretty efficient operation as it is, but even there we've managed to introduce AI tooling, taking those things that still end up as customer-facing, service-agent-handled incidents and using AI to get to the resolution of those incidents much more quickly than we had done previously. Even in that environment, which is already pretty efficient, we're seeing somewhere around 10% to 15% efficiencies in call handling time. That's got a benefit for us, sure, because reducing call handling time makes the process more efficient and reduces cost, but it also has a knock-on effect for the clients. Every minute that we save on our end is also saving that client a minute on their end, or a business process that is down is fixed and stable again much more quickly. So we're seeing the opportunity not just to improve processes internally but actually to make a difference to our client base as well, in much the same way as you're looking at how you would use AI tooling to improve your investment advice and that kind of outreach and output to your customers.
So I'd be interested to explore a little where you think it's going in the future. At the moment it's very much a cost-saving and efficiency exercise; I think if you look across the board at most organisations, they're still using these generative AI models to make processes more efficient, save costs and so on. It's early days for generative AI, but where do you think it's going in the future?
Charlotte: Yeah, so I guess a couple of things. I would say that at the moment, yes, we're looking at efficiency, but I'd probably call it productivity, because this technology makes processes much more efficient and it can take away some tasks from humans so that they can go and work on something a bit more value added. And I think it really depends what you want to do with your business area. Do you want to do the same at a lower cost? Then you can think of it as an efficiency play. But actually, in certain parts of our company we've got growth targets and we're looking to expand. So if you think about it in the right way, it can almost be a revenue-generating exercise, because it means you can take on more business without having to scale up the costs in the same way. So we try not to talk about it purely as efficiency. It's a business decision, really, whether you want to make it an efficiency gain or a revenue and productivity gain.
I think at the moment there are also some use cases where we may not be achieving a cost saving, but it means that we can do things that we weren't previously able to do. For example, looking at something like the earnings transcripts across all the companies that we monitor. They're just huge documents; earnings calls are really long. If you're covering those companies, you kind of have to choose which calls you're going to join and actually listen to yourself, and which ones you might analyse afterwards. The ability to now take all of those earnings transcripts, which are publicly available and which we have access to, and analyse them at scale gives us a whole new data set that previously wasn't really viable, and certainly not with human analysis alone. So all of that's quite exciting, and we're exploring the opportunities that we might have there.
The technology is coming on so fast, and next year we'll be thinking about the additional reasoning capabilities that models will have. At the moment we're going through the 12 days of OpenAI, where they're unveiling new things every day, which is really testament to how fast things are moving. The technology right now is super impressive, and actually our implementation of it is still catching up with what it could do. So given that we're still quite early doors in implementing it throughout our business, and that the technology is continuing to develop, I think you're just going to see the benefits accelerate going into next year and beyond.
Sean: Yeah, it's interesting. It was 2022 when the OpenAI models first came out; it feels like five minutes ago. It's not very long at all, and those things have moved along so much in the intervening couple of years. One of the challenges we had internally was persuading people not to think of it as Google on steroids and put in the queries they would put into a search engine; it's not the same thing, and it's better to view it as more of an assistant. There was an interesting piece of research done recently by Finextra, one of the finance magazines, looking at, amongst other things, the future of AI. One of the things they found was that only about 24%, a weirdly specific number, of generative AI implementations were secure. So a lot of organisations are still using open models. Obviously, Schroders has got its own internal version; you mentioned Genie, the internal AI tool. How have you gone about communicating to your audience inside the business that they need to be secure with these models?
Charlotte: I guess part of it comes down to the use case, right? We have some guidelines, which we came out with pretty much immediately after ChatGPT was released, around the use of public AI services, and we've actually blocked the public version of ChatGPT on our network now because we didn't want people putting in any Schroders data in contravention of our guidelines. We have mandatory guidelines that people have to follow about using those services. From a Schroders data perspective, we have very low tolerance for people putting any of that into publicly available services. We have a strategic relationship with OpenAI, and we're also working with our existing tech partners like AWS and Microsoft, and our model usage there is all under contract, with a lot of protections within it. For Genie, our AI assistant, no data is stored outside of Schroders at all. All of the data is retained by us, which is really important when we want people to be able to use AI for a lot of different tasks, some of which might involve quite confidential data.
And in terms of guidelines for people, it's a new space for a lot of our employees. People are learning more and more as they use the tools, and I think they're becoming more familiar with the fact that you can't just use it as a Google-style search engine. We set up a responsible AI working group right at the beginning of all of this, and that's got pretty senior representation from our compliance, legal, policy, risk and InfoSec teams, all coming together to really think carefully about our usage of AI. They feed into all of the guidance that gets given to our employees, be that for public AI services or for Genie. We have quite strict guidelines around how people use Genie, and around any other AI use cases that we're investigating.
We also have a Schroders AI academy, which we launched about six months ago, to give people that training, going from AI fundamentals all the way through to how to build with LLMs and some of the more engineering-focused sides of things, and we've had pretty good uptake on it. I think about half the company has gone and completed one of the pathways, which I hope people have found helpful. But I really don't think there's a substitute for just using it, so I think the most important thing we did was giving people a safe way to access it. Because if we hadn't, I'm sure they'd have found their own way.
Sean: 100%. And as a technology company, we have that problem in spades, right? Our people are often interested in whatever the new technology is and want to use it. So we've put guardrails in place and trained people so that they understand how to use the tools, and understand the implications of using the open versions versus something that sits in a closed ecosystem. We've taken exactly the same approach as you have: all our data is internal and doesn't go outside the organisation. Keeping those guardrails in place, and explaining to people how they're able to use the tools, how they can facilitate business benefit and how they can get more efficient in their day job, is super important. But I agree, the only real way to find out the best way for you to use it is to get out there and use it in your everyday work.
You mentioned the source of data and where you get information from. We're making sure that whenever we produce anything that goes out to a client, it's gone through some kind of check to make sure it is not entirely generated by AI, and that we have actually produced a piece of work that is a CGI-generated piece of work rather than something that was just generated by a tool. We think it's quite important to balance the use of the tool with the expert view that our clients are asking for in the production of the data we give them. We think that's a really important barrier to the overuse, or the irresponsible use, of AI. And I'm sure you're doing a similar thing at Schroders, keeping the way that people can use those tools, and the way they can give information out to third parties, fairly tightly controlled.
Charlotte: Yeah, the way we try to encourage people to use these tools, and everything that's built into Genie, is that if you put a document into Genie, or you point it at some files in one of your folders and ask some questions, it always gives you the source material for each component of its answer. It might pull information from five different places in order to answer that question, but you can see those five different components and the source material behind each of them. So it makes it really easy for the human in the loop to check it, because very, very clearly, the user of these AI tools is still responsible for the output and for making sure it's correct before it goes to a client or gets used in any other part of the company, or in a decision-making process, or anything like that. So I think part of it is also just making it really easy for the user to do the right thing.
I do think it's important as well, and we've been having a lot of conversations with clients and doing surveys, to understand how our clients feel about the use of AI, and where they would and would not be happy for us to use it. In some anecdotal conversations, I think to some extent I've been a little bit surprised by how happy clients are for us to use AI in certain processes, because ultimately, I think they just want the best service. They want the best information. They want us to be making the best possible decisions based on the insights that we have, and they want us to be giving them a really good client experience. And to the extent that we can use AI in areas that will add value and hopefully improve their experience in the end, I think people are relatively permissive about that. They definitely wouldn't want us to run into any examples where people felt that a service that is really high touch and very thoughtful has actually just all been farmed out to a computer and hasn't maintained the levels they expect. I don't think anyone wants that, but we are trying really hard to understand how our clients feel about this and making sure that we don't overstep.
Sean: And I think that's a really important distinction. We've found the same: when we've talked to clients about how we want to use AI tooling in their operation, or in the development of products or services for them, they're pretty much universally very open to the idea. But I think one of the things that's really key is making them aware that that's what you're doing. It's not about using AI tools to pretend in some way that you're doing something you're not able to do. It's about making your clients understand the journey you're going on, and using these tools to make the output you give them better, to make their processes better and to make their bottom line better at the end of the day.
So I think it's fascinating how quickly we've got to where we are. These models have only been around a couple of years now, and already they're in fairly wide usage. We do a lot of work in the public sector, traditionally one of the more conservative industries that we work with, but even those organisations are often at the forefront of AI implementation now, and they are certainly happy that we are using AI tools to help them deliver better outcomes for their projects and programmes.
Thanks, Charlotte, that's been a really interesting discussion. I think the future of this is going to be really interesting over the next few years, as these models improve and their usage gets wider and wider. We're already seeing some really great efficiencies and benefits out of them, and I think, as you rightly said, the next stage is what business benefit we get out of them, what the revenue generation possibilities are, and how we can use them to build whole new businesses going forward.
Thank you all for listening to this podcast today. I hope you found it interesting. You can find the rest of the series of podcasts on our website, cgi.com, or wherever you get your podcasts from. Thanks very much.
[END OF AUDIO]