- Introductions and what is agentic AI?
Fred Miskawi (00:00)
Hello everyone and welcome to today's podcast on AI agents and the future of AI. My name is Fred Miskawi. I lead our global AI innovation expert services, which includes our AI-led development acceleration program, and I'll be your host for this session today. I'm joined by two amazing CGIers, Andy Donaher and Cheryl Allebrand.
Andy, a little bit of an introduction.
Andrew Donaher (00:22)
Hi everyone, thanks Fred. Pleasure to be here with you and Cheryl as always. My name's Andy Donaher. I'm the Vice-President of AI and Data Analytics for CGI Canada, based in Vancouver, but I have the privilege of working with the folks on our global team quite a bit. So, thanks for having me, Fred.
Fred Miskawi (00:38)
And Cheryl?
Cheryl Allebrand (00:38)
My name is Cheryl Allebrand. I work in our AI and Automation practice here in London. I'm part of the UK and Australia team and have been working with language-based AI for a bit more than seven years now, so it's a pretty exciting time for me. I really enjoy getting to work with our global team.
Fred Miskawi (00:56)
And great to have both of you on this podcast. We've got a lot to talk about around the topic of agentic AI. So, to kick things off, let's give you maybe a little bit of background. Agentic AI represents kind of this transformative approach to artificial intelligence or maybe this new layer of abstraction over what we've been seeing over the past two years, where agents are designed to operate with a certain degree of autonomy.
And if we really take a look at definitions, from our perspective, AI agents are intelligent, semi-autonomous systems that perceive, decide and act. And I think the important word here is acting, through tooling. The most capable agents can learn, adapt and produce dynamic outputs. So, that's a topic we're going to be covering today in this podcast. I'm going to ask you a question, Andy.
Can you share your thoughts on exactly what agentic AI is?
Andrew Donaher (01:48)
Well, thanks Fred. I guess I'm first on the spot here today. That's great. Agentic AI is the ability for agents to coordinate their work and to execute on a goal with a variety of tasks. And that's really the key difference. Like if you think about the difference between RPA (robotic process automation) and generative AI, and the evolution of that, that's what we've been talking to clients a lot about.
In the traditional sort of RPA world when we talk about automation, you're looking at interacting with interfaces and following sort of, for lack of a better word, fragile processes where if things change, it can be extremely difficult to manage them. And then we got into generative AI, and we were able to use it to do small tasks and sort of coordinate mini chatbots. But now we're able to step back and say, "Okay, I need to be able to analyze data from all of these databases or work through my entire supply chain and execute my planning," and we can orchestrate that across various agents. And so that's the real difference: being able to give it that goal and a general framework of a set of tasks, and it can achieve the goal. That's really the evolution.
Cheryl, did you want to enhance that or correct me anywhere along the way there?
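To make that orchestration pattern concrete, here is a minimal sketch in Python, not any particular framework: the Agent class, the plan and the lambda stand-ins are illustrative assumptions, and in practice each agent's run function would call an LLM with its own tools.

```python
# A minimal sketch of goal-driven agent orchestration; all names are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    skill: str                 # the one area this agent is specialized in
    run: Callable[[str], str]  # takes a task description, returns a result

def orchestrate(goal: str, plan: list[tuple[str, str]], agents: dict[str, Agent]) -> list[str]:
    """Route each task in the plan to the agent specialized for it."""
    results = []
    for skill, task in plan:
        agent = agents[skill]
        results.append(agent.run(f"Goal: {goal}\nTask: {task}"))
    return results

# Illustrative wiring: real run functions would call an LLM with tool access.
agents = {
    "data":     Agent("data-analyst",   "data",     lambda t: f"[data pulled for: {t}]"),
    "planning": Agent("supply-planner", "planning", lambda t: f"[plan drafted for: {t}]"),
}
plan = [("data", "analyze inventory across databases"),
        ("planning", "draft the supply plan from the analysis")]
print(orchestrate("optimize supply chain planning", plan, agents))
```

The point of the sketch is the shape, not the specifics: one goal, a general framework of tasks, and specialized agents coordinated toward it.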
Cheryl Allebrand (02:57)
Absolutely not correct you, but I will agree with you that having come out of the conversational AI world, we have so much more freedom in a way. It comes with much less work and a much better experience for people using it. So, we have a lot more capabilities now that we're able to build in this agentic way than we did when we first started using LLMs (large language models) with the chatbots. So, it's just a really exciting time.
- Deployment considerations for AI agent ecosystems
Fred Miskawi (03:24)
And on that topic, Cheryl, I do have a follow-up question. Given the fact that these frameworks are growing in capacity and capability very quickly, how do we ensure that we still have transparency over what's happening, and that we keep a human in the loop or on the loop?
Cheryl Allebrand (03:38)
That is an important part to build in. And I think that that's the thing that people do need to remember. Even though we're going with this agentic approach, where it can take a lot more initiative itself and carry on across steps, we still need to have people as part of these processes, in part because no matter how much testing you do, the results aren't exactly replicable, right? So it puts a lot more onus on our testing and also, in a way, on our proofs of concept, where we go in and find out that just because something's meant to be able to do something, it doesn't necessarily, or it only does with a bit more work.
I can't remember who it was who said it, but basically any well-designed technology is going to feel like magic. And I think that that's part of the experience people had with LLMs early on was just, “Wow, this feels like magic.” And now when it comes to injecting these into business processes, we can still end up at that place, but we need to make sure that we have people checking our magic.
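One simple way to "check the magic" when results aren't exactly replicable: run the same non-deterministic step several times and route it to a person when the answers disagree. This is a minimal sketch, assuming a hypothetical call_model function in place of a real LLM call.

```python
# A minimal sketch of a repeated-run consistency check for a non-deterministic step.
import random
from collections import Counter

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call; deliberately flaky here.
    return random.choice(["42", "42", "42", "41"])

def run_with_review(prompt: str, runs: int = 5, agreement: float = 0.8) -> tuple[str, bool]:
    answers = Counter(call_model(prompt) for _ in range(runs))
    best, count = answers.most_common(1)[0]
    needs_human = (count / runs) < agreement  # low agreement -> human in the loop
    return best, needs_human

print(run_with_review("What is the answer?"))
```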
Fred Miskawi (04:41)
Going into kind of the same theme, Stan Lee gave us that oft-repeated quote, "With great power comes great responsibility." And these frameworks are bringing a lot of capabilities we didn't have before: the ability to automate frameworks, processes and procedures that were very difficult to automate in the past, where humans had to be involved.
With that kind of power comes a certain amount of responsibility in deploying it in a way that's safe, secure and trustworthy. And I think that brings us to the topic of deployment considerations, Cheryl, as we're looking into putting that power in the hands of our IT friends.
Cheryl Allebrand (05:19)
I think I mentioned the testing end of things. And importantly, bringing the people along on the journey. It's not just about the human-centered design piece and the training piece.
I think that we need to help people understand how the agents think, "think" in quotation marks, so that they understand how to work with them and how to check. Because, as we've noticed, a lot of people are coming from that mental model of having gone to Google to get an answer, being able to skim around it and knowing that the results coming back came from a specific place.
You do need to build that into your responses as well when you're building it from the chat perspective. But also, there's that onus on us now to check the work, which I think people had forgotten a little bit as Google got better at its search results. So, it's really just teaching people how to do what's being asked of them in terms of the checking.
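One way to build that source attribution into a chat response, sketched in Python with illustrative names; the Passage structure and the formatting are assumptions, not any specific product's API.

```python
# A minimal sketch of attaching sources to an answer so users can check its origin.
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    source: str  # e.g. a document title or URL

def answer_with_sources(answer: str, passages: list[Passage]) -> str:
    citations = "\n".join(f"[{i + 1}] {p.source}" for i, p in enumerate(passages))
    return f"{answer}\n\nSources:\n{citations}"

print(answer_with_sources(
    "Example answer text.",
    [Passage("Relevant excerpt...", "example-source-doc.pdf")],
))
```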
- Quality, transparency and governing a human-agent ecosystem
Fred Miskawi (06:16)
And you're putting your finger on a very key concept in this space, and that's quality. A lot of technical debt can be generated by this technology, at least in its current state. It will improve over time, but quality is going to be front and center in anything that we do in this space this year and moving forward. Andy, you've been involved in shaping our approach to this technology at CGI. Can you help us walk through a little bit of how we're integrating agentic AI, and maybe talk a little bit about our human-agent partnership management framework?
Andrew Donaher (06:49)
I think one of the important things that I really want to emphasize is that agentic AI helps us to do more, better and faster, because what it's helping us to do is remove the non-value-add work, and it allows the humans to focus on the value-add and creative work.
So if you're looking at an agentic AI solution for supply chain management, like we're working on with one of our clients right now, what it enables us to do is increase the breadth of the data from various systems and expedite the ability to pull data together, expedite the development cycles and expedite the requirements gathering. And by doing that, they have more time for thought, more time for creativity and, as Cheryl just mentioned, more time for evaluating the testing.
How many times do we get caught in cycles where you're focused on decreasing your COGS (cost of goods sold) for your supply chain by optimizing your inventory balances or your logistics management? And we get caught so much working through the data management or working through the integrations or working through the development of the model, that we often don't have enough time for as much testing as we want or the project elongates.
And so really, what we're doing is we're focusing on what's that business value we're going after. And that's where this always human in the loop, across the entire SDLC (software development life cycle) and even in the operational execution components within the business, is super important to be able to create these agents.
And it's not one ring to rule them all, right? A reference to Lord of the Rings; I've been saying this one for a while. That's why we create specialized agents. It's kind of like people: you can't be good at everything. We're all pretty spectacular at some things maybe, but you can't be good at everything. So, training these agents and giving them directions in one area helps them to get better at that. And that's really where we're seeing the value: the ability to do more, better, faster, and to help focus on that business value.
Because everything you do is going to involve AI and agents moving forward; it's just going to be a part of it. It's not AI to solve the problem. It's solve the problem and use the AI to help you do that.
Fred Miskawi (08:53)
Thank you, Andy. It's like more, better, faster, more, better, faster, more, better, faster. I try to say it three times fast. I'm originally from France, so it's a little difficult for me, but more value, right? Better value, better quality and faster, delivering that value to market quicker. I love that, more, better, faster.
And part of that, and I think part of what we've learned internally as well is, our roles are changing. They're evolving as this new technology is being deployed. New processes are being automated, which is why that human-agent partnership management framework comes into play that sets guardrails and goals for that management of this ecosystem that includes both humans and agents. Because that's the way we're going to start seeing it as we're looking at the future of AI.
We're going to see these agents as entities that are working in tandem with humans producing value. And as part of that partnership framework, the idea is to continue to keep transparency and visibility over what's happening within the ecosystem. And what we're seeing with this technology is an amazing amount of value, but it requires oversight and a certain amount of maturity to understand how to do it safely, securely and with trust.
And Cheryl, before we move on, is there anything from the deployment side that you think companies should consider when integrating such a framework?
Cheryl Allebrand (10:13)
I love that we tend to have a framework for everything, because it helps you not forget all of these important bits, because there are so many different perspectives. One of the things that we really emphasize for companies early on is that they really refine and define their AI governance.
So, it's a little bit of a give and take between ensuring that that company has the right structure in place and that they already have a bit of maturity around AI, but also then, that they're working with a company that really understands all of the pitfalls and all the considerations and that has all the checklists and frameworks.
Fred Miskawi (10:49)
Working with a company as a partner, right? The important concept of partnership.
Cheryl Allebrand (10:54)
I mean, there's a lot of DIY kind of available with this, and that's part of the fun, and that's what I think what's exciting people. But all of the basic rules still apply. And I think that people forget that sometimes, that you can't just go at something.
Andrew Donaher (11:11)
That really is the key. People, as they start to use these things and understand them better, start to ask more questions about it. And I think that, when you talk about the DIY, you talk about the partnerships, that's absolutely critical.
And talking about the existing processes that you have around risk mitigation, the existing processes and rules and regulations that you have around data privacy, those all apply today. From that perspective, this is not net new. There's some tweaks to make to it, but those foundations are to be leveraged.
You can do a lot yourself, and maybe because we at CGI have done a few of these in different places, we can help you see around a few corners. But I can't emphasize enough that people should start and move and realize the benefit in a risk-mitigated way. Don't just stand still, because you're going to fall behind.
Cheryl Allebrand (12:01)
Wouldn't you say that that data piece is not only still important, but of growing importance? Because one of the things that I love about working with these types of generative AI and whatnot is that you're able to use so much more data than ever before. Previously, there was a lot of curation that went into the data that companies used. It was put into a system. It was selected. It was extracted.
And then now, we're able to use generative AI in order to pull in and pull from so much more data. These agents will be able to cast a much wider net in terms of data and then, as well, in terms of their training data. We don't always know what went into that either. In fact, I would dare say, we don't know for the most part what's gone into that.
Yes, we can do a lot in terms of using it as well to curate that data and make sure that personally identifiable information isn't part of it and all of those things, but it's still another step. It's still something that you need to do even if you're using the new technology in order to enable that.
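A minimal sketch of the kind of curation step Cheryl mentions, scrubbing obvious personally identifiable information before data reaches an agent. The regex patterns here are deliberately simple illustrations; real pipelines need far more thorough checks.

```python
# A minimal sketch of a PII-scrubbing step ahead of an agent pipeline.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def scrub(text: str) -> str:
    # Replace each match with a labeled placeholder so downstream steps
    # know something was removed, and what kind.
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(scrub("Contact Jane at jane.doe@example.com or 555-123-4567."))
```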
Fred Miskawi (13:06)
And Cheryl, given your deep expertise in this space and data quality, what would you recommend to the audience to look into as it relates to leveraging generative AI to improve data quality that you're feeding into these processes that we have and are deploying?
Cheryl Allebrand (13:23)
That's a really good question. I mean, I know some of the steps that we take to make sure we use it properly in terms of data cleansing: looking at the data and then employing it in the cleansing work so that it actually becomes feasible to use a new data source that wasn't really worthwhile or available to us before, because it took so much time and effort to prepare.
It is truly just to consider the source of the data. Consider the regulations that might be in play. But to Andy's point from before, a lot of it is just sticking with the basics, making sure you're still covering the same steps even though you might be executing on them in a different way.
Fred Miskawi (14:12)
Thank you, Cheryl. Andy, what about you? What would you recommend?
Andrew Donaher (14:13)
I think what Cheryl said is so on point. When I'm working with clients or internal teams or friends and colleagues in the industry and they say, "No, no, no. We have to get our data all sorted out first," I'm like, "That's never going to happen." This is a start-to-start dependency. And I can't tell you how many times I've seen data governance programs fail because their goal is to sort out all their data before they start actually delivering something.
And this is the exact same thing. If you're doing a supply chain initiative for optimizing your inventory balances, a call center modernization, a finance process improvement, or if you're building a real-time text-to-SQL engine for analysis across your organization, all of this is start to start.
And we were just working with two different clients. For example, with one, we were building an agentic AI framework to help their analysts query their enterprise data warehouses in real time to create the reports, the analysis and the SQL, and then create the models and recommendations on the fly in real time across all their data.
Well, as we did that, as Cheryl pointed out, data quality is kind of important. And the repeatability of that is kind of important. We actually used the agents to go in and profile all the data for which they didn't actually have definitions or an understanding of what was being used. And 80% of the time, when the agents created the definitions and recommended them to the data stewards, the stewards were like, "Yeah, yeah, that's right. Cool. Move forward."
With another organization, the same thing. But we extended it and said, "Well, let's have the AI agents actually create the DQ (data quality) rules on the fly and then apply them after the humans approve them." And again, it was fantastic, and it just helps to accelerate that cycle. And please, please, if you're out there listening to this, to my project management friends: it's a start-to-start dependency.
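A minimal sketch of the propose-then-approve pattern Andy describes, with hypothetical names throughout: an agent drafts a data quality rule, a human steward approves or rejects it, and only approved rules are applied.

```python
# A minimal sketch of agent-proposed DQ rules gated by human approval.
from dataclasses import dataclass
from typing import Callable

@dataclass
class DQRule:
    column: str
    description: str
    check: Callable[[object], bool]

def propose_rule(column: str) -> DQRule:
    # Stand-in: a real agent would profile the data and draft this rule.
    return DQRule(column, f"{column} must be non-null", lambda v: v is not None)

def review_and_apply(rule: DQRule, rows: list[dict],
                     approve: Callable[[DQRule], bool]) -> list[dict]:
    if not approve(rule):  # the human steward stays in the loop
        return rows
    return [r for r in rows if rule.check(r.get(rule.column))]

rows = [{"sku": "A1"}, {"sku": None}]
clean = review_and_apply(propose_rule("sku"), rows, approve=lambda rule: True)
print(clean)  # -> [{'sku': 'A1'}]
```

In a real deployment the approve callback would be a review queue for data stewards rather than a lambda, but the gate sits in the same place: between proposal and application.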
Fred Miskawi (17:02)
I've got an AI question for both of you, and it's going to be interesting. We talk a lot about agentic AI. Is it really going to solve all of our clients' issues and problems? How do you perceive it?
I'm going to be going over to you, Cheryl.
- The value of agentic AI
Cheryl Allebrand (17:16)
I think that moving forward it will be able to do more than we are able to do today. As we've seen, the technology has just been improving so rapidly. At the same time, we can do a lot today, but it takes actual work.
And I think that when everybody got their first chance to play with ChatGPT, they just asked a question and got a long, detailed answer that was usually phrased in a way that was quite compelling. Everybody just thought, "Okay, well, this is something where I don't need to do any work. I can just ask it all of my questions and it will answer all of my questions."
Now, moving away from the personal productivity end of things and into the agentic world, as a lot of people have simultaneously become somewhat disillusioned with some of the answers, even though the technology is still developing at a rapid pace, a lot of what we're doing is trying to build it into existing business processes.
And that's going to give lots of efficiencies, going back to the more faster, more better, more whatever, more, better, faster. I'm going to need to practice that one. Moving back into that, I don't think that anybody, when they're actually implementing it, is going to feel that it's no work to integrate things into their existing business processes.
And then of course, moving beyond that, we don't necessarily need to replicate the same business processes. So, whenever we're looking at something, we need to look at what the transformative change would be. Are we able to do that using the existing technologies today? If yes, great, let's re-envision, and it will solve things that we weren't previously able to solve.
Now if we're not quite able to do something transformative yet, there's a lot we can do to alleviate the burden on people and to get to more, better, faster.
Fred Miskawi (19:07)
And Andy, in the more, better, faster toolbox, how does it fit in with all these tools that we have at our disposal?
Andrew Donaher (19:12)
So, I'm going to say something that might be a little controversial, but I've been saying it for about two years now: stop talking about AI. It's a tool in the toolbox. Tools are getting pretty cool. Tools are getting to be lots of fun. Tools are getting better at helping us with more, better, faster.
But when we start conversations with clients, it's: what financial lever are you pulling on? Are you trying to drive revenue, to create an enhanced customer lifetime value experience using customer retention and communication optimization? Are you looking at improving your COGS or your SG&A (selling, general and administrative expenses) operationally? Are you looking at asset efficiency with predictive asset maintenance?
And like I said earlier, AI, and most probably agentic AI, is going to be involved in that throughout the entire SDLC and in the operational components for people to use after. But stop talking about it. The most important part to focus on is: how can we improve that asset efficiency through predictive asset maintenance? How can we drive down your COGS by improving your inventory balances and your logistics optimization? Those are the things we need to stay laser focused on, and the agentic AI bit or the traditional AI bit or ABC tool in the toolbox is just a tool in the toolbox. I can't emphasize that enough.
Cheryl Allebrand (20:22)
Same. I've been applauding what you're saying. I agree. I don't want to talk to anyone about AI specifically. I want to talk to them about their goals and their needs, and then we'll talk about where AI fits in.
Fred Miskawi (20:25)
And we've circled that concept quite a bit in this podcast. That's ROI: making sure we have the business case, not automating something that doesn't need to be automated, and looking at these frameworks as tools in the toolbox. The more, better, faster toolbox.
Andy, Cheryl, amazing insights. We definitely have at least another hour's worth of discussion on this topic, but we're going to adjourn for today. Looking back, we've talked a lot about our approach and about what agentic AI is. We've also talked about some of the deployment considerations that need to be taken into account.
The way I look at this is, it's a little bit like an alien technology that we all discovered two years ago and we're all figuring out how to best leverage it. Meanwhile, that alien technology is evolving very quickly, almost daily. So, we have to evolve with it as we move forward.
And that's agentic AI. So, remember, the key to success lies in responsible implementation and thoughtful integration. Thank you for listening, and stay tuned for our next discussion on the future of AI. Thank you, everyone.