- AI’s role in social impact and well-being
Helen Fang
On the social aspect of ESG, I know you both have touched on this a bit earlier, but what are some additional examples of where you've seen AI have the potential, or actually already be used, to accelerate positive outcomes in our communities?
Diane Gutiw
This is one of my favorite topics, Helen: the use of AI to improve outcomes in social care and health globally. The potential of AI isn't just forecasting or answering questions about what happened. If we look at the opioid crisis, for example, it's not just saying who is affected, where and when they are affected, and what's likely to happen in the future based on the information we have. The power of AI is that we can ask, how can we make a difference for citizens? At what point in an addiction cycle can we make a change in someone's life?
And we lean on all of the data from past events, and from other sectors that are relevant globally, to say when we could intervene, what intervention we could take, and how to personalize the decisions we're making about how we deal with this crisis. We can take that over to the work we're doing in the UK on safeguarding children: how can we better identify children at risk before a family is in crisis?
It's coming out of real crises and traumas that have happened with families in the UK and globally. Again, it's about how we can identify a family at risk and help that family before there is a crisis or a trauma. We've seen this in the education sector, where we've worked with primary, secondary and post-secondary educational organizations to ask what factors in K-12 education impact a positive outcome in post-secondary education, in trades or in employment. How can we better enable kids in elementary programs to do better in high school, and in high school to do better in the world when they leave school?
And again, these are problems where we have so much information. We've invested in the collection of data and in operational systems for over 50 years now, and we can use that data with AI, not just generative AI but traditional AI, to analyze, see patterns and identify opportunities to improve outcomes.
So, when we look at the population that's going to be retiring, with more complex needs for the healthcare system, long-term care and other programs that support citizens, I think we now have a fantastic set of tools that can provide more personalized services, everything from retail to social care to healthcare. Andy, the example that you gave with MIS and the solution was a really great example of using AI.
- AI in workplace productivity and employee experience
Andrew Donaher
Yeah, I mentioned the one in the UK around council housing that was similar, but another is around general use in office work and knowledge management. There's a great paper written a little over a year ago now by MIT called "Generative AI at Work." It was one of the first actual scientific studies of the use of generative AI at work. They studied, I believe, 5,675 call center agents in a global software company. And yes, there was an improvement in productivity; I believe 14% was the number they found. But one aspect of that study really struck me, and I don't think there's been enough research on it to date: people who used generative AI for this particular solution also had an increase in engagement and an increase in employee satisfaction. I found that super interesting.
And then here at CGI, we built something internally that we call BidGen. It supports us in our business development activities and responding to requests for proposals and creating documents and information for clients. It's a generative AI solution that we built here.
We're using it to try to improve win rates, to improve our delivery for customers and, yes, to help improve the productivity of our partners here at CGI. But one of the things that was super impactful for me is that when we released this, I actually had people reach out and say, "Thank you." They said thank you because they were saving so much time, whether evenings or weekends, and removing that non-value-add time actually decreased the amount of stress in their lives. All the hunting and pecking and searching and phoning was replaced by the ability to surface the information they needed in an effective and efficient way, so they could then analyze that information objectively, human in the loop, responsible AI, and focus on value-add activities. Removing the non-value-add activities increases their engagement, enjoyment and satisfaction, decreases their stress and the time spent on those tasks, and increases efficiency inside the organization. To me, that's an example of a win-win scenario that I don't think we're emphasizing enough, to be honest with you.
- AI-driven healthcare innovations
Diane Gutiw
Yeah, and with the number of people that are going to be retiring, and the increased demand and resource shortage I think we're going to hit, this is a fantastic tool to improve the value of the work people are doing without increasing the amount of time they need to put in. So, that's a great example.
I can throw one more out there, which I think is a really fun one: getting off the Earth and into space. CGI was involved in a really interesting project with the Canadian Space Agency addressing deep space travel and the need for disconnected healthcare and advanced expert advice while in space. We developed the Connected Care Module, which provides integrated AI solutions and deep space telemedicine when you are not able to quickly connect with an expert.
And while space is really fun to talk about, the practical social aspect is that the same module is being tested and deployed for rural healthcare, extending telemedicine. The difference there, similar to what Andy's saying about increasing quality of life, is that anything with telemedicine reduces the need to leave your home and your family to get medical support. This is increasing the services available in remote communities, using something originally designed for deep space that has great applicability on Earth. So, a fun one to talk about. There's a great video up on cgi.com on that example.
In addition to rural healthcare, another area where health can really be impacted is the ability to see things that are very hard to see with the human eye. A good example is work we're doing with Helsinki University Hospital on a solution called Head AI. With a human in the loop, it allows the radiologist to do the first read on brain imaging, and then the AI is brought in to do a second read. It's seeing things and patterns that are really hard to detect with the human eye. It's helping validate what the radiologist is finding, as well as helping find minute changes that are very hard to see, and the quality improves over time as the tool provides that assistant expert advice to the radiologist.
The potential of this goes beyond brain bleeds, which are largely fatal if not detected early; we're seeing a 98% detection rate for brain bleeds, and that is going up as the models are fine-tuned. The potential of this technology is to do things like reduce the age of screening for different cancers, because we are able to see minute changes much, much sooner. We'd be able to detect the potential for life-changing diseases like cancer and provide intervention, treatment protocols and a much better patient outcome. That's really where, in diagnoses, treatment protocols and personalized medicine, we're seeing huge advances in what we're doing with AI in healthcare.
Helen Fang
Thanks so much, Andy and Diane. Those are such great examples, and I think it's really important that we've covered the range of AI for good, from the individual level all the way up to, Diane, your AI in space and healthcare. I'll say, anecdotally, to Andy's point about how BidGen has helped people focus on the things that really add value: I also have access to the internal enterprise versions of a lot of these GenAI tools, and for me they've reduced a lot of the parts of my work that used to be very stressful and repetitive. They've given me a lot more energy to work on really interesting topics and things that are new and exciting, and that's also something we're measuring internally. In the UK, for example, they've heard back from their neurodiverse community about improvements in how connected those colleagues feel to each other and in their communication levels. So, we've really wanted to measure and talk about those effects of the AI tools too.
- AI governance and literacy: A critical component for responsible AI adoption
Helen Fang
So, on the G, for governance: we got a really interesting question the other day, Diane. How do you think companies can address the introduction of new AI tools and the speed at which the technology is evolving? We've seen this most recently with DeepSeek, and it relates to how companies think about AI governance on a practical level.
Diane Gutiw
Yeah, it's a topic that both Andy and I and our colleagues globally are hearing about from our clients every day. They're realizing that we can't just generate small pilots continuously and adopt tools without some thoughtfulness around that.
So, first of all, to define the scope of AI governance: it provides a framework to help make decisions on the types of tools, technology and uses, as well as helping organizations understand where it makes sense to invest, what benefits they're going to receive and how to stay ahead of this rapid evolution. It's really the foundation we're sliding underneath all of these technologies, which are coming out at a rapid pace.
Most organizations, CGI included, set up AI governance committees or councils, as this isn't really just one role; we need to look at all aspects. You need to look at security, privacy, legal and legislative requirements and compliance guidelines, as well as what makes sense in your existing ecosystem of technologies. So, you need folks from your IT and CIO organizations, as well as experts who understand AI, and you need a team that can work quickly and nimbly in understanding new technologies as they come out, the AI capabilities being built into existing tools, what makes sense to use and what is compliant with different jurisdictional guidelines. It's a big area of decision-making for organizations and governments in how we adopt these technologies and use them for all of the things we've been talking about in this podcast.
If you look at the example of DeepSeek, this is a model that surprised a lot of people. It came out very quickly, and it was open source, so it was very accessible and available. Organizations need to look at this through those multiple lenses. On the security side, what potential impacts are there, both for the use of data and for the ingestion and retention of information being shared with the model? You need to look at the legal terms for how that information is going to be used, reused and shared, to make sure you're comfortable with them. And then you need to see whether it fits into your technical ecosystem and what the benefit is to your resources in using the tool.
So, it's a good example of why you need a model in place to make decisions, have conversations and be prepared. The other thing we've learned is that risk mitigation is really important. CGI has its own risk matrix that we use for internal and external solutions, but it can't answer all of the questions, because for some of these tools it's hard to narrow down what they're going to be used for; general-purpose AI tools are so incredibly vast in their capabilities. So, hand in hand with that, there also needs to be a terms of use, so that organizations inform their staff on how to use the tools and what not to do.
And along with that, and I know this topic is close to your heart, Helen, AI literacy is critical: making sure there is a really good understanding across your organization of the power and potential of these tools, the best practices for their use, and the pitfalls and challenges.
Those are the things that fall under AI governance, and why they're important: making sure we get the best benefit out of the investment in AI, that there's an understanding of what the compliance requirements are, and that organizations are able to communicate all of that internally. That's really where I see the benefit in building out an AI governance structure.
Helen Fang
Thanks, Diane. I think you raised a couple of really good points there, on AI literacy and the same for governance. We've seen from our own experience, and from working with our public and private sector clients, that it's not a topic that fits very neatly under one team. AI literacy doesn't belong to just a learning and development team; AI governance doesn't belong to just a data privacy or security team. It really takes a coalition of stakeholders at all levels to understand what using AI means for the organization, where some of the risks may be, and who needs to understand which aspects. Having this in place is actually an accelerator for seeing the benefits of AI, for adopting it quickly and for making sure your people are enabled.
We've spoken, for example, and presented at the European Commission AI office on the literacy topic. And we are helping a lot of our clients with these areas too, so that they can really get the most out of their AI tools and adoption.
A follow-up question: a lot of the companies you work with, including our own, operate across multiple regions. Diane, do you have some best practices for handling AI data sovereignty, governance and decision-making while keeping in mind the different regional regulations?
Diane Gutiw
Thanks, Helen. Yeah, that's probably the biggest challenge we've had as a global organization: how do we come up with an approach that lets us provide oversight and guidance in a multi-regional, multi-jurisdictional ecosystem? Our decision, and what we often advise our clients, is to come up with one best-practice rule. At CGI, we're striving to exceed the most stringent legislative compliance guidelines globally.
And then, that is the bar we're working within: having one process for the development and integration of AI for internal and external purposes that monitors and manages risk throughout, from project initiation through to operations and beyond. Making sure the AI solution remains relevant and accurate, and that we have oversight of the types of advice it's giving, is really critical.
So, that's the approach we took. It definitely is a challenge: staying on top of a rapidly changing legislative ecosystem at the same time as rapidly evolving AI technologies, capabilities and solutions are coming out. The third area we really need to stay on top of is future-proofing, making sure that whatever guidelines we have will have some longevity and that we're anticipating what's coming next. Staying closely tied to our vendor partners, the hyperscalers, has really helped in that process and in understanding what we need to stay on top of.
- Major takeaways for 2025
Helen Fang
Thanks, Diane. So, for the closing question: I know it's pretty tough to make generalizations, as so much differs by region, industry and topic, and as you just mentioned, Diane, it's evolving so quickly. But if you have one major takeaway around this topic, AI for good, or more generally, where do you see this going in 2025? Andy?
Andrew Donaher
I see this accelerating, to be honest with you. As I mentioned, people are starting to understand that this is a win-win situation: the more they do to optimize organizational performance, people's productivity, asset efficiency and utilization, and carbon footprints, the better it is for their organization. And think of the opportunities in front of us around generative AI, agentic AI, traditional AI and quantum computing. Microsoft was able to identify a new material combination in about a week by using quantum computing to evaluate 32 million potential combinations, one that could decrease the amount of lithium used in a lithium battery by up to 70%. These are the types of things in front of us, and I think it's just a great opportunity and a great time that I feel very, very optimistic about.
Helen Fang
Thanks, Andy and Diane. Any last takeaways?
Diane Gutiw
Yeah, the one thing I would add is about where we've seen gaps and where we're hearing about gaps from clients, such as, "I need the quality to improve to be able to use this for decision-making." We're watching industries fill those gaps very quickly. And I think that's really where agentic AI, which is the buzzword of 2025 so far, comes in: building agents that can automate some of the things that were making people more resistant, and bringing this into a business context.
So, for 2025, in my mind there are three things. First, agentic AI is going to take off, and we're going to have models that, with a single prompt, can serve multiple functions autonomously, with human oversight, and then provide advice. Second, we're going to see a lot of these pilots and early explorations of AI move into production and become operationalized. And third, and I've said it before: it's not AI that's going to take your job. It's people who know how to work with AI, and how to leverage it for benefits, who are going to be in super high demand in 2025 and onwards.
Helen Fang
Thanks so much, Diane and Andy, for your time. It was a really great conversation with you both.