CGI’s From AI to ROI podcast series features expert discussions on how AI drives change across organizations and how to achieve trusted outcomes. In this episode, host Helen Fang is joined by CGI experts Dr. Diane Gutiw and Andy Donaher to explore the topic of AI for good across three dimensions: environmental, social, and governance. Diane and Andy share insights and real-world examples of how AI helps to address societal challenges, improve organizational performance, and enhance well-being. This episode is part 2 of 2. Listen to part one here.

For the latest information on CGI’s responsible use of AI principles and practices, read our 2024 Responsible AI Report, or visit Responsible AI on cgi.com.

Key takeaways from the episode


1. AI is transforming social care and health outcomes by enabling early intervention.

AI is being used to address critical social issues, such as the opioid crisis and child protection. By analyzing historical data and identifying intervention points, AI can help prevent crises before they occur. Similarly, AI is being applied in education to improve student outcomes by identifying key success factors at an early stage.

2. AI can not only increase efficiency but also reduce stress in the workplace.

AI-powered knowledge management tools, such as CGI’s BidGen, reduce stress and improve productivity by minimizing time spent on repetitive, low-value tasks. Research has shown that generative AI not only increases efficiency but also enhances employee engagement and satisfaction, making AI a key driver of workplace transformation.

3. AI-driven solutions are improving healthcare accessibility and diagnostics.

AI is enabling remote healthcare solutions, such as CGI’s Connected Care Module, originally developed for deep space travel but now being tested and deployed for rural healthcare and extending telemedicine. Additionally, AI-assisted radiology solutions, like Helsinki University Hospital’s Head AI, improve early detection of critical conditions such as brain bleeds, with the potential to expand early screening for cancer and other life-threatening diseases.

4. AI governance and literacy are critical to responsible and effective AI adoption.

As AI tools evolve rapidly, AI governance structures and approaches help ensure responsible AI use, data security and compliance with jurisdictional regulations while maximizing AI’s benefits. AI literacy is also crucial for ensuring a strong AI ROI.

5. The rapid advancement and acceleration of technologies, including quantum computing and agentic AI, can create win-win situations.

Advanced technologies such as generative AI, agentic AI, traditional AI and quantum computing are being applied in innovative ways to drive both business performance and societal benefits—a win-win outcome. For example, quantum computing has enabled the discovery of new materials that could reduce lithium usage in batteries. As AI capabilities evolve, organizations that can leverage these technologies effectively will gain a competitive edge.


Discover how AI helps enterprises and government organizations achieve tangible and trusted outcomes.

Also, visit our AI home page for insights, resources, and news on AI-driven strategies.

Read the transcript 

AI’s role in social impact and well-being
AI in workplace productivity and employee experience

Andrew Donaher

Yeah, I mentioned the one in the UK too around council housing that was similar, but another is around general use in office and knowledge management. So, there's a great paper written a little over a year ago now by MIT called "Generative AI at Work." It was one of the first actual scientific studies of the use of generative AI at work. They studied, I believe, 5,675 call center agents at a global software company. And yes, there was an improvement in productivity; I believe 14% was the number they found. But one particular aspect of that study really struck me, and I don't think there's been enough research on it to date: it found that people who used generative AI for this particular solution also had an increase in engagement and an increase in employee satisfaction. I found that super interesting.

And then here at CGI, we built something internally that we call BidGen. It supports us in our business development activities and responding to requests for proposals and creating documents and information for clients. It's a generative AI solution that we built here.

We're using it to try and improve win rates. We're trying to use it to improve our delivery for customers. And yes, we're using it to help improve the productivity of our partners here at CGI. But one of the things that was super impactful for me is that when we released this, I actually had people reach out and say, "Thank you." They said thank you because they were saving so much time, whether it was evenings or weekends, and removing that non-value-add time actually decreased the amount of stress in their lives. All the hunting and pecking and searching and phoning is replaced by the ability to surface the information they need in an effective and efficient way, so that they can then use and analyze that information objectively, with a human in the loop and responsible AI, and focus on value-add activities. Removing the non-value-add activities increases their engagement, their enjoyment and their satisfaction, decreases their stress and the time spent on those non-value-add things, and increases efficiency inside the organization. To me, that's an example of a win-win scenario that I don't think we're emphasizing enough, to be honest with you.

AI-driven healthcare innovations

Diane Gutiw

Yeah, and with the number of people that are going to be retiring, the increased capacity demands and the resource shortage I think we're going to hit, this is a fantastic tool to improve the value of the work people are doing without increasing the amount of time they need to put in. So, that's a great example.

I can throw one more out there which I think is a really fun one, which is getting off the Earth and into space. CGI was involved in a really interesting project with the Canadian Space Agency addressing deep space travel and the need for disconnected healthcare and advanced expert advice while in space. So, we developed the Connected Care Module, which provides integrated AI solutions and deep space telemedicine when you are not able to quickly connect with an expert.

And while space is really fun to talk about, the practical social aspect is that the same module is being tested and deployed for rural healthcare and extending telemedicine. The difference there, similar to what Andy was saying about increasing quality of life, is that anything with telemedicine reduces the need to leave your home and your family to get medical support. It's increasing the services available in remote areas with something that was originally designed for deep space but has great applicability on Earth. So, a fun one to talk about. There's a great video up on cgi.com on that example.

In addition to rural healthcare, another area where health can really be impacted is the ability to see things that are really hard to see with the human eye. A good example is the work we're doing with Helsinki University Hospital on a solution called Head AI. With a human in the loop, it allows the radiologist to do the first read on brain imaging, and the AI is then brought in to do a second read. It's seeing things and patterns that are really hard to detect with the human eye. It's helping validate what the radiologist finds, as well as helping find things that are really hard to see, or minute changes, and improving quality over time as the tool provides that assistant expert advice to the radiologist.

The potential of this goes beyond brain bleeds, which are largely fatal if not detected early; we're seeing a 98% detection rate for brain bleeds, and that is going up as the models are fine-tuned. The potential of this technology is to do things like reduce the age of screening for different cancers, because we are able to see minute changes much, much sooner. We'd be able to detect the potential for life-changing diseases like cancer and provide intervention, treatment protocols and a much better patient outcome. And I think that's really where, in healthcare, in diagnoses, in treatment protocols, in personalized medicine, we're seeing huge advances in what we're doing with AI.

Helen Fang

Thanks so much, Andy and Diane. Those are such great examples. And I think it's really important that we've covered the range of AI for good, from the individual level all the way up to, Diane, your examples in space and healthcare. I'll add anecdotally, to Andy's point about how BidGen has helped people focus on the things that really add value: I also have access to the internal enterprise versions of a lot of these generative AI tools, and for me, too, they've reduced the parts of my work that used to be very stressful and repetitive. They've given me a lot more energy to work on really interesting topics and things that are new and exciting, and that's also something we're measuring internally. In the UK, for example, they've heard back from their neurodiverse community about improvements in how connected those colleagues feel to each other and in their communication levels. So, we've really been wanting to measure and talk about those effects of the AI tools as well.

AI governance and literacy: A critical component for responsible AI adoption

Helen Fang

So, on the G, for governance, we got a really interesting question the other day, Diane: how do you think companies can address the introduction of new AI tools and the speed at which the technology is evolving? We've seen this most recently with DeepSeek, and it also relates to how companies think about AI governance on a practical level.

Diane Gutiw

Yeah, it's a topic that I think Andy and I and our colleagues globally are hearing about from our clients every day. They're realizing that we can't just generate small pilots continuously and adopt tools without some thoughtfulness around it.

So, first of all, let's define the scope of AI governance. It provides a framework to help make decisions on the types of tools, technologies and uses, as well as helping organizations understand where it makes sense to invest, what benefits they're going to receive and how to stay ahead of this rapid evolution. It's really the foundation we're sliding underneath all of these technologies, which are coming out at a rapid pace.

Most organizations, CGI included, set up AI governance committees or councils, as this isn't really just one role; we need to look at all aspects. You need to look at security, privacy, legal and legislative requirements and compliance guidelines, as well as what makes sense in your existing ecosystem of technologies. So, you need folks from your IT and CIO organizations, as well as experts who understand AI: a team able to work very quickly and nimbly to understand new technologies as they come out, AI capabilities being built into existing tools, what makes sense to use and what is compliant with different jurisdictional guidelines. It's a big area of decision-making for organizations, as well as governments, in how we adopt these technologies and use them for all of the things we've been talking about in this podcast.

If you look at the example of DeepSeek, this is a model that surprised a lot of people. It came out very quickly, and because it was open source, it was very accessible and available. Organizations need to look at this through multiple lenses. On the security side, what potential impacts are there, both for the use of data and for the ingestion and retention of information being shared with the model? You need to look at the legal terms for how that information is going to be used, reused and shared, to make sure you're comfortable with that. And then you need to see: does this fit into your technical ecosystem, and what's the benefit to your resources in using the tool?

So, it's a good example of why you need a model in place to be able to make decisions, have conversations and be prepared. The other thing we've learned is that risk mitigation is really important. CGI has our own risk matrix that we use for internal and external solutions, but it can't answer all of the questions, because for some of these tools it's hard to narrow down what they're going to be used for; general-purpose AI tools are so incredibly vast in their capabilities. So, hand in hand with that, there also needs to be a terms of use, so that organizations are informing their staff on how to use the tools and what not to do.

And along with that, and I know this topic is close to your heart, Helen, AI literacy is critical: making sure there is a really good understanding across your organization of the power and potential of these tools, the best practices for their use, as well as the pitfalls and the challenges.

Those are all of the things that fall under AI governance, and why they're important: making sure we get the best benefit out of the investment in AI, that there's an understanding of what the compliance requirements are, and that organizations are able to communicate that across the enterprise. That's really where I see the benefit in building out an AI governance structure.

Helen Fang

Thanks, Diane. I think you raised a couple of really good points there, on AI literacy and the same for governance. We've seen from our own experience, and from working with our public and private sector clients, that it's not a topic that fits neatly under one team. AI literacy doesn't belong to just a learning and development team, and AI governance doesn't belong just to the data privacy or security team. It really takes a coalition of stakeholders at all levels to understand what using AI means for the organization, where some of the risks may lie, and who needs to understand which aspects. Having this in place is actually an accelerator for seeing the benefits of AI, for adopting it quickly and for making sure your people are enabled.

We've spoken, for example, and presented at the European Commission AI office on the literacy topic. And we are helping a lot of our clients with these areas too, so that they can really get the most out of their AI tools and adoption.

A follow-up question: a lot of the companies you work with operate across multiple regions, including our own. Diane, do you have some best practices for handling AI data sovereignty, governance and decision-making while still keeping in mind the different regional regulations?

Diane Gutiw

Thanks, Helen. Yeah, that's probably the biggest challenge we've had as a global organization: how do we come up with an approach that we're able to provide oversight and guidance on? In a multi-regional, multi-jurisdictional ecosystem, our decision, and what we often advise our clients, is to come up with one best-practice rule. At CGI, we're striving to exceed the most stringent legislative compliance guidelines globally.

That is the bar we work within: having one process for the development and integration of AI, for internal and external purposes, that monitors and manages risk throughout, from project initiation through to operations and beyond. Making sure the AI solution remains relevant and accurate, and that we have oversight into the types of advice it gives, is really critical.

So, that's the approach we took. It definitely is a challenge staying on top of a rapidly changing legislative ecosystem while, at the same time, AI technologies, capabilities and solutions are rapidly evolving. The third area we really need to stay on top of is future-proofing: making sure whatever guidelines we have will have some longevity and that we're anticipating what's coming next. Staying closely tied to our vendor partners, the hyperscalers, has really helped in that process and in understanding what we need to stay on top of.

Major takeaways for 2025

Helen Fang

Thanks, Diane. So, for the closing question, I know it's pretty tough to make generalizations, as so much differs by region, industry and topic, and as Diane just mentioned, it's evolving so quickly. But if you have one major takeaway around this topic of AI for good, or more generally, where do you see this going in 2025? Andy?

Andrew Donaher

I see this accelerating, to be honest with you. As I mentioned, people are starting to understand that this is a win-win situation: the more they do that's helpful to optimize organizational performance, people's productivity, asset efficiency and utilization, and carbon footprints, the better it is for their organizational performance. And I think about the opportunities in front of us around generative AI, agentic AI, traditional AI and quantum computing. Microsoft, for example, was able to identify a promising new battery material in about a week by using quantum computing to evaluate 32 million potential material combinations, and that could decrease the amount of lithium used in a lithium battery by up to 70%. These are the types of things in front of us, and I think it's just a great opportunity and a great time that I feel very, very optimistic about.

Helen Fang

Thanks, Andy and Diane. Any last takeaways?

Diane Gutiw

Yeah, the one thing I would add is that where we've seen gaps, and where we're hearing about gaps from clients, such as "I need the quality to improve to be able to use this for decision-making," we're watching industries fill those gaps very quickly. And I think that's really where agentic AI, which is the buzzword of 2025 so far, comes in: building agents that can automate some of the things that were making people more resistant, and bringing this into a business context.

So, for 2025, in my mind there are three things. First, agentic AI is going to take off, and we're going to have models that, with a single prompt, can serve multiple functions autonomously, with human oversight, and then provide advice. Second, we're going to see a lot of these pilots and early explorations of AI move into production and become operationalized. And third, and I've said it before: it's not AI that's going to take your job. It's people who know how to work with AI, and how to leverage it for benefits, who are going to be in super high demand in 2025 and onwards.

Helen Fang

Thanks so much, Diane and Andy, for your time. It was a really great conversation with you both.