What is the EU AI Act?

The EU AI Act entered into force on August 1, 2024. Its goal is to foster responsible AI development and deployment. It contains provisions to make sure that AI systems used in the EU are safe, transparent, fair and environmentally friendly. Similar to the General Data Protection Regulation (GDPR), the EU AI Act's impact reaches far beyond Europe: any organization that develops or deploys systems that can be used in the EU will have to comply. 

This is the most comprehensive AI legislation released to date, and more is expected to follow. As a global firm with clients worldwide, we are tracking ongoing legislation and developments to ensure that our own risk matrix, internal governance and other processes adapt to and comply with evolving regulatory requirements. We are also dedicated to supporting our clients in aligning with these high standards through strategic guidance and tailored solutions.

Read more about the EU AI Act on the EU’s website.

What does this mean right now for your organization?

While there is a relatively high level of awareness around the EU AI Act, many organizations are still uncertain about the concrete operational implications and details.  

For larger organizations in both the public and private sectors, with more complex systems and processes, it is especially critical to take the EU AI Act into consideration now, so that best practices and compliance measures can be embedded from the beginning.

Our participation in the EU AI Pact and Pledge – and how you can benefit 

The European Commission (EC) has set up the EU AI Pact, a framework for frontrunners to directly engage with the Commission and a wider network to prepare for the AI Act’s implementation. 

The goals of the EU AI Pact are to build a common understanding of the EU AI Act's objectives; to take concrete actions to understand, adapt to, and prepare for its future implementation; to share knowledge and increase the visibility and credibility of safeguards that demonstrate trustworthy AI; and to build additional trust in AI technologies. We plan to engage in further workshops and exchanges with the EC and its network, bringing our own best practices and learnings to this effort and back to our clients.

CGI has signed the EU AI Act Pledge as part of our wider strategic engagement with the EU AI Pact. 


How we can help with the EU AI Act

While many companies have reviewed the EU AI Act from a legal perspective or have a high-level understanding of its requirements, our expertise and strength lie in translating it into practical terms for our clients.

Our strategic advisory services, both for the EU AI Act and for the responsible use of AI, draw on our deep client relationships and industry knowledge, and on our understanding of the governance, process and technology changes, and the change management, that follow. The AI best practices we advise on and implement for our clients are the same ones we follow in our own internal governance and processes.

Our differentiators include:

Practical approach to responsible AI, for ourselves and our clients 

We believe trusted AI is a business imperative. We use our Responsible AI Principles & Framework in our own operations as well as to help clients move from AI exploration to implementation. We updated this framework to align with EU AI Act requirements. 

CGI’s business solutions, developed with and for our clients 

Our intellectual property-based solutions portfolio, including our AI-powered intelligent solutions, is governed by our Responsible AI Framework and AI Risk Matrix. 

AI, cybersecurity and risk advisory services, anchored in operational strength 

Our AI Advisory services connect business strategy, operating models, responsible AI, and human-centric transformation and change to help clients build the foundation for EU AI Act compliance.

Additionally, our cybersecurity and risk advisory services are anchored in our end-to-end capabilities, from diagnosis, solution identification and design through to control implementation and checks. These services are rooted in decades of experience securing critical systems in complex environments, many of which will fall within the EU AI Act's high-risk scope.

Learn more about our AI and cybersecurity advisory services. 

Cases in point

Following are examples of how CGI combines local proximity, regulatory and industry knowledge with global resources to help clients achieve compliance objectives:

  • Developing risk models for clients in the Netherlands for publishing algorithms in a public database, as prescribed in the EU AI Act's transparency obligations.
  • Enhancing AI transparency for Dutch government organizations by increasing employee competency in dealing with the ethical implications of systems under development.
  • Updating risk and impact analyses for a French public entity to take into account the specific features of systems that use AI.
  • Conducting technical audits for a French energy company, covering its AI systems, how AI is used, and the technical foundations cloud providers use to host these systems.
  • Developing medical software in line with evolving EU regulations. In Finland's medical sector, much of our software portfolio falls under the EU AI Act's high-risk application areas, such as electronic medical record systems (OMNI360), emergency medical care management and medical device management, where new requirements can be readily integrated and checked against our existing standards.
  • Developing an EU AI Act-compliant digital companion to assist German judges in searching and structuring court files and related data, for example by proposing similar cases and showing relevant laws and regulations.
Advancing AI governance

We actively engage in AI governance initiatives and discussions with regulatory authorities, such as:

  • Signing Canada’s Voluntary Code of Conduct for Artificial Intelligence

  • In the UK, advising industry regulators on responsible AI, participating in dialogue ahead of UK central government AI legislation, and responding to consultations on AI regulation issued by the 14 UK industry regulators that the Department for Science, Innovation and Technology (DSIT) has tasked with defining policy for their industries

  • In the U.S., assisting the State of Georgia in researching and recommending AI legislation aimed at fostering a robust AI-driven economy while ensuring the safety and security of citizens against inherent technology risks

Helping clients achieve trusted and tangible outcomes

We recognize that while CGI provides comprehensive advisory and implementation services, the ultimate responsibility for compliance with legislation such as the EU AI Act rests with our clients themselves. Our role is to enable our clients to develop and execute their chosen strategies and solutions effectively, ensuring they remain fully in control of their compliance journey. This collaborative approach ensures that our clients can leverage CGI’s expertise while maintaining full decision-making authority. Together, we strive to build resilient institutions that meet the highest standards of regulatory expectations. 
