Alex Woodward

Alex Woodward

Senior Vice President - Consulting Delivery, Cyber Security, CGI in the UK & Australia

There's lots of exciting discourse on AI and large language models, and the Open Worldwide Application Security Project (OWASP) team have been doing a great job articulating some of the security challenges and benefits around adopting AI as a tool in organisations.

 

What is an LLM?

If you haven’t already come across the term Large Language Model or LLM; an LLM is based on a type of Deep Learning Artificial Intelligence using deep neural networks called transformer models. These transformer models process data sequentially, such as in text and images. They work by statistically analysing relationships in the input data to recognise patterns and generate human-readable output such as pictures and text. This is the sort of technology products like ChatGPT and Microsoft Copilot are built on.

 

What are the risks of AI?

While many of the considerations for information security and data privacy when adopting AI are largely the same as other new technologies, there are a couple of nuances to be aware of. As organisations rapidly adopt AI, the number of incidents is rapidly increasing1, and policymakers are taking a keen interest in what happens next, with many regimes already launching AI codes of good practice or legal instruments2.

Security, especially information security or cyber, is all about understanding and managing risks. It’s typically not possible or economical to try to eradicate them completely, so taking some time to truly understand what your exposure is can reap benefits downstream. This analysis allows you to mitigate them to a level you as an organisation are happy to tolerate. It’s no different for AI. There are potentially significant benefits to the organisation from leveraging this technology, but like most things, there are pitfalls if ill-considered or rushed through.

 

AI security – a balancing act

It is a balance though. You can’t just ban the use of AI; there has to be enough freedom to safely experiment and identify those benefits or competitive advantage will be lost in delay. Staff are likely already experimenting with AI in their daily lives, which may be at home but it’s also frequently through corporate IT, potentially compounding any ‘Shadow IT’ issues they already have. Typically, the initial reaction to this is to block the websites being used, such as ChatGPT, Bard etc., but ironically that pushes the users towards more esoteric, less mainstream LLMs, which increases the risk of something going wrong.

 

How to operate safely

It's better to provide a safer place and some guardrails to staff:

  • Assess the risk of using AI and LLMs in the context of what you’d like to use it for and the data you’d like to use it with.
  • Protect your business by implementing sandboxes for members of your team to experiment in, with appropriate controls and policy supporting them.
  • Operate with confidence through regular reviews of both the technology at play and the regulatory landscape as this is a particularly rapidly evolving field.

There is some great guidance out in the public domain to support organisations on their AI cyber journey, including from NCSC3. The OWASP team has produced the “OWASP Top 10 for LLM Applications,” which aims to describe LLM-related vulnerabilities in the context of organisations building applications and plug-ins based on LLM technologies.

I’d highly recommend reading the full document, which you can find here. Meanwhile, we’ve summarised it into this handy infographic:

CGI's Summary of OWASP Top 10 for LLM Applications

 

How can we help?

At CGI, we help our clients manage complex security challenges with a business focused approach – protecting what is most valuable to them. Please feel free to get in touch if you’d like to discuss the topics in this blog or find out how we can help you with your security challenges.

 


Sources

Stanford University Artificial Intelligence Index Report 2023

Position of the European Parliament on harmonised rules on artificial intelligence and amending Regulations

3 NCSC Intelligent Security Tools

About this author

Alex Woodward

Alex Woodward

Senior Vice President - Consulting Delivery, Cyber Security, CGI in the UK & Australia

Alex leads the delivery of the Security Operations element of CGI’s UK cyber practice with responsibility for Security Operations Advisory, Managed Security Services and Penetration testing functions.