Cybersecurity strategies to mitigate AI risks

This is the third article in our series on cybersecurity risks in AI adoption, based on the whitepaper developed by our colleagues in Grant Thornton US, which you can download in full.

The use of artificial intelligence continues to spread at a staggering speed. Companies worldwide have adopted and implemented AI in solutions that are reshaping industries through improved efficiency, productivity and decision-making. However, many organisations have integrated AI into their business processes more quickly than they have updated their security strategies and protocols. Your risk, technology and cybersecurity leaders must find, understand and mitigate these exposures.

Download Grant Thornton's whitepaper 'Control AI cybersecurity risks'

To mitigate the cybersecurity risks in new AI solutions, organisations should review and update their existing cybersecurity programme to safeguard data and systems from both inadvertent mistakes and malicious attacks.

At a minimum, organisations should consider the following when building security and privacy practices in the age of AI:

Data governance

Effective data governance, which ensures that data is properly classified, protected and managed throughout its life cycle, is critical to the effective use of AI technologies. Implement secure data management and governance practices to help prevent model-poisoning attacks, protect data security, maintain data hygiene and ensure accurate outputs. Good governance can include:

  • Policies and procedures: Review and amend existing policies and procedures to define the necessary AI-specific security requirements, designate roles to oversee AI operations and ensure implementation of the security guidelines.
  • Roles and responsibilities: Establish policies with roles and responsibilities for governing data, along with defining requirements for documenting data provenance, handling, maintenance and disposal.
  • Data quality assessments: Regular data quality assessments help identify and remove potentially malicious data from training datasets in a timely manner.
  • Data validation: Data validation techniques like hashing can help ensure that training data is valid and consistent (see the sketch after this list).
  • Identity and access management: Identity and access management policies can help define who has access to data, with appropriate access controls to help prevent unauthorised modifications.
  • Acceptable data use: Acceptable data use policies can outline what data can be used and how. Each data classification (such as public, internal, confidential or highly confidential) should include its uses and restrictions pertaining to AI technologies. Policies should also include procedures for users to follow if they find a restricted data type in an AI solution.
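
To illustrate the hashing point above, here is a minimal sketch of an integrity check that compares training-data files against a trusted manifest recorded at ingestion time. The file names and the JSON manifest format are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch: detect tampering in training data files by comparing
# SHA-256 hashes against a trusted manifest recorded at ingestion time.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large datasets don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest_path: Path) -> list[str]:
    """Return the files whose current hash no longer matches the manifest."""
    manifest = json.loads(manifest_path.read_text())  # {"file.csv": "<hex digest>", ...}
    return [
        name for name, expected in manifest.items()
        if sha256_of(Path(name)) != expected
    ]

if __name__ == "__main__":
    # 'training_data_manifest.json' is a hypothetical manifest name.
    tampered = verify_manifest(Path("training_data_manifest.json"))
    if tampered:
        print("Integrity check failed for:", ", ".join(tampered))
```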

Threat modelling

Conduct threat-modelling exercises to help identify potential security threats to AI systems and assess their impact. Common threats to models include data breaches, unauthorised access to systems and data, adversarial attacks and AI model bias. Modelling threats and their impacts gives you a structured approach for taking proactive measures to mitigate risks. Consider the following activities as part of your threat modelling:

  • Criticality: Document the business functions and objectives of each AI-driven solution, and how they relate to the criticality of your organisation’s operations. This establishes a criticality baseline, so that controls are commensurate with the importance of the AI application, and determines how thorough the threat model needs to be.
  • Connections: Identify the AI platforms, solutions, components, technologies and hardware, including the data inputs, processing algorithms and outputs. This will assist in identifying the logic, critical processing paths and core execution flow of the AI that will feed into the threat model, and help educate the organisation about the AI application.
  • Boundaries: Define system boundaries by creating a high-level architecture diagram, including components like data storage, processing, user access and communication channels. This will help you understand the AI application’s data and activity footprint, threat actors and dependencies.
  • Data characteristics: Define the flows, classifications and sensitivity for the data that the AI technology will use and the expected output. This will help determine the controls and restrictions that will apply to data flows, as you might need to pseudonymise, anonymise or prohibit certain types of data.
  • Threats: Identify potential threats for your business and technologies, like data breaches, adversarial attacks and model manipulation.
  • Impacts: Assess the potential impact of identified threats, and assign a risk level based on vulnerability, exploitability and potential damage (see the sketch after this list).
  • Mitigation: Develop and implement mitigation strategies and countermeasures to combat the identified threats, including technical measures like encryption, access controls or robustness testing, along with non-technical measures like employee training, policies or third-party audits.
  • Adaptation: Review and update the threat model on an ongoing basis as new threats emerge or as the system evolves.
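
As a concrete illustration of the risk-level step above, here is a minimal sketch of a threat register that scores each threat from exploitability and impact. The 1-5 scales, the multiplication-based score and the thresholds are illustrative assumptions rather than a standard methodology.

```python
# Minimal sketch: a threat register that assigns a risk level from
# exploitability and impact scores. Scales and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    exploitability: int  # 1 (hard to exploit) .. 5 (trivially exploitable)
    impact: int          # 1 (negligible) .. 5 (severe)

    @property
    def risk_score(self) -> int:
        return self.exploitability * self.impact

    @property
    def risk_level(self) -> str:
        if self.risk_score >= 15:
            return "high"
        if self.risk_score >= 8:
            return "medium"
        return "low"

# Hypothetical entries for an AI-driven solution.
register = [
    Threat("Training-data poisoning", exploitability=3, impact=5),
    Threat("Prompt injection via user input", exploitability=4, impact=4),
    Threat("Model theft through exposed API", exploitability=2, impact=4),
]

# Review the highest-risk threats first when planning mitigations.
for threat in sorted(register, key=lambda t: t.risk_score, reverse=True):
    print(f"{threat.name}: {threat.risk_level} (score {threat.risk_score})")
```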

Access controls

To control access to your AI infrastructure, including your data and models, establish appropriate identity and access management policies with technical controls like authentication and authorisation mechanisms. To define the policies and controls, you need to consider:

  • Who should have access to what AI systems, data or functionality?
  • How and when should access be re-evaluated, and by whom?
  • What type of logging, reporting and alerts should be in place?
  • If we use AI with access to real data that may contain personal data or other sensitive information, what access controls do we need, especially in relation to the data annotation process?

Reassess and update policies and technical controls periodically to align with the evolving AI landscape and emerging threat types, ensuring that your security posture remains robust and adaptable.
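
To make the idea concrete, here is a minimal sketch of a policy check that gates access to AI resources by role and data classification, and logs each decision for later review and alerting. The roles, classifications and policy table are illustrative assumptions; in practice this logic would live in your IAM platform.

```python
# Minimal sketch: gate access to AI resources by role and data
# classification, logging each decision. The policy table is illustrative.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-access")

# Which data classifications each (hypothetical) role may use with the AI.
POLICY: dict[str, set[str]] = {
    "data_scientist": {"public", "internal", "confidential"},
    "business_user": {"public", "internal"},
    "external_contractor": {"public"},
}

def is_access_allowed(role: str, classification: str) -> bool:
    """Authorise a request and log the decision for review and alerting."""
    allowed = classification in POLICY.get(role, set())
    log.info("role=%s classification=%s allowed=%s", role, classification, allowed)
    return allowed

assert is_access_allowed("data_scientist", "confidential")
assert not is_access_allowed("external_contractor", "internal")
```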

Encryption and steganography

Encryption is a technique that can help protect the confidentiality and integrity of AI training data, source code and models. You might need to encrypt input data or training data, in transit and at rest, depending on the source (a minimal sketch follows the list below). Encryption and version control can also help mitigate the risk of unauthorised changes to AI source code. Source code encryption is especially important when AI solutions can make decisions with potentially significant impact. To protect and track AI models or training data, you can use steganographic techniques such as:

  • Watermarking, which inserts a digital signature into a file, or into the output of an AI solution, to identify when a proprietary AI is being used to generate an output.
  • Radioactive data, which makes a slight modification to a file or training dataset so that you can identify when an organisation has used that data. For instance, radioactive data can help you protect your public data against unauthorised use in the training of AI models.
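
As promised above, here is a minimal sketch of encrypting a training-data file at rest, using Fernet from the third-party cryptography package for Python. The local key file and file names are illustrative assumptions; in practice the key should live in a key management service. Fernet's authenticated encryption also provides an integrity check, since tampered ciphertext fails to decrypt.

```python
# Minimal sketch: symmetric encryption of a training-data file at rest,
# using Fernet from the third-party 'cryptography' package.
from pathlib import Path
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # store this in a KMS, not beside the data
Path("training.key").write_bytes(key)  # illustrative only

fernet = Fernet(key)
plaintext = Path("training_data.csv").read_bytes()
Path("training_data.csv.enc").write_bytes(fernet.encrypt(plaintext))

# Decrypt and verify integrity: Fernet authenticates the ciphertext,
# so tampering raises an InvalidToken exception on decryption.
restored = fernet.decrypt(Path("training_data.csv.enc").read_bytes())
assert restored == plaintext
```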

End-point security and user and entity behaviour analytics

End points (like laptops, workstations and mobile devices) act as primary gateways for accessing and interacting with AI systems. Historically, they have been a principal attack vector for malicious actors seeking to exploit vulnerabilities. With AI-augmented attacks on the horizon, end-point devices warrant special consideration as part of the potential attack surface.

End-point security solutions enabled with User and Entity Behaviour Analytics (UEBA) can help detect early signs of AI misuse and abuse by malicious actors. UEBA is known for its capability to detect suspicious activity by using an observed baseline of behaviour, rather than a set of predefined patterns or rules.
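
The core of that baseline idea can be sketched in a few lines. The example below flags activity that deviates sharply from a user's own observed history; the metric (AI queries per hour) and the three-standard-deviation threshold are illustrative assumptions, and real UEBA products model many signals at once.

```python
# Minimal sketch of the baseline idea behind UEBA: flag activity that
# deviates sharply from a user's own history, rather than matching
# predefined signatures. Metric and threshold are illustrative.
from statistics import mean, stdev

def is_anomalous(history: list[float], observed: float, sigmas: float = 3.0) -> bool:
    """True when 'observed' lies more than 'sigmas' standard deviations
    from this user's own baseline."""
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return observed != baseline
    return abs(observed - baseline) > sigmas * spread

# A user's AI queries per hour over the past week, then a sudden burst.
history = [12, 9, 14, 11, 10, 13, 12]
print(is_anomalous(history, 160))  # True: investigate possible misuse
print(is_anomalous(history, 15))   # False: within normal variation
```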

Vulnerability management

AI systems can be vulnerable at many levels, like the infrastructure running the AI, the components used to build the AI or the coded logic of the AI itself. These vulnerabilities can pose significant risks to the security, privacy and integrity of the AI systems, and you need to address them through appropriate measures.

Make sure to:

  • Regularly apply software updates and patches to keep all software and firmware components of the AI infrastructure up to date.
  • Conduct frequent assessments of AI infrastructure components, including hardware, software and data, to identify and remediate vulnerabilities in a timely manner.
  • Conduct periodic penetration tests on the AI solutions and functionality.

These measures help ensure that infrastructure patches are working as intended, access controls are operating effectively and there is no exploitable logic within the AI itself.
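
As a small illustration of the patching point, here is a sketch that flags installed Python packages falling below a minimum patched version. The version floors are illustrative assumptions; a real programme would rely on a scanner fed by an advisory database, such as a tool like pip-audit, rather than a hand-maintained table.

```python
# Minimal sketch: flag installed Python packages that fall below a minimum
# patched version. The version floors below are illustrative assumptions.
from importlib.metadata import distributions
from packaging.version import Version  # third-party 'packaging' package

# Hypothetical "patched as of" floor for packages in the AI stack.
MINIMUM_SAFE = {
    "requests": Version("2.31.0"),
    "pillow": Version("10.0.1"),
}

installed = {d.metadata["Name"].lower(): Version(d.version) for d in distributions()}

for name, floor in MINIMUM_SAFE.items():
    current = installed.get(name)
    if current is not None and current < floor:
        print(f"{name} {current} is below the patched floor {floor}: update it")
```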

Security awareness

To improve resilience against threats and safeguard sensitive data, you need to foster a culture of security awareness. As with the advent of any new technology, you must ensure that all users understand the appropriate uses and the risks posed by AI technologies, and that security training materials are kept up to date with the rapidly evolving threat landscape. Read Grant Thornton's whitepaper for more information on what board members and executives, users, system engineers and developers need to know about the risks posed by AI.

Immediate actions for security teams

Your security team needs to select and design the right mitigation strategies and define a clear roadmap, with prioritised milestones and timelines for execution. The whitepaper discusses in detail some important security questions to help your team take the next steps in updating your cybersecurity strategy to mitigate AI risks.

How Grant Thornton can help

In today's increasingly complex cybersecurity landscape, organisations must prioritise a proactive approach to risk mitigation. By fostering a culture of security awareness and regularly updating security training materials, organisations can empower their employees to identify and prevent potential threats.

By partnering with Grant Thornton, organisations can gain access to a team of experienced cybersecurity professionals who are dedicated to helping organisations protect their valuable assets and navigate the rapidly evolving cybersecurity landscape with confidence, while staying compliant with current and upcoming regulations. For more information on our service offerings, get in touch with our cybersecurity and data protection leaders.