AI and its impact on raising cybersecurity risks
This is the first article in our series on cybersecurity risks in AI adoption, based on the whitepaper developed by our colleagues in Grant Thornton US, which you can download in full.
Artificial Intelligence (AI) needs no introduction, having rapidly crept into all aspects of life. While AI is creating a wealth of new opportunities and efficiencies in business, it is also presenting new challenges, including in the area of cybersecurity. So how exactly has AI impacted cybersecurity, and what are the key risks that can be identified in its present form?
Download Grant Thornton's whitepaper 'Control AI cybersecurity risks'
As AI tools become more prevalent and sophisticated, they pose significant cybersecurity risks. Threats such as vulnerability exploitation, malicious software, collusion, extortion and phishing are heightened because AI can be used to evade detection. Traditional security approaches and countermeasures are not sufficient to protect against new and increased risks, such as:
- Incorrect or biased outputs,
- Vulnerabilities in AI-generated code,
- Copyright or licensing violations,
- Reputation or brand impact,
- Outsourcing or losing human oversight for decisions, and
- Compliance violations.
Implementing AI technologies in your organisation can introduce five primary cybersecurity risks. This article sets out those risks and their mitigating factors:
1. Data breaches and misuse
Data breaches pose a significant cybersecurity risk for AI platforms that store and process vast amounts of confidential or sensitive data such as personal data, financial data and health records.
Several risk factors can contribute to data breaches in AI platforms. Internally, AI instances that process and analyse data can be vulnerable due to weak security protocols, insufficient encryption, inadequate monitoring, external telemetry, and insufficient access controls. Externally, AI solutions and platforms can be vulnerable to various security risks and can be targets for data theft, especially if the data used to interact with these platforms is recorded or stored.
The risk of misuse and data loss has increased due to the unfettered availability of Generative AI (GenAI). The risk is especially high for IT, engineering, development and even security staff, who may want to use GenAI to expedite their daily tasks or to experiment with new technology. They can inadvertently feed sensitive data to a GenAI platform through browser extensions, APIs or direct entry. Without an enterprise-sanctioned solution and an acceptable use policy, employees could cause an accidental data breach or agree to terms of use that comply with neither company policy nor data protection regulations.
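To make this risk concrete, the sketch below shows a minimal pre-submission redaction gate that strips likely sensitive tokens from a prompt before it leaves the network. The patterns and the workflow around them are illustrative assumptions rather than a reference to any particular DLP product; an enterprise-sanctioned solution would typically enforce this at a proxy or gateway.

```python
# A minimal sketch of a redaction gate for GenAI prompts. The patterns
# below are illustrative assumptions, not an exhaustive or production
# ruleset; a real deployment would sit behind an enterprise proxy.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "pps":   re.compile(r"\b\d{7}[A-Z]{1,2}\b"),  # Irish PPS number (illustrative)
}

def redact(prompt: str) -> str:
    """Replace likely sensitive tokens before the prompt leaves the network."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

prompt = "Summarise this complaint from jane.doe@example.com (PPS 1234567A)."
print(redact(prompt))
# Summarise this complaint from [REDACTED EMAIL] (PPS [REDACTED PPS]).
```

Pattern matching of this kind is a first line of defence only; it reduces accidental leakage but does not replace an acceptable use policy or contractual controls with the GenAI provider.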
2. Adversarial attacks
Adversarial attacks manipulate input data to cause errors or misclassification, bypassing security measures and controlling the decision-making process of AI systems. There are several forms of adversarial attacks, and two of the most common types are evasion attacks and model extraction attacks.
Evasion attacks try to design inputs that evade detection by the AI system's defences and allow attackers to achieve their goals (e.g., bypassing security measures or generating false results). Since the inputs appear to be legitimate to the AI system, these attacks can produce incorrect or unexpected outputs without triggering any detection or alerts. Model extraction attacks attempt to steal a trained AI model from an organisation and use it for malicious purposes.
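To ground the idea, here is a toy sketch of a gradient-based evasion attack on a simple logistic-regression classifier, in the spirit of the fast gradient sign method. The weights, input and perturbation budget are invented for illustration; real attacks target far more complex models, but the principle of nudging an input along the loss gradient until it crosses the decision boundary is the same.

```python
# Toy evasion attack (FGSM-style) on a hand-built logistic regression
# classifier. All numbers are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A "deployed" model: in practice these weights would be learned.
w = np.array([2.5, -3.0, 1.5])
b = 0.0

x = np.array([0.6, -0.4, 0.2])  # a legitimate input, confidently class 1
y = 1.0                         # its true label

# Gradient of the logistic loss with respect to the input.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# Nudge the input in the direction that increases the loss, keeping the
# perturbation small enough that the input still looks plausible.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print(f"original score:    {sigmoid(w @ x + b):.3f}")      # ~0.95 -> class 1
print(f"adversarial score: {sigmoid(w @ x_adv + b):.3f}")  # ~0.38 -> class 0
```

The perturbed input differs from the original by at most 0.5 per feature, yet it lands on the wrong side of the decision boundary without triggering anything the model itself would flag.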
The impact of adversarial attacks varies by use case and industry, but can include:
- Errors or misclassifications in the output. In medical diagnostics, adversarial attacks can produce misdiagnoses and potentially lead to improper treatment. In the context of automated vehicles, such attacks include lane-elimination and fake-lane attacks, which manipulate road markings so that automated cars misinterpret them, causing accidents when autopilot is in use. Similar attacks can be carried out on traffic signs.
- Decision-making manipulations that could coerce a system into divulging sensitive information or performing unauthorised actions.
3. Malware and ransomware
Malware and ransomware have been amongst the greatest threats to IT systems for many years, and AI can make these risks even more prevalent. AI can be used to generate new variants of malware more quickly, at lower cost and with less expertise required of the attacker. The risks for any solution include:
- Disruption of services, caused by encrypting data or overloading networks to prevent legitimate access.
- Hijacking resources for crypto mining or a botnet attack.
- Exploiting publicly available AI platforms to gain access to your network and cause harm.
4. Vulnerabilities in AI infrastructure
As with any software, AI solutions rely on software, hardware and networking components, all of which can be subject to attack. In addition to traditional attack vectors, AI can be targeted through cloud-based AI services, Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs).
GPUs and TPUs are specialised processors that accelerate AI workloads, and they can introduce new attack vectors. Design flaws in these processors and other hardware can affect a range of products.
AI solutions are also built upon and integrated with other components that can fall victim to more traditional attacks. A compromise of the software stack can trigger a denial of service, expose sensitive data or give an attacker access to your internal network.
5. Model poisoning
Whereas adversarial attacks target AI models or systems in their production environment, model poisoning attacks target AI models in the development or testing environments. In model poisoning, attackers introduce malicious data into the training data to influence the output, sometimes causing a significant deviation in the AI model's behaviour.
For example, after a successful model poisoning attack, an AI model may produce incorrect or biased predictions, leading to inaccurate or unfair decision making. Some organisations are investing in training closed Large Language Models (LLMs) to solve specific problems with their internal or specialised data. Without appropriate security controls and measures in place, these applications are vulnerable to model poisoning attacks. In addition, many LLMs are not compliant with existing data privacy regulations, which creates an inherent risk for organisations that leverage them.
Model poisoning attacks can be challenging to detect because the poisoned data can look innocuous to the human eye. Detection is further complicated for AI solutions that leverage open-source or external components, as the majority of solutions today do.
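To show how little an attacker needs to change, the sketch below stages a label-flipping poisoning attack against a toy scikit-learn classifier. The dataset, model and 30% poisoning rate are assumptions for demonstration only; note that the poisoned rows keep their original feature values, which is exactly why inspection by eye rarely catches them.

```python
# A minimal sketch of a label-flipping poisoning attack on a toy
# classifier (scikit-learn assumed available). Dataset, model and
# poisoning rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# The attacker flips the labels on 30% of the training rows; the feature
# values are untouched, so the poisoned rows look innocuous on inspection.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

# The poisoned model typically scores noticeably worse on clean test data.
print(f"clean model accuracy:    {clean.score(X_te, y_te):.3f}")
print(f"poisoned model accuracy: {poisoned.score(X_te, y_te):.3f}")
```

Real attacks are subtler than random label flips, targeting specific classes or triggers, but the pipeline weakness they exploit, unvalidated training data, is the same.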
How Grant Thornton can help
As companies adapt their business strategies for new AI capabilities, they must also adapt their risk management strategies. Cybersecurity, data privacy, and effective controls are critical to the successful business use of AI.
At Grant Thornton, our cybersecurity team has extensive experience in advising organisations on a wide range of risk and technology matters, keeping them at the forefront of managing risks from current and emerging technologies.
For more information on our service offering, get in touch with our cybersecurity leaders.