
Safeguarding Society: Financial Crash Lessons Shaping AI's Future

By Shane O'Neill
The 2008 financial crash shone a light on major failures in corporate governance across the banking sector. The pursuit of growth, combined with loose oversight and governance structures lacking robust risk management processes and controls, ultimately led to a spectacular collapse whose effects we are still feeling to this day.

Since then, we have seen the balance swing in the opposite direction, with stricter regulatory regimes taking hold. In Ireland, measures such as the Individual Accountability Framework were introduced to ensure that executives could be held responsible in the future, while the Irish Banking Culture Board was established to promote ethical behaviour within the pillar banks themselves.

The expansion of regulatory emphasis and the EU’s AI Act

This new emphasis on regulation stretches far beyond these shores and now also reverberates across multiple sectors, as the EU’s Artificial Intelligence (AI) Act, which came into effect on 1 August 2024, goes to show. In fact, the lessons learnt from the 2008 financial crash are highly evident in the legislation in terms of embedding responsibility, governance frameworks, and risk management at the heart of how organisations roll out AI systems.

In light of the potential seismic impact of artificial intelligence on society, it is vital that robust safeguards are laid as part of the foundation for its rollout. Its effect will be felt far beyond the jobs market and will influence our experience of essential services such as banking and education.

Should an algorithm decide in seconds whether you can get a mortgage, or determine your score in an exam, without any human oversight? Where companies are looking to introduce automated processes, the same controls that apply to non-automated processes need to be adopted.

Given that, for many people, artificial intelligence seemed to appear out of nowhere last year, the foresight shown by the European Commission in 2021, when a regulatory framework for AI was first considered, should be applauded.

Core objectives and requirements of the AI Act

In essence, the legislation is aimed at ensuring that AI systems in use in Europe are transparent, non-discriminatory, and ultimately safe. It sets out to achieve that by establishing different levels of risk for AI systems; laying down transparency requirements so that any content generated by AI is labelled as such; requiring a national authority to be put in place so that the public has a means of filing complaints; and ultimately drawing clear red lines for AI systems that are automatically outright banned.

Organisations are now obliged to conduct risk assessments of their AI systems and rank their level of risk according to four different categories set out in the AI Act. For example, solutions where the potential risk to an individual and their privacy is deemed to outweigh any perceived benefit are classified as ‘unacceptable risk’, and their use is strictly prohibited. Examples include social scoring systems and untargeted scraping of facial images for recognition databases.

‘High risk’ AI systems, such as solutions designed for recruitment or medical assessment, are subject to strict requirements across a variety of areas ranging from data governance and cybersecurity to technical documentation and human oversight.

Implementation timeline and consequences of non-compliance

While the AI Act will be fully phased in over 36 months, its key obligations must be in place within the next two years. This may sound like a long time, but the clock is already ticking, and organisations have to make crucial decisions at an early stage. If they don’t move quickly, they risk facing the mammoth task of trying to retrofit their technology closer to the deadline because they hadn’t given adequate consideration to what was required.

As we have seen with GDPR and the huge fines handed down by the Irish Data Protection Commission, there will be real consequences for organisations that do not comply with the legislation. They will be potentially on the hook for a financial penalty of €35 million or seven per cent of their global turnover, whichever is higher.

Learning from the financial sector: The need for governance frameworks

With the risk of fines on this scale, organisations developing AI systems will need to take a leaf out of the book of financial institutions that have learnt their lesson from the global financial crisis. The key immediate and foundational step is to develop a governance framework that establishes how AI is being used within the organisation, puts clear processes and responsibilities in place, and ensures that executives in risk functions have full visibility.

In terms of culture, significant investment is also required to promote the right behaviours internally and to train staff on the appropriate use of these systems and the related safeguarding requirements. This is a particularly important point considering how quickly GenAI tools such as ChatGPT are being rolled out and deployed in organisations.

The importance of education and transparency in AI governance

This education is particularly important at senior levels of an organisation, where executives will need to understand nuances such as the difference between public and private large language models and the varying risks that come with them.

Much like the financial crisis, not having robust governance in place has the potential to cause huge damage to a company’s reputation. A lack of transparency, unintended bias built into algorithms, or a lack of robust protection for sensitive data within AI systems can all ultimately tarnish a blue-chip corporate and incur significant financial penalties.

At its core, AI may be made up of ones and zeros, but we should never forget its potential impact on society. As a result, there has to be a human-centred approach to governance sitting at its heart.

Contact us
Learn more about how our Technology Consulting solutions can help you
Visit our Technology Consulting page