There will be real consequences for organisations that fail to comply with new artificial intelligence regulations, writes Grant Thornton consulting partner Shane O’Neill
The 2008 financial crash shone a light on major failures in corporate governance across the banking sector.
The pursuit of growth, combined with loose oversight and governance structures that lacked robust risk management processes and controls, ultimately led to a spectacular collapse, the effects of which we are still feeling to this day.
Since then, we have seen the balance swing in the opposite direction, with stricter regulatory regimes, the introduction of measures such as the Individual Accountability Framework in Ireland to ensure that executives can be held responsible in future, and the establishment of the Irish Banking Culture Board to promote ethical behaviour within the pillar banks themselves.
This new emphasis on regulation stretches far beyond these shores and now reverberates across multiple sectors, as the EU’s Artificial Intelligence (AI) Act, which came into effect in August, goes to show.
In fact, the lessons learnt from the 2008 financial crash are clearly evident in the legislation, which embeds responsibility, governance frameworks and risk management at the heart of how organisations roll out AI systems.
In light of the potential seismic impact of artificial intelligence on society, it is vital that robust safeguards are laid as part of the foundation for its rollout.
Its effect will be felt far beyond the jobs market and will influence our experience of essential services such as banking and education.
Should an algorithm decide in seconds whether you can get a mortgage, or determine your score in an exam without any human oversight?
Where companies are looking to introduce automated processes, the same controls that apply to manual processes need to be adopted. In essence, the legislation is aimed at ensuring that AI systems in use in Europe are transparent, non-discriminatory and ultimately safe.
It sets out to achieve that by establishing different levels of risk for AI systems; laying down transparency requirements so that any content generated by AI is labelled as such; requiring a national authority to be put in place so that the public has a means of filing complaints; and ultimately drawing clear red lines for AI systems that are automatically banned outright.
Organisations are now obliged to conduct risk assessments of their AI systems and rank their level of risk according to four different categories set out in the AI Act.
For example, solutions where the potential risk to an individual and their privacy is deemed to outweigh any perceived benefit are classified as ‘unacceptable risk’ and their use is strictly prohibited.
Examples include social scoring systems and the untargeted scraping of facial images to build facial recognition databases.
‘High risk’ AI systems, such as solutions designed for recruitment or medical assessment, are subject to strict requirements across a variety of areas ranging from data governance and cybersecurity to technical documentation and human oversight.
While the AI Act will be fully phased in over 36 months, its key obligations must be in place within the next two years. As we have seen with GDPR and the huge fines handed down by the Irish Data Protection Commission, there will be real consequences for organisations that do not comply with the legislation.
They will potentially be on the hook for a financial penalty of up to €35m or 7% of their global turnover, whichever is higher.
With the risk of fines on this scale, organisations developing AI systems will need to take a leaf out of the book of financial institutions that have learnt their lessons from the global crisis.
The key immediate and foundational step they must take is to develop a governance framework that establishes how AI is being used within the organisation, puts clear processes and responsibilities in place, and ensures that executives in risk functions have full visibility.
In terms of culture, too, significant investment is required to promote the right behaviours internally and to train staff on the appropriate use of these systems and the related safeguarding requirements.
This is a particularly important point considering how quickly generative AI tools such as ChatGPT are being rolled out and deployed in organisations.
This education is particularly important at senior levels of an organisation, where executives will need to understand nuances such as the difference between public and private large language models, and the varying risks that come with them.

Not having robust governance in place has the potential to cause huge damage to a company’s reputation.
A lack of transparency, unintended bias built into algorithms or inadequate protection for sensitive data within AI systems can all tarnish a blue-chip corporate and lead to significant financial penalties.
At its core, AI may be made up of ones and zeros, but we should never forget its potential impact on society.
As a result, a human-centred approach to governance has to sit at its heart.
Shane O’Neill is technology & digital consulting partner at Grant Thornton Ireland