BLI.Tools

EU AI Act On The Horizon: Five Key Considerations To Remain Aware Of

What are the main learnings for business, and what does the future of AI regulation look like?

Date: 8 August 2023

Authors: Sean Musch, AI & Partners, and Michael Borrelli, AI & Partners

Introduction

On 21 April 2021, the EU AI Act was proposed by the European Commission (“EC”). January 2024 provisionally marks the start of the two-year transition period.

It may not be an occasion that you mark with cake and candles, but for those of us in the artificial intelligence (“AI”) world it is an important milestone and a moment to reflect on the huge changes that this landmark legislation is set to create.

So what have we learned over the past two years?

The EU AI Act aims to make the European bloc a leader in AI policy and to give citizens expansive rights when interacting with AI systems, building on the internal-market legal basis of Article 114 of the Treaty on the Functioning of the European Union (“TFEU”), but its implementation looks set to be a long and challenging process.

When it comes into force in 2024, many businesses may struggle to understand what is required and how they need to change. Regulators are likely to respond by providing guidance and collaborating with businesses to help them understand what is expected of them. By early 2026, the gloves may come off and a new era of enforcement may begin, with multi-million euro fines issued to international companies.

Over the past two years firms have been developing a much more sophisticated understanding of the field of AI. Many now understand the benefits of looking after their AI, including measures that demonstrate their transparency and explainability, such as AI management programmes. Companies have also stepped up their investment in this area and it is now a board-level issue at many organisations.

But while understanding of the EU AI Act has steadily improved, key issues around its interpretation and application are far from settled. The past two years have illustrated that AI in Europe is a dynamic area requiring companies to be nimble, understand the EU and local context and have robust strategies in place.

At a critical juncture for the EU AI Act, we round up some of the big considerations firms should take from the first two years since its proposal, to help companies tackle the next five years – and beyond:

Be willing to adjust

When the EU AI Act was proposed many companies assumed that they could create an AI policy, file it away and then largely forget about it. In reality, the EU AI Act requires much more than that.

We see new AI applications beyond generative AI (e.g. ChatGPT) emerging, new guidance being issued at national and global level, and potential lawsuits on the horizon. Companies therefore need to be ready to continually adapt their policies and governance.

One clear example of how the EU AI Act is evolving is in the area of governance, risk and compliance (“GRC”). Prior to the EU AI Act’s proposal, many companies lacked understanding of the governance and risk management elements of compliance – specifically, how to make decisions about an AI system, including whether it is appropriate to use AI in each context and how to identify, document and manage the risks associated with the system.

Many firms relied on antiquated or inapplicable GRC arrangements to manage AI systems. Others relied on generic risk management standards as the practical basis for designing, developing and deploying AI systems into production.

That all changed in April 2021, when the European Union (“EU”) adopted its risk-based approach to AI. With the EU AI Act expected to pass in mid- or late 2023 and come into force in 2024, European standardisation organisations are set to publish, on the same timeframe, the detailed standards firms will need in order to implement the EU AI Act. For the past two years firms have faced considerable uncertainty about how to build AI systems safely and securely, and compliance costs are set to rise sharply depending on the risk classification of an AI system.
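To make the risk-based approach concrete: the Commission’s 2021 proposal sorts AI systems into four tiers, each attracting a different level of obligation. The sketch below is purely illustrative – the tier names come from the proposal, but the one-line summaries and the mapping function are our simplification, not the legal text:

```python
# Illustrative sketch of the four risk tiers in the Commission's 2021
# EU AI Act proposal. Tier names reflect the proposal; the summaries and
# this lookup are simplified for illustration only, not legal advice.

RISK_TIERS = {
    "unacceptable": "Prohibited outright (e.g. social scoring by public authorities)",
    "high": "Permitted subject to conformity assessment, risk management and oversight",
    "limited": "Permitted subject to transparency obligations (e.g. chatbots)",
    "minimal": "Permitted with no additional obligations",
}

def obligations_for(tier: str) -> str:
    """Return the headline obligation attached to a risk tier."""
    try:
        return RISK_TIERS[tier]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {tier!r}")

print(obligations_for("high"))
```

The point of the tiering is that compliance cost tracks the tier: a minimal-risk system carries essentially no new obligations, while a high-risk system triggers the full conformity-assessment machinery.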

For the past two years, the EU AI Act has been going through the legislative process to become the world’s first comprehensive regulation of AI. In June 2023, the European Parliament voted to adopt its negotiating position on the EU AI Act, setting the stage for trilogue negotiations between the Parliament, Council and Commission on the final legislative text to commence in July 2023. The new arrangement addresses two key issues raised in the EU AI Act proposal: deploying trustworthy AI and protecting individuals’ fundamental rights.

The European Commission, together with the member states and AI authorities including the proposed European AI Board, will proceed to assess, approve and implement the EU AI Act.

It is unclear how long this process will take, but we are hopeful that the European legislative bodies will reach a decision on the EU AI Act’s final provisions within the next few weeks (if not before). Other jurisdictions have also indicated that they will closely mirror (or reference) the EU AI Act in their domestic legislative frameworks.

Even once the EU AI Act is passed, it will likely come under legal scrutiny from multiple parties across both the public and private sectors. This illustrates how even EU AI Act issues that appear settled are likely to change. We expect this to continue as AI becomes widely adopted in the coming years.

As such, firms need to similarly keep an eye on how the AI landscape is evolving and be nimble in adapting their own policies and governance.

Take account of the wider context

When the EU AI Act was first proposed, many business leaders hoped it would provide a consistent and harmonised approach to AI across Europe.

In many ways, it has achieved this. Businesses now have more clarity about what is expected of them, particularly when it comes to being transparent around how AI systems are used and what governance processes are needed to support their use.

But there is still a great deal of nuance in the way that different national AI authorities are interpreting the draft EU AI Act, handling deployment oversight and identifying emerging themes. It means that we are seeing many different ‘flavours’ of the EU AI Act across the bloc.

For instance, regulators in Spain are considering issuing their own guidance on specific EU AI Act topics. In other Member States, regulators are taking a more hands-off approach, seeking guidance at the European level from the AI Board.

Different regulators also have different priorities for the coming years. Some are focused on the use of biometric data by employers, while others are looking at AI usage in specific use cases for particular industries. And at a European level, the authorities are prioritising compliance with the rules on AI officers.

These potential nuances in local interpretation of the EU AI Act mean that companies need to work with advisers who have a good grasp of the distinct local context of each European jurisdiction, as well as broader EU initiatives, in order to understand what is expected of them.

Spotlight on the UK

The UK has sought to chart its own course on AI since Brexit.

Since its exit from the European Union, the British government has been planning to set its own course on how certain rights and principles work, while keeping broadly in step with the EU AI Act. We expect these changes to be an evolution rather than a ripping-up of the EU AI Act rulebook, given that the government has recognised the importance of maintaining synchronisation with the EU. The UK AI Office and Digital Information Bill is currently before Parliament and should pass during 2024.

You can find more information on what they are planning, including potential approaches, here.

Have a strategy for compliance

EU AI Act fines can be eye-wateringly large if companies fail to comply with the rules. Penalties depend on the global annual turnover of the undertaking, which may encompass the entire group of companies to which the infringing entity belongs. Enforcement orders can also affect business operations, business models, customer trust and company reputation.
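By way of illustration: under the Commission’s 2021 proposal, the highest penalty band was set at up to €30 million or 6% of total worldwide annual turnover, whichever is higher (figures from the draft text, and subject to change in the final Act). A minimal sketch of how that turnover-linked cap scales – the function name and example figures are ours, not from the legislation:

```python
def max_fine_eur(worldwide_turnover_eur: float,
                 fixed_cap_eur: float = 30_000_000,
                 turnover_pct: float = 0.06) -> float:
    """Upper bound of the top penalty band in the 2021 proposal:
    the higher of a fixed cap or a percentage of worldwide turnover.
    Illustrative only; the final Act's figures may differ."""
    return max(fixed_cap_eur, turnover_pct * worldwide_turnover_eur)

# For a group with EUR 2bn worldwide turnover, 6% (EUR 120m)
# exceeds the EUR 30m fixed cap, so the turnover-based limb applies.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 120,000,000
```

The “whichever is higher” structure means the cap grows linearly with group turnover once turnover passes €500 million – which is why penalties are assessed at group level rather than entity level.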

This is reason enough for companies to take their AI obligations seriously, but an investigation in one jurisdiction can also spiral into a much larger and more material business issue.

High-profile investigations can inspire regulators in other countries to launch their own inquiries against a company. The potential growth of class actions across the EU also means that a judgement for non-compliance in one jurisdiction can trigger costly and time-consuming group litigation against a company in several other countries. In practice, however, past CJEU case law suggests that the mere fact that a company has violated the EU AI Act may not automatically entitle an affected individual to damages: the claimant will have to demonstrate the infringement, the material or non-material damage resulting from it, and a causal link between the two.

That makes it vital to have an effective cross-border strategy for dealing with an investigation as early as possible. Sometimes fines or judgements for non-compliance can be avoided altogether through effective engagement with the regulator at an early stage. But whatever the outcome, we always advise our clients to get a grip on an investigation, and to game-plan how it might spread to other countries, as early in the process as possible.

Balance your opposing requirements

The EU AI Act has ensured that AI issues rise up the agenda for all business leaders, but AI does not operate in a silo.

Firms must meet all their AI requirements while balancing them against the other demands placed on them by industry-specific legislation and even their own employees.

This is not always easy. For instance, some firms are under pressure to design, develop and deploy AI systems based on short-term commercial priorities, but this can be at odds with EU AI Act guidance from local regulators on their use.

That tension between the EU AI Act and other business requirements is only likely to increase with the raft of digital laws coming out of the EU over the next few years, including the Digital Services Act. Accordingly, we advise clients to take a sensible, risk-based approach to AI that balances the sometimes conflicting needs of different legislation and stakeholders.

Look to other countries to see how AI governance evolves

The EU AI Act is not just a potential landmark piece of legislation within the EU; its effect is likely to be felt worldwide.

Lawmakers across the globe are closely watching the design and implementation of the EU AI Act. While there are ongoing debates around the cost of compliance and the applicability of a risk-based approach in some countries, the EU’s adoption of the EU AI Act has prompted many countries to rethink their approach to AI.

In the EU, numerous states are set to implement stringent AI guidelines from 2024 onwards. The EU AI Act is also influencing AI laws and reforms in countries such as the United Kingdom, United States of America, Philippines and China. In APAC, lawmakers have taken inspiration from the EU AI Act principles of human agency and oversight, privacy and data governance, and transparency, and have increased fines to levels similar to or higher than those in the General Data Protection Regulation (“GDPR”), an analogous regulation for data protection. Across the region, EU AI Act-style legislation looks set to enter into force in 2024. Further AI reforms are under way. You can subscribe to further developments on the EU AI Act at AI & Partners.

More broadly, the EU AI Act’s potential success means that the EU is increasingly seen as a trailblazer in the protection of individuals’ fundamental rights. This trend will only strengthen as the EU brings in a suite of digital reforms in the coming years, including the Data Governance Act. This package of measures aims to keep the EU at the cutting edge of digital rights.

AI issues are only becoming more important as we live more of our lives online and as new developments come to the fore. We therefore believe that business leaders across the world need to keep an eye on how AI issues are evolving in the EU if they want to understand how their own markets may change in the next five to ten years.

The past two years have illustrated that artificial intelligence in Europe, as both a product and a process, is an increasingly dynamic area, requiring firms to be nimble, understand the EU and local context and have robust strategies in place.

Michael Charles Borrelli, Co-CEO/COO, AI & Partners
