

The impact of the European Union Artificial Intelligence Act on the banking sector: Individuals’ fundamental rights, conflicts of laws and Brexit

Date: 23 August 2023


Sean Much, Co-CEO/CFO, AI & Partners. Sean has an extensive background in the
entertainment industry (e.g. film and art) and a specialism in design. Alongside this, he has
more than a decade of experience in the professional services sector, including five years as
a technology-sector accountant. He has auditing experience and has supported an IPO on
the New York Stock Exchange. As well as being a compliance expert, he has deep expertise
in the implementation aspects of audit & assurance engagements and has worked with the
largest global tech MNEs over the past five years.


Michael Charles Borrelli, Co-CEO/COO, AI & Partners. Michael Charles Borrelli is a
highly experienced financial services professional with over 10 years of experience. He has
held executive positions in compliance, regulation, management consulting and operations
for institutional financial services firms, and has consulted for FCA-regulated firms on
strategic planning, regulatory compliance and operational efficiency. In 2020, Michael set up
the operations model and infrastructure for a crypto-asset exchange provider, and he has
been actively engaged in the Web 3.0 and AI communities over the last four years. He
currently advises a host of AI, Web3, DLT and FinTech companies.


Charles Kerrigan, Partner, CMS UK. Charles Kerrigan is a specialist in emerging
technologies, including crypto, digital assets, Web3 and AI. He works on corporate finance
and venture capital transactions in crypto, tokenisation, NFTs, Web3 and DeFi, and on
consulting projects on blockchain and AI for public bodies, policymakers, standards
institutions and corporations.


The Blockchain Industry in the UK Landscape Overview names Charles as a “leading
influencer in blockchain”. He is part of teams working on investing and setting standards for
emtech in the UK, Europe and the US. At CMS he is a Partner in a team that covers
emtech. He has roles on the advisory boards of AI and crypto firms and of trade bodies. He
is the Editor and co-author of Artificial Intelligence Law and Regulation (Edward Elgar, 2022).
He is the Contributing Editor of AI, Machine Learning & Big Data (GLI, 2023). He was listed
in The Lawyer Hot 100 2022 of “most daring, innovative and creative lawyers”.


Abstract


This academic article offers a comprehensive exploration of the potential impact of the
proposed European Union (“EU”) Artificial Intelligence (“AI”) Act (the “EU AI Act” or “Act”)
on the banking sector. The article delves into the Act’s provisions and their implications for
the banking industry, scrutinizing its potential to reshape operational practices, risk
management strategies, and customer interactions within financial institutions. By critically
analysing the Act’s alignment with the sector’s existing regulatory framework and its capacity
to address the evolving challenges faced by the banking domain, the article sheds light on its
prospective role in shaping responsible AI integration in the sector. Additionally, the article
investigates the challenges and benefits associated with the potential application of the Act,
considering both ethical considerations and operational efficiencies. Overall, this article
contributes to the ongoing discourse surrounding AI regulation by providing insights into the
transformative possibilities that the proposed EU AI Act might introduce to the banking
sector, offering a framework for anticipating the changes and considerations that could
define the future of AI in financial services.


Key words: Artificial intelligence, EU AI Act, AI, Banking, Risk, Fundamental Rights


Introduction


The convergence of artificial intelligence (AI) and regulatory frameworks has emerged as a
pivotal juncture in contemporary discourse, prompting a critical examination of the potential
implications for various industries. In the midst of this transformative landscape, the
proposed EU AI Act has garnered significant attention for its ambitious aim to regulate AI
technologies comprehensively. This academic article is dedicated to exploring the potential
impact of the proposed EU AI Act on the banking sector, delving into the potential
ramifications, opportunities, and challenges that the Act’s provisions might introduce.


The Context of AI and Regulatory Imperatives


The exponential growth of AI technologies has necessitated a regulatory framework that
strikes a balance between innovation and ethical considerations. The financial services
industry, particularly the banking sector, stands at the forefront of this transformation, as AI-
driven solutions redefine operational paradigms, customer interactions, and risk
management strategies. As AI technologies become increasingly integrated into the core
functions of financial institutions, ethical concerns surrounding transparency, accountability,
and consumer protection have gained prominence.


The Proposed EU AI Act: A Regulatory Endeavor


Amidst these dynamic developments, the proposed EU AI Act emerges as a regulatory
endeavour that envisions AI technologies aligned with ethical standards, societal values, and
the safeguarding of fundamental rights. The Act’s multifaceted approach, characterized by
risk-based classification, transparency mandates, and mechanisms for accountability,
underscores the European Union’s commitment to fostering the responsible integration of AI
within its jurisdiction. While rooted in the EU’s legal framework, the Act’s potential impact
reverberates beyond its borders, influencing global standards and practices in AI
governance.


The Banking Sector: Nexus of AI and Regulation


The banking sector occupies a pivotal position at the intersection of AI innovation and
regulatory compliance. AI applications within this domain encompass a wide spectrum, from
algorithmic trading and fraud detection to personalized customer experiences through
chatbots. As financial institutions leverage AI-driven capabilities to streamline operations and
enhance customer services, the potential implications of the proposed EU AI Act within this
sector warrant careful analysis.


Research Problem and Objectives


This article embarks on a thorough exploration of the potential impact of the proposed EU AI
Act on the banking sector. The research problem centres on comprehending how the Act’s
provisions may intersect with the unique dynamics of the banking industry, evaluating its
capacity to reshape operational paradigms, enhance risk management strategies, and
influence customer engagement practices. Furthermore, the article seeks to identify the
potential challenges and benefits that the Act’s application may introduce, examining both
ethical considerations and operational efficiencies.


Structure of the Article


The article is structured to provide an in-depth analysis of the potential impact of the
proposed EU AI Act on the banking sector. Following this introduction, the subsequent
sections will delve into the key provisions of the Act and their implications for the banking
domain. This will be succeeded by an exploration of the challenges and opportunities
presented by the Act’s potential application, taking into account the ethical dimensions and
operational considerations specific to the banking sector. Additionally, the article will employ
case studies and practical examples to illustrate the potential transformative scenarios that
the Act might bring to AI-driven banking applications. The concluding section will synthesize
the findings, shedding light on the broader implications of the proposed EU AI Act within the
global AI governance landscape and its potential to shape the future of responsible AI
integration in the banking sector.


To be specific, as the proposed EU AI Act addresses the ethical dimensions of AI integration,
its potential implications for the banking sector are of paramount importance. This article
contributes to the ongoing discourse by offering an in-depth analysis of the proposed Act’s
potential impact on banking, unravelling its provisions, potential challenges, and
transformative prospects within the context of AI-driven financial services.


Key changes from the EU AI Act


The proposed EU AI Act introduces a series of fundamental changes that are poised to
reverberate across the banking sector, potentially reshaping its operational landscape and
ethical considerations. This section dissects the primary changes encapsulated within the
Act and their anticipated implications for the banking sector’s engagement with AI
technologies.


Risk-Based Classification System


A central innovation of the proposed EU AI Act is the introduction of a risk-based
classification system that categorizes AI applications into unacceptable-risk, high-risk,
limited-risk, and minimal-risk categories. This risk-based approach underscores the Act’s
intent to tailor regulatory
requirements to the potential societal impact of AI applications. Within the banking sector,
this classification system could entail a recalibration of risk management strategies,
especially for high-risk applications such as credit decision-making algorithms. The Act’s
classification system could prompt financial institutions to undertake more comprehensive
due diligence processes while integrating AI applications that align with their risk tolerance
levels.
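To illustrate how a bank might operationalize this triage internally, consider the following minimal sketch. The four tier names follow the proposal's structure, but the use-case mapping and the compliance steps are hypothetical illustrations for this article, not a restatement of the Act or its annexes:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers following the structure of the proposed EU AI Act."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # e.g. creditworthiness assessment
    LIMITED = "limited"             # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"             # no additional obligations

# Hypothetical internal triage table for banking AI use cases.
# The Act itself enumerates high-risk systems in its annexes; this
# mapping is illustrative only.
USE_CASE_TIERS = {
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def due_diligence_steps(use_case: str) -> list[str]:
    """Return illustrative compliance steps for a given use case."""
    # Unknown use cases default to the high-risk checklist (conservative).
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    steps = {
        RiskTier.UNACCEPTABLE: ["do not deploy"],
        RiskTier.HIGH: ["risk management system", "data governance",
                        "technical documentation", "human oversight",
                        "conformity assessment"],
        RiskTier.LIMITED: ["disclose AI interaction to the user"],
        RiskTier.MINIMAL: ["voluntary code of conduct"],
    }
    return steps[tier]
```

Defaulting unclassified use cases to the high-risk checklist mirrors the more comprehensive due diligence posture the Act could encourage for applications whose societal impact is uncertain.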


Transparency and Accountability Mandates


The Act places a paramount emphasis on transparency and accountability, aiming to
demystify the decision-making processes of AI systems. The provision requiring clear
explanations for AI-generated decisions is particularly salient for the banking sector. As AI
technologies are increasingly involved in credit assessments, investment recommendations,
and fraud detection, the Act’s mandate could prompt financial institutions to develop
explainable AI models. This shift could enhance consumer trust and enable regulatory
bodies to scrutinize AI-driven financial decisions more effectively.
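To make concrete what a "clear explanation" of an AI-generated decision might look like at its simplest, here is a toy reason-code generator for a linear credit-scoring model. The feature names, weights, and approval threshold are invented for illustration; real adverse-action explanations are considerably more involved:

```python
# Toy linear credit score with per-feature reason codes.
# Weights, features and the approval threshold are illustrative only.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.5, "late_payments": -0.3}
THRESHOLD = 0.0

def score_with_reasons(applicant: dict) -> tuple[bool, list[str]]:
    """Score an applicant and list the features that count against them."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    approved = score >= THRESHOLD
    # Reason codes: features pulling the score down, worst first.
    reasons = [f for f, c in sorted(contributions.items(),
                                    key=lambda kv: kv[1]) if c < 0]
    return approved, reasons

approved, reasons = score_with_reasons(
    {"income": 1.0, "debt_ratio": 2.0, "late_payments": 1.0})
# approved is False; 'debt_ratio' is the dominant negative factor
```

Because the model is linear, each feature's contribution is directly attributable, which is precisely the property that makes such a decision easy to explain to a customer or a supervisor; more complex models require dedicated explainability techniques to produce comparable reason codes.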


Ethical Considerations and Bias Mitigation


The proposed EU AI Act addresses ethical concerns by forbidding AI practices that pose
significant risks to individuals’ fundamental rights and dignities. Within the banking sector,
this could catalyse an ethical awakening as financial institutions grapple with the ethical
implications of AI applications, such as ensuring that lending algorithms are devoid of
biases. The Act’s push for bias mitigation aligns with the banking industry’s commitment to
fair lending practices and could prompt institutions to invest in AI models that are not only
accurate but also equitable.
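One way such a bias check might look in practice is a demographic-parity comparison of approval rates across applicant groups. The group labels, sample decisions, and the 5% tolerance below are illustrative assumptions, not thresholds drawn from the Act:

```python
def approval_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    """Share of approvals among applicants in `group`.
    Each decision is a (group_label, approved) pair."""
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(decisions, group_a: str, group_b: str) -> float:
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(decisions, group_a)
               - approval_rate(decisions, group_b))

# Illustrative decision log: group A approved 2/3, group B approved 1/3.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(decisions, "A", "B")  # |2/3 - 1/3| = 1/3
flagged = gap > 0.05  # illustrative internal tolerance, not a legal threshold
```

Demographic parity is only one of several fairness criteria, and the criteria can conflict with one another; the sketch is meant to show that the Act's bias-mitigation agenda is measurable in principle, not to prescribe a particular metric.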


Oversight and Enforcement Mechanisms


The Act introduces a robust framework for oversight and enforcement, mandating the
establishment of national competent authorities and the creation of the European Artificial
Intelligence Board. Within the banking sector, this could amplify regulatory scrutiny on AI-
driven financial services, encouraging greater adherence to ethical and operational
standards. Financial institutions may need to ensure that their AI applications meet the Act’s
requirements, thus enhancing transparency and promoting ethical AI integration.


Given the above, the proposed EU AI Act presents several key changes that are poised to
impact the banking sector’s engagement with AI technologies. The introduction of a risk-
based classification system, emphasis on transparency and accountability, ethical
considerations, and the reinforcement of oversight mechanisms could collectively reshape
the sector’s approach to AI integration. These changes, while driven by the Act’s ethical
objectives, hold the potential to influence operational paradigms and regulatory adherence
within the banking industry, ushering in an era of more responsible and transparent AI-driven
financial services.


Which regulations win: the EU AI Act or financial crime and
terrorism prevention regulations?


The symbiotic relationship between technological advancements and regulatory frameworks
has fostered a complex landscape where innovation and compliance often intersect. In the
context of the proposed EU AI Act and the banking sector’s engagement with AI
technologies, an intriguing dichotomy emerges between the Act’s ethical imperatives and the
established financial crime and terrorism prevention regulations. This section dissects this
complex interplay, examining how the Act’s provisions may align, clash, or harmonize with
the existing regulatory framework aimed at combating financial crimes and terrorism
financing within the banking sector.


Divergent Objectives: Ethical Imperatives vs. Crime Prevention


At the heart of the discourse lies a tension between the ethical imperatives of the EU AI Act
and the pragmatic necessity of financial crime and terrorism prevention regulations. The EU
AI Act, driven by a commitment to uphold fundamental rights, human dignity, and ethical AI
integration, emphasizes transparency, accountability, and bias mitigation. Conversely,
financial crime and terrorism prevention regulations prioritize detecting and mitigating
threats, necessitating stringent measures to ensure the integrity of financial systems and
curb illicit activities.


Transparency and Detection Conflict


The Act’s emphasis on transparency and accountability, while crucial for building trust and
ensuring ethical AI development, presents challenges when weighed against the covert
nature of financial crimes and terrorism financing. AI-driven algorithms used to detect and
prevent financial crimes often rely on complex models that may not be readily explainable to
avoid alerting criminals. The Act’s mandate for clear explanations of AI-generated decisions
could inadvertently undermine the effectiveness of anti-fraud and anti-money laundering
measures, providing adversaries with insights into detection mechanisms.


Ethical AI vs. Predictive Policing


The ethical considerations championed by the EU AI Act can clash with the deployment of AI
technologies for predictive policing against financial crimes. Predictive policing leverages AI
algorithms to identify patterns and predict future criminal activities. However, this practice
raises ethical concerns, as it could disproportionately target specific demographics, thereby
infringing upon human rights. The Act’s emphasis on equity and bias mitigation may lead to
tensions with predictive policing approaches, prompting financial institutions to navigate the
fine line between ethical AI and efficient crime prevention.


Bias Mitigation and Enhanced Surveillance


Bias mitigation, a cornerstone of the EU AI Act, aligns with ethical AI development, striving to
eliminate disparities and ensure fairness. However, within the realm of financial crime
prevention, bias mitigation could inadvertently hinder the detection of suspicious activities:
surveillance models trained on historical data may surface patterns that appear biased yet
carry genuine signal. Balancing the Act’s ethical underpinnings with the need for
comprehensive crime prevention might therefore require recalibrating AI models so that
removing apparent bias does not also remove legitimate risk indicators.


Operational Burdens and Effectiveness


The potential impacts of the EU AI Act on the banking sector must be evaluated alongside
the operational challenges and regulatory burdens faced by financial institutions. While the
Act introduces ethical AI imperatives, it also demands adherence to new operational
standards, potentially straining resources and diverting attention from existing compliance
measures. The effectiveness of financial crime and terrorism prevention regulations could be
influenced by the banking sector’s capacity to simultaneously address the Act’s provisions.


Harmonizing Ethical AI and Crime Prevention


While the EU AI Act and financial crime and terrorism prevention regulations may initially
appear in tension, opportunities for harmonization exist. AI technologies can enhance the
efficiency and effectiveness of anti-money laundering and fraud detection processes,
bolstering crime prevention efforts. By developing AI models that align with both the Act’s
ethical mandates and crime prevention goals, financial institutions can harness technology to
achieve dual objectives.


Data Sharing and Collaborative Governance


Collaborative governance mechanisms could provide a platform for reconciling the ethical AI
principles of the Act with financial crime and terrorism prevention regulations. Collaborative
data sharing platforms, where financial institutions pool anonymized data for AI model
training, could aid in developing effective detection mechanisms without compromising
privacy or transparency. These platforms could foster cooperation between regulatory bodies
and financial institutions, facilitating a more holistic approach to both ethical AI development
and crime prevention.
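As a minimal sketch of the kind of pseudonymization such pooling might rely on, each institution could replace direct identifiers with a keyed hash before contributing records, so that raw identifiers never leave the institution. The field name and key handling are illustrative assumptions, and real anonymization of shared AML training data involves far more than hashing a single field:

```python
import hashlib
import hmac

# Illustrative institution-local secret; it must never accompany shared data.
SECRET_KEY = b"institution-local-secret"

def pseudonymize(record: dict) -> dict:
    """Replace the direct identifier with a keyed SHA-256 hash before pooling.
    The same customer always maps to the same token within one institution,
    so transaction patterns stay linkable without exposing the raw ID."""
    out = dict(record)
    out["customer_id"] = hmac.new(SECRET_KEY,
                                  record["customer_id"].encode(),
                                  hashlib.sha256).hexdigest()
    return out
```

Using a keyed hash (HMAC) rather than a plain hash matters here: without the key, an adversary could enumerate plausible identifiers and reverse the mapping by brute force.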


Navigating the Intersection


In the complex milieu of AI-driven innovations and regulatory compliance, the interplay
between the proposed EU AI Act and financial crime and terrorism prevention regulations
evokes multifaceted challenges and potential synergies. While the Act’s ethical imperatives
emphasize transparency, accountability, and bias mitigation, financial crime and terrorism
prevention regulations prioritize detecting and preventing illicit activities. Striking a balance
between these divergent objectives requires strategic recalibration of AI models,
collaborative governance mechanisms, and a nuanced approach that leverages technology
to achieve both ethical AI development and effective crime prevention. This confluence of
ethical and pragmatic considerations underscores the dynamic nature of AI governance
within the banking sector, where the potential harmonization of the EU AI Act and financial
crime prevention measures holds the promise of responsible, transparent, and secure AI
integration.


End of banking secrecy/increased protection for users
from harms caused by artificial intelligence: what is the
difference?


The proposed EU AI Act introduces a distinct dichotomy within the banking sector,
juxtaposing the potential end of banking secrecy with the heightened protection of users
from harms resulting from artificial intelligence (AI) systems. This section delves into the
contrasting implications of these two facets, exploring how the Act’s provisions may herald
the transformation of longstanding practices while concurrently safeguarding users against
the potential risks and biases embedded in AI technologies.


End of Banking Secrecy: A Paradigm Shift


The proposed EU AI Act’s potential impact on banking secrecy is a prominent departure
from conventional norms, bearing far-reaching implications for financial institutions and
clients alike. Historically, banking secrecy has constituted an integral tenet of financial
services, safeguarding client information and transaction details from external scrutiny. The
Act, driven by transparency and accountability mandates, could disrupt this entrenched
practice by necessitating a degree of AI-driven transparency that might challenge traditional
confidentiality norms. As AI algorithms analyze customer data for risk assessment, credit
scoring, and fraud detection, the imperative for clear explanations of AI-generated decisions
could expose financial institutions to a level of transparency previously uncharted in the
realm of banking secrecy.


Balancing Transparency and Confidentiality


While the proposed EU AI Act emphasizes the importance of transparent AI decision-
making, it confronts a delicate balance with the confidential nature of banking transactions.
Financial institutions must navigate the challenge of enhancing transparency without
compromising clients’ sensitive financial information. Striking this equilibrium requires the
development of AI models that provide explainable decisions without disclosing proprietary
data. Additionally, this transformation demands the renegotiation of the implicit contract
between financial institutions and clients, where the latter’s expectation of privacy may
require adjustment to accommodate the Act’s transparency requirements.


Increased Protection for Users from AI Harms


Conversely, the Act’s provisions offer a distinct advantage for banking clients through
heightened protection against harms resulting from AI technologies. The financial sector’s
increasing reliance on AI algorithms, from credit assessments to investment
recommendations, carries inherent risks of biased decision-making, lack of accountability,
and algorithmic opacity. The Act’s emphasis on safeguarding fundamental rights, mitigating
biases, and ensuring transparency aligns with the broader aim of protecting users from
potential AI-related harms.


Ethical AI vs. User Protection

The Act’s mandate to ensure ethical AI development resonates with the imperative to shield
users from AI-driven harms. As AI models play an ever-expanding role in financial services,
biases embedded in these algorithms could perpetuate existing inequalities or inadvertently
exclude certain groups. The Act’s focus on equity and bias mitigation intersects with the user
protection objective, prompting financial institutions to develop AI models that are not only
ethically aligned but also devoid of discriminatory biases. This alignment substantiates the
Act’s overarching goal of ensuring that AI-driven financial decisions do not infringe upon
users’ rights or exacerbate societal disparities.


The Role of Explainability in User Protection


A key avenue through which the Act enhances user protection is by demanding clear
explanations of AI-generated decisions. This requirement empowers users to understand
and challenge the outcomes of AI algorithms, ensuring that their interests are upheld. In the
context of financial services, this provision is particularly relevant as users increasingly rely
on AI-driven decisions for credit access, investment choices, and financial planning. The
Act’s focus on explainability aligns with the broader user protection framework, enabling
clients to comprehend and contest potentially detrimental AI-generated decisions.


Navigating the Trade-off


The tension between the potential end of banking secrecy and the enhanced protection of
users from AI harms presents financial institutions with a challenging trade-off. While the
Act’s transparency and accountability mandates may challenge the age-old tenets of
banking secrecy, they simultaneously offer users greater protection from the opacity and
biases of AI technologies. Financial institutions must navigate this intricate balance,
recalibrating their practices to align with evolving ethical and user protection imperatives
while preserving the trust and confidentiality clients expect.


The Path Forward


In conclusion, the proposed EU AI Act’s potential impact on the banking sector
encompasses both the potential end of banking secrecy and the heightened protection of
users from AI-related harms. The Act’s transparency and accountability provisions could
herald a paradigm shift by disrupting the long-standing practice of banking secrecy.
Simultaneously, the Act’s ethical underpinnings and emphasis on user protection align with
the broader movement to mitigate AI biases and ensure transparent, accountable, and
equitable financial services. As financial institutions navigate this intricate juncture, they face
the challenge of striking a balance between transparency and confidentiality, while
embracing AI technologies that enhance user protection without compromising fundamental
rights. This nuanced navigation reflects the complex evolution of AI governance within the
banking sector, underscoring the imperative of ethical AI integration in the era of increased
transparency and user-centric financial services.


Brexit


The looming shadow of Brexit adds a layer of complexity to the potential impact of the
proposed EU AI Act on the banking sector. The United Kingdom’s withdrawal from the
European Union has introduced a unique regulatory landscape that necessitates an
exploration of how the Act’s provisions could intersect with the post-Brexit banking
environment.


Regulatory Divergence


The divergence created by Brexit calls into question the extent to which the proposed EU AI
Act’s provisions will apply to the banking sector in the United Kingdom. While the Act is
conceived within the framework of the European Union’s regulatory objectives, the UK’s
autonomy post-Brexit affords it the latitude to develop its AI governance framework. This
regulatory divergence could result in distinct AI-related norms and requirements for UK-
based financial institutions, potentially influencing their approach to transparency,
accountability, and user protection.


Cross-Border Implications


The cross-border nature of the banking sector raises questions about how the proposed EU
AI Act’s provisions may interact with UK-based financial institutions’ operations that span EU
member states. The Act’s mandates, such as those related to risk classification and
transparency, may necessitate varying degrees of compliance for financial institutions with
both EU and UK operations. This could lead to operational complexities as institutions
navigate the Act’s requirements within a cross-border context.


Global Competitiveness


Brexit has positioned the UK as an independent global actor, free to establish its regulatory
norms in alignment with its strategic goals. The proposed EU AI Act, while emblematic of the
EU’s commitment to ethical AI development, may trigger a nuanced analysis of how the UK’s
AI governance framework aligns or diverges with global standards. Financial institutions
operating in the UK may consider the interplay between the Act’s provisions, the UK’s
regulatory approach, and international AI standards to maintain their global competitiveness.


International Alignment


In the wake of Brexit, the potential emergence of distinct AI regulatory regimes in the EU and
the UK prompts contemplation of the broader implications for international alignment.
Financial institutions with a global presence, including those in the banking sector, face the
challenge of navigating a heterogeneous landscape of AI regulations. The Act’s impact on
the banking sector could thus extend beyond EU borders, influencing global discussions and
standards for responsible AI governance.


Navigating the Uncertainties


The intersection of the proposed EU AI Act and the post-Brexit regulatory environment
introduces uncertainties for the banking sector. Financial institutions must grapple with the
intricacies of compliance with both EU and UK AI regulations, while also considering their
international obligations. The interplay between the Act’s provisions, UK-specific regulations,
and global AI governance standards will shape the banking sector’s AI integration journey in
the aftermath of Brexit.


A dynamic layer of uncertainty


In the context of the proposed EU AI Act’s potential impact on the banking sector, Brexit
introduces a dynamic layer of uncertainty. The regulatory divergence created by the UK’s
exit from the EU challenges the harmonization of AI governance across borders. Financial
institutions must anticipate the complexities of compliance within a cross-border context and
discern the synergies or disparities between the Act’s provisions and the UK’s AI regulatory
approach. As the banking sector navigates this intricate landscape, it remains essential to
consider the nuanced interplay of AI regulations, Brexit-induced shifts, and the broader
dynamics of responsible AI integration in the post-Brexit era.


Conclusion


The examination of the potential impact of the proposed EU AI Act on the banking sector
underscores the intricate interplay between regulatory imperatives, ethical considerations,
and operational paradigms. The Act’s provisions introduce a dynamic shift in the banking
landscape, heralding the convergence of AI technologies with ethical principles and
transparency mandates. As financial institutions navigate the multifaceted landscape,
several key insights emerge.


The Act’s provisions, including risk-based classification, transparency mandates, and ethical
considerations, collectively redefine the AI integration journey within the banking sector.
Financial institutions must balance the Act’s ethical imperatives with the pragmatic
necessities of financial crime prevention and user protection. The alignment between ethical
AI development and user-centric protection illustrates the potential synergy between
responsible AI integration and the banking sector’s commitment to safeguarding
fundamental rights.


The complexity of the banking sector’s engagement with the proposed EU AI Act extends
beyond regulatory compliance. The Act’s potential end of banking secrecy, juxtaposed with
enhanced user protection, presents financial institutions with a delicate equilibrium to strike.
The transparent AI decision-making envisaged by the Act requires a recalibration of
traditional practices while preserving the confidentiality and trust that clients expect.
Additionally, the evolving regulatory landscape, including Brexit’s implications, introduces a
layer of uncertainty that financial institutions must navigate. The Act’s impact extends
beyond the EU’s borders, influencing global AI governance discussions and standards.


In conclusion, the proposed EU AI Act’s potential impact on the banking sector encapsulates
a transformative journey toward responsible AI integration. While challenges and
complexities abound, the Act’s provisions align with the banking sector’s evolving ethical
commitments and user protection imperatives. As financial institutions grapple with the
intricate balance between transparency, operational efficiency, and confidentiality, the Act’s
vision of ethical AI within the banking sector represents a cornerstone in the broader
movement toward responsible, transparent, and equitable AI integration. The banking
sector’s trajectory in the era of the proposed EU AI Act will be defined by its ability to
harmonize these imperatives while fostering innovation, trust, and ethical considerations.


Acknowledgments


We extend our sincere gratitude to CMS UK for their invaluable support, guidance, and
assistance throughout the process of producing this academic article. Their expertise and
insights have been instrumental in shaping the content and ensuring its accuracy. We are
truly appreciative of their commitment to excellence and their dedication to fostering
knowledge and collaboration within the legal and academic realms.
