The Artificial Intelligence Act is a landmark law


The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions.
It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field.
The regulation establishes obligations for AI based on its potential risks and level of impact.
Banned applications

The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.
Law enforcement exemptions

The use of biometric identification systems (RBI) by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations.
“Real-time” RBI can only be deployed if strict safeguards are met, e.g. its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorisation.
Such uses may include, for example, a targeted search for a missing person or preventing a terrorist attack.
Using such systems post-facto (“post-remote RBI”) is considered a high-risk use case, requiring judicial authorisation linked to a criminal offence.
Obligations for high-risk systems

Clear obligations are also foreseen for other high-risk AI systems (due to their significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law).
Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections).
Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight.
Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.
Transparency requirements

General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training.
The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents.
Additionally, artificial or manipulated images, audio or video content (“deepfakes”) need to be clearly labelled as such.
Measures to support innovation and SMEs

Regulatory sandboxes and real-world testing will have to be established at the national level, and made accessible to SMEs and start-ups, to develop and train innovative AI before its placement on the market.
Quotes

During the plenary debate on Tuesday, the Internal Market Committee co-rapporteur Brando Benifei (S&D, Italy) said: “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency.
Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected.
The AI Office will now be set up to support companies to start complying with the rules before they enter into force.
We ensured that human beings and European values are at the very centre of AI’s development”.
Civil Liberties Committee co-rapporteur Dragos Tudorache (Renew, Romania) said: “The EU has delivered.
We have linked the concept of artificial intelligence to the fundamental values that form the basis of our societies.
However, much work lies ahead that goes beyond the AI Act itself.
AI will push us to rethink the social contract at the heart of our democracies, our education models, labour markets, and the way we conduct warfare.
The AI Act is a starting point for a new model of governance built around technology.
We must now focus on putting this law into practice”.
Next steps

The regulation is still subject to a final lawyer-linguist check and is expected to be finally adopted before the end of the legislature (through the so-called corrigendum procedure).
The law also needs to be formally endorsed by the Council.
It will enter into force twenty days after its publication in the Official Journal, and be fully applicable 24 months after its entry into force, except for: bans on prohibited practices, which will apply six months after the entry into force date; codes of practice, which will apply nine months after entry into force; general-purpose AI rules, including governance, which will apply 12 months after entry into force; and obligations for high-risk systems, which will apply 36 months after entry into force.

Background

The Artificial Intelligence Act directly addresses the recommendations made by citizens at the Conference on the Future of Europe (COFE). More specifically, it addresses proposals 12(10), 33(5) and 37(3) on improving citizens’ access to information, including for people with disabilities, and proposal 35 on promoting digital innovation, (3) while maintaining human oversight, and (8) on the trustworthy and responsible use of AI, setting safeguards and ensuring transparency.
