On 8 December 2023, after some 36 hours of negotiation, the European Parliament and the Council of the EU reached a provisional agreement on the legislative text of the European Artificial Intelligence Act (AIA).
The Spanish Presidency of the Council played an instrumental role in forging this agreement, with the Spanish Socialist Delegation contributing through Ibán García del Blanco, who took part in the negotiating process. As one of Parliament’s negotiators, the Socialist MEP represented the Committee on Legal Affairs (JURI), which held exclusive competence over Articles 13 (transparency), 14 (human oversight), 52 (additional transparency measures) and 69 (codes of conduct). These provisions cover copyright, trade secrets, AI literacy and ethical principles.
The European Artificial Intelligence Act adopts the definition of AI agreed by the OECD, with some nuances added to accommodate developments in the near future. The regulation does not cover military or defence uses, or matters of national security, as these fall outside EU competence.
The legislation is built primarily around the notion of risk. Safeguards ensure that the placing on the market, putting into service or use of AI does not pose risks to the safety, health or fundamental rights of Union citizens. These safeguards have been extended to cover risks to democracy, the rule of law and the environment as well.
In the area of employment, companies are required to inform workers when they are exposed to AI systems. Depending on the labour market sector, the rules may be more or less restrictive.
The new law stipulates cases in which the use of AI systems is prohibited outright, but also allows exceptions in justified cases, subject to strict conditions. Prohibited applications include ‘real-time’ remote biometric identification (RBI) systems, except for identifying victims of kidnapping, human trafficking or sexual exploitation, or locating missing persons. RBI may also be used to prevent terrorist activity and to locate suspects of crimes punishable by at least four years’ imprisonment. In all such cases, and regardless of these exceptions, judicial authorisation and a fundamental rights impact assessment will be required before RBI protocols can be executed.
The law strictly prohibits the use of AI for purposes that violate citizens’ fundamental rights. Such technology may therefore not be used to scrape facial images from social media or the internet, nor may emotion recognition systems be deployed in the workplace or in education.
Predictive policing systems are also prohibited, as are biometric categorisation systems based on political, religious or philosophical beliefs, trade union membership, sexual orientation or race. Where the law does permit the use of AI, existing copyright rules must be respected, following codes of good practice. An AI Office will be established within the European Commission, with its own budget and staff.
The law identifies AI applications in goods and services markets that carry high risk, including leisure goods (toys), medical devices and automobiles. It also identifies the provision of credit or insurance as services where AI may have adverse effects. Other applications, such as personalised content recommendation on digital entertainment platforms, fall outside this high-risk category because they are already regulated by the Digital Services Act (DSA).
The law also sets out specific measures governing generative AI, intended to prevent the adverse effects of deepfakes and, more generally, the manipulation or generation of images, audio or video. Audiovisual media providers will have to disclose the presence of AI-generated content, without this interfering with the use and enjoyment of the content.
Where an activity qualifies as high risk, the supplier of the goods or services is legally obliged to declare this to the consumer or user. To counteract such risks, companies must comply with a set of protocols: a risk assessment; good governance of the data used by the system, to avoid discrimination; transparency measures providing all relevant information in a clear and accessible manner; cybersecurity measures; human oversight; and risk prevention and mitigation plans.
Companies that fail to comply with, or violate, the rules laid down by the EU law on artificial intelligence face fines of up to 35 million euros or 7% of their global annual turnover.
Looking ahead, and to maximise the effectiveness of the new measures, the aim will be to improve AI literacy among the general population and among professionals in fields particularly exposed to these technologies. Training schemes will be provided to build risk assessment skills and ensure proper implementation of the new law. Consumers, for their part, will have the right to an explanation of how AI may affect them when they consume goods and services.