By Marco Antonio Tena
The European Union is developing a regulatory framework to address the deployment and risks of artificial intelligence (AI) systems. This is the European Union’s Artificial Intelligence Act (the “AI Act”), which could be finalized in 2024, with entry into force shortly thereafter. According to a December 2023 European Commission press release, “Prohibited AI” systems will be banned six months after the Act enters into force, compliance with governance obligations for general-purpose AI will be required after 12 months, and all remaining rules, including obligations for high-risk systems, will become effective between 24 and 36 months.
Who will be affected?
The AI Act will apply to public and private entities that offer AI systems in the EU market or whose AI systems affect people located in the EU. This includes businesses and organizations established outside the EU that interact with EU markets. Both providers (developers) of AI systems and those deploying them in the EU must ensure that they comply with the requirements of the AI Act.
Exemptions will apply to prototyping and development activities carried out before an AI system is placed on the market, and to systems used exclusively for military or national security purposes.
What will the AI Act require?
The AI Act introduces a risk-based approach with four levels:
- Unacceptable Risk/Prohibited AI: AI systems considered a clear threat to safety or fundamental rights, such as those that manipulate behavior, use real-time remote biometric identification in publicly accessible spaces for law enforcement (subject to narrow exceptions), or perform emotion recognition in workplaces or educational settings. These systems will be banned in the EU.
- High Risk: This category includes AI systems that could affect safety or fundamental rights, including those that manage critical infrastructure, operate in medical devices or vehicles, or assess creditworthiness. Compliance will require conformity assessments, registration in an EU public database, risk management, and transparency measures.
- Transparency Risk: AI systems that interact directly with people, such as chatbots, will be subject to specific disclosure obligations so that users know they are interacting with a machine.
- Minimal Risk: All other AI systems fall into this category.
Penalties for Non-Compliance
Violations could result in severe penalties:
- Up to €35 million or 7% of annual worldwide turnover (whichever is greater) for prohibited AI violations.
- Up to €15 million or 3% of annual worldwide turnover for most other violations.
- Up to €7.5 million or 1.5% of annual worldwide turnover for providing incorrect information.
It should be noted that the EU’s General Data Protection Regulation (GDPR) imposes additional notification requirements for automated decision-making, and non-compliance can result in fines of up to €20 million or 4% of annual worldwide turnover. Penalties under the GDPR may be imposed in addition to those under the AI Act.
Preparing for Compliance
A business that operates in the EU and uses AI should map its systems against the four risk categories defined in the AI Act. Keeping detailed records of the types of data your AI systems use, and the purposes for which they use it, will help ensure compliance with the transparency and notification requirements under both the AI Act and the GDPR.
Potential Impact in Mexico
The comprehensive regulatory framework being developed by the EU could serve as a model for other regions, including Mexico.
As Mexican companies interact with EU markets or develop AI technologies, they may need to align their practices with the AI Act to remain compliant and avoid sanctions. This could, in turn, shape Mexico’s legal landscape for AI regulation and governance.