Will this historic step by the Council of the European Union address security concerns and rebalance democracy in the use of AI and Generative AI?
The recent approval of the Artificial Intelligence (AI) Act by the Council of the EU marks a significant milestone in shaping the future of AI regulation and governance globally.
The AI Act introduces a novel risk-based framework, wherein stringent regulations are applied to high-risk AI systems, such as those involved in cognitive behavioural manipulation and social scoring, while lower-risk systems face lighter transparency obligations. This approach ensures that the level of regulation is proportionate to the potential risks posed by the AI systems, thereby fostering innovation while mitigating potential harms.
The legislation outright bans certain AI applications, such as predictive policing based on profiling and biometric categorisation using sensitive characteristics, due to their unacceptable risks. By prohibiting these practices, the AI Act aims to safeguard fundamental rights and prevent discrimination and bias in AI-driven decision-making processes.
In addition to regulating high-risk AI systems, the Act also addresses general-purpose AI models, imposing varying degrees of requirements based on their risk levels. This nuanced approach recognizes that not all AI models pose the same level of risk and ensures that regulatory burdens are commensurate with the potential impact of the AI systems on individuals and society.
To effectively enforce these regulations, the Act establishes governance bodies such as an AI Office within the Commission. These bodies will be responsible for monitoring compliance with the AI Act, conducting risk assessments, and providing guidance to stakeholders on AI-related matters, thereby ensuring accountability and transparency in the governance of AI technologies.
Companies found in violation of the AI Act could face substantial fines based on their global annual turnover, with proportional penalties for SMEs and start-ups. These penalties serve as deterrents against non-compliance and incentivize organizations to prioritize the ethical and responsible development and deployment of AI systems.
High-risk AI systems are required to undergo a fundamental rights impact assessment before deployment, and transparency obligations ensure that individuals are informed when exposed to technologies such as emotion recognition systems. By promoting transparency and accountability, the AI Act seeks to uphold fundamental rights and ethical principles in the use of AI technologies.
The Act fosters innovation through AI regulatory sandboxes, providing a platform for testing and validating innovative AI systems under controlled, real-world conditions. By creating a conducive environment for innovation, the AI Act encourages the development of AI technologies that are safe, reliable, and beneficial to society.
The AI Act will soon be published in the EU’s Official Journal and will enter into force 20 days thereafter. The regulation will start applying two years later, with exceptions for certain provisions that apply on different timelines.
This legislation is poised to address many of the challenges that AI capabilities will bring, even if it is somewhat less ambitious than originally expected.
Still, we need to recognise that the European Union reaching such a fundamental agreement is a significant milestone.