Understanding the GDPR and EU AI Act: Key Insights for Businesses

February 20, 2025

AI and Data Privacy Compliance
To date, the European Union (EU) has adopted two significant pieces of legislation affecting organisations both within and outside the EU: the General Data Protection Regulation (GDPR), which governs the handling of personal data, and the Artificial Intelligence (AI) Act, which governs the development and deployment of artificial intelligence systems.
The AI Act regulates AI technologies based on risk levels, with stricter requirements for high-risk applications like biometric surveillance, credit scoring, hiring algorithms, and medical diagnostics. For businesses, developers, and innovators, understanding these regulations is key to staying compliant, minimising legal risks, and ensuring ethical AI development.
GDPR: The Foundation of Data Protection
Enacted in 2018, the GDPR applies to any organisation that processes the personal data of individuals within the EU, irrespective of the company's location. This means businesses worldwide must comply if they handle EU residents' data.
Key GDPR Requirements for Businesses and AI Developers
Lawful Basis for Data Processing[1]
- Businesses are required to establish a clear legal basis for collecting and processing personal data, which can include obtaining consent, fulfilling contractual obligations, complying with legal requirements, or pursuing legitimate interests. This ensures that data processing is transparent, lawful, and aligned with GDPR principles.
Transparency and User Rights[2]
- Organisations must clearly inform individuals about the personal data they collect, how it is used, and whether automated systems, including those capable of decision-making or profiling, are involved in processing that data. The GDPR requires this transparency even when data is obtained from third-party sources, and it extends to the categories of data processed, the purposes of processing, and any significant effects of automated decision-making, in line with the Regulation's broader principles of fairness and accountability.
Automated Decision-Making[7] and AI
- Under Article 22, individuals have the right not to be subject to decisions based solely on automated processing, including profiling, that produce legal or similarly significant effects. Any AI application that processes personal data must therefore comply with the GDPR to avoid significant regulatory penalties, which can reach up to €20 million or 4% of global annual turnover, whichever is higher (illustrated in the sketch below). Ensuring alignment with the GDPR is crucial to mitigating legal and financial risks.
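As a quick arithmetic illustration of the "whichever is higher" rule, the sketch below computes the theoretical GDPR fine ceiling for a hypothetical turnover figure; the same logic applies to the AI Act ceilings of €35 million or 7% discussed later in this article. The function name and the turnover value are illustrative assumptions, not drawn from either regulation.

```python
def gdpr_fine_ceiling(global_annual_turnover_eur: float) -> float:
    """Return the maximum GDPR fine: the higher of EUR 20 million or 4% of turnover."""
    return max(20_000_000, 0.04 * global_annual_turnover_eur)

# Hypothetical company with EUR 2 billion global annual turnover:
# 4% of 2,000,000,000 = 80,000,000, which exceeds the 20,000,000 floor.
print(gdpr_fine_ceiling(2_000_000_000))  # 80000000.0
```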
The EU AI Act: Regulating AI Risks and Compliance
Enacted in 2024, the EU AI Act introduces a regulatory framework governing the development, deployment, and use of AI systems within the EU. In contrast to the GDPR, which centres on individual rights, the AI Act categorises AI systems according to their risk levels and imposes corresponding obligations.
Key Requirements for Businesses, AI Developers, and Innovators
AI Risk Classification System[11]
- Unacceptable-risk AI – banned outright (e.g., AI for mass surveillance, social scoring).
- High-risk AI[12] – subject to strict regulations (e.g., AI used in finance, recruitment, healthcare).
- Limited- and minimal-risk AI – fewer compliance obligations, though transparency is still required (an illustrative classification sketch follows this list).
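As a rough mental model of the tiering above, the sketch below maps a few example use cases to risk tiers. It is only an illustrative approximation assuming a simple lookup; the legal classification is determined by the Act itself (notably Article 6 and its annexes), and every name and example in the snippet is hypothetical.

```python
from enum import Enum

class AIRiskTier(Enum):
    UNACCEPTABLE = "prohibited"            # e.g. social scoring, mass surveillance
    HIGH = "strict obligations"            # e.g. recruitment, credit scoring, healthcare
    LIMITED = "transparency obligations"   # e.g. chatbots, AI-generated media
    MINIMAL = "few or no obligations"      # e.g. spam filters

# Illustrative, non-exhaustive lookup -- the legal test is in the Act, not a keyword map.
EXAMPLE_USE_CASES = {
    "social_scoring": AIRiskTier.UNACCEPTABLE,
    "cv_screening": AIRiskTier.HIGH,
    "credit_scoring": AIRiskTier.HIGH,
    "customer_chatbot": AIRiskTier.LIMITED,
    "spam_filter": AIRiskTier.MINIMAL,
}

def classify(use_case: str) -> AIRiskTier:
    """Default unknown systems to HIGH until a proper legal assessment is done."""
    return EXAMPLE_USE_CASES.get(use_case, AIRiskTier.HIGH)

print(classify("cv_screening").value)  # strict obligations
```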
High-Risk AI Compliance Obligations[13]
- AI systems in critical areas (e.g., hiring, legal decisions, finance) must have human oversight mechanisms.
- Developers must ensure bias detection and mitigation to prevent discrimination (a simple outcome-rate check is sketched after this list).
- AI must be transparent and explainable, allowing users to understand its decisions.
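For the bias-detection obligation, one common (though by no means sufficient) technique is to compare outcome rates across groups. The sketch below computes a demographic parity gap on hypothetical hiring decisions; the 0.10 threshold is an arbitrary illustration, not a legal standard.

```python
from collections import defaultdict

def demographic_parity_gap(decisions: list[tuple[str, bool]]) -> float:
    """Return the maximum difference in positive-outcome rate between groups.

    `decisions` is a list of (group_label, was_selected) pairs.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        positives[group] += int(selected)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical screening outcomes for two applicant groups.
sample = [("group_a", True)] * 60 + [("group_a", False)] * 40 \
       + [("group_b", True)] * 45 + [("group_b", False)] * 55
gap = demographic_parity_gap(sample)
print(f"selection-rate gap: {gap:.2f}")  # 0.15
if gap > 0.10:  # illustrative threshold only, not a legal test
    print("flag for review and mitigation")
```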
Transparency and Disclosure[14]
- Users must be clearly informed when they are interacting with AI, such as chatbots or facial recognition systems.
- AI-generated content (deepfakes, synthetic media) must be clearly labelled (a minimal disclosure sketch follows this list).
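A minimal way to act on these two points is to attach the disclosure at the moment AI output reaches the user. The pattern below is an assumption for illustration; neither the wording of the notice nor the `label` field comes from the Act.

```python
from dataclasses import dataclass

AI_DISCLOSURE = "You are interacting with an AI system."  # illustrative wording

@dataclass
class GeneratedContent:
    body: str
    ai_generated: bool = True
    label: str = "AI-generated content"  # hypothetical machine-readable marker

def chatbot_reply(answer: str) -> str:
    """Prepend a plain-language AI disclosure to every chatbot response."""
    return f"{AI_DISCLOSURE}\n\n{answer}"

print(chatbot_reply("Your parcel is due on Friday."))
print(GeneratedContent(body="synthetic promotional image"))
```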
Data and Model Governance[15]
- Businesses must keep detailed records of AI training data, biases, and system performance (an illustrative record structure is sketched after this list).
- Developers must validate datasets to prevent discrimination or privacy risks.
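One lightweight way to keep such records is a structured documentation entry per model, stored under version control. The fields below are an assumed minimum loosely inspired by the data-governance and technical-documentation duties (Articles 10–11); the structure and all values are hypothetical, not the Act's official template.

```python
from dataclasses import dataclass, field

@dataclass
class ModelGovernanceRecord:
    """Illustrative technical-documentation entry for one trained model."""
    model_name: str
    intended_purpose: str
    training_datasets: list[str]           # provenance of each dataset
    known_limitations: list[str]
    bias_checks: dict[str, float]          # metric name -> measured value
    performance: dict[str, float]          # metric name -> measured value
    human_oversight_measures: list[str] = field(default_factory=list)

record = ModelGovernanceRecord(
    model_name="cv-screening-v3",          # hypothetical system
    intended_purpose="shortlisting job applications",
    training_datasets=["internal_hr_2019_2023 (consent-based)"],
    known_limitations=["under-represents applicants over 55"],
    bias_checks={"selection_rate_gap": 0.15},
    performance={"accuracy": 0.87},
    human_oversight_measures=["recruiter reviews every rejection"],
)
```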
Enforcement and Penalties[16]
- Fines for non-compliance can reach €35 million or 7% of global revenue for severe violations.
AI developers and businesses must therefore ensure compliance before deploying AI in high-risk environments to avoid regulatory action and reputational damage.
Overlap Between GDPR and the EU AI Act: What Businesses Need to Know
While GDPR focuses on data privacy and the AI Act on AI risk management, businesses need to comply with both when AI systems handle personal data.
| Area of Overlap | GDPR | EU AI Act |
| --- | --- | --- |
| Lawful Processing | Mandates that personal data be processed lawfully, requires a legal basis for processing, and emphasises data accuracy and data subject rights.[17] | Requires that AI systems processing personal data also comply with the GDPR:[18] any AI system handling the personal data of EU residents must adhere to GDPR principles, including having a lawful basis for processing and ensuring data accuracy.[19] |
| Automated Decision-Making | Addresses automated decision-making that significantly affects individuals; Article 22 provides protections where automated systems make important decisions about them. | Includes requirements for automated decision-making systems, such as fairness, transparency, and contestability.[20] |
| Data Protection Impact Assessments (DPIAs) and AI Risk Assessments | Requires DPIAs for high-risk processing activities.[21] | Requires risk assessments to classify AI systems by risk level.[22] Organisations must conduct both DPIAs and AI risk assessments when AI systems handle personal data, evaluating risks and aligning with legal requirements.[23] |
| Transparency | Requires organisations to provide clear and accessible information about how they process personal data.[24] | Requires transparency about the functioning of AI systems, especially high-risk ones.[25] Deployers of high-risk AI systems must inform individuals that they are subject to the use of such systems.[26] |
| Data Governance | Sets requirements for data quality and accuracy, stresses data minimisation and data protection by design and by default, and requires that personal data be processed in a way that ensures appropriate security and confidentiality.[27] | Emphasises the quality of data used to train AI models and mandates that datasets be free from bias.[28] Also stresses data minimisation and data protection by design and by default, and requires that personal data be processed in a way that ensures appropriate security and confidentiality.[29] |
| Special Categories of Personal Data | Gives specific protection to special categories of personal data, including genetic and biometric data,[30] and specifies conditions for processing personal data relating to criminal convictions and offences.[31] | Prohibits biometric categorisation systems that use biometric data to infer sensitive information such as political opinions, religious beliefs, or sexual orientation.[32] Specifies conditions for processing personal data relating to criminal convictions and offences.[33] |
| Human Oversight | Gives data subjects the right to human intervention in automated processing.[34] | Emphasises human oversight, especially for high-risk AI systems.[35] |
| Data Minimisation | Emphasises the principle of data minimisation, ensuring that personal data is adequate, relevant, and limited to what is necessary for the purposes for which it is processed.[36] | Applies the same expectation to AI, requiring that personal data used by AI systems be adequate, relevant, and limited to what is necessary.[37] |
| Governance and Compliance | Calls for strong governance frameworks to manage compliance risks, including policies and procedures.[38] | Calls for strong governance frameworks covering lawful data processing and ethical AI development, and provides for regulatory sandboxes to test AI systems in a controlled environment.[39] |
Implications for Businesses:
- If your AI system processes personal data, both GDPR and the AI Act apply.
- Automated decision-making must be fair, transparent, and contestable.
- Strong governance frameworks are needed to manage compliance risks.
Compliance Strategies for AI-Driven Businesses
To meet GDPR and AI Act requirements, organisations and entities should:
Conduct Data and AI Impact Assessments
- Data Protection Impact Assessments (DPIAs) for GDPR compliance (a screening sketch follows this list).[40]
- AI Risk Assessments to classify AI under the AI Act.[41]
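For the GDPR side, a practical first step is a screening check that flags when a full DPIA is likely to be required. The checks below paraphrase the Article 35(3) trigger list; the function itself is an assumed screening aid, not a substitute for a documented assessment.

```python
def dpia_likely_required(
    systematic_automated_profiling_with_legal_effects: bool,
    large_scale_special_category_or_criminal_data: bool,
    large_scale_public_monitoring: bool,
) -> bool:
    """Rough screening against the GDPR Article 35(3) trigger list."""
    return any([
        systematic_automated_profiling_with_legal_effects,
        large_scale_special_category_or_criminal_data,
        large_scale_public_monitoring,
    ])

# Hypothetical AI hiring tool: profiles applicants with significant effects.
print(dpia_likely_required(True, False, False))  # True -> run a full DPIA
```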
Implement Human Oversight for High-Risk AI
- Ensure AI decisions affecting individuals have a human in the loop (a routing sketch follows this list).[42]
- Use explainable AI (XAI) techniques to improve transparency.
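In engineering terms, human-in-the-loop oversight is often implemented as a routing gate: decisions with significant effects, or with low model confidence, are queued for a reviewer rather than applied automatically. The sketch below assumes this pattern; the 0.90 threshold and the `significant_effect` flag are illustrative choices, not requirements taken from the Act.

```python
from dataclasses import dataclass

@dataclass
class AIDecision:
    subject_id: str
    outcome: str
    confidence: float
    significant_effect: bool  # e.g. hiring, credit, or legal decisions

def route(decision: AIDecision, review_queue: list[AIDecision]) -> str:
    """Send impactful or low-confidence decisions to a human reviewer."""
    if decision.significant_effect or decision.confidence < 0.90:  # illustrative threshold
        review_queue.append(decision)
        return "pending human review"
    return "auto-approved"

queue: list[AIDecision] = []
print(route(AIDecision("applicant-42", "reject", 0.97, True), queue))  # pending human review
```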
Strengthen Data Governance & Documentation
- Maintain Records of Processing Activities (an illustrative record structure follows this list).[43]
- Document AI training data, biases, and performance.[44]
- Regularly audit AI for fairness, accuracy, and privacy compliance.
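For the record-keeping point above, many teams keep a machine-readable record of processing activities alongside their AI documentation. The fields below follow the broad headings of GDPR Article 30(1), but the structure and every value shown are an illustrative sketch rather than an official template.

```python
from dataclasses import dataclass

@dataclass
class ProcessingActivityRecord:
    """Illustrative Article 30-style record for one processing activity."""
    controller: str
    purpose: str
    data_subject_categories: list[str]
    personal_data_categories: list[str]
    recipients: list[str]
    third_country_transfers: list[str]
    retention_period: str
    security_measures: list[str]

ropa_entry = ProcessingActivityRecord(
    controller="ExampleCo Ltd",                      # hypothetical controller
    purpose="AI-assisted candidate shortlisting",
    data_subject_categories=["job applicants"],
    personal_data_categories=["CV data", "assessment scores"],
    recipients=["internal HR team"],
    third_country_transfers=[],
    retention_period="12 months after recruitment closes",
    security_measures=["encryption at rest", "role-based access control"],
)
```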
Disclaimer: This publication is for informational purposes only and does not constitute legal or financial advice. Please consult with a qualified legal or financial advisor before making any decisions based on the information provided in this article.
[1] Article 6, GDPR
[2] Articles 12–14, GDPR
[3] Article 15, GDPR
[4] Article 16, GDPR
[5] Article 17, GDPR
[6] Article 18, GDPR
[7] Article 22, GDPR
[8] Articles 24–32, GDPR
[9] Article 37, GDPR
[10] Articles 44–50, GDPR
[11] Article 5, EU AI Act
[12] Article 6, EU AI Act
[13] Article 14, EU AI Act
[14] Articles 52–53, EU AI Act
[15] Articles 10–11, EU AI Act
[16] Article 99, EU AI Act
[17] Article 6, GDPR
[18] Article 3, EU AI Act
[19] Article 10(5), EU AI Act
[20] Articles 14, 52(1) & 53, EU AI Act
[21] Article 35, GDPR
[22] Article 6, EU AI Act
[23] Articles 9 & 29, EU AI Act
[24] Articles 12–14, GDPR
[25] Article 13, EU AI Act
[26] Article 52(1)–(3), EU AI Act
[27] Article 5, GDPR
[28] Article 10(2), EU AI Act
[29] Articles 10(3)–(4) & 28, EU AI Act
[30] Article 9, GDPR
[31] Article 10, GDPR
[32] Article 5, EU AI Act
[33] Articles 10(5) & 83, EU AI Act
[34] Article 22(3), GDPR
[35] Article 14, EU AI Act
[36] Article 5(1)(c), GDPR
[37] Article 10(3), EU AI Act
[38] Articles 24–30, GDPR
[39] Articles 53–54, EU AI Act
[40] Article 35, GDPR
[41] Article 6, EU AI Act
[42] Article 14, EU AI Act
[43] Article 30, GDPR
[44] Article 32, GDPR & Article 11, EU AI Act