AI and GDPR: how to ensure compliant use of data?
Artificial intelligence (AI) is a revolutionary technology that offers numerous opportunities for businesses across a wide variety of fields. However, its use raises complex legal issues, one of the key aspects of which is the protection of personal data: AI involves collecting large amounts of data to function effectively.
To comply with the regulation, it is therefore crucial to know the rules that must be respected to preserve individuals’ rights.
AI: Lack of specific regulations
At present, there is no specific regulation governing artificial intelligence. Instead, the general rules on data protection apply, notably the GDPR, whose provisions must be respected when using AI.
However, European players seem keen to define clear rules to ensure ethical, responsible, and appropriate use of this emerging technology. A draft European regulation on AI is currently under discussion. Proposed in April 2021 by the European Commission, the AI Act aims to establish a harmonized framework for the use of AI within the EU. It classifies AI systems into categories based on the severity of the risks they can generate, and introduces specific obligations for each category concerning data use, system operation, and supplier accountability.
Some national authorities have also made resources available to which players considering AI can refer to ensure compliant use of data. In France, for example, a self-assessment guide is available, designed to highlight data protection issues and help players assess the risks associated with the system they are planning to use, right from the design stage.
AI and GDPR: The main rules to comply with
- Comply with the fundamental principles of the GDPR: To ensure that data is collected and processed compliantly by AI systems, it is important to respect the fundamental principles of the GDPR. These include limiting the collection and use of data to what is strictly necessary, retaining data for a limited period, and implementing appropriate security measures to prevent unauthorized access.
- Identify the legal basis: Under the GDPR, there must be a legal basis for processing personal data. The use of AI may rely on different legal bases, such as the individual’s consent, the performance of a contract, compliance with a legal obligation, or the legitimate interest pursued by the controller. It is essential to determine the appropriate legal basis for each specific use of AI to comply with the requirements of the GDPR.
- Ensure transparency and inform individuals: The opacity of AI systems can be a source of risk for data subjects. To avoid this, the GDPR emphasizes transparency and informing individuals about the collection and processing of their personal data. It is therefore important to provide clear and comprehensible information when using AI, particularly on the purposes of processing, the categories of data collected, the recipients of the data, and the rights of individuals. Privacy notices must be adapted to reflect the specificities of AI use.
- Manage rights requests: The rights of individuals introduced by the GDPR are of the utmost importance in the context of AI. They aim to give individuals control over their personal data and ensure responsible and ethical use of AI while respecting their privacy. Data subjects must be informed of their rights and how to exercise them.
- Define the data retention period: The GDPR establishes clear principles regarding the retention period of personal data. When using AI, it is important to define, upstream, data retention policies that comply with GDPR obligations. Data must be retained for a limited period and deleted once it is no longer required for the original purpose for which it was collected.
- Conduct a data protection impact assessment: Where the use of AI poses a substantial risk to the rights and freedoms of individuals, a data protection impact assessment (“DPIA”) must be conducted. This assessment aims to identify and mitigate potential risks to people’s privacy. It must be conducted upstream of any AI deployment and must involve an in-depth analysis of technical and legal aspects.
- Comply with rules on profiling and automated decisions: AI often involves profiling and automated decision-making activities. The GDPR establishes specific rules for these practices. It is important to make AI decision-making processes transparent and understandable, providing clear explanations of the existence of such processing, the underlying logic, and the intended consequences of the processing for the natural person concerned.
- Ensure compliance for data transfers outside the European Union: When personal data is transferred outside the EU, the GDPR imposes certain conditions to ensure an adequate level of privacy protection. The use of AI may involve data transfers to non-EU countries, requiring special attention to ensure compliance with GDPR requirements. Appropriate transfer mechanisms, such as standard contractual clauses or binding corporate rules, must be used when the transfer involves a country that does not benefit from an adequacy decision, to ensure a level of protection equivalent to that offered within the European Union.
- Secure data: Data security is a major concern in the context of AI. Organizations must implement appropriate technical and organizational security measures to protect personal data from unauthorized access, loss, or leakage (or any other form of data breach). Strong security policies and procedures must be put in place, considering the particularities of AI, such as securing the algorithms and models used.
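To make two of the obligations above more concrete, the retention and security points can be sketched in code. The following is a minimal, illustrative Python example, not a compliance tool: the `RETENTION_POLICY` mapping, field names, and salt handling are all hypothetical assumptions, and real pseudonymization and deletion workflows require a documented policy and proper key management.

```python
import hashlib
from datetime import datetime, timedelta, timezone

# Hypothetical retention policy: processing purpose -> maximum retention period.
RETENTION_POLICY = {
    "model_training": timedelta(days=365),
    "support_requests": timedelta(days=90),
}

def is_expired(collected_at, purpose, now=None):
    """True when a record has outlived the retention period for its purpose."""
    now = now or datetime.now(timezone.utc)
    return now - collected_at > RETENTION_POLICY[purpose]

def pseudonymize(record, direct_identifiers=("name", "email")):
    """Replace direct identifiers with a salted hash, so the record can no
    longer be attributed to a person without additional information."""
    salt = "org-secret-salt"  # illustrative only; store separately in practice
    out = dict(record)
    for field in direct_identifiers:
        if field in out:
            digest = hashlib.sha256((salt + str(out[field])).encode()).hexdigest()
            out[field] = digest[:16]
    return out

def enforce_policy(records, purpose):
    """Drop expired records and pseudonymize the rest before any AI use."""
    return [pseudonymize(r) for r in records if not is_expired(r["collected_at"], purpose)]

# Example: one record within the retention period, one past it.
now = datetime.now(timezone.utc)
records = [
    {"name": "Alice", "email": "a@example.com", "collected_at": now - timedelta(days=30)},
    {"name": "Bob", "email": "b@example.com", "collected_at": now - timedelta(days=400)},
]
kept = enforce_policy(records, "model_training")
# Bob's record is deleted as expired; Alice's remains with hashed identifiers.
```

Enforcing retention and minimizing identifying data before a dataset ever reaches an AI pipeline reflects the "data protection by design" approach the GDPR expects from the design stage onwards.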
While the use of AI offers many opportunities, it also raises challenges when it comes to protecting personal data. Organizations must consider the requirements of the GDPR right from the design stage of their AI systems. By complying with these rules, they can enjoy the benefits of AI while ensuring that individual rights are adequately protected.