Scope
The objective of this paper is to analyse the main requirements applicable to AI systems under the GDPR, the rules on discrimination, and the AI Act. Since these instruments have different scopes, we clarify here the scenarios in which this paper remains relevant.
In this paper, the term “AI system” refers to the definition given in the AI Act. According to this text, an Artificial Intelligence System is a “system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts”.
The definition in the Commission’s original draft was broader and was narrowed down by the Parliament to provide “sufficiently clear criteria”. Annex I, in which the Commission had originally listed the algorithmic methods falling under the definition of AI systems, has been deleted entirely. Instead, new Recitals 6a and 6b have been added to clarify the definition: Recital 6a clarifies what is to be understood as machine learning approaches, while Recital 6b focuses on knowledge-based approaches. Taken together, the two Recitals encompass supervised, unsupervised and reinforcement learning (including deep learning with neural networks), statistical techniques for learning and inference (including, for instance, logistic regression and Bayesian estimation), and search and optimisation methods.
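To make the machine-learning limb of the definition more concrete, the sketch below gives a purely hypothetical illustration (it is not taken from the AI Act or from this paper; the scikit-learn library, the credit-approval setting and the feature names are our own assumptions). It shows a simple supervised model, a logistic regression, of the kind that would fall within the definition: it learns from data, infers how to achieve a human-defined objective and produces system-generated predictions.

```python
# Hypothetical sketch: a logistic-regression classifier that would qualify as an
# "AI system" under the machine-learning limb of the AI Act definition. It is
# trained on data, infers how to achieve a human-defined objective (credit
# approval) and produces system-generated predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicant data (invented for illustration): [income in kEUR, debt in kEUR]
X = rng.normal(loc=[50, 10], scale=[15, 5], size=(200, 2))
y = (X[:, 0] - 2 * X[:, 1] + rng.normal(scale=5, size=200) > 25).astype(int)

# Supervised learning: the model infers a decision rule from the data
model = LogisticRegression().fit(X, y)

# System-generated output: a prediction/recommendation for a new applicant
new_applicant = np.array([[45, 12]])
print("approve" if model.predict(new_applicant)[0] == 1 else "refuse")
```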
Some remarks can be made on this definition. Firstly, the AI Act gives a closed definition of AI systems: systems which do not use a machine learning or knowledge-based approach will not be considered AI systems. Interestingly, the reference to statistical approaches was deleted during the negotiation of the text. Secondly, the definition in the AI Act is more specific than the US definition of AI. According to the US definition introduced in June 2020, the term “artificial intelligence” refers to a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments”. In the US definition, therefore, the algorithmic method is irrelevant. Finally, the term “AI system” does not cover the hardware it runs on. This does not mean, however, that hardware can simply be left aside: Recital 51 of the AI Act mentions that “providers of high-risk AI systems [must take] into account as appropriate the underlying ICT infrastructure”, and the White Paper on Artificial Intelligence likewise calls for caution regarding hardware when it identifies the infrastructure layer as one of the three layers of trust of AI systems.
This paper applies to companies that place an AI system on the EU market or use the output of an AI system in the EU. Where personal data is processed, the GDPR will almost always apply, since it covers the processing of personal data of data subjects located in the EU in connection with the offering of goods or services in the EU (GDPR, Article 3). Discrimination rules are also territorially applicable, since they can be invoked by any EU citizen. This paper is therefore addressed both to companies producing AI systems for the EU market and to companies using AI systems in the EU. For instance, in the case of a Chinese IT company providing an AI model to a German bank, both companies would benefit from reading the present paper.