Abstract
The challenges of fairness, robustness and transparency are not new in artificial intelligence. Countless papers and technical projects tackle these issues. However, very little has been done to link technical breakthroughs with legal requirements. This gap creates legal uncertainty among AI providers and users. In this paper, we assess whether and how technical solutions can help comply with legal requirements. For instance, we find that:
Only randomization and generalization methods can anonymize a training dataset. Personal data can be protected in AI systems with differential privacy or multiparty computation.
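To make the differential privacy point concrete, here is a minimal sketch of the Laplace mechanism, which releases a noisy statistic instead of the raw value. The toy data, the assumed age bounds and the choice of epsilon are illustrative assumptions, not part of the paper's pipeline or the accompanying Colab.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release `true_value` with Laplace noise calibrated to the query's
    L1 sensitivity, satisfying epsilon-differential privacy."""
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Toy example: privately release the mean age of a training dataset.
ages = np.array([23.0, 35.0, 41.0, 29.0, 52.0, 38.0])  # illustrative data
# Worst-case change of the mean when one record varies over an assumed [18, 90] range.
sensitivity = (90 - 18) / len(ages)
private_mean = laplace_mechanism(ages.mean(), sensitivity, epsilon=1.0)
print(f"true mean = {ages.mean():.2f}, DP mean = {private_mean:.2f}")
```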
The GDPR's requirement of explainability can be legally ensured even in deep neural networks with surrogate algorithms such as LIME or SHAP (SHapley Additive exPlanations). However, it is unclear whether those algorithms will be sufficient to comply with the interpretability requirement set in the AI Act.
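As an illustration of the surrogate-explanation approach, the sketch below applies SHAP to a black-box classifier and extracts per-feature Shapley contributions for a single prediction. The synthetic data and the random forest model are assumptions made for the example; the Colab accompanying this Gitbook works on the COMPAS dataset instead.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train a black-box model on synthetic data (illustrative only).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Post-hoc explanation: Shapley values assign each feature an additive
# contribution to the model's output for one individual decision.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])
print(contributions)
```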
Fairness does not have one uniform mathematical definition. It can be measured by at least three different metrics: disparate impact, equalized odds and equality of opportunity. The impossibility theorem tells us that these metrics are fundamentally incompatible and that a balance must be struck between them. We find that bias can be mitigated technically in AI systems using disparate impact removers and reweighing algorithms. However, we also find that technical methods alone are insufficient, and that taking into account how humans will use these algorithms in real-world scenarios is essential.
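To show what one of these fairness metrics looks like in practice, here is a minimal sketch of the disparate impact ratio: the rate of favourable outcomes for the unprivileged group divided by the rate for the privileged group. The arrays below are toy assumptions, not outputs of the Colab; the 0.8 threshold is the commonly cited "four-fifths rule".

```python
import numpy as np

def disparate_impact(y_pred, protected):
    """Ratio of favourable-outcome rates: unprivileged (protected == 1)
    over privileged (protected == 0). A value far below 1 (commonly below
    the 0.8 'four-fifths' threshold) signals potential disparate impact."""
    rate_unprivileged = y_pred[protected == 1].mean()
    rate_privileged = y_pred[protected == 0].mean()
    return rate_unprivileged / rate_privileged

y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])     # binary model decisions (toy)
protected = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # protected-group membership (toy)
print(f"disparate impact ratio = {disparate_impact(y_pred, protected):.2f}")
```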
Robustness is not legally ensured merely by using so-called certification algorithms with constraint methods. Here again, we find that holistic security approaches are preferable.
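To give an intuition for what a robustness certificate computes (and why it alone does not make a system secure), the sketch below derives the certified L2 radius of a linear classifier: no perturbation smaller than the margin divided by the weight norm can flip the decision. The weights and input are illustrative assumptions; certifying deep networks requires dedicated tools, and certification should sit inside a broader security approach.

```python
import numpy as np

def certified_l2_radius(w, b, x):
    """For a linear decision sign(w.x + b), return the largest L2 perturbation
    of x that provably cannot change the predicted class (i.e. the distance
    from x to the decision hyperplane)."""
    margin = abs(np.dot(w, x) + b)
    return margin / np.linalg.norm(w)

w = np.array([0.8, -1.2])   # illustrative weights
b = 0.1
x = np.array([1.0, 0.5])    # input to certify
print(f"certified L2 radius = {certified_l2_radius(w, b, x):.3f}")
```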
Artificial intelligence significantly improves many aspects of our lives: healthcare, finance, marketing, the video game industry, the car industry, law enforcement, and even the military. AI systems are increasingly part of our daily life: they organize our working day, drive vehicles and suggest songs we might like. Although this technology makes our life easier, it can also do harm. On March 18, 2018, Elaine Herzberg was hit by an Uber self-driving car and died. The AI system identified the threat as a "bicycle" only 2.6 seconds before the crash. AI systems can also cause material damage to property and even immaterial damage: for instance, the loss of privacy or the inability to access employment. Our fundamental rights in the European Union, such as the right to life, to human dignity, to non-discrimination, to the protection of personal data and private life, as well as the right to be protected as a consumer, can be affected by AI systems.
This Gitbook is addressed to legal and IT professionals who wish to make sure that the AI system they are developing complies with existing European regulations. The European regulations within the scope of this paper are the GDPR, the rules on discrimination and the AI Act. Furthermore, this notebook provides practical technical solutions that help comply with legal requirements. These solutions can be found in a Google Colab notebook built on top of the COMPAS dataset.
This notebook is based on this original paper.